1 Installation environment

1.1 Installation image

A minimal installation is recommended; this guide uses CentOS-7-x86_64-Minimal-1511.

1.2 Network planning

This guide uses one controller node (controller3), one compute node (compute11), and one storage node (cinder); all passwords are pass123456. Additional compute nodes are configured in essentially the same way, but each compute node's hostname and IP must be unique.

Each node has two NICs: one on the externally reachable 192.168.32.0/24 network, and one on the 172.16.1.0/24 internal management network.

NIC configuration depends on your environment; consult the documentation for your virtualization platform or hardware.
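
For reference only, a static configuration for the management NIC on controller3 might look like the sketch below; the interface name eth1 is an assumption, so substitute the name shown by ip addr on your machine. Edit /etc/sysconfig/network-scripts/ifcfg-eth1:

TYPE=Ethernet
BOOTPROTO=static
NAME=eth1
DEVICE=eth1
ONBOOT=yes
IPADDR=172.16.1.136
NETMASK=255.255.255.0

Then restart networking with: # systemctl restart network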

The controller node and compute node configured in this guide use the following IPs:

Node name     Provider network    Self-service (management) network
controller3   192.168.32.134      172.16.1.136
compute11     192.168.32.129      172.16.1.130
cinder        192.168.32.139      172.16.1.138

2 Prerequisites

2.1 Configure a domestic (China) yum mirror

On all nodes:

# yum install -y wget
# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# wget -P /etc/yum.repos.d/ http://mirrors.163.com/.help/CentOS7-Base-163.repo
# yum clean all
# yum makecache

2.2 Install common tools

On all nodes:

# yum install -y vim net-tools epel-release python-pip

2.3 Disable SELinux

On all nodes:

Edit the /etc/selinux/config file:

SELINUX=disabled
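
Equivalently, you can make the change with sed, and use setenforce to stop enforcement immediately (setenforce 0 switches to permissive mode for the current boot; the config change takes full effect after a reboot):

# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# setenforce 0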

2.4 Edit hosts and set the hostnames

On all nodes:

Edit /etc/hosts:

# controller3
192.168.32.134 controller3
# compute11
192.168.32.129 compute11
# cinder
192.168.32.139 cinder

Set the hostname on each machine, replacing servername below with the node name (controller3, compute11, or cinder):

hostnamectl set-hostname servername
systemctl restart systemd-hostnamed
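
For example, on the controller node:

# hostnamectl set-hostname controller3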

Verify: from each node, ping every other node's hostname to confirm connectivity.
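
For example, from controller3:

# ping -c 4 compute11
# ping -c 4 cinder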

3 OpenStack environment

3.1 NTP

  • Install and configure

On the controller node:

# yum install -y chrony

Edit /etc/chrony.conf and add:

allow 192.168.32.0/24

Start the NTP service and enable it at boot:

# systemctl enable chronyd.service
# systemctl start chronyd.service

On all nodes other than the controller:

# yum install -y chrony

Edit /etc/chrony.conf, add the line below, and comment out all other server entries:

server controller3 iburst
  • Set the time zone, then start the service and enable it at boot

Change the time zone:

# timedatectl set-timezone Asia/Shanghai

Start the NTP service and enable it at boot:

# systemctl enable chronyd.service
# systemctl start chronyd.service

Verify: run chronyc sources on all nodes; a line whose MS column contains * indicates the server (Name/IP address) the node is synchronized to.
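
Illustrative output from a synchronized node (stratum, intervals, and offsets will differ in your environment):

MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller3                   3    6    17     43   -23us[  -56us] +/-   45ms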

If the time fails to synchronize, restart the service:

# systemctl restart chronyd.service

3.2 Enable the OpenStack repository

On all nodes:

# yum install -y centos-release-openstack-ocata
# yum install -y https://rdoproject.org/repos/rdo-release.rpm
# yum install -y python-openstackclient

3.3 Database

On the controller node:

# yum install -y mariadb mariadb-server python2-PyMySQL

Create and edit the /etc/my.cnf.d/openstack.cnf file, leaving the bind-address line commented out:

[mysqld]
#bind-address = 127.0.0.1
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

Start the database service and enable it at boot:

# systemctl enable mariadb.service
# systemctl start mariadb.service

Run the database security script to set a password for the database root user (the initial password is blank):

# mysql_secure_installation

3.4 Message queue

On the controller node:

# yum install -y rabbitmq-server

# systemctl enable rabbitmq-server.service
# systemctl start rabbitmq-server.service

# rabbitmqctl add_user openstack pass123456
Creating user "openstack" ...

# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...

3.5 Memcached (token caching)

On the controller node:

# yum install -y memcached python-memcached

Edit the /etc/sysconfig/memcached file:

OPTIONS="-l 127.0.0.1,::1,controller3"

Start the memcached service and enable it at boot:

# systemctl enable memcached.service
# systemctl start memcached.service

4 Identity service

On the controller node:

4.1 Prerequisites

First, create the database for the Identity service. Log in to the database server as root:

$ mysql -u root -p

Create the database and grant privileges to the keystone user:

MariaDB [(none)]> CREATE DATABASE keystone;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'controller3' \
IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' \
IDENTIFIED BY 'pass123456';
MariaDB [(none)]> exit

4.2 Install and configure components

# yum install -y openstack-keystone httpd mod_wsgi

Edit the /etc/keystone/keystone.conf configuration file.

Configure database access:

[database]
# ...
connection = mysql+pymysql://keystone:pass123456@controller3/keystone

Configure the Fernet token provider:

[token]
# ...
provider = fernet

Initialize the Identity service database and the Fernet key repositories:

# su -s /bin/sh -c "keystone-manage db_sync" keystone

# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone

Bootstrap the Identity service:

# keystone-manage bootstrap --bootstrap-password pass123456 \
--bootstrap-admin-url http://controller3:35357/v3/ \
--bootstrap-internal-url http://controller3:5000/v3/ \
--bootstrap-public-url http://controller3:5000/v3/ \
--bootstrap-region-id RegionOne

4.3 Configure the Apache HTTP server

Edit /etc/httpd/conf/httpd.conf and set ServerName to the controller node:

ServerName controller3

Create a symlink:

# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

4.4 Finalize the installation

Start the Apache HTTP service and enable it at boot:

# systemctl enable httpd.service
# systemctl start httpd.service

4.5 Create OpenStack client environment scripts

The Identity service is driven by a combination of environment variables and command options. To make client operations more efficient and convenient, create client environment scripts for the admin and demo projects and users; each script loads the credentials appropriate for client operations.

Create and edit the admin-openrc file and add the following:

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=pass123456
export OS_AUTH_TYPE=password
export OS_AUTH_URL=http://controller3:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Create and edit the demo-openrc file and add the following:

export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=pass123456
export OS_AUTH_TYPE=password
export OS_AUTH_URL=http://controller3:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

Load the admin user's environment variables by sourcing the script: . admin-openrc
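
To confirm the variables were loaded, you can list them:

$ env | grep OS_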

4.6 Create domains, projects, users, and roles

This guide uses a service project, in which each service you add gets a unique user. Create the service project:

$ openstack project create --domain default \
--description "Service Project" service

Regular (non-admin) tasks should use an unprivileged project and user. As an example, this guide creates the demo project and user:

$ openstack project create --domain default \
--description "Demo Project" demo

Note: do not repeat this step when creating additional users for this project.

Create the demo user and the user role:

$ openstack user create --domain default \
--password-prompt demo
User Password:
Repeat User Password:

$ openstack role create user

Add the user role to the demo user in the demo project:

$ openstack role add --project demo --user demo user

4.7 Verify operation

For security reasons, disable the temporary authentication token mechanism.

Edit the /etc/keystone/keystone-paste.ini file and remove admin_token_auth from the [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3] sections.
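
A possible one-liner for this edit, assuming admin_token_auth appears only inside those pipeline definitions (verify the file afterwards):

# sed -i 's/ admin_token_auth//g' /etc/keystone/keystone-paste.ini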

As the admin user, request an authentication token:

$ openstack --os-auth-url http://controller3:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue

As the demo user, request an authentication token:

$ openstack --os-auth-url http://controller3:5000/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name demo --os-username demo token issue

With an environment script sourced, a token can also be requested without explicit options:

$ openstack token issue

+------------+-----------------------------------------------------------------+
| Field      | Value                                                           |
+------------+-----------------------------------------------------------------+
| expires    | 2016-02-12T20:44:35.659723Z                                     |
| id         | gAAAAABWvjYj-Zjfg8WXFaQnUd1DMYTBVrKw4h3fIagi5NoEmh21U72SrRv2trl |
|            | JWFYhLi2_uPR31Igf6A8mH2Rw9kv_bxNo1jbLNPLGzW_u5FC7InFqx0yYtTwa1e |
|            | eq2b0f6-18KZyQhs7F3teAta143kJEWuNEYET-y7u29y0be1_64KYkM7E       |
| project_id | 343d245e850143a096806dfaefa9afdc                                |
| user_id    | ac3377633149401296f6c0d92d79dc16                                |
+------------+-----------------------------------------------------------------+

5 Image service

On the controller node:

5.1 Prerequisites

Before installing and configuring the Image service, you must create its database, service credentials, and API endpoints.

5.1.1 Database

Connect to the database server as root, create the glance database, and grant the appropriate privileges:

$ mysql -u root -p

MariaDB [(none)]> CREATE DATABASE glance;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'controller3' \
IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' \
IDENTIFIED BY 'pass123456';
MariaDB [(none)]> exit

5.1.2 Service credentials

$ . admin-openrc

$ openstack user create --domain default --password-prompt glance

User Password:
Repeat User Password:

$ openstack role add --project service --user glance admin

$ openstack service create --name glance \
--description "OpenStack Image" image

5.1.3 API endpoints

$ openstack endpoint create --region RegionOne \
image public http://controller3:9292

$ openstack endpoint create --region RegionOne \
image internal http://controller3:9292

$ openstack endpoint create --region RegionOne \
image admin http://controller3:9292

5.2 Install and configure components

Install the package:

# yum install -y openstack-glance

Edit the /etc/glance/glance-api.conf file:

[database]
# ...
connection = mysql+pymysql://glance:pass123456@controller3/glance

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = pass123456

[paste_deploy]
# ...
flavor = keystone

[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/

Note: comment out or remove any other contents of the [keystone_authtoken] section.

Edit the /etc/glance/glance-registry.conf file:

[database]
# ...
connection = mysql+pymysql://glance:pass123456@controller3/glance

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = pass123456

[paste_deploy]
# ...
flavor = keystone

Populate the Image service database:

# su -s /bin/sh -c "glance-manage db_sync" glance

5.3 Finalize the installation

Start the Image services and enable them at boot:

# systemctl enable openstack-glance-api.service \
openstack-glance-registry.service
# systemctl start openstack-glance-api.service \
openstack-glance-registry.service

5.4 Verify operation

Verify operation using CirrOS, a small Linux image suitable for testing an OpenStack deployment:

$ . admin-openrc

$ wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img

$ openstack image create "cirros" \
--file cirros-0.3.5-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public

$ openstack image list
+--------------------------------------+--------+--------+
| ID                                   | Name   | Status |
+--------------------------------------+--------+--------+
| 38047887-61a7-41ea-9b49-27987d5e8bb9 | cirros | active |
+--------------------------------------+--------+--------+

6 Compute service

6.1 Install and configure the controller node

On the controller node:

6.1.1 Prerequisites

Before installing and configuring the Compute service, you must create the databases, service credentials, and API endpoints.

  • Database

Connect to the database server as root, create the following databases, and grant the appropriate privileges:

$ mysql -u root -p

MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'controller3' \
IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' \
IDENTIFIED BY 'pass123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'controller3' \
IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' \
IDENTIFIED BY 'pass123456';

MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'controller3' \
IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' \
IDENTIFIED BY 'pass123456';
MariaDB [(none)]> exit
  • Service credentials

Compute service credentials:

$ openstack user create --domain default --password-prompt nova

User Password:
Repeat User Password:

$ openstack role add --project service --user nova admin

$ openstack service create --name nova \
--description "OpenStack Compute" compute

Placement service credentials:

$ openstack user create --domain default --password-prompt placement

User Password:
Repeat User Password:

$ openstack role add --project service --user placement admin

$ openstack service create --name placement \
--description "Placement API" placement
  • API endpoints

Compute service API endpoints:

$ openstack endpoint create --region RegionOne \
compute public http://controller3:8774/v2.1

$ openstack endpoint create --region RegionOne \
compute internal http://controller3:8774/v2.1

$ openstack endpoint create --region RegionOne \
compute admin http://controller3:8774/v2.1

Placement API endpoints:

$ openstack endpoint create --region RegionOne placement public http://controller3:8778

$ openstack endpoint create --region RegionOne placement internal http://controller3:8778

$ openstack endpoint create --region RegionOne placement admin http://controller3:8778

6.1.2 Install and configure components

Install the packages:

# yum install -y openstack-nova-api openstack-nova-conductor \
openstack-nova-console openstack-nova-novncproxy \
openstack-nova-scheduler openstack-nova-placement-api

Edit the /etc/nova/nova.conf file:

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:pass123456@controller3
my_ip = 172.16.1.136
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api_database]
# ...
connection = mysql+pymysql://nova:pass123456@controller3/nova_api

[database]
# ...
connection = mysql+pymysql://nova:pass123456@controller3/nova

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = pass123456

[vnc]
enabled = true
# ...
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip

[glance]
# ...
api_servers = http://controller3:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller3:35357/v3
username = placement
password = pass123456

Edit /etc/httpd/conf.d/00-nova-placement-api.conf and add:

<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>

Restart the httpd service:

# systemctl restart httpd

Populate the nova-api database:

# su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database:

# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

Create the cell1 cell:

# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
109e1d4b-536a-40d0-83c6-5f121b82b650

Populate the nova database (warnings can be ignored):

# su -s /bin/sh -c "nova-manage db sync" nova

Verify that the nova cell0 and cell1 cells are registered correctly:

# nova-manage cell_v2 list_cells
+-------+--------------------------------------+
| Name | UUID |
+-------+--------------------------------------+
| cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 |
| cell0 | 00000000-0000-0000-0000-000000000000 |
+-------+--------------------------------------+

6.1.3 Finalize the installation

Start the Compute services and enable them at boot:

# systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service
# systemctl start openstack-nova-api.service \
openstack-nova-consoleauth.service openstack-nova-scheduler.service \
openstack-nova-conductor.service openstack-nova-novncproxy.service

6.2 Install and configure the compute nodes

On all compute nodes:

6.2.1 Install and configure components

Install the package:

# yum install -y openstack-nova-compute

Edit the /etc/nova/nova.conf file:

[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:pass123456@controller3
my_ip = 172.16.1.130
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver

[api]
# ...
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = pass123456

[vnc]
# ...
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller3:6080/vnc_auto.html

[glance]
# ...
api_servers = http://controller3:9292

[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp

[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller3:35357/v3
username = placement
password = pass123456

6.2.2 Finalize the installation

Check whether your compute node supports hardware virtualization:

$ egrep -c '(vmx|svm)' /proc/cpuinfo

If the command returns 1 or greater, hardware acceleration is supported and no extra configuration is needed. Otherwise, configure libvirt to use QEMU instead of KVM.

Edit the /etc/nova/nova.conf file:

[libvirt]
# ...
virt_type = qemu

Start the Compute service and its dependencies and enable them at boot:

# systemctl enable libvirtd.service openstack-nova-compute.service
# systemctl start libvirtd.service openstack-nova-compute.service

6.2.3 Add the compute nodes to the cell database

Note: run the following commands on the controller node.

Confirm which compute hosts are registered in the database:

$ . admin-openrc

$ openstack hypervisor list
+----+---------------------+-----------------+-----------+-------+
| ID | Hypervisor Hostname | Hypervisor Type | Host IP | State |
+----+---------------------+-----------------+-----------+-------+
| 1 | compute1 | QEMU | 10.0.0.31 | up |
+----+---------------------+-----------------+-----------+-------+

Discover the compute hosts:

# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3

Note: whenever you add a new compute node, run nova-manage cell_v2 discover_hosts on the controller node to register it, or set the following in /etc/nova/nova.conf:

[scheduler]
discover_hosts_in_cells_interval = 300

6.3 Verify operation

On the controller node:

$ . admin-openrc

$ openstack compute service list

+----+------------------+------------+----------+---------+-------+----------------------------+
| Id | Binary           | Host       | Zone     | Status  | State | Updated At                 |
+----+------------------+------------+----------+---------+-------+----------------------------+
|  1 | nova-consoleauth | controller | internal | enabled | up    | 2016-02-09T23:11:15.000000 |
|  2 | nova-scheduler   | controller | internal | enabled | up    | 2016-02-09T23:11:15.000000 |
|  3 | nova-conductor   | controller | internal | enabled | up    | 2016-02-09T23:11:16.000000 |
|  4 | nova-compute     | compute1   | nova     | enabled | up    | 2016-02-09T23:11:20.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+

$ openstack catalog list
+-----------+-----------+-----------------------------------------+
| Name      | Type      | Endpoints                               |
+-----------+-----------+-----------------------------------------+
| keystone  | identity  | RegionOne                               |
|           |           |   public: http://controller:5000/v3/   |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:5000/v3/ |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:35357/v3/   |
|           |           |                                         |
| glance    | image     | RegionOne                               |
|           |           |   admin: http://controller:9292        |
|           |           | RegionOne                               |
|           |           |   public: http://controller:9292       |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:9292     |
|           |           |                                         |
| nova      | compute   | RegionOne                               |
|           |           |   admin: http://controller:8774/v2.1   |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8774/v2.1|
|           |           | RegionOne                               |
|           |           |   public: http://controller:8774/v2.1  |
|           |           |                                         |
| placement | placement | RegionOne                               |
|           |           |   public: http://controller:8778       |
|           |           | RegionOne                               |
|           |           |   admin: http://controller:8778        |
|           |           | RegionOne                               |
|           |           |   internal: http://controller:8778     |
|           |           |                                         |
+-----------+-----------+-----------------------------------------+

$ openstack image list
+--------------------------------------+-------------+-------------+
| ID                                   | Name        | Status      |
+--------------------------------------+-------------+-------------+
| 9a76d9f9-9620-4f2e-8c69-6c5691fae163 | cirros      | active      |
+--------------------------------------+-------------+-------------+

# nova-status upgrade check
+---------------------------+
| Upgrade Check Results     |
+---------------------------+
| Check: Cells v2           |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Placement API      |
| Result: Success           |
| Details: None             |
+---------------------------+
| Check: Resource Providers |
| Result: Success           |
| Details: None             |
+---------------------------+

7 Networking service

7.1 Install and configure the controller node

On the controller node:

7.1.1 Prerequisites

Before configuring the OpenStack Networking service, you must create its database, service credentials, and API endpoints.

  • Database

Connect to the database server as root, create the neutron database, and grant the appropriate privileges:

$ mysql -u root -p

MariaDB [(none)]> CREATE DATABASE neutron;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'controller3' \
IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'pass123456';
MariaDB [(none)]> exit
  • Service credentials

Create the neutron user and service entity:

$ . admin-openrc

$ openstack user create --domain default --password-prompt neutron

User Password:
Repeat User Password:

$ openstack role add --project service --user neutron admin

$ openstack service create --name neutron \
--description "OpenStack Networking" network
  • API endpoints

Create the Networking service API endpoints:

$ openstack endpoint create --region RegionOne \
network public http://controller3:9696

$ openstack endpoint create --region RegionOne \
network internal http://controller3:9696

$ openstack endpoint create --region RegionOne \
network admin http://controller3:9696

7.1.2 Configure networking options

This guide uses the self-service networks option.

  • Install the components
# yum install -y openstack-neutron openstack-neutron-ml2 \
openstack-neutron-linuxbridge ebtables
  • Configure the server component

Edit the /etc/neutron/neutron.conf configuration file:

[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:pass123456@controller3
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
# ...
connection = mysql+pymysql://neutron:pass123456@controller3/neutron

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = pass123456

[nova]
# ...
auth_url = http://controller3:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = pass123456

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
  • Configure the Modular Layer 2 (ML2) plug-in

The ML2 plug-in uses the Linux bridge mechanism to build layer-2 virtual networking infrastructure for instances.

Edit the /etc/neutron/plugins/ml2/ml2_conf.ini configuration file:

[ml2]
# ...
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
# ...
flat_networks = provider

[ml2_type_vxlan]
# ...
vni_ranges = 1:1000

[securitygroup]
# ...
enable_ipset = true

Warning: after you configure the ML2 plug-in, removing values from the type_drivers option can lead to database inconsistency.

  • Configure the Linux bridge agent

The Linux bridge agent builds layer-2 virtual networking infrastructure for instances and handles security group rules.

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini configuration file:

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = 172.16.1.136
l2_population = true

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Replace PROVIDER_INTERFACE_NAME with the name of the underlying physical provider network interface.

172.16.1.136 is the controller node's management-network IP address.
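
For instance, if the provider NIC on this node is ens33 (an assumed name; check with ip addr), the mapping would read:

physical_interface_mappings = provider:ens33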

  • Configure the layer-3 agent

Edit the /etc/neutron/l3_agent.ini configuration file:

[DEFAULT]
# ...
interface_driver = linuxbridge
  • Configure the DHCP agent

The DHCP agent provides DHCP services for virtual networks.

Edit the /etc/neutron/dhcp_agent.ini configuration file:

[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true

7.1.3 Configure the metadata agent

Edit the /etc/neutron/metadata_agent.ini configuration file:

[DEFAULT]
# ...
nova_metadata_ip = controller3
metadata_proxy_shared_secret = pass123456

7.1.4 Configure the Compute service to use the Networking service

Edit the /etc/nova/nova.conf configuration file:

[neutron]
# ...
url = http://controller3:9696
auth_url = http://controller3:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = pass123456
service_metadata_proxy = true
metadata_proxy_shared_secret = pass123456

7.1.5 Finalize the installation

# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

# systemctl restart openstack-nova-api.service

# systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
# systemctl start neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service

# systemctl enable neutron-l3-agent.service
# systemctl start neutron-l3-agent.service

7.2 Install and configure the compute nodes

On the compute nodes:

7.2.1 Install the components

# yum install -y openstack-neutron-linuxbridge ebtables ipset

7.2.2 Configure the common component

Configuring the common Networking components covers the authentication mechanism, message queue, and plug-in.

Edit the /etc/neutron/neutron.conf configuration file.

In the [database] section, comment out any connection options, because compute nodes do not access the database directly.

[DEFAULT]
# ...
transport_url = rabbit://openstack:pass123456@controller3
auth_strategy = keystone

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = pass123456

[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp

7.2.3 Configure networking options

To match the controller node, choose the self-service networks option here as well.

7.2.3.1 Configure the Linux bridge agent

The Linux bridge agent builds layer-2 virtual networking infrastructure for instances and handles security group rules.

Edit the /etc/neutron/plugins/ml2/linuxbridge_agent.ini configuration file:

[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME

[vxlan]
enable_vxlan = true
local_ip = 172.16.1.130
l2_population = true

[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver

Replace PROVIDER_INTERFACE_NAME with the name of the underlying physical provider network interface, as in the controller node example above.

172.16.1.130 is the compute node's management-network IP address.

7.2.4 Configure the Compute service to use the Networking service

Edit the /etc/nova/nova.conf configuration file:

[neutron]
# ...
url = http://controller3:9696
auth_url = http://controller3:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = pass123456

7.2.5 Finalize the installation

Restart the Compute service, then start the Linux bridge agent and enable it at boot:

# systemctl restart openstack-nova-compute.service

# systemctl enable neutron-linuxbridge-agent.service
# systemctl start neutron-linuxbridge-agent.service

7.3 Verify operation

On the controller node:

$ . admin-openrc

$ openstack extension list --network

$ openstack network agent list

+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| ID | Agent Type | Host | Availability Zone | Alive | State | Binary |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+
| f49a4b81-afd6-4b3d-b923-66c8f0517099 | Metadata agent | controller | None | True | UP | neutron-metadata-agent |
| 27eee952-a748-467b-bf71-941e89846a92 | Linux bridge agent | controller | None | True | UP | neutron-linuxbridge-agent |
| 08905043-5010-4b87-bba5-aedb1956e27a | Linux bridge agent | compute1 | None | True | UP | neutron-linuxbridge-agent |
| 830344ff-dc36-4956-84f4-067af667a0dc | L3 agent | controller | nova | True | UP | neutron-l3-agent |
| dd3644c9-1a3a-435a-9282-eb306b4b0391 | DHCP agent | controller | nova | True | UP | neutron-dhcp-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+-------+---------------------------+

8 Dashboard

On the controller node:

8.1 Install and configure components

Install the package:

# yum install -y openstack-dashboard

Edit the /etc/openstack-dashboard/local_settings configuration file:

OPENSTACK_HOST = "controller3"

ALLOWED_HOSTS = ['*']

SESSION_ENGINE = 'django.contrib.sessions.backends.cache'

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller3:11211',
    }
}

OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST

OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True

OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 2,
}

OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"

OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"

8.2 Finalize the installation

Restart the web server and the session storage service:

# systemctl restart httpd.service memcached.service

8.3 Verify operation

Access the dashboard by browsing to http://192.168.32.134/dashboard.

Log in with the admin or demo user credentials and the default domain.

9 Block Storage

9.1 Install and configure the controller node

On the controller node:

9.1.1 Prerequisites

  • Database
$ mysql -u root -p

MariaDB [(none)]> CREATE DATABASE cinder;

MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'controller3' \
IDENTIFIED BY 'pass123456';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'pass123456';
MariaDB [(none)]> exit
  • Service credentials
$ openstack user create --domain default --password-prompt cinder

User Password:
Repeat User Password:

$ openstack role add --project service --user cinder admin

$ openstack service create --name cinderv2 \
--description "OpenStack Block Storage" volumev2

$ openstack service create --name cinderv3 \
--description "OpenStack Block Storage" volumev3
  • API endpoints
$ openstack endpoint create --region RegionOne \
volumev2 public http://controller3:8776/v2/%\(project_id\)s

$ openstack endpoint create --region RegionOne \
volumev2 internal http://controller3:8776/v2/%\(project_id\)s

$ openstack endpoint create --region RegionOne \
volumev2 admin http://controller3:8776/v2/%\(project_id\)s

$ openstack endpoint create --region RegionOne \
volumev3 public http://controller3:8776/v3/%\(project_id\)s

$ openstack endpoint create --region RegionOne \
volumev3 internal http://controller3:8776/v3/%\(project_id\)s

$ openstack endpoint create --region RegionOne \
volumev3 admin http://controller3:8776/v3/%\(project_id\)s

9.1.2 Install and configure components

  • Install the package
# yum install -y openstack-cinder
  • Configure the service component

    Edit the /etc/cinder/cinder.conf configuration file:
[DEFAULT]
# ...
transport_url = rabbit://openstack:pass123456@controller3
auth_strategy = keystone
my_ip = 172.16.1.136

[database]
# ...
connection = mysql+pymysql://cinder:pass123456@controller3/cinder

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = pass123456

[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp
  • Initialize the database
# su -s /bin/sh -c "cinder-manage db sync" cinder

9.1.3 Configure the Compute service to use Block Storage

Edit the /etc/nova/nova.conf configuration file:

[cinder]
os_region_name = RegionOne

9.1.4 Finalize the installation

# systemctl restart openstack-nova-api.service
# systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
# systemctl start openstack-cinder-api.service openstack-cinder-scheduler.service

9.2 Install and configure the storage node

On the storage node:

9.2.1 Prerequisites

  • Install the packages the storage service depends on
# yum install lvm2

# systemctl enable lvm2-lvmetad.service
# systemctl start lvm2-lvmetad.service
  • Create the LVM physical volume and volume group
# pvcreate /dev/sdb

# vgcreate cinder-volumes /dev/sdb
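
Optionally, the upstream installation guide also restricts which devices LVM scans, so that the host's LVM tools do not scan the contents of the volumes Cinder creates. A minimal sketch for the disk layout assumed here (system disk on /dev/sda, cinder volumes on /dev/sdb) is to set a filter in the devices section of /etc/lvm/lvm.conf:

devices {
...
filter = [ "a/sda/", "a/sdb/", "r/.*/"]
}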

9.2.2 Install and configure components

  • Install the packages
# yum install openstack-cinder targetcli python-keystone
  • Configure the service component

Edit the /etc/cinder/cinder.conf configuration file, replacing MANAGEMENT_INTERFACE_IP_ADDRESS below with the storage node's management-network IP (172.16.1.138 in this guide's layout):

[DEFAULT]
# ...
transport_url = rabbit://openstack:pass123456@controller3
auth_strategy = keystone
my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
enabled_backends = lvm
glance_api_servers = http://controller3:9292

[database]
# ...
connection = mysql+pymysql://cinder:pass123456@controller3/cinder

[keystone_authtoken]
# ...
auth_uri = http://controller3:5000
auth_url = http://controller3:35357
memcached_servers = controller3:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = pass123456

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm

[oslo_concurrency]
# ...
lock_path = /var/lib/cinder/tmp

9.2.3 Finalize the installation

# systemctl enable openstack-cinder-volume.service target.service
# systemctl start openstack-cinder-volume.service target.service

9.3 Verify operation

$ . admin-openrc

$ openstack volume service list

+------------------+------------+------+---------+-------+----------------------------+
| Binary | Host | Zone | Status | State | Updated_at |
+------------------+------------+------+---------+-------+----------------------------+
| cinder-scheduler | controller | nova | enabled | up | 2016-09-30T02:27:41.000000 |
| cinder-volume | block@lvm | nova | enabled | up | 2016-09-30T02:27:46.000000 |
+------------------+------------+------+---------+-------+----------------------------+
