2-Manual Installation on the Test Machines
Manually deploy and install OpenStack on the test machines.
[toc]
1. OpenStack Installation Environment Setup
1. Security
Disable the firewall and SELinux:
systemctl disable firewalld.service
systemctl stop firewalld.service
setenforce 0
Edit the configuration file /etc/selinux/config and set SELINUX to disabled.
2. Hosts
10.58.10.19 controller
10.58.10.95 computer
2. Basic Environment Configuration
1. Enable the OpenStack repository
yum install centos-release-openstack-ocata
Then complete the installation:
Install the OpenStack client: yum install python-openstackclient
Install the SELinux policy package: yum install openstack-selinux
2. Map internal and external IPs to hostnames
Edit the configuration file /etc/hosts
Log out and log back in for the change to take effect.
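For reference, on both nodes the /etc/hosts entries would look like this (using the addresses listed in the Hosts section above):
10.58.10.19 controller
10.58.10.95 computer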
3. MySQL Database Installation and Configuration
Install the packages: yum install mariadb-server python2-PyMySQL
Edit the configuration file: vim /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 10.58.10.19
default-storage-engine = innodb
innodb_file_per_table
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8
- Start the database service: systemctl enable mariadb.service && systemctl start mariadb.service
- Set the database password: mysql_secure_installation # set the password to 'Td@123' and answer y to every prompt
- Test the login: mysql -uroot -pTd@123
- Remote access: allow the root user to connect to the MySQL database remotely:
grant all privileges on *.* to 'root'@'%' identified by 'Td@123' with grant option;
flush privileges;
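A quick remote-access check from another machine (a sketch, using the controller address and the password set above):
mysql -h 10.58.10.19 -uroot -pTd@123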
4. Message Queue: RabbitMQ Installation and Configuration
- Install the package: yum install rabbitmq-server
- Enable the message queue service: systemctl enable rabbitmq-server.service && systemctl start rabbitmq-server.service
- Add the openstack user:
rabbitmqctl add_user openstack openstack # add the user and its password
- Set permissions:
rabbitmqctl set_permissions openstack ".*" ".*" ".*" # allow configure, write, and read access
rabbitmq-plugins list # list the available plugins

- Use this plugin for web management: rabbitmq-plugins enable rabbitmq_management # enable the plugin, then systemctl restart rabbitmq-server.service
- Check the port:
lsof -i:15672

- Access RabbitMQ at http://controller:15672/
The default username and password are both guest. In the web UI, add the openstack user to a management group and test logging in with it; if you cannot connect, the usual cause is that the firewall was not disabled.
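Alternatively, management access for the openstack user can be granted from the CLI (a sketch; administrator is RabbitMQ's standard management tag and is not mentioned in the original):
rabbitmqctl set_user_tags openstack administrator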

5. Memcached Installation and Configuration
Memcached is used to cache tokens.
Install the packages: yum install memcached python-memcached
Edit the configuration file: vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
#OPTIONS="-l 127.0.0.1,::1,controller,computer"
OPTIONS="-vv >> /var/log/memcache/memcached.log 2>&1"
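Then enable and start the service (the original omits this step; these are the standard commands, following the same pattern as the other services in this guide):
systemctl enable memcached.service
systemctl start memcached.service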
3. Keystone - Identity Service
Cloud security covers four areas: data security, identity and access management, virtualization security, and infrastructure security. Keystone is the standalone OpenStack module that provides security authentication; it is responsible for authenticating OpenStack users, managing tokens, providing the catalog of services used to access resources, and enforcing role-based access control. In the overall OpenStack architecture, Keystone acts somewhat like a service bus: the other services register their endpoints with Keystone, where an endpoint is a service's access point or URL. Basic Keystone concepts:
- User: a person, system, or service that accesses OpenStack services through Keystone; Keystone uses the credentials to verify that the user's requests are legitimate.
- Role: a role held by a user, representing the permissions granted to that user.
- Service
- Endpoint: a network address that can be used to access a specific service.
- Token
- Catalog: the directory used to look up services.
1. Pre-installation Preparation
- Create the keystone database: CREATE DATABASE keystone;
- Grant database access:
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'keystone';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'keystone';
flush privileges;
2. Keystone Component Installation and Configuration
Install the packages: yum install openstack-keystone httpd mod_wsgi
Edit the configuration file: vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:keystone@controller/keystone
[token]
provider = fernet
- Populate the Identity service database: su -s /bin/sh -c "keystone-manage db_sync" keystone
- Initialize the Fernet key repositories:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
- Bootstrap the Identity service:
keystone-manage bootstrap --bootstrap-password admin \
--bootstrap-admin-url http://controller:35357/v3/ \
--bootstrap-internal-url http://controller:5000/v3/ \
--bootstrap-public-url http://controller:5000/v3/ \
--bootstrap-region-id RegionOne
3. Apache HTTP Server Configuration
Edit the configuration file: vim /etc/httpd/conf/httpd.conf
ServerName controller
Create the symlink: ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
Start the service: systemctl enable httpd.service && systemctl start httpd.service
Configure the administrative account:
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
Verify Keystone: openstack token issue
4. Create Domains/Projects/Users/Roles
1. Create the admin project: openstack project create --domain default --description "Admin Project" admin
Create the admin project, create the admin user (password admin; do not do this in production), create the admin role, then add the admin user to the admin project with the admin role (the three admin arguments are, in order: project, user, role):
openstack role add --project admin --user admin admin
2. Create the demo project:
openstack project create --domain default --description "Demo Project" demo
3. Create the demo user:
openstack user create --domain default --password-prompt demo (enter the password; mine is "demo")
4. Create the role for the demo user:
openstack role create user
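The upstream guide also attaches this role to the demo user in the demo project at this point; the original skips that step, so the following command is an assumed addition:
openstack role add --project demo --user demo user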
5. Create the service project, used to manage the other services:
openstack project create --domain default --description "Service Project" service
5. Verify Keystone
- List projects: openstack project list
- List users: openstack user list
- List roles: openstack role list
6. Keystone Functional Verification
- Disable the temporary admin_token authentication mechanism
vim /etc/keystone/keystone-paste.ini
Remove admin_token_auth from the following three sections:
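Per the upstream Ocata install guide, the three sections in keystone-paste.ini are [pipeline:public_api], [pipeline:admin_api], and [pipeline:api_v3].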

- Unset the temporary environment variables: unset OS_AUTH_URL OS_PASSWORD
- Token authentication as the admin user:
openstack --os-auth-url http://controller:35357/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name admin --os-username admin token issue
Enter the password: admin
- Token authentication as the demo user:
openstack --os-auth-url http://controller:5000/v3 --os-project-domain-name default --os-user-domain-name default --os-project-name demo --os-username demo token issue
Enter the password: demo

7. Create Client Authentication Scripts
- Create the file admin-openrc.sh:
echo "
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=admin
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
" > admin-openrc.sh
- Create the file demo-openrc.sh:
echo "
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
" > demo-openrc.sh
- Test the scripts:
source admin-openrc.sh
openstack token issue
source demo-openrc.sh
openstack token issue

4. Glance - Image Service
Glance provides the virtual machine image service for OpenStack and consists of two services, glance-api and glance-registry. glance-api is the entry point into Glance: it receives users' RESTful requests and then stores and retrieves images through the backend storage system.
1. Pre-installation Preparation
- Create the glance database and grant access:
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glance';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glance';
flush privileges;
- Authenticate as admin: source admin-openrc.sh
- Create the glance user:
openstack user create --domain default --password-prompt glance
Enter the password: glance
- Add the admin role to the glance user in the service project:
openstack role add --project service --user glance admin
- Create the glance service entity:
openstack service create --name glance --description "OpenStack Image" image
- Create the Image service API endpoints:
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292

2. Glance Component Installation and Configuration
Install the package: yum install openstack-glance
Edit the file: vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:glance@controller/glance
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[paste_deploy]
# ...
flavor = keystone
[glance_store]
# ...
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
- Edit the file: vim /etc/glance/glance-registry.conf
[database]
connection = mysql+pymysql://glance:glance@controller/glance
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[paste_deploy]
# ...
flavor = keystone
Commands to view the effective configuration:
cat /etc/glance/glance-registry.conf | grep -v "^#" | grep -v "^$"
cat /etc/glance/glance-api.conf | grep -v "^#" | grep -v "^$"
Populate the glance database: su -s /bin/sh -c "glance-manage db_sync" glance

Start Glance:
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
netstat -lnutp |grep 9191 #registry
tcp 0 0 0.0.0.0:9191 0.0.0.0:* LISTEN 24890/python2
netstat -lnutp |grep 9292 #api
tcp 0 0 0.0.0.0:9292 0.0.0.0:* LISTEN 24877/python2
3. Verify Glance
- Authenticate as admin: source admin-openrc.sh
- Download an image: wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
- Upload the image to the server:
openstack image create "cirros" \
--file cirros-0.3.5-x86_64-disk.img \
--disk-format qcow2 \
--container-format bare \
--public
- Check that the image uploaded successfully:
openstack image list

- Check that the image directory exists: ll /var/lib/glance/images/
To use an ISO-format image you need to build an OpenStack image from it with the Oz tool; see this post for the details (in a real production environment you will definitely be building from ISO images): http://www.cnblogs.com/kevingrace/p/5821823.html. Alternatively, download a CentOS qcow2 image and upload it directly: qcow2 images can be used in OpenStack as-is, with no format conversion needed. Download from http://cloud.centos.org/centos, which has qcow2 images for CentOS 5/6/7.
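For example, uploading a downloaded CentOS 7 cloud image might look like this (a sketch; the file name is illustrative, use whatever qcow2 file you downloaded):
openstack image create "centos7" \
--file CentOS-7-x86_64-GenericCloud.qcow2 \
--disk-format qcow2 \
--container-format bare \
--public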
5. Nova - Compute Service
Nova is OpenStack's compute component. It consists of four core services, API, Compute, Conductor, and Scheduler, which communicate with one another over the AMQP message queue. API is the HTTP entry point into Nova; Compute talks to the hypervisor (VMM) to run virtual machines and manage their life cycle; Scheduler selects the most suitable compute node from the available resource pool on which to create a new virtual machine instance; Conductor provides a layer of protection in front of database access. VM creation workflow: the user runs the novaclient command for creating a virtual machine; the API service receives the HTTP request sent by novaclient, converts it into an AMQP message, and calls the Conductor service through the message queue; after Conductor receives the task from the queue it does some preparatory work, then asks the Scheduler through the queue to pick a host that meets the requirements for creating the VM; once Conductor gets the target host from the Scheduler, it asks the Compute service to create the virtual machine.
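As a preview, the command that triggers this whole flow is openstack server create; the sketch below assumes a flavor, the cirros image uploaded earlier, and a tenant network (created in the Neutron section), so it can only be run once those exist:
openstack flavor create --vcpus 1 --ram 512 --disk 1 m1.tiny
openstack server create --flavor m1.tiny --image cirros --nic net-id=<selfservice-network-id> test-vm
openstack server list # the instance should go from BUILD to ACTIVE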
1. Pre-installation Preparation
Create the nova databases:
CREATE DATABASE nova;
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'nova';
CREATE DATABASE nova_api;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'nova';
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'nova';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'nova';
flush privileges;
Authenticate as admin: source admin-openrc.sh
Create the nova user: openstack user create --domain default --password-prompt nova

Add the admin role to the nova user: openstack role add --project service --user nova admin
Create the nova service entity: openstack service create --name nova --description "OpenStack Compute" compute
Create the Compute API service endpoints:
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

Create the placement user: openstack user create --domain default --password-prompt placement

Add the placement user to the service project with the admin role: openstack role add --project service --user placement admin
Create the Placement API service entity: openstack service create --name placement --description "Placement API" placement

Create the Placement API service endpoints:
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778

2. Install and Configure the Components (controller)
Install the Nova packages:
yum install openstack-nova-api \
openstack-nova-conductor \
openstack-nova-console \
openstack-nova-novncproxy \
openstack-nova-scheduler \
openstack-nova-placement-api
Edit the configuration file: vim /etc/nova/nova.conf
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
[api_database]
# ...
connection = mysql+pymysql://nova:nova@controller/nova_api
[database]
# ...
connection = mysql+pymysql://nova:nova@controller/nova
[DEFAULT]
# ...
transport_url = rabbit://openstack:openstack@controller
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[DEFAULT]
# ...
my_ip = 10.58.10.19
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[vnc]
enabled = true
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[glance]
# ...
api_servers = http://controller:9292
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = placement
- Edit the configuration file vim /etc/httpd/conf.d/00-nova-placement-api.conf and append at the end:
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>

Restart the httpd service: systemctl restart httpd.service
Populate the nova-api database: su -s /bin/sh -c "nova-manage api_db sync" nova

Register the cell0 database: su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell: su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
Populate the nova database: su -s /bin/sh -c "nova-manage db sync" nova

Verify that cell0 and cell1 are registered correctly: nova-manage cell_v2 list_cells

Start the services:
systemctl enable openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
systemctl restart openstack-nova-api.service \
openstack-nova-consoleauth.service \
openstack-nova-scheduler.service \
openstack-nova-conductor.service \
openstack-nova-novncproxy.service
3. Install and Configure the Components (computer)
1. Install the compute component: yum install openstack-nova-compute
2. Edit the file: vim /etc/nova/nova.conf
[DEFAULT]
# ...
enabled_apis = osapi_compute,metadata
[DEFAULT]
# ...
transport_url = rabbit://openstack:openstack@controller
[api]
# ...
auth_strategy = keystone
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova
[DEFAULT]
# ...
my_ip = 10.58.10.19
[DEFAULT]
# ...
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[vnc]
# ...
enabled = True
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html # if the controller node also acts as a compute node (single-machine deployment), add this line there as well (it belongs to the compute-node configuration); use the controller's public IP
[glance]
# ...
api_servers = http://controller:9292
[oslo_concurrency]
# ...
lock_path = /var/lib/nova/tmp
[placement]
# ...
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = placement
- Check hardware virtualization support: egrep -c '(vmx|svm)' /proc/cpuinfo
- Edit the configuration file: vim /etc/nova/nova.conf
[libvirt]
# ...
virt_type=kvm # if the controller node also acts as a compute node (single-machine deployment), add this line there as well (it belongs to the compute-node configuration)
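Note (standard guidance, not stated in the original): if the egrep check above returned 0, the node does not support hardware acceleration and virt_type should be set to qemu instead of kvm.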
3. Start the services:
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
4. Add the compute node to the cell database:
source admin-openrc.sh
openstack hypervisor list
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

4. Nova Functional Verification
- List the compute services; four services should be up (conductor, consoleauth, scheduler, compute):
openstack compute service list

- openstack catalog list

- nova-status upgrade check

6. Neutron - Networking Service
In Neutron, the entire physical network underlying OpenStack is abstracted into a pool of network resources, and Neutron can provide each tenant on the same physical network with an independent virtual network environment. The usual configuration: an administrator-created external network object handles the connection between the OpenStack environment and the Internet, and a private network is provided for tenants to attach their own virtual machines. For machines on the internal network to reach the Internet, a router must be created to connect the internal network to the external one. For this, Neutron provides an L3 (layer-3) abstraction, the router, and an L2 (layer-2) abstraction, the network: the router corresponds to a router in a real network and provides routing, NAT, and similar services, while the network corresponds to a layer-2 LAN in a real physical network. Another important concept is the subnet, which is attached to a layer-2 network and specifies the range of IP addresses that virtual machines on that network may use.
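To make these abstractions concrete, the commands below sketch how an external (provider) network, a tenant self-service network, and the router joining them would be created once Neutron is up; the names, address ranges, and DNS server are illustrative placeholders, not values from this guide:
openstack network create --external --provider-physical-network provider --provider-network-type flat provider
openstack subnet create --network provider --allocation-pool start=203.0.113.100,end=203.0.113.200 --dns-nameserver 8.8.8.8 --gateway 203.0.113.1 --subnet-range 203.0.113.0/24 provider
openstack network create selfservice
openstack subnet create --network selfservice --dns-nameserver 8.8.8.8 --gateway 172.16.1.1 --subnet-range 172.16.1.0/24 selfservice
openstack router create router
openstack router add subnet router selfservice
openstack router set router --external-gateway provider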
1. Installation and Configuration on the controller Node
1. Create the neutron database:
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutron';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutron';
2. Authenticate as admin:
source admin-openrc.sh
3. Create the neutron user:
openstack user create --domain default --password-prompt neutron
4. Add the admin role to the neutron user:
openstack role add --project service --user neutron admin
5. Create the neutron service entity:
openstack service create --name neutron --description "OpenStack Networking" network
6. Create the Networking service API endpoints:
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696

2. Network Type Configuration: Self-service Networks
- Install the Neutron networking components: yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
- Edit the configuration file:
cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.back
vim /etc/neutron/neutron.conf
[database]
# ...
connection = mysql+pymysql://neutron:neutron@controller/neutron
Enable the Modular Layer 2 (ML2) plug-in, the router service, and overlapping IPs:
[DEFAULT]
# ...
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
# ...
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[nova]
# ...
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
# ...
lock_path = /var/lib/neutron/tmp
- Edit the Modular Layer 2 plug-in configuration file to enable the flat/vlan/vxlan types:
cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.back
vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
# ...
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
# ...
flat_networks = provider
[ml2_type_vxlan]
# ...
vni_ranges = 1:1000
[securitygroup]
# ...
enable_ipset = true
- Edit the Linux bridge agent configuration file:
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.back
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = true
local_ip = 10.58.10.19
l2_population = true
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
- Edit the layer-3 agent configuration file:
cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.back
vim /etc/neutron/l3_agent.ini
[DEFAULT]
# ...
interface_driver = linuxbridge
- Edit the DHCP agent configuration file:
cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.back
vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
# ...
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
- Edit the metadata agent configuration file:
cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.back
vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = neutron # the shared secret used by Nova Metadata and Neutron to talk to each other; it just has to match on both sides
- Add the Neutron network configuration to the Compute service configuration file: vim /etc/nova/nova.conf
[neutron]
# ...
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = neutron
- Create the symlink: ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
- Populate the neutron database:
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

- Restart the nova-api service: systemctl restart openstack-nova-api.service
- Start the services:
systemctl enable neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service \
neutron-l3-agent.service
systemctl restart neutron-server.service \
neutron-linuxbridge-agent.service \
neutron-dhcp-agent.service \
neutron-metadata-agent.service \
neutron-l3-agent.service
4. Neutron Functional Verification
- openstack extension list --network

- openstack network agent list

7. Horizon - Dashboard
Horizon is a modular, web-based graphical interface accessed through a browser. It is built on Django, an open-source web application framework written in Python.
1. Horizon Installation and Configuration
- Install the Horizon package: yum install openstack-dashboard
- Edit the configuration file:
cp /etc/openstack-dashboard/local_settings /etc/openstack-dashboard/local_settings.back
vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
ALLOWED_HOSTS = ['*']
SESSION_ENGINE = 'django.contrib.sessions.backends.file'
CACHES = {
'default': {
'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
'LOCATION': 'controller:11211',
}
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v2.0" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
"identity": 3,
"image": 2,
"volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
TIME_ZONE = "TIME_ZONE"  # replace TIME_ZONE with your time zone identifier, e.g. "Asia/Shanghai" or "UTC"
- Restart the services: systemctl restart httpd.service memcached.service
2. Horizon Functional Verification
- Browse to http://controller/dashboard
Domain: default
Username: admin
Password: admin

- Login succeeds.

8. Adding a New Node
1. Install OpenStack Compute
- Install with yum (edit /etc/yum.conf first if needed): yum install openstack-nova-compute openstack-neutron-linuxbridge ebtables ipset
- Copy nova.conf from another compute node:
scp root@compute1:/etc/nova/nova.conf /etc/nova/nova.conf
vim /etc/nova/nova.conf
state_path=/var/lib/docker/nova
my_ip = 10.58.10.95
- Start compute:
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
2. Sync the New Compute Node on the controller Node
- Load the environment variables: source admin-openrc.sh
- Add the new compute node:
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting compute nodes from cell 'cell1': 007243d4-2b22-4939-811f-1f8f9dc41020
Found 1 computes in cell: 007243d4-2b22-4939-811f-1f8f9dc41020
Checking host mapping for compute host 'controller': e33268d7-1382-4347-b8b9-deccf2a38cdd
- Check the node list: openstack hypervisor list
- Edit the configuration files:
scp root@compute1:/etc/neutron/neutron.conf /etc/neutron/neutron.conf
cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.back
vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:eth0
[vxlan]
enable_vxlan = true
local_ip = 10.58.10.95
l2_population = true
[securitygroup]
# ...
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
- Start the networking services:
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl restart neutron-linuxbridge-agent.service
Errors
1. Error when populating the Keystone database:
/usr/lib/python2.7/site-packages/requests/__init__.py:91: RequestsDependencyWarning: urllib3 (1.24.1
Fix:
pip uninstall urllib3
pip uninstall chardet
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple requests
2. openstack token issue
Discovering versions from the identity service failed when creating the password plugin. Attempting to determine version from URL. Service Unavailable (HTTP 503)
Fix:
The machine had an HTTP proxy enabled in order to reach the Internet, which broke access to http://controller:35357/v3. Disable the proxy with unset http_proxy, then retest with curl http://controller:35357/v3. Problem solved.
3. openstack token issue
The request you have made requires authentication. (HTTP 401) (Request-ID: req-719deb9d-9123-4489-b6d5-456fe64b7f58)
Fix:
Re-initialize the Fernet key repositories:
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
4. neutron-server fails to start
/var/log/neutron/server.log NoMatchingPlugin: The plugin password could not be found
Fix:
In /etc/neutron/neutron.conf, auth_type = password had an extra trailing space ("password ").
5. neutron-linuxbridge-agent fails to start
/var/log/neutron/linuxbridge-agent.log 2019-05-24 01:17:36.587 15183 ERROR neutron.plugins.ml2.drivers.linuxbridge.agent.linuxbridge_neutron_agent [-] Interface INTERFACE for physical network provider does not exist. Agent terminated!
Fix:
In /etc/neutron/plugins/ml2/linuxbridge_agent.ini the mapping was misconfigured as physical_interface_mappings = provider:INTERFACE; the correct setting is physical_interface_mappings = provider:eth0.
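If you are unsure which interface name to use on a given node, check it first (standard iproute2 command, not from the original):
ip link show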
6. After installing and starting the dashboard, logging in fails
/var/log/httpd/error_log RuntimeError: Unable to create a new session key. It is likely that the cache is unavailable.
Fix:
In /etc/openstack-dashboard/local_settings, change SESSION_ENGINE from 'django.contrib.sessions.backends.cache' to 'django.contrib.sessions.backends.file', then restart: systemctl restart httpd.service memcached.service
7. Could not access KVM kernel module: Permission denied
failed to initialize KVM: Permission denied No accelerator found!
Fix:
This is a permissions problem. Handle it as follows: chown root:kvm /dev/kvm, then in /etc/libvirt/qemu.conf uncomment and set user = "root" and group = "root", restart the service with service libvirtd restart, and the problem is resolved.
