RDO All-in-One Deployment
RDOopenstack
1. Disk: 100 GB; CPUs: 2 sockets x 2 cores; Memory: 10 GB; NAT networking
2. Set the time zone to Asia/Shanghai
 ---- default minimal install
 ---- automatic disk partitioning
 ---- enable networking and set the hostname, e.g. rdoopenstack
3. Set the root password (123456 here)
4. Install the vim editor: yum install -y vim*
5. Network configuration
1) Enable networking
vim /etc/sysconfig/network
# Created by anaconda
NETWORKING=yes
2) Configure a static IP
vi /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="static" ----------- change to static
BROADCAST="192.168.186.255"
IPADDR="192.168.186.132"
NETMASK="255.255.255.0"
NETWORK="192.168.186.0"
GATEWAY="192.168.186.2"
DNS1="192.168.186.2"
3) DNS servers
vi /etc/resolv.conf --------- usually left at the default
4) IP-to-hostname mapping
vi /etc/hosts
192.168.186.132 rdoopenstack rdoopenstack.localdomain ------------- format: 192.168.186.xxx ipname ipname.domain
5) Restart the network
systemctl restart network
ping 172.16.35.51 ------------------ ping the host IP to verify connectivity
ping 192.168.186.133 ---------------- ping another node to verify connectivity
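The ifcfg edit above is easy to fumble by hand; a minimal sketch that renders the file from variables. The TYPE/NAME/DEVICE/ONBOOT keys are the usual anaconda defaults rather than values from these notes, and OUT points at /tmp so the sketch is safe to dry-run; on a real node set OUT=/etc/sysconfig/network-scripts/ifcfg-ens33 and then restart the network.

```shell
# Sketch: render a static-IP ifcfg file from variables.
# OUT targets /tmp for a safe dry run; on a real node use
# /etc/sysconfig/network-scripts/ifcfg-ens33 and `systemctl restart network`.
OUT=/tmp/ifcfg-ens33
IPADDR=192.168.186.132
cat > "$OUT" <<EOF
TYPE="Ethernet"
BOOTPROTO="static"
NAME="ens33"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="$IPADDR"
NETMASK="255.255.255.0"
GATEWAY="192.168.186.2"
DNS1="192.168.186.2"
EOF
echo "wrote $OUT"
```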
6. Disable security services
1) Firewall
systemctl disable firewalld
systemctl stop firewalld
2) SELinux (requires a reboot)
vim /etc/selinux/config
#SELINUX=enforcing
SELINUX=disabled
reboot
7. Switch the network manager
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
8. Install the OpenStack package repository
yum install -y centos-release-openstack-queens
yum update -y
9. Enable the OpenStack Queens repository
yum install -y yum-utils
yum-config-manager --enable centos-openstack-queens
10. Install the yum-plugin-priorities plugin
yum install -y yum-plugin-priorities
11. Install the packages KVM depends on
cd /etc/yum.repos.d/
curl -O https://trunk.rdoproject.org/centos7/delorean-deps.repo
curl -O https://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo
vi delorean.repo ---- priority=1 -------------- defaults to 1
12. Update the system and packages
yum update -y
reboot
13. Install the OpenStack deployment tool
yum install -y openstack-packstack
14. Deploy OpenStack all-in-one with Packstack
packstack --allinone
15. After it succeeds, ls and then cat the generated keystonerc_admin file to view the admin password; log in to the web UI and change the password under Admin → Roles → Edit Role (set to admin here).
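Packstack writes the generated admin credentials to /root/keystonerc_admin; a small sketch for pulling the password out of that file. It is demonstrated on a sample file under /tmp so it can run anywhere; on the RDO host set RC=/root/keystonerc_admin.

```shell
# Sketch: extract OS_PASSWORD from a keystonerc_admin-style file.
# A sample file is created here; on the RDO host use RC=/root/keystonerc_admin.
RC=/tmp/keystonerc_admin
cat > "$RC" <<'EOF'
unset OS_SERVICE_TOKEN
export OS_USERNAME=admin
export OS_PASSWORD='1a2b3c4d5e6f'
export OS_AUTH_URL=http://192.168.186.132:5000/v3
EOF
ADMIN_PW=$(sed -n "s/^export OS_PASSWORD='\(.*\)'$/\1/p" "$RC")
echo "admin password: $ADMIN_PW"
```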
OpenStack Basic Configuration
Configure two nodes, openstackController and openstackComputer. Steps 1-8 are common to both.
1. Hardware
openstackController: Disk 60 GB; CPUs 1x2; Memory 2 GB; NAT networking
openstackComputer: Disk 60 GB; CPUs 2x2; Memory 4 GB; NAT networking
2. Set the time zone to Asia/Shanghai
 ---- default minimal install
 ---- automatic disk partitioning
 ---- enable networking and set the hostname
3. Set the root password (123456 here)
4. Install the vim editor: yum install -y vim*
5. Network configuration
1) Enable networking
vim /etc/sysconfig/network
# Created by anaconda
NETWORKING=yes
2) Configure a static IP
vi /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO="static"
BROADCAST="192.168.186.255"
openstackController: IPADDR="192.168.186.133"
openstackComputer: IPADDR="192.168.186.137"
NETMASK="255.255.255.0"
NETWORK="192.168.186.0"
GATEWAY="192.168.186.2"
DNS1="192.168.186.2"
3) DNS servers
vi /etc/resolv.conf --------- usually left at the default
4) IP-to-hostname mapping
vi /etc/hosts
192.168.186.133 openstackcontroller openstackcontroller.localdomain
192.168.186.137 openstackcomputer openstackcomputer.localdomain
5) Restart the network
systemctl restart network
ping 172.16.35.51 ------------------ ping the host IP to verify connectivity
ping openstackcomputer ---------------- ping the hostname to verify name resolution
6. Disable security services
1) Firewall
systemctl disable firewalld
systemctl stop firewalld
2) SELinux
vim /etc/selinux/config -------- comment out one line, add one line:
#SELINUX=enforcing
SELINUX=disabled
3) Reboot the machine
reboot
7. Switch the network manager
systemctl disable NetworkManager
systemctl stop NetworkManager
systemctl enable network
systemctl start network
8. Configure time synchronization
1) Install the time-sync tool
yum install -y chrony
2) Edit the configuration file
vi /etc/chrony.conf ------ comment out the existing server* lines
openstackController: comment out the original server lines and add the following
① Configure the Aliyun time servers
server time1.aliyun.com iburst
server time2.aliyun.com iburst
server time3.aliyun.com iburst
server time4.aliyun.com iburst
server time5.aliyun.com iburst
server time6.aliyun.com iburst
server time7.aliyun.com iburst
② Add the allowed subnet
openstackController:
allow 192.168.186.0/24
openstackComputer:
server 192.168.186.133 iburst -------- use the controller as the time server
3) Restart the time service and verify it works
systemctl enable chronyd
systemctl start chronyd
chronyc sourcestats
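`chronyc sources` marks the currently selected server with `^*`, which makes the verification scriptable. A sketch that parses a captured sample so it can run offline; on a real node pipe `chronyc sources` straight into the grep instead.

```shell
# Sketch: confirm chrony has selected a sync source.
# `chronyc sources` flags the chosen server with "^*"; here a captured
# sample is parsed so the check is testable offline.
SAMPLE='210 Number of sources = 2
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* 203.107.6.88                2   6   377    37   +501us[+625us] +/-   22ms
^- 203.107.6.89                2   6   377    36   -330us[-330us] +/-   31ms'
if printf '%s\n' "$SAMPLE" | grep -q '^\^\*'; then
  SYNCED=yes
else
  SYNCED=no
fi
echo "synchronized: $SYNCED"
```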
openstackController configuration
9. Database services
1) SQL database: MariaDB
① Check the version used on RDOopenstack and remove the old version
RDOopenstack: rpm -qa|grep mariadb ----------- check the mariadb version on RDOopenstack
openstackController: rpm -qa|grep mariadb ----- check the mariadb version on openstackController
yum -y remove maria* ---------- remove the old version on openstackController
② Write a new mariadb repo file
vim /etc/yum.repos.d/MariaDB.repo ---------- create the repo file with the content below, using the 10.3 series
# MariaDB 10.3 CentOS repository list - created 2019-01-18 13:10 UTC
# http://downloads.mariadb.org/mariadb/repositories/
[mariadb]
name = MariaDB
#baseurl = http://yum.mariadb.org/10.3/centos7-amd64
baseurl = https://mirrors.ustc.edu.cn/mariadb/yum/10.3/centos7-amd64
#gpgkey=https://yum.mariadb.org/RPM-GPG-KEY-MariaDB
gpgkey=https://mirrors.ustc.edu.cn/mariadb/yum/RPM-GPG-KEY-MariaDB
gpgcheck=1
③ Install, start the service, and check mariadb's status and version again
yum install -y mariadb mariadb-server python2-PyMySQL ---- install mariadb, mariadb-server, and python2-PyMySQL from the repo
systemctl enable mariadb ----------- start mariadb at boot
systemctl start mariadb -------------- start mariadb
systemctl status mariadb ----------- check mariadb's status
rpm -qa|grep -i maria ---------- verify the version matches RDOopenstack
④ Enter mysql, set the password, and grant privileges
mysql -u root -p ---- just press Enter; there is no password yet
set password for 'root'@'localhost'=password('123456'); -------- set the local root password to 123456
grant all privileges on *.* to root@'%' identified by '123456';
grant all privileges on *.* to root@'localhost' identified by '123456';
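The localhost / '%' / hostname GRANT triple set up here recurs for every service database later (keystone, glance, nova, neutron, placement). A sketch of a hypothetical helper that emits the three statements; pipe its output into `mysql -u root -p` on the controller.

```shell
# Sketch: emit the three standard GRANT statements for a service database.
# gen_grants DB USER PASS is a hypothetical helper, not a tool from these
# notes; its output is meant to be piped into the mysql client.
gen_grants() {
  db=$1; user=$2; pass=$3
  for host in localhost '%' openstackcontroller; do
    printf "grant all privileges on %s.* to '%s'@'%s' identified by '%s';\n" \
      "$db" "$user" "$host" "$pass"
  done
}
SQL=$(gen_grants keystone keystone keystone)
printf '%s\n' "$SQL"
```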
2) NoSQL databases: redis and memcached
① Add two yum repos and update yum
cd /etc/yum.repos.d/ ---------- be sure to switch to /etc/yum.repos.d/ first
curl -O https://trunk.rdoproject.org/centos7/delorean-deps.repo
curl -O https://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo
yum update -y ----------- update via yum
② Reboot the machine
reboot
③ Install redis-3.2* and python-redis, start redis, and check its status
yum install -y redis-3.2* python-redis
systemctl enable redis ----- start redis at boot
systemctl start redis -------- start redis
systemctl status redis ------ check redis status
④ Check the memcached version and install the new one
rpm -qa|grep memcached
yum install -y memcached-1.5* python-memcached
⑤ Configure memcached's listen addresses, then start it and verify its status
vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1,192.168.186.133" ------ the host's own IP is appended to the listen list
systemctl enable memcached ------- start memcached at boot
systemctl start memcached ---------- start memcached
systemctl status memcached ------- check memcached's status
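The OPTIONS edit can be done non-interactively with sed; a sketch that works on a copy under /tmp (on the controller, target /etc/sysconfig/memcached and restart memcached afterwards).

```shell
# Sketch: append the controller IP to memcached's listen list with sed.
# Works on a /tmp copy for the dry run; point CONF at
# /etc/sysconfig/memcached on the real node.
CONF=/tmp/memcached.sysconfig
cat > "$CONF" <<'EOF'
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="64"
OPTIONS="-l 127.0.0.1,::1"
EOF
sed -i 's/^OPTIONS="-l \(.*\)"/OPTIONS="-l \1,192.168.186.133"/' "$CONF"
grep ^OPTIONS "$CONF"
```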
10. Message queue (AMQP)
1) Install rabbitmq-server and restart it
RDOopenstack: systemctl status rabbitmq-server
openstackController: yum install -y rabbitmq-server
systemctl enable rabbitmq-server --------- start rabbitmq-server at boot
systemctl restart rabbitmq-server --------- restart rabbitmq-server
2) Create an account and grant it permissions
rabbitmqctl add_user openstack openstack -------- add an openstack account
rabbitmqctl set_permissions openstack ".*" ".*" ".*" ---- grant the openstack user full permissions
rabbitmqctl set_user_tags openstack administrator ------ make openstack an administrator
3) Check the status, enable the management plugin, and restart the service
rabbitmqctl list_users ----------- list rabbitmq users
yum install -y net-tools ----- install the network tools
netstat -ntlp|grep 5672 ------ check the state of port 5672
/usr/lib/rabbitmq/bin/rabbitmq-plugins list ------- management plugin not yet enabled
rabbitmq-plugins enable rabbitmq_management ------- enable the management plugin
/usr/lib/rabbitmq/bin/rabbitmq-plugins list ------ management plugin now enabled
systemctl restart rabbitmq-server --------- restart the rabbitmq-server service
4) Open ip:15672 in a browser; if you can log in, this step is done
----- URL: http://192.168.186.133:15672 --------------- username and password are both openstack
11. Keystone installation and configuration
1) Install the OpenStack software repository (Queens release)
yum install -y centos-release-openstack-queens
yum upgrade -y
reboot
yum install -y python-openstackclient
yum install -y openstack-selinux
2) Create the keystone database and grant privileges
mysql -h 127.0.0.1 -u root -p -------------- password is 123456
create database keystone;
grant all privileges on keystone.* to 'keystone'@'localhost' identified by 'keystone';
grant all privileges on keystone.* to 'keystone'@'openstackcontroller' identified by 'keystone';
grant all privileges on keystone.* to 'keystone'@'%' identified by 'keystone';
3) Install and configure the components
yum install -y openstack-keystone httpd mod_wsgi
openssl rand -hex 10 --------------------- generates a random token, e.g. 44aad0194fd5be9038e6
4) Edit /etc/keystone/keystone.conf
vi /etc/keystone/keystone.conf
admin_token = 88dbf6a7680e5c73b2c0 ------------- around line 14; use the token generated above
connection = mysql+pymysql://keystone:keystone@192.168.186.133/keystone -- around line 570
provider = fernet ------------ around line 533
5) Initialize the database
keystone-manage db_sync
6) Initialize the Fernet key repositories used for tokens (two commands)
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
7) Bootstrap the identity service (one command; mind the spaces and dashes)
keystone-manage bootstrap --bootstrap-password keystone --bootstrap-admin-url http://192.168.186.133:5000/v3/ --bootstrap-internal-url http://192.168.186.133:5000/v3/ --bootstrap-public-url http://192.168.186.133:5000/v3/ --bootstrap-region-id RegionOne
8) Configure the Apache HTTP server
vi /etc/httpd/conf/httpd.conf
ServerName 192.168.186.133
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable httpd
systemctl start httpd
Open 192.168.186.133 in a browser and verify that a page appears
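The later sections repeatedly export the same OS_* credential variables; keeping them in a small rc file avoids retyping them. A sketch, using the hypothetical name admin-openrc and writing to /tmp for the dry run.

```shell
# Sketch: collect the admin credentials in a reusable rc file
# (hypothetical name admin-openrc; /tmp is used for the dry run).
cat > /tmp/admin-openrc <<'EOF'
export OS_USERNAME=admin
export OS_PASSWORD=keystone
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://192.168.186.133:5000/v3
export OS_IDENTITY_API_VERSION=3
EOF
. /tmp/admin-openrc
echo "$OS_AUTH_URL"
```

After sourcing it on the controller, `openstack token issue` is a quick end-to-end check of Keystone.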
12. Glance installation and configuration
1) Create the glance database and grant privileges
mysql -h 127.0.0.1 -u root -p ------- 123456
create database glance;
grant all privileges on glance.* to 'glance'@'localhost' identified by 'glance';
grant all privileges on glance.* to 'glance'@'%' identified by 'glance';
grant all privileges on glance.* to 'glance'@'openstackcontroller' identified by 'glance';
quit
2) Create the Glance service
① This is one command:
keystone-manage bootstrap --bootstrap-password keystone --bootstrap-admin-url http://192.168.186.133:5000/v3/ --bootstrap-internal-url http://192.168.186.133:5000/v3/ --bootstrap-public-url http://192.168.186.133:5000/v3/ --bootstrap-region-id RegionOne
② Export the credentials by hand ------------ or put them in /etc/profile and run source /etc/profile
export OS_USERNAME=admin
export OS_PASSWORD=keystone
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://192.168.186.133:5000/v3
export OS_IDENTITY_API_VERSION=3
③ Add the service user
openstack user create --domain default --password-prompt glance -- username and password are both glance
openstack user list ------- list openstack users
openstack project list -------- list openstack projects
openstack project create service ----- create a project named service
openstack role add --project service --user glance admin ----- give glance the admin role
④ Register the image service description for glance
openstack service create --name glance --description "OpenStack Image" image
3) Create the Glance API endpoints
openstack endpoint create --region RegionOne image public http://192.168.186.133:9292 --------- public endpoint
openstack endpoint create --region RegionOne image internal http://192.168.186.133:9292 ---------- internal endpoint
openstack endpoint create --region RegionOne image admin http://192.168.186.133:9292 ------- admin endpoint
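Each service here registers the same URL for the public, internal, and admin interfaces, so the three commands can be generated from a loop. A dry-run sketch: it only prints the commands; drop the echo to execute them.

```shell
# Dry-run sketch: generate the three endpoint-create commands for a
# service. make_endpoints is a hypothetical helper; it echoes the
# commands instead of running them.
make_endpoints() {
  service=$1; url=$2
  for iface in public internal admin; do
    echo openstack endpoint create --region RegionOne "$service" "$iface" "$url"
  done
}
CMDS=$(make_endpoints image http://192.168.186.133:9292)
printf '%s\n' "$CMDS"
```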
4) Install Glance
yum install -y openstack-glance
5) Configure Glance
① Create the image directory
mkdir -p /var/lib/glance/images
② Edit the glance-api.conf configuration file
vi /etc/glance/glance-api.conf
[database]
connection=mysql+pymysql://glance:glance@192.168.186.133/glance
[keystone_authtoken]
auth_uri=http://192.168.186.133:5000
auth_url=http://192.168.186.133:5000
memcached_servers=192.168.186.133:11211
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=service
username=glance
password=glance
[paste_deploy]
flavor=keystone
[glance_store]
stores=file,http
default_store=file
filesystem_store_datadir=/var/lib/glance/images
③ Edit the glance-registry.conf configuration file
vi /etc/glance/glance-registry.conf
[database]
connection=mysql+pymysql://glance:glance@192.168.186.133/glance
[keystone_authtoken]
auth_uri=http://192.168.186.133:5000
auth_url=http://192.168.186.133:5000
memcached_servers=192.168.186.133:11211
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=service
username=glance
password=glance
[paste_deploy]
flavor=keystone
6) Initialize the database
glance-manage db_sync
7) Start the services
① Enable at boot
systemctl enable openstack-glance-api
systemctl enable openstack-glance-registry
② Start the two services
chown -R glance:glance /var/log/glance/api.log --------- fix the api.log ownership
systemctl start openstack-glance-api
systemctl start openstack-glance-registry
③ Check both services' status
systemctl status openstack-glance-api
systemctl status openstack-glance-registry
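The enable/start/status triple for each unit can be collapsed into a loop. A dry-run sketch: with DRY_RUN=true it only prints the systemctl invocations, so it is safe to run anywhere; set DRY_RUN=false on the controller to actually execute them.

```shell
# Dry-run sketch: loop enable/start/status over the glance units.
# DRY_RUN=true records the systemctl commands instead of running them.
DRY_RUN=true
: > /tmp/glance-svc-cmds.txt
for unit in openstack-glance-api openstack-glance-registry; do
  for action in enable start status; do
    if [ "$DRY_RUN" = true ]; then
      echo "systemctl $action $unit" >> /tmp/glance-svc-cmds.txt
    else
      systemctl "$action" "$unit"
    fi
  done
done
cat /tmp/glance-svc-cmds.txt
```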
13. Nova installation and configuration
https://docs.openstack.org/nova/queens/install/controller-install-rdo.html
1) Add the following to /etc/profile
vi /etc/profile
export OS_USERNAME=admin
export OS_PASSWORD=keystone
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://192.168.186.133:5000/v3
export OS_IDENTITY_API_VERSION=3
source /etc/profile ------ reload the profile
echo $OS_AUTH_URL ---- verify the variables were added
2) Create the required databases and grant users in mysql
mysql -h localhost -u root -p ------ 123456
create database nova_api;
create database nova;
create database nova_cell0;
grant all privileges on nova_api.* to 'nova'@'localhost' identified by 'nova';
grant all privileges on nova_api.* to 'nova'@'%' identified by 'nova';
grant all privileges on nova_api.* to 'nova'@'openstackcontroller' identified by 'nova';
grant all privileges on nova.* to 'nova'@'localhost' identified by 'nova';
grant all privileges on nova.* to 'nova'@'%' identified by 'nova';
grant all privileges on nova.* to 'nova'@'openstackcontroller' identified by 'nova';
grant all privileges on nova_cell0.* to 'nova'@'localhost' identified by 'nova';
grant all privileges on nova_cell0.* to 'nova'@'%' identified by 'nova';
grant all privileges on nova_cell0.* to 'nova'@'openstackcontroller' identified by 'nova';
quit
3) Add the nova user and create the compute and placement services
① Add the nova user
openstack user create --domain default --password-prompt nova
openstack role add --project service --user nova admin
② Create the compute service
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://192.168.186.133:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://192.168.186.133:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://192.168.186.133:8774/v2.1
③ Create the placement service
openstack service create --name placement --description "Placement API" placement
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
openstack endpoint create --region RegionOne placement public http://192.168.186.133:8778
openstack endpoint create --region RegionOne placement admin http://192.168.186.133:8778
openstack endpoint create --region RegionOne placement internal http://192.168.186.133:8778
4) Install the nova components
yum install -y openstack-nova-api
yum install -y openstack-nova-conductor
yum install -y openstack-nova-console
yum install -y openstack-nova-novncproxy
yum install -y openstack-nova-scheduler
5) Edit nova.conf and complete the configuration
vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis=osapi_compute,metadata
transport_url=rabbit://openstack:openstack@192.168.186.133
[api_database]
connection=mysql+pymysql://nova:nova@192.168.186.133/nova_api
[database]
connection=mysql+pymysql://nova:nova@192.168.186.133/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
#auth_uri=http://192.168.186.133:5000/v3
auth_url=http://192.168.186.133:5000/v3
memcached_servers=192.168.186.133:11211
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=service
username=nova
password=nova
[DEFAULT]
my_ip=192.168.186.133 ------ the controller node's IP
use_neutron=True
firewall_driver=nova.virt.firewall.NoopFirewallDriver
[vnc]
enabled = true
server_listen=$my_ip
server_proxyclient_address=$my_ip
[glance]
api_servers=http://192.168.186.133:9292
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[placement]
os_region_name=RegionOne
project_domain_name=Default
project_name=service
auth_type=password
user_domain_name=Default
auth_url=http://192.168.186.133:5000/v3
username=placement
password=placement
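A typo in [keystone_authtoken], such as mamcached_servers instead of memcached_servers, only shows up at runtime as token-caching falling back silently. A quick grep-based sanity check, demonstrated on a sample fragment; point CONF at /etc/nova/nova.conf on the controller.

```shell
# Sketch: sanity-check a nova.conf-style file for the memcached_servers
# key and the easy "mamcached_servers" misspelling. CONF targets a sample
# fragment here; use CONF=/etc/nova/nova.conf on the real node.
CONF=/tmp/nova.conf.sample
cat > "$CONF" <<'EOF'
[keystone_authtoken]
auth_url=http://192.168.186.133:5000/v3
memcached_servers=192.168.186.133:11211
auth_type=password
EOF
ERRS=0
grep -q '^memcached_servers=' "$CONF" || { echo "missing memcached_servers"; ERRS=1; }
if grep -q '^mamcached_servers=' "$CONF"; then echo "typo: mamcached_servers"; ERRS=1; fi
echo "errors found: $ERRS"
```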
6) Initialize the databases and register the cells in nova
① Initialize the nova_api database
nova-manage api_db sync
mysql -h localhost -u root -p ------ 123456
use nova_api;
show tables;
quit
② Register the cells
nova-manage cell_v2 map_cell0
nova-manage cell_v2 create_cell --name=cell1 --verbose
nova-manage db sync
mysql -h localhost -u root -p ------ re-enter mysql to verify
use nova;
show tables;
quit
③ Verify the registration succeeded
nova-manage cell_v2 list_cells ---- verify
④ Enable the nova services at boot
systemctl enable openstack-nova-api
systemctl enable openstack-nova-consoleauth --- deprecated in later releases
systemctl enable openstack-nova-scheduler
systemctl enable openstack-nova-conductor
systemctl enable openstack-nova-novncproxy
⑤ Start the nova services
systemctl start openstack-nova-api
systemctl start openstack-nova-consoleauth --- deprecated in later releases
systemctl start openstack-nova-scheduler
systemctl start openstack-nova-conductor
systemctl start openstack-nova-novncproxy
⑥ Check the nova services' status
systemctl status openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
7) Addendum: working around openstack-nova-placement not being installable
① Install openstack-placement-api
yum install -y openstack-placement-api
② Create the placement database and grant privileges
mysql -u root -p123456
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'placement';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'placement';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'openstackcontroller' IDENTIFIED BY 'placement';
quit
③ Edit the placement.conf configuration file
vi /etc/placement/placement.conf
[placement_database]
# …
connection = mysql+pymysql://placement:placement@192.168.186.133/placement
[api]
# …
auth_strategy = keystone
[keystone_authtoken]
# …
auth_url=http://192.168.186.133:5000/v3
memcached_servers=192.168.186.133:11211
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=service
username=placement
password=placement
④ Initialize the placement database
placement-manage db sync
⑤ Write a config file to work around a known packaging bug
vi /etc/httpd/conf.d/00-placement-api.conf
<Directory /usr/bin>
  <IfVersion >= 2.4>
    Require all granted
  </IfVersion>
  <IfVersion < 2.4>
    Order allow,deny
    Allow from all
  </IfVersion>
</Directory>
⑥ Restart the httpd service and verify the installation succeeded
systemctl restart httpd
placement-status upgrade check
14. Neutron installation and configuration
1) Create the database and grant privileges
① Create the database
mysql -u root -p123456
create database neutron;
② Grant privileges
grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'neutron';
grant all privileges on neutron.* to 'neutron'@'%' identified by 'neutron';
grant all privileges on neutron.* to 'neutron'@'openstackcontroller' identified by 'neutron';
quit
2) Create the neutron user and network service, and register the API endpoints
① Create the user and service
openstack user create --domain default --password-prompt neutron
openstack role add --project service --user neutron admin
openstack service create --name neutron --description "OpenStack Networking" network
② Register the API endpoints
openstack endpoint create --region RegionOne network public http://192.168.186.133:9696
openstack endpoint create --region RegionOne network internal http://192.168.186.133:9696
openstack endpoint create --region RegionOne network admin http://192.168.186.133:9696
3) Install the neutron packages
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables -y
4) Edit the neutron.conf configuration file
vi /etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:neutron@192.168.186.133/neutron
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:openstack@192.168.186.133
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
auth_uri = http://192.168.186.133:5000
auth_url = http://192.168.186.133:5000
memcached_servers = 192.168.186.133:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
------- if there is no [nova] section in the file, append it at the end
[nova]
# …
auth_url = http://192.168.186.133:5000 ---------- port changed from 35357 to 5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
5) Edit the metadata_agent.ini configuration file
vi /etc/neutron/metadata_agent.ini
[DEFAULT]
# …
nova_metadata_host = 192.168.186.133
metadata_proxy_shared_secret = METADATA_SECRET
6) Edit the nova.conf configuration file
vi /etc/nova/nova.conf
[neutron]
url = http://192.168.186.133:9696
auth_url = http://192.168.186.133:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = METADATA_SECRET
7) Create a symlink
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
8) Initialize the database and apply the plugin migrations
neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
9) Restart the nova service
systemctl restart openstack-nova-api.service
10) Start the related services
① Enable at boot
systemctl enable neutron-server.service
systemctl enable neutron-linuxbridge-agent.service
systemctl enable neutron-dhcp-agent.service
systemctl enable neutron-metadata-agent.service
② Restart the services
systemctl restart neutron-server.service
systemctl restart neutron-linuxbridge-agent.service
systemctl restart neutron-dhcp-agent.service
systemctl restart neutron-metadata-agent.service
③ Check the services' status
systemctl status neutron-server.service
systemctl status neutron-linuxbridge-agent.service
systemctl status neutron-dhcp-agent.service
systemctl status neutron-metadata-agent.service
④ Enable, start, and check the L3 agent
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
systemctl status neutron-l3-agent.service
11) Addendum: configuration of the network plugins
① Edit the ml2_conf.ini configuration file
vi /etc/neutron/plugins/ml2/ml2_conf.ini --------- add the following sections at the end if missing
[ml2]
# …
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
# …
flat_networks = provider
[ml2_type_vxlan]
# …
vni_ranges = 1:1000
[securitygroup]
# …
enable_ipset = true
② Edit the linuxbridge_agent.ini configuration file
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini ----- add the following sections at the end if missing
[linux_bridge]
physical_interface_mappings = provider:ens33 ------- adjust to your interface name; unverified here
[vxlan]
enable_vxlan = true
local_ip = 192.168.186.133 ----------- this node's own IP
l2_population = true
[securitygroup]
# …
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
③ Edit the l3_agent.ini configuration file
vi /etc/neutron/l3_agent.ini ----- add the following at the end if missing
[DEFAULT]
# …
interface_driver = linuxbridge
④ Edit the dhcp_agent.ini configuration file
vi /etc/neutron/dhcp_agent.ini ----- add the following at the end if missing
[DEFAULT]
# …
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
⑤ Re-apply the plugin database migrations
neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head
⑥ Restart the nova service
systemctl restart openstack-nova-api.service
⑦ Restart the neutron services
systemctl restart neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
⑧ Check the neutron services' status
systemctl status neutron-server.service \
neutron-linuxbridge-agent.service neutron-dhcp-agent.service \
neutron-metadata-agent.service
15. Dashboard
1) Install the openstack-dashboard package
yum install openstack-dashboard -y
2) Edit the local_settings configuration file
vi /etc/openstack-dashboard/local_settings
① Settings that can be found in the file
OPENSTACK_HOST = "192.168.186.133"
ALLOWED_HOSTS = ['192.168.186.133', '192.168.186.137', '127.0.0.1'] ---- use your own IPs
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': '192.168.186.133:11211', ------- the controller's IP
    },
}
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
② Settings to append at the end of the file
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    #"volume": 2,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
------------- if self-service networking is configured, the following is not needed ---------------
#OPENSTACK_NEUTRON_NETWORK = {
#    'enable_router': False,
#    'enable_quotas': False,
#    'enable_distributed_router': False,
#    'enable_ha_router': False,
#    'enable_lb': False,
#    'enable_firewall': False,
#    'enable_vpn': False,
#    'enable_fip_topology_check': False,
#}
-----------------------------------
TIME_ZONE = "UTC" --------- choose the time zone matching your earlier time configuration
3) Edit the openstack-dashboard.conf configuration file
vi /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL} --------- add it near the top of the file
4) Restart the services and verify
systemctl restart httpd.service memcached.service
systemctl status httpd.service memcached.service
Open http://192.168.186.133/dashboard in a browser
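local_settings is plain Python, so curly quotes pasted from a document (“...” instead of "...") break the dashboard with a SyntaxError. Compiling the file is a cheap check; a sketch on a sample snippet (on the controller, run the same command against /etc/openstack-dashboard/local_settings).

```shell
# Sketch: syntax-check a local_settings-style file by compiling it.
# A sample snippet is written here; substitute the real file's path
# on the controller.
SNIPPET=/tmp/local_settings_check.py
cat > "$SNIPPET" <<'EOF'
OPENSTACK_HOST = "192.168.186.133"
ALLOWED_HOSTS = ['192.168.186.133', '192.168.186.137', '127.0.0.1']
OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST
EOF
if python3 -m py_compile "$SNIPPET"; then
  echo "local_settings syntax OK"
else
  echo "local_settings has a syntax error"
fi
```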
openstackComputer configuration
1. Nova installation and configuration
https://docs.openstack.org/nova/queens/install/compute-install-rdo.html
1) Add the two repos needed to download the nova components
cd /etc/yum.repos.d/
curl -O https://trunk.rdoproject.org/centos7/delorean-deps.repo
curl -O https://trunk.rdoproject.org/centos7/current-passed-ci/delorean.repo
2) Update yum and reboot
yum update -y
yum repolist enabled
reboot
3) Install the openstack-nova-compute package
yum install openstack-nova-compute -y
4) Edit nova.conf and complete the configuration
vi /etc/nova/nova.conf
[DEFAULT]
enabled_apis=osapi_compute,metadata
transport_url=rabbit://openstack:openstack@192.168.186.133
[api_database]
connection=mysql+pymysql://nova:nova@192.168.186.133/nova_api
[database]
connection=mysql+pymysql://nova:nova@192.168.186.133/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
#auth_uri=http://192.168.186.133:5000/v3
auth_url=http://192.168.186.133:5000/v3
memcached_servers=192.168.186.133:11211
auth_type=password
project_domain_name=Default
user_domain_name=Default
project_name=service
username=nova
password=nova
[DEFAULT]
my_ip=192.168.186.137 ----------------- this compute node's own management IP (per the official guide each node sets its own IP here)
use_neutron=True
firewall_driver=nova.virt.firewall.NoopFirewallDriver
[vnc] --------------- differs from the controller node
enabled = true
server_listen=0.0.0.0
server_proxyclient_address=$my_ip
novncproxy_base_url=http://192.168.186.133:6080/vnc_auto.html
[glance]
api_servers=http://192.168.186.133:9292
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
[placement]
os_region_name=RegionOne
project_domain_name=Default
project_name=service
auth_type=password
user_domain_name=Default
auth_url=http://192.168.186.133:5000/v3
username=placement
password=placement
5) Handle missing hardware virtualization support
① Check and work around it
egrep -c '(vmx|svm)' /proc/cpuinfo ------- returns 0: this machine exposes no hardware virtualization
vi /etc/nova/nova.conf
[libvirt]
# …
virt_type = qemu ------------ fall back to pure QEMU emulation
② Start the libvirtd service
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
6) Register the host (run on the controller)
openstack compute service list --service nova-compute
nova-manage cell_v2 discover_hosts
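Rather than rerunning discover_hosts by hand after every new compute node, nova's scheduler can discover hosts periodically via the `discover_hosts_in_cells_interval` option (a negative value, the default, disables it). A fragment for the controller's /etc/nova/nova.conf:

```ini
[scheduler]
# check for newly registered compute hosts every 300 seconds
discover_hosts_in_cells_interval = 300
```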
2. Neutron installation and configuration
1) Install the related packages
yum install openstack-neutron-linuxbridge ebtables ipset -y
2) Edit the neutron.conf configuration file
vi /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:openstack@192.168.186.133
auth_strategy = keystone
[keystone_authtoken]
auth_uri = http://192.168.186.133:5000
auth_url = http://192.168.186.133:5000
memcached_servers = 192.168.186.133:11211 ---- the controller's IP (the bare hostname "controller" is not defined in /etc/hosts here)
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
vi /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens33 ----------- adjust to your interface name; unverified here
[vxlan]
enable_vxlan = true
local_ip = 192.168.186.137 ------------- this compute node's own IP
l2_population = true
[securitygroup]
# …
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
3) Edit the nova.conf configuration file
vi /etc/nova/nova.conf
[neutron]
# …
url = http://192.168.186.133:9696
auth_url = http://192.168.186.133:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
4) Restart the nova service
systemctl restart openstack-nova-compute.service
5) Enable and start the neutron-linuxbridge-agent service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
systemctl status neutron-linuxbridge-agent.service