Treating this as a cloud notebook.
The deployment process follows the official OpenStack installation guide: https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-queens
I also referred to 扶艾's series of articles; no local package mirror was used.
Both servers run CentOS 7.2 with 16 GB of RAM.
Controller node: IP 192.168.239.128/24, hostname controller.node
Compute node: IP 192.168.239.129/24, hostname computer.node
System setup:
On both nodes:
1. Edit the hosts file
vi /etc/hosts
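A minimal sketch of the entries to add, using the two addresses and hostnames above:
192.168.239.128 controller.node
192.168.239.129 computer.node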
2. Disable SELinux
vi /etc/selinux/config
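Disabling SELinux here means changing the SELINUX line in that file, e.g.:
SELINUX=disabled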
3. Disable the firewall
firewall-cmd --state //check the status (running / not running)
systemctl stop firewalld.service //stop firewalld
systemctl disable firewalld.service //disable start at boot
4. Set up time synchronization (NTP)
date shows the system time; hwclock shows the hardware clock
yum install ntpdate -y
timedatectl list-timezones|grep Asia
timedatectl set-timezone Asia/Shanghai
systemctl start ntpdate
yum upgrade
5. Install the vim editor (optional)
yum install vim -y
Install the base environment
1. Install the OpenStack client and related packages [on both nodes]
yum install centos-release-openstack-queens -y
yum install python-openstackclient -y
yum install openstack-selinux -y
yum upgrade -y
2. On the controller node, install MariaDB [another database could also be used]
yum install mariadb mariadb-server python2-PyMySQL -y
Create and edit the configuration file
vim /etc/my.cnf.d/openstack.cnf
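A sketch of what this file contains, following the official guide and assuming the controller IP above:
[mysqld]
bind-address = 192.168.239.128
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8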
Enable the service at boot and start it
systemctl enable mariadb.service
systemctl start mariadb.service
Secure the database (the mysql_secure_installation script in the official guide): just answer 'y' to the prompts; you will be asked to set a root password along the way.
3. On the controller node, install the message queue
yum install rabbitmq-server -y
Add the openstack user and grant it permissions on the message queue
rabbitmqctl add_user openstack openstackmqpwd
Creating user "openstack" ...
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
4. Install the memcached service
yum install memcached python-memcached -y
Edit the /etc/sysconfig/memcached file
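Per the official guide, the change is to make memcached also listen on the controller's management address, e.g.:
OPTIONS="-l 127.0.0.1,::1,controller.node"
The service then needs to be enabled and started (not shown above):
systemctl enable memcached.service
systemctl start memcached.service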
5. On the controller node, install the etcd service
yum install etcd -y
Edit the configuration file /etc/etcd/etcd.conf
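A sketch of the relevant settings, following the official guide with this deployment's controller IP (the name "controller.node" is an assumption):
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.239.128:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.239.128:2379"
ETCD_NAME="controller.node"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.239.128:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.239.128:2379"
ETCD_INITIAL_CLUSTER="controller.node=http://192.168.239.128:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"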
Enable and start the service
systemctl enable etcd
systemctl start etcd
Install the Keystone component on the controller node
1. Database setup
mysql -u root -pmysqlpwd
CREATE DATABASE keystone;
Set access privileges for the keystone database
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'mysqlpwd';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'mysqlpwd';
2. Install and configure Keystone
yum install openstack-keystone httpd mod_wsgi -y
Edit the [database] section
Edit the [token] section
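A sketch of the two sections in /etc/keystone/keystone.conf, assuming the keystone database user created above (password mysqlpwd):
[database]
connection = mysql+pymysql://keystone:mysqlpwd@controller.node/keystone
[token]
provider = fernet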
Populate the keystone database tables
su -s /bin/sh -c "keystone-manage db_sync" keystone
Bootstrap Keystone
keystone-manage bootstrap --bootstrap-password mysqlpwd --bootstrap-admin-url http://controller.node:35357/v3/ --bootstrap-internal-url http://controller.node:5000/v3/ --bootstrap-public-url http://controller.node:5000/v3/ --bootstrap-region-id RegionOne
3. Configure the Apache service
Edit the configuration file /etc/httpd/conf/httpd.conf
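Per the official guide, this amounts to setting the server name to the controller host, e.g.:
ServerName controller.node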
Create the symlink and enable the service
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable httpd.service
systemctl start httpd.service
4. Configure the domains, projects, roles, and users
Export the admin environment variables
export OS_USERNAME=admin
export OS_PASSWORD=mysqlpwd
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller.node:35357/v3
export OS_IDENTITY_API_VERSION=3
openstack project create --domain default --description "Demo Project" demo //create the demo project
openstack domain create --description "An Example Domain" example //create a domain
openstack user create --domain default --password-prompt demo
Create the demo user and enter its password when prompted [demopwd here]
openstack role create user
openstack role add --project demo --user demo user
5. Verification
unset OS_AUTH_URL OS_PASSWORD //unset the environment variables
openstack --os-auth-url http://controller.node:35357/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name admin --os-username admin token issue //request an authentication token as admin
The next command uses the demo user's password and API port 5000, which only allows regular (non-admin) access to the Identity service API.
openstack --os-auth-url http://controller.node:5000/v3 --os-project-domain-name Default --os-user-domain-name Default --os-project-name demo --os-username demo token issue
Create client environment scripts for the projects and users:
. /home/admin-opensc //source the script to load the variables
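A sketch of what the admin script (/home/admin-opensc in these notes) would contain, mirroring the variables exported above (OS_IMAGE_API_VERSION is taken from the official guide's admin-openrc):
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=mysqlpwd
export OS_AUTH_URL=http://controller.node:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2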
Install the Glance component on the controller node
1. Database setup
mysql -u root -pmysqlpwd
create database glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'glancepwd';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'glancepwd';
. /home/admin-opensc
openstack user create --domain default --password-prompt glance
Create the glance user and set its password [glancepwd here]
2. Assign the role and create the service entity
openstack role add --project service --user glance admin
openstack service create --name glance --description "OpenStack Image" image
3. Create the Image service API endpoints
openstack endpoint create --region RegionOne image public http://controller.node:9292
openstack endpoint create --region RegionOne image internal http://controller.node:9292
openstack endpoint create --region RegionOne image admin http://controller.node:9292
4. Install and configure the components
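The package installation step itself is not listed here; per the official guide it would be:
yum install openstack-glance -y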
Edit the configuration file /etc/glance/glance-api.conf
1924L: [database]: connection=mysql+pymysql://glance:glancepwd@controller.node/glance
3472L: [keystone_authtoken] (the settings are around line 3501):
auth_uri = http://controller.node:5000
auth_url = http://controller.node:5000
3551L: memcached_servers = controller.node:11211
Add after 3658L:
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glancepwd
4509L: [paste_deploy]: flavor = keystone
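The [glance_store] section is not covered above; per the official guide it would look like this (the image directory is the guide's default, assumed here):
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/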
Edit the configuration file /etc/glance/glance-registry.conf
1170L: [database]: connection = mysql+pymysql://glance:glancepwd@controller.node/glance
1285L: [keystone_authtoken]:
auth_uri = http://controller.node:5000
auth_url = http://controller.node:5000
memcached_servers = controller.node:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = glancepwd
2298L: flavor = keystone
Populate the Image database and enable the services
su -s /bin/sh -c "glance-manage db_sync" glance
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
5. Verification (OpenStack also provides an official test image, which is smaller)
Download an image: wget http://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud-1708.qcow2
Upload the image: glance image-create --name "centos7.4" --file CentOS-7-x86_64-GenericCloud-1708.qcow2 --disk-format qcow2 --container-format bare --visibility public --progress
List images: openstack image list
Install the Nova component
Controller node:
1. Create the databases, service credentials, and API endpoints
mysql -u root -pmysqlpwd
CREATE DATABASE nova_cell0;
CREATE DATABASE nova;
CREATE DATABASE nova_api;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'novapwd';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'novapwd';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'novapwd';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'novapwd';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'novapwd';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'novapwd';
2. Create the nova user and set its password [novapwd here]
openstack user create --domain default --password-prompt nova
openstack role add --project service --user nova admin //add the nova user to the admin role
3. Create the service entity and API endpoints
openstack service create --name nova --description "OpenStack Compute" compute
openstack endpoint create --region RegionOne compute public http://controller.node:8774/v2.1
openstack endpoint create --region RegionOne compute internal http://controller.node:8774/v2.1
openstack endpoint create --region RegionOne compute admin http://controller.node:8774/v2.1
4. Create the placement user and set its password
openstack user create --domain default --password-prompt placement
openstack role add --project service --user placement admin
Create the Placement API entry in the service catalog
openstack service create --name placement --description "Placement API" placement
openstack endpoint create --region RegionOne placement public http://controller.node:8778
openstack endpoint create --region RegionOne placement internal http://controller.node:8778
openstack endpoint create --region RegionOne placement admin http://controller.node:8778
5. Install the packages
yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api
6. Edit the configuration file /etc/nova/nova.conf
1291L: [DEFAULT]: my_ip=192.168.239.128
1755L: [DEFAULT]: use_neutron=true
2417L: [DEFAULT]: firewall_driver=nova.virt.firewall.NoopFirewallDriver
2756L: [DEFAULT]: enabled_apis=osapi_compute,metadata
3155L: [DEFAULT]: transport_url=rabbit://openstack:openstackmqpwd@controller.node
3220L: [api]: auth_strategy=keystone
3512L: [api_database]: connection=mysql+pymysql://nova:novapwd@controller.node/nova_api
4635L: [database]: connection=mysql+pymysql://nova:novapwd@controller.node/nova
5340L: [glance]: api_servers=http://controller.node:9292
6117L: [keystone_authtoken]:
6118 auth_url = http://controller.node:5000/v3
6119 memcached_servers = controller.node:11211
6120 auth_type = password
6121 project_domain_name = default
6122 user_domain_name = default
6123 project_name = service
6124 username = nova
6125 password = novapwd
7918L: [oslo_concurrency]: lock_path=/var/lib/nova/tmp
8800L: [placement]:
8800 os_region_name = RegionOne
8801 project_domain_name = Default
8802 project_name = service
8803 auth_type = password
8804 user_domain_name = Default
8805 auth_url = http://controller.node:5000/v3
8806 username = placement
8807 password = novapwd
10290L: [vnc]: enabled=true
10314L: [vnc]: server_listen=192.168.239.128
10327L: [vnc]: server_proxyclient_address=192.168.239.128
Edit the configuration file /etc/httpd/conf.d/00-nova-placement-api.conf
Add:
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
7. Restart the service and populate the database
systemctl restart httpd
su -s /bin/sh -c "nova-manage api_db sync" nova
8. Register the cell0 database, create the cell1 cell, and populate the nova database
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
Verify that nova cell0 and cell1 are registered correctly: nova-manage cell_v2 list_cells
9. Enable the Nova services
systemctl enable openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service openstack-nova-consoleauth.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
Compute node:
1. Install the packages
yum install openstack-nova-compute
2. Edit the configuration file /etc/nova/nova.conf
1294L: [DEFAULT]: my_ip=192.168.239.129
1755L: [DEFAULT]: use_neutron=true
2417L: [DEFAULT]: firewall_driver=nova.virt.firewall.NoopFirewallDriver
2756L: [DEFAULT]: enabled_apis=osapi_compute,metadata
3156L: [DEFAULT]: transport_url=rabbit://openstack:openstackmqpwd@controller.node
3221L: [api]: auth_strategy=keystone
5340L: [glance]: api_servers=http://controller.node:9292
6119L: [keystone_authtoken]:
6120 auth_url = http://controller.node:5000/v3
6121 memcached_servers = controller.node:11211
6122 auth_type = password
6123 project_domain_name = default
6124 user_domain_name = default
6125 project_name = service
6126 username = nova
6127 password = novapwd
7921L: [oslo_concurrency]: lock_path=/var/lib/nova/tmp
8800L: [placement]:
8800 os_region_name = RegionOne
8801 project_domain_name = Default
8802 project_name = service
8803 auth_type = password
8804 user_domain_name = Default
8805 auth_url = http://controller.node:5000/v3
8806 username = placement
8807 password = novapwd
10290L: [vnc]: enabled=true
10317L: [vnc]: server_listen=0.0.0.0
10330L: [vnc]: server_proxyclient_address=192.168.239.129
10348L: [vnc]: novncproxy_base_url=http://controller.node:6080/vnc_auto.html
3. Check whether the compute node supports hardware acceleration for virtual machines
egrep -c '(vmx|svm)' /proc/cpuinfo
If this returns a value of one or greater, no further configuration is needed.
If it returns zero, hardware acceleration is not supported; libvirt must be configured to use QEMU instead of KVM:
Edit the configuration file: vim /etc/nova/nova.conf
[libvirt] section: virt_type=qemu
4. Start the compute service and enable it at boot
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl start libvirtd.service openstack-nova-compute.service
Back on the controller node
1. Check that the compute node host is present in the database
openstack compute service list --service nova-compute
2. Discover compute hosts
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
When new compute nodes are added, nova-manage cell_v2 discover_hosts must be run on the controller node to register them. Alternatively, set a discovery interval in /etc/nova/nova.conf: in the [scheduler] section, set discover_hosts_in_cells_interval to a value, as sketched below.
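A sketch of that alternative, assuming an interval of 300 seconds:
[scheduler]
discover_hosts_in_cells_interval = 300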
3. Verification
openstack compute service list //list the service components to verify that each process started and registered successfully
openstack catalog list //list the API endpoints
Verify that the cells and the Placement API are working correctly: nova-status upgrade check
Install the Neutron component
Controller node
1. Configure the database
mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'neutronpwd';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'neutronpwd';
2. Create the neutron user and add it to the admin role
openstack user create --domain default --password-prompt neutron
Enter the password (neutron here)
openstack role add --project service --user neutron admin
3. Create the service entity and API endpoints
openstack service create --name neutron --description "OpenStack Networking" network
openstack endpoint create --region RegionOne network public http://controller.node:9696
openstack endpoint create --region RegionOne network internal http://controller.node:9696
openstack endpoint create --region RegionOne network admin http://controller.node:9696
4. Install the components [option 2 is used here; see the official OpenStack documentation for the differences between options 1 and 2]
yum install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
5. Edit the configuration file /etc/neutron/neutron.conf
27L: [DEFAULT]: auth_strategy = keystone
30L: [DEFAULT]: core_plugin = ml2
33L: [DEFAULT]: service_plugins = router
85L: [DEFAULT]: allow_overlapping_ips = true
98L: [DEFAULT]: notify_nova_on_port_status_changes = true
102L: [DEFAULT]: notify_nova_on_port_data_changes = true
570L: [DEFAULT]: transport_url = rabbit://openstack:openstackmqpwd@controller.node
729L: [database]: connection = mysql+pymysql://neutron:neutronpwd@controller.node/neutron
817L: [keystone_authtoken]:
818 auth_uri = http://controller.node:5000
819 auth_url = http://controller.node:35357
820 memcached_servers = controller.node:11211
821 auth_type = password
822 project_domain_name = default
823 user_domain_name = default
824 project_name = service
825 username = neutron
826 password = neutron
1065 [nova]
1066 auth_url = http://controller.node:35357
1067 auth_type = password
1068 project_domain_name = default
1069 user_domain_name = default
1070 region_name = RegionOne
1071 project_name = service
1072 username = nova
1073 password = novapwd
1191L: [oslo_concurrency]: lock_path = /var/lib/neutron/tmp
6. Configure the Modular Layer 2 (ML2) plug-in, which uses the Linux bridge mechanism to build the layer-2 (bridging and switching) virtual network infrastructure for instances. File: /etc/neutron/plugins/ml2/ml2_conf.ini
128L: [ml2]:
136 type_drivers = flat,vlan,vxlan
141 tenant_network_types = vxlan
145 mechanism_drivers = linuxbridge,l2population
150 extension_drivers = port_security
177L: [ml2_type_flat]:
186 flat_networks = provider
231L: [ml2_type_vxlan]:
239 vni_ranges = 1:1000
247L: [securitygroup]:
263 enable_ipset = true
7. Configure the Linux bridge agent: /etc/neutron/plugins/ml2/linuxbridge_agent.ini
146L: [linux_bridge]:
157 physical_interface_mappings = provider:ens33
181L: [securitygroup]:
188 firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
193 enable_security_group = true
200L: [vxlan]:
208 enable_vxlan = true
234 local_ip = 192.168.239.128
258 l2_population = true
Configure the layer-3 agent: /etc/neutron/l3_agent.ini
16 interface_driver = linuxbridge
Configure the DHCP agent: /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Configure the metadata agent: /etc/neutron/metadata_agent.ini
22 nova_metadata_host = controller.node
34 metadata_proxy_shared_secret = METADATA_SECRET
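METADATA_SECRET is a placeholder for a secret of your choosing; the same value must be used for metadata_proxy_shared_secret in the [neutron] section of nova.conf below.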
Configure the Compute service to use the Networking service: /etc/nova/nova.conf
7588 [neutron]
7589 url = http://controller.node:9696
7590 auth_url = http://controller.node:35357
7591 auth_type = password
7592 project_domain_name = default
7593 user_domain_name = default
7594 region_name = RegionOne
7595 project_name = service
7596 username = neutron
7597 password = neutron
7598 service_metadata_proxy = true
7599 metadata_proxy_shared_secret = METADATA_SECRET
Create the symlink
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the Compute API service, then enable and start the networking services and the layer-3 agent
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
Compute node:
1. Install the components
yum install openstack-neutron-linuxbridge ebtables ipset
2. Edit the configuration file /etc/neutron/neutron.conf
[DEFAULT]
27 auth_strategy = keystone
570 transport_url = rabbit://openstack:openstackmqpwd@controller.node
[keystone_authtoken]
818 auth_uri = http://controller.node:5000
819 auth_url = http://controller.node:35357
820 memcached_servers = controller.node:11211
821 auth_type = password
822 project_domain_name = default
823 user_domain_name = default
824 project_name = service
825 username = neutron
826 password = neutron
[oslo_concurrency]
1183 lock_path = /var/lib/neutron/tmp
3. Configure the Linux bridge agent: /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
157 physical_interface_mappings = provider:ens33
[vxlan]
208 enable_vxlan = true
234 local_ip = 192.168.239.129
258 l2_population = true
[securitygroup]
188 firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
193 enable_security_group = true
4. Verify the sysctl values [each should return 1]
sysctl net.bridge.bridge-nf-call-ip6tables
sysctl net.bridge.bridge-nf-call-iptables
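If either key is missing ("No such file or directory"), the br_netfilter kernel module likely needs to be loaded first, e.g.:
modprobe br_netfilter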
5. Configure Nova to use the Networking service: /etc/nova/nova.conf
7591 [neutron]
7593 url = http://controller.node:9696
7594 auth_url = http://controller.node:35357
7595 auth_type = password
7596 project_domain_name = default
7597 user_domain_name = default
7598 region_name = RegionOne
7599 project_name = service
7600 username = neutron
7601 password = neutron
6. Restart the Compute service and enable the bridge agent
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl start neutron-linuxbridge-agent.service
Verify the operation on the controller node
openstack network agent list
There should be four agents on the controller node (metadata, DHCP, Linux bridge, and L3) and one Linux bridge agent on the compute node.
At this point all the essential components are installed; virtual machines and internal/external networks can now be created and managed from the command line. Installing the dashboard (Horizon) and block storage (Cinder) components is also recommended.
The Horizon dashboard installation will have to wait until I've sorted out my notes...