Installing Ceph and OpenStack on Multiple Nodes


Contents

  • Installing Ceph and OpenStack on Multiple Nodes
  • Network environment
  • Renaming multiple NICs (master node1 node2)
  • Static IP configuration (master node1 node2)
  • Basic configuration
  • Change node hostnames (master node1 node2)
  • Edit hosts (master node1 node2)
  • Generate an SSH key pair (master)
  • Copy the public key to each Ceph node (master)
  • Configure yum repos (master node1 node2)
  • Disable the firewall and SELinux (master node1 node2)
  • Set the time zone (master node1 node2)
  • Configure NTP time synchronization (master node1 node2)
  • Install Docker (master node1 node2)
  • System partitioning (master node1 node2)
  • Install Ceph with ceph-ansible (containerized)
  • Install ceph-ansible
  • Containerized Ceph deployment
  • Bare-metal Ceph deployment (for reference only)
  • Install OpenStack with kolla-ansible (containerized)
  • Environment setup
  • Install kolla-ansible
  • OpenStack deployment
  • Notes


Network environment

hostname    NIC    IP
master      em1    10.201.7.10
            em2    10.10.10.10
            em3    connected to the switch, but no IP assigned
node1       em1    10.201.7.11
            em2    10.10.10.11
node2       em1    10.201.7.12
            em2    10.10.10.12

  • 10.201.7.0/24: reaches the external network; the default route on all three machines
  • 10.10.10.0/24: the LAN used for traffic between the three machines
  • All IPs are configured statically

Renaming multiple NICs (master node1 node2)

  • Rename the NIC config file
    mv /etc/sysconfig/network-scripts/ifcfg-ens32 /etc/sysconfig/network-scripts/ifcfg-em1
  • Edit the NAME and DEVICE fields
    vim /etc/sysconfig/network-scripts/ifcfg-em1
    NAME=em1
    DEVICE=em1
  • Disable the predictable interface naming scheme
    vim /etc/default/grub and edit GRUB_CMDLINE_LINUX as follows:
    append net.ifnames=0 biosdevname=0 to GRUB_CMDLINE_LINUX (before quiet)
    Run grub2-mkconfig -o /boot/grub2/grub.cfg to regenerate the GRUB config and update the kernel parameters.
  • Add a udev rule
    vim /etc/udev/rules.d/70-persistent-net.rules and set ATTR{address}, KERNEL (usually left alone; use * if unsure), and NAME:
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:1e:67:ce:19:58", ATTR{type}=="1", KERNEL=="eth*", NAME="em1"
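A reboot is needed for the GRUB and udev changes to take effect. A quick sanity check after the machine comes back up (assuming the three interfaces above):

reboot
# after the reboot:
ip -o link show | awk -F': ' '{print $2}'   # should list em1, em2, em3 rather than ens*/eth* names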

Static IP configuration (master node1 node2)

# ifcfg-em1
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=em1
DEVICE=em1
ONBOOT=yes

GATEWAY=10.201.7.254
IPADDR=10.201.7.10     # adjust the IP for each machine
NETMASK=255.255.255.0
DNS1=61.128.128.68

# ifcfg-em2
TYPE=Ethernet
PROXY_METHOD=none
BROWSER_ONLY=no
BOOTPROTO=static
DEFROUTE=no
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=em2
DEVICE=em2
ONBOOT=yes

GATEWAY=10.10.10.1
IPADDR=10.10.10.10     # adjust the IP for each machine
NETMASK=255.255.255.0
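After writing both ifcfg files, restart networking and confirm the addresses and the default route; a minimal check (CentOS 7's legacy network service is assumed):

systemctl restart network
ip addr show em1 | grep 'inet '   # expect the 10.201.7.x/24 address
ip addr show em2 | grep 'inet '   # expect the 10.10.10.x/24 address
ip route | grep '^default'        # should point at 10.201.7.254 via em1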

Basic configuration

Change node hostnames (master node1 node2)

hostnamectl set-hostname master   # on master
hostnamectl set-hostname node1    # on node1
hostnamectl set-hostname node2    # on node2

Edit hosts (master node1 node2)

# edit /etc/hosts on master first
10.201.7.10  master
10.201.7.11  node1
10.201.7.12  node2

scp /etc/hosts root@node1:/etc
scp /etc/hosts root@node2:/etc
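A quick check that every node resolves the three hostnames (run on each node):

for h in master node1 node2; do
    ping -c1 -W1 $h >/dev/null && echo "$h ok" || echo "$h FAILED"
done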

Generate an SSH key pair (master)

# when prompted "Enter passphrase", just press Enter to leave it empty
ssh-keygen

Copy the public key to each Ceph node (master)

ssh-copy-id root@node1
ssh-copy-id root@node2
ssh-copy-id root@master
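Passwordless login can be verified in one pass; BatchMode makes ssh fail instead of prompting if a key is missing:

for h in master node1 node2; do ssh -o BatchMode=yes root@$h hostname; done
# should print master, node1, node2 with no password prompts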

Configure yum repos (master node1 node2)

yum clean all
# /etc/yum.repos.d/CentOS-Base.repo — point the base repos at the Tsinghua mirror
[base]
name=CentOS-$releasever - Base
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/os/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=os
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

#released updates
[updates]
name=CentOS-$releasever - Updates
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/updates/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7


#additional packages that may be useful
[extras]
name=CentOS-$releasever - Extras
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/extras/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=extras
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7


#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-$releasever - Plus
baseurl=https://mirrors.tuna.tsinghua.edu.cn/centos/$releasever/centosplus/$basearch/
#mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
# install EPEL and switch it to the Tsinghua mirror
yum install epel-release
sed -e 's!^metalink=!#metalink=!g' \
    -e 's!^#baseurl=!baseurl=!g' \
    -e 's!//download\.fedoraproject\.org/pub!//mirrors.tuna.tsinghua.edu.cn!g' \
    -e 's!http://mirrors\.tuna!https://mirrors.tuna!g' \
    -i /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel-testing.repo
  • Configure the Docker yum repo and the pip mirror
sudo yum remove docker docker-common docker-selinux docker-engine
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
wget -O /etc/yum.repos.d/docker-ce.repo https://download.docker.com/linux/centos/docker-ce.repo
sudo sed -i 's+download.docker.com+mirrors.tuna.tsinghua.edu.cn/docker-ce+' /etc/yum.repos.d/docker-ce.repo
yum install python-pip
pip install -i https://pypi.tuna.tsinghua.edu.cn/simple pip -U
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple
  • Configure the Ceph repo; the baseurl pins the Ceph release (Nautilus) (master node1 node2)
vim /etc/yum.repos.d/ceph.repo
[ceph]
name=Ceph packages for $basearch
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-nautilus/el7/$basearch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc

[ceph-noarch]
name=Ceph noarch packages
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-nautilus/el7/noarch
enabled=1
gpgcheck=1
priority=1
type=rpm-md
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc

[ceph-source]
name=Ceph source packages
baseurl=https://mirrors.tuna.tsinghua.edu.cn/ceph/rpm-nautilus/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc
priority=1
  • Refresh the metadata and update (a repolist check follows the commands below)
yum clean all
yum makecache
yum update
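Afterwards, a quick look at which repos are active (repo names may vary slightly by mirror):

yum repolist enabled | grep -Ei 'base|updates|extras|epel|ceph|docker-ce'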

Disable the firewall and SELinux (master node node2)

systemctl disable --now firewalld   # disables the unit and stops it in one step
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Set the time zone (master node1 node2)

hwclock
tzselect
# choose 5, 9, 1, 1 in turn
# which yields TZ='Asia/Shanghai'; export TZ
# then append the following two lines to /etc/profile:
vim /etc/profile
TZ='Asia/Shanghai'
export TZ
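A quick verification that the zone took effect:

source /etc/profile
date   # should now print CST (Asia/Shanghai) time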

Configure NTP time synchronization (master node1 node2)

yum -y install ntpdate
ntpdate -u  ntp.api.bz

crontab -e
*/20 * * * * ntpdate -u  ntp.api.bz > /dev/null 2>&1

systemctl reload crond.service
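With the SSH trust set up earlier, clock skew across the nodes can be spot-checked from master:

for h in master node1 node2; do echo -n "$h: "; ssh root@$h "date +'%F %T'"; done
# the three timestamps should agree to within a second or two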

Install Docker (master node1 node2)

# 1. Configure the Docker yum repo (done above)
# 2. Remove old Docker versions
yum remove -y docker \
              docker-client \
              docker-client-latest \
              docker-common \
              docker-latest \
              docker-latest-logrotate \
              docker-logrotate \
              docker-engine
# 3. Install the required tooling
yum install -y yum-utils device-mapper-persistent-data lvm2
# 4. Install Docker
yum install -y docker-ce docker-ce-cli containerd.io
# 5. Adjust the configuration
mkdir -p /etc/systemd/system/docker.service.d
tee /etc/systemd/system/docker.service.d/kolla.conf <<-'EOF'
[Service]
MountFlags=shared
EOF

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["http://f1361db2.m.daocloud.io"]
}
EOF

# 6. Enable on boot and restart
sudo systemctl enable docker && sudo systemctl daemon-reload && sudo systemctl restart docker
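A minimal smoke test that the daemon is up and the registry mirror was picked up:

docker info | grep -A1 'Registry Mirrors'   # should show the daocloud mirror
docker run --rm hello-world                 # pulls and runs a test container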

System partitioning (master node1 node2)

  • Partition with parted or fdisk
  • Re-read the partition table with partprobe
  • Create the PV with pvcreate

# Example: creating a new partition with parted
parted
(parted) p
(parted) mkpart
(parted) xfs
(parted) 0GB
(parted) 400GB
(parted) p
(parted) q
partprobe
pvcreate /dev/sda5

# Example: creating a new partition with fdisk
fdisk /dev/sda
Command (m for help): `n`
Select (default p): `l`
First sector (1268789248-1874329599, default 1268789248):
Last sector, +sectors or +size{K,M,G} (1268789248-1874329599, default 1874329599): `+500G`
Command (m for help): `w`
partprobe
pvcreate /dev/sda5
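Whichever tool was used, confirm the kernel sees the new partition and the PV exists before handing the device to ceph-ansible:

lsblk /dev/sda   # sda5 should appear with the expected size
pvs              # /dev/sda5 should be listed as a physical volume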

Install Ceph with ceph-ansible (containerized)

Install ceph-ansible

  • stable-4.0 supports Ceph Nautilus and requires Ansible 2.8.
  • Download the stable-4.0 release: https://github.com/ceph/ceph-ansible/tree/stable-4.0
virtualenv ceph-env
source ceph-env/bin/activate
pip install --upgrade pip

unzip ceph-ansible-stable-4.0.zip
cd ceph-ansible-stable-4.0 && pip install -r requirements.txt
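Since stable-4.0 pins Ansible 2.8, it is worth confirming what the requirements file actually installed before going further:

ansible --version | head -n1   # expect ansible 2.8.x, from the ceph-env virtualenv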

Containerized Ceph deployment

  • File preparation
cd ceph-ansible-stable-4.0
cp site-container.yml.sample site-container.yml
cp dummy-ansible-hosts ansible-hosts
cp group_vars/all.yml.sample  group_vars/all.yml
cp group_vars/osds.yml.sample  group_vars/osds.yml
  • Configuration changes
`vim ansible-hosts`
[mons]
master
node1
node2

[osds]
master
node1
node2

[mgrs]
master
node1
node2

[clients]
master
node1
node2

[rgws]
master
node1
node2

[mdss]
master
node1
node2

[grafana-server]
master

[nfss]
master


`vim site-container.yml`
- hosts:
  - mons
  - osds
  - mdss
  - rgws
  - nfss
#  - rbdmirrors
  - clients
#  - iscsigws
#  - iscsi-gws # for backward compatibility only!
  - mgrs
  - grafana-server


`vim group_vars/all.yml (cat all.yml | grep -Ev '^$|#')`

generate_fsid: true
monitor_interface: em1
public_network: 10.201.7.0/24
cluster_network: 10.10.10.0/24
osd_objectstore: bluestore
radosgw_interface: em1
ceph_docker_image: "ceph/daemon"
ceph_docker_image_tag: latest-nautilus
ceph_docker_registry: 10.201.7.116:4000 # private registry address; the default is docker.io
ceph_docker_on_openstack: true
containerized_deployment: true
openstack_config: true
openstack_glance_pool:
  name: "images"
  pg_num: "{{ osd_pool_default_pg_num }}"
  pgp_num: "{{ osd_pool_default_pg_num }}"
  rule_name: "replicated_rule"
  type: 1
  erasure_profile: ""
  expected_num_objects: ""
  application: "rbd"
  size: 3
  min_size: "{{ osd_pool_default_min_size }}"
  pg_autoscale_mode: False
openstack_cinder_pool:
  name: "volumes"
  pg_num: "{{ osd_pool_default_pg_num }}"
  pgp_num: "{{ osd_pool_default_pg_num }}"
  rule_name: "replicated_rule"
  type: 1
  erasure_profile: ""
  expected_num_objects: ""
  application: "rbd"
  size: 3
  min_size: "{{ osd_pool_default_min_size }}"
  pg_autoscale_mode: False
openstack_nova_pool:
  name: "vms"
  pg_num: "{{ osd_pool_default_pg_num }}"
  pgp_num: "{{ osd_pool_default_pg_num }}"
  rule_name: "replicated_rule"
  type: 1
  erasure_profile: ""
  expected_num_objects: ""
  application: "rbd"
  size: 3
  min_size: "{{ osd_pool_default_min_size }}"
  pg_autoscale_mode: False
openstack_cinder_backup_pool:
  name: "backups"
  pg_num: "{{ osd_pool_default_pg_num }}"
  pgp_num: "{{ osd_pool_default_pg_num }}"
  rule_name: "replicated_rule"
  type: 1
  erasure_profile: ""
  expected_num_objects: ""
  application: "rbd"
  size: 3
  min_size: "{{ osd_pool_default_min_size }}"
  pg_autoscale_mode: False
openstack_pools:
  - "{{ openstack_glance_pool }}"
  - "{{ openstack_cinder_pool }}"
  - "{{ openstack_nova_pool }}"
  - "{{ openstack_cinder_backup_pool }}"
openstack_keys:
  - { name: client.glance, caps: { mon: "profile rbd", osd: "profile rbd pool=volumes, profile rbd pool={{ openstack_glance_pool.name }}"}, mode: "0600" }
  - { name: client.cinder, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ openstack_nova_pool.name }}, profile rbd pool={{ openstack_glance_pool.name }}"}, mode: "0600" }
  - { name: client.cinder-backup, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_cinder_backup_pool.name }}"}, mode: "0600" }
  - { name: client.nova, caps: { mon: "profile rbd", osd: "profile rbd pool={{ openstack_glance_pool.name }}, profile rbd pool={{ openstack_nova_pool.name }}, profile rbd pool={{ openstack_cinder_pool.name }}, profile rbd pool={{ openstack_cinder_backup_pool.name }}"}, mode: "0600" }
dashboard_admin_user: admin
dashboard_admin_password: 123456
grafana_admin_user: admin
grafana_admin_password: 123456


`vim group_vars/osds.yml (cat osds.yml | grep -Ev '^$|#')`

devices:
  - /dev/sda5
  • Deploy
ansible-playbook -i ansible-hosts  site-container.yml (deploy)
docker exec ceph-mon-master ceph -s (verify)
http://10.201.7.127:8443/  (dashboard check)
mount -t ceph 10.10.3.10:6789:/ /root/ceph-fs  -o name=admin,secret=AQCEqx5fr9HHIhAAg+y9/irA9vJN0MOQEXXRUw== (CephFS check)
  • Teardown
ansible-playbook -i ansible-hosts infrastructure-playbooks/purge-container-cluster.yml (purge)
  • Client installation (master node1 node2) (a quick check follows the commands)
# on the glance-api node, install the Python bindings for librbd
sudo yum install python-ceph
sudo yum install python-rbd

# the nova-compute, cinder-backup, and cinder-volume nodes need Python and the Ceph CLI tools:
sudo yum install ceph-common
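Assuming the ceph-ansible clients role has already distributed /etc/ceph/ceph.conf and the admin keyring to these nodes, a quick client-side check:

ceph -s         # cluster status, now via the locally installed ceph-common
rbd ls images   # the glance pool should be listable (empty output is fine)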
  • Configure virtualization (optional) (master node1 node2)
# on the compute nodes, add the secret key to libvirt and remove the temporary copy of the key
# start the libvirt/KVM service and enable it at boot
yum install libvirt -y
systemctl start libvirtd
systemctl enable libvirtd
yum install -y virt-manager libvirt-client

# the same UUID is not strictly required on every compute node, but keeping it identical is better for platform consistency
uuidgen
492e6b76-d5ae-410d-b1e7-1b8b76230e37

cat > secret.xml <<EOF
<secret ephemeral='no' private='no'>
  <uuid>492e6b76-d5ae-410d-b1e7-1b8b76230e37</uuid>
  <usage type='ceph'>
    <name>client.cinder secret</name>
  </usage>
</secret>
EOF
sudo virsh secret-define --file secret.xml
Secret 492e6b76-d5ae-410d-b1e7-1b8b76230e37 created
# sudo virsh secret-set-value --secret 492e6b76-d5ae-410d-b1e7-1b8b76230e37 --base64 $(cat client.cinder.key) && rm client.cinder.key secret.xml
sudo virsh secret-set-value --secret 492e6b76-d5ae-410d-b1e7-1b8b76230e37 --base64 AQBVrB5fAAAAABAA4U/FaQyzgXqnDXTQbbVWEQ== && rm secret.xml

virsh secret-list
 UUID                                  Usage
--------------------------------------------------------------------------------
 492e6b76-d5ae-410d-b1e7-1b8b76230e37  ceph client.cinder secret

 # to remove: virsh secret-undefine 492e6b76-d5ae-410d-b1e7-1b8b76230e37

Bare-metal Ceph deployment (for reference only)

  • File preparation
cd ceph-ansible-stable-4.0
cp site.yml.sample site.yml
cp dummy-ansible-hosts ansible-hosts
cp group_vars/all.yml.sample  group_vars/all.yml
cp group_vars/osds.yml.sample  group_vars/osds.yml
  • Configuration changes
`vim ansible-hosts`
[mons]
master
node1 
node2 

[osds]
master 
node1 
node2 

[mgrs]
master 
node1 
node2 

[clients]
master
node1
node2

[rgws]
master 
node1 
node2 

[grafana-server]
master

[nfss]
master

`vim site.yml`
- hosts:
  - mons
  - osds
 # - mdss
  - rgws
  - nfss
 # - rbdmirrors
  - clients
  - mgrs
 # - iscsigws
 # - iscsi-gws # for backward compatibility only!
  - grafana-server
 # - rgwloadbalancers

`vim group_vars/all.yml (cat all.yml | grep -Ev '^$|#')`

ceph_origin: repository
ceph_repository: community
ceph_mirror: https://mirrors.tuna.tsinghua.edu.cn/ceph/
ceph_stable_key: https://mirrors.tuna.tsinghua.edu.cn/ceph/keys/release.asc
ceph_stable_release: nautilus
ceph_stable_repo: "{{ ceph_mirror }}/rpm-{{ ceph_stable_release }}"
ceph_stable_redhat_distro: el7
monitor_interface: em1
public_network: 10.201.7.0/24
cluster_network: 10.10.10.0/24
osd_objectstore: bluestore
radosgw_interface: "{{ monitor_interface }}"
dashboard_admin_user: admin
dashboard_admin_password: 123456
grafana_admin_user: admin
grafana_admin_password: 123456

`vim group_vars/osds.yml (cat osds.yml | grep -Ev '^$|#')`

devices:
  - /dev/sda5
  • Deploy
ansible-playbook -i ansible-hosts  site.yml (deploy)
ceph -s (verify)
http://10.201.7.127:8443/  (dashboard check)
mount -t ceph 10.10.3.10:6789:/ /root/ceph-fs  -o name=admin,secret=AQCEqx5fr9HHIhAAg+y9/irA9vJN0MOQEXXRUw== (CephFS check)
  • Teardown
ansible-playbook -i ansible-hosts infrastructure-playbooks/purge-cluster.yml (purge)

Install OpenStack with kolla-ansible (containerized)

Environment setup

  • Stop conflicting services
systemctl stop libvirtd.service
systemctl disable libvirtd.service
systemctl stop NetworkManager
systemctl disable NetworkManager
systemctl stop firewalld
systemctl disable firewalld
systemctl stop iptables.service
systemctl disable iptables.service

Install kolla-ansible

  • Download the Train release of kolla-ansible: https://codeload.github.com/openstack/kolla-ansible/zip/stable/train
virtualenv kolla-env
source kolla-env/bin/activate
pip install --upgrade pip

unzip kolla-ansible-stable-train.zip && cd kolla-ansible-stable-train
pip install -r requirements.txt -r test-requirements.txt
git init   # setup.py uses pbr, which needs a git tree to resolve the version
python setup.py install
pip install ansible
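A quick check that the CLI landed inside the virtualenv (pbr-based installs register the package with pip):

command -v kolla-ansible           # should resolve under kolla-env/bin
pip show kolla-ansible | head -n2  # package name and version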

OpenStack deployment

  • File preparation
# copy the OpenStack config files
sudo mkdir -p /etc/kolla
sudo cp  etc/kolla/* /etc/kolla/
sudo cp ansible/inventory/* /etc/kolla/
kolla-genpwd # generate passwords

# add the extra config files that wire OpenStack to Ceph
mkdir -p /etc/kolla/config/{glance,nova}
mkdir -p /etc/kolla/config/cinder/{cinder-volume,cinder-backup}
cp  /etc/ceph/ceph.client.glance.keyring /etc/kolla/config/glance/
cp  /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/cinder/cinder-volume/
cp  /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/cinder/cinder-backup/
cp  /etc/ceph/ceph.client.cinder-backup.keyring /etc/kolla/config/cinder/cinder-backup/
cp  /etc/ceph/ceph.client.cinder.keyring /etc/kolla/config/nova/
cp  /etc/ceph/ceph.client.nova.keyring /etc/kolla/config/nova/

cp  /etc/ceph/ceph.conf /etc/kolla/config/glance/
cp	/etc/ceph/ceph.conf /etc/kolla/config/cinder/
cp	/etc/ceph/ceph.conf /etc/kolla/config/nova/

cat > /etc/kolla/config/glance/glance-api.conf <<EOF
[glance_store]
stores = rbd
default_store = rbd
rbd_store_pool = images
rbd_store_user = glance
rbd_store_ceph_conf = /etc/ceph/ceph.conf
rbd_store_chunk_size = 8
EOF

cat > /etc/kolla/config/cinder/cinder-volume.conf <<EOF
[DEFAULT]
enabled_backends=ceph
[ceph]
rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=cinder
backend_host=rbd:volumes
rbd_pool=volumes
volume_backend_name=ceph
volume_driver=cinder.volume.drivers.rbd.RBDDriver
rbd_secret_uuid = 567a4c19-188d-494d-ac0e-7717205514b7  # cinder_rbd_secret_uuid from /etc/kolla/passwords.yml
rbd_default_features = 1
EOF

cat > /etc/kolla/config/cinder/cinder-backup.conf <<EOF
[DEFAULT]
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size = 134217728
backup_ceph_pool=backups
backup_driver = cinder.backup.drivers.ceph.CephBackupDriver
backup_ceph_stripe_unit = 0
backup_ceph_stripe_count = 0
restore_discard_excess_bytes = true
EOF

cat > /etc/kolla/config/nova/nova-compute.conf <<EOF
[libvirt]
images_rbd_pool=vms
images_type=rbd
images_rbd_ceph_conf=/etc/ceph/ceph.conf
rbd_user=nova
rbd_secret_uuid=ec6a35aa-8c8f-4169-a81a-a03892f0aa03  # rbd_secret_uuid from /etc/kolla/passwords.yml
virt_type=qemu
EOF

# the full extra-config tree looks like this:
.
├── cinder
│   ├── ceph.conf
│   ├── cinder-backup
│   │   ├── ceph.client.cinder-backup.keyring
│   │   └── ceph.client.cinder.keyring
│   ├── cinder-backup.conf
│   ├── cinder-volume
│   │   └── ceph.client.cinder.keyring
│   └── cinder-volume.conf
├── glance
│   ├── ceph.client.glance.keyring
│   ├── ceph.conf
│   └── glance-api.conf
└── nova
    ├── ceph.client.cinder.keyring
    ├── ceph.client.nova.keyring
    ├── ceph.conf
    └── nova-compute.conf
  • Configuration changes
# change the admin password
`vim /etc/kolla/passwords.yml`
keystone_admin_password: 123456

`vim /etc/kolla/globals.yml ( cat /etc/kolla/globals.yml | grep -Ev '^$|#')`
---
kolla_base_distro: "centos"
kolla_install_type: "source"
openstack_release: "train"
node_custom_config: "/etc/kolla/config"
kolla_internal_vip_address: "10.201.7.200"
network_interface: "em1"
neutron_external_interface: "em3"
enable_haproxy: "yes"
enable_keepalived: "{{ enable_haproxy | bool }}"
enable_ceph: "no"
enable_cinder: "yes"
enable_cinder_backup: "yes"
glance_backend_ceph: "yes"
cinder_backend_ceph: "yes"
nova_backend_ceph: "yes"
nova_compute_virt_type: "qemu"

`vim /etc/kolla/multinode`
[control]
master
node1
node2

[network]
master

[compute]
master
node1
node2

[monitoring]
master
node1
node2

[storage]
master
node1
node2

[deployment]
localhost       ansible_connection=local
  • Deploy (a quick smoke test follows these commands)
kolla-ansible -i /etc/kolla/multinode bootstrap-servers
kolla-ansible -i /etc/kolla/multinode prechecks
kolla-ansible -i /etc/kolla/multinode deploy
kolla-ansible post-deploy
pip install python-openstackclient python-glanceclient python-neutronclient
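post-deploy writes an admin credentials file under /etc/kolla (admin-openrc.sh by default); with the clients installed, a minimal smoke test:

source /etc/kolla/admin-openrc.sh
openstack service list           # all enabled services should be registered
openstack compute service list   # nova-compute should be up on master, node1, node2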
  • Teardown
cd kolla-ansible-stable-train/tools
./cleanup-containers
./cleanup-host

Notes

  • If the compute node is a physical machine, or a VM with nested virtualization (CPU hardware acceleration) enabled: virt_type=kvm.
    If the compute node is a VM without nested virtualization: virt_type=qemu.
  • "Host compute is not mapped to any cell"
    Compute-node log: Instance xxx has allocations against this compute host but is not found in the database.
    Fix: register the compute node in the cell database: su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
    If that fails: check that the nova user's caps and key in Ceph are all correct; if the nova key from the extra config fails validation, the compute node cannot be registered in the database.