23. Installing the Block Storage service (cinder)
The OpenStack Block Storage service (cinder) adds persistent storage to virtual machines. Block Storage provides the infrastructure for managing volumes and interacts with OpenStack Compute to supply volumes to instances. The service also manages volume snapshots and volume types.


cinder-api : accepts API requests and routes them to cinder-volume for action.


cinder-volume : interacts directly with the Block Storage service. The cinder-volume service maintains state by responding to the read and write requests sent to the Block Storage service. It can interact with a variety of storage providers through a driver architecture.


cinder-scheduler daemon : selects the optimal storage provider node on which to create a volume.


cinder-backup daemon : provides backups of volumes of any type to a backup storage provider.


Messaging queue : routes information between the Block Storage processes.




Create the service's database and a database account for it:
controller#
mysql -u root -p123


CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' \
IDENTIFIED BY 'cinder';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' \
IDENTIFIED BY 'cinder';


Create the cinder user, service, and API endpoints:
controller#
openstack user create --domain default --password-prompt cinder
# enter the password (cinder) when prompted


openstack role add --project service --user cinder admin


openstack service create --name cinder --description "OpenStack Block Storage" volume


openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2

The following commands may appear garbled when copied; the tail of each one is: http://controller:8776/v1/%\(tenant_id\)s (the backslashes keep the shell from interpreting the parentheses).
openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(tenant_id\)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(tenant_id\)s
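The escaped parentheses are easy to verify: the shell strips the backslashes, so what the openstack client actually receives is the plain `%(tenant_id)s` substitution template. A quick check:

```shell
# The backslashes only protect the parentheses from the shell;
# the argument the command receives is the plain template string.
echo http://controller:8776/v1/%\(tenant_id\)s
# prints: http://controller:8776/v1/%(tenant_id)s
```

Without the backslashes (or quotes around the URL), the shell would reject the bare parentheses as a syntax error.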






If openstack-cinder was not installed during the earlier yum step, install it now:

controller#

yum install openstack-cinder -y





Configure the cinder components (back up the original configuration file, clear its contents, and use the configuration below):

controller#

vi /etc/cinder/cinder.conf



[database]

connection = mysql+pymysql://cinder:cinder@controller/cinder



[DEFAULT]

transport_url = rabbit://openstack:openstack@controller

auth_strategy = keystone

my_ip = 192.168.215.100



[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = cinder

password = cinder



[oslo_concurrency]

lock_path = /var/lib/cinder/tmp



[cinder]

os_region_name = RegionOne
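The `connection` and `transport_url` settings above both follow the same `user:password@host` pattern. As an illustration (using the account names created earlier in this guide), the pieces compose like this:

```shell
# Compose the SQLAlchemy connection URL from its parts.
# These values match the cinder database account created above.
DB_USER=cinder
DB_PASS=cinder
DB_HOST=controller
DB_NAME=cinder
echo "mysql+pymysql://${DB_USER}:${DB_PASS}@${DB_HOST}/${DB_NAME}"
# prints: mysql+pymysql://cinder:cinder@controller/cinder
```

If you chose a different password when creating the database account, substitute it here.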





Populate (sync) the database:

controller#

su -s /bin/sh -c "cinder-manage db sync" cinder



Restart the Compute API service, then enable the Block Storage services at boot and restart them:

controller#

systemctl restart openstack-nova-api.service



systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service



systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service





24. Installing the storage (cinder) node (my physical machine cannot run three nodes, so the cinder node is installed on the compute node)



Install LVM as the backend storage:

compute#

yum install lvm2 -y



Configure LVM (this filter must be configured on every node that uses LVM, so that the right devices are visible and can be scanned).

(My compute node's operating system already lives on LVM, so the existing LVM device sda must be kept in the filter.)

compute#

vi /etc/lvm/lvm.conf

devices {

...

filter = [ "a/sda/", "a/sdb/", "r/.*/" ]

}





If the storage node itself already uses LVM (the node's operating system sits on an LVM volume on the sda disk), the sda entry must be added as well:

cinder#

filter = [ "a/sda/", "a/sdb/", "r/.*/"]



If the compute node itself already uses LVM (the node's operating system sits on an LVM volume on the sda disk), configure the sda entry:

compute#

filter = [ "a/sda/", "r/.*/"]
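LVM evaluates the filter rules from left to right, and the first regex that matches a device name decides: "a" accepts, "r" rejects; a device that matches no rule is accepted by default. The toy bash function below (my own sketch, not actual LVM code) mimics that first-match-wins logic for the filters above:

```shell
#!/bin/bash
# Toy re-implementation of LVM's first-match-wins filter logic
# (illustration only -- not LVM's real code).
match_filter() {
    local dev=$1; shift
    local rule action pattern
    for rule in "$@"; do
        action=${rule%%/*}       # "a" (accept) or "r" (reject)
        pattern=${rule#*/}       # strip the action and leading slash
        pattern=${pattern%/}     # strip the trailing slash
        if [[ $dev =~ $pattern ]]; then
            if [ "$action" = a ]; then echo accept; else echo reject; fi
            return
        fi
    done
    echo accept                  # no rule matched: accepted by default
}

match_filter /dev/sdb "a/sda/" "a/sdb/" "r/.*/"   # prints: accept
match_filter /dev/sdc "a/sda/" "a/sdb/" "r/.*/"   # prints: reject
```

This is why the `r/.*/` rule must come last: placed first, it would reject every device before the accept rules were consulted.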





Enable the services at boot and restart them:

compute#

systemctl enable lvm2-lvmetad.service

systemctl restart lvm2-lvmetad.service





Add a new disk (I added a second disk, so it shows up as sdb).



Create the physical volume and the volume group:

compute#

pvcreate /dev/sdb

vgcreate cinder-volumes /dev/sdb





If these packages were not installed during the earlier yum step, install them now:

compute#

yum install openstack-cinder targetcli python-keystone -y





Configure the cinder components (back up the original configuration file, clear its contents, and use the configuration below):

compute#



vi /etc/cinder/cinder.conf



[database]

connection = mysql+pymysql://cinder:cinder@controller/cinder



[DEFAULT]

transport_url = rabbit://openstack:openstack@controller

auth_strategy = keystone

my_ip = 192.168.215.101

enabled_backends = lvm

glance_api_servers = http://controller:9292



[keystone_authtoken]

auth_uri = http://controller:5000

auth_url = http://controller:35357

memcached_servers = controller:11211

auth_type = password

project_domain_name = default

user_domain_name = default

project_name = service

username = cinder

password = cinder







[lvm]

volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver

volume_group = cinder-volumes

iscsi_protocol = iscsi

iscsi_helper = lioadm



[oslo_concurrency]

lock_path = /var/lib/cinder/tmp





Enable the services at boot and restart them:

compute#

systemctl enable openstack-cinder-volume.service target.service

systemctl restart openstack-cinder-volume.service target.service



Check the service status from the controller node:

controller#

openstack volume service list

+------------------+-------------+------+---------+-------+----------------------------+
| Binary           | Host        | Zone | Status  | State | Updated At                 |
+------------------+-------------+------+---------+-------+----------------------------+
| cinder-volume    | compute@lvm | nova | enabled | up    | 2017-05-05T01:14:55.000000 |
| cinder-scheduler | controller  | nova | enabled | up    | 2017-05-05T01:58:39.000000 |
+------------------+-------------+------+---------+-------+----------------------------+



Create a volume

controller#

Syntax: openstack volume create --size [size in GB] [volume name]

Example: openstack volume create --size 1 volume1



List volumes

controller#

openstack volume list

+--------------------------------------+--------------+-----------+------+-------------+
| ID                                   | Display Name | Status    | Size | Attached to |
+--------------------------------------+--------------+-----------+------+-------------+
| a1e8be72-a395-4a6f-8e07-856a57c39524 | volume1      | available |    1 |             |
+--------------------------------------+--------------+-----------+------+-------------+









Attach the volume to an instance

controller#

Syntax: openstack server add volume [instance] [volume name]

Example: openstack server add volume test volume1



List volumes again

controller#

openstack volume list

+--------------------------------------+--------------+--------+------+------------------------------+
| ID                                   | Display Name | Status | Size | Attached to                  |
+--------------------------------------+--------------+--------+------+------------------------------+
| a1e8be72-a395-4a6f-8e07-856a57c39524 | volume1      | in-use |    1 | Attached to test on /dev/vdb |
+--------------------------------------+--------------+--------+------+------------------------------+



The Block Storage service is now working.