1. Requirements

   Glance is the image service in OpenStack. It supports multiple storage backends: images can be stored on the local filesystem, on an HTTP server, on the Ceph distributed storage system, or on open-source distributed filesystems such as GlusterFS and Sheepdog. This article describes how to integrate Glance with Ceph.

   Glance currently uses local filesystem storage, with images kept under the default path /var/lib/glance/images. Once the backend is switched from the local filesystem to the distributed Ceph storage, the images already on the local disk can no longer be used, so it is recommended to delete the current images first and, after Ceph is deployed, re-upload them all to Ceph, as sketched below.
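
A minimal cleanup sketch using the glance v1 CLI already used in this article (the image id is a placeholder; list the real ids first):

[root@controller_10_1_2_230 ~]# glance image-list
[root@controller_10_1_2_230 ~]# glance image-delete <image-id>        #repeat for each locally stored image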

2. How It Works

   Using Ceph's RBD interface goes through libvirt, so libvirt and qemu must be installed on the client machines. The architecture of the Ceph and OpenStack integration is shown below. Within OpenStack, storage is needed in three places:
1. Glance images: local storage by default, under /var/lib/glance/images;
2. Nova instance disks: local by default, under /var/lib/nova/instances;
3. Cinder volumes: LVM-based storage by default.

[Figure: architecture of the Glance and Ceph integration in OpenStack]
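
On the client/compute nodes, the required packages can be installed roughly as follows (a sketch; the node name is illustrative and exact package names vary by distribution):

[root@compute_10_1_2_231 ~]# yum install -y libvirt qemu-kvm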

3. Integrating Glance with Ceph

1. Create a storage pool

1. Ceph creates one pool by default: rbd
[root@controller_10_1_2_230 ~]# ceph osd lspools
0 rbd,

[root@controller_10_1_2_230 ~]# ceph osd pool stats
pool rbd id 0
  nothing is going on

2. Create a pool, with pg_num set to 128
[root@controller_10_1_2_230 ~]# ceph osd pool create images 128
pool 'images' created

3. Check the pool's pg_num and pgp_num
[root@controller_10_1_2_230 ~]# ceph osd pool get images pg_num
pg_num: 128
[root@controller_10_1_2_230 ~]# ceph osd pool get images pgp_num
pgp_num: 128
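
The value 128 follows the common rule of thumb of roughly (OSD count x 100) / replica count, rounded to a power of two. If the cluster grows, both values can be raised later; a sketch (pg_num can only ever be increased, never decreased):

[root@controller_10_1_2_230 ~]# ceph osd pool set images pg_num 256
[root@controller_10_1_2_230 ~]# ceph osd pool set images pgp_num 256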

4. List the pools in Ceph
[root@controller_10_1_2_230 ~]# ceph osd lspools
0 rbd,1 images,
[root@controller_10_1_2_230 ~]# ceph osd pool stats
pool rbd id 0
  nothing is going on

pool images id 1                #a new pool was added, with id 1
  nothing is going on

2. Configure the Ceph client

1. Glance acts as a Ceph client (via glance-api) and therefore needs the Ceph configuration file; simply copy one over from a Ceph monitor node. In my environment the controller node and the Ceph monitor are the same machine, so nothing needs to be done.

#if the controller node and the ceph monitor node are separate, the file must be copied over
[root@controller_10_1_2_230 ~]# scp /etc/ceph/ceph.conf root@controller_10_1_2_230:/etc/ceph/
ceph.conf  

2. Install the client rpm package

[root@controller_10_1_2_230 ~]# yum install python-rbd -y
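
A quick sketch to verify that the Python bindings are importable:

[root@controller_10_1_2_230 ~]# python -c "import rbd; print 'rbd bindings OK'"
rbd bindings OK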

3. Configure Ceph authentication

1. Add an authentication key
[root@controller_10_1_2_230 ~]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[client.glance]
        key = AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==

2. List the auth entries
[root@controller_10_1_2_230 ~]# ceph auth list
installed auth entries:

osd.0
        key: AQDsx6lWYGehDxAAGwcYP9jDvH2Zaa8JlGwj1Q==
        caps: [mon] allow profile osd
        caps: [osd] allow *
osd.1
        key: AQD1x6lWQCYBERAAjIKO1LVpj8FvVefDvNQZSA==
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQCexqlWQL6OGBAA2v5LsYEB5VgLyq/K2huY3A==
        caps: [mds] allow
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQCexqlWUMNRMRAAZEp/UlhQuaixMcNy5d5pPw==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
        key: AQCexqlWQFfpJBAAfPCx4sTLNztBESyFKys9LQ==
        caps: [mon] allow profile bootstrap-osd
client.glance                                             #credentials glance uses to connect to ceph
        key: AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==
        caps: [mon] allow r
        caps: [osd] allow class-read object_prefix rbd_children, allow rwx pool=images
 
3. Copy the key generated for client.glance to the client's /etc/ceph directory
[root@controller_10_1_2_230 ~]# ceph auth get-or-create client.glance
[client.glance]
        key = AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==

#export the key to the client as a keyring file
[root@controller_10_1_2_230 ~]# ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring
[client.glance]
        key = AQB526lWOOIQBxAA0ZSk30DBShtti3fKkm4aeA==
[root@controller_10_1_2_230 ~]# chown glance:glance /etc/ceph/ceph.client.glance.keyring 
[root@controller_10_1_2_230 ~]# ll /etc/ceph/ceph.client.glance.keyring 
-rw-r--r-- 1 glance glance 64 Jan 28 17:17 /etc/ceph/ceph.client.glance.keyring
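
At this point the new credentials can be smoke-tested from the client side; a sketch (the pool is still empty, so the listing returns nothing):

[root@controller_10_1_2_230 ~]# rados --id glance -p images ls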

4. Configure Glance to use Ceph as the backend store

1. Back up the glance-api configuration file so it can be restored later
[root@controller_10_1_2_230 ~]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.orig

2. Edit the Glance configuration file to connect to Ceph
[root@controller_10_1_2_230 ~]# vim /etc/glance/glance-api.conf
[DEFAULT]
notification_driver = messaging
rabbit_hosts = 10.1.2.230:5672
rabbit_retry_interval = 1
rabbit_retry_backoff = 2
rabbit_max_retries = 0
rabbit_ha_queues = True
rabbit_durable_queues = False
rabbit_userid = glance
rabbit_password = GLANCE_MQPASS
rabbit_virtual_host = /glance

default_store=rbd             #backend store used by glance
known_stores=glance.store.rbd.Store      #enable the rbd store driver

rbd_store_ceph_conf=/etc/ceph/ceph.conf    #ceph config file; it lists the monitor addresses, through which the auth info is obtained
rbd_store_user=glance                      #auth user, the one created above
rbd_store_pool=images                      #storage pool to connect to
rbd_store_chunk_size=8                     #chunk size (in MB) that images are striped into

3. Restart the Glance services
[root@controller_10_1_2_230 ~]# /etc/init.d/openstack-glance-api restart                 
Stopping openstack-glance-api:                             [  OK  ]
Starting openstack-glance-api:                             [  OK  ]
[root@controller_10_1_2_230 ~]# /etc/init.d/openstack-glance-registry restart
Stopping openstack-glance-registry:                        [  OK  ]
Starting openstack-glance-registry:                        [  OK  ]
[root@controller_10_1_2_230 ~]# tail -2 /etc/glance/glance-api.conf
# location strategy defined by the 'location_strategy' config option.
#store_type_preference =
[root@controller_10_1_2_230 ~]# tail -2 /var/log/glance/registry.log
2016-01-28 18:40:25.231 21890 INFO glance.wsgi.server [-] Started child 21896
2016-01-28 18:40:25.232 21896 INFO glance.wsgi.server [-] (21896) wsgi starting up on http://0.0.0.0:9191/
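
It is also worth tailing the api log to confirm glance-api came up cleanly with the rbd store; a sketch (exact messages vary by release):

[root@controller_10_1_2_230 ~]# tail -20 /var/log/glance/api.log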

5. Test the Glance and Ceph integration

[root@controller_10_1_2_230 ~]# glance --debug image-create --name glance_ceph_test --disk-format qcow2  --container-format bare  --file  cirros-0.3.3-x86_64-disk.img
curl -i -X POST -H 'x-image-meta-container_format: bare' -H 'Transfer-Encoding: chunked' -H 'User-Agent: python-glanceclient' -H 'x-image-meta-size: 13200896' -H 'x-image-meta-is_public: False' -H 'X-Auth-Token: 062af9027a85487997d176c9f1e963f2' -H 'Content-Type: application/octet-stream' -H 'x-image-meta-disk_format: qcow2' -H 'x-image-meta-name: glance_ceph_test' -d '<open file u'cirros-0.3.3-x86_64-disk.img', mode 'rb' at 0x1ba24b0>' http://controller:9292/v1/images

HTTP/1.1 201 Created
content-length: 489
etag: 133eae9fb1c98f45894a4e60d8736619
location: http://controller:9292/v1/p_w_picpaths/348a90e8-3631-4a66-a45d-590ec6413e7d
date: Thu, 28 Jan 2016 10:42:06 GMT
content-type: application/json
x-openstack-request-id: req-b993bc0b-447e-49b4-a8ce-bd7765199d5a

{"p_w_picpath": {"status": "active", "deleted": false, "container_format": "bare", "min_ram": 0, "updated_at": "2016-01-28T10:42:06", "owner": "ef4b83a909dc4689b663ff2c70022478", "min_disk": 0, "is_public": false, "deleted_at": null, "id": "348a90e8-3631-4a66-a45d-590ec6413e7d", "size": 13200896, "virtual_size": null, "name": "glance_ceph_test", "checksum": "133eae9fb1c98f45894a4e60d8736619", "created_at": "2016-01-28T10:42:04", "disk_format": "qcow2", "properties": {}, "protected": false}}

+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 133eae9fb1c98f45894a4e60d8736619     |
| container_format | bare                                 |
| created_at       | 2016-01-28T10:42:04                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | 348a90e8-3631-4a66-a45d-590ec6413e7d |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | glance_ceph_test                     |
| owner            | ef4b83a909dc4689b663ff2c70022478     |
| protected        | False                                |
| size             | 13200896                             |
| status           | active                               |
| updated_at       | 2016-01-28T10:42:06                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

[root@controller_10_1_2_230 ~]# glance image-list
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| ID                                   | Name                | Disk Format | Container Format | Size     | Status |
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
| 56e96957-1308-45c7-9c66-1afff680b217 | cirros-0.3.3-x86_64 | qcow2       | bare             | 13200896 | active |
| 348a90e8-3631-4a66-a45d-590ec6413e7d | glance_ceph_test    | qcow2       | bare             | 13200896 | active |    #upload succeeded
+--------------------------------------+---------------------+-------------+------------------+----------+--------+
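
The uploaded image should also now exist as an RBD image named after its UUID; a quick sketch of checking this directly, using the id from the output above:

[root@controller_10_1_2_230 ~]# rbd -p images ls
[root@controller_10_1_2_230 ~]# rbd -p images info 348a90e8-3631-4a66-a45d-590ec6413e7d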

6. Inspect the data in the Ceph pool

[root@controller_10_1_2_230 ~]# rados -p images ls
rbd_directory
rbd_header.10d7caaf292
rbd_data.10dd1fd73446.0000000000000001
rbd_id.348a90e8-3631-4a66-a45d-590ec6413e7d
rbd_header.10dd1fd73446
rbd_data.10d7caaf292.0000000000000000
rbd_data.10dd1fd73446.0000000000000000
rbd_id.8a09b280-5916-44c6-9ce8-33bb57a09dad    #the image data from glance is now stored in ceph

4. Summary

   Storing Glance image data in Ceph is an excellent solution: it safeguards the image data, and when Glance and Nova share the same storage pool, new virtual machines can be created via copy-on-write cloning, bringing VM creation down to seconds. A sketch of that flow follows.
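
For reference, the copy-on-write flow boils down to RBD snapshot cloning; a minimal illustrative sketch (the snapshot and target names are hypothetical, and in practice Nova/Cinder drive these steps, with Glance serving raw images):

[root@controller_10_1_2_230 ~]# rbd -p images snap create <image-id>@snap          #snapshot the base image
[root@controller_10_1_2_230 ~]# rbd -p images snap protect <image-id>@snap         #protect the snapshot from deletion
[root@controller_10_1_2_230 ~]# rbd clone images/<image-id>@snap images/<vm-disk>  #instant COW clone for a new vm disk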