I. Checking CEPH and Cluster Parameters

1. Check the ceph installation status:

Command: ceph -s or ceph status (both produce the same output)

Example:

root@node1:/home/ceph/ceph-cluster# ceph -s

       cluster 2f54214e-b6aa-44a4-910c-52442795b037

       health HEALTH_OK

       monmap e1: 1 mons at {node1=192.168.2.13:6789/0}, election epoch 1, quorum 0 node1

       osdmap e56: 5 osds: 5 up, 5 in

       pgmap v289: 192 pgs, 3 pools, 70988 kB data, 26 objects

           376 MB used, 134 GB / 134 GB avail

                 192 active+clean

2. Watch the cluster health status:

Command: ceph -w

Example:

root@node1:/home/ceph/ceph-cluster# ceph -w

       cluster 2f54214e-b6aa-44a4-910c-52442795b037

       health HEALTH_OK

       monmap e1: 1 mons at {node1=192.168.2.13:6789/0}, election epoch 1, quorum 0 node1

       osdmap e56: 5 osds: 5 up, 5 in

       pgmap v289: 192 pgs, 3 pools, 70988 kB data, 26 objects

           376 MB used, 134 GB / 134 GB avail

                 192 active+clean

2016-09-08 09:40:18.084097 mon.0 [INF] pgmap v323: 192 pgs: 192 active+clean; 8 bytes data, 182 MB used, 134 GB / 134 GB avail

3. Check the ceph monitor quorum status

Command: ceph quorum_status --format json-pretty

Example:

root@node1:/home/ceph/ceph-cluster# ceph quorum_status --format json-pretty

{ "election_epoch": 1,

       "quorum": [

         0],

       "quorum_names": [

       "node1"],

      "quorum_leader_name": "node1",

      "monmap": { "epoch": 1,

      "fsid": "2f54214e-b6aa-44a4-910c-52442795b037",

      "modified": "0.000000",

      "created": "0.000000",

      "mons": [

           { "rank": 0,

              "name":"node1",

              "addr":"192.168.2.13:6789\/0"}]}}

4. Dump the ceph monitor map

Command: ceph mon dump

Example:

root@node1:/home/ceph/ceph-cluster# ceph mon dump

dumped monmap epoch 1

epoch 1

fsid 2f54214e-b6aa-44a4-910c-52442795b037

last_changed 0.000000

created 0.000000

0: 192.168.2.13:6789/0 mon.node1

5. Check cluster usage

Command: ceph df

Example:

root@node1:/home/ceph/ceph-cluster# ceph df

GLOBAL:

       SIZE     AVAIL     RAW USED     %RAW USED

       134G      134G         376M          0.27

POOLS:

       NAME         ID     USED      %USED     MAX AVAIL     OBJECTS

       data         0           0         0        45882M           0

       metadata     1           0         0        45882M           0

       rbd          2      70988k     0.05        45882M          26

6. Check the ceph monitor, OSD, and PG (placement group) status

Command: ceph mon stat, ceph osd stat, ceph pg stat

Example:

root@node1:/home/ceph/ceph-cluster# ceph mon stat

e1: 1 mons at {node1=192.168.2.13:6789/0}, election epoch 1, quorum 0 node1

root@node1:/home/ceph/ceph-cluster# ceph osd stat

       osdmap e56: 5 osds: 5 up, 5 in

root@node1:/home/ceph/ceph-cluster# ceph pg stat

v289: 192 pgs: 192 active+clean; 70988 kB data, 376 MB used, 134 GB / 134 GB avail

7. List PGs

Command: ceph pg dump

Example:

root@node1:/home/ceph/ceph-cluster# ceph pg dump

dumped all in format plain

version 289

stamp 2016-09-08 08:44:35.249418

last_osdmap_epoch 56

last_pg_scan 1

full_ratio 0.95

nearfull_ratio 0.85

                             ……………

8. List ceph storage pools

Command: ceph osd lspools

Example:

root@node1:/home/ceph/ceph-cluster# ceph osd lspools

0 data,1 metadata,2 rbd,

9. Check the OSD CRUSH map

Command: ceph osd tree

Example:

root@node1:/home/ceph/ceph-cluster# ceph osd tree

# id      weight   type name     up/down       reweight

-1  0.15       root default

-2  0.06              host node2

0   0.03                     osd.0      up   1    

3   0.03                     osd.3      up   1    

-3  0.06              host node3

1   0.03                     osd.1      up   1    

4   0.03                     osd.4      up   1    

-4  0.03              host node1

2   0.03                     osd.2      up   1    

10. List the cluster's authentication keys:

Command: ceph auth list

Example:

root@node1:/home/ceph/ceph-cluster# ceph auth list

installed auth entries:

osd.0

              key: AQCM089X8OHnIhAAnOnRZMuyHVcXa6cnbU2kCw==

              caps: [mon] allow profile osd

              caps: [osd] allow *

osd.1

              key: AQCU089X0KSCIRAAZ3sAKh+Fb1EYV/ROkBd5mA==

              caps: [mon] allow profile osd

              caps: [osd] allow *

osd.2

              key: AQAb1c9XWIuxEBAA3PredgloaENDaCIppxYTbw==

              caps: [mon] allow profile osd

              caps: [osd] allow *

osd.3

              key: AQBF1c9XuBOpMBAAx8ELjaH0b1qwqKNwM17flA==

              caps: [mon] allow profile osd

              caps: [osd] allow *

osd.4

              key: AQBc1c9X4LXCEBAAcq7UVTayMo/e5LBykmZZKg==

              caps: [mon] allow profile osd

              caps: [osd] allow *

client.admin

              key: AQAd089XMI14FRAAdcm/woybc8fEA6dH38AS6g==

              caps: [mds] allow

              caps: [mon] allow *

              caps: [osd] allow *

client.bootstrap-mds

              key: AQAd089X+GahIhAAgC+1MH1v0enAGzKZKUfblg==

              caps: [mon] allow profile bootstrap-mds

client.bootstrap-osd

              key: AQAd089X8B5wHBAAnrM0MQK3to1iBitDzk+LYA==

              caps: [mon] allow profile bootstrap-osd

II. Advanced Block Storage Management

1. Create a block device

Command: rbd create {image-name} --size {megabytes} --pool {pool-name} --image-format 2

Note: --image-format 2 specifies image format 2; if it is omitted, the default is format 1. Snapshot protection only works with format 2. Format 1 is deprecated and format 2 is normally used; format 1 is used here only for demonstration (a format-2 variant is shown right after the example below).

Example:

root@node1:/home/ceph/ceph-cluster# rbd create zhangbo --size 2048 --pool rbd
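For reference, a minimal sketch of the format-2 variant mentioned in the note above (the image name zhangbo2 is hypothetical):

root@node1:/home/ceph/ceph-cluster# rbd create zhangbo2 --size 2048 --pool rbd --image-format 2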

2. List block devices

Command: rbd ls {pool-name}

Example:

root@node1:/home/ceph/ceph-cluster# rbd ls rbd

zhangbo

3. Retrieve block device information

Command: rbd --image {image-name} info

rbd info {image-name}

Example:

root@node1:/home/ceph/ceph-cluster# rbd --image zhangbo info

rbd image 'zhangbo':

               size 2048 MB in 512 objects

               order 22 (4096 kB objects)

               block_name_prefix: rb.0.5e56.2ae8944a

               format: 1

root@node1:/home/ceph/ceph-cluster# rbd info zhangbo

rbd image 'zhangbo':

               size 2048 MB in 512 objects

               order 22 (4096 kB objects)

               block_name_prefix: rb.0.5e56.2ae8944a

               format: 1

4. Resize a block device

Command: rbd resize --image {image-name} --size {megabytes}

Example:

root@node1:/home/ceph/ceph-cluster# rbd resize --image zhangbo --size 4096

Resizing image: 100% complete...done.

root@node1:/home/ceph/ceph-cluster# rbd info zhangbo

rbd image 'zhangbo':

               size 4096 MB in 1024 objects

               order 22 (4096 kB objects)

               block_name_prefix: rb.0.5e56.2ae8944a

               format: 1

5. Delete a block device

Command: rbd rm {image-name}

Example:

root@node1:/home/ceph/ceph-cluster# rbd rm zhangbo

Removing image: 100% complete...done.

root@node1:/home/ceph/ceph-cluster# rbd ls

6. Map a block device:

Command: rbd map {image-name} --pool {pool-name} --id {user-name}

Example:

root@node1:/home/ceph/ceph-cluster# rbd map zhangbo --pool rbd --id admin
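If the mapping fails because the kernel RBD driver is not loaded, loading it first usually helps (a hedged troubleshooting step, not shown in the original run):

root@node1:/home/ceph/ceph-cluster# modprobe rbd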

7. Show mapped block devices

Command: rbd showmapped

Example:

root@node1:/home/ceph/ceph-cluster# rbd showmapped

id pool image   snap device

0  rbd  zhangbo -    /dev/rbd0

8. Unmap a block device:

Command: rbd unmap /dev/rbd/{pool-name}/{image-name}

Example:

root@node1:/home/ceph/ceph-cluster# rbd unmap /dev/rbd/rbd/zhangbo

root@node1:/home/ceph/ceph-cluster# rbd showmapped

9. Format the device:

Command: mkfs.ext4 /dev/rbd0

Example:

root@node1:/home/ceph/ceph-cluster# mkfs.ext4 /dev/rbd0

mke2fs 1.42.9 (4-Feb-2014)

Discarding device blocks: done

Filesystem label=

OS type: Linux

Block size=4096 (log=2)

Fragment size=4096 (log=2)

Stride=1024 blocks, Stripe width=1024 blocks

262144 inodes, 1048576 blocks

52428 blocks (5.00%) reserved for the super user

First data block=0

Maximum filesystem blocks=1073741824

32 block groups

32768 blocks per group, 32768 fragments per group

8192 inodes per group

Superblock backups stored on blocks:

            32768, 98304, 163840, 229376, 294912, 819200, 884736

Allocating group tables: done

Writing inode tables: done

Creating journal (32768 blocks): done

Writing superblocks and filesystem accounting information: done

10. Mount the device

Command: mount /dev/rbd0 /mnt/{directory-name}
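The mount point must exist before mounting; a minimal sketch, assuming the directory used in the example below:

root@node1:/home/ceph/ceph-cluster# mkdir -p /mnt/ceph-zhangbo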

Example:

root@node1:/home/ceph/ceph-cluster# mount /dev/rbd0 /mnt/ceph-zhangbo/

root@node1:/home/ceph/ceph-cluster# df -h

Filesystem      Size  Used Avail Use% Mounted on

udev            989M  4.0K 989M    1% /dev

tmpfs           201M  1.1M 200M    1% /run

/dev/sda5        19G  4.0G  14G   23% /

none            4.0K     0 4.0K    0% /sys/fs/cgroup

none            5.0M     0 5.0M    0% /run/lock

none           1001M   76K 1001M   1% /run/shm

none            100M   32K 100M    1% /run/user

/dev/sda1       9.3G   60M 8.8G    1% /boot

/dev/sda6        19G  67M   18G    1% /home

/dev/sdc1        27G  169M  27G    1% /var/lib/ceph/osd/ceph-2

/dev/rbd0       3.9G  8.0M 3.6G    1% /mnt/ceph-zhangbo

11. Configure automatic mounting at boot (CEPH maps and mounts the rbd block device automatically at startup)

vim /etc/ceph/rbdmap

{poolname}/{imagename} id=client,keyring=/etc/ceph/ceph.client.keyring

rbd/zhangbo id=admin,keyring=/etc/ceph/ceph.client.admin.keyring

vim /etc/fstab

/dev/rbd/rbd/zhangbo /mnt/ceph-zhangbo ext4 defaults,noatime,_netdev 0 0
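For the rbdmap entries to be applied at boot, the rbdmap service shipped with the ceph packages normally needs to be enabled as well; a hedged sketch for an Ubuntu 14.04-era system (the systemd form is noted as an assumption):

update-rc.d rbdmap defaults

service rbdmap start

(On systemd-based systems, the equivalent is assumed to be: systemctl enable rbdmap.service)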

12. Expand a block device

Command: rbd resize rbd/zhangbo --size 4096

rbd resize --image zhangbo --size 4096

Online filesystem expansion is supported: resize2fs /dev/rbd0 (an example follows below)
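Example (a minimal sketch assuming the image is mapped at /dev/rbd0 and formatted ext4 as in step 9; the new size of 8192 MB is only illustrative):

root@node1:~# rbd resize rbd/zhangbo --size 8192

root@node1:~# resize2fs /dev/rbd0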

13. Complete workflow for using a block device (a consolidated sketch follows the list):

1. rbd create zhangbo --size 2048 --pool rbd

2. rbd map zhangbo --pool rbd --id admin

3. mkfs.ext4 /dev/rbd0

4. mount /dev/rbd0 /mnt/ceph-zhangbo/

5. Configure automatic mounting at boot

6. Expand the filesystem online

rbd resize rbd/zhangbo --size 2048

resize2fs /dev/rbd0

7. umount /mnt/ceph-zhangbo

8. rbd unmap /dev/rbd/rbd/zhangbo

9. Remove the entries added for automatic mounting at boot

10. rbd rm zhangbo
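The same flow as a single shell sketch (assumptions: pool rbd, image zhangbo, mount point /mnt/ceph-zhangbo, the image maps to /dev/rbd0, and the resize target of 4096 MB is only illustrative):

rbd create zhangbo --size 2048 --pool rbd

rbd map zhangbo --pool rbd --id admin

mkfs.ext4 /dev/rbd0

mkdir -p /mnt/ceph-zhangbo && mount /dev/rbd0 /mnt/ceph-zhangbo

rbd resize rbd/zhangbo --size 4096 && resize2fs /dev/rbd0

umount /mnt/ceph-zhangbo

rbd unmap /dev/rbd/rbd/zhangbo

rbd rm zhangbo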

III. Snapshots and Clones

1. Create a snapshot:

Command: rbd --pool {pool-name} snap create --snap {snap-name} {image-name}

rbd snap create {pool-name}/{image-name}@{snap-name}

Example:

root@node1:~# rbd snap create rbd/zhangbo@zhangbo_snap

root@node1:~# rbd snap ls rbd/zhangbo

SNAPID NAME            SIZE

     2 zhangbo_snap 1024 MB

2. Roll back to a snapshot

Command: rbd --pool {pool-name} snap rollback --snap {snap-name} {image-name}

rbd snap rollback {pool-name}/{image-name}@{snap-name}

Example:

root@node1:~# rbd snap rollback rbd/zhangbo@zhangbo_snap

Rolling back to snapshot: 100% complete...done.

3. Purge snapshots (deletes all snapshots of the block device)

Command: rbd --pool {pool-name} snap purge {image-name}

rbd snap purge {pool-name}/{image-name}

Example:

root@node1:~# rbd snap ls rbd/zhangbo

SNAPID NAME             SIZE

     2 zhangbo_snap  1024 MB

     3 zhangbo_snap1 1024 MB

     4 zhangbo_snap2 1024 MB

     5 zhangbo_snap3 1024 MB

root@node1:~# rbd snap purge rbd/zhangbo

Removing all snapshots: 100% complete...done.

root@node1:~# rbd snap ls rbd/zhangbo

root@node1:~#

4. Delete a snapshot (removes a specified snapshot)

Command: rbd snap rm {pool-name}/{image-name}@{snap-name}

Example:

root@node1:~# rbd snap ls rbd/zhangbo

SNAPID NAME             SIZE

    10 zhangbo_snap1 1024 MB

    11 zhangbo_snap2 1024 MB

    12 zhangbo_snap3 1024 MB

root@node1:~# rbd snap rm rbd/zhangbo@zhangbo_snap2

root@node1:~# rbd snap ls rbd/zhangbo

SNAPID NAME             SIZE

    10 zhangbo_snap1 1024 MB

    12 zhangbo_snap3 1024 MB

5. List snapshots:

Command: rbd --pool {pool-name} snap ls {image-name}

rbd snap ls {pool-name}/{image-name}

Example:

root@node1:~# rbd snap ls rbd/zhangbo

SNAPID NAME             SIZE

    16 zhangbo_snap1 1024 MB

    17 zhangbo_snap2 1024 MB

    18 zhangbo_snap3 1024 MB

6. Protect a snapshot:

Command: rbd --pool {pool-name} snap protect --image {image-name} --snap {snapshot-name}

rbd snap protect {pool-name}/{image-name}@{snapshot-name}

Example:

root@node1:~# rbd snap protect rbd/zhangbo@zhangbo_snap2

root@node1:~# rbd snap rm rbd/zhangbo@zhangbo_snap2

rbd: snapshot 'zhangbo_snap2' is protected from removal.

2016-09-08 14:05:03.874498 7f35bddad7c0 -1 librbd: removing snapshot from header failed: (16) Device or resource busy

7. Unprotect a snapshot

Command: rbd --pool {pool-name} snap unprotect --image {image-name} --snap {snapshot-name}

rbd snap unprotect {pool-name}/{image-name}@{snapshot-name}

Example:

root@node1:~# rbd snap unprotect rbd/zhangbo@zhangbo_snap2

root@node1:~# rbd snap rm rbd/zhangbo@zhangbo_snap2

root@node1:~# rbd snap ls rbd/zhangbo

SNAPID NAME             SIZE

        22 zhangbo_snap1 1024 MB

        24 zhangbo_snap3 1024 MB

8. Clone a snapshot (a snapshot must be protected before it can be cloned)

Note: snapshots are read-only, while a clone based on a snapshot is readable and writable

Command: rbd clone {pool-name}/{parent-image}@{snap-name} {pool-name}/{child-image-name}

Example:

root@node1:~# rbd clone rbd/zhangbo@zhangbo_snap2 rbd/zhangbo-snap-clone

root@node1:~# rbd ls

zhangbo

zhangbo-snap-clone

9. Create layered snapshots and clones (a concrete sketch follows this list)

Command: rbd create zhangbo --size 1024 --image-format 2

rbd snap create {pool-name}/{image-name}@{snap-name}

rbd snap protect {pool-name}/{image-name}@{snapshot-name}

rbd clone {pool-name}/{parent-image}@{snap-name} {pool-name}/{child-image-name}
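A concrete run of the steps above, as a minimal sketch (the names zhangbo2, zhangbo2_snap, and zhangbo2-clone are hypothetical):

root@node1:~# rbd create zhangbo2 --size 1024 --image-format 2

root@node1:~# rbd snap create rbd/zhangbo2@zhangbo2_snap

root@node1:~# rbd snap protect rbd/zhangbo2@zhangbo2_snap

root@node1:~# rbd clone rbd/zhangbo2@zhangbo2_snap rbd/zhangbo2-clone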

10. View a snapshot's clones:

Command: rbd --pool {pool-name} children --image {image-name} --snap {snap-name}

rbd children {pool-name}/{image-name}@{snapshot-name}

Example:

root@node1:~# rbd children rbd/zhangbo@zhangbo_snap2

rbd/zhangbo-snap-clone