Ceph Cluster Setup
Required system settings. Source: http://docs.ceph.org.cn/start/intro/
Environment: CentOS 7.5, kernel 3.10.0-862.el7.x86_64
Nodes: 4 machines
192.168.1.163 admin (deployment node; it does not join the cluster or store any data and is used for administration only; the official docs also recommend an odd number of nodes)
192.168.1.176 node1
192.168.1.177 node2
192.168.1.178 node3
1. (Run on all nodes) Configure NTP time synchronization. If the ntp commands are missing, install chrony on every node:
yum install -y chrony
Start the service and enable it at boot (if there is an NTP server on your LAN, just point chrony's configuration file at that server instead):
systemctl start chronyd
systemctl enable chronyd
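If you do have a LAN NTP server, the change is a single line in /etc/chrony.conf. A minimal sketch, assuming the server sits at 192.168.1.1 (a placeholder address; substitute your own):
# comment out the default "server N.centos.pool.ntp.org iburst" lines in /etc/chrony.conf, then:
echo "server 192.168.1.1 iburst" >> /etc/chrony.conf
systemctl restart chronyd
chronyc sources    # verify the new server is reachable and selected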
2. (Run on all nodes) Disable the firewall:
systemctl stop firewalld.service
systemctl disable firewalld.service
3. (Run on all nodes) Disable SELinux:
sed -i 's/SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0
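Note that setenforce 0 only switches SELinux to permissive mode for the current boot, while the edit to /etc/selinux/config takes effect from the next reboot; a quick check:
getenforce    # prints Permissive now, Disabled after a reboot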
4. (Run on all nodes) Configure the Ceph yum repository. The official repository may be impossible to download from without a proxy, so the official, NetEase, and Aliyun repositories are all given here; pick any one.
Official:
cat > /etc/yum.repos.d/ceph-noarch.repo << EOF
[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc
priority=1
EOF
NetEase:
cat > /etc/yum.repos.d/ceph.repo << EOF
[ceph]
name=ceph
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
EOF
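Aliyun (not reproduced in the original text; this is a sketch assuming the standard layout of the mirrors.aliyun.com Ceph mirror):
cat > /etc/yum.repos.d/ceph.repo << EOF
[ceph]
name=ceph
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/x86_64/
gpgcheck=0
[ceph-noarch]
name=cephnoarch
baseurl=https://mirrors.aliyun.com/ceph/rpm-jewel/el7/noarch/
gpgcheck=0
EOF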
Refresh the yum metadata:
yum repolist
- (Run on all nodes) Configure the hosts entries and change each machine's hostname to the name listed after its IP. To make the installation and SSH access between the nodes easier later, we first change each node's hostname and configure /etc/hosts, then copy the hosts file to every node, as follows:
On admin (192.168.1.163): hostnamectl set-hostname admin && exec bash
On node1 (192.168.1.176): hostnamectl set-hostname node1 && exec bash
On node2 (192.168.1.177): hostnamectl set-hostname node2 && exec bash
On node3 (192.168.1.178): hostnamectl set-hostname node3 && exec bash
cat /etc/hostname
cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.163 admin
192.168.1.176 node1
192.168.1.177 node2
192.168.1.178 node3
[root@admin ~]# scp /etc/hosts 192.168.1.176:/etc/
[root@admin ~]# scp /etc/hosts 192.168.1.177:/etc/
[root@admin ~]# scp /etc/hosts 192.168.1.178:/etc/
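A quick sanity check from the admin node that every name now resolves and answers (assumes ICMP is not blocked):
for h in admin node1 node2 node3; do ping -c 1 $h; done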
- (Run this step on all nodes) Create a user. The ceph/ceph-deploy commands should not be run as root, so create an ordinary user and grant it sudo rights; the username must not contain the reserved word "ceph". Here I use "cephd". The user and its sudo grant have to be created on every node.
- Create the user:
[root@admin ~]# useradd -d /home/cephd -m cephd
[root@admin ~]# passwd cephd
Changing password for user cephd.
New password:
BAD PASSWORD: The password fails the dictionary check - it is based on a dictionary word
Retype new password:
passwd: all authentication tokens updated successfully.
- Grant sudo rights (write the rule to a file under /etc/sudoers.d/, not to the directory itself, and restrict its mode):
[root@admin ~]# echo "cephd ALL = (root) NOPASSWD:ALL" | tee /etc/sudoers.d/cephd
cephd ALL = (root) NOPASSWD:ALL
[root@admin ~]# chmod 0440 /etc/sudoers.d/cephd
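To confirm the grant works, switch to cephd and run something through sudo; it should succeed with no password prompt:
[root@admin ~]# su - cephd
[cephd@admin ~]$ sudo whoami
root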
- (On the admin node) Run su - cephd to switch to the ordinary user, then generate an SSH key pair to push to every node; no passphrase is needed, just press Enter at each prompt:
[cephd@admin ~ 1]#ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cephd/.ssh/id_rsa):
Created directory '/home/cephd/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/cephd/.ssh/id_rsa.
Your public key has been saved in /home/cephd/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:TRS8K04quzd0fU1OaieJ0KWUQfilT5fM7mP4XgSa3U0 cephd@admin
The key's randomart image is:
+---[RSA 2048]----+
|             +=+ |
|            ..+ o|
|         +.* o..E|
|        .o* .+B+.|
|        So.=oX. +|
|      . + + B =. |
|      . = . o = .|
|      . + . . +. |
|        o= . +o. |
+----[SHA256]-----+
Copy the key to the other nodes:
ssh-copy-id node1
ssh-copy-id node2
ssh-copy-id node3
Test the login (from the admin node):
[cephd@admin ~ 5]#ssh node1
[cephd@node1 ~ 1]#exit
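Optionally, so that ceph-deploy can log in to each node as cephd without passing --username every time, the Ceph preflight documentation suggests adding entries to ~/.ssh/config on the admin node (run as cephd):
cat >> ~/.ssh/config << EOF
Host node1
    Hostname node1
    User cephd
Host node2
    Hostname node2
    User cephd
Host node3
    Hostname node3
    User cephd
EOF
chmod 600 ~/.ssh/config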
If you keep the firewall enabled, open the required Ceph ports instead of disabling it (skipped here; a sketch of the rules follows).
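For reference, Ceph monitors listen on TCP port 6789 by default and the OSD/MDS daemons bind in the TCP 6800-7300 range, so on a firewalled node the rules would look roughly like this:
firewall-cmd --zone=public --add-port=6789/tcp --permanent
firewall-cmd --zone=public --add-port=6800-7300/tcp --permanent
firewall-cmd --reload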
With that, the basic system configuration is complete.

Installing the Cluster

- (All nodes) Install the Ceph packages; we will then build a Ceph storage cluster with one Ceph Monitor and three Ceph OSD daemons. First install the prerequisite packages:
yum install -y yum-utils &&
sudo yum-config-manager --add-repo https://dl.fedoraproject.org/pub/epel/7/x86_64/ &&
sudo yum install --nogpgcheck -y epel-release &&
sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 &&
sudo rm /etc/yum.repos.d/dl.fedoraproject.org*
yum install ceph
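ceph-deploy itself must also be present on the admin node before the next step; if it is missing, install it there from the noarch repository configured earlier:
sudo yum install -y ceph-deploy
ceph-deploy --version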
- (Run on the admin node) Create a working directory for the cluster; note which user you are acting as (the directory is made as root here, but the ceph-deploy commands below run as cephd):
[root@admin]# mkdir my-cluster
[root@admin]# cd my-cluster/
Note: if you run into trouble and want to start over, run the following to remove the Ceph packages and wipe all of their data and configuration:
ceph-deploy purge {ceph-node} [{ceph-node}]
ceph-deploy purgedata {ceph-node} [{ceph-node}]
ceph-deploy forgetkeys
rm ceph.*
If you run purge, Ceph must be reinstalled afterwards. The final rm command deletes any files that the local ceph-deploy wrote out during a previous installation.
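For this cluster the placeholders would expand to:
ceph-deploy purge admin node1 node2 node3
ceph-deploy purgedata admin node1 node2 node3
ceph-deploy forgetkeys
rm ceph.*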
- On the admin node, from inside the directory you created for holding the configuration details, carry out the following steps with ceph-deploy.
[cephd@admin my-cluster]$ ceph-deploy install node1 node2 node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephd/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy install node1 node2 node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f3ca051f368>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : True
[ceph_deploy.cli][INFO ] func : <function install at 0x7f3ca15fc230>
[ceph_deploy.cli][INFO ] install_mgr : False
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['node1', 'node2', 'node3']
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] nogpgcheck : False
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts admin node1 node2 node3
[ceph_deploy.install][DEBUG ] Detecting platform for host admin ...
[admin][DEBUG ] connection detected need for sudo
[admin][DEBUG ] connected to host: admin
[admin][DEBUG ] detect platform information from remote host
[admin][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[admin][INFO ] installing Ceph on admin
[admin][INFO ] Running command: sudo yum clean all
[admin][DEBUG ] Loaded plugins: fastestmirror, priorities
[admin][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[admin][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source elrepo epel extras updates
[admin][DEBUG ] Cleaning up everything
[admin][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[admin][DEBUG ] Cleaning up list of fastest mirrors
[admin][INFO ] Running command: sudo yum -y install epel-release
[admin][DEBUG ] Loaded plugins: fastestmirror, priorities
[admin][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[admin][DEBUG ] Determining fastest mirrors
[admin][DEBUG ] * base: mirrors.aliyun.com
[admin][DEBUG ] * elrepo: mirrors.tuna.tsinghua.edu.cn
[admin][DEBUG ] * epel: mirror.premi.st
[admin][DEBUG ] * extras: mirrors.aliyun.com
[admin][DEBUG ] * updates: mirrors.huaweicloud.com
[admin][DEBUG ] 12 packages excluded due to repository priority protections
[admin][DEBUG ] Package epel-release-7-11.noarch already installed and latest version
[admin][DEBUG ] Nothing to do
[admin][INFO ] Running command: sudo yum -y install yum-plugin-priorities
[admin][DEBUG ] Loaded plugins: fastestmirror, priorities
[admin][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[admin][DEBUG ] Loading mirror speeds from cached hostfile
[admin][DEBUG ] * base: mirrors.aliyun.com
[admin][DEBUG ] * elrepo: mirrors.tuna.tsinghua.edu.cn
[admin][DEBUG ] * epel: mirror.premi.st
[admin][DEBUG ] * extras: mirrors.aliyun.com
[admin][DEBUG ] * updates: mirrors.huaweicloud.com
[admin][DEBUG ] 12 packages excluded due to repository priority protections
[admin][DEBUG ] Package yum-plugin-priorities-1.1.31-50.el7.noarch already installed and latest version
[admin][DEBUG ] Nothing to do
[admin][DEBUG ] Configure Yum priorities to include obsoletes
[admin][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[admin][INFO ] Running command: sudo rpm --import https://download.ceph.com/keys/release.asc
[admin][INFO ] Running command: sudo yum remove -y ceph-release
[admin][DEBUG ] Loaded plugins: fastestmirror, priorities
[admin][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[admin][DEBUG ] Resolving Dependencies
[admin][DEBUG ] --> Running transaction check
[admin][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be erased
[admin][DEBUG ] --> Finished Dependency Resolution
[admin][DEBUG ]
[admin][DEBUG ] Dependencies Resolved
[admin][DEBUG ]
[admin][DEBUG ] ================================================================================
[admin][DEBUG ] Package Arch Version Repository Size
[admin][DEBUG ] ================================================================================
[admin][DEBUG ] Removing:
[admin][DEBUG ] ceph-release noarch 1-1.el7 installed 535
[admin][DEBUG ]
[admin][DEBUG ] Transaction Summary
[admin][DEBUG ] ================================================================================
[admin][DEBUG ] Remove 1 Package
[admin][DEBUG ]
[admin][DEBUG ] Installed size: 535
[admin][DEBUG ] Downloading packages:
[admin][DEBUG ] Running transaction check
[admin][DEBUG ] Running transaction test
[admin][DEBUG ] Transaction test succeeded
[admin][DEBUG ] Running transaction
[admin][DEBUG ] Erasing : ceph-release-1-1.el7.noarch 1/1
[admin][DEBUG ] warning: /etc/yum.repos.d/ceph.repo saved as /etc/yum.repos.d/ceph.repo.rpmsave
[admin][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[admin][DEBUG ]
[admin][DEBUG ] Removed:
[admin][DEBUG ] ceph-release.noarch 0:1-1.el7
[admin][DEBUG ]
[admin][DEBUG ] Complete!
[admin][INFO ] Running command: sudo yum install -y https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[admin][DEBUG ] Loaded plugins: fastestmirror, priorities
[admin][DEBUG ] Examining /var/tmp/yum-root-m5ETmO/ceph-release-1-0.el7.noarch.rpm: ceph-release-1-1.el7.noarch
[admin][DEBUG ] Marking /var/tmp/yum-root-m5ETmO/ceph-release-1-0.el7.noarch.rpm to be installed
[admin][DEBUG ] Resolving Dependencies
[admin][DEBUG ] --> Running transaction check
[admin][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be installed
[admin][DEBUG ] --> Finished Dependency Resolution
[admin][DEBUG ]
[admin][DEBUG ] Dependencies Resolved
[admin][DEBUG ]
[admin][DEBUG ] ================================================================================
[admin][DEBUG ] Package Arch Version Repository Size
[admin][DEBUG ] ================================================================================
[admin][DEBUG ] Installing:
[admin][DEBUG ] ceph-release noarch 1-1.el7 /ceph-release-1-0.el7.noarch 535
[admin][DEBUG ]
[admin][DEBUG ] Transaction Summary
[admin][DEBUG ] ================================================================================
[admin][DEBUG ] Install 1 Package
[admin][DEBUG ]
[admin][DEBUG ] Total size: 535
[admin][DEBUG ] Installed size: 535
[admin][DEBUG ] Downloading packages:
[admin][DEBUG ] Running transaction check
[admin][DEBUG ] Running transaction test
[admin][DEBUG ] Transaction test succeeded
[admin][DEBUG ] Running transaction
[admin][DEBUG ] Installing : ceph-release-1-1.el7.noarch 1/1
[admin][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[admin][DEBUG ]
[admin][DEBUG ] Installed:
[admin][DEBUG ] ceph-release.noarch 0:1-1.el7
[admin][DEBUG ]
[admin][DEBUG ] Complete!
[admin][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[admin][WARNIN] altered ceph.repo priorities to contain: priority=1
[admin][INFO ] Running command: sudo yum -y install ceph ceph-radosgw
[admin][DEBUG ] Loaded plugins: fastestmirror, priorities
[admin][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[admin][DEBUG ] Loading mirror speeds from cached hostfile
[admin][DEBUG ] * base: mirrors.aliyun.com
[admin][DEBUG ] * elrepo: mirrors.tuna.tsinghua.edu.cn
[admin][DEBUG ] * epel: mirror.premi.st
[admin][DEBUG ] * extras: mirrors.aliyun.com
[admin][DEBUG ] * updates: mirrors.huaweicloud.com
[admin][DEBUG ] 12 packages excluded due to repository priority protections
[admin][DEBUG ] Package 2:ceph-10.2.11-0.el7.x86_64 already installed and latest version
[admin][DEBUG ] Package 2:ceph-radosgw-10.2.11-0.el7.x86_64 already installed and latest version
[admin][DEBUG ] Nothing to do
[admin][INFO ] Running command: sudo ceph --version
[admin][DEBUG ] ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
[ceph_deploy.install][DEBUG ] Detecting platform for host node1 ...
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[node1][INFO ] installing Ceph on node1
[node1][INFO ] Running command: sudo yum clean all
[node1][DEBUG ] Loaded plugins: fastestmirror, priorities
[node1][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[node1][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[node1][DEBUG ] Cleaning up everything
[node1][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[node1][DEBUG ] Cleaning up list of fastest mirrors
[node1][INFO ] Running command: sudo yum -y install epel-release
[node1][DEBUG ] Loaded plugins: fastestmirror, priorities
[node1][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[node1][DEBUG ] Determining fastest mirrors
[node1][DEBUG ] * base: mirror.jdcloud.com
[node1][DEBUG ] * epel: mirror.premi.st
[node1][DEBUG ] * extras: mirror.jdcloud.com
[node1][DEBUG ] * updates: mirror.jdcloud.com
[node1][WARNIN] http://fedora.cs.nctu.edu.tw/epel/7/x86_64/repodata/d97ad2922a45eb2a5fc007fdd84e7ae4981b257d3b94c3c9f5d7b0dda6baa098-comps-Everything.x86_64.xml.gz: [Errno 14] HTTP Error 404 - Not Found
[node1][WARNIN] Trying other mirror.
[node1][WARNIN] To address this issue please refer to the below wiki article
[node1][WARNIN]
[node1][WARNIN] https://wiki.centos.org/yum-errors
[node1][WARNIN]
[node1][WARNIN] If above article doesn't help to resolve this issue please use https://bugs.centos.org/.
[node1][WARNIN]
[node1][DEBUG ] 12 packages excluded due to repository priority protections
[node1][DEBUG ] Package epel-release-7-11.noarch already installed and latest version
[node1][DEBUG ] Nothing to do
[node1][INFO ] Running command: sudo yum -y install yum-plugin-priorities
[node1][DEBUG ] Loaded plugins: fastestmirror, priorities
[node1][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[node1][DEBUG ] Loading mirror speeds from cached hostfile
[node1][DEBUG ] * base: mirror.jdcloud.com
[node1][DEBUG ] * epel: mirror.premi.st
[node1][DEBUG ] * extras: mirror.jdcloud.com
[node1][DEBUG ] * updates: mirror.jdcloud.com
[node1][DEBUG ] 12 packages excluded due to repository priority protections
[node1][DEBUG ] Package yum-plugin-priorities-1.1.31-50.el7.noarch already installed and latest version
[node1][DEBUG ] Nothing to do
[node1][DEBUG ] Configure Yum priorities to include obsoletes
[node1][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[node1][INFO ] Running command: sudo rpm --import https://download.ceph.com/keys/release.asc
[node1][INFO ] Running command: sudo yum remove -y ceph-release
[node1][DEBUG ] Loaded plugins: fastestmirror, priorities
[node1][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[node1][DEBUG ] Resolving Dependencies
[node1][DEBUG ] --> Running transaction check
[node1][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be erased
[node1][DEBUG ] --> Finished Dependency Resolution
[node1][DEBUG ]
[node1][DEBUG ] Dependencies Resolved
[node1][DEBUG ]
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Package Arch Version Repository Size
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Removing:
[node1][DEBUG ] ceph-release noarch 1-1.el7 installed 535
[node1][DEBUG ]
[node1][DEBUG ] Transaction Summary
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Remove 1 Package
[node1][DEBUG ]
[node1][DEBUG ] Installed size: 535
[node1][DEBUG ] Downloading packages:
[node1][DEBUG ] Running transaction check
[node1][DEBUG ] Running transaction test
[node1][DEBUG ] Transaction test succeeded
[node1][DEBUG ] Running transaction
[node1][DEBUG ] Erasing : ceph-release-1-1.el7.noarch 1/1
[node1][DEBUG ] warning: /etc/yum.repos.d/ceph.repo saved as /etc/yum.repos.d/ceph.repo.rpmsave
[node1][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[node1][DEBUG ]
[node1][DEBUG ] Removed:
[node1][DEBUG ] ceph-release.noarch 0:1-1.el7
[node1][DEBUG ]
[node1][DEBUG ] Complete!
[node1][INFO ] Running command: sudo yum install -y https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[node1][DEBUG ] Loaded plugins: fastestmirror, priorities
[node1][DEBUG ] Examining /var/tmp/yum-root-9yGa_m/ceph-release-1-0.el7.noarch.rpm: ceph-release-1-1.el7.noarch
[node1][DEBUG ] Marking /var/tmp/yum-root-9yGa_m/ceph-release-1-0.el7.noarch.rpm to be installed
[node1][DEBUG ] Resolving Dependencies
[node1][DEBUG ] --> Running transaction check
[node1][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be installed
[node1][DEBUG ] --> Finished Dependency Resolution
[node1][DEBUG ]
[node1][DEBUG ] Dependencies Resolved
[node1][DEBUG ]
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Package Arch Version Repository Size
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Installing:
[node1][DEBUG ] ceph-release noarch 1-1.el7 /ceph-release-1-0.el7.noarch 535
[node1][DEBUG ]
[node1][DEBUG ] Transaction Summary
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Install 1 Package
[node1][DEBUG ]
[node1][DEBUG ] Total size: 535
[node1][DEBUG ] Installed size: 535
[node1][DEBUG ] Downloading packages:
[node1][DEBUG ] Running transaction check
[node1][DEBUG ] Running transaction test
[node1][DEBUG ] Transaction test succeeded
[node1][DEBUG ] Running transaction
[node1][DEBUG ] Installing : ceph-release-1-1.el7.noarch 1/1
[node1][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[node1][DEBUG ]
[node1][DEBUG ] Installed:
[node1][DEBUG ] ceph-release.noarch 0:1-1.el7
[node1][DEBUG ]
[node1][DEBUG ] Complete!
[node1][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[node1][WARNIN] altered ceph.repo priorities to contain: priority=1
[node1][INFO ] Running command: sudo yum -y install ceph ceph-radosgw
[node1][DEBUG ] Loaded plugins: fastestmirror, priorities
[node1][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[node1][DEBUG ] Loading mirror speeds from cached hostfile
[node1][DEBUG ] * base: mirror.jdcloud.com
[node1][DEBUG ] * epel: mirror.premi.st
[node1][DEBUG ] * extras: mirror.jdcloud.com
[node1][DEBUG ] * updates: mirror.jdcloud.com
[node1][DEBUG ] 12 packages excluded due to repository priority protections
[node1][DEBUG ] Package 2:ceph-10.2.11-0.el7.x86_64 already installed and latest version
[node1][DEBUG ] Package 2:ceph-radosgw-10.2.11-0.el7.x86_64 already installed and latest version
[node1][DEBUG ] Nothing to do
[node1][INFO ] Running command: sudo ceph --version
[node1][DEBUG ] ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
[ceph_deploy.install][DEBUG ] Detecting platform for host node2 ...
[node2][DEBUG ] connection detected need for sudo
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[node2][INFO ] installing Ceph on node2
[node2][INFO ] Running command: sudo yum clean all
[node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[node2][DEBUG ] Cleaning repos: Ceph-noarch base epel extras updates
[node2][DEBUG ] Cleaning up everything
[node2][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[node2][DEBUG ] Cleaning up list of fastest mirrors
[node2][INFO ] Running command: sudo yum -y install epel-release
[node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[node2][DEBUG ] Determining fastest mirrors
[node2][DEBUG ] * base: mirrors.huaweicloud.com
[node2][DEBUG ] * epel: fedora.cs.nctu.edu.tw
[node2][DEBUG ] * extras: mirrors.huaweicloud.com
[node2][DEBUG ] * updates: mirrors.huaweicloud.com
[node2][DEBUG ] 1 packages excluded due to repository priority protections
[node2][DEBUG ] Package epel-release-7-11.noarch already installed and latest version
[node2][DEBUG ] Nothing to do
[node2][INFO ] Running command: sudo yum -y install yum-plugin-priorities
[node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[node2][DEBUG ] Loading mirror speeds from cached hostfile
[node2][DEBUG ] * base: mirrors.huaweicloud.com
[node2][DEBUG ] * epel: fedora.cs.nctu.edu.tw
[node2][DEBUG ] * extras: mirrors.huaweicloud.com
[node2][DEBUG ] * updates: mirrors.huaweicloud.com
[node2][DEBUG ] 1 packages excluded due to repository priority protections
[node2][DEBUG ] Package yum-plugin-priorities-1.1.31-50.el7.noarch already installed and latest version
[node2][DEBUG ] Nothing to do
[node2][DEBUG ] Configure Yum priorities to include obsoletes
[node2][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[node2][INFO ] Running command: sudo rpm --import https://download.ceph.com/keys/release.asc
[node2][INFO ] Running command: sudo yum remove -y ceph-release
[node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[node2][WARNIN] No Match for argument: ceph-release
[node2][DEBUG ] No Packages marked for removal
[node2][INFO ] Running command: sudo yum install -y https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[node2][DEBUG ] Examining /var/tmp/yum-root-U2LHeu/ceph-release-1-0.el7.noarch.rpm: ceph-release-1-1.el7.noarch
[node2][DEBUG ] Marking /var/tmp/yum-root-U2LHeu/ceph-release-1-0.el7.noarch.rpm to be installed
[node2][DEBUG ] Resolving Dependencies
[node2][DEBUG ] --> Running transaction check
[node2][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be installed
[node2][DEBUG ] --> Finished Dependency Resolution
[node2][DEBUG ]
[node2][DEBUG ] Dependencies Resolved
[node2][DEBUG ]
[node2][DEBUG ] ================================================================================
[node2][DEBUG ] Package Arch Version Repository Size
[node2][DEBUG ] ================================================================================
[node2][DEBUG ] Installing:
[node2][DEBUG ] ceph-release noarch 1-1.el7 /ceph-release-1-0.el7.noarch 535
[node2][DEBUG ]
[node2][DEBUG ] Transaction Summary
[node2][DEBUG ] ================================================================================
[node2][DEBUG ] Install 1 Package
[node2][DEBUG ]
[node2][DEBUG ] Total size: 535
[node2][DEBUG ] Installed size: 535
[node2][DEBUG ] Downloading packages:
[node2][DEBUG ] Running transaction check
[node2][DEBUG ] Running transaction test
[node2][DEBUG ] Transaction test succeeded
[node2][DEBUG ] Running transaction
[node2][DEBUG ] Installing : ceph-release-1-1.el7.noarch 1/1
[node2][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[node2][DEBUG ]
[node2][DEBUG ] Installed:
[node2][DEBUG ] ceph-release.noarch 0:1-1.el7
[node2][DEBUG ]
[node2][DEBUG ] Complete!
[node2][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[node2][WARNIN] altered ceph.repo priorities to contain: priority=1
[node2][INFO ] Running command: sudo yum -y install ceph ceph-radosgw
[node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[node2][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[node2][DEBUG ] Loading mirror speeds from cached hostfile
[node2][DEBUG ] * base: mirrors.huaweicloud.com
[node2][DEBUG ] * epel: fedora.cs.nctu.edu.tw
[node2][DEBUG ] * extras: mirrors.huaweicloud.com
[node2][DEBUG ] * updates: mirrors.huaweicloud.com
[node2][DEBUG ] 12 packages excluded due to repository priority protections
[node2][DEBUG ] Package 2:ceph-10.2.11-0.el7.x86_64 already installed and latest version
[node2][DEBUG ] Resolving Dependencies
[node2][DEBUG ] --> Running transaction check
[node2][DEBUG ] ---> Package ceph-radosgw.x86_64 2:10.2.11-0.el7 will be installed
[node2][DEBUG ] --> Processing Dependency: mailcap for package: 2:ceph-radosgw-10.2.11-0.el7.x86_64
[node2][DEBUG ] --> Running transaction check
[node2][DEBUG ] ---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
[node2][DEBUG ] --> Finished Dependency Resolution
[node2][DEBUG ]
[node2][DEBUG ] Dependencies Resolved
[node2][DEBUG ]
[node2][DEBUG ] ================================================================================
[node2][DEBUG ] Package Arch Version Repository Size
[node2][DEBUG ] ================================================================================
[node2][DEBUG ] Installing:
[node2][DEBUG ] ceph-radosgw x86_64 2:10.2.11-0.el7 Ceph 267 k
[node2][DEBUG ] Installing for dependencies:
[node2][DEBUG ] mailcap noarch 2.1.41-2.el7 base 31 k
[node2][DEBUG ]
[node2][DEBUG ] Transaction Summary
[node2][DEBUG ] ================================================================================
[node2][DEBUG ] Install 1 Package (+1 Dependent package)
[node2][DEBUG ]
[node2][DEBUG ] Total download size: 297 k
[node2][DEBUG ] Installed size: 857 k
[node2][DEBUG ] Downloading packages:
[node2][DEBUG ] --------------------------------------------------------------------------------
[node2][DEBUG ] Total 384 kB/s | 297 kB 00:00
[node2][DEBUG ] Running transaction check
[node2][DEBUG ] Running transaction test
[node2][DEBUG ] Transaction test succeeded
[node2][DEBUG ] Running transaction
[node2][DEBUG ] Installing : mailcap-2.1.41-2.el7.noarch 1/2
[node2][DEBUG ] Installing : 2:ceph-radosgw-10.2.11-0.el7.x86_64 2/2
[node2][DEBUG ] Verifying : mailcap-2.1.41-2.el7.noarch 1/2
[node2][DEBUG ] Verifying : 2:ceph-radosgw-10.2.11-0.el7.x86_64 2/2
[node2][DEBUG ]
[node2][DEBUG ] Installed:
[node2][DEBUG ] ceph-radosgw.x86_64 2:10.2.11-0.el7
[node2][DEBUG ]
[node2][DEBUG ] Dependency Installed:
[node2][DEBUG ] mailcap.noarch 0:2.1.41-2.el7
[node2][DEBUG ]
[node2][DEBUG ] Complete!
[node2][INFO ] Running command: sudo ceph --version
[node2][DEBUG ] ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
[ceph_deploy.install][DEBUG ] Detecting platform for host node3 ...
[node3][DEBUG ] connection detected need for sudo
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[node3][INFO ] installing Ceph on node3
[node3][INFO ] Running command: sudo yum clean all
[node3][DEBUG ] Loaded plugins: fastestmirror
[node3][DEBUG ] Cleaning repos: Ceph-noarch base ceph ceph-noarch epel extras updates
[node3][DEBUG ] Cleaning up everything
[node3][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[node3][DEBUG ] Cleaning up list of fastest mirrors
[node3][INFO ] Running command: sudo yum -y install epel-release
[node3][DEBUG ] Loaded plugins: fastestmirror
[node3][DEBUG ] Determining fastest mirrors
[node3][DEBUG ] * base: mirror.jdcloud.com
[node3][DEBUG ] * epel: fedora.cs.nctu.edu.tw
[node3][DEBUG ] * extras: mirror.jdcloud.com
[node3][DEBUG ] * updates: mirror.jdcloud.com
[node3][DEBUG ] Package epel-release-7-11.noarch already installed and latest version
[node3][DEBUG ] Nothing to do
[node3][INFO ] Running command: sudo yum -y install yum-plugin-priorities
[node3][DEBUG ] Loaded plugins: fastestmirror
[node3][DEBUG ] Loading mirror speeds from cached hostfile
[node3][DEBUG ] * base: mirror.jdcloud.com
[node3][DEBUG ] * epel: fedora.cs.nctu.edu.tw
[node3][DEBUG ] * extras: mirror.jdcloud.com
[node3][DEBUG ] * updates: mirror.jdcloud.com
[node3][DEBUG ] Resolving Dependencies
[node3][DEBUG ] --> Running transaction check
[node3][DEBUG ] ---> Package yum-plugin-priorities.noarch 0:1.1.31-50.el7 will be installed
[node3][DEBUG ] --> Finished Dependency Resolution
[node3][DEBUG ]
[node3][DEBUG ] Dependencies Resolved
[node3][DEBUG ]
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] Package Arch Version Repository Size
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] Installing:
[node3][DEBUG ] yum-plugin-priorities noarch 1.1.31-50.el7 base 29 k
[node3][DEBUG ]
[node3][DEBUG ] Transaction Summary
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] Install 1 Package
[node3][DEBUG ]
[node3][DEBUG ] Total download size: 29 k
[node3][DEBUG ] Installed size: 28 k
[node3][DEBUG ] Downloading packages:
[node3][DEBUG ] Running transaction check
[node3][DEBUG ] Running transaction test
[node3][DEBUG ] Transaction test succeeded
[node3][DEBUG ] Running transaction
[node3][DEBUG ] Installing : yum-plugin-priorities-1.1.31-50.el7.noarch 1/1
[node3][DEBUG ] Verifying : yum-plugin-priorities-1.1.31-50.el7.noarch 1/1
[node3][DEBUG ]
[node3][DEBUG ] Installed:
[node3][DEBUG ] yum-plugin-priorities.noarch 0:1.1.31-50.el7
[node3][DEBUG ]
[node3][DEBUG ] Complete!
[node3][DEBUG ] Configure Yum priorities to include obsoletes
[node3][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[node3][INFO ] Running command: sudo rpm --import https://download.ceph.com/keys/release.asc
[node3][INFO ] Running command: sudo yum remove -y ceph-release
[node3][DEBUG ] Loaded plugins: fastestmirror, priorities
[node3][WARNIN] No Match for argument: ceph-release
[node3][DEBUG ] No Packages marked for removal
[node3][INFO ] Running command: sudo yum install -y https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[node3][DEBUG ] Loaded plugins: fastestmirror, priorities
[node3][DEBUG ] Examining /var/tmp/yum-root-eaf0yX/ceph-release-1-0.el7.noarch.rpm: ceph-release-1-1.el7.noarch
[node3][DEBUG ] Marking /var/tmp/yum-root-eaf0yX/ceph-release-1-0.el7.noarch.rpm to be installed
[node3][DEBUG ] Resolving Dependencies
[node3][DEBUG ] --> Running transaction check
[node3][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be installed
[node3][DEBUG ] --> Finished Dependency Resolution
[node3][DEBUG ]
[node3][DEBUG ] Dependencies Resolved
[node3][DEBUG ]
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] Package Arch Version Repository Size
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] Installing:
[node3][DEBUG ] ceph-release noarch 1-1.el7 /ceph-release-1-0.el7.noarch 535
[node3][DEBUG ]
[node3][DEBUG ] Transaction Summary
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] Install 1 Package
[node3][DEBUG ]
[node3][DEBUG ] Total size: 535
[node3][DEBUG ] Installed size: 535
[node3][DEBUG ] Downloading packages:
[node3][DEBUG ] Running transaction check
[node3][DEBUG ] Running transaction test
[node3][DEBUG ] Transaction test succeeded
[node3][DEBUG ] Running transaction
[node3][DEBUG ] Installing : ceph-release-1-1.el7.noarch 1/1
[node3][DEBUG ] warning: /etc/yum.repos.d/ceph.repo created as /etc/yum.repos.d/ceph.repo.rpmnew
[node3][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[node3][DEBUG ]
[node3][DEBUG ] Installed:
[node3][DEBUG ] ceph-release.noarch 0:1-1.el7
[node3][DEBUG ]
[node3][DEBUG ] Complete!
[node3][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph-source'
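The first run aborts on node3: while raising the repo priority, ceph-deploy looks for a [ceph-source] section in /etc/yum.repos.d/ceph.repo, and the hand-written repo file on that node has no such section (note the earlier warning that the packaged file was written aside as ceph.repo.rpmnew). One way to clear this, as a sketch under that assumption, is to let the file shipped by ceph-release replace the hand-written one:
[root@node3 ~]# ls /etc/yum.repos.d/    # look for leftover/duplicate ceph repo files
[root@node3 ~]# mv /etc/yum.repos.d/ceph.repo.rpmnew /etc/yum.repos.d/ceph.repo
[root@node3 ~]# yum clean all && yum repolist
Here the install is simply rerun, this time including the admin node as well: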
[cephd@admin my-cluster]$ ceph-deploy install admin node1 node2 node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephd/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy install admin node1 node2 node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f0bb429e368>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : True
[ceph_deploy.cli][INFO ] func : <function install at 0x7f0bb537b230>
[ceph_deploy.cli][INFO ] install_mgr : False
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['admin', 'node1', 'node2', 'node3']
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] nogpgcheck : False
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts admin node1 node2 node3
[ceph_deploy.install][DEBUG ] Detecting platform for host admin ...
[admin][DEBUG ] connection detected need for sudo
[admin][DEBUG ] connected to host: admin
[admin][DEBUG ] detect platform information from remote host
[admin][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[admin][INFO ] installing Ceph on admin
[admin][INFO ] Running command: sudo yum clean all
[admin][DEBUG ] Loaded plugins: fastestmirror, priorities
[admin][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[admin][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source elrepo epel extras updates
[admin][DEBUG ] Cleaning up everything
[admin][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[admin][DEBUG ] Cleaning up list of fastest mirrors
[admin][INFO ] Running command: sudo yum -y install epel-release
[admin][DEBUG ] Loaded plugins: fastestmirror, priorities
[admin][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[admin][DEBUG ] Determining fastest mirrors
[admin][DEBUG ] * base: mirrors.tuna.tsinghua.edu.cn
[admin][DEBUG ] * elrepo: mirrors.tuna.tsinghua.edu.cn
[admin][DEBUG ] * epel: mirror.premi.st
[admin][DEBUG ] * extras: mirrors.tuna.tsinghua.edu.cn
[admin][DEBUG ] * updates: mirrors.tuna.tsinghua.edu.cn
[admin][DEBUG ] 12 packages excluded due to repository priority protections
[admin][DEBUG ] Package epel-release-7-11.noarch already installed and latest version
[admin][DEBUG ] Nothing to do
[admin][INFO ] Running command: sudo yum -y install yum-plugin-priorities
[admin][DEBUG ] Loaded plugins: fastestmirror, priorities
[admin][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[admin][DEBUG ] Loading mirror speeds from cached hostfile
[admin][DEBUG ] * base: mirrors.tuna.tsinghua.edu.cn
[admin][DEBUG ] * elrepo: mirrors.tuna.tsinghua.edu.cn
[admin][DEBUG ] * epel: mirror.premi.st
[admin][DEBUG ] * extras: mirrors.tuna.tsinghua.edu.cn
[admin][DEBUG ] * updates: mirrors.tuna.tsinghua.edu.cn
[admin][DEBUG ] 12 packages excluded due to repository priority protections
[admin][DEBUG ] Package yum-plugin-priorities-1.1.31-50.el7.noarch already installed and latest version
[admin][DEBUG ] Nothing to do
[admin][DEBUG ] Configure Yum priorities to include obsoletes
[admin][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[admin][INFO ] Running command: sudo rpm --import https://download.ceph.com/keys/release.asc
[admin][INFO ] Running command: sudo yum remove -y ceph-release
[admin][DEBUG ] Loaded plugins: fastestmirror, priorities
[admin][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[admin][DEBUG ] Resolving Dependencies
[admin][DEBUG ] --> Running transaction check
[admin][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be erased
[admin][DEBUG ] --> Finished Dependency Resolution
[admin][DEBUG ]
[admin][DEBUG ] Dependencies Resolved
[admin][DEBUG ]
[admin][DEBUG ] ================================================================================
[admin][DEBUG ] Package Arch Version Repository Size
[admin][DEBUG ] ================================================================================
[admin][DEBUG ] Removing:
[admin][DEBUG ] ceph-release noarch 1-1.el7 installed 535
[admin][DEBUG ]
[admin][DEBUG ] Transaction Summary
[admin][DEBUG ] ================================================================================
[admin][DEBUG ] Remove 1 Package
[admin][DEBUG ]
[admin][DEBUG ] Installed size: 535
[admin][DEBUG ] Downloading packages:
[admin][DEBUG ] Running transaction check
[admin][DEBUG ] Running transaction test
[admin][DEBUG ] Transaction test succeeded
[admin][DEBUG ] Running transaction
[admin][DEBUG ] Erasing : ceph-release-1-1.el7.noarch 1/1
[admin][DEBUG ] warning: /etc/yum.repos.d/ceph.repo saved as /etc/yum.repos.d/ceph.repo.rpmsave
[admin][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[admin][DEBUG ]
[admin][DEBUG ] Removed:
[admin][DEBUG ] ceph-release.noarch 0:1-1.el7
[admin][DEBUG ]
[admin][DEBUG ] Complete!
[admin][INFO ] Running command: sudo yum install -y https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[admin][DEBUG ] Loaded plugins: fastestmirror, priorities
[admin][DEBUG ] Examining /var/tmp/yum-root-m5ETmO/ceph-release-1-0.el7.noarch.rpm: ceph-release-1-1.el7.noarch
[admin][DEBUG ] Marking /var/tmp/yum-root-m5ETmO/ceph-release-1-0.el7.noarch.rpm to be installed
[admin][DEBUG ] Resolving Dependencies
[admin][DEBUG ] --> Running transaction check
[admin][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be installed
[admin][DEBUG ] --> Finished Dependency Resolution
[admin][DEBUG ]
[admin][DEBUG ] Dependencies Resolved
[admin][DEBUG ]
[admin][DEBUG ] ================================================================================
[admin][DEBUG ] Package Arch Version Repository Size
[admin][DEBUG ] ================================================================================
[admin][DEBUG ] Installing:
[admin][DEBUG ] ceph-release noarch 1-1.el7 /ceph-release-1-0.el7.noarch 535
[admin][DEBUG ]
[admin][DEBUG ] Transaction Summary
[admin][DEBUG ] ================================================================================
[admin][DEBUG ] Install 1 Package
[admin][DEBUG ]
[admin][DEBUG ] Total size: 535
[admin][DEBUG ] Installed size: 535
[admin][DEBUG ] Downloading packages:
[admin][DEBUG ] Running transaction check
[admin][DEBUG ] Running transaction test
[admin][DEBUG ] Transaction test succeeded
[admin][DEBUG ] Running transaction
[admin][DEBUG ] Installing : ceph-release-1-1.el7.noarch 1/1
[admin][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[admin][DEBUG ]
[admin][DEBUG ] Installed:
[admin][DEBUG ] ceph-release.noarch 0:1-1.el7
[admin][DEBUG ]
[admin][DEBUG ] Complete!
[admin][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[admin][WARNIN] altered ceph.repo priorities to contain: priority=1
[admin][INFO ] Running command: sudo yum -y install ceph ceph-radosgw
[admin][DEBUG ] Loaded plugins: fastestmirror, priorities
[admin][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[admin][DEBUG ] Loading mirror speeds from cached hostfile
[admin][DEBUG ] * base: mirrors.tuna.tsinghua.edu.cn
[admin][DEBUG ] * elrepo: mirrors.tuna.tsinghua.edu.cn
[admin][DEBUG ] * epel: mirror.premi.st
[admin][DEBUG ] * extras: mirrors.tuna.tsinghua.edu.cn
[admin][DEBUG ] * updates: mirrors.tuna.tsinghua.edu.cn
[admin][DEBUG ] 12 packages excluded due to repository priority protections
[admin][DEBUG ] Package 2:ceph-10.2.11-0.el7.x86_64 already installed and latest version
[admin][DEBUG ] Package 2:ceph-radosgw-10.2.11-0.el7.x86_64 already installed and latest version
[admin][DEBUG ] Nothing to do
[admin][INFO ] Running command: sudo ceph --version
[admin][DEBUG ] ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
[ceph_deploy.install][DEBUG ] Detecting platform for host node1 ...
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[node1][INFO ] installing Ceph on node1
[node1][INFO ] Running command: sudo yum clean all
[node1][DEBUG ] Loaded plugins: fastestmirror, priorities
[node1][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[node1][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[node1][DEBUG ] Cleaning up everything
[node1][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[node1][DEBUG ] Cleaning up list of fastest mirrors
[node1][INFO ] Running command: sudo yum -y install epel-release
[node1][DEBUG ] Loaded plugins: fastestmirror, priorities
[node1][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[node1][DEBUG ] Determining fastest mirrors
[node1][DEBUG ] * base: mirrors.tuna.tsinghua.edu.cn
[node1][DEBUG ] * epel: fedora.cs.nctu.edu.tw
[node1][DEBUG ] * extras: mirrors.tuna.tsinghua.edu.cn
[node1][DEBUG ] * updates: mirrors.tuna.tsinghua.edu.cn
[node1][WARNIN] http://fedora.cs.nctu.edu.tw/epel/7/x86_64/repodata/d97ad2922a45eb2a5fc007fdd84e7ae4981b257d3b94c3c9f5d7b0dda6baa098-comps-Everything.x86_64.xml.gz: [Errno 14] HTTP Error 404 - Not Found
[node1][WARNIN] Trying other mirror.
[node1][WARNIN] To address this issue please refer to the below wiki article
[node1][WARNIN]
[node1][WARNIN] https://wiki.centos.org/yum-errors
[node1][WARNIN]
[node1][WARNIN] If above article doesn't help to resolve this issue please use https://bugs.centos.org/.
[node1][WARNIN]
[node1][DEBUG ] 12 packages excluded due to repository priority protections
[node1][DEBUG ] Package epel-release-7-11.noarch already installed and latest version
[node1][DEBUG ] Nothing to do
[node1][INFO ] Running command: sudo yum -y install yum-plugin-priorities
[node1][DEBUG ] Loaded plugins: fastestmirror, priorities
[node1][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[node1][DEBUG ] Loading mirror speeds from cached hostfile
[node1][DEBUG ] * base: mirrors.tuna.tsinghua.edu.cn
[node1][DEBUG ] * epel: fedora.cs.nctu.edu.tw
[node1][DEBUG ] * extras: mirrors.tuna.tsinghua.edu.cn
[node1][DEBUG ] * updates: mirrors.tuna.tsinghua.edu.cn
[node1][DEBUG ] 12 packages excluded due to repository priority protections
[node1][DEBUG ] Package yum-plugin-priorities-1.1.31-50.el7.noarch already installed and latest version
[node1][DEBUG ] Nothing to do
[node1][DEBUG ] Configure Yum priorities to include obsoletes
[node1][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[node1][INFO ] Running command: sudo rpm --import https://download.ceph.com/keys/release.asc
[node1][INFO ] Running command: sudo yum remove -y ceph-release
[node1][DEBUG ] Loaded plugins: fastestmirror, priorities
[node1][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[node1][DEBUG ] Resolving Dependencies
[node1][DEBUG ] --> Running transaction check
[node1][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be erased
[node1][DEBUG ] --> Finished Dependency Resolution
[node1][DEBUG ]
[node1][DEBUG ] Dependencies Resolved
[node1][DEBUG ]
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Package Arch Version Repository Size
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Removing:
[node1][DEBUG ] ceph-release noarch 1-1.el7 installed 535
[node1][DEBUG ]
[node1][DEBUG ] Transaction Summary
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Remove 1 Package
[node1][DEBUG ]
[node1][DEBUG ] Installed size: 535
[node1][DEBUG ] Downloading packages:
[node1][DEBUG ] Running transaction check
[node1][DEBUG ] Running transaction test
[node1][DEBUG ] Transaction test succeeded
[node1][DEBUG ] Running transaction
[node1][DEBUG ] Erasing : ceph-release-1-1.el7.noarch 1/1
[node1][DEBUG ] warning: /etc/yum.repos.d/ceph.repo saved as /etc/yum.repos.d/ceph.repo.rpmsave
[node1][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[node1][DEBUG ]
[node1][DEBUG ] Removed:
[node1][DEBUG ] ceph-release.noarch 0:1-1.el7
[node1][DEBUG ]
[node1][DEBUG ] Complete!
[node1][INFO ] Running command: sudo yum install -y https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[node1][DEBUG ] Loaded plugins: fastestmirror, priorities
[node1][DEBUG ] Examining /var/tmp/yum-root-9yGa_m/ceph-release-1-0.el7.noarch.rpm: ceph-release-1-1.el7.noarch
[node1][DEBUG ] Marking /var/tmp/yum-root-9yGa_m/ceph-release-1-0.el7.noarch.rpm to be installed
[node1][DEBUG ] Resolving Dependencies
[node1][DEBUG ] --> Running transaction check
[node1][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be installed
[node1][DEBUG ] --> Finished Dependency Resolution
[node1][DEBUG ]
[node1][DEBUG ] Dependencies Resolved
[node1][DEBUG ]
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Package Arch Version Repository Size
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Installing:
[node1][DEBUG ] ceph-release noarch 1-1.el7 /ceph-release-1-0.el7.noarch 535
[node1][DEBUG ]
[node1][DEBUG ] Transaction Summary
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Install 1 Package
[node1][DEBUG ]
[node1][DEBUG ] Total size: 535
[node1][DEBUG ] Installed size: 535
[node1][DEBUG ] Downloading packages:
[node1][DEBUG ] Running transaction check
[node1][DEBUG ] Running transaction test
[node1][DEBUG ] Transaction test succeeded
[node1][DEBUG ] Running transaction
[node1][DEBUG ] Installing : ceph-release-1-1.el7.noarch 1/1
[node1][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[node1][DEBUG ]
[node1][DEBUG ] Installed:
[node1][DEBUG ] ceph-release.noarch 0:1-1.el7
[node1][DEBUG ]
[node1][DEBUG ] Complete!
[node1][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[node1][WARNIN] altered ceph.repo priorities to contain: priority=1
[node1][INFO ] Running command: sudo yum -y install ceph ceph-radosgw
[node1][DEBUG ] Loaded plugins: fastestmirror, priorities
[node1][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[node1][DEBUG ] Loading mirror speeds from cached hostfile
[node1][DEBUG ] * base: mirrors.tuna.tsinghua.edu.cn
[node1][DEBUG ] * epel: fedora.cs.nctu.edu.tw
[node1][DEBUG ] * extras: mirrors.tuna.tsinghua.edu.cn
[node1][DEBUG ] * updates: mirrors.tuna.tsinghua.edu.cn
[node1][DEBUG ] 12 packages excluded due to repository priority protections
[node1][DEBUG ] Package 2:ceph-10.2.11-0.el7.x86_64 already installed and latest version
[node1][DEBUG ] Package 2:ceph-radosgw-10.2.11-0.el7.x86_64 already installed and latest version
[node1][DEBUG ] Nothing to do
[node1][INFO ] Running command: sudo ceph --version
[node1][DEBUG ] ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
[ceph_deploy.install][DEBUG ] Detecting platform for host node2 ...
[node2][DEBUG ] connection detected need for sudo
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[node2][INFO ] installing Ceph on node2
[node2][INFO ] Running command: sudo yum clean all
[node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[node2][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[node2][DEBUG ] Cleaning repos: Ceph Ceph-noarch base ceph-source epel extras updates
[node2][DEBUG ] Cleaning up everything
[node2][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[node2][DEBUG ] Cleaning up list of fastest mirrors
[node2][INFO ] Running command: sudo yum -y install epel-release
[node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[node2][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[node2][DEBUG ] Determining fastest mirrors
[node2][DEBUG ] * base: mirror.jdcloud.com
[node2][DEBUG ] * epel: fedora.cs.nctu.edu.tw
[node2][DEBUG ] * extras: mirrors.tuna.tsinghua.edu.cn
[node2][DEBUG ] * updates: mirrors.tuna.tsinghua.edu.cn
[node2][DEBUG ] 12 packages excluded due to repository priority protections
[node2][DEBUG ] Package epel-release-7-11.noarch already installed and latest version
[node2][DEBUG ] Nothing to do
[node2][INFO ] Running command: sudo yum -y install yum-plugin-priorities
[node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[node2][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[node2][DEBUG ] Loading mirror speeds from cached hostfile
[node2][DEBUG ] * base: mirror.jdcloud.com
[node2][DEBUG ] * epel: fedora.cs.nctu.edu.tw
[node2][DEBUG ] * extras: mirrors.tuna.tsinghua.edu.cn
[node2][DEBUG ] * updates: mirrors.tuna.tsinghua.edu.cn
[node2][DEBUG ] 12 packages excluded due to repository priority protections
[node2][DEBUG ] Package yum-plugin-priorities-1.1.31-50.el7.noarch already installed and latest version
[node2][DEBUG ] Nothing to do
[node2][DEBUG ] Configure Yum priorities to include obsoletes
[node2][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[node2][INFO ] Running command: sudo rpm --import https://download.ceph.com/keys/release.asc
[node2][INFO ] Running command: sudo yum remove -y ceph-release
[node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[node2][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[node2][DEBUG ] Resolving Dependencies
[node2][DEBUG ] --> Running transaction check
[node2][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be erased
[node2][DEBUG ] --> Finished Dependency Resolution
[node2][DEBUG ]
[node2][DEBUG ] Dependencies Resolved
[node2][DEBUG ]
[node2][DEBUG ] ================================================================================
[node2][DEBUG ] Package Arch Version Repository Size
[node2][DEBUG ] ================================================================================
[node2][DEBUG ] Removing:
[node2][DEBUG ] ceph-release noarch 1-1.el7 installed 535
[node2][DEBUG ]
[node2][DEBUG ] Transaction Summary
[node2][DEBUG ] ================================================================================
[node2][DEBUG ] Remove 1 Package
[node2][DEBUG ]
[node2][DEBUG ] Installed size: 535
[node2][DEBUG ] Downloading packages:
[node2][DEBUG ] Running transaction check
[node2][DEBUG ] Running transaction test
[node2][DEBUG ] Transaction test succeeded
[node2][DEBUG ] Running transaction
[node2][DEBUG ] Erasing : ceph-release-1-1.el7.noarch 1/1
[node2][DEBUG ] warning: /etc/yum.repos.d/ceph.repo saved as /etc/yum.repos.d/ceph.repo.rpmsave
[node2][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[node2][DEBUG ]
[node2][DEBUG ] Removed:
[node2][DEBUG ] ceph-release.noarch 0:1-1.el7
[node2][DEBUG ]
[node2][DEBUG ] Complete!
[node2][INFO ] Running command: sudo yum install -y https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[node2][DEBUG ] Examining /var/tmp/yum-root-U2LHeu/ceph-release-1-0.el7.noarch.rpm: ceph-release-1-1.el7.noarch
[node2][DEBUG ] Marking /var/tmp/yum-root-U2LHeu/ceph-release-1-0.el7.noarch.rpm to be installed
[node2][DEBUG ] Resolving Dependencies
[node2][DEBUG ] --> Running transaction check
[node2][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be installed
[node2][DEBUG ] --> Finished Dependency Resolution
[node2][DEBUG ]
[node2][DEBUG ] Dependencies Resolved
[node2][DEBUG ]
[node2][DEBUG ] ================================================================================
[node2][DEBUG ] Package Arch Version Repository Size
[node2][DEBUG ] ================================================================================
[node2][DEBUG ] Installing:
[node2][DEBUG ] ceph-release noarch 1-1.el7 /ceph-release-1-0.el7.noarch 535
[node2][DEBUG ]
[node2][DEBUG ] Transaction Summary
[node2][DEBUG ] ================================================================================
[node2][DEBUG ] Install 1 Package
[node2][DEBUG ]
[node2][DEBUG ] Total size: 535
[node2][DEBUG ] Installed size: 535
[node2][DEBUG ] Downloading packages:
[node2][DEBUG ] Running transaction check
[node2][DEBUG ] Running transaction test
[node2][DEBUG ] Transaction test succeeded
[node2][DEBUG ] Running transaction
[node2][DEBUG ] Installing : ceph-release-1-1.el7.noarch 1/1
[node2][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[node2][DEBUG ]
[node2][DEBUG ] Installed:
[node2][DEBUG ] ceph-release.noarch 0:1-1.el7
[node2][DEBUG ]
[node2][DEBUG ] Complete!
[node2][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[node2][WARNIN] altered ceph.repo priorities to contain: priority=1
[node2][INFO ] Running command: sudo yum -y install ceph ceph-radosgw
[node2][DEBUG ] Loaded plugins: fastestmirror, priorities
[node2][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[node2][DEBUG ] Loading mirror speeds from cached hostfile
[node2][DEBUG ] * base: mirror.jdcloud.com
[node2][DEBUG ] * epel: fedora.cs.nctu.edu.tw
[node2][DEBUG ] * extras: mirrors.tuna.tsinghua.edu.cn
[node2][DEBUG ] * updates: mirrors.tuna.tsinghua.edu.cn
[node2][DEBUG ] 12 packages excluded due to repository priority protections
[node2][DEBUG ] Package 2:ceph-10.2.11-0.el7.x86_64 already installed and latest version
[node2][DEBUG ] Package 2:ceph-radosgw-10.2.11-0.el7.x86_64 already installed and latest version
[node2][DEBUG ] Nothing to do
[node2][INFO ] Running command: sudo ceph --version
[node2][DEBUG ] ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
[ceph_deploy.install][DEBUG ] Detecting platform for host node3 ...
[node3][DEBUG ] connection detected need for sudo
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: CentOS Linux 7.5.1804 Core
[node3][INFO ] installing Ceph on node3
[node3][INFO ] Running command: sudo yum clean all
[node3][DEBUG ] Loaded plugins: fastestmirror, priorities
[node3][DEBUG ] Cleaning repos: Ceph-noarch base epel extras updates
[node3][DEBUG ] Cleaning up everything
[node3][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[node3][DEBUG ] Cleaning up list of fastest mirrors
[node3][INFO ] Running command: sudo yum -y install epel-release
[node3][DEBUG ] Loaded plugins: fastestmirror, priorities
[node3][DEBUG ] Determining fastest mirrors
[node3][DEBUG ] * base: mirrors.huaweicloud.com
[node3][DEBUG ] * epel: mirror.premi.st
[node3][DEBUG ] * extras: mirrors.aliyun.com
[node3][DEBUG ] * updates: mirrors.tuna.tsinghua.edu.cn
[node3][DEBUG ] 1 packages excluded due to repository priority protections
[node3][DEBUG ] Package epel-release-7-11.noarch already installed and latest version
[node3][DEBUG ] Nothing to do
[node3][INFO ] Running command: sudo yum -y install yum-plugin-priorities
[node3][DEBUG ] Loaded plugins: fastestmirror, priorities
[node3][DEBUG ] Loading mirror speeds from cached hostfile
[node3][DEBUG ] * base: mirrors.huaweicloud.com
[node3][DEBUG ] * epel: mirror.premi.st
[node3][DEBUG ] * extras: mirrors.aliyun.com
[node3][DEBUG ] * updates: mirrors.tuna.tsinghua.edu.cn
[node3][DEBUG ] 1 packages excluded due to repository priority protections
[node3][DEBUG ] Package yum-plugin-priorities-1.1.31-50.el7.noarch already installed and latest version
[node3][DEBUG ] Nothing to do
[node3][DEBUG ] Configure Yum priorities to include obsoletes
[node3][WARNIN] check_obsoletes has been enabled for Yum priorities plugin
[node3][INFO ] Running command: sudo rpm --import https://download.ceph.com/keys/release.asc
[node3][INFO ] Running command: sudo yum remove -y ceph-release
[node3][DEBUG ] Loaded plugins: fastestmirror, priorities
[node3][WARNIN] No Match for argument: ceph-release
[node3][DEBUG ] No Packages marked for removal
[node3][INFO ] Running command: sudo yum install -y https://download.ceph.com/rpm-jewel/el7/noarch/ceph-release-1-0.el7.noarch.rpm
[node3][DEBUG ] Loaded plugins: fastestmirror, priorities
[node3][DEBUG ] Examining /var/tmp/yum-root-eaf0yX/ceph-release-1-0.el7.noarch.rpm: ceph-release-1-1.el7.noarch
[node3][DEBUG ] Marking /var/tmp/yum-root-eaf0yX/ceph-release-1-0.el7.noarch.rpm to be installed
[node3][DEBUG ] Resolving Dependencies
[node3][DEBUG ] --> Running transaction check
[node3][DEBUG ] ---> Package ceph-release.noarch 0:1-1.el7 will be installed
[node3][DEBUG ] --> Finished Dependency Resolution
[node3][DEBUG ]
[node3][DEBUG ] Dependencies Resolved
[node3][DEBUG ]
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] Package Arch Version Repository Size
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] Installing:
[node3][DEBUG ] ceph-release noarch 1-1.el7 /ceph-release-1-0.el7.noarch 535
[node3][DEBUG ]
[node3][DEBUG ] Transaction Summary
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] Install 1 Package
[node3][DEBUG ]
[node3][DEBUG ] Total size: 535
[node3][DEBUG ] Installed size: 535
[node3][DEBUG ] Downloading packages:
[node3][DEBUG ] Running transaction check
[node3][DEBUG ] Running transaction test
[node3][DEBUG ] Transaction test succeeded
[node3][DEBUG ] Running transaction
[node3][DEBUG ] Installing : ceph-release-1-1.el7.noarch 1/1
[node3][DEBUG ] Verifying : ceph-release-1-1.el7.noarch 1/1
[node3][DEBUG ]
[node3][DEBUG ] Installed:
[node3][DEBUG ] ceph-release.noarch 0:1-1.el7
[node3][DEBUG ]
[node3][DEBUG ] Complete!
[node3][WARNIN] ensuring that /etc/yum.repos.d/ceph.repo contains a high priority
[node3][WARNIN] altered ceph.repo priorities to contain: priority=1
[node3][INFO ] Running command: sudo yum -y install ceph ceph-radosgw
[node3][DEBUG ] Loaded plugins: fastestmirror, priorities
[node3][WARNIN] Repository Ceph-noarch is listed more than once in the configuration
[node3][DEBUG ] Loading mirror speeds from cached hostfile
[node3][DEBUG ] * base: mirrors.huaweicloud.com
[node3][DEBUG ] * epel: mirror.premi.st
[node3][DEBUG ] * extras: mirrors.aliyun.com
[node3][DEBUG ] * updates: mirrors.tuna.tsinghua.edu.cn
[node3][DEBUG ] 12 packages excluded due to repository priority protections
[node3][DEBUG ] Package 2:ceph-10.2.11-0.el7.x86_64 already installed and latest version
[node3][DEBUG ] Resolving Dependencies
[node3][DEBUG ] --> Running transaction check
[node3][DEBUG ] ---> Package ceph-radosgw.x86_64 2:10.2.11-0.el7 will be installed
[node3][DEBUG ] --> Processing Dependency: mailcap for package: 2:ceph-radosgw-10.2.11-0.el7.x86_64
[node3][DEBUG ] --> Running transaction check
[node3][DEBUG ] ---> Package mailcap.noarch 0:2.1.41-2.el7 will be installed
[node3][DEBUG ] --> Finished Dependency Resolution
[node3][DEBUG ]
[node3][DEBUG ] Dependencies Resolved
[node3][DEBUG ]
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] Package Arch Version Repository Size
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] Installing:
[node3][DEBUG ] ceph-radosgw x86_64 2:10.2.11-0.el7 Ceph 267 k
[node3][DEBUG ] Installing for dependencies:
[node3][DEBUG ] mailcap noarch 2.1.41-2.el7 base 31 k
[node3][DEBUG ]
[node3][DEBUG ] Transaction Summary
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] Install 1 Package (+1 Dependent package)
[node3][DEBUG ]
[node3][DEBUG ] Total download size: 297 k
[node3][DEBUG ] Installed size: 857 k
[node3][DEBUG ] Downloading packages:
[node3][DEBUG ] --------------------------------------------------------------------------------
[node3][DEBUG ] Total 345 kB/s | 297 kB 00:00
[node3][DEBUG ] Running transaction check
[node3][DEBUG ] Running transaction test
[node3][DEBUG ] Transaction test succeeded
[node3][DEBUG ] Running transaction
[node3][DEBUG ] Installing : mailcap-2.1.41-2.el7.noarch 1/2
[node3][DEBUG ] Installing : 2:ceph-radosgw-10.2.11-0.el7.x86_64 2/2
[node3][DEBUG ] Verifying : mailcap-2.1.41-2.el7.noarch 1/2
[node3][DEBUG ] Verifying : 2:ceph-radosgw-10.2.11-0.el7.x86_64 2/2
[node3][DEBUG ]
[node3][DEBUG ] Installed:
[node3][DEBUG ] ceph-radosgw.x86_64 2:10.2.11-0.el7
[node3][DEBUG ]
[node3][DEBUG ] Dependency Installed:
[node3][DEBUG ] mailcap.noarch 0:2.1.41-2.el7
[node3][DEBUG ]
[node3][DEBUG ] Complete!
[node3][INFO ] Running command: sudo ceph --version
[node3][DEBUG ] ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
Problems encountered during installation:
(1) While running the install, every node reports this error: [ceph_deploy][ERROR ] RuntimeError: NoSectionError: No section: 'ceph-source' Workaround: log in to the node that reported the error (root cause unclear for now) and clean up: yum remove ceph-release rm /etc/yum.repos.d/ceph.repo.rpmsave
(2) Note that su and su - switch users differently; log in the same way you were logged in when the SSH key was created;
(3) The configuration file created under /etc/sudoers.d/ following the official docs did not take effect here; instead, the rule has to be added to the sudoers file itself (around line 100): cephd ALL=(ALL) NOPASSWD: ALL
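A minimal sketch of that edit, assuming /etc/sudoers is modified directly; visudo validates the syntax before saving, which avoids breaking sudo:
sudo visudo
# then add near line 100, after the existing root entry:
cephd   ALL=(ALL)       NOPASSWD: ALL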
- Deploy the initial monitors and gather keys [cephd@admin my-cluster]$ ceph-deploy mon create-initial [ceph_deploy.conf][DEBUG ] found configuration file at: /home/cephd/.cephdeploy.conf [ceph_deploy.cli][INFO ] Invoked (1.5.39): /usr/bin/ceph-deploy mon create-initial [ceph_deploy.cli][INFO ] ceph-deploy options: [ceph_deploy.cli][INFO ] username : None [ceph_deploy.cli][INFO ] verbose : False [ceph_deploy.cli][INFO ] overwrite_conf : False [ceph_deploy.cli][INFO ] subcommand : create-initial [ceph_deploy.cli][INFO ] quiet : False [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f319fa08200> [ceph_deploy.cli][INFO ] cluster : ceph [ceph_deploy.cli][INFO ] func : <function mon at 0x7f319f9e5b18> [ceph_deploy.cli][INFO ] ceph_conf : None [ceph_deploy.cli][INFO ] default_release : False [ceph_deploy.cli][INFO ] keyrings : None [ceph_deploy.mon][WARNIN] keyring (ceph.mon.keyring) not found, creating a new one [ceph_deploy.new][DEBUG ] Creating a random mon key... [ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring... [ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts admin node1 node2 node3 [ceph_deploy.mon][DEBUG ] detecting platform for host admin ... [admin][DEBUG ] connection detected need for sudo [admin][DEBUG ] connected to host: admin [admin][DEBUG ] detect platform information from remote host [admin][DEBUG ] detect machine type [admin][DEBUG ] find the location of an executable [ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core [admin][DEBUG ] determining if provided host has same hostname in remote [admin][DEBUG ] get remote short hostname [admin][DEBUG ] deploying mon to admin [admin][DEBUG ] get remote short hostname [admin][DEBUG ] remote hostname: admin [admin][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf [admin][DEBUG ] create the mon path if it does not exist [admin][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-admin/done [admin][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-admin/done [admin][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-admin.mon.keyring [admin][DEBUG ] create the monitor keyring file [admin][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i admin --keyring /var/lib/ceph/tmp/ceph-admin.mon.keyring --setuser 167 --setgroup 167 [admin][DEBUG ] ceph-mon: mon.noname-a 192.168.1.163:6789/0 is local, renaming to mon.admin [admin][DEBUG ] ceph-mon: set fsid to 07700035-c7d1-4e76-accf-d3d1d3159dce [admin][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-admin for mon.admin [admin][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-admin.mon.keyring [admin][DEBUG ] create a done file to avoid re-doing the mon deployment [admin][DEBUG ] create the init path if it does not exist [admin][INFO ] Running command: sudo systemctl enable ceph.target [admin][INFO ] Running command: sudo systemctl enable ceph-mon@admin [admin][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@admin.service to /usr/lib/systemd/system/ceph-mon@.service.
[admin][INFO ] Running command: sudo systemctl start ceph-mon@admin [admin][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.admin.asok mon_status [admin][DEBUG ] ******************************************************************************** [admin][DEBUG ] status for monitor: mon.admin [admin][DEBUG ] { [admin][DEBUG ] "election_epoch": 0, [admin][DEBUG ] "extra_probe_peers": [ [admin][DEBUG ] "192.168.1.176:6789/0", [admin][DEBUG ] "192.168.1.177:6789/0", [admin][DEBUG ] "192.168.1.178:6789/0" [admin][DEBUG ] ], [admin][DEBUG ] "monmap": { [admin][DEBUG ] "created": "2019-02-27 18:04:40.266147", [admin][DEBUG ] "epoch": 0, [admin][DEBUG ] "fsid": "07700035-c7d1-4e76-accf-d3d1d3159dce", [admin][DEBUG ] "modified": "2019-02-27 18:04:40.266147", [admin][DEBUG ] "mons": [ [admin][DEBUG ] { [admin][DEBUG ] "addr": "192.168.1.163:6789/0", [admin][DEBUG ] "name": "admin", [admin][DEBUG ] "rank": 0 [admin][DEBUG ] }, [admin][DEBUG ] { [admin][DEBUG ] "addr": "0.0.0.0:0/1", [admin][DEBUG ] "name": "node1", [admin][DEBUG ] "rank": 1 [admin][DEBUG ] }, [admin][DEBUG ] { [admin][DEBUG ] "addr": "0.0.0.0:0/2", [admin][DEBUG ] "name": "node2", [admin][DEBUG ] "rank": 2 [admin][DEBUG ] }, [admin][DEBUG ] { [admin][DEBUG ] "addr": "0.0.0.0:0/3", [admin][DEBUG ] "name": "node3", [admin][DEBUG ] "rank": 3 [admin][DEBUG ] } [admin][DEBUG ] ] [admin][DEBUG ] }, [admin][DEBUG ] "name": "admin", [admin][DEBUG ] "outside_quorum": [ [admin][DEBUG ] "admin" [admin][DEBUG ] ], [admin][DEBUG ] "quorum": [], [admin][DEBUG ] "rank": 0, [admin][DEBUG ] "state": "probing", [admin][DEBUG ] "sync_provider": [] [admin][DEBUG ] } [admin][DEBUG ] ******************************************************************************** [admin][INFO ] monitor: mon.admin is running [admin][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.admin.asok mon_status [ceph_deploy.mon][DEBUG ] detecting platform for host node1 ... 
[node1][DEBUG ] connection detected need for sudo [node1][DEBUG ] connected to host: node1 [node1][DEBUG ] detect platform information from remote host [node1][DEBUG ] detect machine type [node1][DEBUG ] find the location of an executable [ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core [node1][DEBUG ] determining if provided host has same hostname in remote [node1][DEBUG ] get remote short hostname [node1][DEBUG ] deploying mon to node1 [node1][DEBUG ] get remote short hostname [node1][DEBUG ] remote hostname: node1 [node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf [node1][DEBUG ] create the mon path if it does not exist [node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node1/done [node1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node1/done [node1][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-node1.mon.keyring [node1][DEBUG ] create the monitor keyring file [node1][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i node1 --keyring /var/lib/ceph/tmp/ceph-node1.mon.keyring --setuser 167 --setgroup 167 [node1][DEBUG ] ceph-mon: mon.noname-b 192.168.1.176:6789/0 is local, renaming to mon.node1 [node1][DEBUG ] ceph-mon: set fsid to 07700035-c7d1-4e76-accf-d3d1d3159dce [node1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1 [node1][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-node1.mon.keyring [node1][DEBUG ] create a done file to avoid re-doing the mon deployment [node1][DEBUG ] create the init path if it does not exist [node1][INFO ] Running command: sudo systemctl enable ceph.target [node1][INFO ] Running command: sudo systemctl enable ceph-mon@node1 [node1][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@node1.service to /usr/lib/systemd/system/ceph-mon@.service. 
[node1][INFO ] Running command: sudo systemctl start ceph-mon@node1 [node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status [node1][DEBUG ] ******************************************************************************** [node1][DEBUG ] status for monitor: mon.node1 [node1][DEBUG ] { [node1][DEBUG ] "election_epoch": 0, [node1][DEBUG ] "extra_probe_peers": [ [node1][DEBUG ] "192.168.1.163:6789/0", [node1][DEBUG ] "192.168.1.177:6789/0", [node1][DEBUG ] "192.168.1.178:6789/0" [node1][DEBUG ] ], [node1][DEBUG ] "monmap": { [node1][DEBUG ] "created": "2019-02-27 18:04:44.703372", [node1][DEBUG ] "epoch": 0, [node1][DEBUG ] "fsid": "07700035-c7d1-4e76-accf-d3d1d3159dce", [node1][DEBUG ] "modified": "2019-02-27 18:04:44.703372", [node1][DEBUG ] "mons": [ [node1][DEBUG ] { [node1][DEBUG ] "addr": "192.168.1.163:6789/0", [node1][DEBUG ] "name": "admin", [node1][DEBUG ] "rank": 0 [node1][DEBUG ] }, [node1][DEBUG ] { [node1][DEBUG ] "addr": "192.168.1.176:6789/0", [node1][DEBUG ] "name": "node1", [node1][DEBUG ] "rank": 1 [node1][DEBUG ] }, [node1][DEBUG ] { [node1][DEBUG ] "addr": "0.0.0.0:0/2", [node1][DEBUG ] "name": "node2", [node1][DEBUG ] "rank": 2 [node1][DEBUG ] }, [node1][DEBUG ] { [node1][DEBUG ] "addr": "0.0.0.0:0/3", [node1][DEBUG ] "name": "node3", [node1][DEBUG ] "rank": 3 [node1][DEBUG ] } [node1][DEBUG ] ] [node1][DEBUG ] }, [node1][DEBUG ] "name": "node1", [node1][DEBUG ] "outside_quorum": [ [node1][DEBUG ] "admin", [node1][DEBUG ] "node1" [node1][DEBUG ] ], [node1][DEBUG ] "quorum": [], [node1][DEBUG ] "rank": 1, [node1][DEBUG ] "state": "probing", [node1][DEBUG ] "sync_provider": [] [node1][DEBUG ] } [node1][DEBUG ] ******************************************************************************** [node1][INFO ] monitor: mon.node1 is running [node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status [ceph_deploy.mon][DEBUG ] detecting platform for host node2 ... 
[node2][DEBUG ] connection detected need for sudo [node2][DEBUG ] connected to host: node2 [node2][DEBUG ] detect platform information from remote host [node2][DEBUG ] detect machine type [node2][DEBUG ] find the location of an executable [ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core [node2][DEBUG ] determining if provided host has same hostname in remote [node2][DEBUG ] get remote short hostname [node2][DEBUG ] deploying mon to node2 [node2][DEBUG ] get remote short hostname [node2][DEBUG ] remote hostname: node2 [node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf [node2][DEBUG ] create the mon path if it does not exist [node2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node2/done [node2][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node2/done [node2][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-node2.mon.keyring [node2][DEBUG ] create the monitor keyring file [node2][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i node2 --keyring /var/lib/ceph/tmp/ceph-node2.mon.keyring --setuser 167 --setgroup 167 [node2][DEBUG ] ceph-mon: mon.noname-c 192.168.1.177:6789/0 is local, renaming to mon.node2 [node2][DEBUG ] ceph-mon: set fsid to 07700035-c7d1-4e76-accf-d3d1d3159dce [node2][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node2 for mon.node2 [node2][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-node2.mon.keyring [node2][DEBUG ] create a done file to avoid re-doing the mon deployment [node2][DEBUG ] create the init path if it does not exist [node2][INFO ] Running command: sudo systemctl enable ceph.target [node2][INFO ] Running command: sudo systemctl enable ceph-mon@node2 [node2][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@node2.service to /usr/lib/systemd/system/ceph-mon@.service. 
[node2][INFO ] Running command: sudo systemctl start ceph-mon@node2 [node2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status [node2][DEBUG ] ******************************************************************************** [node2][DEBUG ] status for monitor: mon.node2 [node2][DEBUG ] { [node2][DEBUG ] "election_epoch": 1, [node2][DEBUG ] "extra_probe_peers": [ [node2][DEBUG ] "192.168.1.163:6789/0", [node2][DEBUG ] "192.168.1.176:6789/0", [node2][DEBUG ] "192.168.1.178:6789/0" [node2][DEBUG ] ], [node2][DEBUG ] "monmap": { [node2][DEBUG ] "created": "2019-02-27 18:04:49.303519", [node2][DEBUG ] "epoch": 0, [node2][DEBUG ] "fsid": "07700035-c7d1-4e76-accf-d3d1d3159dce", [node2][DEBUG ] "modified": "2019-02-27 18:04:49.303519", [node2][DEBUG ] "mons": [ [node2][DEBUG ] { [node2][DEBUG ] "addr": "192.168.1.163:6789/0", [node2][DEBUG ] "name": "admin", [node2][DEBUG ] "rank": 0 [node2][DEBUG ] }, [node2][DEBUG ] { [node2][DEBUG ] "addr": "192.168.1.176:6789/0", [node2][DEBUG ] "name": "node1", [node2][DEBUG ] "rank": 1 [node2][DEBUG ] }, [node2][DEBUG ] { [node2][DEBUG ] "addr": "192.168.1.177:6789/0", [node2][DEBUG ] "name": "node2", [node2][DEBUG ] "rank": 2 [node2][DEBUG ] }, [node2][DEBUG ] { [node2][DEBUG ] "addr": "0.0.0.0:0/3", [node2][DEBUG ] "name": "node3", [node2][DEBUG ] "rank": 3 [node2][DEBUG ] } [node2][DEBUG ] ] [node2][DEBUG ] }, [node2][DEBUG ] "name": "node2", [node2][DEBUG ] "outside_quorum": [], [node2][DEBUG ] "quorum": [], [node2][DEBUG ] "rank": 2, [node2][DEBUG ] "state": "electing", [node2][DEBUG ] "sync_provider": [] [node2][DEBUG ] } [node2][DEBUG ] ******************************************************************************** [node2][INFO ] monitor: mon.node2 is running [node2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status [ceph_deploy.mon][DEBUG ] detecting platform for host node3 ... 
[node3][DEBUG ] connection detected need for sudo [node3][DEBUG ] connected to host: node3 [node3][DEBUG ] detect platform information from remote host [node3][DEBUG ] detect machine type [node3][DEBUG ] find the location of an executable [ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.5.1804 Core [node3][DEBUG ] determining if provided host has same hostname in remote [node3][DEBUG ] get remote short hostname [node3][DEBUG ] deploying mon to node3 [node3][DEBUG ] get remote short hostname [node3][DEBUG ] remote hostname: node3 [node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf [node3][DEBUG ] create the mon path if it does not exist [node3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node3/done [node3][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node3/done [node3][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-node3.mon.keyring [node3][DEBUG ] create the monitor keyring file [node3][INFO ] Running command: sudo ceph-mon --cluster ceph --mkfs -i node3 --keyring /var/lib/ceph/tmp/ceph-node3.mon.keyring --setuser 167 --setgroup 167 [node3][DEBUG ] ceph-mon: mon.noname-d 192.168.1.178:6789/0 is local, renaming to mon.node3 [node3][DEBUG ] ceph-mon: set fsid to 07700035-c7d1-4e76-accf-d3d1d3159dce [node3][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node3 for mon.node3 [node3][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-node3.mon.keyring [node3][DEBUG ] create a done file to avoid re-doing the mon deployment [node3][DEBUG ] create the init path if it does not exist [node3][INFO ] Running command: sudo systemctl enable ceph.target [node3][INFO ] Running command: sudo systemctl enable ceph-mon@node3 [node3][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/ceph-mon@node3.service to /usr/lib/systemd/system/ceph-mon@.service. 
[node3][INFO ] Running command: sudo systemctl start ceph-mon@node3 [node3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status [node3][DEBUG ] ******************************************************************************** [node3][DEBUG ] status for monitor: mon.node3 [node3][DEBUG ] { [node3][DEBUG ] "election_epoch": 3, [node3][DEBUG ] "extra_probe_peers": [ [node3][DEBUG ] "192.168.1.163:6789/0", [node3][DEBUG ] "192.168.1.176:6789/0", [node3][DEBUG ] "192.168.1.177:6789/0" [node3][DEBUG ] ], [node3][DEBUG ] "monmap": { [node3][DEBUG ] "created": "2019-02-27 18:04:40.266147", [node3][DEBUG ] "epoch": 1, [node3][DEBUG ] "fsid": "07700035-c7d1-4e76-accf-d3d1d3159dce", [node3][DEBUG ] "modified": "2019-02-27 18:04:40.266147", [node3][DEBUG ] "mons": [ [node3][DEBUG ] { [node3][DEBUG ] "addr": "192.168.1.163:6789/0", [node3][DEBUG ] "name": "admin", [node3][DEBUG ] "rank": 0 [node3][DEBUG ] }, [node3][DEBUG ] { [node3][DEBUG ] "addr": "192.168.1.176:6789/0", [node3][DEBUG ] "name": "node1", [node3][DEBUG ] "rank": 1 [node3][DEBUG ] }, [node3][DEBUG ] { [node3][DEBUG ] "addr": "192.168.1.177:6789/0", [node3][DEBUG ] "name": "node2", [node3][DEBUG ] "rank": 2 [node3][DEBUG ] }, [node3][DEBUG ] { [node3][DEBUG ] "addr": "192.168.1.178:6789/0", [node3][DEBUG ] "name": "node3", [node3][DEBUG ] "rank": 3 [node3][DEBUG ] } [node3][DEBUG ] ] [node3][DEBUG ] }, [node3][DEBUG ] "name": "node3", [node3][DEBUG ] "outside_quorum": [], [node3][DEBUG ] "quorum": [], [node3][DEBUG ] "rank": 3, [node3][DEBUG ] "state": "electing", [node3][DEBUG ] "sync_provider": [] [node3][DEBUG ] } [node3][DEBUG ] ******************************************************************************** [node3][INFO ] monitor: mon.node3 is running [node3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status [ceph_deploy.mon][INFO ] processing monitor mon.admin [admin][DEBUG ] connection detected need for sudo [admin][DEBUG ] connected to host: admin [admin][DEBUG ] detect platform information from remote host [admin][DEBUG ] detect machine type [admin][DEBUG ] find the location of an executable [admin][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.admin.asok mon_status [ceph_deploy.mon][WARNIN] mon.admin monitor is not yet in quorum, tries left: 5 [ceph_deploy.mon][WARNIN] waiting 5 seconds before retrying [admin][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.admin.asok mon_status [ceph_deploy.mon][INFO ] mon.admin monitor has reached quorum! [ceph_deploy.mon][INFO ] processing monitor mon.node1 [node1][DEBUG ] connection detected need for sudo [node1][DEBUG ] connected to host: node1 [node1][DEBUG ] detect platform information from remote host [node1][DEBUG ] detect machine type [node1][DEBUG ] find the location of an executable [node1][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status [ceph_deploy.mon][INFO ] mon.node1 monitor has reached quorum! 
[ceph_deploy.mon][INFO ] processing monitor mon.node2 [node2][DEBUG ] connection detected need for sudo [node2][DEBUG ] connected to host: node2 [node2][DEBUG ] detect platform information from remote host [node2][DEBUG ] detect machine type [node2][DEBUG ] find the location of an executable [node2][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status [ceph_deploy.mon][INFO ] mon.node2 monitor has reached quorum! [ceph_deploy.mon][INFO ] processing monitor mon.node3 [node3][DEBUG ] connection detected need for sudo [node3][DEBUG ] connected to host: node3 [node3][DEBUG ] detect platform information from remote host [node3][DEBUG ] detect machine type [node3][DEBUG ] find the location of an executable [node3][INFO ] Running command: sudo ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status [ceph_deploy.mon][INFO ] mon.node3 monitor has reached quorum! [ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum [ceph_deploy.mon][INFO ] Running gatherkeys... [ceph_deploy.gatherkeys][INFO ] Storing keys in temp directory /tmp/tmpWBN4kq [admin][DEBUG ] connection detected need for sudo [admin][DEBUG ] connected to host: admin [admin][DEBUG ] detect platform information from remote host [admin][DEBUG ] detect machine type [admin][DEBUG ] get remote short hostname [admin][DEBUG ] fetch remote file [admin][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --admin-daemon=/var/run/ceph/ceph-mon.admin.asok mon_status [admin][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-admin/keyring auth get client.admin [admin][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-admin/keyring auth get client.bootstrap-mds [admin][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-admin/keyring auth get client.bootstrap-mgr [admin][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-admin/keyring auth get-or-create client.bootstrap-mgr mon allow profile bootstrap-mgr [admin][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-admin/keyring auth get client.bootstrap-osd [admin][INFO ] Running command: sudo /usr/bin/ceph --connect-timeout=25 --cluster=ceph --name mon. --keyring=/var/lib/ceph/mon/ceph-admin/keyring auth get client.bootstrap-rgw [ceph_deploy.gatherkeys][INFO ] Storing ceph.client.admin.keyring [ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mds.keyring [ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-mgr.keyring [ceph_deploy.gatherkeys][INFO ] keyring 'ceph.mon.keyring' already exists [ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-osd.keyring [ceph_deploy.gatherkeys][INFO ] Storing ceph.bootstrap-rgw.keyring [ceph_deploy.gatherkeys][INFO ] Destroy temp directory /tmp/tmpWBN4kq
When this finishes, a set of keyrings is generated in the current directory; these hold the authentication credentials the various components use to talk to each other. Note: if this step fails with a message like "Unable to find /etc/ceph/ceph.client.admin.keyring", make sure the IPs listed for the monitor nodes in ceph.conf are the public IPs, not private ones. At this point the ceph monitors have started successfully.
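Before moving on, a quick check that the keyrings were actually written; the file names below come from the gatherkeys output above:
[cephd@admin my-cluster]$ ls -1 *.keyring
ceph.bootstrap-mds.keyring
ceph.bootstrap-mgr.keyring
ceph.bootstrap-osd.keyring
ceph.bootstrap-rgw.keyring
ceph.client.admin.keyring
ceph.mon.keyring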
Create the OSDs
The OSDs are where the data ultimately lives. Here we prepare three OSD nodes, for osd.0, osd.1 and osd.2. The official recommendation is to give each OSD and its journal a dedicated disk or partition, but these virtual machines have no spare disks, so we create directories on the local VM disk to stand in as the OSD storage. The ceph-deploy commands are run on the admin node.
-
Create the data directory on each node and change its owner and group to the cephd user: [root@admin ~ 1]#su cephd [cephd@admin root]$ ssh node1 [cephd@node1 ~ 1]#cd /var/ [cephd@node1 /var/local 5]#sudo mkdir osd1 [cephd@node1 /var/local 6]#exit [cephd@node1 ~ 8]#sudo chown -R cephd:cephd /var/local/osd1 drwxr-xr-x. 2 cephd cephd 6 Feb 28 10:00 osd1
[cephd@admin root]$ ssh node2 Last login: Wed Feb 27 17:06:17 2019 [cephd@node2 ~ 1]#sudo mkdir /var/local/osd2 [cephd@node2 ~ 3]#sudo chown -R cephd:cephd /var/local/osd2 drwxr-xr-x. 2 cephd cephd 6 Feb 28 10:01 osd2
[cephd@admin root]$ ssh node3 Last login: Wed Feb 27 17:06:25 2019 [cephd@node3 ~ 2]#sudo mkdir /var/local/osd3 [cephd@node3 ~ 1]#sudo chown -R cephd:cephd /var/local/osd3 drwxr-xr-x. 2 cephd cephd 6 Feb 28 10:01 osd3
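The same per-node steps can also be done in one pass from the admin node; this is only a convenience sketch, assuming the passwordless SSH and sudo access for cephd set up earlier:
for i in 1 2 3; do
  ssh node$i "sudo mkdir -p /var/local/osd$i && sudo chown -R cephd:cephd /var/local/osd$i"
done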
Prepare and activate the OSDs
Next, run the prepare OSD operation from the ceph-deploy node; its purpose is to write out, on each OSD node, the metadata that the later activate step needs.
[cephd@admin ~]$ cd my-cluster/ [cephd@admin my-cluster]$ sudo ceph-deploy --overwrite-conf osd prepare node1:/var/local/osd1 node2:/var/local/osd2 node3:/var/local/osd3 [ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf [ceph_deploy.cli][INFO ] Invoked (1.5.39): /bin/ceph-deploy --overwrite-conf osd prepare node1:/var/local/osd1 node2:/var/local/osd2 node3:/var/local/osd3 [ceph_deploy.cli][INFO ] ceph-deploy options: [ceph_deploy.cli][INFO ] username : None [ceph_deploy.cli][INFO ] block_db : None [ceph_deploy.cli][INFO ] disk : [('node1', '/var/local/osd1', None), ('node2', '/var/local/osd2', None), ('node3', '/var/local/osd3', None)] [ceph_deploy.cli][INFO ] dmcrypt : False [ceph_deploy.cli][INFO ] verbose : False [ceph_deploy.cli][INFO ] bluestore : None [ceph_deploy.cli][INFO ] block_wal : None [ceph_deploy.cli][INFO ] overwrite_conf : True [ceph_deploy.cli][INFO ] subcommand : prepare [ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys [ceph_deploy.cli][INFO ] quiet : False [ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fd5725d0488> [ceph_deploy.cli][INFO ] cluster : ceph [ceph_deploy.cli][INFO ] fs_type : xfs [ceph_deploy.cli][INFO ] filestore : None [ceph_deploy.cli][INFO ] func : <function osd at 0x7fd572622410> [ceph_deploy.cli][INFO ] ceph_conf : None [ceph_deploy.cli][INFO ] default_release : False [ceph_deploy.cli][INFO ] zap_disk : False [ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node1:/var/local/osd1: node2:/var/local/osd2: node3:/var/local/osd3: root@node1's password: root@node1's password: [node1][DEBUG ] connected to host: node1 [node1][DEBUG ] detect platform information from remote host [node1][DEBUG ] detect machine type [node1][DEBUG ] find the location of an executable [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core [ceph_deploy.osd][DEBUG ] Deploying osd to node1 [node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf [ceph_deploy.osd][DEBUG ] Preparing host node1 disk /var/local/osd1 journal None activate False [node1][DEBUG ] find the location of an executable [node1][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /var/local/osd1 [node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid [node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph [node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph [node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph [node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size [node1][WARNIN] populate_data_path: Preparing osd data dir /var/local/osd1 [node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/local/osd1/ceph_fsid.5336.tmp [node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd1/ceph_fsid.5336.tmp [node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/local/osd1/fsid.5336.tmp [node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd1/fsid.5336.tmp [node1][WARNIN] command: Running command: /usr/sbin/restorecon -R 
/var/local/osd1/magic.5336.tmp [node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd1/magic.5336.tmp [node1][INFO ] checking OSD status... [node1][DEBUG ] find the location of an executable [node1][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json [ceph_deploy.osd][DEBUG ] Host node1 is now ready for osd use. root@node2's password: root@node2's password: [node2][DEBUG ] connected to host: node2 [node2][DEBUG ] detect platform information from remote host [node2][DEBUG ] detect machine type [node2][DEBUG ] find the location of an executable [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core [ceph_deploy.osd][DEBUG ] Deploying osd to node2 [node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf [ceph_deploy.osd][DEBUG ] Preparing host node2 disk /var/local/osd2 journal None activate False [node2][DEBUG ] find the location of an executable [node2][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /var/local/osd2 [node2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid [node2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph [node2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph [node2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph [node2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size [node2][WARNIN] populate_data_path: Preparing osd data dir /var/local/osd2 [node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/local/osd2/ceph_fsid.3545.tmp [node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd2/ceph_fsid.3545.tmp [node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/local/osd2/fsid.3545.tmp [node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd2/fsid.3545.tmp [node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/local/osd2/magic.3545.tmp [node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd2/magic.3545.tmp [node2][INFO ] checking OSD status... [node2][DEBUG ] find the location of an executable [node2][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json [ceph_deploy.osd][DEBUG ] Host node2 is now ready for osd use. 
root@node3's password: root@node3's password: [node3][DEBUG ] connected to host: node3 [node3][DEBUG ] detect platform information from remote host [node3][DEBUG ] detect machine type [node3][DEBUG ] find the location of an executable [ceph_deploy.osd][INFO ] Distro info: CentOS Linux 7.5.1804 Core [ceph_deploy.osd][DEBUG ] Deploying osd to node3 [node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf [ceph_deploy.osd][DEBUG ] Preparing host node3 disk /var/local/osd3 journal None activate False [node3][DEBUG ] find the location of an executable [node3][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /var/local/osd3 [node3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid [node3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph [node3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph [node3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --log-file $run_dir/$cluster-osd-check.log --cluster ceph --setuser ceph --setgroup ceph [node3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size [node3][WARNIN] populate_data_path: Preparing osd data dir /var/local/osd3 [node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/local/osd3/ceph_fsid.3383.tmp [node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd3/ceph_fsid.3383.tmp [node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/local/osd3/fsid.3383.tmp [node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd3/fsid.3383.tmp [node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/local/osd3/magic.3383.tmp [node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/local/osd3/magic.3383.tmp [node3][INFO ] checking OSD status... [node3][DEBUG ] find the location of an executable [node3][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json [ceph_deploy.osd][DEBUG ] Host node3 is now ready for osd use.
That produces a pile of warning messages; ignore them for now and keep going as long as nothing actually errors out... Next, we need to activate the OSDs. [cephd@admin my-cluster]$ ceph-deploy osd activate node1:/var/local/osd1 node2:/var/local/osd2 node3:/var/local/osd3
It fails with the following error: [node1][WARNIN] 2019-02-28 10:34:01.762473 7f15eee11ac0 -1 OSD::mkfs: ObjectStore::mkfs failed with error -13 [node1][WARNIN] 2019-02-28 10:34:01.762586 7f15eee11ac0 -1 ** ERROR: error creating empty object store in /var/local/osd1: (13) Permission denied [node1][WARNIN] [node1][ERROR ] RuntimeError: command returned non-zero exit status: 1 [ceph_deploy][ERROR ] RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /var/local/osd1
Cause: the OSD daemon runs as the ceph user (note the --setuser ceph --setgroup ceph flags in the prepare log above), but the data directories we created are owned by cephd, so ceph-osd cannot write its object store there.
Fix: change the ownership of the data directory on each OSD node, e.g. on node1:
chown -R ceph:ceph /var/local/osd1
and likewise /var/local/osd2 on node2 and /var/local/osd3 on node3, then re-run the activate command; see the sketch below.
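A compact version of the fix, run from the admin node (a sketch assuming the node/path layout above), followed by re-running the activation:
for i in 1 2 3; do
  ssh node$i "sudo chown -R ceph:ceph /var/local/osd$i"
done
ceph-deploy osd activate node1:/var/local/osd1 node2:/var/local/osd2 node3:/var/local/osd3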
At this point the Ceph storage cluster is fully set up; let's check whether it started successfully!
Note: to make sure we have the proper read permissions on ceph.client.admin.keyring, the permissions also need to be adjusted: $ sudo chmod +r /etc/ceph/ceph.client.admin.keyring
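One step not captured in the session above: for ceph commands to work on the other nodes, each of them needs ceph.conf and the admin keyring. In the standard ceph-deploy workflow that is pushed from the admin node roughly like this (an assumption here, since the original log does not show it):
[cephd@admin my-cluster]$ ceph-deploy admin admin node1 node2 node3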
Check the cluster status
[cephd@admin my-cluster]$ ceph -s cluster 07700035-c7d1-4e76-accf-d3d1d3159dce health HEALTH_ERR 64 pgs are stuck inactive for more than 300 seconds 64 pgs stuck inactive 64 pgs stuck unclean no osds monmap e1: 4 mons at {admin=192.168.1.163:6789/0,node1=192.168.1.176:6789/0,node2=192.168.1.177:6789/0,node3=192.168.1.178:6789/0} election epoch 8, quorum 0,1,2,3 admin,node1,node2,node3 osdmap e1: 0 osds: 0 up, 0 in flags sortbitwise,require_jewel_osds pgmap v2: 64 pgs, 1 pools, 0 bytes data, 0 objects 0 kB used, 0 kB / 0 kB avail 64 creating
(Note the HEALTH_ERR and "no osds" here: this ceph -s snapshot was evidently taken before the OSDs were activated; once they join, health recovers, as the checks below show.)
Check the cluster health
[cephd@node1 osd1]$ ceph health HEALTH_OK
[root@node2 ~ 15]#ceph health HEALTH_OK
[root@node3 ~ 9]#ceph health HEALTH_OK
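Beyond ceph health, a couple of read-only commands (available in jewel) give a more detailed view from any node that has the admin keyring:
ceph osd tree                              # should list osd.0, osd.1 and osd.2 as "up"
ceph quorum_status --format json-pretty    # shows all four monitors in quorum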
Mounting
How to mount: create a mount point and try a local mount #mkdir /cephfs_test #mount -t ceph 192.168.1.176:6789:/ /cephfs_test -o name=admin,secret=AQDchXhYTtjwHBAAk2/H1Ypa23WxKv4jA1NFWw== #df -hT 192.168.1.176:6789:/ ceph 60G 104M 60G 1% /cephfs_test
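Passing the secret on the command line leaves it in the shell history. The kernel client also accepts a secretfile option; a sketch, where /etc/ceph/admin.secret is an arbitrary path and the key is the one from ceph.client.admin.keyring used above:
# store only the base64 key value, not the whole keyring file
echo "AQDchXhYTtjwHBAAk2/H1Ypa23WxKv4jA1NFWw==" > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret
mount -t ceph 192.168.1.176:6789:/ /cephfs_test -o name=admin,secretfile=/etc/ceph/admin.secret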
If there are multiple mon nodes, you can list several of them in the mount; this keeps CephFS highly available, so reads and writes are unaffected when one node goes down.
#mount -t ceph 192.168.1.176,192.168.1.177,192.168.1.178:6789:/ /mnt/cephfs -o name=admin,secret=AQDchXhYTtjwHBAAk2/H1Ypa23WxKv4jA1NFWw== #df -hT 192.168.1.176,192.168.1.177,192.168.1.178:6789:/ ceph 60G 104M 60G 1% /cephfs_test
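To make the mount persist across reboots, an /etc/fstab entry along these lines can be used (a sketch assuming the secretfile shown earlier; _netdev defers the mount until the network is up):
192.168.1.176,192.168.1.177,192.168.1.178:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,noatime,_netdev  0  0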
Testing the mounts on this machine:
[root@admin ~ 11]#mount sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel) proc on /proc type proc (rw,nosuid,nodev,noexec,relatime) devtmpfs on /dev type devtmpfs (rw,nosuid,seclabel,size=920380k,nr_inodes=230095,mode=755) securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime) tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev,seclabel) devpts on /dev/pts type devpts (rw,nosuid,noexec,relatime,seclabel,gid=5,mode=620,ptmxmode=000) tmpfs on /run type tmpfs (rw,nosuid,nodev,seclabel,mode=755) tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,seclabel,mode=755) cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,xattr,release_agent=/usr/lib/systemd/systemd-cgroups-agent,name=systemd) pstore on /sys/fs/pstore type pstore (rw,nosuid,nodev,noexec,relatime) cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuacct,cpu) cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids) cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory) cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer) cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices) cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,perf_event) cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,net_prio,net_cls) cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset) cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,hugetlb) cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio) configfs on /sys/kernel/config type configfs (rw,relatime) /dev/mapper/centos-root on / type xfs (rw,relatime,seclabel,attr2,inode64,noquota) selinuxfs on /sys/fs/selinux type selinuxfs (rw,relatime) debugfs on /sys/kernel/debug type debugfs (rw,relatime) mqueue on /dev/mqueue type mqueue (rw,relatime,seclabel) hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,seclabel) /dev/sda1 on /boot type xfs (rw,relatime,seclabel,attr2,inode64,noquota) /dev/mapper/centos-home on /home type xfs (rw,relatime,seclabel,attr2,inode64,noquota) tmpfs on /run/user/0 type tmpfs (rw,nosuid,nodev,relatime,seclabel,size=186532k,mode=700) systemd-1 on /proc/sys/fs/binfmt_misc type autofs (rw,relatime,fd=46,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=40442) binfmt_misc on /proc/sys/fs/binfmt_misc type binfmt_misc (rw,relatime) 192.168.1.176:6789:/ on /mnt type ceph (rw,relatime,name=admin,secret=<hidden>,acl,wsize=16777216) 192.168.1.177:6789:/ on /opt type ceph (rw,relatime,name=admin,secret=<hidden>,acl,wsize=16777216) 192.168.1.176,192.168.1.177,192.168.1.178:6789:/ on /mnt/cephfs type ceph (rw,relatime,name=admin,secret=<hidden>,acl,wsize=16777216)
The last few lines show the mounts succeeded; other aspects are left for further study...