Abstract

This post walks through building a three-node etcd cluster on CentOS 7: etcd is installed with yum on each node and the cluster is bootstrapped by hand.

1. Node Information

Role     OS        IP
master   CentOS-7  192.168.10.5
node-1   CentOS-7  192.168.10.6
node-2   CentOS-7  192.168.10.7
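The three members above are later joined into a single ETCD_INITIAL_CLUSTER string. As a quick sketch (a hypothetical helper in pure bash, not part of the original setup), the value can be generated from the node list so the three config files stay consistent:

```shell
#!/usr/bin/env bash
# Build the ETCD_INITIAL_CLUSTER value from the node table above.
# The member names etcd01..etcd03 anticipate the configs in section 4.
names=(etcd01 etcd02 etcd03)
ips=(192.168.10.5 192.168.10.6 192.168.10.7)

cluster=""
for i in "${!names[@]}"; do
  cluster+="${names[$i]}=http://${ips[$i]}:2380,"
done
cluster="${cluster%,}"   # drop the trailing comma
echo "$cluster"
```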

2. Configure the Aliyun EPEL Repository:

mv /etc/yum.repos.d/epel.repo /etc/yum.repos.d/epel.repo.backup
mv /etc/yum.repos.d/epel-testing.repo /etc/yum.repos.d/epel-testing.repo.backup

wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo

3. Install etcd

[root@node1 ~]# yum -y install etcd
[root@node2 ~]# yum -y install etcd
[root@master ~]# yum -y install etcd

[root@master ~]# etcd --version
etcd Version: 3.2.22
Git SHA: 1674e68
Go Version: go1.9.4
Go OS/Arch: linux/amd64

4. Configure the Node Files

master node

[root@master ~]# egrep -v "^$|^#" /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd" # etcd data directory
ETCD_LISTEN_PEER_URLS="http://192.168.10.5:2380" # URL for intra-cluster (peer) traffic
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379" # URL for external clients
ETCD_NAME="etcd01" # name of this etcd member
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.10.5:2380" # peer URL advertised to the other members
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379" # client URL advertised to clients
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.10.5:2380,etcd02=http://192.168.10.6:2380,etcd03=http://192.168.10.7:2380" # initial list of cluster members
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster" # token (name) of the cluster
ETCD_INITIAL_CLUSTER_STATE="new" # initial cluster state; "new" bootstraps a new cluster
node-1

[root@node1 etcd]# egrep -v "^$|^#" /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.10.6:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_NAME="etcd02"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.10.6:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.10.5:2380,etcd02=http://192.168.10.6:2380,etcd03=http://192.168.10.7:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
node-2

[root@node2 ~]# egrep -v "^$|^#" /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.10.7:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_NAME="etcd03"
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.10.7:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.10.5:2380,etcd02=http://192.168.10.6:2380,etcd03=http://192.168.10.7:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
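One easy mistake with these three files is an ETCD_NAME that does not appear as a member in ETCD_INITIAL_CLUSTER, in which case the daemon refuses to start. A minimal sanity check in pure bash (a hypothetical helper; the values are copied from the master config above):

```shell
# Verify that the local member name appears in the initial cluster list.
ETCD_NAME="etcd01"
ETCD_INITIAL_CLUSTER="etcd01=http://192.168.10.5:2380,etcd02=http://192.168.10.6:2380,etcd03=http://192.168.10.7:2380"

case ",$ETCD_INITIAL_CLUSTER," in
  *",${ETCD_NAME}="*) result="ok" ;;        # name found as a member
  *)                  result="missing" ;;   # etcd would fail to start
esac
echo "$result"
```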

5. Test a Single etcd Node

[root@master etcd]# systemctl start etcd
[root@master etcd]# systemctl status etcd
● etcd.service - Etcd Server
Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2019-01-08 17:28:34 CST; 2h 32min ago
Main PID: 114077 (etcd)
Tasks: 7
Memory: 48.7M
CGroup: /system.slice/etcd.service
└─114077 /usr/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-client-urls=http://0.0.0.0:2379

Jan 08 20:00:08 master etcd[114077]: 4c5d727d37966a87 became candidate at term 16
Jan 08 20:00:08 master etcd[114077]: 4c5d727d37966a87 received MsgVoteResp from 4c5d727d37966a87 at term 16
Jan 08 20:00:08 master etcd[114077]: 4c5d727d37966a87 [logterm: 11, index: 44] sent MsgVote request to 315fd62e577c4037 at term 16
Jan 08 20:00:08 master etcd[114077]: 4c5d727d37966a87 [logterm: 11, index: 44] sent MsgVote request to f617da66fb9b90ad at term 16
Jan 08 20:00:08 master etcd[114077]: 4c5d727d37966a87 received MsgVoteResp from f617da66fb9b90ad at term 16
Jan 08 20:00:08 master etcd[114077]: 4c5d727d37966a87 [quorum:2] has received 2 MsgVoteResp votes and 0 vote rejections
Jan 08 20:00:08 master etcd[114077]: 4c5d727d37966a87 became leader at term 16
Jan 08 20:00:08 master etcd[114077]: raft.node: 4c5d727d37966a87 elected leader 4c5d727d37966a87 at term 16
Jan 08 20:00:09 master etcd[114077]: the clock difference against peer 315fd62e577c4037 is too high [4.285950016s > 1s]
Jan 08 20:00:09 master etcd[114077]: the clock difference against peer f617da66fb9b90ad is too high [4.120954945s > 1s]
[root@master etcd]# netstat -tunlp|grep etcd
tcp 0 0 192.168.10.5:2380 0.0.0.0:* LISTEN 114077/etcd
tcp6 0 0 :::2379 :::* LISTEN 114077/etcd
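The journal above warns that the clock difference against two peers exceeds 1s; etcd expects member clocks to stay roughly in sync. A common remedy on CentOS 7 (a sketch, assuming the stock chrony package from the base repository) is to run an NTP client on every node:

```shell
# Install and enable an NTP client on every node to keep clocks aligned
yum -y install chrony
systemctl enable --now chronyd
chronyc tracking   # verify the offset is shrinking
```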

6. Test the etcd Cluster

List the cluster members:
[root@node1 etcd]# etcdctl member list
315fd62e577c4037: name=etcd03 peerURLs=http://192.168.10.7:2380 clientURLs=http://0.0.0.0:2379 isLeader=false
4c5d727d37966a87: name=etcd01 peerURLs=http://192.168.10.5:2380 clientURLs=http://0.0.0.0:2379 isLeader=true
f617da66fb9b90ad: name=etcd02 peerURLs=http://192.168.10.6:2380 clientURLs=http://0.0.0.0:2379 isLeader=false
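The leader can be picked out of that output with plain text tools. The sketch below feeds a verbatim copy of the member list above through awk, so it runs without a live cluster:

```shell
# Sample output copied from `etcdctl member list` above
members='315fd62e577c4037: name=etcd03 peerURLs=http://192.168.10.7:2380 clientURLs=http://0.0.0.0:2379 isLeader=false
4c5d727d37966a87: name=etcd01 peerURLs=http://192.168.10.5:2380 clientURLs=http://0.0.0.0:2379 isLeader=true
f617da66fb9b90ad: name=etcd02 peerURLs=http://192.168.10.6:2380 clientURLs=http://0.0.0.0:2379 isLeader=false'

# Find the line with isLeader=true and print its name= field
leader=$(printf '%s\n' "$members" | awk '/isLeader=true/ {
  for (i = 1; i <= NF; i++)
    if ($i ~ /^name=/) { sub(/^name=/, "", $i); print $i }
}')
echo "$leader"   # etcd01
```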
Check the cluster health:

[root@master etcd]# etcdctl cluster-health
member 315fd62e577c4037 is healthy: got healthy result from http://0.0.0.0:2379
member 4c5d727d37966a87 is healthy: got healthy result from http://0.0.0.0:2379
member f617da66fb9b90ad is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy

6.1 Set a Key

Set a value on one node:
[root@master etcd]# etcdctl set /test/key "test kubernetes"
test kubernetes

Get the value from another node:

[root@node1 etcd]# etcdctl get /test/key
test kubernetes

Keys live in a path hierarchy similar to ZooKeeper's: /path/key.
Once set, the value can be read from any member of the cluster.
If the directory or key does not exist, the set command creates it.
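etcdctl is speaking the v2 API here (the default for etcd 3.2), so the same set/get is also available over plain HTTP via the v2 keys endpoint. A sketch against the master's client port (this needs the running cluster from above, so it is not executed here):

```shell
# Write the key via the v2 keys API
curl -s -X PUT http://192.168.10.5:2379/v2/keys/test/key -d value="test kubernetes"
# Read it back; the response is JSON with node.value holding the data
curl -s http://192.168.10.5:2379/v2/keys/test/key
```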

6.2 Update a Key

[root@master etcd]# etcdctl update /test/key "test kubernetes cluster"
test kubernetes cluster

[root@node1 etcd]# etcdctl get /test/key
test kubernetes cluster

6.3 Delete a Key

[root@master etcd]# etcdctl rm /test/key
PrevNode.Value: test kubernetes cluster

Reading a key that does not exist returns an error:
[root@node1 etcd]# etcdctl get /test/key
Error: 100: Key not found (/test/key) [15]
