Kubernetes Binary Installation Steps
Deployment Environment

Role        IP            Components
k8s-master  134.0.84.110  kube-apiserver, kube-controller-manager, kube-scheduler, etcd
k8s-node1   134.0.84.103  kubelet, kube-proxy, docker, flannel, etcd
k8s-node2   134.0.84.104  kubelet, kube-proxy, docker, flannel, etcd

1. Deploying the etcd Cluster
Use cfssl to generate self-signed certificates. First download the cfssl tools:
wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
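As a quick sanity check that the tools are on PATH and executable:
# cfssl version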
1.1 Generating Certificates (generate on the master, then copy to the nodes)
# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
# cat ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}

# cat server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "134.0.84.110",
    "134.0.84.103",
    "134.0.84.104"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
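Optionally, inspect the issued certificate with cfssl-certinfo (installed above) to confirm the SANs and expiry:
# cfssl-certinfo -cert server.pem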
# ls *pem
ca-key.pem  ca.pem  server-key.pem  server.pem

1.2 Deploying etcd
Download the binary package:
wget https://github.com/etcd-io/etcd/releases/download/v3.2.12/etcd-v3.2.12-linux-amd64.tar.gz
The following steps are identical on all three planned etcd nodes; the only difference is that the server IPs in the etcd configuration file must be the current node's own.
Extract the binary package:
# mkdir /opt/etcd/{bin,cfg,ssl} -p
# tar zxvf etcd-v3.2.12-linux-amd64.tar.gz
# mv etcd-v3.2.12-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
Create the etcd configuration file:
# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"   # etcd01 on the master; etcd02/etcd03 on the other members
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://134.0.84.110:2380"
ETCD_LISTEN_CLIENT_URLS="https://134.0.84.110:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://134.0.84.110:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://134.0.84.110:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://134.0.84.110:2380,etcd02=https://134.0.84.103:2380,etcd03=https://134.0.84.104:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_NAME                           node name
ETCD_DATA_DIR                       data directory
ETCD_LISTEN_PEER_URLS               peer (cluster) listen address
ETCD_LISTEN_CLIENT_URLS             client listen address
ETCD_INITIAL_ADVERTISE_PEER_URLS    advertised peer address
ETCD_ADVERTISE_CLIENT_URLS          advertised client address
ETCD_INITIAL_CLUSTER                cluster member addresses
ETCD_INITIAL_CLUSTER_TOKEN          cluster token
ETCD_INITIAL_CLUSTER_STATE          join state: new for a new cluster, existing to join an existing one

Manage etcd with systemd:
# cat /lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Copy the certificates generated earlier to the paths referenced in the unit file:
# cp ca*.pem server*.pem /opt/etcd/ssl
Start etcd and enable it at boot:
# systemctl start etcd
# systemctl enable etcd
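Only ETCD_NAME and the four *_URLS lines differ on the other two members. Assuming /opt/etcd and the unit file were copied over unchanged, a sed one-liner (a sketch assuming GNU sed) adapts the config on node1:
# sed -i '/^ETCD_NAME/s/etcd01/etcd02/; /URLS/s/134.0.84.110/134.0.84.103/' /opt/etcd/cfg/etcd
Repeat on node2 with etcd03 and 134.0.84.104. The IP substitution is restricted to the *_URLS lines so the ETCD_INITIAL_CLUSTER member list stays intact.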
Once all three nodes are deployed, check the etcd cluster health:
# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://134.0.84.110:2379,https://134.0.84.103:2379,https://134.0.84.104:2379" cluster-health
member 18218cfabd4e0dea is healthy: got healthy result from https://134.0.84.110:2379
member 541c1c40994c939b is healthy: got healthy result from https://134.0.84.103:2379
member a342ea2798d20705 is healthy: got healthy result from https://134.0.84.104:2379
cluster is healthy
If you see the output above, the cluster was deployed successfully. If there is a problem, check the logs first: /var/log/messages or journalctl -u etcd.

2. Installing Docker on the Nodes
Run the following on both node machines:
# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
# yum install docker-ce -y
# curl -sSL https://get.daocloud.io/daotools/set_mirror.sh | sh -s http://bc437cce.m.daocloud.io
# systemctl start docker
# systemctl enable docker

3. Installing Flannel
Run on both nodes.
Write the predefined subnet range into etcd:
# cd /opt/etcd/ssl/
# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://134.0.84.110:2379,https://134.0.84.103:2379,https://134.0.84.104:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
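To confirm the key was written, read it back (etcdctl's v2 get, with the same TLS flags):
# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://134.0.84.110:2379,https://134.0.84.103:2379,https://134.0.84.104:2379" get /coreos.com/network/config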
Download the binary package:
# wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz
# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
# mkdir -p /opt/kubernetes/{bin,cfg,ssl}
# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin
Configure Flannel:
# vim /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://134.0.84.110:2379,https://134.0.84.103:2379,https://134.0.84.104:2379 --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem"
Manage Flannel with systemd:
# vim /lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
Configure Docker to start on the Flannel-assigned subnet:
# vim /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s
[Install]
WantedBy=multi-user.target
Restart flannel and docker:
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker
Verify that it took effect.
Network info on node01:
[root@k8s-node01 bin]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.81.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::3c54:39ff:fecd:1acf prefixlen 64 scopeid 0x20<link>
ether 3e:54:39:cd:1a:cf txqueuelen 0 (Ethernet)
RX packets 2 bytes 168 (168.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2 bytes 228 (228.0 B)
TX errors 0 dropped 28 overruns 0 carrier 0 collisions 0
[root@k8s-node01 bin]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.81.1 netmask 255.255.255.0 broadcast 172.17.81.255
ether 02:42:7d:7e:e4:b3 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Network info on node02:
[root@k8s-node02 bin]# ifconfig flannel.1
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 172.17.94.0 netmask 255.255.255.255 broadcast 0.0.0.0
inet6 fe80::ec47:35ff:fee8:c5ba prefixlen 64 scopeid 0x20<link>
ether ee:47:35:e8:c5:ba txqueuelen 0 (Ethernet)
RX packets 2 bytes 168 (168.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2 bytes 228 (228.0 B)
TX errors 0 dropped 29 overruns 0 carrier 0 collisions 0
[root@k8s-node02 bin]# ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.94.1 netmask 255.255.255.0 broadcast 172.17.94.255
ether 02:42:08:cb:0b:f2 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Make sure docker0 and flannel.1 are in the same subnet. To test connectivity between nodes, ping the other node's docker0 IP from the current node:
# ping 172.17.81.0
# ping 172.17.81.1
# ping 172.17.94.0
# ping 172.17.94.1
If the pings succeed, Flannel is deployed correctly. If not, check the logs: journalctl -u flanneld

4. Deploying Components on the Master Node
4.1 Generating Certificates
Create the CA certificate:
# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}

# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
Generate the apiserver certificate:
# cat server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "134.0.84.110",
    "134.0.84.103",
    "134.0.84.104",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
Generate the kube-proxy certificate:
# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
The following certificate files are generated:
# ls *pem
ca-key.pem ca.pem kube-proxy-key.pem kube-proxy.pem server-key.pem server.pem
# cp *.pem /opt/kubernetes/ssl/
4.2 Deploying kube-apiserver
Download the binary package from https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md. The kubernetes-server-linux-amd64.tar.gz package alone is enough; it contains all the required components.
# mkdir /opt/kubernetes/{bin,cfg,ssl} -p
# tar zxvf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin
# cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin
Create the token file (its purpose is explained later):
# cat /opt/kubernetes/cfg/token.csv
674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
Column 1: a random string (generate your own, as shown below)
Column 2: user name
Column 3: UID
Column 4: user group
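The random string can be generated like this (the standard approach from the Kubernetes TLS bootstrapping documentation):
# head -c 16 /dev/urandom | od -An -t x | tr -d ' '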
Create the apiserver configuration file:
# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://134.0.84.110:2379,https://134.0.84.103:2379,https://134.0.84.104:2379 \
--bind-address=134.0.84.110 \
--secure-port=6443 \
--advertise-address=134.0.84.110 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
Parameter notes:
--logtostderr                  log to standard error
--v                            log verbosity
--etcd-servers                 etcd cluster endpoints
--bind-address                 listen address
--secure-port                  HTTPS secure port
--advertise-address            advertised cluster address
--allow-privileged             allow privileged containers
--service-cluster-ip-range     Service virtual IP range
--enable-admission-plugins     admission control plugins
--authorization-mode           authorization modes; enables RBAC and Node self-management
--enable-bootstrap-token-auth  enables TLS bootstrapping (explained later)
--token-auth-file              token file
--service-node-port-range      port range for NodePort Services
Manage apiserver with systemd:
# cat /lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start it:
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver
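A quick check that the apiserver is listening (assuming ss is available; 8080 is the default insecure localhost port in 1.12):
# ss -tlnp | grep -E '6443|8080'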
4.3 Deploying kube-scheduler
Create the scheduler configuration file:
# cat /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
Parameter notes:
--master        connect to the local apiserver
--leader-elect  automatic leader election when multiple instances run (HA)
Manage the scheduler with systemd:
# cat /lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start it:
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
4.4 Deploying kube-controller-manager
Create the controller-manager configuration file:
# cat /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
Manage controller-manager with systemd:
# cat /lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start it:
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

All components have now started successfully. Use kubectl to check the cluster component status:
# /opt/kubernetes/bin/kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
Output like the above means all components are healthy.

5. Deploying Components on the Nodes
Once TLS authentication is enabled on the master apiserver, a node's kubelet must present a valid CA-signed certificate to communicate with the apiserver. With many nodes, signing certificates by hand becomes tedious, hence the TLS bootstrapping mechanism: the kubelet automatically requests a certificate from the apiserver as a low-privilege user, and the apiserver dynamically signs the kubelet's certificate.
Add the kubernetes binaries to the system PATH:
# vim /etc/profile
PATH=$PATH:/opt/kubernetes/bin/
# source /etc/profile
5.1 On the master: bind the kubelet-bootstrap user to the system cluster role
kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
5.2 Creating the kubeconfig Generation Script
# cd /opt/kubernetes/ssl/
# vim kubeconfig.sh
# Create the kubelet bootstrapping kubeconfig
BOOTSTRAP_TOKEN=674c457d4dcf2eefe4920d7dbb6b0ddc
KUBE_APISERVER="https://134.0.84.110:6443"

# Set cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=bootstrap.kubeconfig
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=bootstrap.kubeconfig
# Set context parameters
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=bootstrap.kubeconfig
# Set the default context
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig
#----------------------
# Create the kube-proxy kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=./ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
--client-certificate=./kube-proxy.pem \
--client-key=./kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
Run the script:
[root@k8s-master kubernetesssl]# sh kubeconfig.sh
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
Copy bootstrap.kubeconfig and kube-proxy.kubeconfig to /opt/kubernetes/cfg on the nodes:
# scp bootstrap.kubeconfig kube-proxy.kubeconfig 134.0.84.103:/opt/kubernetes/cfg/
# scp bootstrap.kubeconfig kube-proxy.kubeconfig 134.0.84.104:/opt/kubernetes/cfg/

5.3 Deploying kubelet
On the master, copy kubelet and kube-proxy from the binary package downloaded earlier to /opt/kubernetes/bin on the node machines:
# cd /opt/kubernetes/kubernetes/server/bin
# scp kube-proxy kubelet 134.0.84.103:/opt/kubernetes/bin/
# scp kube-proxy kubelet 134.0.84.104:/opt/kubernetes/bin/
Create the kubelet configuration file:
root@ubuntu2:/opt/kubernetes/cfg# cat /opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--address=134.0.84.103 \
--hostname-override=134.0.84.103 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--experimental-bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--cert-dir=/opt/kubernetes/ssl \
--allow-privileged=true \
--cluster-dns=8.8.8.8 \
--cluster-domain=cluster.local \
--fail-swap-on=false \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
Parameter notes:
--hostname-override           hostname shown in the cluster
--kubeconfig                  kubeconfig file location (generated automatically)
--bootstrap-kubeconfig        the bootstrap.kubeconfig file generated earlier
--cert-dir                    where issued certificates are stored
--pod-infra-container-image   image for the Pod infrastructure (network) container
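On the second node, only the IP in --address and --hostname-override changes; assuming the file above was copied over as-is, a one-line sed adapts it:
# sed -i 's/134.0.84.103/134.0.84.104/g' /opt/kubernetes/cfg/kubelet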
Manage kubelet with systemd:
# cat /lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet $KUBELET_OPTS
Restart=on-failure
KillMode=process
[Install]
WantedBy=multi-user.target
Start it:
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
Approve the node's cluster join request on the master: after kubelet starts, the node has not yet joined the cluster and must be approved manually. On the master, list the nodes requesting certificate signing:
# kubectl get csr
NAME AGE REQUESTOR CONDITION
node-csr-5ekUr22JHk_mdPJiiuf8EOiTwDto1yuqxakXi4UYDyw 68s kubelet-bootstrap Pending
# kubectl certificate approve node-csr-5ekUr22JHk_mdPJiiuf8EOiTwDto1yuqxakXi4UYDyw
# kubectl get node
NAME STATUS ROLES AGE VERSION
134.0.84.103 Ready <none> 15s v1.12.1
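Approving each CSR by name gets tedious with many nodes; a convenience one-liner (only use it when every pending request is expected) approves them all:
# kubectl get csr -o name | xargs kubectl certificate approve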
Do the same for the second node. Final result:
[root@k8s-master bin]# kubectl get node
NAME STATUS ROLES AGE VERSION
134.0.84.103 Ready <none> 7m37s v1.12.1
134.0.84.104   Ready    <none>   14s     v1.12.1

5.4 Deploying kube-proxy
Create the kube-proxy configuration file:
# cat /opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=134.0.84.103 \
--cluster-cidr=10.0.0.0/24 \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"

Manage kube-proxy with systemd:
# cat /lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target
[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy $KUBE_PROXY_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
Start it:
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy

6. Checking Cluster Status
[root@k8s-master ~]# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
134.0.84.103   Ready    <none>   14m     v1.12.1
134.0.84.104   Ready    <none>   7m26s   v1.12.1
[root@k8s-master ~]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-1               Healthy   {"health": "true"}
etcd-2               Healthy   {"health": "true"}
etcd-0               Healthy   {"health": "true"}

7. Running a Test Example
Create an Nginx web deployment to verify that the cluster works:
[root@k8s-master ~]# kubectl run nginx --image=nginx --replicas=3
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
[root@k8s-master ~]# kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
service/nginx exposed
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-dbddb74b8-7bm6j 0/1 ContainerCreating 0 17s
nginx-dbddb74b8-qw7sv 0/1 ContainerCreating 0 17s
nginx-dbddb74b8-w77q8 0/1 ContainerCreating 0 17s
[root@k8s-master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-dbddb74b8-7bm6j 1/1 Running 0 51s
nginx-dbddb74b8-qw7sv 1/1 Running 0 51s
nginx-dbddb74b8-w77q8 1/1 Running 0 51s
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes   ClusterIP   10.0.0.1     <none>        443/TCP        102m
nginx        NodePort    10.0.0.215   <none>        88:48603/TCP   68s
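You can also verify the NodePort from the command line first (port 48603 comes from the PORT(S) column above):
# curl -I http://134.0.84.104:48603/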
To access the Nginx deployed in the cluster, open a browser at http://134.0.84.104:48603/

8. Unable to View Pod Logs
[root@k8s-master ~]# kubectl logs nginx-dbddb74b8-7bm6j
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-7bm6j)
Solution:
# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
After the fix:
[root@k8s-master ~]# kubectl logs nginx-dbddb74b8-7bm6j
172.17.94.0 - - [19/Apr/2019:08:16:19 +0000] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.92 Safari/537.36" "-"
2019/04/19 08:16:19 [error] 6#6: *2 open() "/usr/share/nginx/html/favicon.ico" failed (2: No such file or directory), client: 172.17.94.0, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "134.0.84.104:48603", referrer: "http://134.0.84.104:48603/"
172.17.94.0 - - [19/Apr/2019:08:16:19 +0000] "GET /favicon.ico HTTP/1.1" 404 556 "http://134.0.84.104:48603/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.92 Safari/537.36" "-"

9. Deploying kubernetes-dashboard
Download the dashboard YAML file:
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/aio/deploy/recommended/kubernetes-dashboard.yaml
Edit kubernetes-dashboard.yaml:
1. Change the Service to a NodePort; after the change:
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30001
  selector:
    k8s-app: kubernetes-dashboard
2. Change the image address to image: registry.cn-beijing.aliyuncs.com/mine-k8s/kubernetes-dashboard-amd64:v1.10.1 (the default image is hosted abroad and unreachable).

Install kubernetes-dashboard:
# kubectl create -f kubernetes-dashboard.yaml

The default kubernetes-dashboard service account has very low privileges, so logging in produces warnings like the following; a higher-privilege user is needed:
configmaps is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list configmaps in the namespace "default"
persistentvolumeclaims is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list persistentvolumeclaims in the namespace "default"
secrets is forbidden: User "system:serviceaccount:kube-system:kubernetes-dashboard" cannot list secrets in the namespace "default"
……
Fix:
1. Add a ServiceAccount and bind it so it can log in:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aks-dashboard-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: aks-dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: aks-dashboard-admin
  namespace: kube-system
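Assuming the manifest above is saved as dashboard-admin.yaml (a filename chosen here for illustration), apply it with:
# kubectl apply -f dashboard-admin.yaml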
2. Grant full admin permissions:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-head
  labels:
    k8s-app: kubernetes-dashboard-head
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-head
  namespace: kube-system
View the token:
# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep dashboard-admin | awk '{print $1}')
Name:
aks-dashboard-admin-token-wdkj6
token:
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJha3MtZGFzaGJvYXJkLWFkbWluLXRva2VuLXdka2o2Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImFrcy1kYXNoYm9hcmQtYWRtaW4iLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJlODVlMWM3Mi02NGQ1LTExZTktODNlYy0wMDBjMjkwMWQwOTUiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06YWtzLWRhc2hib2FyZC1hZG1pbiJ9.yBDoEskovx1nnw9wbsI8_W4AxRtEW_jcnSqI2R2R1asOWFZxbv2MUrB_z_fhsfZ4Xf44p8ZVE8Y6ELRYRT6U_OF00owxB_SntGiqetpzBWVWknVmruMazeJUyAjgFaFvkI_rL1yMbogvhhJlNNzLqU-nzQ40iXdzL8-fig_7EqtnA7XJa5s9Suo_vZGM-ypdUVMc120m8qINb1-y2NUu_ipLa-sKaJMLLnxx7mNeQZqI44uFicynSqfOwkwSlljCLyl8O9rt-OiIRLh6rIAKkzEosrlKVUbds1tC2KZ_g7DiUM9O5prKtKlIVkpNWivNS-VPb_6E9mgvZbZRQavAiw
Open the login page.
Note: newer versions of Chrome cannot open the kubernetes-dashboard web UI; use Firefox instead.
Go to https://134.0.84.103:30001 and enter the token obtained in the previous step to log in.
Other useful commands:
View pod logs:
# kubectl -n kube-system logs kubernetes-dashboard-7dcf6b77f7-mbnxc
Delete the dashboard resources:
[root@k8s-master dashboard]# kubectl delete -f dashboard-controller.yaml