A few words up front

After I published the full Kubernetes deployment walkthrough a while back, a friend told me it was far too long and asked whether I could split it into chapters. Work and my own study plans kept pushing the revision and the new chapters back, but to make things easier for readers, I have now split the content into four parts.

kubernetes v1.11 binary deployment series: table of contents



Preface

With the groundwork laid in the previous chapter on Kubernetes technical fundamentals, and the etcd cluster deployed, we can now start deploying the Kubernetes cluster services themselves.




Deploying the master

Copy the binaries the master needs into the /usr/bin directory:

#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
softwaredir=$basedir/../install_kubernetes_software
binDir=/usr/bin    # destination directory, defined here so the snippet runs standalone

# Copy the master binaries into place
function copy_bin(){
cp -v $softwaredir/kube-apiserver $binDir
cp -v $softwaredir/kube-controller-manager $binDir
cp -v $softwaredir/kube-scheduler $binDir
cp -v $softwaredir/kubectl $binDir
}

copy_bin

An overview of API Server access control

API Server access control happens in three stages:
Authentication, Authorization, and Admission Control.

Authentication:
When a client sends an API request to a non-read-only Kubernetes port, Kubernetes validates the requester with one of three methods: certificate authentication, token authentication, or basic authentication.

① Certificate authentication

Set the apiserver startup flag --client-ca-file=SOMEFILE. The referenced file contains the CA certificate used to verify client certificates; if a client certificate passes verification, the subject of that certificate is used as the username for the request.
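
For instance, a quick way to see which username a client certificate will map to is to print its subject. A minimal sketch using the admin certificate generated in the previous chapter:

# Print the subject of the admin client certificate; the CN field becomes the request username
openssl x509 -in /etc/kubernetes/kubernetesTLS/admin.pem -noout -subject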

② Token authentication (the method used in this deployment)

Set the apiserver startup flag --token-auth-file=SOMEFILE. The token file contains three columns: token, username, userid. When authenticating with a token, each HTTP request to the apiserver carries an extra header field, Authorization, with the value: Bearer SOMETOKEN.
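
As a hedged illustration, a token-authenticated request against the secure port looks like this (the IP and token are placeholders; substitute a token from your own token.csv):

# -k skips server certificate verification for brevity; use --cacert ca.pem in practice
curl -k -H "Authorization: Bearer 4b395732894828d5a34737d83c334330" \
    https://172.16.5.81:6443/api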

③ Basic authentication

Set the apiserver startup flag --basic-auth-file=SOMEFILE. If a password in this file changes, it only takes effect after the apiserver is restarted. The file contains three columns: password, username, userid. When authenticating this way, each HTTP request to the apiserver carries an Authorization header with the value: Basic BASE64ENCODED(USER:PASSWORD).
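
Again as a sketch with a hypothetical user/password pair, the header value is just the base64 of USER:PASSWORD:

# Hypothetical credentials, for illustration only
echo -n "admin:password123" | base64    # -> YWRtaW46cGFzc3dvcmQxMjM=
curl -k -H "Authorization: Basic YWRtaW46cGFzc3dvcmQxMjM=" https://172.16.5.81:6443/api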


Create the TLS Bootstrapping Token

Token auth file
The token can be any string containing 128 bits of entropy; generate it with a cryptographically secure random number generator.

#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
# Paths used below, defined here so the snippet runs standalone
kubernetesDir=/etc/kubernetes
configConfDir=$basedir/configDir/conf

## set param: 16 random bytes, hex-encoded (128 bits of entropy)
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

## functions and implementations
# Save the token so later scripts can reuse the same value
function save_BOOTSTRAP_TOKEN(){
cat > $configConfDir/BOOTSTRAP_TOKEN <<EOF
$BOOTSTRAP_TOKEN
EOF
}

save_BOOTSTRAP_TOKEN

# Write the token file the apiserver reads via --token-auth-file
function create_token(){
cat > $kubernetesDir/token.csv <<EOF
$BOOTSTRAP_TOKEN,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
}

create_token

Afterwards, distribute token.csv to the /etc/kubernetes/ directory of every machine (master and nodes), as sketched below.
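
A minimal distribution sketch; the host list here is an assumption for illustration, substitute your own masters and nodes:

for host in 172.16.5.81 172.16.5.86 172.16.5.87; do
    scp /etc/kubernetes/token.csv root@$host:/etc/kubernetes/
done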


Create the cluster parameters for the admin user

When we created the TLS certificates with openssl earlier, the user and group were already signed into the certificates; the next step is to define the admin user's parameters inside the cluster.

#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
kubernetesTLSDir=/etc/kubernetes/kubernetesTLS    # TLS directory, defined here so the snippet runs standalone

## set param: discover the local IP by opening a UDP socket toward 8.8.8.8
MASTER_IP=`python -c "import socket;print([(s.connect(('8.8.8.8', 53)), s.getsockname()[0], s.close()) for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1])"`
KUBE_APISERVER="https://$MASTER_IP:6443"

# Set the cluster parameters
function config_cluster_param(){
kubectl config set-cluster kubernetes \
--certificate-authority=$kubernetesTLSDir/ca.pem \
--embed-certs=true \
--server=$KUBE_APISERVER
}

config_cluster_param

# Set the admin user's credentials
function config_admin_credentials(){
kubectl config set-credentials admin \
--client-certificate=$kubernetesTLSDir/admin.pem \
--client-key=$kubernetesTLSDir/admin.key \
--embed-certs=true
}

config_admin_credentials

# Set the admin user's context
function config_admin_context(){
kubectl config set-context kubernetes --cluster=kubernetes --user=admin
}

config_admin_context

# Use the kubernetes context as the cluster default
function config_default_context(){
kubectl config use-context kubernetes
}

config_default_context

Note that because we use token authentication, Kubernetes will later need a bootstrap.kubeconfig file, so the admin-related TLS certificate material has to be written into that bootstrap.kubeconfig file.

How do we get the admin TLS material into bootstrap.kubeconfig?

This is where the --embed-certs flag comes in: when it is set to true, the certificate-authority certificate data is written directly into the generated bootstrap.kubeconfig file.
With the flag specified, the remaining files are generated automatically by kube-apiserver later on.
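
To confirm the embedding worked, you can dump the resulting kubeconfig; with --embed-certs=true the entries carry inline base64 data rather than file paths:

# --raw prevents kubectl from redacting the embedded certificate data
kubectl config view --raw | grep -E 'certificate-authority-data|client-certificate-data'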


Installing kube-apiserver

1. Write kube-apiserver.service (/usr/lib/systemd/system)


Write the kube-apiserver.service file into /usr/lib/systemd/system/; it is used later to start the binary:

[Unit]
Description=Kube-apiserver Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target
[Service]
Type=notify
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=always
LimitNOFILE=65536

[Install]
WantedBy=default.target

kube-apiserver.service parameter notes


EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver

Explanation: these lines define the two configuration files the apiserver loads (the leading "-" tells systemd not to fail if a file is missing).


ExecStart=/usr/bin/kube-apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS

Explanation: this defines the path of the binary to execute, /usr/bin/kube-apiserver, and passes in a series of startup flag variables, all of which are read from the configuration files above.


2. Write the config file (/etc/kubernetes)

The config file provides the common Kubernetes parameters read by the apiserver, controller-manager, and scheduler services.
Write the config file into the /etc/kubernetes directory. The directory itself is customizable; if you change it, just remember to update the EnvironmentFile paths in the service files to match.

[root@server81 kubernetes]# vim config 

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://172.16.5.81:8080"

3. Write the apiserver configuration file (/etc/kubernetes)


The apiserver configuration file provides parameters read only by the apiserver service.
Write the apiserver configuration file into the /etc/kubernetes directory.

###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##
#
## The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=172.16.5.81 --bind-address=172.16.5.81 --insecure-bind-address=172.16.5.81"
#
## The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
#
## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"
#
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://172.16.5.81:2379,https://172.16.5.86:2379,https://172.16.5.87:2379"
#
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.6.0/24"
#
## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota,NodeRestriction"

## Add your own!
KUBE_API_ARGS="--authorization-mode=Node,RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/kubernetesTLS/apiserver.pem --tls-private-key-file=/etc/kubernetes/kubernetesTLS/apiserver.key --client-ca-file=/etc/kubernetes/kubernetesTLS/ca.pem --service-account-key-file=/etc/kubernetes/kubernetesTLS/ca.key --storage-backend=etcd3 --etcd-cafile=/etc/etcd/etcdSSL/ca.pem --etcd-certfile=/etc/etcd/etcdSSL/etcd.pem --etcd-keyfile=/etc/etcd/etcdSSL/etcd-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h"

The relevant configuration parameters are explained below:

Binding the MASTER IP address and local listen addresses

## The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=$MASTER_IP --bind-address=$MASTER_IP --insecure-bind-address=$MASTER_IP"

Explanation: MASTER_IP is the IP address of the machine that runs the master services, for example:
--advertise-address=172.16.5.81 --bind-address=172.16.5.81 --insecure-bind-address=172.16.5.81

The etcd cluster endpoint addresses

## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=$ETCD_ENDPOINT"

Explanation: ETCD_ENDPOINT is how the apiserver reaches the etcd cluster, for example:
--etcd-servers=https://172.16.5.81:2379,https://172.16.5.86:2379,https://172.16.5.87:2379
With a single etcd node, one address is enough, for example:
--etcd-servers=https://172.16.5.81:2379
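
Before pointing the apiserver at the cluster, it is worth checking that every endpoint is healthy over TLS. A hedged pre-flight sketch, reusing the etcd certificate paths configured in --etcd-cafile/--etcd-certfile/--etcd-keyfile above:

ETCDCTL_API=3 etcdctl \
    --endpoints=https://172.16.5.81:2379,https://172.16.5.86:2379,https://172.16.5.87:2379 \
    --cacert=/etc/etcd/etcdSSL/ca.pem \
    --cert=/etc/etcd/etcdSSL/etcd.pem \
    --key=/etc/etcd/etcdSSL/etcd-key.pem \
    endpoint health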

The virtual network segment for Kubernetes services

Kubernetes mainly uses two IP ranges, one for pods and one for services; what is defined here is the virtual IP range for services.

## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.6.0/24"
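
Once the apiserver is running, the built-in kubernetes service takes the first usable address of this range, which you can verify (the output below is illustrative):

kubectl get svc kubernetes
# NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
# kubernetes   ClusterIP   10.0.6.1     <none>        443/TCP   1h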

Configure the Kubernetes admission control plugins

## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota,NodeRestriction"

Configure additional custom parameters

## Add your own!
KUBE_API_ARGS="--authorization-mode=Node,RBAC \
--runtime-config=rbac.authorization.k8s.io/v1beta1 \
--kubelet-https=true \
--token-auth-file=$kubernetesDir/token.csv \
--service-node-port-range=30000-32767 \
--tls-cert-file=$kubernetesTLSDir/apiserver.pem \
--tls-private-key-file=$kubernetesTLSDir/apiserver.key \
--client-ca-file=$kubernetesTLSDir/ca.pem \
--service-account-key-file=$kubernetesTLSDir/ca.key \
--storage-backend=etcd3 \
--etcd-cafile=$etcdCaPem \
--etcd-certfile=$etcdPem \
--etcd-keyfile=$etcdKeyPem \
--enable-swagger-ui=true \
--apiserver-count=3 \
--audit-log-maxage=30 \
--audit-log-maxbackup=3 \
--audit-log-maxsize=100 \
--audit-log-path=/var/lib/audit.log \
--event-ttl=1h"

--authorization-mode=Node,RBAC
  Enable the Node and RBAC authorization plugins.

--runtime-config=rbac.authorization.k8s.io/v1beta1
  Enable the RBAC v1beta1 runtime API group.

--kubelet-https=true
  Use HTTPS for apiserver-to-kubelet connections.

--token-auth-file=$kubernetesDir/token.csv
  The token file generated earlier.

--service-node-port-range=30000-32767
  Restrict NodePort services to the port range 30000-32767.

--tls-cert-file=$kubernetesTLSDir/apiserver.pem
  The apiserver's TLS certificate (public key).

--tls-private-key-file=$kubernetesTLSDir/apiserver.key
  The apiserver's TLS private key.

--client-ca-file=$kubernetesTLSDir/ca.pem
  The CA root certificate (public key) used to verify client TLS certificates.

--service-account-key-file=$kubernetesTLSDir/ca.key
  The key used to verify ServiceAccount tokens (it pairs with the controller-manager's --service-account-private-key-file).

--storage-backend=etcd3
  Use the etcd version 3 storage backend.

--etcd-cafile=$etcdCaPem
  The CA root certificate (public key) for etcd access.

--etcd-certfile=$etcdPem
  The TLS client certificate (public key) for etcd access.

--etcd-keyfile=$etcdKeyPem
  The TLS client private key for etcd access.

--enable-swagger-ui=true
  Enable swagger-ui; Kubernetes uses it to provide online API browsing.

--apiserver-count=3
  The number of API Servers running in the cluster; leaving this set with a single instance is also fine.

--audit-log-maxage=30 / --audit-log-maxbackup=3 / --audit-log-maxsize=100 / --audit-log-path=/var/lib/audit.log
  Audit log retention in days, number of rotated files, maximum size per file in MB, and the log path.

--event-ttl=1h
  The API Server retains events for 1 hour.

That covers the apiserver service and its configuration files. If anything is still unclear, leave me a comment.

4. Start kube-apiserver

systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver

Execution looks like this:

[root@server81 install_kubernetes]# systemctl daemon-reload
[root@server81 install_kubernetes]# systemctl enable kube-apiserver
[root@server81 install_kubernetes]# systemctl start kube-apiserver
[root@server81 install_kubernetes]# systemctl status kube-apiserver
● kube-apiserver.service - Kube-apiserver Service
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2018-08-19 22:57:48 HKT; 11h ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 1688 (kube-apiserver)
CGroup: /system.slice/kube-apiserver.service
└─1688 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=https://172.16.5.81:2379,https://172.16.5.86:2379,...

Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.415631 1688 storage_rbac.go:246] created role.rbac.authorizat...public
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.448673 1688 controller.go:597] quota admission added evaluato...dings}
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.454356 1688 storage_rbac.go:276] created rolebinding.rbac.aut...system
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.496380 1688 storage_rbac.go:276] created rolebinding.rbac.aut...system
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.534031 1688 storage_rbac.go:276] created rolebinding.rbac.aut...system
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.579370 1688 storage_rbac.go:276] created rolebinding.rbac.aut...system
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.612662 1688 storage_rbac.go:276] created rolebinding.rbac.aut...system
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.652351 1688 storage_rbac.go:276] created rolebinding.rbac.aut...public
Aug 20 01:00:00 server81 kube-apiserver[1688]: I0820 01:00:00.330487 1688 trace.go:76] Trace[864267216]: "GuaranteedUpdate ...75ms):
Aug 20 01:00:00 server81 kube-apiserver[1688]: Trace[864267216]: [683.232535ms] [674.763984ms] Transaction prepared
Hint: Some lines were ellipsized, use -l to show in full.
[root@server81 install_kubernetes]#
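
Beyond systemctl status, a quick health probe is useful. A sketch assuming the addresses configured earlier:

# The insecure local port answers without authentication; expect "ok"
curl http://172.16.5.81:8080/healthz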

Installing kube-controller-manager

1. Write kube-controller-manager.service (/usr/lib/systemd/system)

Write kube-controller-manager.service into the /usr/lib/systemd/system directory to provide the systemd service file for the binary.

[root@server81 install_k8s_master]# cat /usr/lib/systemd/system/kube-controller-manager.service 
[Unit]
Description=Kube-controller-manager Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
Type=simple
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=always
LimitNOFILE=65536

[Install]
WantedBy=default.target
[root@server81 install_k8s_master]#

kube-controller-manager.service parameter notes

EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager

Explanation: these define the environment variable configuration files that kube-controller-manager.service loads.


ExecStart=/usr/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS

Explanation: this defines the path of the binary the service executes (/usr/bin/kube-controller-manager) and the flag variables passed to the Go service at startup, all read from the configuration files.


2. The controller-manager configuration file (/etc/kubernetes)

Write the controller-manager file into the /etc/kubernetes directory.

[root@server81 install_k8s_master]# ls /etc/kubernetes/
apiserver config controller-manager kubernetesTLS/ token.csv
[root@server81 install_k8s_master]# cat /etc/kubernetes/controller-manager
###
# The following values are used to configure the kubernetes controller-manager

# defaults from config and apiserver should be adequate

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--master=http://172.16.5.81:8080 --address=127.0.0.1 --service-cluster-ip-range=10.0.6.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/kubernetesTLS/ca.pem --cluster-signing-key-file=/etc/kubernetes/kubernetesTLS/ca.key --service-account-private-key-file=/etc/kubernetes/kubernetesTLS/ca.key --root-ca-file=/etc/kubernetes/kubernetesTLS/ca.pem --leader-elect=true --cluster-cidr=10.1.0.0/16"
[root@server81 install_k8s_master]#

controller-manager parameter notes

--master=http://172.16.5.81:8080
  The address for reaching the master (apiserver).

--address=127.0.0.1
  The local listen address. It must be 127.0.0.1, because the current kube-apiserver expects the scheduler and controller-manager to run on the same machine.

--service-cluster-ip-range=10.0.6.0/24
  The Kubernetes service IP range.

--cluster-name=kubernetes
  The name of the cluster, kubernetes.

--cluster-signing-cert-file=$kubernetesTLSDir/ca.pem
  The CA root certificate (public key) the cluster signs with; together with the key below, it signs the certificates and keys created for TLS BootStrap.

--cluster-signing-key-file=$kubernetesTLSDir/ca.key
  The CA root private key the cluster signs with; together with the certificate above, it signs the certificates and keys created for TLS BootStrap.

--service-account-private-key-file=$kubernetesTLSDir/ca.key
  The CA root private key used to sign ServiceAccount tokens.

--root-ca-file=$kubernetesTLSDir/ca.pem
  The CA root certificate used to verify the kube-apiserver certificate; when this flag is set, the CA certificate file is also placed into each Pod's ServiceAccount.

--leader-elect=true
  Enable leader election. With only one instance running there is nothing to elect; it matters once several API Servers run.

--cluster-cidr=$podClusterIP
  The pod IP range of the cluster.
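
With --leader-elect=true, the active instance records its identity in an annotation on an Endpoints object in kube-system; a hedged way to inspect the current leader once the service is up:

kubectl -n kube-system get endpoints kube-controller-manager -o yaml
# look for the control-plane.alpha.kubernetes.io/leader annotation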


3. Start the controller-manager service


systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager

The run result looks like this:

[root@server81 conf]# systemctl daemon-reload
[root@server81 conf]# systemctl enable kube-controller-manager
[root@server81 conf]# systemctl start kube-controller-manager
[root@server81 conf]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kube-controller-manager Service
Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-08-20 10:22:37 HKT; 33min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 2246 (kube-controller)
CGroup: /system.slice/kube-controller-manager.service
└─2246 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://172.16.5.81:8080 --master=http://172.16....

Aug 20 10:22:37 server81 kube-controller-manager[2246]: I0820 10:22:37.577898 2246 controller_utils.go:1032] Caches are sync...oller
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.548284 2246 controller_utils.go:1025] Waiting for cac...oller
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.568248 2246 controller_utils.go:1025] Waiting for cac...oller
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.595675 2246 controller_utils.go:1032] Caches are sync...oller
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.595716 2246 garbagecollector.go:142] Garbage collecto...rbage
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.650186 2246 controller_utils.go:1032] Caches are sync...oller
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.668935 2246 controller_utils.go:1032] Caches are sync...oller
Aug 20 10:29:56 server81 kube-controller-manager[2246]: W0820 10:29:56.356490 2246 reflector.go:341] k8s.io/kubernetes/vendo... old.
Aug 20 10:39:47 server81 kube-controller-manager[2246]: W0820 10:39:47.125097 2246 reflector.go:341] k8s.io/kubernetes/vendo... old.
Aug 20 10:51:45 server81 kube-controller-manager[2246]: W0820 10:51:45.878609 2246 reflector.go:341] k8s.io/kubernetes/vendo... old.
Hint: Some lines were ellipsized, use -l to show in full.
[root@server81 conf]#

Installing kube-scheduler

1. Write kube-scheduler.service (/usr/lib/systemd/system)

Write kube-scheduler.service into the /usr/lib/systemd/system directory.

[root@server81 install_k8s_master]# cat /usr/lib/systemd/system/kube-scheduler.service 
[Unit]
Description=Kube-scheduler Service
After=network.target

[Service]
Type=simple
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS

Restart=always
LimitNOFILE=65536

[Install]
WantedBy=default.target
[root@server81 install_k8s_master]#

kube-scheduler.service parameter notes

EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler

Explanation: these define the two configuration files the service reads at startup.


ExecStart=/usr/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS

Explanation: this defines the path of the binary to execute (/usr/bin/kube-scheduler) and its startup flags.


2. The scheduler configuration file (/etc/kubernetes)

Write the scheduler file into the /etc/kubernetes directory.

[root@server81 install_k8s_master]# ls /etc/kubernetes/
apiserver config controller-manager kubernetesTLS/ scheduler token.csv
[root@server81 install_k8s_master]# cat /etc/kubernetes/scheduler
###
# The following values are used to configure the kubernetes scheduler

# defaults from config and scheduler should be adequate

# Add your own!
KUBE_SCHEDULER_ARGS="--master=http://172.16.5.81:8080 --leader-elect=true --address=127.0.0.1"
[root@server81 install_k8s_master]#

scheduler parameter notes

--master=http://172.16.5.81:8080
  The address for reaching the master's apiserver.

--leader-elect=true
  Enable leader election. With only one instance running there is nothing to elect; it matters once several API Servers run.

--address=127.0.0.1
  The local listen address. It must be 127.0.0.1, because the current kube-apiserver expects the scheduler and controller-manager to run on the same machine.
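
As the startup log further below confirms, the scheduler also serves an insecure health endpoint on its default port 10251, so a quick probe looks like this:

curl http://127.0.0.1:10251/healthz    # expect "ok"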


3. Start the scheduler service

systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
systemctl status kube-scheduler

The run result looks like this:

[root@server81 install_k8s_master]# systemctl daemon-reload
[root@server81 install_k8s_master]# systemctl enable kube-scheduler
[root@server81 install_k8s_master]# systemctl restart kube-scheduler
[root@server81 install_k8s_master]# systemctl status kube-scheduler
● kube-scheduler.service - Kube-scheduler Service
Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-08-20 11:12:28 HKT; 686ms ago
Main PID: 2459 (kube-scheduler)
CGroup: /system.slice/kube-scheduler.service
└─2459 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://172.16.5.81:8080 --master=http://172.16.5.81:8080...

Aug 20 11:12:28 server81 systemd[1]: Started Kube-scheduler Service.
Aug 20 11:12:28 server81 systemd[1]: Starting Kube-scheduler Service...
Aug 20 11:12:28 server81 kube-scheduler[2459]: W0820 11:12:28.724918 2459 options.go:148] WARNING: all flags other than --c... ASAP.
Aug 20 11:12:28 server81 kube-scheduler[2459]: I0820 11:12:28.727302 2459 server.go:126] Version: v1.11.0
Aug 20 11:12:28 server81 kube-scheduler[2459]: W0820 11:12:28.728311 2459 authorization.go:47] Authorization is disabled
Aug 20 11:12:28 server81 kube-scheduler[2459]: W0820 11:12:28.728332 2459 authentication.go:55] Authentication is disabled
Aug 20 11:12:28 server81 kube-scheduler[2459]: I0820 11:12:28.728341 2459 insecure_serving.go:47] Serving healthz insecurel...:10251
Hint: Some lines were ellipsized, use -l to show in full.
[root@server81 install_k8s_master]#

At this point, every service the master needs has been installed. Let's check the state of the components:

[root@server81 install_k8s_master]# ls /etc/kubernetes/
apiserver config controller-manager kubernetesTLS scheduler token.csv
[root@server81 install_k8s_master]#
[root@server81 install_k8s_master]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
[root@server81 install_k8s_master]#

As you can see, all the components, etcd included, are running normally.

Next we create the kube-proxy kubeconfig and kubelet bootstrapping kubeconfig files that provide TLS authentication for the node side.
These two files are what kube-proxy and kubelet use to access the apiserver.


Create the kube-proxy kubeconfig file and its cluster parameters

The kube-proxy kubeconfig file supplies the cluster parameters that authorize the kube-proxy user's API requests to the apiserver.
After running the commands below, it is generated into the /etc/kubernetes directory.

#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
serviceDir=/usr/lib/systemd/system
binDir=/usr/bin

kubernetesDir=/etc/kubernetes
kubernetesTLSDir=/etc/kubernetes/kubernetesTLS

## set param
MASTER_IP=`python -c "import socket;print([(s.connect(('8.8.8.8', 53)), s.getsockname()[0], s.close()) for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1])"`

## The token recorded earlier; it must match the one written to token.csv
BOOTSTRAP_TOKEN=""

## proxy
## Set the proxy cluster parameters:
##   --embed-certs=true  writes the certificate data into the kubeconfig file
##   --server            the master address to access
##   --kubeconfig        the path of the generated kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=$kubernetesTLSDir/ca.pem \
--embed-certs=true \
--server=https://$MASTER_IP:6443 \
--kubeconfig=$kubernetesDir/kube-proxy.kubeconfig

## Set the kube-proxy user's credentials
kubectl config set-credentials kube-proxy \
--client-certificate=$kubernetesTLSDir/proxy.pem \
--client-key=$kubernetesTLSDir/proxy.key \
--embed-certs=true \
--kubeconfig=$kubernetesDir/kube-proxy.kubeconfig

## Set the context for the kube-proxy user in the kubernetes cluster
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=$kubernetesDir/kube-proxy.kubeconfig

## Make "default" the current context of kube-proxy's kubeconfig
kubectl config use-context default --kubeconfig=$kubernetesDir/kube-proxy.kubeconfig
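
To sanity-check the result, dump the generated file; the server address and the embedded certificate data should both be present:

kubectl config view --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig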

Create the kubelet bootstrapping kubeconfig file and its cluster parameters

Create the kubelet's bootstrap kubeconfig file, which lets the apiserver automatically issue the kubelet's kubeconfig and its public/private key pair.
With this file in place, the kubelet creates three files automatically when it starts on a node; more on that in the node deployment chapter.

## Set the kubelet cluster parameters
kubectl config set-cluster kubernetes \
--certificate-authority=$kubernetesTLSDir/ca.pem \
--embed-certs=true \
--server=https://$MASTER_IP:6443 \
--kubeconfig=$kubernetesDir/bootstrap.kubeconfig

## Set the kubelet-bootstrap user's credentials (a token instead of client certs)
kubectl config set-credentials kubelet-bootstrap \
--token=$BOOTSTRAP_TOKEN \
--kubeconfig=$kubernetesDir/bootstrap.kubeconfig

## Set the context for the kubelet-bootstrap user in the kubernetes cluster
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=$kubernetesDir/bootstrap.kubeconfig

## Make "default" the current context of the bootstrap kubeconfig
kubectl config use-context default \
--kubeconfig=$kubernetesDir/bootstrap.kubeconfig

## Create the RBAC role binding for the kubelet
kubectl create --insecure-skip-tls-verify clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

## Parameter notes:
# 1. --insecure-skip-tls-verify: skip TLS verification and create the kubelet-bootstrap binding directly
# 2. --clusterrole: bind the cluster role system:node-bootstrapper
# 3. --user: to the cluster user kubelet-bootstrap
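
A quick check that the binding was created as intended:

# Should show the role system:node-bootstrapper bound to the user kubelet-bootstrap
kubectl describe clusterrolebinding kubelet-bootstrap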

Automating the creation of the kube-proxy kubeconfig and kubelet bootstrapping kubeconfig files

If you have read this far, you probably feel these commands are numerous and tedious. No problem, here is a go-grab-a-coffee script:

[root@server81 install_k8s_master]# cat configDir/conf/BOOTSTRAP_TOKEN 
4b395732894828d5a34737d83c334330
[root@server81 install_k8s_master]#
[root@server81 install_k8s_master]# cat Step6_create_kubeconfig_file.sh
#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
serviceDir=/usr/lib/systemd/system
binDir=/usr/bin

kubernetesDir=/etc/kubernetes
kubernetesTLSDir=/etc/kubernetes/kubernetesTLS

configdir=$basedir/configDir
configServiceDir=$configdir/service
configConfDir=$configdir/conf

## set param
MASTER_IP=`python -c "import socket;print([(s.connect(('8.8.8.8', 53)), s.getsockname()[0], s.close()) for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1])"`

BOOTSTRAP_TOKEN=`cat $configConfDir/BOOTSTRAP_TOKEN`

#echo $BOOTSTRAP_TOKEN

## functions and implementations
# set proxy
function create_proxy_kubeconfig(){
kubectl config set-cluster kubernetes \
--certificate-authority=$kubernetesTLSDir/ca.pem \
--embed-certs=true \
--server=https://$MASTER_IP:6443 \
--kubeconfig=$kubernetesDir/kube-proxy.kubeconfig
}

create_proxy_kubeconfig

function config_proxy_credentials(){
kubectl config set-credentials kube-proxy \
--client-certificate=$kubernetesTLSDir/proxy.pem \
--client-key=$kubernetesTLSDir/proxy.key \
--embed-certs=true \
--kubeconfig=$kubernetesDir/kube-proxy.kubeconfig
}

config_proxy_credentials

function config_proxy_context(){
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=$kubernetesDir/kube-proxy.kubeconfig
}

config_proxy_context

function set_proxy_context(){
kubectl config use-context default --kubeconfig=$kubernetesDir/kube-proxy.kubeconfig
}

set_proxy_context

## set bootstrapping
function create_kubelet_bootstrapping_kubeconfig(){
kubectl config set-cluster kubernetes \
--certificate-authority=$kubernetesTLSDir/ca.pem \
--embed-certs=true \
--server=https://$MASTER_IP:6443 \
--kubeconfig=$kubernetesDir/bootstrap.kubeconfig
}

create_kubelet_bootstrapping_kubeconfig

function config_kubelet_bootstrapping_credentials(){
kubectl config set-credentials kubelet-bootstrap \
--token=$BOOTSTRAP_TOKEN \
--kubeconfig=$kubernetesDir/bootstrap.kubeconfig
}

config_kubelet_bootstrapping_credentials

function config_kubernetes_bootstrap_kubeconfig(){
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=$kubernetesDir/bootstrap.kubeconfig
}

config_kubernetes_bootstrap_kubeconfig

function set_bootstrap_context(){
kubectl config use-context default \
--kubeconfig=$kubernetesDir/bootstrap.kubeconfig
}

set_bootstrap_context

## create rolebinding
function create_cluster_rolebinding(){
kubectl create --insecure-skip-tls-verify clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
}

create_cluster_rolebinding
[root@server81 install_k8s_master]#

The execution result is as follows:

[root@server81 install_k8s_master]# ./Step6_create_kubeconfig_file.sh 
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
[root@server81 install_k8s_master]#
[root@server81 install_k8s_master]# ls /etc/kubernetes/
apiserver config kube-proxy.kubeconfig scheduler
bootstrap.kubeconfig controller-manager kubernetesTLS/ token.csv
[root@server81 install_k8s_master]# ls /etc/kubernetes/
apiserver bootstrap.kubeconfig config controller-manager kube-proxy.kubeconfig kubernetesTLS scheduler token.csv
[root@server81 install_k8s_master]#

Inspect the generated kube-proxy.kubeconfig; its content looks like this:

[root@server81 install_k8s_master]# cat /etc/kubernetes/kube-proxy.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURHVENDQWdHZ0F3SUJBZ0lKQVAxbEpzOTFHbG9wTUEwR0NTcUdTSWIzRFFFQkN3VUFNQ014RXpBUkJnTlYKQkFNTUNtdDFZbVZ5Ym1WMFpYTXhEREFLQmdOVkJBb01BMnM0Y3pBZUZ3MHhPREE0TVRreE5ESXhORFJhRncwMApOakF4TURReE5ESXhORFJhTUNNeEV6QVJCZ05WQkFNTUNtdDFZbVZ5Ym1WMFpYTXhEREFLQmdOVkJBb01BMnM0CmN6Q0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5BejJxOUNsVWozZmNTY20wVTYKWnhrTVFCVVJzSFpFeUpIbXhMWUR1RmNzbGlyUjZxZHFSbExjM3Z1SnlVSHB3dUF5QzZxYzlaZE52clNCUkhOegpxUVFSREVuUENMQXQ0ZFVkUjh2NnQvOVhKbnJ0Y0k3My94U0RKNno2eFh3K2MvTy95c0NET3pQNkFDcmE5cHlPCmJpQ1ZRSEJ4eEI3bGxuM0ErUEFaRWEzOHZSNmhTSklzRndxVjAwKy9iNSt5K3FvVVdtNWFtcS83OWNIM2Zwd0kKNnRmUlZIeHAweXBKNi9TckYyZWVWVU1KVlJxZWtiNjBuZkJRUUNEZ2YyL3lSOGNxVDZlV3VDdmZnVEdCV01QSQpPSjVVM1VxekNMVGNpNHpDSFhaTUlra25EWVFuNFR6Qm05MitzTGhXMlpFZk5DOUxycFZYWHpzTm45alFzeTA3ClliOENBd0VBQWFOUU1FNHdIUVlEVlIwT0JCWUVGRWQ0bUxtN292MFdxL2FUTVJLUnlaaVVMOTFNTUI4R0ExVWQKSXdRWU1CYUFGRWQ0bUxtN292MFdxL2FUTVJLUnlaaVVMOTFNTUF3R0ExVWRFd1FGTUFNQkFmOHdEUVlKS29aSQpodmNOQVFFTEJRQURnZ0VCQUtNVGJXcng5WXJmSXByY3RHMThTanJCZHVTYkhLL05FRGcySHNCb1BrU2YwbE1TCmdGTnNzOGZURlliKzY3UWhmTnA1MjBodnk3M3JKU29OVkJweWpBWDR1SnRjVG9aZDdCZVhyUHdNVWVjNXRjQWoKSFdvY1dKaXNpck0vdFV4cUxLekdRdnFhVDhmQy9UUW5kTGUxTkJ0cEFQbjM5RzE5VFVialMvUTlKVE1qZVdMWAo0dU5MVExGUVUrYTAwTWMrMGVSWjdFYUVRSks2U0h1OUNuSEtNZnhIVC81UTdvbXBrZlBtTTZLT0VOVndaK0Q5Clh0ZzlIUmlrampFMGtsNHB3TmlHRnZQYVhuY0V5RDlwVW5vdWI0RGc2UHJ1MU9zTjYxakwyd2VneVY4WU1nUVEKWEdkVTIveExMcEh2cVlPVDNRay9mNWw5MHpackQvYm5vZGhxNS84PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://172.16.5.81:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-proxy
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN1ekNDQWFNQ0NRRFZDSG9rSldveEdEQU5CZ2txaGtpRzl3MEJBUXNGQURBak1STXdFUVlEVlFRRERBcHIKZFdKbGNtNWxkR1Z6TVF3d0NnWURWUVFLREFOck9ITXdIaGNOTVRnd09ERTVNVFF5TVRRMFdoY05Namd3T0RFMgpNVFF5TVRRMFdqQWNNUm93R0FZRFZRUUREQkZ6ZVhOMFpXMDZhM1ZpWlMxd2NtOTRlVENDQVNJd0RRWUpLb1pJCmh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTWpOVitwVGVFU2d6di9rcDZvQ3Z2T3NoUXFYS0t3RWFrTWEKcDRvNEdoZUZySzVUbW53eTc4YWpJdHM4b0Nyb3l2Q1lVR2VVcVJqaG1xSUdRWWJxWVFPTy9NZ21pZmdFMVFlego3RzNYKzJsQ25qRThOVnZBd011QXpYU0w4L3dkU1NEUTZDdGdvUkVCcFhTQUJWYStaMldXVy9VSm53ZFlFWHlGClh2N3ZERWRJZG1pUWNjWEtMcHRuMWFzV25nek1aVG9EMDVjMWxQSTlZZ1ZqMFVsNldWMkVMdHhxdGVqdXJHT2kKN3R0K3hRanY0ckdQZ01udTNqOEF1QTNLZXpSUFJ0TVA1RkF6SHZ4WVQ3RU0rRzVmU2JGWFY0ZVVMb0czS3pzWQo3eitDYlF1bnYyNmhXMFM5dWtZT0lNWnA4eVJtcHJ6cGxSVnh5d0dJUUw2ajhqdndkcXNDQXdFQUFUQU5CZ2txCmhraUc5dzBCQVFzRkFBT0NBUUVBQmNUazU0TUY5YnNpaDZaVXJiakh0MmFXR3VaTzZBODlZa3ZUL21VcTRoTHUKd2lUcHRKZWNJWEh5RkZYemVCSDJkUGZIZ1lldEMrQTJGS0dsZFJ1SHJuUW1iTWFkdjN6bGNjbEl2ald6dU1GUQpnenhUQUJ0dGVNYkYvL2M5cE9TL2ZmQS9OcVV0akVEUzlJVXZUTDdjUEs3Z0dMSzRrQWY2N2hPTERLb1NGT2ZjCnp0bEpXWkhPaEpGRjM0bkQySytXMmZzb0g4WFdTeDd1N3FmSHFFRkFNOW5BRjRyQjNZdUFHKzdIOUxMbmVaK1IKbHBTeThLNzBVZUdUVFpFdW5yMzJwMmJEZWxQN0tCTWsvbmUxV01PbzRnL01QUUhOTm5XZHlNeFJ6bHBOeTBregpOekVydVlhbHpINDVTVHIrNytCMkNhcS9sWDFTSWpENXBYVDhZMXRtSFE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBeU0xWDZsTjRSS0RPLytTbnFnSys4NnlGQ3Bjb3JBUnFReHFuaWpnYUY0V3NybE9hCmZETHZ4cU1pMnp5Z0t1aks4SmhRWjVTcEdPR2FvZ1pCaHVwaEE0Nzh5Q2FKK0FUVkI3UHNiZGY3YVVLZU1UdzEKVzhEQXk0RE5kSXZ6L0IxSklORG9LMkNoRVFHbGRJQUZWcjVuWlpaYjlRbWZCMWdSZklWZS91OE1SMGgyYUpCeAp4Y291bTJmVnF4YWVETXhsT2dQVGx6V1U4ajFpQldQUlNYcFpYWVF1M0dxMTZPNnNZNkx1MjM3RkNPL2lzWStBCnllN2VQd0M0RGNwN05FOUcwdy9rVURNZS9GaFBzUXo0Ymw5SnNWZFhoNVF1Z2Jjck94anZQNEp0QzZlL2JxRmIKUkwyNlJnNGd4bW56SkdhbXZPbVZGWEhMQVloQXZxUHlPL0IycXdJREFRQUJBb0lCQVFDeU5KcmJXT3laYTJXSgo4REZrVGorTkhnU01XNDQ2NjBncStaTEt0Zk5pQUw0NWovVEFXS3czU3p4NStSbmtPdWt3RU56NnNCSktCSjRwClFRZ1NaaHRtL3hVVHhEQVpycUFveitMNXNQNXNjalRXV1NxNW5SejgvZmhZZ0lRdHNRZmZXY2RTQjlXcHRCNVUKZi9FOUJJbmF2RkFyN1RmM1dvOWFSVHNEWUw4eTJtVjJrakNpMkd4S3U4K3BQWXN3ZUIrbGZjc1QyNlB3ODBsRgpXTmZVODRzdDE1SjBCNitRSmhEQnNDb3NpbGxrcFZnaDhPMzVNNmE3WjZlL3IrZnZuYjcycXd2MkdGQm0rNEpmCmRydVJtTHRLdHUxVGhzUGQ4YkQ2MXpTblMrSXoyUGxGWnk0RkY3cFhWU2RwbjVlSm00dkJMM3NOem9HWGlGUmIKOTAydFo5d1JBb0dCQVB6ZXZEZWhEYVBiZ1FLTU5hMFBzN2dlNDZIUkF6Rzl4RDh2RXk4dEVXcVVVY2c3Mndqawp6MGFvLzZvRkFDM0tkM3VkUmZXdmhrV2RrcE9CMXIzMml6Y29Ka3lOQmxDc2YxSDF2dVJDb0gwNTZwM3VCa3dHCjFsZjFWeDV0cjVHMU5laXdzQjdsTklDa2pPNTg2b3F6M3NNWmZMcHM1ZlMxeVZFUExrVmErL2N0QW9HQkFNdEoKbnhpQXNCMnZKaXRaTTdrTjZjTzJ1S0lwNHp0WjZDMFhBZmtuNnd5Zk9zd3lyRHdNUnA2Yk56OTNCZzk0azE4aQpIdlJ3YzJPVVBkeXVrU2YyVGZVbXN6L0h1OWY0emRCdFdYM2lkOE50b29MYUd6RnVVN3hObVlrUWJaL2Y1ZmpNCmtpZzlVZVJYdng5THJTa3RDdEdyRWMvK0JubHNrRk1xc2IrZ1FVdzNBb0dCQUs0SzA3cnFFNHhMQVNGeXhXTG0KNHNpQUlpWjJ5RjhOQUt5SVJ3ajZXUGxsT21DNXFja1dTditVUTl1T2M1QVF3V29JVm1XQ09NVmpiY1l1NEZHQgpCbEtoUkxMOWdYSTNONjUrbUxOY2xEOThoRm5Nd1BMRTVmUkdQWDhJK1lVdEZ2eWYxNmg4RTBYVGU5aU5pNVNKCnRuSEw4Z2dSK2JnVEFvdlRDZ0xjVzMzRkFvR0FSZWFYelM0YTRPb2ovczNhYWl4dGtEMlpPVEdjRUFGM1EySGcKN05LY0VTZ0RhTW1YemNJTzJtVFcxM3pPMmEwRlI3WU0zTko1NnVqRGFNbWg0aExnZFlhTUprZEF3Uit0YlpqYwpKOXdpZ0ZHSGl1VUNhcm5jRXlpL3ZaQ25rVXpFNEFzL3lwUmpQMWdvd05NZHhNWFhMWWRjUlorOGpDNFhabkdNCjB5NkFwWHNDZ1lFQXh6aUkyK2tUekNJcENnOGh3WXdiQ21sTVBaM3RBNXRLRHhKZmNjdWpXSExHVkNnMVd6QTAKdHZuUmxJbnZxdzFXOWtsSGlHTlhmTUpqczhpeXk5WUl4S0NKeTdhUU85WXZ1SVR6OC9PMHVCRURlQ1gvOHFDTwpzRGJ0eHpsa3A2NVdaYTFmR2FLRWVwcHFtWUU2NUdiZk91eHNxRENDSG1WWXcvZmR0M2NnMjI0PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
[root@server81 install_k8s_master]#

Inspect the generated bootstrap.kubeconfig; its content looks like this:

[root@server81 install_k8s_master]# cat /etc/kubernetes/bootstrap.kubeconfig 
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURHVENDQWdHZ0F3SUJBZ0lKQVAxbEpzOTFHbG9wTUEwR0NTcUdTSWIzRFFFQkN3VUFNQ014RXpBUkJnTlYKQkFNTUNtdDFZbVZ5Ym1WMFpYTXhEREFLQmdOVkJBb01BMnM0Y3pBZUZ3MHhPREE0TVRreE5ESXhORFJhRncwMApOakF4TURReE5ESXhORFJhTUNNeEV6QVJCZ05WQkFNTUNtdDFZbVZ5Ym1WMFpYTXhEREFLQmdOVkJBb01BMnM0CmN6Q0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5BejJxOUNsVWozZmNTY20wVTYKWnhrTVFCVVJzSFpFeUpIbXhMWUR1RmNzbGlyUjZxZHFSbExjM3Z1SnlVSHB3dUF5QzZxYzlaZE52clNCUkhOegpxUVFSREVuUENMQXQ0ZFVkUjh2NnQvOVhKbnJ0Y0k3My94U0RKNno2eFh3K2MvTy95c0NET3pQNkFDcmE5cHlPCmJpQ1ZRSEJ4eEI3bGxuM0ErUEFaRWEzOHZSNmhTSklzRndxVjAwKy9iNSt5K3FvVVdtNWFtcS83OWNIM2Zwd0kKNnRmUlZIeHAweXBKNi9TckYyZWVWVU1KVlJxZWtiNjBuZkJRUUNEZ2YyL3lSOGNxVDZlV3VDdmZnVEdCV01QSQpPSjVVM1VxekNMVGNpNHpDSFhaTUlra25EWVFuNFR6Qm05MitzTGhXMlpFZk5DOUxycFZYWHpzTm45alFzeTA3ClliOENBd0VBQWFOUU1FNHdIUVlEVlIwT0JCWUVGRWQ0bUxtN292MFdxL2FUTVJLUnlaaVVMOTFNTUI4R0ExVWQKSXdRWU1CYUFGRWQ0bUxtN292MFdxL2FUTVJLUnlaaVVMOTFNTUF3R0ExVWRFd1FGTUFNQkFmOHdEUVlKS29aSQpodmNOQVFFTEJRQURnZ0VCQUtNVGJXcng5WXJmSXByY3RHMThTanJCZHVTYkhLL05FRGcySHNCb1BrU2YwbE1TCmdGTnNzOGZURlliKzY3UWhmTnA1MjBodnk3M3JKU29OVkJweWpBWDR1SnRjVG9aZDdCZVhyUHdNVWVjNXRjQWoKSFdvY1dKaXNpck0vdFV4cUxLekdRdnFhVDhmQy9UUW5kTGUxTkJ0cEFQbjM5RzE5VFVialMvUTlKVE1qZVdMWAo0dU5MVExGUVUrYTAwTWMrMGVSWjdFYUVRSks2U0h1OUNuSEtNZnhIVC81UTdvbXBrZlBtTTZLT0VOVndaK0Q5Clh0ZzlIUmlrampFMGtsNHB3TmlHRnZQYVhuY0V5RDlwVW5vdWI0RGc2UHJ1MU9zTjYxakwyd2VneVY4WU1nUVEKWEdkVTIveExMcEh2cVlPVDNRay9mNWw5MHpackQvYm5vZGhxNS84PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://172.16.5.81:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: 4b395732894828d5a34737d83c334330
[root@server81 install_k8s_master]#
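
One detail worth double-checking: the token embedded here must match the first column of the token.csv fed to the apiserver, otherwise bootstrap requests will be rejected:

cut -d, -f1 /etc/kubernetes/token.csv
grep 'token:' /etc/kubernetes/bootstrap.kubeconfig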

A final summary of the master deployment

Check the master components and the cluster status

[root@server81 install_k8s_master]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
[root@server81 install_k8s_master]#
[root@server81 install_k8s_master]# kubectl cluster-info
Kubernetes master is running at https://172.16.5.81:6443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@server81 install_k8s_master]#

Confirm the certificate files the master must later copy to the nodes

When deploying the nodes, kube-proxy and kubelet need the certificates and kubeconfig files generated above; they are listed here:

[root@server81 install_k8s_master]# tree /etc/kubernetes/
/etc/kubernetes/
├── apiserver
├── bootstrap.kubeconfig
├── config
├── controller-manager
├── kube-proxy.kubeconfig
├── kubernetesTLS
│ ├── admin.key
│ ├── admin.pem
│ ├── apiserver.key
│ ├── apiserver.pem
│ ├── ca.key
│ ├── ca.pem
│ ├── proxy.key
│ └── proxy.pem
├── scheduler
└── token.csv

1 directory, 15 files
[root@server81 install_k8s_master]#

The three configuration files apiserver, controller-manager, and scheduler do not need to be copied to the node servers, but being lazy, I simply copy the whole directory over.
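
In that lazy spirit, a sketch of copying the whole directory to each node (the node IPs are an assumption for illustration):

for node in 172.16.5.86 172.16.5.87; do
    scp -r /etc/kubernetes root@$node:/etc/
done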

That covers deploying the master and what you need to know about the related certificates. Next, we switch to the node deployment.