Before reading this part, please first read the earlier parts of this series:

Kubernetes 1.5.1 Cluster Installation and Deployment Guide: Base Component Installation

Kubernetes 1.5.1 Cluster Installation and Deployment Guide: Base Environment Preparation


Part 3: Cluster Configuration

(1) Master configuration

1. Initialize the cluster

rm -rf /etc/kubernetes/* /var/lib/kubelet/* /var/lib/etcd/*

kubeadm init --api-advertise-addresses=192.168.128.115 --pod-network-cidr 10.245.0.0/16 --use-kubernetes-version v1.5.1


Note: 192.168.128.115 above is my master's IP address. This command cannot be run twice in a row; if you need to run it again, execute kubeadm reset first.
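If you do need to initialize again, a minimal recovery sequence (reusing the same flags as above; replace 192.168.128.115 with your own master address) looks like this:

kubeadm reset
rm -rf /etc/kubernetes/* /var/lib/kubelet/* /var/lib/etcd/*
kubeadm init --api-advertise-addresses=192.168.128.115 --pod-network-cidr 10.245.0.0/16 --use-kubernetes-version v1.5.1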


The output is as follows:

[root@kube ~]# kubeadm init --api-advertise-addresses=192.168.128.115  --pod-network-cidr=10.245.0.0/16
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "211c65.e7a44742440e1fad"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 23.373017 seconds
[apiclient] Waiting for at least one node to register and become ready

Note: if the process stalls here for a long time, the Docker images the platform needs have not finished downloading; see the Base Component Installation part.
[apiclient] First node is ready after 6.017237 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 3.504919 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns


Your Kubernetes master has initialized successfully!    // this message means the cluster initialized successfully


You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:


kubeadm join --token=211c65.e7a44742440e1fad 192.168.128.115    // Very important: copy and save this line now; you need it to join every node, and you will be sorry later if you lose it.


(2) Join each compute node to the k8s cluster

Run the following command on all of the nodes:
kubeadm join --token=211c65.e7a44742440e1fad 192.168.128.115
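If the join hangs, it can help to first confirm from the node that the master's discovery port (9898) and secure API port (6443), both visible in the output below, are reachable. A minimal sketch, assuming nc (netcat) is installed:

nc -zv 192.168.128.115 9898   # kubeadm cluster-info discovery endpoint
nc -zv 192.168.128.115 6443   # secure API server endpoint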


The output after running it looks like this:
[root@kube~]# kubeadm join --token=211c65.e7a44742440e1fad 192.168.128.115

[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.

[preflight] Running pre-flight checks

[tokens] Validating provided token

[discovery] Created cluster info discovery client, requesting info from http://192.168.128.115:9898/cluster-info/v1/?token-id=60a95a

[discovery] Cluster info object received, verifying signature using given token

[discovery] Cluster info signature and contents are valid, will use API endpoints [https://192.168.128.115:6443]

[bootstrap] Trying to connect to endpoint https://192.168.128.115:6443

[bootstrap] Detected server version: v1.5.1

[bootstrap] Successfully established connection with endpoint https://192.168.128.115:6443 

[csr] Created API client to obtain unique certificate for this node, generating keys and certificate signing request

[csr] Received signed certificate from the API server:
Issuer: CN=kubernetes | Subject: CN=system:node:k8s-node1 | CA: false
Not before: 2016-12-23 07:06:00 +0000 UTC Not After: 2017-12-23 07:06:00 +0000 UTC
[csr] Generating kubelet configuration
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"

Node join complete:
* Certificate signing request sent to master and response received.
* Kubelet informed of new secure connection details.


Run 'kubectl get nodes' on the master to see this machine join.

Check the result (on the master):

[root@kube ~]# kubectl get node
NAME          STATUS         AGE
kube.master   Ready,master   12d
kube.node1    Ready          12d
kube.node2    Ready          12d
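If a node stays in NotReady instead, its status and conditions can be inspected directly; for example (kube.node1 taken from the listing above):

kubectl get nodes -w                  # watch node status until all become Ready
kubectl describe node kube.node1      # inspect conditions and events of a stuck node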


(3) Remove the "dedicated" taint on the master (so that regular pods can be scheduled onto it)
[root@kube ~]# kubectl taint nodes --all dedicated-

taint key="dedicated" and effect="" not found.

taint key="dedicated" and effect="" not found.

taint key="dedicated" and effect="" not found.
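The "not found" messages simply mean no such taint was present on those nodes. In this release taints are still stored on the node object (as an alpha annotation), so one way to double-check is to grep the node for them; a sketch, noting the exact field layout may differ by version:

kubectl get node kube.master -o yaml | grep -i taint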


(4) Deploy the weave network on the master to enable cross-host container communication

The command given in the official docs: kubectl create -f https://git.io/weave-kube

Due to network restrictions that usually fails here, so we do it this way instead:

[root@kube ~]# wget https://git.io/weave-kube -O weave-kube.yaml    // download the config file

[root@kube ~]# kubectl create -f weave-kube.yaml    // create the weave network

[root@kube ~]# kubectl get pods -o wide -n kube-system    // check the network pods' startup status
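Once the weave-net pods are Running, a quick way to confirm cross-host connectivity is to start two test pods and ping one from the other. A minimal sketch, assuming the busybox image can be pulled; fill in the placeholder pod name and IP from the get pods output:

kubectl run pingtest --image=busybox --replicas=2 -- sleep 3600
kubectl get pods -o wide -l run=pingtest            # note each pod's IP and node
kubectl exec <pod-on-node1> -- ping -c 3 <pod-ip-on-node2>
kubectl delete deployment pingtest                  # clean up afterwards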


(5) Configure the dashboard on the master

1. Download the yaml file
[root@kube ~]# wget https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml -O kubernetes-dashboard.yaml


2. Modify the yaml file
[root@kube ~]# vi kubernetes-dashboard.yaml
Change imagePullPolicy: Always to imagePullPolicy: IfNotPresent
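If you prefer not to edit by hand, the same change can be scripted with sed; a one-line sketch:

sed -i 's/imagePullPolicy: Always/imagePullPolicy: IfNotPresent/' kubernetes-dashboard.yaml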


3. Deploy the dashboard
[root@kube ~]# kubectl create -f kubernetes-dashboard.yaml

deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created


4. Check the dashboard service startup status
[root@kube ~]# kubectl get pod --namespace=kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
dummy-2088944543-pwdw2                  1/1       Running   0          3h
etcd-kube.master                        1/1       Running   0          3h
kube-apiserver-kube.master              1/1       Running   0          3h
kube-controller-manager-kube.master     1/1       Running   0          3h
kube-discovery-982812725-rj6te          1/1       Running   0          3h
kube-dns-2247936740-9g51a               3/3       Running   1          3h
kube-proxy-amd64-i1shn                  1/1       Running   0          3h
kube-proxy-amd64-l3qrg                  1/1       Running   0          2h
kube-proxy-amd64-yi1it                  1/1       Running   0          3h
kube-scheduler-kube.master              1/1       Running   0          3h
kubernetes-dashboard-3000474083-6kwqs   1/1       Running   0          15s
weave-net-f89j7                         2/2       Running   0          32m
weave-net-q0h18                         2/2       Running   0          32m
weave-net-xrfry                         2/2       Running   0          32m

Note: if the kubernetes-dashboard pod keeps restarting, restarting the whole k8s cluster fixed it for me (start the nodes first and the master last). I don't know why; explanations from more experienced readers are welcome.
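Before resorting to a full cluster restart, the restarting pod's events and previous logs are worth a look; a sketch (take the actual pod name from the listing above):

kubectl describe pod <kubernetes-dashboard-pod-name> --namespace=kube-system
kubectl logs <kubernetes-dashboard-pod-name> --namespace=kube-system --previous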


5. Check the kubernetes-dashboard service's external access port
[root@kube ~]# kubectl describe svc kubernetes-dashboard --namespace=kube-system
Name:                   kubernetes-dashboard
Namespace:              kube-system
Labels:                 app=kubernetes-dashboard
Selector:               app=kubernetes-dashboard
Type:                   NodePort
IP:                     10.13.114.76
Port:                   <unset> 80/TCP
NodePort:               <unset> 30435/TCP   // external access port
Endpoints:              10.38.0.2:9090
Session Affinity:       None
No events.

At this point kubernetes-dashboard can be accessed via NodeIP:NodePort.
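The NodePort can also be extracted in a script and the service probed with curl; a minimal sketch, assuming curl is available:

NODEPORT=$(kubectl get svc kubernetes-dashboard --namespace=kube-system -o jsonpath='{.spec.ports[0].nodePort}')
curl -I http://192.168.128.115:${NODEPORT}/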


(6) Configure the third-party open-source monitoring tool heapster on the master

1. Download the configuration files and upload them to the master

Download the heapster-master package from GitHub, or use the attached influxdb.rar.

In the heapster-master/deploy/kube-config/influxdb directory, find these six files:

grafana-deployment.yaml

grafana-service.yaml

influxdb-deployment.yaml

influxdb-service.yaml

heapster-deployment.yaml

heapster-service.yaml


2. Create the deployments and services

kubectl create -f grafana-deployment.yaml -f grafana-service.yaml -f influxdb-deployment.yaml -f  influxdb-service.yaml -f heapster-deployment.yaml -f  heapster-service.yaml
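Equivalently, if the six files are still in their original directory layout, kubectl can be pointed at the directory as a whole:

kubectl create -f heapster-master/deploy/kube-config/influxdb/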


3. Check the pods' startup status

[root@kube ~]# kubectl get pods -o wide -n kube-system
NAME                                    READY     STATUS    RESTARTS   AGE       IP                NODE
dummy-2088944543-8dql8                  1/1       Running   1          12d       192.168.128.115   kube.master
etcd-kube.master                        1/1       Running   1          12d       192.168.128.115   kube.master
heapster-3901806196-hsv2s               1/1       Running   1          12d       10.46.0.4         kube.node2
kube-apiserver-kube.master              1/1       Running   1          12d       192.168.128.115   kube.master
kube-controller-manager-kube.master     1/1       Running   1          12d       192.168.128.115   kube.master
kube-discovery-1769846148-j8nwk         1/1       Running   1          12d       192.168.128.115   kube.master
kube-dns-2924299975-vdp8s               4/4       Running   4          12d       10.40.0.2         kube.master
kube-proxy-5mkkz                        1/1       Running   1          12d       192.168.128.115   kube.master
kube-proxy-8ggq0                        1/1       Running   1          12d       192.168.128.117   kube.node2
kube-proxy-tdd7m                        1/1       Running   2          12d       192.168.128.116   kube.node1
kube-scheduler-kube.master              1/1       Running   1          12d       192.168.128.115   kube.master
kubernetes-dashboard-3000605155-gr6ll   1/1       Running   0          4d        10.46.0.12        kube.node2
monitoring-grafana-810108360-2nfb7      1/1       Running   1          12d       10.46.0.3         kube.node2
monitoring-influxdb-3065341217-tzhfj    1/1       Running   0          4d        10.46.0.13        kube.node2
weave-net-98jjb                         2/2       Running   5          12d       192.168.128.116   kube.node1
weave-net-h15r5                         2/2       Running   2          12d       192.168.128.115   kube.master
weave-net-rcr6x                         2/2       Running   2          12d       192.168.128.117   kube.node2


4. Check the external service ports

Check the monitoring-grafana service port:


[root@kube heapster]# kubectl get svc --namespace=kube-system
NAME                   CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
heapster               10.98.45.1      <none>        80/TCP          1h
kube-dns               10.96.0.10      <none>        53/UDP,53/TCP   2h
kubernetes-dashboard   10.108.45.66    <nodes>       80:32155/TCP    1h
monitoring-grafana     10.97.110.225   <nodes>       80:30687/TCP    1h
monitoring-influxdb    10.96.175.67    <none>        8086/TCP        1h


The exposed NodePort is 30687.

The third-party monitoring dashboard can now be accessed via any node IP plus port 30687.
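A quick reachability check from the command line, assuming curl is available (192.168.128.116 is kube.node1's address from the pod listing above):

curl -I http://192.168.128.116:30687/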