You can add the parameter --kubelet-insecure-tls to the metrics-server-deployment.yaml configuration file to skip this check, but this is not recommended because it is insecure.
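For reference, the insecure variant is just an extra container argument in metrics-server-deployment.yaml. A minimal sketch (the image tag and the other args are assumptions; keep whatever your manifest already has):

```yaml
containers:
  - name: metrics-server
    image: k8s.gcr.io/metrics-server-amd64:v0.3.6   # tag is an assumption
    args:
      - --cert-dir=/tmp
      - --secure-port=4443
      - --kubelet-insecure-tls   # skip kubelet TLS verification (insecure, not recommended)
```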
Alternatively, add the following line at the end of /var/lib/kubelet/config.yaml on every node.

serverTLSBootstrap: true ## have the kubelet request its serving certificate from the cluster CA instead of self-signing
[root@server4 ~]# vim /var/lib/kubelet/config.yaml
[root@server4 ~]# systemctl daemon-reload
[root@server4 ~]# systemctl restart kubelet.service

At this point the certificate signing requests come in, and next we need to approve them.

[kubeadm@server1 ~]$ kubectl get csr
NAME        AGE     REQUESTOR             CONDITION
csr-5tzf2   5m33s   system:node:server3   Pending
csr-dzv8n   7m8s    system:node:server1   Pending
csr-kntm2   5m1s    system:node:server4   Pending
[kubeadm@server1 ~]$ kubectl certificate approve csr-dzv8n
certificatesigningrequest.certificates.k8s.io/csr-dzv8n approved
[kubeadm@server1 ~]$ kubectl certificate approve csr-5tzf2
certificatesigningrequest.certificates.k8s.io/csr-5tzf2 approved
[kubeadm@server1 ~]$ kubectl certificate approve csr-kntm2
certificatesigningrequest.certificates.k8s.io/csr-kntm2 approved
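Approving each CSR by name works, but with many nodes a one-liner can approve them all in one pass. A sketch, assuming kubectl is configured for this cluster:

```shell
# Approve every CertificateSigningRequest at once;
# `-o name` prints csr/<name>, which `certificate approve` accepts.
kubectl get csr -o name | xargs kubectl certificate approve
```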

After approving, check again:

[kubeadm@server1 ~]$ kubectl get csr
NAME        AGE     REQUESTOR             CONDITION
csr-5tzf2   8m11s   system:node:server3   Approved,Issued
csr-dzv8n   9m46s   system:node:server1   Approved,Issued
csr-kntm2   7m39s   system:node:server4   Approved,Issued        ## issued successfully

With the certificates issued, check whether the command works now:

[kubeadm@server1 ~]$ kubectl top node 
Error from server (ServiceUnavailable): the server is currently unable to handle the request (get nodes.metrics.k8s.io)

Still failing; keep digging in the logs:

[kubeadm@server1 ~]$ kubectl logs -n kube-system metrics-server-64475bbf5d-nms65
I0304 05:53:24.480910       1 serving.go:312] Generated self-signed cert (/tmp/apiserver.crt, /tmp/apiserver.key)
I0304 05:53:25.546441       1 secure_serving.go:116] Serving securely on [::]:4443

At this point the pod itself is fine; next, look at the Service.

[kubeadm@server1 ~]$ kubectl describe svc metrics-server -n kube-system 
Name:              metrics-server
Endpoints:         10.244.2.28:4443
[kubeadm@server1 ~]$ kubectl get pod -n kube-system -o wide
metrics-server-64475bbf5d-mfkzt   1/1     Running   0          5m44s   10.244.2.28     server3   <none>           <none>

The two IP addresses match, so the Service has found its pod endpoint; the Service itself looks fine.
Now go back and check the API service:

[kubeadm@server1 ~]$ kubectl -n kube-system get apiservice
v1beta1.metrics.k8s.io                 kube-system/metrics-server   False (FailedDiscoveryCheck)   71m

The API service is still failing.

[kubeadm@server1 ~]$ kubectl describe -n kube-system apiservice v1beta1.metrics.k8s.io
    Message:               failing or missing response from https://10.108.202.229:443/apis/metrics.k8s.io/v1beta1: Get https://10.108.202.229:443/apis/metrics.k8s.io/v1beta1: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

From this message it looks like a network problem.

3. Network

From here it looks like the cause is that the pod metrics-server-64475bbf5d-mfkzt is on a different network segment from the other components.

[kubeadm@server1 ~]$ kubectl get pod -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE   IP              NODE      NOMINATED NODE   READINESS GATES
coredns-9d85f5447-rq9rj           1/1     Running   2          41h   10.244.2.26     server3   <none>           <none>
coredns-9d85f5447-spkdz           1/1     Running   2          41h   10.244.2.25     server3   <none>           <none>
etcd-server1                      1/1     Running   3          41h   192.168.122.2   server1   <none>           <none>
kube-apiserver-server1            1/1     Running   3          41h   192.168.122.2   server1   <none>           <none>
kube-controller-manager-server1   1/1     Running   4          41h   192.168.122.2   server1   <none>           <none>
kube-flannel-ds-amd64-nmhbl       1/1     Running   2          40h   192.168.122.5   server4   <none>           <none>
kube-flannel-ds-amd64-qxz4d       1/1     Running   2          40h   192.168.122.4   server3   <none>           <none>
kube-flannel-ds-amd64-zqs9b       1/1     Running   3          40h   192.168.122.2   server1   <none>           <none>
kube-proxy-4blfr                  1/1     Running   3          41h   192.168.122.2   server1   <none>           <none>
kube-proxy-4p7rg                  1/1     Running   2          41h   192.168.122.5   server4   <none>           <none>
kube-proxy-9n5gp                  1/1     Running   2          41h   192.168.122.4   server3   <none>           <none>
kube-scheduler-server1            1/1     Running   4          41h   192.168.122.2   server1   <none>           <none>
metrics-server-64475bbf5d-mfkzt   1/1     Running   0          13m   10.244.2.28     server3   <none>           <none>

Solution: change the networking.
Add a line hostNetwork: true to metrics-server-deployment.yaml:

k8s-app: metrics-server
spec:
  hostNetwork: true
  serviceAccountName: metrics-server
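Instead of editing the file and re-applying it, the same change can be made in place with a patch. A sketch, using the metrics-server Deployment in kube-system shown above:

```shell
# Put the metrics-server pod on the host network; the Deployment
# rolls out a new pod that gets a node IP instead of a flannel IP.
kubectl -n kube-system patch deployment metrics-server \
  --patch '{"spec":{"template":{"spec":{"hostNetwork":true}}}}'
```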

Check again: the change has taken effect, and the pod's IP is now on the same segment as the other components.

[kubeadm@server1 kubernetes]$ kubectl get pod -n kube-system -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP              NODE      NOMINATED NODE   READINESS GATES
metrics-server-7cf4565bc6-9gkpw   1/1     Running   0          2m29s   192.168.122.5   server4   <none>           <none>

Now the command works and metrics are being collected:

[kubeadm@server1 kubernetes]$ kubectl top node
NAME      CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
server1   152m         7%     879Mi           50%       
server3   42m          2%     324Mi           18%       
server4   36m          1%     309Mi           17%

Dashboard deployment

Copy the manifest file from GitHub:

https://github.com/kubernetes/dashboard/blob/v2.0.0-rc5/aio/deploy/recommended.yaml

Download the images in advance, or simply let them be pulled from the registry.
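To pre-pull on each node, the v2.0.0-rc5 manifest references roughly the images below. A sketch; the exact tags (especially the metrics-scraper tag) are assumptions, so verify them against the manifest you downloaded:

```shell
# Pre-pull the Dashboard images so the pods start without
# waiting on the registry; tags must match the manifest.
docker pull kubernetesui/dashboard:v2.0.0-rc5
docker pull kubernetesui/metrics-scraper:v1.0.3
```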

[kubeadm@server1 dashboard]$ kubectl create -f deploy.yaml 
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created

After creation, how do we access it? If a graphical environment is available, you can browse to the kubernetes-dashboard ClusterIP directly.

[kubeadm@server1 dashboard]$ kubectl get svc -n kubernetes-dashboard 
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
dashboard-metrics-scraper   ClusterIP   10.109.74.40    <none>        8000/TCP   20s
kubernetes-dashboard        ClusterIP   10.103.247.85   <none>        443/TCP    21s

If there is no graphical environment, the port must be exposed externally: change the Service type to NodePort.

[kubeadm@server1 dashboard]$ kubectl edit svc -n kubernetes-dashboard kubernetes-dashboard
service/kubernetes-dashboard edited
[kubeadm@server1 dashboard]$ kubectl get svc -n kubernetes-dashboard 
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.109.74.40    <none>        8000/TCP        62s
kubernetes-dashboard        NodePort    10.103.247.85   <none>        443:30319/TCP   63s
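The same edit can also be done non-interactively instead of through kubectl edit. A sketch:

```shell
# Change the Service type to NodePort; Kubernetes allocates a port
# in the 30000-32767 range (30319 in the output above).
kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  --patch '{"spec":{"type":"NodePort"}}'
```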

Once exposed, check which node the kubernetes-dashboard pod is running on, and access it via that node's IP and the NodePort.



When accessing, remember to use https, since the connection is encrypted with a certificate.

On the login page, use the token method to log in.

How do we find the token? First find the corresponding ServiceAccount (sa), because the token is bound to the sa.

[kubeadm@server1 dashboard]$ kubectl -n kubernetes-dashboard get sa
NAME                   SECRETS   AGE
default                1         10m
kubernetes-dashboard   1         10m
[kubeadm@server1 dashboard]$ kubectl -n kubernetes-dashboard describe sa kubernetes-dashboard
Name:                kubernetes-dashboard
Namespace:           kubernetes-dashboard
Labels:              k8s-app=kubernetes-dashboard
Annotations:         <none>
Image pull secrets:  <none>
Mountable secrets:   kubernetes-dashboard-token-nvd76
Tokens:              kubernetes-dashboard-token-nvd76 ## this is the token secret we are looking for
Events:              <none>

Next, retrieve the token with a command and copy it into the login page.
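A sketch of that command, using the secret name kubernetes-dashboard-token-nvd76 shown by describe sa above (the suffix is random, so substitute the name from your own cluster):

```shell
# Print the bearer token stored in the ServiceAccount's secret;
# the line starting with "token:" is what goes into the login page.
kubectl -n kubernetes-dashboard describe secret kubernetes-dashboard-token-nvd76 | grep ^token
```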