In this article we use the k8s cluster built in the previous one to walk through entry-level usage of k8s. It won't show off everything k8s can do, but it is more than enough for non-professional ops people to test with.

1. List all the nodes in the cluster. A get query usually returns a list of cluster resources.

[root@hdp1 ~] kubectl get nodes
NAME   STATUS   ROLES    AGE   VERSION
hdp1   Ready    master   44h   v1.17.0
hdp2   Ready    <none>   42h   v1.17.0
hdp3   Ready    <none>   42h   v1.17.0

2. Show the details of a particular node; describe prints the details of a single resource.

[root@hdp1 ~] kubectl describe node hdp2
Name:               hdp2
Roles:              <none>
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=hdp2
                    kubernetes.io/os=linux
Annotations:        flannel.alpha.coreos.com/backend-data: {"VNI":1,"VtepMAC":"96:19:ab:21:bb:07"}
                    flannel.alpha.coreos.com/backend-type: vxlan
                    flannel.alpha.coreos.com/kube-subnet-manager: true
                    flannel.alpha.coreos.com/public-ip: 192.168.88.187
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Fri, 09 Dec 2022 20:14:16 +0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  hdp2
  AcquireTime:     <unset>
  RenewTime:       Sun, 11 Dec 2022 15:02:47 +0800
Conditions:
  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----                 ------  -----------------                 ------------------                ------                       -------
  NetworkUnavailable   False   Sun, 11 Dec 2022 14:53:54 +0800   Sun, 11 Dec 2022 14:53:54 +0800   FlannelIsUp                  Flannel is running on this node
  MemoryPressure       False   Sun, 11 Dec 2022 14:58:47 +0800   Sun, 11 Dec 2022 14:53:47 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure         False   Sun, 11 Dec 2022 14:58:47 +0800   Sun, 11 Dec 2022 14:53:47 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure          False   Sun, 11 Dec 2022 14:58:47 +0800   Sun, 11 Dec 2022 14:53:47 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready                True    Sun, 11 Dec 2022 14:58:47 +0800   Sun, 11 Dec 2022 14:53:47 +0800   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.88.187
  Hostname:    hdp2
Capacity:
  cpu:                8
  ephemeral-storage:  28565416Ki
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             10223324Ki
  pods:               110
Allocatable:
  cpu:                8
  ephemeral-storage:  26325887343
  hugepages-1Gi:      0
  hugepages-2Mi:      0
  memory:             10120924Ki
  pods:               110
System Info:
  Machine ID:                 63e960d7067544858a2821b1ed501a7b
  System UUID:                4A8A4D56-81F8-0A1D-3EBB-61A031A9C067
  Boot ID:                    a391fd57-5391-493b-84b8-e9df5afbb215
  Kernel Version:             3.10.0-693.el7.x86_64
  OS Image:                   CentOS Linux 7 (Core)
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://18.6.3
  Kubelet Version:            v1.17.0
  Kube-Proxy Version:         v1.17.0
PodCIDR:                      10.244.1.0/24
PodCIDRs:                     10.244.1.0/24
Non-terminated Pods:          (2 in total)
  Namespace                   Name                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
  ---------                   ----                     ------------  ----------  ---------------  -------------  ---
  kube-flannel                kube-flannel-ds-czhx6    100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      42h
  kube-system                 kube-proxy-psjwj         0 (0%)        0 (0%)      0 (0%)           0 (0%)         42h
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests   Limits
  --------           --------   ------
  cpu                100m (1%)  100m (1%)
  memory             50Mi (0%)  50Mi (0%)
  ephemeral-storage  0 (0%)     0 (0%)
Events:
  Type     Reason                   Age                   From              Message
  ----     ------                   ----                  ----              -------
  Normal   Starting                 42h                   kubelet, hdp2     Starting kubelet.
  Normal   NodeHasNoDiskPressure    42h (x2 over 42h)     kubelet, hdp2     Node hdp2 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     42h (x2 over 42h)     kubelet, hdp2     Node hdp2 status is now: NodeHasSufficientPID
  Normal   NodeAllocatableEnforced  42h                   kubelet, hdp2     Updated Node Allocatable limit across pods
  Normal   NodeHasSufficientMemory  42h (x2 over 42h)     kubelet, hdp2     Node hdp2 status is now: NodeHasSufficientMemory
  Normal   Starting                 42h                   kube-proxy, hdp2  Starting kube-proxy.
  Normal   NodeReady                42h                   kubelet, hdp2     Node hdp2 status is now: NodeReady
  Normal   NodeAllocatableEnforced  9m12s                 kubelet, hdp2     Updated Node Allocatable limit across pods
  Normal   Starting                 9m12s                 kubelet, hdp2     Starting kubelet.
  Normal   NodeHasNoDiskPressure    9m6s (x6 over 9m12s)  kubelet, hdp2     Node hdp2 status is now: NodeHasNoDiskPressure
  Normal   NodeHasSufficientPID     9m6s (x6 over 9m12s)  kubelet, hdp2     Node hdp2 status is now: NodeHasSufficientPID
  Normal   NodeHasSufficientMemory  9m6s (x6 over 9m12s)  kubelet, hdp2     Node hdp2 status is now: NodeHasSufficientMemory
  Warning  Rebooted                 9m6s                  kubelet, hdp2     Node hdp2 has been rebooted, boot id: a391fd57-5391-493b-84b8-e9df5afbb215
  Normal   NodeReady                9m6s                  kubelet, hdp2     Node hdp2 status is now: NodeReady
  Normal   Starting                 9m4s                  kube-proxy, hdp2  Starting kube-proxy.

3. Check the version.

[root@hdp1 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}

4. Show cluster info. This command mainly prints the cluster's API endpoints. Since the cluster in this example was just built and has few add-on services, only the Kubernetes API server and KubeDNS show up here (flannel runs as a DaemonSet and does not register an endpoint with cluster-info).

[root@hdp1 ~] kubectl cluster-info
Kubernetes master is running at https://192.168.88.186:6443
KubeDNS is running at https://192.168.88.186:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

5. When you want to run a containerized service on the k8s cluster, you can use the command below. Note that I call it a container here at first, but from now on it's called a Pod; if the distinction is unclear, read up on it elsewhere.

[root@hdp1 ~] kubectl run mykung --image=nginx:1.14-alpine --port=80 --replicas=1 --dry-run=true
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/mykung created (dry run)

run: like docker run, creates and runs a workload
mykung: k8s operates on Pods, so you give the Pod you create a name
--image: the image to use
--port: the port to expose. Optional: the port of the service inside the container is exposed automatically; if you set it by hand it must match what the service in the container actually listens on, otherwise it has no effect
--replicas: how many Pods to create from this image
--dry-run=true: dry-run mode, false by default. It tells k8s that this run command should only show which API objects would take part, without actually starting the Pod

From the log you can see the Pod will be managed by a Deployment controller, and the Deployment is named mykung. Now drop the dry-run flag and see the effect.

[root@hdp1 ~] kubectl run mykung --image=nginx:1.14-alpine --port=80 --replicas=1
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/mykung created

Apart from the missing dry-run hint, the log looks the same, but now we can query the result.

With get you can fetch the cluster's current resources of a given type, for example the running Pod we just created. Without a namespace flag, the default namespace is assumed.

[root@hdp1 ~] kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
mykung-56cf8b9444-v9l7k   1/1     Running   0          2m6s

READY: the containers in this Pod, shown as running count / total count
RESTARTS: number of restarts
AGE: how long it has been running

You can also see it by querying its controller:

[root@hdp1 ~] kubectl get deployment
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
mykung   1/1     1            1           7m57s

UP-TO-DATE: officially, the number of replicas that have been updated to the latest desired state
AVAILABLE: the number of available replicas; if you query right after creation it may still show 0 while the Pods are not ready yet

When querying a resource, you can ask for quite a bit more detail with -o wide:

[root@hdp1 ~] kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE   NOMINATED NODE   READINESS GATES
mykung-56cf8b9444-v9l7k   1/1     Running   0          20m   10.244.2.2   hdp3   <none>           <none>

Note that although k8s sits on top of a third-party container engine, it communicates over its own Pod network. The Pod IP hangs off the cni0 bridge of whichever node the Pod runs on, and the Pod network is only usable inside the k8s cluster, not from outside.

6. To delete a resource on k8s you use the delete command. One thing to watch out for: if the resource you want to delete is a Pod, you cannot just delete the Pod itself; k8s's self-healing treats that as an unexpected crash and starts a replacement Pod. The exception is a standalone Pod that has no controller.

[root@hdp1 ~] kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
mykung-56cf8b9444-v9l7k   1/1     Running   0          43m

[root@hdp1 ~] kubectl delete pods mykung-56cf8b9444-v9l7k
pod "mykung-56cf8b9444-v9l7k" deleted

[root@hdp1 ~] kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
mykung-56cf8b9444-vxz4v   1/1     Running   0          6s

[root@hdp1 ~] kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP           NODE   NOMINATED NODE   READINESS GATES
mykung-56cf8b9444-vxz4v   1/1     Running   0          4m10s   10.244.1.2   hdp2   <none>           <none>

To delete it for good, you need to delete its controller. By default, every Pod created this way is owned by a Deployment controller.

[root@hdp1 ~] kubectl get deployment
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
client   1/1     1            1           92d
mykung   2/2     2            2           93d
[root@hdp1 ~] kubectl delete deployment mykung
deployment.apps "mykung" deleted
[root@hdp1 ~] kubectl get deployment
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
client   1/1     1            1           92d

7. The commands above reveal a problem. The Pod network lets different Pods talk to each other, but a controller may restart a Pod at any time, invalidating the old address. That is what the Service network is for: it gives a Pod a stable way to be reached. "Stable" does not mean the Pod's name and IP stop changing; rather, we register a proxy entry for the Pod on a Service, clients then talk to the Service, and the Service finds the current Pod in the cluster and forwards to it. Note, however, that by default the proxy a Service provides is, like the Pod network, not reachable from clients outside the cluster. My earlier k8s introduction did say a Service lets Pods communicate across nodes without worrying about changing addresses, but it never said the Pod becomes externally reachable; the examples below will reproduce this.

First, create a Service proxy:

[root@hdp1 ~] kubectl expose deployment mykung --name=mykung --port=80 --target-port=80 
service/mykung exposed

expose deployment mykung: expose the Pod resources under the Deployment named mykung
--name: the Service's name
--port: the port the Service itself listens on
--target-port: the Pod port that traffic is forwarded to
On top of these, two parameters take effect with their defaults:
--type: the Service type, ClusterIP by default
--protocol: the protocol, TCP by default

We can list the current Services with get:

[root@hdp1 ~] kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   46h
mykung       ClusterIP   10.96.16.251   <none>        80/TCP    7m43s

From now on the cluster can reach the proxied Pod via the Service IP. But if your kube-proxy is still on its default configuration, chances are that nodes other than the one hosting the proxied Pod cannot reach the Service IP at all. The freshly built cluster in this example, with no extra tuning, shows exactly that:

[root@hdp1 ~] kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   46h
mykung       ClusterIP   10.96.16.251   <none>        80/TCP    7m43s

[root@hdp1 ~] kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE   NOMINATED NODE   READINESS GATES
mykung-56cf8b9444-vxz4v   1/1     Running   0          31m   10.244.1.2   hdp2   <none>           <none>

[root@hdp1 ~] ping 10.244.1.2
PING 10.244.1.2 (10.244.1.2) 56(84) bytes of data.
64 bytes from 10.244.1.2: icmp_seq=1 ttl=63 time=0.386 ms
64 bytes from 10.244.1.2: icmp_seq=2 ttl=63 time=0.308 ms
^C
--- 10.244.1.2 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.308/0.347/0.386/0.039 ms

[root@hdp1 ~] curl 10.244.1.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

[root@hdp2 ~] curl 10.244.1.2
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

----------------------Without config changes, hdp1 cannot reach the Service proxy: no error, just no response-------------------------------
[root@hdp1 ~] curl 10.96.16.251

----------------------Accessing the Service from the node hosting the Pod-------------------------------

[root@hdp2 ~] curl 10.96.16.251
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

The example above shows that with kube-proxy on its default configuration, the Service exists but only the node hosting the proxied Pod can reach it through the Service IP. Here the proxied Pod lives on hdp2; the other nodes cannot reach the Service IP, although they can reach the Pod IP directly. Even the hosting node's access is rather accidental: it works only because that host happens to have the Pod's virtual interface attached locally. In other words, this is a small fluke, not a feature k8s formally provides.

Whether you run k8s for yourself or for a real project, that is not acceptable. It is not a defect of the cluster itself; the default configuration simply behaves this way. So, to make Service IPs usable smoothly across the cluster, we need to change the kube-proxy configuration.

First run the following to see which proxy mode kube-proxy is currently using; the default is usually iptables, or empty:

kubectl get cm kube-proxy -n kube-system -o yaml |grep mode

Whatever the default turns out to be, open the configuration with:

kubectl edit cm kube-proxy -n kube-system

In the configuration, find the mode field and change it to ipvs:

mode: "ipvs"

Then delete all the existing kube-proxy Pods so that the controller restarts them with the new configuration:

kubectl get pods -n kube-system | grep kube-proxy | awk '{print $1}' | xargs kubectl delete pod -n kube-system

Now you can use Service IPs smoothly from any node in the cluster. (Note that ipvs mode relies on the ip_vs kernel modules being available on the nodes.) To show the effect, hdp1 from this example accesses the Service IP again: it now works across nodes, and keeps working even after the proxied Pod is restarted. Also note that access through the Service IP is a bit slow, so curl may take a while to return.

[root@hdp1 ~] ping 10.96.16.251
PING 10.96.16.251 (10.96.16.251) 56(84) bytes of data.
64 bytes from 10.96.16.251: icmp_seq=1 ttl=64 time=0.047 ms
64 bytes from 10.96.16.251: icmp_seq=2 ttl=64 time=0.043 ms
^C
--- 10.96.16.251 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1000ms
rtt min/avg/max/mdev = 0.043/0.045/0.047/0.002 ms

[root@hdp1 ~] kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE    IP           NODE   NOMINATED NODE   READINESS GATES
client1                   1/1     Running   0          46m    10.244.2.4   hdp3   <none>           <none>
mykung-56cf8b9444-vxz4v   1/1     Running   0          132m   10.244.1.2   hdp2   <none>           <none>

[root@hdp1 ~] kubectl delete pods mykung-56cf8b9444-vxz4v
pod "mykung-56cf8b9444-vxz4v" deleted

[root@hdp1 ~] kubectl get pods -o wide
NAME                      READY   STATUS    RESTARTS   AGE   IP           NODE   NOMINATED NODE   READINESS GATES
client1                   1/1     Running   0          47m   10.244.2.4   hdp3   <none>           <none>
mykung-56cf8b9444-j7hhn   1/1     Running   0          10s   10.244.1.3   hdp2   <none>           <none>

[root@hdp1 ~] kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   2d
mykung       ClusterIP   10.96.16.251   <none>        80/TCP    114m

[root@hdp1 ~] ping 10.96.16.251
PING 10.96.16.251 (10.96.16.251) 56(84) bytes of data.
64 bytes from 10.96.16.251: icmp_seq=1 ttl=64 time=0.039 ms
64 bytes from 10.96.16.251: icmp_seq=2 ttl=64 time=0.046 ms
^C
--- 10.96.16.251 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.039/0.042/0.046/0.007 ms

[root@hdp1 ~] curl 10.96.16.251:80
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

As I said above, though, it still cannot be reached by clients outside the cluster.

8. The Service usage above is about us calling the Service IP directly by hand. But if you try to ping the Service IP, you'll find that in the default mode it does not answer even from inside a Pod (the ClusterIP only serves its proxied ports). So when does the default mode "just work"? Where a Service always works for Pods is name resolution: the Service is registered in a DNS layer that every Pod in the cluster can find and resolve. Changing the proxy mode, as we did, does not affect this; ipvs is simply a more flexible mode. The default DNS behavior is still worth understanding, as in the example below:

[root@hdp1 ~] kubectl run client --image=busybox -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
/ # wget -O - -q http://mykung:80/
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
/ # ping mykung
PING mykung (10.96.16.251): 56 data bytes
64 bytes from 10.96.16.251: seq=0 ttl=64 time=0.031 ms
64 bytes from 10.96.16.251: seq=1 ttl=64 time=0.086 ms
64 bytes from 10.96.16.251: seq=2 ttl=64 time=0.082 ms
^C
--- mykung ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.031/0.066/0.086 ms
/ #

Note: whichever proxy mode you use, a Service's DNS name only resolves for Pods. Inside a Pod the short name mykung works because the cluster resolver fills in the rest; the full name is mykung.default.svc.cluster.local. Anywhere else the Service is just an IP with no DNS function, so don't go trying to use the Service's DNS name from outside.

9. If you want to know which container endpoints a Service is currently proxying, you can describe it:

[root@hdp1 ~] kubectl describe svc mykung
Name:              mykung
Namespace:         default
Labels:            run=mykung
Annotations:       <none>
Selector:          run=mykung
Type:              ClusterIP
IP:                10.96.16.251
Port:              <unset>  80/TCP
TargetPort:        80/TCP
Endpoints:         10.244.1.4:80
Session Affinity:  None
Events:            <none>

10. Quite often you need the label information on some resources; use the --show-labels flag:

[root@hdp1 ~] kubectl get pods --show-labels
NAME                      READY   STATUS    RESTARTS   AGE   LABELS
mykung-56cf8b9444-j7hhn   1/1     Running   1          20h   pod-template-hash=56cf8b9444,run=mykung

11. To edit any object in the cluster you can use the edit command, as we did above when changing the kube-proxy mode configuration:

kubectl edit cm kube-proxy -n kube-system

But not everything can be modified. If your change is not accepted by k8s, you get an error when you try to save:

"/tmp/kubectl-edit-ihudz.yaml" 30L, 774C written
A copy of your changes has been stored to "/tmp/kubectl-edit-ihudz.yaml"
error: Edit cancelled, no valid changes were saved.

12. In k8s the replica count of a Pod can be changed. The nginx Pod in the example above has only one replica; let's raise it to 8 with the scale command:

[root@hdp1 ~]# kubectl scale deployment mykung --replicas=8
deployment.apps/mykung scaled
[root@hdp1 ~]# kubectl get deployment
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
client   1/1     1            1           28m
mykung   8/8     8            8           24h

Once there is more than one replica, k8s takes care of load balancing when the Pods are accessed through their Service.

13. If the image version of some running Pod is getting old, you can update it in place with the set command. Before using set you need to know which container inside the Pod to change, and its container name, so query it first. Take the earlier mykung as the example:

[root@hdp1 ~] kubectl get pods
NAME                      READY   STATUS    RESTARTS   AGE
client-f44986c9-476w4     1/1     Running   3          43m
mykung-56cf8b9444-bvkxk   1/1     Running   0          24s
mykung-56cf8b9444-j7hhn   1/1     Running   1          21h
[root@hdp1 ~] kubectl describe pod mykung-56cf8b9444-bvkxk
Name:         mykung-56cf8b9444-bvkxk
Namespace:    default
Priority:     0
Node:         hdp3/192.168.88.188
Start Time:   Mon, 12 Dec 2022 16:22:31 +0800
Labels:       pod-template-hash=56cf8b9444
              run=mykung
Annotations:  <none>
Status:       Running
IP:           10.244.2.10
IPs:
  IP:           10.244.2.10
Controlled By:  ReplicaSet/mykung-56cf8b9444
Containers:
  mykung:
    Container ID:   docker://869a75982902a17ac74e897fa7c19c5203c2d244083b67316b8bf6c9453635f4
    Image:          nginx:1.14-alpine
    Image ID:       docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 12 Dec 2022 16:22:31 +0800
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-q6zw8 (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  default-token-q6zw8:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-q6zw8
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason     Age        From               Message
  ----    ------     ----       ----               -------
  Normal  Scheduled  <unknown>  default-scheduler  Successfully assigned default/mykung-56cf8b9444-bvkxk to hdp3
  Normal  Pulled     97s        kubelet, hdp3      Container image "nginx:1.14-alpine" already present on machine
  Normal  Created    97s        kubelet, hdp3      Created container mykung
  Normal  Started    97s        kubelet, hdp3      Started container mykung

The query shows this Pod contains a single container, also named mykung, so now you can change it. In the command, deployment mykung names the Deployment to upgrade; since that targets the controller, all the Pods it manages get upgraded. mykung=nginx:1.15-alpine is container-name=image:tag, and you can update several containers of a Pod at once by listing more such pairs.

[root@hdp1 ~] kubectl set image deployment mykung mykung=nginx:1.15-alpine
deployment.apps/mykung image updated

While the image update runs you can check its state with the rollout command: if it is still updating, it streams the progress; otherwise it prints the final result:

[root@hdp1 ~]# kubectl rollout status deployment mykung
deployment "mykung" successfully rolled out

When the containers finish updating, the old Pods are deleted, the controller regenerates the Pods, and the Deployment's revision number goes up by one:

[root@hdp1 ~] kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
client-f44986c9-476w4    1/1     Running   3          57m
mykung-d67fbb685-2bbbq   1/1     Running   0          2m57s
mykung-d67fbb685-rvcpd   1/1     Running   0          3m1s

[root@hdp1 ~] kubectl get deployment mykung
NAME     READY   UP-TO-DATE   AVAILABLE   AGE
mykung   2/2     2            2           24h

14. Rollback. A k8s image upgrade is really a rolling transaction toward a new state; if the new version turns out to have a bug, you can roll back. Before rolling back, check which revisions exist and what each one changed:

[root@hdp1 ~] kubectl rollout history deployment mykung
deployment.apps/mykung 
REVISION  CHANGE-CAUSE
2         <none>
3         <none>

[root@hdp1 ~] kubectl rollout history deployment mykung --revision=2
deployment.apps/mykung with revision #2
Pod Template:
  Labels:       pod-template-hash=d67fbb685
        run=mykung
  Containers:
   mykung:
    Image:      nginx:1.15-alpine
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>

[root@hdp1 ~] kubectl rollout history deployment mykung --revision=3
deployment.apps/mykung with revision #3
Pod Template:
  Labels:       pod-template-hash=56cf8b9444
        run=mykung
  Containers:
   mykung:
    Image:      nginx:1.14-alpine
    Port:       80/TCP
    Host Port:  0/TCP
    Environment:        <none>
    Mounts:     <none>
  Volumes:      <none>

The output above tells you which revision to go back to, so run the rollback; if you leave the revision out, it defaults to the previous one:

[root@hdp1 ~] kubectl rollout undo deployment mykung --to-revision=2
deployment.apps/mykung rolled back
[root@hdp1 ~] kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
client-f44986c9-476w4    1/1     Running   3          109m
mykung-d67fbb685-4zv9z   1/1     Running   0          36s
mykung-d67fbb685-v8mvm   1/1     Running   0          37s

15. In k8s you can enter a container inside a Pod just as with docker, and the same two methods are provided. The parameter details differ a little: exec needs -- before the command, and -c to choose the container. Most importantly, exiting does not stop the container.

kubectl attach (POD | TYPE/NAME) -c CONTAINER [options]
kubectl exec (POD | TYPE/NAME) [-c CONTAINER] [flags] -- COMMAND [args...] [options]

For k8s, prefer exec to enter a container: when the container is not running an interactive program, attach misbehaves rather than dropping you into a clean, usable session the way docker does. When using exec, also note that the only shell you can count on being present is /bin/sh.

16. One last point, again about Services. Earlier we changed the Service's proxy mode. Since defaults differ between versions, if your default mode was the iptables rule strategy, you should be able to see the corresponding forwarding entries in the iptables NAT table on the node hosting the Pod. Put bluntly, as I said before: the hosting node being able to reach the Service by default was not a feature k8s gave you, just entries sitting in the host's tables.

iptables -vnL -t nat

If, like this article, you changed the configuration, you should now only see the Pod-related entries there.

I also mentioned above that even after the change, devices outside the cluster still cannot reach the Pod. Making it externally reachable is easy: the Service above was created with the default ClusterIP type, and you only need to change it to NodePort. For example, edit the mykung Service from this article:

kubectl edit svc mykung

In the editor, change spec.type from ClusterIP to NodePort, then save.

After the change, query the Service again and you will find an extra port:

[root@hdp1 ~] kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        2d23h
mykung       NodePort    10.96.16.251   <none>        80:30198/TCP   25h

This extra port serves the same purpose as docker's -P flag: a host port is exposed and forwarded for you.

Note that this port can now be reached on any node of the k8s cluster. In real deployments, an external app or load balancer is usually placed in front to spread access across different nodes, so that a single k8s node going down does not break access.

Of course, the host port that gets mapped is not out of your control: you can modify it after it is generated, or set the ports explicitly when creating the Service, with parameters like:

kubectl expose deployment mytest --name=query --port=80 --target-port=8001