Pod affinity and anti-affinity

The previous experiments showed that pod scheduling can be controlled through nodeName, nodeSelector, and node affinity, all of which place pods based on their relationship to nodes. In practice we also need to control how pods are scheduled relative to other pods.

This section tests pod affinity and anti-affinity.
First, prepare a base pod for the affinity rules to target:
[root@master-worker-node-1 pod]# cat pod-affinity-base-pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: pod-affinity-base-pod
  labels:
    func: pod-affinity
spec:
  containers:
  - name: pod-affinity-base-pod
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ['/bin/sh','-c','sleep 1234']
    
[root@master-worker-node-1 pod]# kubectl apply -f pod-affinity-base-pod.yaml 
pod/pod-affinity-base-pod created

[root@master-worker-node-1 pod]# kubectl get pods -o wide | grep pod-affinity
pod-affinity-base-pod   1/1     Running   0              69s     10.244.31.11   only-worker-node-3   <none>           <none>
Like node affinity, pod affinity supports two rule types:

requiredDuringSchedulingIgnoredDuringExecution: the scheduler places the Pod only when the rule is satisfied. This works like nodeSelector, but with a more expressive syntax.

preferredDuringSchedulingIgnoredDuringExecution: the scheduler tries to find a node that satisfies the rule; if no matching node is found, the Pod is scheduled anyway.
Pod affinity -- hard affinity (required)
The main fields for hard pod affinity are:
kubectl explain pod.spec.affinity.podAffinity.requiredDuringSchedulingIgnoredDuringExecution
FIELDS:
   labelSelector        <Object>
   namespaceSelector    <Object>
   namespaces   <[]string>
   topologyKey  <string> -required-
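Of these fields, only topologyKey is mandatory. namespaces and namespaceSelector control which namespaces the labelSelector is evaluated against (by default, only the new pod's own namespace). A minimal sketch of namespaceSelector, assuming namespaces carrying a hypothetical team=backend label:

  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: func
            operator: In
            values:
            - pod-affinity
        namespaceSelector:      # match base pods in any namespace labeled team=backend (hypothetical label)
          matchLabels:
            team: backend
        topologyKey: node-role.kubernetes.io/worker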
Hard affinity via labelSelector
[root@master-worker-node-1 pod]# cat test-pod-affinity-labelSeletor.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test-pod-affinity-by-labelselector
  labels:
    func: by-label-selector
spec:
  containers:
  - name: by-label-selector
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ['/bin/sh','-c','sleep 1234']
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: func
            operator: In
            values: 
            - pod-affinity
        topologyKey: node-role.kubernetes.io/worker   # defines a topology domain -- a "logical node" (informal term coined here)
[root@master-worker-node-1 pod]# kubectl apply -f test-pod-affinity-labelSeletor.yaml 
pod/test-pod-affinity-by-labelselector created

[root@master-worker-node-1 pod]# kubectl get pods -o wide |  grep pod-affinity
pod-affinity-base-pod                1/1     Running   2 (7m31s ago)   48m     10.244.31.11   only-worker-node-3   <none>           <none>
test-pod-affinity-by-labelselector   1/1     Running   0               15s     10.244.54.10   only-worker-node-4   <none>           <none>

The example above shows that the base pod runs on only-worker-node-3, yet the new pod landed on only-worker-node-4. This is where topologyKey comes in: nodes that carry the topologyKey label with the same value are grouped into one topology domain, the "logical node". As long as some pod within that logical node matches the labelSelector, the new pod may be scheduled onto any node in it. In this example, only-worker-node-3 and only-worker-node-4 both carry node-role.kubernetes.io/worker, so the two of them form a single logical node; a pod matching the labelSelector already runs inside it, so the new pod is schedulable and happened to land on only-worker-node-4.
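To see which nodes make up this logical node, list the nodes that carry the topologyKey label together with its value (nodes sharing the same value belong to the same domain; the exact output depends on your cluster):

kubectl get nodes -l node-role.kubernetes.io/worker -L node-role.kubernetes.io/worker

A node that lacks the label key altogether lies outside every such domain and is never considered by this rule.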

Pod affinity -- soft affinity (preferred)
[root@master-worker-node-1 pod]# cat test-pod-affinity-prefered.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test-prefered-1
  labels:
    func: test-preferred
spec:
  containers:
  - name: test-preferred-1
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ['/bin/sh','-c','sleep 3600']
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: func
              operator: In
              values:
              - pod-affinity
          topologyKey: node-role.kubernetes.io/worker
        weight: 50      # weight works like in node affinity preferred rules; it only matters when multiple terms compete (see the sketch at the end of this subsection)
[root@master-worker-node-1 pod]# kubectl apply -f test-pod-affinity-prefered.yaml 
pod/test-prefered-1 created

# Same outcome as before: topologyKey defines a logical node, and any placement inside that logical node satisfies the pod affinity. The base pod is on only-worker-node-3; the new pod ended up on only-worker-node-4.
[root@master-worker-node-1 pod]# kubectl get pods -o wide |  grep prefer
test-prefered-1                        1/1     Running   0               2m3s    10.244.54.11   only-worker-node-4   <none>           <none>
[root@master-worker-node-1 pod]# cat test-pod-affinity-prefered-2.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: test-prefered-2
  labels:
    func: test-preferred
spec:
  containers:
  - name: test-preferred-2
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ['/bin/sh','-c','sleep 3600']
  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: func
              operator: In
              values:
              - pod-affinity
          topologyKey: node-role.kubernetes.io/worker3456789 # no node carries this label key, so no such logical node exists
        weight: 50

Because the rule is only preferred, the pod is still created and scheduled normally:
[root@master-worker-node-1 pod]# kubectl get pods -o wide |  grep prefer
test-prefered-1                        1/1     Running   0               10m     10.244.54.11   only-worker-node-4   <none>           <none>
test-prefered-2                        1/1     Running   0               26s     10.244.31.13   only-worker-node-3   <none>           <none>
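As the weight comment above indicates, weight only differentiates candidates when several preferred terms compete: for each node, the scheduler adds up the weights of the terms satisfied within that node's domain and favors the highest total. A minimal sketch with two terms; the second label value, other-app, is hypothetical:

  affinity:
    podAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 80                 # co-locating with func=pod-affinity counts more
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: func
              operator: In
              values:
              - pod-affinity
          topologyKey: node-role.kubernetes.io/worker
      - weight: 20                 # co-locating with func=other-app (hypothetical label) counts less
        podAffinityTerm:
          labelSelector:
            matchExpressions:
            - key: func
              operator: In
              values:
              - other-app
          topologyKey: node-role.kubernetes.io/worker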
Pod anti-affinity -- podAntiAffinity
Besides flipping the operator field in a pod affinity rule to NotIn or DoesNotExist, anti-affinity can be expressed through the dedicated podAntiAffinity field.

Its syntax mirrors that of the affinity cases above:
kubectl explain pod.spec.affinity.podAntiAffinity
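A minimal sketch of a hard anti-affinity rule that keeps a new pod away from the base pod above (the pod name test-pod-antiaffinity is made up for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: test-pod-antiaffinity
  labels:
    func: anti-affinity
spec:
  containers:
  - name: test-pod-antiaffinity
    image: busybox:latest
    imagePullPolicy: IfNotPresent
    command: ['/bin/sh','-c','sleep 1234']
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - key: func
            operator: In
            values:
            - pod-affinity
        topologyKey: kubernetes.io/hostname   # each node is its own domain

Note the choice of topologyKey: kubernetes.io/hostname is unique per node, so only the node hosting the func=pod-affinity pod is excluded; using node-role.kubernetes.io/worker here would rule out the entire worker logical node at once.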
Summary

Pod affinity likewise splits into hard (required) and soft (preferred) rules, which behave just as they do for node affinity.

Pod affinity introduces the concept of topologyKey, which can be understood as a logical node; a logical node may contain zero, one, or many physical nodes.