Kubernetes Service

The Service concept

A Kubernetes Service defines an abstraction: a logical grouping of Pods, together with a policy for accessing them, often called a micro-service. The set of Pods a Service routes to is usually determined by a Label Selector.
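As a quick sanity check of that grouping, a hedged sketch (it assumes the myapp Service and the app/release labels that are created later in this article):

# List the Pods whose labels match a Service's selector
kubectl get pods -l app=myapp,release=stabel -o wide

# The matching Pod IP:port pairs are recorded in the Service's Endpoints object
kubectl get endpoints myapp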



A Service can provide load balancing, but with the following limitation in use:

●It offers only layer-4 load balancing, with no layer-7 features. Sometimes we need richer matching rules (for example routing by hostname or URL path) to forward requests, and layer-4 load balancing cannot express them

Service types

There are four Service types in Kubernetes:

●ClusterIP: the default type; automatically allocates a virtual IP that is reachable only from within the cluster

●NodePort: builds on ClusterIP by additionally binding a port for the Service on every node, so the Service can be reached via <NodeIP>:NodePort

●LoadBalancer: builds on NodePort by using the cloud provider to create an external load balancer that forwards requests to <NodeIP>:NodePort

●ExternalName: maps a service outside the cluster into the cluster so it can be used directly from inside. No proxy of any kind is created; this is only supported by kube-dns in Kubernetes 1.7 or later


VIPs and Service proxies

Every node in a Kubernetes cluster runs a kube-proxy process. For every Service type other than ExternalName, kube-proxy is responsible for implementing a form of VIP (virtual IP). In Kubernetes v1.0 the proxy ran entirely in userspace. Kubernetes v1.1 added the iptables proxy, although it was not yet the default mode. From Kubernetes v1.2 on, iptables became the default proxy. Kubernetes v1.8.0-beta.0 added the ipvs proxy

The ipvs proxy graduated to GA in Kubernetes v1.11; note that iptables remains the default mode, so ipvs must be enabled explicitly (this article's cluster runs in ipvs mode)

In Kubernetes v1.0, a Service was a "layer 4" (TCP/UDP over IP) construct. Kubernetes v1.1 added the Ingress API (beta) to represent "layer 7" (HTTP) services
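For contrast with the layer-4 Service, a minimal hedged sketch of layer-7 routing using the current networking.k8s.io/v1 Ingress API; the Ingress name and hostname here are illustrative assumptions, and the backend is the myapp ClusterIP Service created later in this article:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress        # hypothetical name
spec:
  rules:
  - host: myapp.example.com  # matched against the HTTP Host header (layer 7)
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: myapp      # the ClusterIP Service this Ingress fronts
            port:
              number: 80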

Why not simply use round-robin DNS? Mainly because many clients and resolver libraries cache DNS results and do not honor record TTLs, so traffic would keep landing on stale or uneven sets of backends.

Proxy modes

1. userspace proxy mode

In this mode kube-proxy itself listens on a port on each node; iptables rules redirect Service traffic to kube-proxy, which then proxies the connection to a backend Pod from userspace. The extra kernel/userspace round trips make this the slowest mode.

2. iptables proxy mode

In this mode kube-proxy only installs iptables rules; the kernel's netfilter then redirects traffic destined for a Service's ClusterIP directly to a backend Pod, without the packets ever passing through the kube-proxy process.

3. ipvs proxy mode

In this mode, kube-proxy watches Kubernetes Service objects and Endpoints, calls the netlink interface to create ipvs rules accordingly, and periodically syncs the ipvs rules with the Service and Endpoints objects to ensure the ipvs state matches expectations. When the Service is accessed, traffic is redirected to one of the backend Pods

Like iptables, ipvs is based on netfilter's hook facility, but it uses a hash table as its underlying data structure and works in kernel space. This means ipvs can redirect traffic faster and has better performance when syncing proxy rules. In addition, ipvs offers more choices of load-balancing algorithm, for example:
● rr: round robin
● lc: least connections
● dh: destination hashing
● sh: source hashing
● sed: shortest expected delay
● nq: never queue

Note: ipvs mode assumes the IPVS kernel modules are already installed on the node before kube-proxy runs. When kube-proxy starts in ipvs proxy mode, it verifies whether the IPVS modules are installed on the node; if they are not, kube-proxy falls back to iptables proxy mode
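A hedged sketch of how that prerequisite is usually satisfied and ipvs mode switched on in a kubeadm cluster; the module list and the kube-proxy ConfigMap layout below are the common defaults, not something confirmed by this article's cluster output:

# Check whether the IPVS kernel modules are loaded
lsmod | grep -e ip_vs -e nf_conntrack

# Load them if they are missing
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh

# In a kubeadm cluster, set mode: "ipvs" (and optionally ipvs.scheduler)
# in the kube-proxy ConfigMap ...
kubectl edit configmap kube-proxy -n kube-system

# ... then recreate the kube-proxy Pods so they pick up the new mode
kubectl -n kube-system delete pod -l k8s-app=kube-proxy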


[root@k8s-master01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.192.131:6443         Masq    1      3          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.16:53               Masq    1      0          0         
  -> 10.244.0.17:53               Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.0.16:9153             Masq    1      0          0         
  -> 10.244.0.17:9153             Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.0.16:53               Masq    1      0          0         
  -> 10.244.0.17:53               Masq    1      0          0   
[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   6d13h

ClusterIP

ClusterIP mainly relies on iptables on each node to forward data sent to the ClusterIP's port into kube-proxy. kube-proxy internally implements the load-balancing method, can look up the addresses and ports of the Pods behind this Service, and forwards the data on to the address and port of the corresponding Pod



Implementing this behavior requires the cooperation of several components:

● apiserver: the user sends a create-Service command to the apiserver with kubectl; on receiving the request, the apiserver stores the data in etcd

● kube-proxy: every Kubernetes node runs a process called kube-proxy, which is responsible for noticing changes to Services and Pods and writing those changes into the node's local iptables rules

● iptables: uses NAT and related techniques to steer virtual-IP traffic to the Endpoints

Create the file myapp-deploy.yaml

vim myapp-deploy.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels: 
      app: myapp
      release: stabel
  template:
    metadata:
      labels:
        app: myapp
        release: stabel
        env: test
    spec:
      containers:
      - name: myapp
        image: ikubernetes/myapp:v2
        imagePullPolicy: IfNotPresent
        ports:
        - name: http
          containerPort: 80
[root@k8s-master01 ~]# kubectl apply -f myapp-deploy.yaml 
deployment.apps/myapp-deploy created
[root@k8s-master01 ~]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-659f64f98b-bn8sl   1/1     Running   0          78s
myapp-deploy-659f64f98b-fztt9   1/1     Running   0          78s
myapp-deploy-659f64f98b-wdsbw   1/1     Running   0          78s
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                            READY   STATUS    RESTARTS   AGE    IP            NODE         NOMINATED NODE   READINESS GATES
myapp-deploy-659f64f98b-bn8sl   1/1     Running   0          113s   10.244.1.82   k8s-node01   <none>           <none>
myapp-deploy-659f64f98b-fztt9   1/1     Running   0          113s   10.244.2.38   k8s-node02   <none>           <none>
myapp-deploy-659f64f98b-wdsbw   1/1     Running   0          113s   10.244.1.83   k8s-node01   <none>           <none>
[root@k8s-master01 ~]# curl 10.244.1.82
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>

Create the Service

vim myapp-service.yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: myapp1  # these two labels are supposed to match the Pod labels above; myapp1 deliberately does not, as we will see below
    release: stabel  # matches the Pod label above
  ports: 
  - name: http
    port: 80  # the port the Service exposes
    targetPort: 80  # the port on the backend container
[root@k8s-master01 ~]# kubectl apply -f myapp-service.yaml 
service/myapp created
[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP   6d14h
myapp        ClusterIP   10.106.140.139   <none>        80/TCP    10s
[root@k8s-master01 ~]# curl 10.106.140.139
curl: (7) Failed connect to 10.106.140.139:80; Connection refused
[root@k8s-master01 ~]# ipvsadm -Ln                        
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.192.131:6443         Masq    1      3          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.16:53               Masq    1      0          0         
  -> 10.244.0.17:53               Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.0.16:9153             Masq    1      0          0         
  -> 10.244.0.17:9153             Masq    1      0          0         
TCP  10.106.140.139:80 rr   # empty: the selector matched no Pods, so there are no backends
UDP  10.96.0.10:53 rr
  -> 10.244.0.16:53               Masq    1      0          0         
  -> 10.244.0.17:53               Masq    1      0          0         

# To fix it, delete the Service first; the yaml file can be used for the deletion
[root@k8s-master01 ~]# kubectl delete -f myapp-service.yaml 
service "myapp" deleted
# this is why important manifest files are worth keeping around
[root@k8s-master01 ~]# cd /usr/local/
[root@k8s-master01 local]# ls
apache-maven-3.6.3    bin  games    install-k8s   lib    libexec  share
apache-tomcat-9.0.30  etc  include  jdk1.8.0_231  lib64  sbin     src
[root@k8s-master01 local]# cd install-k8s/
[root@k8s-master01 install-k8s]# ls
core  plugin

Fix the selector in myapp-service.yaml (vim myapp-service.yaml)

apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  type: ClusterIP
  selector:
    app: myapp  # these two labels now match the Pod labels above
    release: stabel  # matches the Pod label above
  ports: 
  - name: http
    port: 80  # the port the Service exposes
    targetPort: 80  # the port on the backend container
[root@k8s-master01 ~]# kubectl apply -f myapp-service.yaml 
service/myapp created
[root@k8s-master01 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   6d14h
myapp        ClusterIP   10.103.61.43   <none>        80/TCP    17s
[root@k8s-master01 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 192.168.192.131:6443         Masq    1      3          0         
TCP  10.96.0.10:53 rr
  -> 10.244.0.16:53               Masq    1      0          0         
  -> 10.244.0.17:53               Masq    1      0          0         
TCP  10.96.0.10:9153 rr
  -> 10.244.0.16:9153             Masq    1      0          0         
  -> 10.244.0.17:9153             Masq    1      0          0         
TCP  10.103.61.43:80 rr   # populated: the selector now matches the three Pods
  -> 10.244.1.82:80               Masq    1      0          0         
  -> 10.244.1.83:80               Masq    1      0          0         
  -> 10.244.2.38:80               Masq    1      0          0         
UDP  10.96.0.10:53 rr
  -> 10.244.0.16:53               Masq    1      0          0         
  -> 10.244.0.17:53               Masq    1      0          0     
[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
myapp-deploy-659f64f98b-bn8sl   1/1     Running   0          18m   10.244.1.82   k8s-node01   <none>           <none>
myapp-deploy-659f64f98b-fztt9   1/1     Running   0          18m   10.244.2.38   k8s-node02   <none>           <none>
myapp-deploy-659f64f98b-wdsbw   1/1     Running   0          18m   10.244.1.83   k8s-node01   <none>           <none>

# automatic round-robin load balancing
[root@k8s-master01 ~]# curl 10.103.61.43
Hello MyApp | Version: v2 | <a href="hostname.html">Pod Name</a>
[root@k8s-master01 ~]# curl 10.103.61.43/hostname.html
myapp-deploy-659f64f98b-wdsbw
[root@k8s-master01 ~]# curl 10.103.61.43/hostname.html
myapp-deploy-659f64f98b-bn8sl
[root@k8s-master01 ~]# curl 10.103.61.43/hostname.html
myapp-deploy-659f64f98b-fztt9
[root@k8s-master01 ~]# curl 10.103.61.43/hostname.html
myapp-deploy-659f64f98b-wdsbw

Query flow

iptables -t nat -nvL
PREROUTING > KUBE-SERVICES > SVC > SEP
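In iptables proxy mode the same path can be walked by hand. A hedged sketch (this article's cluster actually runs ipvs, and the KUBE-SVC-... hash suffix is generated per Service, so the one below is a placeholder):

# Traffic entering the node hits the NAT PREROUTING chain first
iptables -t nat -nvL PREROUTING

# KUBE-SERVICES matches the destination ClusterIP:port and jumps to a
# per-Service KUBE-SVC-<hash> chain
iptables -t nat -nvL KUBE-SERVICES | grep 10.103.61.43

# The KUBE-SVC-<hash> chain picks a backend using random "statistic" rules;
# each KUBE-SEP-<hash> chain then DNATs to one Pod's IP:port
iptables -t nat -nvL KUBE-SVC-XXXXXXXXXXXXXXXX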

Headless Service

Sometimes you neither need nor want load balancing, nor a separate Service IP. In that case you can create a headless Service by setting the Cluster IP (spec.clusterIP) to "None". Such a Service is not allocated a Cluster IP, kube-proxy does not handle it, and the platform does no load balancing or routing for it
vi myapp-svc-headless.yaml

apiVersion: v1
kind: Service
metadata: 
  name: myapp-headless
  namespace: default
spec:
  selector:
    app: myapp
  clusterIP: "None"
  ports:
  - port: 80
    targetPort: 80


Resolution of the headless Service can be verified with dig against the cluster DNS, e.g. dig -t A myapp-headless.default.svc.cluster.local. @10.96.0.10; below we query one of the CoreDNS Pod IPs directly.
[root@k8s-master01 ~]# kubectl apply -f myapp-svc-headless.yaml 
service/myapp-headless created
[root@k8s-master01 ~]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-659f64f98b-bn8sl   1/1     Running   0          26m
myapp-deploy-659f64f98b-fztt9   1/1     Running   0          26m
myapp-deploy-659f64f98b-wdsbw   1/1     Running   0          26m
[root@k8s-master01 ~]# kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP   6d14h
myapp            ClusterIP   10.103.61.43   <none>        80/TCP    9m31s
myapp-headless   ClusterIP   None           <none>        80/TCP    27s
[root@k8s-master01 ~]# kubectl get pod -n kube-system
NAME                                   READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-4kj2t               1/1     Running   7          6d14h
coredns-5c98db65d4-7zsr7               1/1     Running   7          6d14h
etcd-k8s-master01                      1/1     Running   8          6d14h
kube-apiserver-k8s-master01            1/1     Running   8          6d14h
kube-controller-manager-k8s-master01   1/1     Running   7          6d14h
kube-flannel-ds-amd64-5chsx            1/1     Running   8          6d12h
kube-flannel-ds-amd64-8bxpj            1/1     Running   8          6d12h
kube-flannel-ds-amd64-g4gh9            1/1     Running   7          6d13h
kube-proxy-cznqr                       1/1     Running   7          6d12h
kube-proxy-mcsdl                       1/1     Running   8          6d12h
kube-proxy-t7v46                       1/1     Running   7          6d14h
kube-scheduler-k8s-master01            1/1     Running   7          6d14h


# install dig (part of bind-utils)
[root@k8s-master01 ~]# yum -y install bind-utils
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide
NAME                                   READY   STATUS    RESTARTS   AGE     IP                NODE           NOMINATED NODE   READINESS GATES
coredns-5c98db65d4-4kj2t               1/1     Running   7          6d14h   10.244.0.16       k8s-master01   <none>           <none>
coredns-5c98db65d4-7zsr7               1/1     Running   7          6d14h   10.244.0.17       k8s-master01   <none>           <none>
etcd-k8s-master01                      1/1     Running   8          6d14h   192.168.192.131   k8s-master01   <none>           <none>
kube-apiserver-k8s-master01            1/1     Running   8          6d14h   192.168.192.131   k8s-master01   <none>           <none>
kube-controller-manager-k8s-master01   1/1     Running   7          6d14h   192.168.192.131   k8s-master01   <none>           <none>
kube-flannel-ds-amd64-5chsx            1/1     Running   8          6d12h   192.168.192.129   k8s-node02     <none>           <none>
kube-flannel-ds-amd64-8bxpj            1/1     Running   8          6d12h   192.168.192.130   k8s-node01     <none>           <none>
kube-flannel-ds-amd64-g4gh9            1/1     Running   7          6d13h   192.168.192.131   k8s-master01   <none>           <none>
kube-proxy-cznqr                       1/1     Running   7          6d12h   192.168.192.130   k8s-node01     <none>           <none>
kube-proxy-mcsdl                       1/1     Running   8          6d12h   192.168.192.129   k8s-node02     <none>           <none>
kube-proxy-t7v46                       1/1     Running   7          6d14h   192.168.192.131   k8s-master01   <none>           <none>
kube-scheduler-k8s-master01            1/1     Running   7          6d14h   192.168.192.131   k8s-master01   <none>           <none>

[root@k8s-master01 ~]# dig -t A myapp-headless.default.svc.cluster.local. @10.244.0.16

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.9 <<>> -t A myapp-headless.default.svc.cluster.local. @10.244.0.16
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8817
;; flags: qr aa rd; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;myapp-headless.default.svc.cluster.local. IN A

;; ANSWER SECTION:
myapp-headless.default.svc.cluster.local. 30 IN A 10.244.1.82
myapp-headless.default.svc.cluster.local. 30 IN A 10.244.1.83
myapp-headless.default.svc.cluster.local. 30 IN A 10.244.2.38

;; Query time: 0 msec
;; SERVER: 10.244.0.16#53(10.244.0.16)
;; WHEN: 三 6月 01 10:23:04 CST 2022
;; MSG SIZE  rcvd: 237

[root@k8s-master01 ~]# kubectl get pod -o wide
NAME                            READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
myapp-deploy-659f64f98b-bn8sl   1/1     Running   0          35m   10.244.1.82   k8s-node01   <none>           <none>
myapp-deploy-659f64f98b-fztt9   1/1     Running   0          35m   10.244.2.38   k8s-node02   <none>           <none>
myapp-deploy-659f64f98b-wdsbw   1/1     Running   0          35m   10.244.1.83   k8s-node01   <none>           <none>

NodePort

NodePort works by opening a port on each node and directing traffic that arrives at that port into kube-proxy, which then forwards it on to one of the matching Pods
vi nodeport.yaml

apiVersion: v1
kind: Service
metadata:
  name: myapp
  namespace: default
spec:
  type: NodePort
  selector:
    app: myapp
    release: stabel
  ports:
  - name: http
    port: 80
    targetPort: 80
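When no port is pinned, Kubernetes allocates a random NodePort from the 30000-32767 range (30585 in the output below). A hedged variant of the ports section that fixes it explicitly; the value 30080 is an arbitrary assumption:

  ports:
  - name: http
    port: 80          # the Service (ClusterIP) port
    targetPort: 80    # the container port on the backend Pods
    nodePort: 30080   # pinned node port; must fall within 30000-32767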

Query flow

iptables -t nat -nvL
	KUBE-NODEPORTS   # the chain that matches traffic arriving on node ports
[root@k8s-master01 ~]# kubectl apply -f nodeport.yaml 
service/myapp configured
[root@k8s-master01 ~]# kubectl get pod
NAME                            READY   STATUS    RESTARTS   AGE
myapp-deploy-659f64f98b-bn8sl   1/1     Running   0          42m
myapp-deploy-659f64f98b-fztt9   1/1     Running   0          42m
myapp-deploy-659f64f98b-wdsbw   1/1     Running   0          42m
[root@k8s-master01 ~]# kubectl get svc
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP        6d14h
myapp            NodePort    10.103.61.43   <none>        80:30585/TCP   25m
myapp-headless   ClusterIP   None           <none>        80/TCP         15m

Open a browser and visit 192.168.192.131:30585, 192.168.192.130:30585, or 192.168.192.129:30585.

The Service can be reached through every node.



All three hosts show kube-proxy listening on the port

[root@k8s-master01 ~]# netstat -anpt | grep :30585
tcp6       0      0 :::30585                :::*                    LISTEN      2009/kube-proxy

On the master host

[root@k8s-master01 ~]# ipvsadm -Ln | grep 192.168.192.131
TCP  192.168.192.131:30585 rr
  -> 192.168.192.131:6443         Masq    1      3          0

Note: the second line actually belongs to the 10.96.0.1:443 kubernetes Service; its real server 192.168.192.131:6443 merely also matches the grep. The backends of 192.168.192.131:30585 are the three Pod IPs, which the grep filters out.

LoadBalancer (a paid cloud service)

loadBalancer and nodePort are in fact the same mechanism. The difference is that loadBalancer goes one step beyond nodePort: it can call the cloud provider to create an LB that directs traffic to the nodes
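A hedged sketch of the manifest; on a bare-metal cluster like the one in this article, with no cloud provider, the EXTERNAL-IP would simply stay <pending>:

apiVersion: v1
kind: Service
metadata:
  name: myapp-lb          # hypothetical name
  namespace: default
spec:
  type: LoadBalancer      # asks the cloud provider to provision an external LB
  selector:
    app: myapp
    release: stabel
  ports:
  - name: http
    port: 80
    targetPort: 80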


ExternalName

This type of Service maps the Service to the contents of its externalName field (for example: hub.atguigu.com) by returning a CNAME record with that value. An ExternalName Service is a special case of Service: it has no selector and defines no ports or Endpoints. Instead, for services running outside the cluster, it provides access by returning an alias for the external service
vi ex.yml

kind: Service
apiVersion: v1
metadata:
  name: my-service-1
  namespace: default
spec:
  type: ExternalName
  externalName: hub.atguigu.com

When the hostname my-service-1.default.svc.cluster.local (SVC_NAME.NAMESPACE.svc.cluster.local) is looked up, the cluster's DNS service returns a CNAME record with the value hub.atguigu.com. Accessing this Service works the same way as for any other, with the one difference that the redirection happens at the DNS layer; no proxying or forwarding takes place
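Besides the dig queries from the host shown below, resolution can also be checked from inside the cluster. A hedged one-liner with a throwaway busybox Pod (the Pod name and image tag are assumptions; busybox:1.28 is commonly used because nslookup is broken in some newer busybox builds):

kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never \
  -- nslookup my-service-1.default.svc.cluster.local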

[root@k8s-master01 ~]# kubectl create -f ex.yml 
service/my-service-1 created
[root@k8s-master01 ~]# kubectl get svc
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP       PORT(S)        AGE
kubernetes       ClusterIP      10.96.0.1      <none>            443/TCP        6d15h
my-service-1     ExternalName   <none>         hub.atguigu.com   <none>         19s
myapp            NodePort       10.103.61.43   <none>            80:30585/TCP   67m
myapp-headless   ClusterIP      None           <none>            80/TCP         58m
[root@k8s-master01 ~]# kubectl get pod -n kube-system -o wide
NAME                                   READY   STATUS    RESTARTS   AGE     IP                NODE           NOMINATED NODE   READINESS GATES
coredns-5c98db65d4-4kj2t               1/1     Running   7          6d15h   10.244.0.16       k8s-master01   <none>           <none>
coredns-5c98db65d4-7zsr7               1/1     Running   7          6d15h   10.244.0.17       k8s-master01   <none>           <none>
etcd-k8s-master01                      1/1     Running   8          6d15h   192.168.192.131   k8s-master01   <none>           <none>
kube-apiserver-k8s-master01            1/1     Running   8          6d15h   192.168.192.131   k8s-master01   <none>           <none>
kube-controller-manager-k8s-master01   1/1     Running   7          6d15h   192.168.192.131   k8s-master01   <none>           <none>
kube-flannel-ds-amd64-5chsx            1/1     Running   8          6d13h   192.168.192.129   k8s-node02     <none>           <none>
kube-flannel-ds-amd64-8bxpj            1/1     Running   8          6d13h   192.168.192.130   k8s-node01     <none>           <none>
kube-flannel-ds-amd64-g4gh9            1/1     Running   7          6d14h   192.168.192.131   k8s-master01   <none>           <none>
kube-proxy-cznqr                       1/1     Running   7          6d13h   192.168.192.130   k8s-node01     <none>           <none>
kube-proxy-mcsdl                       1/1     Running   8          6d13h   192.168.192.129   k8s-node02     <none>           <none>
kube-proxy-t7v46                       1/1     Running   7          6d15h   192.168.192.131   k8s-master01   <none>           <none>
kube-scheduler-k8s-master01            1/1     Running   7          6d15h   192.168.192.131   k8s-master01   <none>           <none>
[root@k8s-master01 ~]# dig -t A my-service-1.default.svc.cluster.local. @10.103.61.43

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.9 <<>> -t A my-service-1.default.svc.cluster.local. @10.103.61.43
;; global options: +cmd
;; connection timed out; no servers could be reached

# 10.103.61.43 is the myapp Service's ClusterIP, not a DNS server, so the
# query times out; querying a CoreDNS Pod IP instead succeeds:
[root@k8s-master01 ~]# dig -t A my-service-1.default.svc.cluster.local. @10.244.0.16

; <<>> DiG 9.11.4-P2-RedHat-9.11.4-26.P2.el7_9.9 <<>> -t A my-service-1.default.svc.cluster.local. @10.244.0.16
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 30414
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;my-service-1.default.svc.cluster.local.        IN A

;; ANSWER SECTION:
my-service-1.default.svc.cluster.local. 30 IN CNAME hub.atguigu.com.

;; Query time: 16 msec
;; SERVER: 10.244.0.16#53(10.244.0.16)
;; WHEN: 三 6月 01 11:15:27 CST 2022
;; MSG SIZE  rcvd: 134