Kubernetes component: CoreDNS
https://github.com/coredns/coredns
https://coredns.io/plugins/

- A Kubernetes DNS resolution walkthrough



Deploying CoreDNS
- Download the official YAML manifest (or use the coredns plugin manifest that ships with a kubeasz cluster deployment)
#Download URL
https://github.com/coredns/deployment

- Modify the coredns plugin manifest
[root@K8s-ansible ~]#ll /usr/local/src/kubernetes/cluster/addons/dns/coredns/
total 44
drwxr-xr-x 2 root root 4096 Mar 15 14:01 ./
drwxr-xr-x 5 root root 4096 Mar 15 14:01 ../
-rw-r--r-- 1 root root 1075 Mar 15 14:01 Makefile
-rw-r--r-- 1 root root 5065 Mar 15 14:01 coredns.yaml.base
-rw-r--r-- 1 root root 5115 Mar 15 14:01 coredns.yaml.in
-rw-r--r-- 1 root root 5117 Mar 15 14:01 coredns.yaml.sed
-rw-r--r-- 1 root root 344 Mar 15 14:01 transforms2salt.sed
-rw-r--r-- 1 root root 287 Mar 15 14:01 transforms2sed.sed
#A Kubernetes cluster has a default DNS resolver address; check it from inside a pod by reading /etc/resolv.conf (options ndots:5 means names with fewer than five dots are expanded through the search domains before being queried verbatim)
[root@K8s-ansible ~]#kubectl exec -it net-test1 bash -n myserver
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test1 /]# cat /etc/resolv.conf
search myserver.svc.mooreyxia.local svc.mooreyxia.local mooreyxia.local
nameserver 10.100.0.2
options ndots:5
[root@net-test1 /]# exit
exit
#Configuration notes
--------------------------------------------------------
errors: error messages are written to standard output.
health: serves a health report for CoreDNS at http://localhost:8080/health.
ready: listens on port 8181; once all of CoreDNS's plugins are ready, this endpoint returns 200 OK.
kubernetes: CoreDNS answers DNS queries based on Kubernetes service names and returns the records to the client.
prometheus: CoreDNS metrics are exposed in Prometheus key-value format at http://localhost:9153/metrics.
forward: any query for a name outside the Kubernetes cluster is forwarded to a predefined upstream, e.g. /etc/resolv.conf or an IP such as 8.8.8.8.
cache: enables caching of resolved records; the value is the TTL in seconds.
loop: detects resolution loops, e.g. coredns forwards to an internal DNS server which in turn forwards back to coredns; if a loop is detected, the CoreDNS process is halted (kubernetes recreates it).
reload: watches the Corefile for changes; after the configmap is edited, the new config is gracefully reloaded, by default within about 2 minutes.
loadbalance: round-robins DNS answers, so if a name has multiple records they are rotated between responses.
--------------------------------------------------------
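For example, if you would rather pin the upstream resolvers than inherit whatever is in the node's /etc/resolv.conf, the forward plugin also accepts explicit IPs; a minimal sketch (the upstream addresses here are illustrative):
forward . 223.5.5.5 8.8.8.8 {
    max_concurrent 1000
}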
[root@K8s-ansible script]#cat coredns.yaml
...
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        #kubernetes __DNS__DOMAIN__ in-addr.arpa ip6.arpa {
        kubernetes mooreyxia.local in-addr.arpa ip6.arpa {  # cluster domain
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values: ["kube-dns"]
              topologyKey: kubernetes.io/hostname
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        # image: registry.k8s.io/coredns/coredns:v1.9.3
        image: K8s-harbor01./coredns/coredns:v1.9.3  # switched to the private harbor registry
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            # memory: __DNS__MEMORY__LIMIT__
            memory: 256Mi  # make sure memory is sufficient in production
            cpu: 200m  # make sure CPU is sufficient in production
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  # clusterIP: __DNS__SERVER__
  clusterIP: 10.100.0.2  # the cluster's default DNS resolver address
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
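Note that this clusterIP must match the DNS server address the kubelets write into each pod's /etc/resolv.conf (the kubelet clusterDNS setting); if the two disagree, pods point at a resolver that does not exist. A quick way to check on a node (the config path is an assumption and varies by installer):
grep -A1 clusterDNS /var/lib/kubelet/config.yaml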
#Apply the manifest
[root@K8s-ansible script]#kubectl apply -f coredns.yaml
serviceaccount/coredns created
clusterrole.rbac.authorization.k8s.io/system:coredns created
clusterrolebinding.rbac.authorization.k8s.io/system:coredns created
configmap/coredns created
deployment.apps/coredns created
service/kube-dns created
#The kube-dns Service is created
[root@K8s-ansible ~]#kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 27m
kube-system kube-dns ClusterIP 10.100.0.2 <none> 53/UDP,53/TCP,9153/TCP 11m
#The coredns Pod is running
[root@K8s-ansible ~]#kubectl get pod -A -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
...
kube-system coredns-6b6f6898b4-98prz 1/1 Running 0 50s 10.200.67.1 192.168.11.215 <none> <none>
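With the Pod up, you can also query the kube-dns service IP directly; a minimal check, assuming the test image ships nslookup:
kubectl exec -n myserver net-test1 -- nslookup kubernetes.default.svc.mooreyxia.local 10.100.0.2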
#Test DNS resolution from inside a Pod
[root@K8s-ansible ~]#kubectl exec -it net-test1 bash -n myserver
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@net-test1 /]# cat /etc/resolv.conf
search myserver.svc.mooreyxia.local svc.mooreyxia.local mooreyxia.local
nameserver 10.100.0.2
options ndots:5
[root@net-test1 /]# ping www.baidu.com
PING www.a.shifen.com (14.215.177.38) 56(84) bytes of data.
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=1 ttl=53 time=27.7 ms
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=2 ttl=53 time=26.4 ms
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=3 ttl=53 time=27.1 ms
64 bytes from 14.215.177.38 (14.215.177.38): icmp_seq=4 ttl=53 time=27.1 ms
^C
--- www.a.shifen.com ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 6998ms
rtt min/avg/max/mdev = 26.414/27.128/27.782/0.512 ms

CoreDNS performance tuning
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes mooreyxia.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        cache 30
        loop
        reload
        loadbalance
    }
---
#DNS resolution speed-ups
kind: Deployment
...
spec:
  replicas: NUMBER    # run multiple coredns replicas and balance the query load
  ...
      resources:
        limits:
          memory: __DNS__MEMORY__LIMIT__    # resource limit; 4Gi or more in production

#and keep caching enabled in the Corefile
cache 30    # DNS cache
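A sketch of applying these knobs on a running cluster (the replica count is illustrative; the reload plugin picks up Corefile edits within about 2 minutes):
kubectl -n kube-system scale deployment coredns --replicas=2
kubectl -n kube-system edit configmap coredns    # e.g. raise "cache 30" to a longer TTL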
Kubernetes component: dashboard

Deploying the official dashboard
https://github.com/kubernetes/dashboard
- Fetch the deployment manifest and install it
[root@K8s-ansible script]#cd dashboard-v2.7.0/
[root@K8s-ansible dashboard-v2.7.0]#wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
[root@K8s-ansible dashboard-v2.7.0]#ls
recommended.yaml
#Modify the configuration file
#1. Repoint the images to the private harbor registry
[root@K8s-ansible dashboard-v2.7.0]#cat recommended.yaml |grep harbor
image: K8s-harbor01./kubernetes/kubernetesui/dashboard:v2.7.0
image: K8s-harbor01./kubernetes/kubernetesui/metrics-scraper:v1.0.8
#2. Expose the service port externally
[root@K8s-ansible dashboard-v2.7.0]#cat recommended.yaml
...
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort  # expose the service externally
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000  # must fall inside the cluster's configured NodePort range
  selector:
    k8s-app: kubernetes-dashboard
#Create the resources
[root@K8s-ansible dashboard-v2.7.0]#kubectl apply -f recommended.yaml
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
[root@K8s-ansible ~]#kubectl get pod -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
dashboard-metrics-scraper-6c6f999b45-kzpvp 1/1 Running 0 2m56s
kubernetes-dashboard-fc76cd84f-b29zs 1/1 Running 0 2m56s
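A quick check that the Service picked up the NodePort (given the spec above it should show 443:30000/TCP):
kubectl get svc -n kubernetes-dashboard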
#Browse to https://192.168.11.214:30000/
- Create a login user and Token
#Create the user
[root@K8s-ansible dashboard-v2.7.0]#cat admin-user.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
[root@K8s-ansible dashboard-v2.7.0]#kubectl apply -f admin-user.yaml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
#Generate the user's login token
[root@K8s-ansible dashboard-v2.7.0]#cat admin-secret.yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/service-account-token
metadata:
  name: dashboard-admin-user
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: "admin-user"
[root@K8s-ansible dashboard-v2.7.0]#kubectl apply -f admin-secret.yaml
secret/dashboard-admin-user created
#Retrieve the login Token
[root@K8s-ansible dashboard-v2.7.0]#kubectl get secrets -A |grep dashboard-admin-user
kubernetes-dashboard dashboard-admin-user kubernetes.io/service-account-token 3 40s
[root@K8s-ansible dashboard-v2.7.0]#kubectl describe secrets dashboard-admin-user -n kubernetes-dashboard
Name: dashboard-admin-user
Namespace: kubernetes-dashboard
Labels: <none>
Annotations: kubernetes.io/service-account.name: admin-user
kubernetes.io/service-account.uid: e03c53f4-d159-4008-804b-970912fe556e
Type: kubernetes.io/service-account-token
Data
====
token: eyJhbGciOiJSUzI1NiIsImtpZCI6ImhIMHhCWW1iOFRhbXNjdDAyQUg5YVE3RUVuRjNxTDZReXhnUzJqbnRpTzQifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdXNlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJhZG1pbi11c2VyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiZTAzYzUzZjQtZDE1OS00MDA4LTgwNGItOTcwOTEyZmU1NTZlIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmVybmV0ZXMtZGFzaGJvYXJkOmFkbWluLXVzZXIifQ.lVHgpVsH0G0Rsq-OLST8zTeH48GlLUZDcPTjYSAh1MnOFDhylKofJUjjv68t0nkQ71xZnsqEs89qekakC1UfkTmpRgbHjRVisYdPPqO7Y-D6RqDJUC_FMArPRZaTONta7ZKCs6j99zp8VrFB4BajBdNvpXJ1YsawCFE6ZNssVkL2Wjdy8mkpb8xYQX1XDrEvFaNHX67IRkcQDiF-k8rZeSOVvHlqzHKgeeg4OBblb2yNwVDc8X6FdmZXfTvA768t9rkmq1VJ4U2dRBmHAgMNZN5iD4YjNphNkCMzAZQJm4glkxvAD7nDpGX6CT_4boskv4jHOITbkXUjDPpf_VZyJg
ca.crt: 1310 bytes
namespace: 20 bytes
#Copy the token and use it to log in
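Alternatively, a one-liner that prints just the token from the same secret:
kubectl -n kubernetes-dashboard get secret dashboard-admin-user -o jsonpath='{.data.token}' | base64 -d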

Deploying a third-party dashboard
- Rancher

Official documentation
https://rancher.com/quick-start

Official deployment example:
root@k8s-master1:~# sudo docker run --privileged -d --restart=unless-stopped -p 80:80 -p 443:443 rancher/rancher

- kuboard

Official documentation
https://kuboard.cn/learning/

Working logic

A user opens the server-side web UI from a client. The web UI sends requests through kube-apiserver, which reads and writes etcd; once the CRUD operation in etcd completes, the kubelet on each Node carries out the workload change, or kube-proxy updates the network rules, and the result flows back through kube-apiserver to the kuboard server, where the web UI renders it for the user.

Deployment walkthrough
#Official deployment documentation
https://kuboard.cn/install/v3/install-built-in.html#%E9%83%A8%E7%BD%B2%E8%AE%A1%E5%88%92

Prepare the deployment environment
#The machine that will run Kuboard v3.x already has docker installed, version no lower than 19.03
[root@K8s-ansible ~]#docker --version
Docker version 20.10.22, build 3a2c30b
#There is already a Kubernetes cluster, version no lower than v1.13
[root@K8s-ansible ~]#kubectl get node
NAME STATUS ROLES AGE VERSION
192.168.11.211 Ready,SchedulingDisabled master 3d18h v1.26.3
192.168.11.212 Ready,SchedulingDisabled master 3d18h v1.26.3
192.168.11.213 Ready,SchedulingDisabled master 3d18h v1.26.3
192.168.11.214 Ready node 3d18h v1.26.3
192.168.11.215 Ready node 3d18h v1.26.3
192.168.11.216 Ready node 3d18h v1.26.3

Deploy kuboard
#Deploy with a docker command
[root@K8s-ansible ~]#mkdir -p /data/kuboard-data
[root@K8s-ansible ~]#sudo docker run -d \
--restart=unless-stopped \
--name=kuboard \
-p 81:80/tcp \
-p 10081:10081/tcp \
-e KUBOARD_ENDPOINT="http://192.168.11.205:81" \
-e KUBOARD_AGENT_SERVER_TCP_PORT="10081" \
-v /data/kuboard-data:/data \
/kuboard/kuboard:v3
Unable to find image '/kuboard/kuboard:v3' locally
v3: Pulling from kuboard/kuboard
39cf15d1b231: Pull complete
29d298e3e9dd: Pull complete
b50323021d34: Pull complete
1bd2237dfc43: Pull complete
e2db41d875d4: Pull complete
0f6f2d6a4d02: Pull complete
f1da1074c6ab: Pull complete
8b7235e275ff: Pull complete
4cfd0f44a67c: Pull complete
94b550a6b553: Pull complete
12f8fd9235ff: Pull complete
ee6857bcab98: Pull complete
c64b52f7eb5b: Pull complete
8dcf7fdd2f94: Pull complete
431d52cb491f: Pull complete
cb5f5889fa9d: Pull complete
62b6a027a579: Pull complete
052b9a115af1: Pull complete
Digest: sha256:8531a62d0f21ca05fef1e70b2a74617c02c5ff52d1fd0c901ed8fc4e202e3c64
Status: Downloaded newer image for /kuboard/kuboard:v3
6ccc2d9d81c44bf1a644eee77a30d6656c167fec51167e749b16d608d6f78e63
#Confirm the listening ports
[root@K8s-ansible ~]#ss -nltp|grep 81
LISTEN 0 4096 0.0.0.0:10081 0.0.0.0:* users:(("docker-proxy",pid=1722,fd=4))
LISTEN 0 4096 0.0.0.0:81 0.0.0.0:* users:(("docker-proxy",pid=1742,fd=4))
LISTEN 0 4096 [::]:10081 [::]:* users:(("docker-proxy",pid=1729,fd=4))
LISTEN 0 4096 [::]:81 [::]:* users:(("docker-proxy",pid=1746,fd=4))
#Find the initial account in the logs
[root@K8s-ansible ~]#docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6ccc2d9d81c4 /kuboard/kuboard:v3 "/" 27 seconds ago Up 24 seconds 443/tcp, 0.0.0.0:10081->10081/tcp, :::10081->10081/tcp, 0.0.0.0:81->80/tcp, :::81->80/tcp kuboard
[root@K8s-ansible ~]#docker logs 6ccc2d9d81c4
Generated KUBOARD_SSO_CLIENT_SECRET: 26a66c40856f2992122c2870
Set the default password for KuboardAdmin (only set on first start): Kuboard123
mkdir: cannot create directory '/data': File exists
start kuboard-agent-server
...
#Default account
Username: admin
Password: Kuboard123

Access the Kuboard v3.x UI
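Browse to the KUBOARD_ENDPOINT configured above (http://192.168.11.205:81) and sign in with the default account.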


Adding a cluster

- Option 1: deploy an agent Pod into the cluster; Kuboard reaches the other Nodes and collects data through the agent

Implementation



- Option 2: grant the Kuboard server access credentials for the Kubernetes cluster directly; no agent is required




Viewing the cluster overview


I'm moore. Let's keep up the good work together!!!
















