Overview:

The official manifest uses a Deployment with replicas: 1, which starts a single nginx-ingress-controller pod on one node; external traffic reaches that node and is load-balanced to the Services behind it. To remove this single point of failure, change the Deployment to a DaemonSet, drop the replicas field, and use node affinity so the nginx-ingress-controller pod starts on designated nodes, guaranteeing that several nodes run it. In production it is advisable to taint the ingress nodes so that business pods cannot be scheduled onto them, avoiding resource contention between business applications and the ingress service. These nodes can then be added to an external hardware load-balancer group for high availability.
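As a sketch of that taint (the node names and the ingress=true:NoSchedule key/effect are illustrative), the ingress nodes could be tainted like this; note that the controller pod spec would then need a matching toleration, which the manifest later in this article does not include:

kubectl taint nodes k8s-node01 ingress=true:NoSchedule
kubectl taint nodes k8s-node02 ingress=true:NoSchedule
kubectl taint nodes k8s-node03 ingress=true:NoSchedule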
 

 

Cloud server approach:

1. Deploy the ingress-controller as a DaemonSet on the designated nodes, generally the k8s-node (worker) nodes; for cluster stability, deploying on master nodes is not recommended.

2. Provision an SLB load balancer with a highly available IP, resolve the access domain to it, and point it at the three ingress node hosts (bind the ingress node hosts to the backend server group) to achieve high availability.

The rough architecture is shown below:

[architecture diagram]

 

 

  

Self-hosted server approach

What follows targets a high-availability deployment on self-hosted enterprise servers (a cloud-server deployment is much the same).

 

Rough structure:

After adding the ingress to Kubernetes, set up public DNS so that the domain resolves to the server holding the data center's public IP (the nginx server), and configure nginx to forward to the keepalived VIP.

This way, clients outside the cluster can reach your services via the domain name, and the single point of failure is eliminated.

Note: this walkthrough targets a production setup in which the ingress servers have no public IP; the registered domain resolves to the Nginx server, so the ingress's own hostname goes unused.

Since spec.rules.host on the Ingress is not exercised, you can also skip configuring a domain there and let nginx reverse-proxy straight to the IPs of the ingress hosts.

Alternatively, make the VIP a public IP, point the domain at the VIP, and access via the domain (nginx is then unnecessary).

Pick the three Kubernetes nodes where the ingress is deployed and install keepalived on all of them.

[architecture diagram]

 

Edit the configuration file /etc/keepalived/keepalived.conf

Apart from the priority value, the configuration is identical on all three nodes.

Note: modify as shown below

[keepalived.conf screenshot]
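The screenshot is not reproduced here, so below is a minimal sketch of the file, assuming interface eth0 and VIP 192.168.3.100 (both hypothetical; adjust to your environment). On the other two nodes set state BACKUP and lower priority values (e.g. 90 and 80):

global_defs {
    router_id ingress-node01     # unique per node
}

vrrp_instance VI_1 {
    state MASTER                 # BACKUP on the other two nodes
    interface eth0               # NIC that will carry the VIP (assumed)
    virtual_router_id 51         # must match on all three nodes
    priority 100                 # lower on the backups, e.g. 90 and 80
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.3.100            # the VIP (assumed)
    }
}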

 

Start keepalived

systemctl start keepalived
systemctl enable keepalived
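To confirm the VIP is bound on the current MASTER node (interface name assumed):

ip addr show eth0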

 

Create a Deployment and Service

(used to test ingress requests to the backend business pods)

$ vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      name: nginx
  template:
    metadata:
      labels:
        name: nginx
    spec:
      containers:
      - name: nginx
        image: wangyanglinux/myapp:v3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: svc-1
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  selector:
    name: nginx  # matches pods labeled name=nginx

$ kubectl apply -f deployment.yaml
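Optionally verify that the test workload is up (the label and Service name come from the manifest above):

kubectl get pods -l name=nginx
kubectl get svc svc-1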

 

 

Install ingress-nginx-controller

Official install manifest:

https://github.com/kubernetes/ingress-nginx/tree/main/deploy/static/provider/baremetal/deploy.yaml

 

1. Label the nodes that should run the ingress

By default the scheduler may place the nginx-ingress-controller pod on any node; to pin it to specific nodes, first label the nodes that should run nginx-ingress-controller.

kubectl label nodes k8s-node01 edgenode=true
kubectl label nodes k8s-node02 edgenode=true
kubectl label nodes k8s-node03 edgenode=true

Check the node labels

kubectl get node --show-labels
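To list only the labeled nodes:

kubectl get nodes -l edgenode=true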

 

2. Install ingress-nginx-controller as a DaemonSet (modify the original ingress deployment YAML; note the changes listed below)

  • Change the Deployment to a DaemonSet
  • Comment out replicas  # a DaemonSet does not take this field
  • Add hostNetwork: true  # the pod uses the host's network and exposes port 80 directly on the host; note: port 80 on the host must not already be in use
  • Set dnsPolicy: ClusterFirstWithHostNet  # with hostNetwork alone the container uses the host's network, including its DNS, and cannot resolve in-cluster Services; this setting lets the pod keep hostNetwork while still using kube-dns as its default DNS
  • Add a node-affinity constraint (here a nodeSelector)
apiVersion: v1
kind: Namespace            # created first so the namespaced resources below apply in one pass
metadata:
  name: ingress-nginx
---

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    app: default-http-backend
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: default-http-backend 
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissible as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/defaultbackend:1.4
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
 
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: ingress-nginx
  labels:
    app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: default-http-backend
---

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      nodeSelector:
        edgenode: 'true'
      containers:
        - name: nginx-ingress-controller
          image: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:0.20.0
          args:
            - /nginx-ingress-controller
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
          - name: http
            containerPort: 80
          - name: https
            containerPort: 443
          # livenessProbe:
          #   failureThreshold: 3
          #   httpGet:
          #     path: /healthz
          #     port: 10254
          #     scheme: HTTP
          #   initialDelaySeconds: 10
          #   periodSeconds: 10
          #   successThreshold: 1
          #   timeoutSeconds: 1
          # readinessProbe:
          #   failureThreshold: 3
          #   httpGet:
          #     path: /healthz
          #     port: 10254
          #     scheme: HTTP
          #   periodSeconds: 10
          #   successThreshold: 1
          #   timeoutSeconds: 1
---


apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
 
---
 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
        - events
    verbs:
        - create
        - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
 
---
 
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
 
---
 
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx
 
---
 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
---

kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx

 

Apply the manifest

kubectl apply -f ingress-nginx.yaml

 

Check whether the installation succeeded

kubectl get ds -n ingress-nginx
kubectl get pods -n ingress-nginx -o wide
[root@master ingress]# kubectl get pod -n ingress-nginx -o wide
NAME                             READY   STATUS    RESTARTS   AGE   IP            NODE    NOMINATED NODE   READINESS GATES
nginx-ingress-controller-3sfom   1/1     Running   0          13m   192.168.3.1   node1   <none>           <none>
nginx-ingress-controller-5jdeq   1/1     Running   0          13m   192.168.3.2   node2   <none>           <none>
nginx-ingress-controller-1hdkr   1/1     Running   0          13m   192.168.3.3   node3   <none>           <none>

You can see that the three ingress-controller pods have been deployed, as we chose, onto the three node nodes, using the host's network (so the pod IPs equal the node IPs).
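As a quick sanity check, an HTTP request to any ingress node on port 80 whose Host matches no rule should be answered by the default backend with a 404 (node IP taken from the output above):

curl -I http://192.168.3.1/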

 

Ingress HTTPS proxy access

Create the secret holding the HTTPS certificate

kubectl create secret tls tls-secret --key tls.key --cert tls.crt
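The command above assumes tls.key and tls.crt already exist; for testing, a self-signed pair for www.test.com (the host used below) can be generated first, e.g.:

openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
  -keyout tls.key -out tls.crt -subj "/CN=www.test.com"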

 

Create the Ingress rule

$ vim https_ingress.yaml

apiVersion: extensions/v1beta1 
kind: Ingress 
metadata: 
  name: https 
spec: 
  tls:
  - hosts: 
    - www.test.com 
    secretName: tls-secret  # the name of the secret created above
  rules: 
    - host: www.test.com 
      http: 
        paths: 
        - path: / 
          backend: 
            serviceName: svc-1
            servicePort: 80

$ kubectl apply -f https_ingress.yaml
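To confirm the rule was created:

kubectl get ingress https
kubectl describe ingress https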

 

Then deploy keepalived on each of the three ingress servers and bring up the VIP, as covered above (omitted here).

 

Finally, deploy the nginx forwarder that proxies business request paths to the VIP, as sketched below.
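A minimal sketch of that forwarder, assuming the keepalived VIP is 192.168.3.100 (hypothetical) and that TLS is terminated by the ingress itself: nginx can pass TCP 443 straight through with the stream module (requires nginx built with --with-stream; the stream block sits at the top level of nginx.conf, alongside the http block):

stream {
    upstream ingress_vip {
        server 192.168.3.100:443;    # keepalived VIP (assumed)
    }
    server {
        listen 443;
        proxy_pass ingress_vip;
    }
}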

 

Test
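A hedged check, forcing the domain to resolve to the front nginx server's IP (203.0.113.10 is a placeholder); -k skips verification of the self-signed certificate:

curl -k --resolve www.test.com:443:203.0.113.10 https://www.test.com/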

 

A good memory is no match for a worn pen, and nothing is harder than persistence.