Official website

First, copy the manifests from the official website.

The CRD definition file is too long to paste here; copy it from the official website yourself.
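
If you prefer the command line, the CRD definitions can also be applied straight from the Traefik repository instead of copy-pasting (a sketch; the URL assumes the v3.0 branch still publishes the file at this path):

'kubectl apply -f https://raw.githubusercontent.com/traefik/traefik/v3.0/docs/content/reference/dynamic-configuration/kubernetes-crd-definition-v1.yml'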

'cat traefik-dep.yaml'

apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller

---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: traefik
  labels:
    app: traefik

spec:
  selector:
    matchLabels:
      app: traefik
  template:
    metadata:
      labels:
        app: traefik
    spec:
      serviceAccountName: traefik-ingress-controller
      containers:
        - name: traefik
          image: traefik:v3.0
          args:
            - --log.level=DEBUG
            - --api
            - --api.insecure
            - --entryPoints.web.address=:80
            - --entryPoints.tcpep.address=:8000
            - --entryPoints.udpep.address=:9000/udp
            - --providers.kubernetescrd
          ports:
            - name: web
              containerPort: 80
            - name: admin
              containerPort: 8080
            - name: tcpep
              containerPort: 8000
            - name: udpep
              containerPort: 9000

'cat rbac.yaml'

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: traefik-ingress-controller

rules:
  - apiGroups:
      - ""
    resources:
      - services
      - endpoints
      - secrets
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses
      - ingressclasses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - extensions
      - networking.k8s.io
    resources:
      - ingresses/status
    verbs:
      - update
  - apiGroups:
      - traefik.io
    resources:
      - middlewares
      - middlewaretcps
      - ingressroutes
      - traefikservices
      - ingressroutetcps
      - ingressrouteudps
      - tlsoptions
      - tlsstores
      - serverstransports
      - serverstransporttcps
    verbs:
      - get
      - list
      - watch

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: traefik-ingress-controller

roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
  - kind: ServiceAccount
    name: traefik-ingress-controller
    namespace: default

'cat service.yaml'

apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  selector:
    app: traefik
  ports:
    - protocol: TCP
      port: 80
      name: web
      targetPort: 80
    - protocol: TCP
      port: 8080
      name: admin
      targetPort: 8080
    - protocol: TCP
      port: 8000
      name: tcpep
      targetPort: 8000

---
apiVersion: v1
kind: Service
metadata:
  name: traefikudp
spec:
  type: LoadBalancer
  selector:
    app: traefik
  ports:
    - protocol: UDP
      port: 9000
      name: udpep
      targetPort: 9000
I split the official Traefik manifest into two files:

'service.yaml traefik-dep.yaml'

Deploy
[root@k8s-node1 ingressroute]# kubectl apply -f definition.yaml 
Warning: resource customresourcedefinitions/ingressroutes.traefik.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/ingressroutes.traefik.io configured
Warning: resource customresourcedefinitions/ingressroutetcps.traefik.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/ingressroutetcps.traefik.io configured
Warning: resource customresourcedefinitions/ingressrouteudps.traefik.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/ingressrouteudps.traefik.io configured
Warning: resource customresourcedefinitions/middlewares.traefik.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/middlewares.traefik.io configured
Warning: resource customresourcedefinitions/middlewaretcps.traefik.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/middlewaretcps.traefik.io configured
Warning: resource customresourcedefinitions/serverstransports.traefik.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/serverstransports.traefik.io configured
Warning: resource customresourcedefinitions/serverstransporttcps.traefik.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/serverstransporttcps.traefik.io configured
Warning: resource customresourcedefinitions/tlsoptions.traefik.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/tlsoptions.traefik.io configured
Warning: resource customresourcedefinitions/tlsstores.traefik.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/tlsstores.traefik.io configured
Warning: resource customresourcedefinitions/traefikservices.traefik.io is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
customresourcedefinition.apiextensions.k8s.io/traefikservices.traefik.io configured


[root@k8s-node1 ingressroute]# kubectl apply -f rbac.yaml 
clusterrole.rbac.authorization.k8s.io/traefik-ingress-controller created
clusterrolebinding.rbac.authorization.k8s.io/traefik-ingress-controller created
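
A quick sanity check that the ClusterRoleBinding really grants the ServiceAccount the expected permissions (a sketch; adjust the namespace if yours differs). Both commands should print "yes":

'kubectl auth can-i list ingressroutes.traefik.io --as=system:serviceaccount:default:traefik-ingress-controller'
'kubectl auth can-i watch services --as=system:serviceaccount:default:traefik-ingress-controller'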


[root@k8s-node1 ingressroute]# kubectl apply -f traefik-dep.yaml 
serviceaccount/traefik-ingress-controller created
daemonset.apps/traefik created
[root@k8s-node1 ingressroute]# kubectl apply -f service.yaml 
service/traefik created
service/traefikudp created

' kubectl get svc,pod |grep traefik'

service/traefik      LoadBalancer   10.254.43.35     <pending>     80:32694/TCP,8080:31030/TCP,8000:30800/TCP   16h
service/traefikudp   LoadBalancer   10.254.158.174   <pending>     9000:30887/UDP                               16h
pod/traefik-9h9fn                       1/1     Running   0             16h
pod/traefik-frfx2                       1/1     Running   0             16h
pod/traefik-vdhfv                       1/1     Running   0             16h
Everything is up. How do we access it?
In Kubernetes, the Service type determines how a Service is exposed. The commonly used Service types are:

- ClusterIP: exposes the Service on a cluster-internal virtual IP. A Service of this type is reachable only from inside the cluster and is invisible externally. This is the default type.

- NodePort: opens a fixed port on every cluster node and forwards traffic from that port to the Service. A Service of this type can be reached via any node's IP plus the assigned port.

- LoadBalancer: provisions a load balancer on the cloud platform and forwards external traffic to the Service. This type relies on load-balancing support from the cloud provider.

- ExternalName: maps the Service to another DNS record, so that the other service can be reached through this Service's name.

The traefik Service's type is LoadBalancer.

Access Traefik's web UI at 'http://nodeIP:31030'; '31030' is the NodePort on 'service/traefik' that maps to port '8080'.
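
To confirm from the command line which NodePort maps to which Service port (the port names come from service.yaml above):

'kubectl describe svc traefik | grep NodePort'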

Accessing via the node IP is not what we want; we want domain-based access through Traefik.
Before starting, it helps to understand the Traefik pod's port configuration.

Port reference (from the documentation):

            - name: web
              containerPort: 80
              hostPort: 80            # [added] expose the Traefik container's port 80 on the node [HTTP forwarding]
            - name: websecure         # [added] add HTTPS forwarding support [optional]
              containerPort: 443      # [added] port used inside the Traefik container [matches the entryPoint above] [optional]
              hostPort: 443           # [added] expose the Traefik container's port 443 on the node [HTTPS forwarding]
            - name: admin             # [added] strictly speaking optional
              containerPort: 8080     # [added] the Traefik dashboard port
              hostPort: 8080          # [added] expose the Traefik container's port 8080 on the node [avoid if possible; better exposed later through a normal route] [optional]
            - name: tcpep
              containerPort: 8000
              hostPort: 8000          # [added] expose the Traefik container's port 8000 on the node [TCP forwarding]
            - name: udpep
              containerPort: 9000
              hostPort: 9000          # [added] expose the Traefik container's port 9000 on the node [UDP forwarding]

Change the Service type to ClusterIP.
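
A sketch of the change: set 'type: ClusterIP' for both Services in service.yaml and re-apply (if the API server rejects the update because the old nodePort values are still set on the live objects, deleting and recreating the Services also works):

  type: ClusterIP    # was: LoadBalancer

'kubectl apply -f service.yaml'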

'kubectl get svc |grep trae'

traefik      ClusterIP      10.254.43.35     <none>        80/TCP,8080/TCP,8000/TCP   21h

At this point it can no longer be reached via the node IP.

Create an IngressRoute; the YAML file name can be anything.

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: myingressroute
  namespace: default

spec:
  entryPoints:
    - web

  routes:
  - match: Host(`traefik.local`)
    kind: Rule
    services:
    - name: traefik
      port: 8080

'kubectl get ingressroute'

NAME             AGE
myingressroute   93s

On the Windows machine used to access Traefik, add a hosts entry for the domain, then browse to "traefik.local".
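
For example, append a line like this to C:\Windows\System32\drivers\etc\hosts (192.168.1.11 is a placeholder; use one of your node IPs):

192.168.1.11 traefik.local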

The page fails to load with "HTTP ERROR 502".

Think about the current access path: 'Windows client' --- 'request by domain name' --- 'domain resolves to a node IP' --- 'to Traefik' --- 'forwarded to the Service' --- 'finally to the Pod serving the request'.

The first three steps are fine; the key question is how the traffic gets 'to Traefik'.

Setting hostPort maps the Pod's port directly onto the node, so hitting that port on the node reaches the Pod's port directly.

Modify traefik-dep.yaml to enable hostPort.

'cat traefik-dep.yaml |grep "hostPort" -A2 -B 2'

            - name: web
              containerPort: 80
              hostPort: 80

After the change, re-apply the YAML file; testing via the domain name now succeeds.
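
A quick check from any machine that can reach the node (192.168.1.11 again stands in for a node IP); it should now get a response from the Traefik dashboard service instead of the earlier 502:

'curl -i -H "Host: traefik.local" http://192.168.1.11/'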

Access flow: 'Windows client' --- 'request by domain name' --- 'domain resolves to a node IP' --- 'into the Traefik pod via hostPort 80' --- 'the IngressRoute routes the request to the Service' --- 'the Service forwards to the Pod serving the request' --- 'the response is returned'.

Even with hostPort configured, browsing to the bare node IP returns a '404'. The request does reach Traefik, but no router matches a request whose Host is just an IP, so Traefik answers with its default 404 page.
Use the official whoami example to test routing.

'cat whoami.yaml'

kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoami
  namespace: default
  labels:
    app: traefiklabs
    name: whoami

spec:
  replicas: 2
  selector:
    matchLabels:
      app: traefiklabs
      task: whoami
  template:
    metadata:
      labels:
        app: traefiklabs
        task: whoami
    spec:
      containers:
        - name: whoami
          image: traefik/whoami
          ports:
            - containerPort: 80

---
apiVersion: v1
kind: Service
metadata:
  name: whoami
  namespace: default

spec:
  ports:
    - name: http
      port: 80
  selector:
    app: traefiklabs
    task: whoami

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoamitcp
  namespace: default
  labels:
    app: traefiklabs
    name: whoamitcp

spec:
  replicas: 2
  selector:
    matchLabels:
      app: traefiklabs
      task: whoamitcp
  template:
    metadata:
      labels:
        app: traefiklabs
        task: whoamitcp
    spec:
      containers:
        - name: whoamitcp
          image: traefik/whoamitcp
          ports:
            - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: whoamitcp
  namespace: default

spec:
  ports:
    - protocol: TCP
      port: 8080
  selector:
    app: traefiklabs
    task: whoamitcp

---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: whoamiudp
  namespace: default
  labels:
    app: traefiklabs
    name: whoamiudp

spec:
  replicas: 2
  selector:
    matchLabels:
      app: traefiklabs
      task: whoamiudp
  template:
    metadata:
      labels:
        app: traefiklabs
        task: whoamiudp
    spec:
      containers:
        - name: whoamiudp
          image: traefik/whoamiudp:latest
          ports:
            - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: whoamiudp
  namespace: default

spec:
  ports:
    - port: 8080
  selector:
    app: traefiklabs
    task: whoamiudp

'kubectl apply -f whoami.yaml '

deployment.apps/whoami created
service/whoami created
deployment.apps/whoamitcp created
service/whoamitcp created
deployment.apps/whoamiudp created
service/whoamiudp created

'kubectl get pod,svc |grep whoami'

pod/whoami-84948b857c-7xp8q             1/1     Running   0             74s
pod/whoami-84948b857c-dpbnj             1/1     Running   0             75s
pod/whoamitcp-56cf4788fd-gwvxg          1/1     Running   0             75s
pod/whoamitcp-56cf4788fd-x8tcb          1/1     Running   0             74s
pod/whoamiudp-686fbfcb49-nqrw9          1/1     Running   0             74s
pod/whoamiudp-686fbfcb49-qncz5          1/1     Running   0             74s
service/whoami       ClusterIP      10.254.225.255   <none>        80/TCP                                       76s
service/whoamitcp    ClusterIP      10.254.248.1     <none>        8080/TCP                                     76s
service/whoamiudp    ClusterIP      10.254.223.239   <none>        8080/TCP  

Deploy the IngressRoute for whoami.

'cat whoami-ingress.yaml'

apiVersion: traefik.io/v1alpha1
kind: IngressRoute
metadata:
  name: whoingressroute
  namespace: default

spec:
  entryPoints:
    - web

  routes:
  - match: Host(`foo.test.com`) && PathPrefix(`/bar`)
    kind: Rule
    services:
    - name: whoami
      port: 80

'kubectl apply -f whoami-ingress.yaml'

ingressroute.traefik.io/whoingressroute created
Accessing it returns a 404.

The 404 is because 'PathPrefix(`/bar`)' only matches requests whose path starts with '/bar', and the browser is requesting '/'. Change 'Host(`foo.test.com`) && PathPrefix(`/bar`)' to 'Host(`foo.test.com`) && PathPrefix(`/`)' and access works (a curl sketch for keeping the '/bar' form follows the output below):

Hostname: whoami-84948b857c-n4bcq
IP: 127.0.0.1
IP: ::1
IP: 10.244.36.124
IP: fe80::b09c:a5ff:fec9:828c
RemoteAddr: 10.244.169.158:37158
GET / HTTP/1.1
Host: foo.test.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7
Accept-Encoding: gzip, deflate
Accept-Language: zh-CN,zh;q=0.9,en-US;q=0.8,en;q=0.7
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
X-Forwarded-For: 192.168.1.90
X-Forwarded-Host: foo.test.com
X-Forwarded-Port: 80
X-Forwarded-Proto: http
X-Forwarded-Server: traefik-lklb5
X-Real-Ip: 192.168.1.90
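
If you would rather keep the original 'PathPrefix(`/bar`)' rule, the prefix simply has to be present in the request path; a sketch with an explicit Host header (192.168.1.11 again stands in for a node IP):

'curl -H "Host: foo.test.com" http://192.168.1.11/bar'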