Auto-generating Kubernetes YAML files



1. Just how complex are Kubernetes YAML files?

Kubernetes resources can be created, updated, and deleted using JSON or YAML files. Updates and deletions can be based on an existing file, but creation is much more variable: the files are often tedious to edit and easy to get wrong, and Kubernetes has so many configuration options that a small slip leads to mistakes. To write a good YAML file you need to know YAML syntax and be familiar with the many Kubernetes settings, which is hard for a Kubernetes beginner.

For example, here is a single YAML file that creates a Deployment, a Service, and an Ingress at the same time:

---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: test-yaml
  name: test-yaml
  namespace: freeswitch
spec:
  ports:
  - name: container-1-web-1
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: test-yaml
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  creationTimestamp: null
  name: test-yaml
spec:
  rules:
  - host: test.com
    http:
      paths:
      - backend:
          serviceName: test-yaml
          servicePort: 8080
        path: /
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test-yaml
  name: test-yaml
  namespace: freeswitch
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-yaml
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      annotations:
        info: test for yaml
      labels:
        app: test-yaml
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - test-yaml
              topologyKey: kubernetes.io/hostname
            weight: 100
      containers:
      - env:
        - name: TZ
          value: Asia/Shanghai
        - name: LANG
          value: C.UTF-8
        image: nginx
        imagePullPolicy: Always
        lifecycle: {}
        livenessProbe:
          failureThreshold: 2
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 8080
          timeoutSeconds: 2
        name: test-yaml
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP
        readinessProbe:
          failureThreshold: 2
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 8080
          timeoutSeconds: 2
        resources:
          limits:
            cpu: 195m
            memory: 375Mi
          requests:
            cpu: 10m
            memory: 10Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities: {}
          privileged: false
          procMount: Default
          readOnlyRootFilesystem: false
          runAsNonRoot: false
        volumeMounts:
        - mountPath: /usr/share/zoneinfo/Asia/Shanghai
          name: tz-config
        - mountPath: /etc/localtime
          name: tz-config
        - mountPath: /etc/timezone
          name: timezone
      dnsPolicy: ClusterFirst
      hostAliases:
      - hostnames:
        - www.baidu.com
        ip: 114.114.114.114
      imagePullSecrets:
      - name: myregistrykey
      - name: myregistrykey2
      restartPolicy: Always
      securityContext: {}
      volumes:
      - hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
          type: ""
        name: tz-config
      - hostPath:
          path: /etc/timezone
          type: ""
        name: timezone

This YAML covers a Service, an Ingress, and a Deployment with only commonly used settings and no advanced features, yet it is already over a hundred lines. With some advanced configuration added, or more than one container in the Deployment, the file grows even larger, which causes visual fatigue and makes it cumbersome and error-prone to modify.

2. Auto-generating YAML through a graphical interface

2.1 Installing Ratel, a graphical Kubernetes management tool

Here we use Ratel to generate the YAML files automatically. Ratel installation guide: https://github.com/dotbalo/ratel-doc/blob/master/cluster/Install.md

2.2 Generating YAML files with Ratel

2.2.1 Basic configuration

Once installed, Ratel can generate, create, and manage the commonly used core Kubernetes resources. For example, to create a Deployment:

Click Deployment and then Create.

You can then fill in basic configuration such as the Deployment name, replica count, and labels. You can also click "must deploy to different hosts" or "prefer to deploy to different hosts" to set up Pod anti-affinity.
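For reference, the basic form corresponds roughly to the following Deployment fields (a minimal sketch with illustrative values; the "prefer to deploy to different hosts" shortcut becomes a preferred pod anti-affinity rule, as in the full example above):

---
# Sketch: fields filled in by the basic configuration page (values are examples)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-yaml          # Deployment name
  namespace: default
  labels:
    app: test-yaml         # labels
spec:
  replicas: 3              # replica count
  selector:
    matchLabels:
      app: test-yaml
  template:
    metadata:
      labels:
        app: test-yaml
    spec:
      affinity:
        podAntiAffinity:   # "prefer to deploy to different hosts" shortcut
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - test-yaml
              topologyKey: kubernetes.io/hostname
      containers:
      - name: test-yaml
        image: nginx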

Some more complex settings can also be added here, such as kernel parameters (sysctls), tolerations, and node-affinity shortcuts.
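These advanced options end up in the pod spec; as a rough sketch (a fragment of spec.template.spec, with values taken from the generated example further below):

# Sketch: pod-spec fragment (spec.template.spec) for the advanced options
securityContext:                     # kernel parameters (sysctls)
  sysctls:
  - name: net.core.somaxconn
    value: "16384"
tolerations:                         # tolerations
- effect: NoSchedule
  key: node-role.kubernetes.io/master
  operator: Exists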

2.2.2 Affinity configuration

After the basic configuration is done, click NEXT to move on to the affinity configuration. If you used an affinity shortcut on the previous page, the affinity rules are generated here automatically, and you can edit them again, add to them, or delete them.
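As a sketch, the node-affinity rules produced on this page look roughly like this pod-spec fragment (the same structure appears in the generated manifest below; the label keys loki and master are just example values):

# Sketch: spec.template.spec.affinity fragment produced by the affinity page
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:   # "prefer" rule
    - weight: 100
      preference:
        matchExpressions:
        - key: loki
          operator: In
          values:
          - "true"
    requiredDuringSchedulingIgnoredDuringExecution:    # "must" rule
      nodeSelectorTerms:
      - matchExpressions:
        - key: master
          operator: NotIn
          values:
          - "true"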

2.2.3 Storage configuration

After the affinity configuration, click NEXT to go to the storage configuration. Volumes and projected volumes are currently supported, and volumes cover the common types such as ConfigMap, Secret, HostPath, PVC, NFS, and EmptyDir.
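As a sketch, a volumes section mixing a few of the supported types might look like this (the hostPath and NFS entries match the generated example below; the ConfigMap and EmptyDir entries are purely illustrative):

# Sketch: spec.template.spec.volumes fragment with several supported volume types
volumes:
- name: tz-config            # HostPath
  hostPath:
    path: /usr/share/zoneinfo/Asia/Shanghai
- name: nfs-test             # NFS
  nfs:
    server: 1.1.1.1
    path: /data/nfs
- name: cm-test              # ConfigMap (illustrative name)
  configMap:
    name: testcm
- name: cache                # EmptyDir (illustrative)
  emptyDir: {}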

2.2.4 Container configuration

Next comes the container configuration, which supports the common container settings, including slightly more complex ones, and you can add more than one container.
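A sketch of what one container entry covers (image, ports, environment, probes, resources; values are illustrative and mirror the generated example below):

# Sketch: one entry under spec.template.spec.containers
containers:
- name: test-yaml
  image: nginx
  imagePullPolicy: IfNotPresent
  ports:
  - name: web
    containerPort: 8080
    protocol: TCP
  env:
  - name: TZ
    value: Asia/Shanghai
  readinessProbe:            # HTTP readiness check
    httpGet:
      path: /
      port: 8080
    initialDelaySeconds: 30
    periodSeconds: 10
  livenessProbe:             # TCP liveness check
    tcpSocket:
      port: 8080
    initialDelaySeconds: 30
    periodSeconds: 10
  resources:
    requests:
      cpu: 10m
      memory: 10Mi
    limits:
      cpu: 500m
      memory: 512Mi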

2.2.5 Init container configuration

Init containers are configured in much the same way as regular containers.
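As a sketch, they land under spec.template.spec.initContainers and reuse the container fields (the image and command here are illustrative):

# Sketch: init containers share the container fields (image, command, env, resources, ...)
initContainers:
- name: init
  image: busybox            # illustrative image
  command: ["sh", "-c", "echo init"]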

2.2.6 Service and Ingress configuration

When creating a Deployment you can add a Service and an Ingress with one click. When a Service is added, the container's port configuration is read automatically, and when an Ingress is added, the Service configuration is read automatically.
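As a sketch of how the pieces line up: the Service's targetPort is taken from the container port, and the Ingress backend points at the Service's name and port (the same wiring as in the generated YAML below, which still uses the networking.k8s.io/v1beta1 Ingress API):

---
# Sketch: Service targetPort comes from the container port; Ingress points at the Service
apiVersion: v1
kind: Service
metadata:
  name: test-yaml
spec:
  selector:
    app: test-yaml
  ports:
  - name: container-1-web-1
    port: 8080
    targetPort: 8080        # read from containerPort 8080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-yaml
spec:
  rules:
  - host: test.com
    http:
      paths:
      - path: /
        backend:
          serviceName: test-yaml   # read from the Service above
          servicePort: 8080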

2.2.7 Creating the resources or generating the YAML files

Once all of the above is configured, you can either create the resources directly or generate the YAML files. If you click to generate the YAML, the Service, Ingress, and Deployment manifests are produced automatically and can be used as-is (for example with kubectl apply -f):

The generated content is as follows:

---
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: null
  labels:
    app: test-yaml
  name: test-yaml
  namespace: default
spec:
  ports:
  - name: container-1-web-1
    port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: test-yaml
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  creationTimestamp: null
  name: test-yaml
spec:
  rules:
  - host: test.com
    http:
      paths:
      - backend:
          serviceName: test-yaml
          servicePort: 8080
        path: /
status:
  loadBalancer: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: test-yaml
  name: test-yaml
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: test-yaml
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: test-yaml
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - preference:
              matchExpressions:
              - key: loki
                operator: In
                values:
                - "true"
            weight: 100
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: master
                operator: NotIn
                values:
                - "true"
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - test-yaml
            topologyKey: kubernetes.io/hostname
      containers:
      - args:
        - '*.jar --server.port=80'
        command:
        - java -jar
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: LANG
          value: C.UTF-8
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        envFrom:
        - configMapRef:
            name: testcm
        image: nginx
        imagePullPolicy: IfNotPresent
        lifecycle:
          postStart:
            exec:
              command:
              - echo "start"
          preStop:
            exec:
              command:
              - sleep 30
        livenessProbe:
          failureThreshold: 2
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 8080
          timeoutSeconds: 2
        name: test-yaml
        ports:
        - containerPort: 8080
          name: web
          protocol: TCP
        readinessProbe:
          failureThreshold: 2
          httpGet:
            httpHeaders:
            - name: a
              value: b
            path: /
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 2
        resources:
          limits:
            cpu: 493m
            memory: 622Mi
          requests:
            cpu: 10m
            memory: 10Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities: {}
          privileged: false
          procMount: Default
          readOnlyRootFilesystem: false
          runAsNonRoot: false
        volumeMounts:
        - mountPath: /usr/share/zoneinfo/Asia/Shanghai
          name: tz-config
        - mountPath: /etc/localtime
          name: tz-config
        - mountPath: /etc/timezone
          name: timezone
        - mountPath: /mnt
          name: nfs-test
      dnsPolicy: ClusterFirst
      initContainers:
      - args:
        - init
        command:
        - echo
        env:
        - name: TZ
          value: Asia/Shanghai
        - name: LANG
          value: C.UTF-8
        image: nignx-init
        imagePullPolicy: Always
        name: init
        resources:
          limits:
            cpu: 351m
            memory: 258Mi
          requests:
            cpu: 10m
            memory: 10Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities: {}
          privileged: false
          procMount: Default
          readOnlyRootFilesystem: false
          runAsNonRoot: false
        volumeMounts:
        - mountPath: /usr/share/zoneinfo/Asia/Shanghai
          name: tz-config
        - mountPath: /etc/localtime
          name: tz-config
        - mountPath: /etc/timezone
          name: timezone
      nodeSelector:
        ratel: "true"
      restartPolicy: Always
      securityContext:
        sysctls:
        - name: net.core.somaxconn
          value: "16384"
        - name: net.ipv4.tcp_max_syn_backlog
          value: "16384"
      tolerations:
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
        operator: Exists
      volumes:
      - name: projected-test
        projected:
          defaultMode: 420
          sources:
          - downwardAPI:
              items:
              - fieldRef:
                  fieldPath: metadata.name
                path: /opt/x
      - hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
          type: ""
        name: tz-config
      - hostPath:
          path: /etc/timezone
          type: ""
        name: timezone
      - name: nfs-test
        nfs:
          path: /data/nfs
          server: 1.1.1.1
status: {}

This YAML is somewhat more complex than the earlier one and includes some advanced configuration, which would be fairly tedious to write by hand, so generating it automatically with Ratel is convenient and avoids mistakes.

3. Auto-generating other resource files

Ratel currently supports auto-generating many resource files, such as Deployment, StatefulSet, DaemonSet, Service, Ingress, CronJob, Secret, ConfigMap, PV, and PVC, which greatly reduces our workload and the complexity of working with Kubernetes.