kubeadm

kubeadm went GA (general availability) in Kubernetes 1.13, which signals that it is ready for widespread use.

kubeadm's biggest selling point is that its code lives inside the Kubernetes source tree itself.

The kubeadm binary is split into sub-functions (phases) that can be composed.

You can use kubeadm to generate only the certificates, or build your own cluster-deployment tooling on top of these sub-functions: Ansible + kubeadm init + kubeadm join, and so on.
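For example, the phase sub-commands can be run on their own (a sketch; check kubeadm init phase --help on your version for the exact phase names):

$ kubeadm init phase certs all          # generate only the cluster certificates
$ kubeadm init phase kubeconfig all     # generate only the kubeconfig files
$ kubeadm init phase control-plane all  # write only the control-plane static Pod manifests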

kubeadm also ships an upgrade guide, under kubeadm upgrade.

For more details about kubeadm, see https://github.com/kubernetes/kubeadm

 

step1: Initialize the master node.

The master node runs the control-plane components: etcd and the API server. Clients talk to the API server to schedule workloads and manage the state of the cluster.

Initialize the cluster with a known token to simplify the later steps.
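If you do not want to reuse a fixed token, a token in the required format can be generated first; for example:

$ kubeadm token generate   # prints a random token of the form <6 chars>.<16 chars>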

$ kubeadm init --token=102952.1a7dd4cc8d1f4cc5 --kubernetes-version $(kubeadm version -o short)
[init] Using Kubernetes version: v1.14.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [controlplane localhost] and IPs [172.17.0.9 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [controlplane localhost] and IPs [172.17.0.9 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [controlplane kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.17.0.9]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
// this step usually takes quite a while
[kubelet-check] Initial timeout of 40s passed.

// this step usually takes quite a while
[apiclient] All control plane components are healthy after 52.009390 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node controlplane as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node controlplane as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 102952.1a7dd4cc8d1f4cc5
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 172.17.0.9:6443 --token 102952.1a7dd4cc8d1f4cc5 \
    --discovery-token-ca-cert-hash sha256:95ba80049e057927af0e8bc9b3dbffba4f2350c311d55fc764ef6fa8bb6fc79d

To manage the Kubernetes cluster you need a client configuration and certificates. kubeadm creates this configuration when it initializes the cluster. The commands below copy that configuration into the user's home directory and set the KUBECONFIG environment variable so the CLI can use it.

$ cat /etc/kubernetes/admin.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJd01Ea3hNREl6TkRNMU4xb1hEVE13TURrd09ESXpORE0xTjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBS2t2ClV2WFlJRS9Ibmg4YW5mbm5PcGFtZktJMWhPaHpjQ3Ayd1ZRTnBjYStETm5OMVA4MUhWb25aanpyL29lNGVVNGYKaVdzZzltaXcwOWxLbG1EYzB1cHZYRVRlcmI3MFpuREt4UWI3OFIwdnhPMExlOHFFcjRiK01DYlpkMWZwdC9yMQo4cmVHQ3d2ekNiWDA5N0U2Q0kvNWY2NHU2UEZobUlBZFFmRm91NG1xaVlLRTUrSFFIeDdGT2ZJSC9OeEFTczV5CjVETS9JbjR4c0dEdzdxUVJvcTlEUWpCT3M5MktZNEZXazIxTzdMamdtRUlTTlhuSEU4Y2VqRVJYVldKcmpteVoKMi9USmdyZFo5ME9lbTU4NVdVS2ZEL1Jkb0w1Qk5MN1BBbXVNeEd6NzFQMDJ2d2owMlVuZTBUUmM0RW9yOUlBbgp1U3g4Y3FSNmI3TjMxYTJUeEkwQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFCQllGVlZYNWc4cGx1aFQ3Z1RCN1htdWo5MWYKRjl2WWZBSTdEQldjdlA3a0JrakJVQVNyRHNNWEwyZWE2SGN6Tmt6cTA2T3ZOOTN4S0Mza0Mxei80MmFZbDZUSApzeGJkN0NBYUcwUVpRQTVFL0FPV1N2Q21lRzVmL05OdnB0MFIyaHErS1hTSjVCQWFXUVg0amxFSDV6dkpOSzJMCnpTSFNoRVRDTlpBR3k4Y2JyZzNzSkt6dDFBNEU1NTRtOFAydXRka2F6SVh4VjRwMTVHbzArR2c2NmhrVzM2RDEKQ0hrOWJGMkp2cDNMMi84WU1WbjRpeWtiN2FVc2pXdVVJSWdmbE1SV3hPVVcxalhXNmM1aWFQdzBBM2NUalA0dQo4Z0JGODkwcXFLQ1ZlTDFBSWoyY2hRUU1Xa3hoSlVxdnFNUVBSL0g1MVdndS9vSnRtN01ZcmVrUFBxND0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://172.17.0.9:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJRGlRemFXdnRCTEV3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TURBNU1UQXlNelF6TlRkYUZ3MHlNVEE1TVRBeU16UXpOVGhhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTVCbkhsRmlOcUFBTnUvck0KL0NQYzk2RG5ZNkN2RVBHK3BpTGttYktNa2poMTlkd2hmYWJiZXg2bk5vN1ZMRFEzWG5YKzVob1J3emVwbVA1TgozUE02MWF4SkNPbTJjNDBzSFVzZ0xuT2Zub0IyUUtPWElldmZzZGdyOUNoeFJCWXlaRVQ0aXlKWVBxdEdxQlJGCmpiMExFVE11cEtNeXEyVlpWbEY2ejkxV1VXL2pYREdvVm1mL25UMnVNdDhqRWNzZzdwQ0I0emxUV0RCeHBJaGwKK1g4bDhFN0JDTXVuS1BxamxGdmk1bHZoOXJxeFYzay9UdFQ5SVg5dEFjdjZTVE5IMXZ6STJJN2JiL1hzbHpOWAowaFA5d0tPaytrRDh1L1JXblJaSW5mK3hKZGJrdGxyZGpFbkFLTnc0RWp4YUJUN0Z2U2E5aWlwaHAyQVoxSTFuCitBNnI1UUlEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFET1B5cEJBU21qVmxjeVgyaTgyWnpmczh3WTJ0SGJJaURMYwpvcklMVXNLZUNIU3lpRU9kUzBSV3hmdlZyVllTWEpZc3F5WnFHZGRvdUVyTXRoaGsrNjgzUS9yQnhnRkVMRDE5Cm9wVWwyYkhsMDY3Zzg5LzE5cFlxWWd3Uk1FZThIS0RIUWpUMVRYcjNndERTcnllNjVlcERacWNuTkRxWWhVNGsKMk1XTStqb2xsUWZMNkZqeVdVbEdDSllQVTVWNlh0WGpKV1J3c0NRVzIydnBLZDlxbms5aFZuaGtzdnBMaS9nYQoveFdFZWJWYTBSSmkyb3AyVndNcFJQRDN6Tm9jV1dZbkp6Zzlrb2k1dWtSZThzYkdGQVA0L3pQZmxuaUQyS1ljCmNsbjI0SGFqcno3Y21aMVZGMDJ5Zmk0akdjVHFJcFZ3ZG1SSDBNMFUrREZ5Z0I3Z0ZXaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb2dJQkFBS0NBUUVBNUJuSGxGaU5xQUFOdS9yTS9DUGM5NkRuWTZDdkVQRytwaUxrbWJLTWtqaDE5ZHdoCmZhYmJleDZuTm83VkxEUTNYblgrNWhvUnd6ZXBtUDVOM1BNNjFheEpDT20yYzQwc0hVc2dMbk9mbm9CMlFLT1gKSWV2ZnNkZ3I5Q2h4UkJZeVpFVDRpeUpZUHF0R3FCUkZqYjBMRVRNdXBLTXlxMlZaVmxGNno5MVdVVy9qWERHbwpWbWYvblQydU10OGpFY3NnN3BDQjR6bFRXREJ4cElobCtYOGw4RTdCQ011bktQcWpsRnZpNWx2aDlycXhWM2svClR0VDlJWDl0QWN2NlNUTkgxdnpJMkk3YmIvWHNsek5YMGhQOXdLT2sra0Q4dS9SV25SWkluZit4SmRia3RscmQKakVuQUtOdzRFanhhQlQ3RnZTYTlpaXBocDJBWjFJMW4rQTZyNVFJREFRQUJBb0lCQUVhOHgrdjFXbGpUUzI4VQpaQ1Y5YWJWUnJQQjBrRllNNGpiYmMxRkcwZGsyc1Q1QnVoRWhnY2M0eGxwaENUTGdMcHVZdENmZnhjcG9wS3ZSCmtZd0gwaU1aZnJ6STNkQVA0N0poN2VUNTduSlZIRmNIWklNY2h3NjhIMFZrbFZ1c0ZveUo1eG9lMkliMnpyNnAKS3JpOSs0U2wvcjBINzFxLzMyaXBkQkNxYjI1cFVuZWVwTU5uNXRqOWhSZ1l4VGdjRzlUbk9yRHkwRCsvMTkyTQpWTm9qdGVSU1UxbXNKT21TMHVrTEs0bXlrRTg1alZseFB4ZnlrTGc0S3h5WDJjZmd4cE01WEpvUW5KM1hLZ2tECmFIRWppTkVPdyt0eGZISmhjcnRJMmRtbkErbkZ5WFFRUTZUc04remVOQU4vcDB6MGZjc2dvam1lTStrRzZaTDgKVXFWZHd3RUNnWUVBNkpHRXV4WVJvZzBKRy9Gd1NPYkRGc01Bdmkrc0lGQWlUVlVkQlRmZUNxZ0tTTENhckg0bQp4bUpNamtUeHRpWk1IcjNBNlhnQ1NTUnZQUnRhNnZSdmk4ZWZQTSswbnhZTnVZekhXUlJNZm5PU0t2eXI3S2NFClZNbjVreTZZdHhZM0JqUE9YWG8vUTFEL1pkd1Q4VkNpOFB0d0Fwc0RkQWU4WEIzdllpWmxzTFVDZ1lFQSt4VUgKS01kMVVyZzBtRzYwWFlIdFBIMm1DZ2p1NE4rbGN2My9MSG90azM5aWM5eEE2eE82M2hxbjFjRFQ1c1FRZjdnZQpnY0krM21Cb3EyR1h1WWZzSEl1RENPU1BsV1lhU0V6cEI0QlB0ZHYyUWNqcDNEQXIyVnQwdHdRUzZyd0VNV3NaClJyZ0J3clk2MDRBR1VIYVd3aEd4VW5YU2tRY2tpaVdXdnpYRWZIRUNnWUF6OEN1Z1RIRnJxMVdaYy9ZTGtkMkgKdTh6eXJGclliSXo4a0VHRzVNOGx1aGx4Mmw3d25zdXlDa25taStjZk1yWlZOek5aOEg4eUxuelpQTDYxTWhtbgpNZEdTRnlEVFZtMkNQcnBXWG40bXoxQ1pZUXhVTVloNkZ4RXhtWHBwaVFDSTFoRUVOMFRobDdreDJsQnAyQVJMCnBSdUN1WE92K2ZwSzZEU0p5dUZ5OFFLQmdHWng2ZjNsaENWQUs1V1dkZGxCVGY3RWRaN2FqajBLZzRNcng1WHEKTS9aZW4xa09vUjVKYXBGODVzWDhhM2xZdmpLZWVUUVFnWDVTYitLZGF6NjBDczZLemVndStiYkhkaW5SMTdMTAowN29zQ1lwRjQ4V0hraHlaaUVHMFU5T2E2MHNPbTcyVERvVFh2YURXTjcxVTVhWkJlY3hmYm83bUR5NmVyNmRNCjFPTEJBb0dBTXBvczFyeGcwanovV0FYR2l6cm1vYVhuLzNnOG5LUGhFOHZ2dEc4WXZLVm1BRmw0dDNYWVJ1Y2MKdk81cG5CNVhlK3BKU29ScldDLzM2NkRSemhvV2VWU24ydlMvL0NmMjF0Ry9wWlpLc0hyTW5pZTdQWGM1UDNmeQpjR3dhZWJrUzZNTWh6a1lkdFowVjFTckdGYU0wWFFtVlkxMDFLOCtnYUZPcHRzd2c5VWc9Ci0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==
$ sudo cp /etc/kubernetes/admin.conf $HOME/
$ sudo chown $(id -u):$(id -g) $HOME/admin.conf
$ export KUBECONFIG=$HOME/admin.conf
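As a quick sanity check that the copied kubeconfig works (a sketch; the exact output varies by version):

$ kubectl cluster-info            # prints the API server and DNS endpoints
$ kubectl get componentstatuses   # health of scheduler, controller-manager and etcd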

 

step2: Deploy the Container Network Interface (CNI)

The Container Network Interface (CNI) defines how the different nodes and their workloads communicate with each other. Several plugins are available; here we will use Weave Net from Weaveworks. The deployment definition can be inspected with cat /opt/weave-kube.yaml.

For more details on Weave Net, see https://www.weave.works/docs/net/latest/kube-addon

$ cat /opt/weave-kube.yaml
apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: ServiceAccount
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
              "date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
      namespace: kube-system
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
              "date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
    rules:
      - apiGroups:
          - ''
        resources:
          - pods
          - namespaces
          - nodes
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - networking.k8s.io
        resources:
          - networkpolicies
        verbs:
          - get
          - list
          - watch
      - apiGroups:
          - ''
        resources:
          - nodes/status
        verbs:
          - patch
          - update
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
              "date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
    roleRef:
      kind: ClusterRole
      name: weave-net
      apiGroup: rbac.authorization.k8s.io
    subjects:
      - kind: ServiceAccount
        name: weave-net
        namespace: kube-system
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
              "date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
      namespace: kube-system
    rules:
      - apiGroups:
          - ''
        resourceNames:
          - weave-net
        resources:
          - configmaps
        verbs:
          - get
          - update
      - apiGroups:
          - ''
        resources:
          - configmaps
        verbs:
          - create
  - apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
              "date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
      namespace: kube-system
    roleRef:
      kind: Role
      name: weave-net
      apiGroup: rbac.authorization.k8s.io
    subjects:
      - kind: ServiceAccount
        name: weave-net
        namespace: kube-system
  - apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: weave-net
      annotations:
        cloud.weave.works/launcher-info: |-
          {
            "original-request": {
              "url": "/k8s/v1.10/net.yaml?k8s-version=v1.16.0",
              "date": "Mon Oct 28 2019 18:38:09 GMT+0000 (UTC)"
            },
            "email-address": "support@weave.works"
          }
      labels:
        name: weave-net
      namespace: kube-system
    spec:
      minReadySeconds: 5
      selector:
        matchLabels:
          name: weave-net
      template:
        metadata:
          labels:
            name: weave-net
        spec:
          containers:
            - name: weave
              command:
                - /home/weave/launch.sh
              env:
                - name: IPALLOC_RANGE
                  value: 10.32.0.0/24
                - name: HOSTNAME
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: spec.nodeName
              image: 'docker.io/weaveworks/weave-kube:2.6.0'
              readinessProbe:
                httpGet:
                  host: 127.0.0.1
                  path: /status
                  port: 6784
              resources:
                requests:
                  cpu: 10m
              securityContext:
                privileged: true
              volumeMounts:
                - name: weavedb
                  mountPath: /weavedb
                - name: cni-bin
                  mountPath: /host/opt
                - name: cni-bin2
                  mountPath: /host/home
                - name: cni-conf
                  mountPath: /host/etc
                - name: dbus
                  mountPath: /host/var/lib/dbus
                - name: lib-modules
                  mountPath: /lib/modules
                - name: xtables-lock
                  mountPath: /run/xtables.lock
            - name: weave-npc
              env:
                - name: HOSTNAME
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: spec.nodeName
              image: 'docker.io/weaveworks/weave-npc:2.6.0'
              resources:
                requests:
                  cpu: 10m
              securityContext:
                privileged: true
              volumeMounts:
                - name: xtables-lock
                  mountPath: /run/xtables.lock
          hostNetwork: true
          hostPID: true
          restartPolicy: Always
          securityContext:
            seLinuxOptions: {}
          serviceAccountName: weave-net
          tolerations:
            - effect: NoSchedule
              operator: Exists
          volumes:
            - name: weavedb
              hostPath:
                path: /var/lib/weave
            - name: cni-bin
              hostPath:
                path: /opt
            - name: cni-bin2
              hostPath:
                path: /home
            - name: cni-conf
              hostPath:
                path: /etc
            - name: dbus
              hostPath:
                path: /var/lib/dbus
            - name: lib-modules
              hostPath:
                path: /lib/modules
            - name: xtables-lock
              hostPath:
                path: /run/xtables.lock
                type: FileOrCreate
      updateStrategy:
        type: RollingUpdate

Deploy it with kubectl apply:

$ kubectl apply -f /opt/weave-kube.yaml
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.apps/weave-net created

Check the status of Weave Net:

controlplane $ kubectl get pod -n kube-system -l name=weave-net -o wide
NAME              READY   STATUS    RESTARTS   AGE     IP            NODE           NOMINATED NODE   READINESS GATES
weave-net-7x6rb   2/2     Running   0          4m26s     172.xxx.xx.xx    controlplane   <none>           <none>

step3: Join worker nodes to the cluster

Once the master and the CNI have been initialized, any additional node can join the cluster as long as it has the correct token. Tokens can be managed with kubeadm token:

$ kubeadm token list
TOKEN                     TTL       EXPIRES                USAGES                   DESCRIPTION                   EXTRA GROUPS
102952.1a7dd4cc8d1f4cc5   23h       2020-09-11T23:44:53Z   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
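If the original token has expired or is lost, a new one can be created on the master; for example:

$ kubeadm token create --print-join-command   # creates a fresh token and prints the matching kubeadm join command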

On the second node, run the join command, supplying the master node's IP address.

The --discovery-token-unsafe-skip-ca-verification flag bypasses verification of the cluster CA during discovery. In production, use the --discovery-token-ca-cert-hash value that kubeadm init printed instead.

Why skip it? Presumably only for convenience in this lab: without the CA hash, the joining node cannot verify that it is talking to the real cluster, so the flag should not be used in production.
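If you prefer to verify the CA instead of skipping it, the hash can be recomputed on the master with the standard openssl pipeline (assuming the default /etc/kubernetes/pki layout):

$ openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex | sed 's/^.* //'
# pass the result to kubeadm join as --discovery-token-ca-cert-hash sha256:<hash>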

node01 $ kubeadm join --discovery-token-unsafe-skip-ca-verification --token=102952.1a7dd4cc8d1f4cc5 172.17.0.9:6443
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
// this step can take quite a while

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

step4: View the worker nodes

The cluster is now initialized. The master node manages the cluster, while the worker node runs our container workloads.

The Kubernetes CLI, kubectl, can now use this configuration to access the cluster:

$ kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
controlplane   Ready    master   27m     v1.14.0
node01         Ready    <none>   9m17s   v1.14.0
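As a quick smoke test that the worker node can actually run workloads, you could create a small deployment (nginx is just an illustrative image):

$ kubectl create deployment nginx --image=nginx
$ kubectl get pods -o wide   # the pod should be scheduled onto node01 and reach Running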

 

 

kind

Kubernetes in Docker

kind uses Docker containers to build a local Kubernetes cluster.
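A minimal sketch of using kind (the cluster name dev is arbitrary):

$ kind create cluster --name dev   # creates a local cluster whose node runs as a Docker container
$ kind get clusters                # lists clusters managed by kind
$ kind delete cluster --name dev   # tears the cluster down again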

minikube: a single-node environment with the components packaged as images

minikube relies on virtualization: it creates a virtual machine with a hypervisor such as KVM or VMware, and then builds the Kubernetes cluster inside that VM.

$ minikube version
minikube version: v1.8.1
commit: cbda04cf6bbe65e987ae52bb393c10099ab62014
$ minikube start --wait=false
* minikube v1.8.1 on Ubuntu 18.04
* Using the none driver based on user configuration


* Running on localhost (CPUs=2, Memory=2460MB, Disk=145651MB) ...
* OS release is Ubuntu 18.04.4 LTS
* Preparing Kubernetes v1.17.3 on Docker 19.03.6 ...
  - kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
* Launching Kubernetes ...
* Enabling addons: default-storageclass, storage-provisioner
* Configuring local host environment ...
* Done! kubectl is now configured to use "minikube"
$ cat /etc/kubernetes/manifests/kube-apiserver.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-apiserver
    tier: control-plane
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-apiserver
    - --advertise-address=172.17.0.63
    - --allow-privileged=true
    - --authorization-mode=Node,RBAC
    - --client-ca-file=/var/lib/minikube/certs/ca.crt
    - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
    - --enable-bootstrap-token-auth=true
    - --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt
    - --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt
    - --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key
    - --etcd-servers=https://127.0.0.1:2379
    - --insecure-port=0
    - --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt
    - --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key
    - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
    - --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt
    - --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key
    - --requestheader-allowed-names=front-proxy-client
    - --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
    - --requestheader-extra-headers-prefix=X-Remote-Extra-
    - --requestheader-group-headers=X-Remote-Group
    - --requestheader-username-headers=X-Remote-User
    - --secure-port=8443
    - --service-account-key-file=/var/lib/minikube/certs/sa.pub
    - --service-cluster-ip-range=10.96.0.0/12
    - --tls-cert-file=/var/lib/minikube/certs/apiserver.crt
    - --tls-private-key-file=/var/lib/minikube/certs/apiserver.key
    image: k8s.gcr.io/kube-apiserver:v1.17.3
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 172.17.0.63
        path: /healthz
        port: 8443
        scheme: HTTPS
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-apiserver
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - mountPath: /etc/ssl/certs
      name: ca-certs
      readOnly: true
    - mountPath: /etc/ca-certificates
      name: etc-ca-certificates
      readOnly: true
    - mountPath: /etc/pki
      name: etc-pki
      readOnly: true
    - mountPath: /var/lib/minikube/certs
      name: k8s-certs
      readOnly: true
    - mountPath: /usr/local/share/ca-certificates
      name: usr-local-share-ca-certificates
      readOnly: true
    - mountPath: /usr/share/ca-certificates
      name: usr-share-ca-certificates
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/ssl/certs
      type: DirectoryOrCreate
    name: ca-certs
  - hostPath:
      path: /etc/ca-certificates
      type: DirectoryOrCreate
    name: etc-ca-certificates
  - hostPath:
      path: /etc/pki
      type: DirectoryOrCreate
    name: etc-pki
  - hostPath:
      path: /var/lib/minikube/certs
      type: DirectoryOrCreate
    name: k8s-certs
  - hostPath:
      path: /usr/local/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-local-share-ca-certificates
  - hostPath:
      path: /usr/share/ca-certificates
      type: DirectoryOrCreate
    name: usr-share-ca-certificates
status: {}
$ cat /etc/kubernetes/manifests/etcd.yaml
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: etcd
    tier: control-plane
  name: etcd
  namespace: kube-system
spec:
  containers:
  - command:
    - etcd
    - --advertise-client-urls=https://172.17.0.63:2379
    - --cert-file=/var/lib/minikube/certs/etcd/server.crt
    - --client-cert-auth=true
    - --data-dir=/var/lib/minikube/etcd
    - --initial-advertise-peer-urls=https://172.17.0.63:2380
    - --initial-cluster=minikube=https://172.17.0.63:2380
    - --key-file=/var/lib/minikube/certs/etcd/server.key
    - --listen-client-urls=https://127.0.0.1:2379,https://172.17.0.63:2379
    - --listen-metrics-urls=http://127.0.0.1:2381
    - --listen-peer-urls=https://172.17.0.63:2380
    - --name=minikube
    - --peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt
    - --peer-client-cert-auth=true
    - --peer-key-file=/var/lib/minikube/certs/etcd/peer.key
    - --peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt
    - --snapshot-count=10000
    - --trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt
    image: k8s.gcr.io/etcd:3.4.3-0
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /health
        port: 2381
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: etcd
    resources: {}
    volumeMounts:
    - mountPath: /var/lib/minikube/etcd
      name: etcd-data
    - mountPath: /var/lib/minikube/certs/etcd
      name: etcd-certs
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /var/lib/minikube/certs/etcd
      type: DirectoryOrCreate
    name: etcd-certs
  - hostPath:
      path: /var/lib/minikube/etcd
      type: DirectoryOrCreate
    name: etcd-data
status: {}

Differences between kubeadm and minikube:

kubeadm can add worker nodes with kubeadm join, so it can build a multi-node environment. minikube automates cluster deployment but only builds a single-node environment.

Integrated Kubernetes cluster provisioning tools:

kubespray

kops

Upgrades

Upgrades are best done as rolling upgrades combined with canary (partial) upgrades.

Rolling upgrade: step through the versions one minor release at a time.

Canary upgrade: upgrade only part of the cluster first, which minimizes the fallout from any bugs introduced by the upgrade.

Rolling upgrades are needed because you cannot jump straight to the latest version in one step (e.g. 1.9 → 1.19). Once the configuration-comparison problems are solved (deprecated flags, user-defined parameters), this limitation would no longer apply.
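A minimal sketch of one rolling-upgrade step with kubeadm, run on the control-plane node (the target version is illustrative, and the kubeadm package itself must be upgraded through your package manager first):

$ kubeadm upgrade plan            # shows which versions the cluster can be upgraded to
$ kubeadm upgrade apply v1.15.0   # upgrades the control plane by one minor version
# afterwards, upgrade kubelet and kubectl on every node and restart the kubelet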