kube-master Installation

1. Firewall-related configuration

Disable SELinux, disable swap, and remove the firewalld-* packages.
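The swap half of this can be scripted. A minimal sketch of the persistent part, demonstrated on a scratch copy of fstab (the device names below are made up; on a real host point the sed at /etc/fstab and also run swapoff -a):

```shell
#!/bin/sh
# Runtime: 'swapoff -a' disables swap immediately, but only until reboot.
# Persistent: comment out swap entries in fstab. Shown here on a demo file
# with hypothetical device names.
cat > /tmp/fstab.demo <<'EOF'
/dev/mapper/rl-root /    xfs  defaults 0 0
/dev/mapper/rl-swap none swap defaults 0 0
EOF
sed -ri 's/^[^#].*\bswap\b.*/#&/' /tmp/fstab.demo
cat /tmp/fstab.demo
# SELinux: setenforce 0 now, plus SELINUX=disabled in /etc/selinux/config.
# firewalld: yum -y remove "firewalld-*"
```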

2. Configure the yum repository so that the master and the three nodes can install the required software with yum -y install docker kubeadm kubelet kubectl... If the hosts can reach the public internet, a local yum repository is optional, but one is still recommended to keep versions consistent across machines (upload all required packages to one directory, then run createrepo . in it).

Point the yum configuration on master, node1, node2, and node3 at the mirror repository.
[root@master ~]# vim /etc/yum.repos.d/local.repo
[k8s]
name=k8s
baseurl=ftp://192.168.1.252/localrepo
enabled=1
gpgcheck=0

3. Install the packages on the master

Install kubeadm, kubectl, kubelet, and docker-ce:
[root@master ~]# yum install -y kubeadm kubelet kubectl docker-ce
[root@master ~]# mkdir -p /etc/docker
[root@master ~]# vim /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
[root@master ~]# systemctl enable --now docker kubelet
[root@master ~]# docker info |grep Cgroup
Cgroup Driver: systemd
[root@master ~]# vim /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@master ~]# modprobe br_netfilter
[root@master ~]# sysctl --system
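Note that modprobe does not persist across reboots, and the two bridge-nf sysctls above cannot apply while br_netfilter is unloaded. One common way to load the module at boot is a systemd modules-load drop-in (the file name k8s.conf is arbitrary):

```
# /etc/modules-load.d/k8s.conf -- read by systemd-modules-load at boot
br_netfilter
```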

4. Import the images into the private registry

      Which images? Answer: the images that kubeadm init uses, namely: kube-proxy:v1.17.6, kube-apiserver:v1.17.6, kube-controller-manager:v1.17.6, kube-scheduler:v1.17.6, coredns:1.6.5, etcd:3.4.3-0, pause:3.1

[root@master ~]# vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --insecure-registry 192.168.1.100:80 # point Docker at the private registry; this can also be configured in daemon.json instead.
[root@master ~]# systemctl daemon-reload && systemctl enable docker && systemctl restart docker
Log in to Harbor (if Harbor is not running, start it on the Harbor host first):
[root@master ~]# docker login http://192.168.1.100:80
Username: admin
Password:
Login Succeeded
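As noted above, daemon.json can replace the --insecure-registry flag. That variant merges with the cgroup-driver setting already placed in the file (restart docker afterwards):

```json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "insecure-registries": ["192.168.1.100:80"]
}
```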

On the master host:
Download the images/components k8s needs into base-images/.
Note: if the host cannot reach the internet, use a registry mirror (for example an Aliyun accelerator) to speed up the pulls; with direct internet access you can docker pull the images below directly.

# List the images a given k8s version needs (use the version being deployed, v1.17.6 here)
kubeadm config images list --kubernetes-version v1.17.6

[root@master ~]# cd base-images/

[root@master base-image]# for i in *.tar.gz;do docker load -i ${i};done
[root@master base-image]# docker images
[root@master base-image]# docker tag k8s.gcr.io/kube-proxy:v1.17.6 192.168.1.100:80/library/k8s.gcr.io/kube-proxy:v1.17.6
[root@master base-image]# docker tag k8s.gcr.io/kube-apiserver:v1.17.6 192.168.1.100:80/library/k8s.gcr.io/kube-apiserver:v1.17.6
[root@master base-image]# docker tag k8s.gcr.io/kube-controller-manager:v1.17.6 192.168.1.100:80/library/k8s.gcr.io/kube-controller-manager:v1.17.6
[root@master base-image]# docker tag k8s.gcr.io/kube-scheduler:v1.17.6 192.168.1.100:80/library/k8s.gcr.io/kube-scheduler:v1.17.6
[root@master base-image]# docker tag k8s.gcr.io/coredns:1.6.5 192.168.1.100:80/library/k8s.gcr.io/coredns:1.6.5
[root@master base-image]# docker tag k8s.gcr.io/etcd:3.4.3-0 192.168.1.100:80/library/k8s.gcr.io/etcd:3.4.3-0
[root@master base-image]# docker tag k8s.gcr.io/pause:3.1 192.168.1.100:80/library/k8s.gcr.io/pause:3.1
[root@master base-image]# docker push 192.168.1.100:80/library/k8s.gcr.io/kube-proxy:v1.17.6
[root@master base-image]# docker push 192.168.1.100:80/library/k8s.gcr.io/kube-apiserver:v1.17.6
[root@master base-image]# docker push 192.168.1.100:80/library/k8s.gcr.io/kube-controller-manager:v1.17.6
[root@master base-image]# docker push 192.168.1.100:80/library/k8s.gcr.io/kube-scheduler:v1.17.6
[root@master base-image]# docker push 192.168.1.100:80/library/k8s.gcr.io/coredns:1.6.5
[root@master base-image]# docker push 192.168.1.100:80/library/k8s.gcr.io/etcd:3.4.3-0
[root@master base-image]# docker push 192.168.1.100:80/library/k8s.gcr.io/pause:3.1
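The seven tag/push pairs above are mechanical, so a small loop can generate them. This sketch only prints the commands; pipe the output through sh to actually run them:

```shell
#!/bin/sh
# Build the tag/push command list for every image kubeadm init needs.
# Registry and image names match the session above.
REGISTRY="192.168.1.100:80/library"
IMAGES="k8s.gcr.io/kube-proxy:v1.17.6
k8s.gcr.io/kube-apiserver:v1.17.6
k8s.gcr.io/kube-controller-manager:v1.17.6
k8s.gcr.io/kube-scheduler:v1.17.6
k8s.gcr.io/coredns:1.6.5
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/pause:3.1"
cmds=""
for img in $IMAGES; do            # default IFS splits on the newlines above
  cmds="${cmds}docker tag $img $REGISTRY/$img
docker push $REGISTRY/$img
"
done
printf '%s' "$cmds"
# To execute instead of print: printf '%s' "$cmds" | sh
```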
Test that the private registry works.
Repeat the same steps on node-0001, node-0002, and node-0003:
[root@node-0001 ~]# yum -y install docker-ce
[root@node-0001 ~]# mkdir -p /etc/docker
[root@node-0001 ~]# vim /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
[root@node-0001 ~]# vim /usr/lib/systemd/system/docker.service
ExecStart=/usr/bin/dockerd --insecure-registry 192.168.1.100:80
[root@node-0001 ~]# systemctl daemon-reload && systemctl enable docker && systemctl restart docker
Log in to Harbor:
[root@node-0001 ~]# docker login http://192.168.1.100:80
Username: admin
Password:
Login Succeeded

5. Enable kubectl/kubeadm tab completion on the master

[root@master ~]# kubectl completion bash >/etc/bash_completion.d/kubectl
[root@master ~]# kubeadm completion bash >/etc/bash_completion.d/kubeadm
[root@master ~]# exit    # log in again so the completion scripts take effect

6. Install the IPVS proxy packages

[root@master ~]# yum install -y ipvsadm ipset

7. Configure host name resolution

[root@master ~]# vim /etc/hosts
192.168.1.21 master
192.168.1.31 node-0001
192.168.1.32 node-0002
192.168.1.33 node-0003
192.168.1.100 harbor

8. Deploy k8s with kubeadm

[root@master ~]# mkdir init;cd init

# Generate a default kubeadm-config.yaml:
kubeadm config print init-defaults > kubeadm-config.yaml

[root@master init]# vim kubeadm-config.yaml

Edit it as follows:

apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.21
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: 192.168.1.100:80/library/k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.17.6
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.254.0.0/16
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs

[root@master init]# kubeadm init --config=kubeadm-config.yaml |tee master-init.log
# Run the follow-up commands printed in the init output:
[root@master init]# mkdir -p $HOME/.kube
[root@master init]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master init]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

9. Verify the installation

[root@master ~]# kubectl version
[root@master ~]# kubectl get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}

 

Worker node installation (ansible is used to simplify the process; the ansible configuration file itself is not listed):

Note: the playbook's basic steps are: disable swap, SELinux, and firewalld; install the required packages; and apply the related configuration (see the yml below):

---
- name: install kubernetes worker nodes
  hosts:
  - nodes
  vars:
    master: '192.168.1.21:6443'
    token: 'fm6kui.mp8rr3akn74a3nyn'
    token_hash: 'sha256:f46dd7ee29faa3c096cad189b0f9aedf59421d8a881f7623a543065fa6b0088c'
  tasks:
  - name: disable swap
    lineinfile:
      path: /etc/fstab
      regexp: 'swap'
      state: absent
    notify: disable swap
  - name: Ensure SELinux is set to disabled mode
    lineinfile:
      path: /etc/selinux/config
      regexp: '^SELINUX='
      line: SELINUX=disabled
    notify: disable selinux
  - name: remove the firewalld
    yum:
      name:
      - firewalld 
      - firewalld-filesystem
      state: absent
  - name: install k8s node tools
    yum:
      name:
      - kubeadm
      - kubelet
      - docker-ce
      - ipvsadm
      - ipset
      state: present
      update_cache: yes
  - name: Create a directory if it does not exist
    file:
      path: /etc/docker
      state: directory
      mode: '0755'
  - name: Copy file with /etc/hosts
    copy:
      src: files/hosts
      dest: /etc/hosts
      owner: root
      group: root
      mode: '0644'
  - name: Copy file with /etc/docker/daemon.json
    copy:
      src: files/daemon.json
      dest: /etc/docker/daemon.json
      owner: root
      group: root
      mode: '0644'
  - name: Copy file with /etc/sysctl.d/k8s.conf
    copy:
      src: files/k8s.conf
      dest: /etc/sysctl.d/k8s.conf
      owner: root
      group: root
      mode: '0644'
    notify: enable sysctl args
  - name: enable k8s node service
    service:
      name: "{{ item }}"
      state: started
      enabled: yes
    with_items:
    - docker
    - kubelet
  - name: check node state
    stat:
      path: /etc/kubernetes/kubelet.conf
    register: result
  - name: node join
    shell: kubeadm join '{{ master }}' --token '{{ token }}' --discovery-token-ca-cert-hash '{{ token_hash }}'
    when: not result.stat.exists
  handlers:
  - name: disable swap
    shell: swapoff -a
  - name: disable selinux
    shell: setenforce 0
  - name: enable sysctl args
    shell: sysctl --system
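The playbook targets a "nodes" host group, and the surrounding ansible configuration is not listed in these notes. A hypothetical inventory matching this document's addresses would look like:

```ini
; hypothetical ansible inventory (not part of the original notes)
[nodes]
192.168.1.31
192.168.1.32
192.168.1.33
```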

1. Obtain the token

# Create a token
[root@master ~]# kubeadm token create --ttl=0 --print-join-command
[root@master ~]# kubeadm token list
# Compute the token_hash from the CA certificate
[root@master ~]# openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt |openssl rsa -pubin -outform der |openssl dgst -sha256 -hex
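These two values feed the playbook's vars and its join task. As a sanity check, the sketch below assembles the same join line the playbook runs; the token and hash are the example values from this document, not real ones:

```shell
#!/bin/sh
# Placeholder values copied from the playbook vars above.
MASTER="192.168.1.21:6443"
TOKEN="fm6kui.mp8rr3akn74a3nyn"
HASH="sha256:f46dd7ee29faa3c096cad189b0f9aedf59421d8a881f7623a543065fa6b0088c"
# This is the command each node ends up running:
JOIN="kubeadm join $MASTER --token $TOKEN --discovery-token-ca-cert-hash $HASH"
echo "$JOIN"
```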

2. Node installation: run the ansible playbook from the jump-server host to install the nodes

[root@ecs-proxy ~]# cd project3/kubernetes/
[root@ecs-proxy kubernetes]# unzip ansible.zip
[root@ecs-proxy kubernetes]# cd ansible/
[root@ecs-proxy ansible]# yum -y install ansible-2.4.2.0-2.el7.noarch.rpm
[root@ecs-proxy ~]# ssh-keygen
[root@ecs-proxy ~]# ssh-copy-id 192.168.1.31
[root@ecs-proxy ~]# ssh-copy-id 192.168.1.32
[root@ecs-proxy ~]# ssh-copy-id 192.168.1.33
[root@ecs-proxy ~]# cd /root/project3/kubernetes/v1.17.6/node-install/
[root@ecs-proxy node-install]# vim files/hosts
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
192.168.1.21 master
192.168.1.31 node-0001
192.168.1.32 node-0002
192.168.1.33 node-0003
[root@ecs-proxy node-install]# vim node_install.yaml
... ...
  vars:
    master: '192.168.1.21:6443'
    token: 'fm6kui.mp8rr3akn74a3nyn'
    token_hash: 'sha256:f46dd7ee29faa3c096cad189b0f9aedf59421d8a881f7623a543065fa6b0088c'
... ...
[root@ecs-proxy node-install]# ansible-playbook node_install.yaml

3. Verify on the master

[root@master ~]# kubectl get nodes
NAME        STATUS     ROLES    AGE     VERSION
master      NotReady   master   130m    v1.17.6
node-0001   NotReady   <none>   2m14s   v1.17.6
node-0002   NotReady   <none>   2m15s   v1.17.6
node-0003   NotReady   <none>   2m9s    v1.17.6
NotReady is expected at this point: no network plugin is installed yet, which the next section addresses.

 

Network plugin installation and configuration
1. Upload the flannel image to the private registry

Download the flannel release files to the master (https://github.com/flannel-io/flannel/releases)
On the master host:
[root@master ~]# cd flannel/
[root@master flannel]# docker load -i flannel.tar.gz
[root@master flannel]# docker tag quay.io/coreos/flannel:v0.12.0-amd64 192.168.1.100:80/library/flannel:v0.12.0-amd64
[root@master flannel]# docker push 192.168.1.100:80/library/flannel:v0.12.0-amd64

2. Edit the manifest and install

[root@master flannel]# vim kube-flannel.yml
128: "Network": "10.244.0.0/16",
172: image: 192.168.1.100:80/library/flannel:v0.12.0-amd64
186: image: 192.168.1.100:80/library/flannel:v0.12.0-amd64
227 to end of file: delete these lines
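The two image-line edits above can also be done with sed instead of vim. The sketch below demonstrates the substitution on a one-line scratch file; run the same sed against the real kube-flannel.yml (the Network and end-of-file edits still need manual review, since line numbers vary between flannel releases):

```shell
#!/bin/sh
# Demonstrate the image rewrite on a scratch file; on the master, replace
# /tmp/flannel-demo.yml with kube-flannel.yml.
printf '        image: quay.io/coreos/flannel:v0.12.0-amd64\n' > /tmp/flannel-demo.yml
sed -i 's#quay.io/coreos/flannel:v0.12.0-amd64#192.168.1.100:80/library/flannel:v0.12.0-amd64#g' /tmp/flannel-demo.yml
cat /tmp/flannel-demo.yml
```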
[root@master flannel]# kubectl apply -f kube-flannel.yml

3. Verify the result

[root@master flannel]# kubectl get nodes
NAME        STATUS   ROLES    AGE    VERSION
master      Ready    master   26h    v1.17.6
node-0001   Ready    <none>   151m   v1.17.6
node-0002   Ready    <none>   152m   v1.17.6
node-0003   Ready    <none>   153m   v1.17.6