Installation environment: CentOS 7, 64-bit, 4 GB RAM, 30 GB+ disk; three machines in total: one master node and two worker (node) nodes.

Node roles and IPs:

master 192.168.5.124
node-1 192.168.5.125
node-2 192.168.5.126

/etc/hosts entries:

192.168.5.124 docker-k8s01
192.168.5.125 docker-k8s02
192.168.5.126 docker-k8s03
Preliminary step: disable the firewall on every node: systemctl stop firewalld && systemctl disable firewalld

  1. Configure hosts: vi /etc/hosts and add all of the nodes (the mapping above).
  2. Check the hostname with cat /etc/hostname and set it with sudo hostnamectl set-hostname <hostname> (note: do not run this on the master node), as shown below.
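    A minimal sketch of this step, assuming the hostnames from the /etc/hosts mapping above (docker-k8s02 / docker-k8s03) are the ones intended for the two worker nodes:

    # on node-1 (192.168.5.125)
    sudo hostnamectl set-hostname docker-k8s02
    # on node-2 (192.168.5.126)
    sudo hostnamectl set-hostname docker-k8s03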
  3. Install the yum utilities: yum install -y yum-utils device-mapper-persistent-data lvm2
  4. Add the Docker CE repository: yum-config-manager --add-repo http://download.docker.com/linux/centos/docker-ce.repo
  5. List the available Docker versions: yum list docker-ce --showduplicates | sort -r
  6. Install a specific Docker version, e.g. yum install docker-ce-18.06.1.ce (general form: yum install docker-ce-<version>)
  7. Start Docker: systemctl start docker (it is enabled at boot later, in step 38)
  8. Stop the firewall: systemctl stop firewalld, and disable it: systemctl disable firewalld
  9. Disable SELinux: setenforce 0, then vi /etc/sysconfig/selinux and set SELINUX=disabled so it stays off after a reboot (a one-line alternative is sketched below).
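    If you prefer not to edit the file by hand, a sed one-liner along these lines (not part of the original steps, same change) should work:

    # /etc/sysconfig/selinux is usually a symlink, so tell sed to follow it
    sed -i --follow-symlinks 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux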
  10. Flush the iptables rules: iptables -F
  11. Reload systemd and restart Docker: systemctl daemon-reload, then systemctl restart docker
  12. Turn off the swap partition: swapoff -a
  13. Edit the auto-mount configuration and comment out the swap entry: vi /etc/fstab, commenting it out like this:
    #/dev/mapper/cl-swap swap swap defaults 0 0
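    Equivalently, a sed one-liner along these lines (a sketch, assuming a whitespace-separated swap entry as above; the .bak suffix keeps a backup copy) comments the entry out without opening an editor:

    sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab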
  14. Add the Kubernetes yum repository (used to install kubeadm): run cat <<EOF > /etc/yum.repos.d/kubernetes.repo and enter the following, finishing with the closing EOF and Enter:

    [kubernetes]
    name=Kubernetes
    baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
    enabled=1
    gpgcheck=0
    repo_gpgcheck=0
    gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
    EOF
  15. Verify the repository: run yum repolist; the package count for the kubernetes entry must not be 0.
  16. Build the yum cache: yum makecache
  17. Copy the repo file to the other two node hosts: scp /etc/yum.repos.d/kubernetes.repo <node-hostname>:/etc/yum.repos.d/, then run yum repolist and yum makecache on each of them; see the sketch below.
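    For example, assuming the hostnames from the /etc/hosts mapping above and root SSH access to the nodes:

    scp /etc/yum.repos.d/kubernetes.repo docker-k8s02:/etc/yum.repos.d/
    scp /etc/yum.repos.d/kubernetes.repo docker-k8s03:/etc/yum.repos.d/
    # then, on each of the two nodes:
    yum repolist && yum makecache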
  18. Enable the iptables bridge settings and forwarding (configuring on the master is enough; the file is copied to the nodes in step 20): vi /etc/sysctl.d/k8s.conf and add:
    net.bridge.bridge-nf-call-iptables = 1
    net.bridge.bridge-nf-call-ip6tables = 1
  19. Load the configuration: sysctl -p /etc/sysctl.d/k8s.conf
  20. Copy k8s.conf to the other two nodes: scp /etc/sysctl.d/k8s.conf <node-hostname>:/etc/sysctl.d/, then run sysctl -p /etc/sysctl.d/k8s.conf on each of them, for example:
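    As in step 17, assuming the same node hostnames:

    scp /etc/sysctl.d/k8s.conf docker-k8s02:/etc/sysctl.d/
    scp /etc/sysctl.d/k8s.conf docker-k8s03:/etc/sysctl.d/
    # then, on each node:
    sysctl -p /etc/sysctl.d/k8s.conf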
  21. Enable IP forwarding (run this on all three Docker hosts): echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf, then sysctl -p
  22. Initialize the cluster: kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.15.0
    ---------------- If the command above cannot be run (e.g. the images cannot be pulled), use the alternative below: pull the images manually and retag them -------------------------
  23. Configure the k8s cluster (master): yum -y install kubelet-1.15.0-0 kubeadm-1.15.0-0 kubectl-1.15.0-0
  24. List the required image dependencies with kubeadm config images list (outside mainland China you can simply run kubeadm config images pull); inside China, pull them from the Aliyun mirror instead. Note that the image tags should match the --kubernetes-version used in step 26 (v1.15.0 there versus v1.15.12 below), so adjust one or the other accordingly:

    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.12
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.12
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.12
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.12
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1
  25. Retag the images to the names kubeadm expects (k8s.gcr.io):
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.15.12 k8s.gcr.io/kube-apiserver:v1.15.12
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.15.12 k8s.gcr.io/kube-controller-manager:v1.15.12
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.15.12 k8s.gcr.io/kube-scheduler:v1.15.12
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.12 k8s.gcr.io/kube-proxy:v1.15.12
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
    docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1
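    If you prefer not to repeat the pull/tag pair for every image, a short shell loop such as the following (a sketch covering the same images and tags as above) does the same work:

    for img in kube-apiserver:v1.15.12 kube-controller-manager:v1.15.12 \
               kube-scheduler:v1.15.12 kube-proxy:v1.15.12 \
               pause:3.1 etcd:3.3.10 coredns:1.3.1; do
      docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$img
      docker tag  registry.cn-hangzhou.aliyuncs.com/google_containers/$img k8s.gcr.io/$img
    done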
  26. Initialize the k8s cluster: kubeadm init --kubernetes-version=v1.15.0 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap and wait for it to finish.

    -------------------------- the alternative path ends here --------------------------
  27. After a successful init, the console prints a command like the one below (be sure to record it; recovering it later is inconvenient, see step 40). It is the command used to join the node machines to the cluster:
    kubeadm join 192.168.5.124:6443 --token 7lp9em.8wrm5wgb4c4fhm3o \
        --discovery-token-ca-cert-hash sha256:2d3c555332c2f0b1c53ed43cc4125bdf632935316a1a7d70b08a0b4dbac964eb
  28. If initialization fails, run kubeadm reset to reset the cluster and try again.
  29. After a successful init, set up kubectl access:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config
  30. Check the nodes: kubectl get nodes; at this point the status is NotReady (no pod network is installed yet).
  31. If network connectivity is good, apply the flannel manifest directly: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  32. If network connectivity is poor, download the file first: wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml, check that it is present with ls | grep flannel.yml, and if so run kubectl apply -f kube-flannel.yml
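    One way to verify that the network plugin came up (not part of the original steps, just standard kubectl usage) is to check the flannel pods in the kube-system namespace:

    kubectl get pods -n kube-system | grep flannel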
  33. (Run on the node machines) Install the packages: yum -y install kubelet-1.15.0-0 kubeadm-1.15.0-0, then enable kubelet at boot: systemctl enable kubelet.service
  34. (Run on the node machines) Pull the images the workers need:
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.15.12
    docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
    docker pull quay.io/coreos/flannel:v0.11.0-amd64
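    At this point each node joins the cluster by running the kubeadm join command recorded in step 27, for example (the token and hash are the ones printed by your own kubeadm init):

    kubeadm join 192.168.5.124:6443 --token 7lp9em.8wrm5wgb4c4fhm3o \
        --discovery-token-ca-cert-hash sha256:2d3c555332c2f0b1c53ed43cc4125bdf632935316a1a7d70b08a0b4dbac964eb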
  35. (Run on the node machines) If network connectivity is good, apply the flannel manifest directly: kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
  36. (Run on the node machines) If network connectivity is poor, download the file first: wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml, check that it is present with ls | grep flannel.yml, and if so run kubectl apply -f kube-flannel.yml
  37. On the master, run kubectl get nodes to check the nodes; when every node shows Ready, the cluster is up.
  38. Finally, on the master and on both node machines, set kubelet and Docker to start at boot: systemctl enable kubelet and systemctl enable docker
    ---------------------- Cluster setup complete ---------- the dashboard UI installation follows -------------------
  39. Web UI: see the dashboard (UI) installation tutorial.
  40. If the token has expired, generate a new join command: kubeadm token create --print-join-command