Table of Contents

I. Environment Setup

1. Disable the firewall

2. Disable SELinux

3. Disable swap (k8s disallows swap for performance reasons)

4. Configure hosts

5. Set bridge parameters

6. Sync the clock

II. Docker Installation

1. Update the yum repo (optional)

2. Install Docker

3. Enable start on boot

4. Docker inspection commands

III. Kubernetes Installation

1. Add the Aliyun YUM repo for k8s

2. Install kubeadm, kubelet, and kubectl

3. Enable start on boot

4. Verify the installation

5. Check the version

6. Midway summary

7. Initialize the master node

8. Join the worker node (run on the node)

9. Network plugin

10. Test (the k8s environment is installed; pull an nginx image to verify it)


I. Environment Setup

[Requirements]
One or more machines running CentOS 7
Hardware: at least 2 GB of RAM and at least 2 CPU cores per machine
All machines in the cluster can reach each other over the network
All machines can reach the Internet, since images must be pulled
Swap must be disabled

1. Disable the firewall

k8s normally runs on an internal network, and this is a learning environment anyway, so simply disable the firewall to avoid networking headaches.
systemctl stop firewalld
systemctl disable firewalld

2. Disable SELinux

# permanent
sed -i 's/enforcing/disabled/' /etc/selinux/config
# temporary
setenforce 0

3. Disable swap (k8s disallows swap for performance reasons)

# permanent
sed -ri 's/.*swap.*/#&/' /etc/fstab
# temporary
swapoff -a
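The sed line above comments out every fstab entry that mentions swap (`#&` re-emits the matched line behind a `#`). A quick dry run on a scratch copy shows the effect; the sample fstab content here is made up for illustration:

```shell
# Dry run of the fstab edit on a throwaway copy (sample content, not a real fstab)
cat > /tmp/fstab.sample << 'EOF'
/dev/mapper/centos-root /    xfs  defaults 0 0
/dev/mapper/centos-swap swap swap defaults 0 0
EOF
sed -ri 's/.*swap.*/#&/' /tmp/fstab.sample
grep swap /tmp/fstab.sample   # the swap line is now prefixed with '#'
```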

4. Configure hosts

Adjust this to match your own environment: map the master and node hostnames to their actual IPs.
cat >> /etc/hosts << EOF
192.168.16.135 k8smaster
192.168.16.137 k8snode
EOF

5. Set bridge parameters

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# apply the settings
sysctl --system
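Note: the two net.bridge.* keys only exist while the br_netfilter kernel module is loaded, so `sysctl --system` may report them as missing on a fresh boot. A config fragment like the following under /etc/modules-load.d/ (the k8s.conf filename is just a suggestion) makes systemd load the module at every boot:

```
# /etc/modules-load.d/k8s.conf -- kernel modules systemd should load at boot
br_netfilter
```

Run `modprobe br_netfilter` once to load the module immediately without rebooting.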

6. Sync the clock

yum install ntpdate -y
ntpdate time.windows.com

II. Docker Installation

1. Update the yum repo (optional)

yum install wget -y
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

An error you may hit during the Docker install:
Error: Package: 3:docker-ce-19.03.13-3.el7.x86_64 (docker-ce-stable)
           Requires: container-selinux >= 2:2.74
Error: Package: containerd.io-1.6.8-3.1.el7.x86_64 (docker-ce-stable)
           Requires: container-selinux >= 2:2.74
Fix: wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

2. Install Docker

yum install docker-ce-19.03.13 -y

3. Enable start on boot

systemctl enable docker.service

4. Docker inspection commands

Check the Docker service status
systemctl status docker.service

List the locally available images
docker images

Pull an image
docker pull hello-world

Run an image
[root@130 ~]# docker run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

With that, Docker is installed successfully.

III. Kubernetes Installation

1. Add the Aliyun YUM repo for k8s

cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

2. Install kubeadm, kubelet, and kubectl

yum install kubelet-1.19.4 kubeadm-1.19.4 kubectl-1.19.4 -y

3. Enable start on boot

systemctl enable kubelet.service

4. Verify the installation

yum list installed | grep kubelet
yum list installed | grep kubeadm
yum list installed | grep kubectl

5. Check the version

kubelet --version

6. Midway summary

If everything has worked so far, the bulk of the installation is done. All of the steps above must be performed on every node in the cluster.
With Xshell you can broadcast the same commands to every VM at once via [Send key input to all sessions].
Some of these settings only take effect after a reboot, so this is a good point to reboot the machines.

7. Initialize the master node

kubeadm init --apiserver-advertise-address=192.168.146.130 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.19.4 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
192.168.146.130 is the master node's address; replace it with your own. The other flags can stay as they are.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Finally, check the node status:
kubectl get nodes
An error you may hit: Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")
Fix: rm -rf $HOME/.kube, then repeat the three kubeconfig commands above.

8. Join the worker node (run on the node)

kubeadm join 192.168.146.130:6443 --token ybxcrf.da1yd57pfqmxoa8n \
    --discovery-token-ca-cert-hash sha256:2339de46a8c771385fd01fb60755777872aca0719efc8adb1a55c1ea0f4e4776

***** The join command above is only an example: the real command, with a valid token and hash, is printed at the end of kubeadm init on the master. If it was lost, it can be regenerated on the master with kubeadm token create --print-join-command.
Until the network plugin is installed, kubectl get nodes will show every node as NotReady. *****
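The --discovery-token-ca-cert-hash value is the SHA-256 of the cluster CA's public key. Its derivation can be sketched as follows; this demo uses a throwaway self-signed certificate (paths under /tmp are arbitrary), while on a real master you would feed in /etc/kubernetes/pki/ca.crt:

```shell
# Generate a throwaway CA cert to stand in for /etc/kubernetes/pki/ca.crt
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/ca.key \
    -out /tmp/ca.crt -subj "/CN=kubernetes" -days 1 2>/dev/null

# SHA-256 over the DER-encoded public key -- the value kubeadm join expects
openssl x509 -pubkey -in /tmp/ca.crt \
    | openssl rsa -pubin -outform der 2>/dev/null \
    | openssl dgst -sha256 -hex \
    | sed 's/^.* //'
```

This is handy when you have the token but lost the hash printed by kubeadm init.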

Error 1:
error execution phase preflight: [preflight] Some fatal errors occurred:
        [ERROR DirAvailable--etc-kubernetes-manifests]: /etc/kubernetes/manifests is not empty
        [ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
        [ERROR FileContent--proc-sys-net-ipv4-ip_forward]: /proc/sys/net/ipv4/ip_forward contents are not set to 1
        [ERROR Port-10250]: Port 10250 is in use
        [ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
Fix: run kubeadm reset on the node, then run the join command again.

9. Network plugin

This only needs to be run on the master node. Save the following as kube-flannel.yml.

---
 apiVersion: policy/v1beta1
 kind: PodSecurityPolicy
 metadata:
   name: psp.flannel.unprivileged
   annotations:
     seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
     seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
     apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
     apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
 spec:
   privileged: false
   volumes:
   - configMap
   - secret
   - emptyDir
   - hostPath
   allowedHostPaths:
   - pathPrefix: "/etc/cni/net.d"
   - pathPrefix: "/etc/kube-flannel"
   - pathPrefix: "/run/flannel"
   readOnlyRootFilesystem: false
   # Users and groups
   runAsUser:
     rule: RunAsAny
   supplementalGroups:
     rule: RunAsAny
   fsGroup:
     rule: RunAsAny
   # Privilege Escalation
   allowPrivilegeEscalation: false
   defaultAllowPrivilegeEscalation: false
   # Capabilities
   allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
   defaultAddCapabilities: []
   requiredDropCapabilities: []
   # Host namespaces
   hostPID: false
   hostIPC: false
   hostNetwork: true
   hostPorts:
   - min: 0
     max: 65535
   # SELinux
   seLinux:
     # SELinux is unused in CaaSP
     rule: 'RunAsAny'
---
 kind: ClusterRole
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: flannel
 rules:
 - apiGroups: ['extensions']
   resources: ['podsecuritypolicies']
   verbs: ['use']
   resourceNames: ['psp.flannel.unprivileged']
 - apiGroups:
   - ""
   resources:
   - pods
   verbs:
   - get
 - apiGroups:
   - ""
   resources:
   - nodes
   verbs:
   - list
   - watch
 - apiGroups:
   - ""
   resources:
   - nodes/status
   verbs:
   - patch
---
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: flannel
 roleRef:
   apiGroup: rbac.authorization.k8s.io
   kind: ClusterRole
   name: flannel
 subjects:
 - kind: ServiceAccount
   name: flannel
   namespace: kube-system
---
 apiVersion: v1
 kind: ServiceAccount
 metadata:
   name: flannel
   namespace: kube-system
---
 kind: ConfigMap
 apiVersion: v1
 metadata:
   name: kube-flannel-cfg
   namespace: kube-system
   labels:
     tier: node
     app: flannel
 data:
   cni-conf.json: |
     {
       "name": "cbr0",
       "cniVersion": "0.3.1",
       "plugins": [
         {
           "type": "flannel",
           "delegate": {
             "hairpinMode": true,
             "isDefaultGateway": true
           }
         },
         {
           "type": "portmap",
           "capabilities": {
             "portMappings": true
           }
         }
       ]
     }
   net-conf.json: |
     {
       "Network": "10.244.0.0/16",
       "Backend": {
         "Type": "vxlan"
       }
     }
---
 apiVersion: apps/v1
 kind: DaemonSet
 metadata:
   name: kube-flannel-ds
   namespace: kube-system
   labels:
     tier: node
     app: flannel
 spec:
   selector:
     matchLabels:
       app: flannel
   template:
     metadata:
       labels:
         tier: node
         app: flannel
     spec:
       affinity:
         nodeAffinity:
           requiredDuringSchedulingIgnoredDuringExecution:
             nodeSelectorTerms:
             - matchExpressions:
               - key: kubernetes.io/os
                 operator: In
                 values:
                 - linux
       hostNetwork: true
       priorityClassName: system-node-critical
       tolerations:
       - operator: Exists
         effect: NoSchedule
       serviceAccountName: flannel
       initContainers:
       - name: install-cni
         image: quay.io/coreos/flannel:v0.13.0
         command:
         - cp
         args:
         - -f
         - /etc/kube-flannel/cni-conf.json
         - /etc/cni/net.d/10-flannel.conflist
         volumeMounts:
         - name: cni
           mountPath: /etc/cni/net.d
         - name: flannel-cfg
           mountPath: /etc/kube-flannel/
       containers:
       - name: kube-flannel
         image: quay.io/coreos/flannel:v0.13.0
         command:
         - /opt/bin/flanneld
         args:
         - --ip-masq
         - --kube-subnet-mgr
         resources:
           requests:
             cpu: "100m"
             memory: "50Mi"
           limits:
             cpu: "100m"
             memory: "50Mi"
         securityContext:
           privileged: false
           capabilities:
             add: ["NET_ADMIN", "NET_RAW"]
         env:
         - name: POD_NAME
           valueFrom:
             fieldRef:
               fieldPath: metadata.name
         - name: POD_NAMESPACE
           valueFrom:
             fieldRef:
               fieldPath: metadata.namespace
         volumeMounts:
         - name: run
           mountPath: /run/flannel
         - name: flannel-cfg
           mountPath: /etc/kube-flannel/
       volumes:
       - name: run
         hostPath:
           path: /run/flannel
       - name: cni
         hostPath:
           path: /etc/cni/net.d
       - name: flannel-cfg
         configMap:
            name: kube-flannel-cfg

Then apply the manifest:
kubectl apply -f kube-flannel.yml

Wait about a minute after applying it, then check again with kubectl get nodes; the nodes should now report Ready.

10. Test (the k8s environment is installed; pull an nginx image to verify it)

# create a deployment
kubectl create deployment nginx --image=nginx
# expose the port via a NodePort service
kubectl expose deployment nginx --port=80 --type=NodePort

Check the assigned port
[root@130 ~]# kubectl get service
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        26m
nginx        NodePort    10.103.156.184   <none>        80:31508/TCP   4s

31508 is the port that can be reached from outside the cluster.
Visit node-ip:31508 in a browser to reach nginx.
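When scripting against the cluster, the assigned NodePort can be parsed out of the PORT(S) column. A small sketch using the sample output above; in practice you would pipe kubectl get service in directly instead of the temp file:

```shell
# Sample `kubectl get service` output, copied from the run above
cat > /tmp/svc.txt << 'EOF'
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP        26m
nginx        NodePort    10.103.156.184   <none>        80:31508/TCP   4s
EOF

# PORT(S) is "80:31508/TCP": split on ':' and '/' and take the middle field
awk '$1 == "nginx" { split($5, p, /[:\/]/); print p[2] }' /tmp/svc.txt   # prints 31508
```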