Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates declarative configuration and automation. It has a large, rapidly growing ecosystem, and Kubernetes services, support, and tooling are widely available.
Google open-sourced the Kubernetes project in 2014. Kubernetes combines more than 15 years of Google's experience running production workloads at scale with best-of-breed ideas and practices from the community.
1. Installation Environment
Note: all of the following steps are performed as root.
1.1 Server Preparation
Three Alibaba Cloud servers running CentOS 7.5.
The machines must meet the following requirements:
1.1.1 System environment
| Supported operating systems |
|--|
| Ubuntu 16.04+ |
| Debian 9+ |
| CentOS 7 |
| Red Hat Enterprise Linux (RHEL) 7 |
| HypriotOS v1.0.1+ |
| Fedora 25+ |
| Flatcar Container Linux (tested 2512.3.0) |
3. 2 CPUs or more, and 2 GB of RAM or more
4. Internet connectivity
5. A unique hostname on every node
6. Certain ports open (listed below)
7. Swap disabled
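The requirements above can be spot-checked from a shell before installing; a minimal sketch (thresholds follow the list above, adjust for your environment):

```shell
# Quick pre-flight checks for the requirements above.
hostname                             # must be unique across the three nodes
nproc                                # CPU count, expect >= 2
free -m | awk '/^Mem:/ {print $2}'   # total RAM in MiB, expect >= 2048
swapon --show                        # prints nothing once swap is disabled
```

Run this on each of the three machines before continuing.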
1.1.2 Required open ports
Master
| Port | Purpose |
|--|--|
| 6443* | Kubernetes API server |
| 2379-2380 | etcd server client API |
| 10250 | kubelet API |
| 10251 | kube-scheduler |
| 10252 | kube-controller-manager |
Worker nodes
| Port | Purpose |
|--|--|
| 10250 | kubelet API |
| 30000-32767 | NodePort Services |
2. Install Docker
2.1 Install yum-utils and set up the repository:
[root@k8s ~]# sudo yum install -y yum-utils
[root@k8s ~]# sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
2.2 Enable the nightly repository (to turn it off later, replace --enable with --disable):
[root@k8s ~]# sudo yum-config-manager --enable docker-ce-nightly
2.3 Enable the test channel (to turn it off later, replace --enable with --disable):
[root@k8s ~]# sudo yum-config-manager --enable docker-ce-test
2.4 Install the Docker engine:
[root@k8s ~]# sudo yum install docker-ce docker-ce-cli containerd.io
2.5 Start Docker:
[root@k8s ~]# sudo systemctl start docker
2.6 Verify that Docker is installed correctly:
[root@k8s ~]# sudo docker run hello-world
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete
Digest: sha256:1a523af650137b8accdaed439c17d684df61ee4d74feac151b5b337bd29e7eec
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
Docker installation reference:
https://docs.docker.com/engine/install/centos/#install-using-the-repository
3. Install Kubernetes with kubeadm
A Kubernetes cluster can be installed with any of several tools: kubeadm, kops, and Kubespray. Here we use kubeadm.
3.1 Install the Master
3.1.1 Disable swap
[root@k8s101 ~]# swapoff -a
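`swapoff -a` only disables swap until the next reboot; to keep it off permanently, the swap entry in /etc/fstab must also be commented out. A sketch, demonstrated on a scratch copy with sample entries so the real fstab is untouched (on the real host, run the sed against /etc/fstab):

```shell
# Comment out swap entries so swap stays disabled after reboot.
# /dev/sda2 and UUID=abc below are sample entries for demonstration.
printf '/dev/sda2 swap swap defaults 0 0\nUUID=abc / ext4 defaults 0 0\n' > /tmp/fstab.demo
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /tmp/fstab.demo
cat /tmp/fstab.demo   # the swap line should now start with '#'
```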
3.1.2 Configure the yum repository
[root@k8s101 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
3.1.3 Install kubeadm and related tools
[root@k8s101 ~]# yum install -y kubelet kubeadm kubectl
3.1.4 Generate the init.default initialization file
[root@k8s101 ~]# kubeadm config print init-defaults >init.default.yaml
3.1.5 Edit init.default.yaml: change the image repository and the Pod address range
[root@k8s101 ~]# vim init.default.yaml
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.20.0
networking:
  podSubnet: "192.168.0.0/16"
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}
3.1.6 Pull the required Kubernetes images
[root@k8s101 ~]# kubeadm config images pull --config=init.default.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.20.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.20.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.20.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.7.0
3.1.7 Set the cgroup driver to systemd, restart Docker, and enable Docker and kubelet on boot
[root@k8s101 ~]# vim /etc/docker/daemon.json
{
"exec-opts":["native.cgroupdriver=systemd"]
}
[root@k8s101 ~]# vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=systemd"
[root@k8s101 ~]# systemctl daemon-reload
[root@k8s101 ~]# systemctl restart docker
[root@k8s101 ~]# systemctl enable docker
[root@k8s101 ~]# systemctl enable kubelet
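A malformed /etc/docker/daemon.json prevents the Docker daemon from starting at all, so it is worth validating the JSON before the restart. A sketch, shown against a scratch file (point the same check at /etc/docker/daemon.json on the real host):

```shell
# Validate daemon.json syntax before restarting Docker; a stray comma or a
# missing quote here stops dockerd from coming back up.
cat > /tmp/daemon.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json OK"
```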
3.1.8 Initialize the cluster with kubeadm init, passing --pod-network-cidr=192.168.0.0/16 (the Pod network plugin itself is installed later):
[root@k8s101 ~]# kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.20.0 --pod-network-cidr=192.168.0.0/16
3.1.9 On success, the output prints a join command; record the token:
kubeadm join 172.26.64.121:6443 --token c4r8zo.38zrpieopx6l51re \
--discovery-token-ca-cert-hash sha256:5fae9d62bf7d6e7a7759784aa8585103b82e5a2368ab5e11e2bca8ede6187c8a
3.1.10 As the init output suggests, create a k8s user and copy the admin config into that user's home directory; if you continue as root, export KUBECONFIG instead:
[root@k8s software]# useradd k8s
[root@k8s software]# passwd k8s
[root@k8s ~]# usermod -aG docker k8s
[root@k8s ~]# vim /etc/sudoers
k8s ALL=(ALL) ALL
[root@k8s101 ~]# su k8s
[k8s@k8s101 ~]$ mkdir -p $HOME/.kube
[k8s@k8s101 ~]$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[k8s@k8s101 ~]$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
[k8s@k8s101 ~]$ exit
[root@k8s101 ~]# export KUBECONFIG=/etc/kubernetes/admin.conf
3.1.11 The Master is now installed, but it has no worker nodes and no container network yet. Verify the installation:
[root@k8s101 ~]# kubectl get -n kube-system configmap
NAME DATA AGE
coredns 1 22m
extension-apiserver-authentication 6 22m
kube-proxy 2 22m
kube-root-ca.crt 1 22m
kubeadm-config 2 22m
kubelet-config-1.20 1 22m
3.2 Add Nodes to the Cluster
3.2.1 Node preparation is the same as for the Master: install Docker and enable it on boot.
3.2.2 After Docker, install the kubeadm tools the same way:
[root@k8s102 ~]#cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s102 ~]# yum install -y kubelet kubeadm kubectl
[root@k8s103 ~]#cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s103 ~]# yum install -y kubelet kubeadm kubectl
[root@k8s102 ~]# vim /etc/docker/daemon.json
{
"exec-opts":["native.cgroupdriver=systemd"]
}
[root@k8s102 ~]# vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=systemd"
[root@k8s102 ~]# systemctl daemon-reload
[root@k8s102 ~]# systemctl enable docker.service
[root@k8s102 ~]# systemctl restart docker
[root@k8s102 ~]# systemctl enable kubelet
[root@k8s103 ~]# vim /etc/docker/daemon.json
{
"exec-opts":["native.cgroupdriver=systemd"]
}
[root@k8s103 ~]# vim /usr/lib/systemd/system/kubelet.service.d/10-kubeadm.conf
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --cgroup-driver=systemd"
[root@k8s103 ~]# systemctl daemon-reload
[root@k8s103 ~]# systemctl enable docker.service
[root@k8s103 ~]# systemctl restart docker
[root@k8s103 ~]# systemctl enable kubelet
3.2.3 Join the Master: create join-config.ymal. apiServerEndpoint is the Master's address, and token is the one recorded when the Master finished initializing:
[root@k8s102 ~]# vim join-config.ymal
apiVersion: kubeadm.k8s.io/v1beta2
kind: JoinConfiguration
discovery:
  bootstrapToken:
    apiServerEndpoint: 172.26.64.121:6443
    token: c4r8zo.38zrpieopx6l51re
    unsafeSkipCAVerification: true
  tlsBootstrapToken: c4r8zo.38zrpieopx6l51re
[root@k8s102 ~]# kubeadm join --config join-config.ymal
3.2.4 On success, do the same on 103: copy join-config.ymal over and run the join command:
[root@k8s102 ~]# scp join-config.ymal 172.26.64.120:/root/
[root@k8s103 ~]# kubeadm join --config join-config.ymal
3.3 Install the Network Plugin
3.3.1 Check the status: all nodes are present, but all NotReady, because no CNI network plugin is installed yet:
[root@k8s101 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s101 NotReady control-plane,master 78m v1.20.1
k8s102 NotReady <none> 5m v1.20.1
k8s103 NotReady <none> 2m38s v1.20.1
3.3.2 Install the CNI network plugin, choosing Weave Net (a flannel image and CNI config are also prepared below):
[root@k8s101 ~]# docker pull quay.io/coreos/flannel:v0.9.1-amd64
[root@k8s101 ~]# mkdir -p /etc/cni/net.d/
[root@k8s101 ~]# cat <<EOF > /etc/cni/net.d/10-flannel.conf
{"name":"cbr0","type":"flannel","delegate":{"isDefaultGateway": true}}
EOF
[root@k8s101 ~]# mkdir /usr/share/oci-umount/oci-umount.d -p
[root@k8s101 ~]# mkdir /run/flannel/
[root@k8s101 ~]# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
3.3.3 Verify the cluster. Installing Weave depends on the nodes' network speed and may take a long time; be patient:
[root@k8s101 ~]# kubectl get pods --all-namespaces
The cluster is now installed. If an installation fails partway, run kubeadm reset to reset the node, then install again.
4. Configure a Docker Registry Mirror
4.1 Log in to the Alibaba Cloud console and search for the Container Registry service.
4.2 In the Container Registry console, open the image accelerator page.
4.3 Following the instructions there, configure the registry mirror on every machine:
[root@k8s101 root]$ vim /etc/docker/daemon.json
{
"registry-mirrors":["https://bl562v6z.mirror.aliyuncs.com"],
"exec-opts":["native.cgroupdriver=systemd"]
}
[root@k8s101 root]$ sudo systemctl daemon-reload
[root@k8s101 root]$ sudo systemctl restart docker
[root@k8s102 ~]# vim /etc/docker/daemon.json
{
"registry-mirrors":["https://bl562v6z.mirror.aliyuncs.com"],
"exec-opts":["native.cgroupdriver=systemd"]
}
[root@k8s102 root]$ sudo systemctl daemon-reload
[root@k8s102 root]$ sudo systemctl restart docker
[root@k8s103 ~]# vim /etc/docker/daemon.json
{
"registry-mirrors":["https://bl562v6z.mirror.aliyuncs.com"],
"exec-opts":["native.cgroupdriver=systemd"]
}
[root@k8s103 root]$ sudo systemctl daemon-reload
[root@k8s103 root]$ sudo systemctl restart docker
5. Configure Kubernetes Pulls from a Private Registry
5.1 Run docker login with your own Alibaba Cloud account on each node; on success, a .docker directory containing the config.json credential file is created in the user's home directory:
[root@k8s101 ~]# docker login --username=lzt_otz registry.cn-zhangjiakou.aliyuncs.com
[root@k8s102 ~]# docker login --username=lzt_otz registry.cn-zhangjiakou.aliyuncs.com
[root@k8s103 ~]# docker login --username=lzt_otz registry.cn-zhangjiakou.aliyuncs.com
5.2 Kubernetes does not read Docker's credential file by default, so this must be configured. Following the official docs (https://kubernetes.io/docs/concepts/containers/images/#using-a-private-regist), copy Docker's credential file into kubelet's working directory:
[root@k8s101 ~]# cd ~
[root@k8s101 ~]# cp .docker/config.json /var/lib/kubelet/
[root@k8s101 ~]# systemctl restart kubelet
[root@k8s102 ~]# cp .docker/config.json /var/lib/kubelet/
[root@k8s102 ~]# systemctl restart kubelet
[root@k8s103 ~]# cp .docker/config.json /var/lib/kubelet/
[root@k8s103 ~]# systemctl restart kubelet
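Copying config.json onto every node works, but the same Kubernetes docs also describe a per-namespace alternative: store the credential in a Secret of type kubernetes.io/dockerconfigjson and reference it from Pods via imagePullSecrets. A sketch that only builds the manifest locally; the auth value below is a placeholder, and the manifest must then be applied with kubectl on the master:

```shell
# Build a kubernetes.io/dockerconfigjson Secret manifest from a Docker
# credential file. The auth string here is a placeholder, not a real credential.
cat > /tmp/docker-config.json <<'EOF'
{"auths":{"registry.cn-zhangjiakou.aliyuncs.com":{"auth":"dXNlcjpwYXNzd29yZA=="}}}
EOF
B64=$(base64 < /tmp/docker-config.json | tr -d '\n')
cat > /tmp/regcred.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: regcred
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: ${B64}
EOF
echo "wrote /tmp/regcred.yaml"
# On the master: kubectl apply -f /tmp/regcred.yaml
# Then in the Pod spec: imagePullSecrets: [{name: regcred}]
```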
6. A Simple Workload: Running MySQL on Kubernetes
6.1 Write the MySQL RC (Replication Controller) manifest. Pay close attention to the indentation; YAML is whitespace-sensitive:
[root@k8s101~]# su k8s
[k8s@k8s101 root]$ cd ~
[k8s@k8s101 ~]$ vim mysql-rc.yaml
apiVersion: v1
kind: ReplicationController   # replication controller (RC)
metadata:
  name: mysql                 # RC name, unique cluster-wide
spec:
  replicas: 1                 # desired number of Pod replicas
  selector:
    app: mysql                # target Pods carry this label
  template:                   # Pod template used to create the replicas
    metadata:
      labels:
        app: mysql            # Pod label, matched by the RC selector
    spec:
      containers:             # container definitions for the Pod
      - name: mysql           # container name
        image: docker.io/library/mysql:5.7   # Docker image for the container
        ports:
        - containerPort: 3306 # port the containerized app listens on
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "123456"
6.2 Deploy it to the Kubernetes cluster:
[k8s@k8s101 ~]$ kubectl create -f mysql-rc.yaml
replicationcontroller/mysql created
6.3 View the newly created RC:
[k8s@k8s101 ~]$ kubectl get rc
NAME DESIRED CURRENT READY AGE
mysql 1 1 0 76s
6.4 Check that the Pod was created:
[k8s@k8s101 ~]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
mysql-82pvs 1/1 Running 0 62s
6.5 Describe the Pod. Here the container was scheduled onto k8s102, so Docker on 102 must have the registry mirror configured, or the mysql image pull will fail:
[k8s@k8s101 ~]$ kubectl describe pod mysql-82pvs
6.6 On k8s102, list the containers; there are two mysql-related containers:
[root@k8s102 ~]# docker ps |grep mysql
6.7 Create the Kubernetes Service that exposes MySQL:
[k8s@k8s101 ~]$ vim mysql-svc.yaml
apiVersion: v1
kind: Service          # declares a Kubernetes Service
metadata:
  name: mysql          # Service name, unique cluster-wide
spec:
  type: NodePort
  ports:
  - port: 3306         # port the Service serves on
    nodePort: 30001    # port exposed outside the cluster
  selector:            # the Service targets Pods carrying these labels
    app: mysql
[k8s@k8s101 ~]$ kubectl create -f mysql-svc.yaml
service/mysql created
6.8 View the created Service:
[k8s@k8s101 ~]$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 24h
mysql NodePort 10.102.49.161 <none> 3306:30001/TCP 7s
6.9 Connect a MySQL client to port 30001 on k8s101 to reach MySQL. Deploying MySQL on Kubernetes is now complete.
7. Dynamic Storage Provisioning with NFS
Install NFS on all three machines and enable it on boot:
[root@k8s101 ~]# yum -y install nfs-utils rpcbind
[root@k8s102 ~]# yum -y install nfs-utils rpcbind
[root@k8s103 ~]# yum -y install nfs-utils rpcbind
[root@k8s101 ~]# systemctl start rpcbind.service
[root@k8s101 ~]# systemctl start nfs
[root@k8s101 ~]# systemctl enable rpcbind.service
[root@k8s101 ~]# systemctl enable nfs
[root@k8s102 ~]# systemctl start rpcbind.service
[root@k8s102 ~]# systemctl start nfs
[root@k8s102 ~]# systemctl enable rpcbind.service
[root@k8s102 ~]# systemctl enable nfs
[root@k8s103 ~]# systemctl start rpcbind.service
[root@k8s103 ~]# systemctl start nfs
[root@k8s103 ~]# systemctl enable rpcbind.service
[root@k8s103 ~]# systemctl enable nfs
[root@k8s101 ~]# mkdir /data/nfs -p
[root@k8s101 ~]# chown nfsnobody.nfsnobody /data/nfs
[root@k8s102 ~]# mkdir /data/nfs -p
[root@k8s102 ~]# chown nfsnobody.nfsnobody /data/nfs
[root@k8s103 ~]# mkdir /data/nfs -p
[root@k8s103 ~]# chown nfsnobody.nfsnobody /data/nfs
On the Master, export /data/nfs to the nodes' private network:
[root@k8s101 ~]# cat>>/etc/exports<<EOF
/data/nfs 172.26.64.121/20(rw,sync,no_root_squash,no_all_squash)
EOF
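A fused path and client spec (e.g. `/data/nfs172.26.64.121/20(...)`) is an easy mistake in /etc/exports and NFS will misparse it. A quick sanity check, sketched against a scratch copy; on the real host, follow with `exportfs -ra` and `showmount -e localhost`:

```shell
# The single-client entries used in this guide should have exactly two
# whitespace-separated fields per line: the exported path, and the client
# spec with its options attached in parentheses.
cat > /tmp/exports.check <<'EOF'
/data/nfs 172.26.64.121/20(rw,sync,no_root_squash,no_all_squash)
EOF
awk 'NF != 2 {bad = 1} END {exit bad}' /tmp/exports.check && echo "exports format OK"
```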
[root@k8s101 ~]# mkdir nfs
[root@k8s101 ~]# cd nfs/
[root@k8s101 nfs]# wget https://raw.githubusercontent.com/kubernetes-retired/external-storage/master/nfs-client/deploy/rbac.yaml
[root@k8s101 nfs]# wget https://raw.githubusercontent.com/kubernetes-retired/external-storage/master/nfs-client/deploy/class.yaml
[root@k8s101 nfs]# wget https://raw.githubusercontent.com/kubernetes-retired/external-storage/master/nfs-client/deploy/deployment.yaml
[root@k8s101 nfs]# su k8s
[k8s@k8s101 nfs]$ kubectl apply -f class.yaml
[k8s@k8s101 nfs]$ kubectl apply -f rbac.yaml
7.1 Edit the downloaded deployment.yaml:
[k8s@k8s101 nfs]$ vim deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with the namespace where the provisioner is deployed
  namespace: default
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
      - name: nfs-client-provisioner
        image: registry.cn-zhangjiakou.aliyuncs.com/my-bonc/nfs-client-provisioner:latest
        volumeMounts:
        - name: nfs-client-root
          mountPath: /persistentvolumes
        env:
        - name: PROVISIONER_NAME
          value: fuseim.pri/ifs
        - name: NFS_SERVER
          value: 172.26.64.121   # NFS server IP (k8s101's private IP)
        - name: NFS_PATH
          value: /data/nfs       # NFS export directory
      volumes:
      - name: nfs-client-root
        nfs:
          server: 172.26.64.121  # NFS server IP (k8s101)
          path: /data/nfs        # NFS export directory
7.2 Apply deployment.yaml:
[k8s@k8s101 nfs]$ kubectl create -f deployment.yaml
Check:
[k8s@k8s101 nfs]$ kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
managed-nfs-storage fuseim.pri/ifs Delete Immediate false 5m4s
[k8s@k8s101 nfs]$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nfs-client-provisioner-7859c747f5-p82js 1/1 Running 0 31s
[k8s@k8s101 nfs]$ sudo vim /etc/kubernetes/manifests/kube-apiserver.yaml
- --feature-gates=RemoveSelfLink=false   # add this line under the command: section
7.3 Create a test PVC:
[k8s@k8s101 nfs]$ vim test-pvc.yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: "managed-nfs-storage"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
7.4 Apply it:
[k8s@k8s101 nfs]$ kubectl create -f test-pvc.yaml
7.5 Check: a PVC and a matching PV were created automatically:
[k8s@k8s101 nfs]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
test-claim Bound pvc-a22d6cad-f7e1-4b38-bcc3-7099d7a964b8 1Mi RWX managed-nfs-storage 29s
[k8s@k8s101 nfs]$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-a22d6cad-f7e1-4b38-bcc3-7099d7a964b8 1Mi RWX Delete Bound default/test-claim managed-nfs-storage 83s
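To confirm the claim is actually writable, the external-storage repo also ships a small test pod that mounts the PVC and writes a SUCCESS marker. A sketch of that manifest, generated locally (the busybox tag is an assumption); apply it with kubectl, then look for the SUCCESS file under /data/nfs on the NFS server:

```shell
# Test pod modeled on external-storage's test-pod.yaml: mounts test-claim and
# writes a SUCCESS marker that should appear under /data/nfs on the server.
cat > /tmp/test-pod.yaml <<'EOF'
kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: busybox:1.24
    command: ["/bin/sh"]
    args: ["-c", "touch /mnt/SUCCESS && exit 0 || exit 1"]
    volumeMounts:
    - name: nfs-pvc
      mountPath: /mnt
  restartPolicy: Never
  volumes:
  - name: nfs-pvc
    persistentVolumeClaim:
      claimName: test-claim
EOF
echo "wrote /tmp/test-pod.yaml"
# kubectl create -f /tmp/test-pod.yaml ; kubectl get pod test-pod
```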