I. Preparation
1. Deploy an Ubuntu 18.04 virtual machine in VMware.
2. Update the installed packages
apt update && apt upgrade -y    # refresh the package index, then upgrade installed packages
3. Disable the firewall
(1) sudo ufw status    # check the current firewall status
(2) sudo ufw disable    # disable the firewall (it stays disabled across reboots)
4. Disable swap (kubelet refuses to run with swap enabled)
(1) vim /etc/fstab    # edit /etc/fstab
(2) Comment out the swap entry
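The fstab edit only takes effect after a reboot. To also turn swap off for the running session (a common companion step, not in the original):
swapoff -a    # disable swap immediately
free -m       # verify: the Swap line should read 0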
5. Disable SELinux
(Ubuntu uses AppArmor rather than SELinux by default; this step only applies if SELinux is actually installed, i.e. /etc/selinux/config exists. Otherwise skip it.)
(1) vim /etc/selinux/config    # edit the config file
(2) SELINUX=disabled    # set SELinux to disabled in /etc/selinux/config
(3) reboot    # reboot the system
6. Update the package sources
On Ubuntu the package sources are APT sources, in /etc/apt/sources.list and /etc/apt/sources.list.d/ (the /etc/yum.repos.d/ directory belongs to yum on RHEL/CentOS and does not apply here).
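For example, switching Ubuntu's default archives to the Aliyun mirror can be done with sed (a sketch; back up first and adjust the hostnames if your sources.list differs):
cp /etc/apt/sources.list /etc/apt/sources.list.bak
sed -i 's/archive.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list
sed -i 's/security.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list
apt update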
7. Install Docker and enable it at boot
See the official guide: Install Docker Engine on Ubuntu | Docker Documentation
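A condensed sketch of the repository-based install from that guide as it applied to Ubuntu 18.04 (verify against the current docs before running):
apt-get update
apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io
systemctl enable docker --now    # start Docker now and enable it at boot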
8. Configure a Docker registry mirror (accelerator)
Log in to the Aliyun console to obtain your personal accelerator endpoint.
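A sketch of /etc/docker/daemon.json using that endpoint (the <your-id> part is a placeholder for your own; setting the systemd cgroup driver at the same time also addresses the cgroupfs warning that kubeadm prints later in this guide):
cat <<EOF > /etc/docker/daemon.json
{
  "registry-mirrors": ["https://<your-id>.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker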
9. Adjust kernel parameters and apply them immediately
(1) vim /etc/sysctl.d/k8s.conf
(2) Add the following lines and save:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
(3) Apply them immediately:
modprobe br_netfilter    # the bridge-nf-call keys only exist once this module is loaded
sysctl -p /etc/sysctl.d/k8s.conf
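To confirm the values are active (a quick check, not in the original):
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables net.ipv4.ip_forward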
10. Shut down the VM and clone it twice more.
11. Give each of the three VMs a static IP.
The three machines' IPs are, in order: 192.168.26.50, 192.168.26.51, 192.168.26.52
(1) Ubuntu 18.04 manages the network with netplan
root@vms50:~# ls /etc/netplan/
01-netcfg.yaml
(2)# cat /etc/netplan/01-netcfg.yaml
network:
  version: 2
  renderer: networkd
  ethernets:
    ens32:
      dhcp4: no                        # no = use a static IP
      addresses: [192.168.26.50/24]    # the static IP address
      gateway4: 192.168.26.2           # the gateway
      nameservers:
        addresses: [192.168.26.2]
(3) Save the file
(4) Apply it with:
netplan apply
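To verify the static address took effect (a quick check, not in the original):
ip addr show ens32    # should list 192.168.26.50/24
ip route              # the default route should point at 192.168.26.2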
12. Set the hostnames
(1) vim /etc/hostname    # the hostname is stored in /etc/hostname
(2) Save the file
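Alternatively, hostnamectl sets the name without editing the file by hand, and adding all three nodes to /etc/hosts keeps them resolvable by name; a sketch using the hostnames that appear later in this guide:
hostnamectl set-hostname vms50.rhce.cc    # run with the matching name on each VM
cat <<EOF >> /etc/hosts
192.168.26.50 vms50.rhce.cc vms50
192.168.26.51 vms51.rhce.cc vms51
192.168.26.52 vms52.rhce.cc vms52
EOF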
II. Installing Kubernetes
1. Install kubelet, kubeadm and kubectl
(1) Add the Kubernetes signing key on all three VMs
# Add the key
# If you can reach Google, use:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# If you cannot reach Google, use the Aliyun mirror of the Kubernetes key instead:
curl -fsSL https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# Or download https://packages.cloud.google.com/apt/doc/apt-key.gpg on another machine, copy it onto the VM, cd into its directory, and add it:
apt-key add apt-key.gpg
(2) Add the Kubernetes APT repository on all three VMs
# Add the official Kubernetes repository
# If you can reach the official repository at http://apt.kubernetes.io, use (the repository publishes its packages under the kubernetes-xenial suite, which is also used on Ubuntu 18.04):
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
# If you cannot reach the official address, use the Aliyun mirror instead:
echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
(3) Run apt-get update first on all three VMs
root@vms51:~# apt-get update
Hit:1 http://mirrors.aliyun.com/ubuntu bionic InRelease
Get:2 http://mirrors.aliyun.com/ubuntu bionic-security InRelease [88.7 kB]
Get:3 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease [9,383 B]
Get:4 http://mirrors.aliyun.com/ubuntu bionic-updates InRelease [88.7 kB]
Get:5 http://mirrors.aliyun.com/ubuntu bionic-proposed InRelease [242 kB]
Hit:6 https://download.docker.com/linux/ubuntu bionic InRelease
Hit:7 https://mirrors.aliyun.com/docker-ce/linux/ubuntu bionic InRelease
Get:8 http://mirrors.aliyun.com/ubuntu bionic-backports InRelease [74.6 kB]
Ign:9 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
Get:9 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages [52.6 kB]
Get:10 http://mirrors.aliyun.com/ubuntu bionic-proposed/restricted Sources [4,744 B]
...
(4) Install kubelet, kubeadm and kubectl on all three VMs, all pinned to version 1.21.3-00
root@vms50:~# apt install -y kubelet=1.21.3-00 kubeadm=1.21.3-00 kubectl=1.21.3-00
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
conntrack cri-tools kubernetes-cni socat
The following NEW packages will be installed:
conntrack cri-tools kubeadm kubectl kubelet kubernetes-cni socat
0 upgraded, 7 newly installed, 0 to remove and 9 not upgraded.
Need to get 72.9 MB of archives.
After this operation, 313 MB of additional disk space will be used.
Get:1 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 cri-tools amd64 1.19.0-00 [11.2 MB]
Get:2 http://mirrors.aliyun.com/ubuntu bionic/main amd64 conntrack amd64 1:1.4.4+snapshot20161117-6ubuntu2 [30.6 kB]
...
(5) Then enable kubelet at boot and start it. (Until kubeadm init or join runs, kubelet restarts in a loop waiting for its configuration; this is expected.)
root@vms52:~# systemctl enable kubelet && systemctl start kubelet
root@vms52:~# kubelet --version
Kubernetes v1.21.3
root@vms52:~#
2. Prepare the images
(1) On every node, create the script with vim ./k8s.sh and paste in the following content
#!/bin/bash
# Pull the Kubernetes control-plane images from the Aliyun Hangzhou mirror,
# retag them under the default k8s.gcr.io names, then drop the mirror tags.
images=(
kube-apiserver:v1.21.3
kube-controller-manager:v1.21.3
kube-scheduler:v1.21.3
kube-proxy:v1.21.3
pause:3.2
etcd:3.4.13-0
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName} k8s.gcr.io/${imageName}
  docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
done
# coredns sits under a different path on the mirror, so pull it from Docker Hub
# and retag it to the name kubeadm expects when --image-repository points at
# registry.aliyuncs.com/google_containers.
docker pull coredns/coredns:1.8.0
docker tag coredns/coredns:1.8.0 registry.aliyuncs.com/google_containers/coredns:v1.8.0
docker rmi coredns/coredns:1.8.0
(2) Make ./k8s.sh executable and run it
root@vms51:~# ls -al ./k8s.sh
-rw-r--r-- 1 root root 605 Jan 19 13:32 ./k8s.sh
root@vms51:~# chmod +x ./k8s.sh
root@vms51:~# ls -al ./k8s.sh
-rwxr-xr-x 1 root root 605 Jan 19 13:32 ./k8s.sh
root@vms51:~# ./k8s.sh
v1.21.3: Pulling from google_containers/kube-apiserver
b49b96595fd4: Pull complete
b91f78c1d2c5: Pull complete
59e5d583c89f: Downloading [======================> ] 12.95MB/29.07MB
...
3. Initialize the master
The following steps run only on the master node (whichever machine you run kubeadm init on becomes the master; here 192.168.26.50 is used as the master).
(1) Initialize the master node
root@vms50:~# kubeadm init --image-repository=registry.aliyuncs.com/google_containers --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
I0119 13:49:30.170109 1674 version.go:254] remote version is much newer: v1.23.1; falling back to: stable-1.21
[init] Using Kubernetes version: v1.21.8
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local vms50.rhce.cc] and IPs [10.96.0.1 192.168.26.50]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost vms50.rhce.cc] and IPs [192.168.26.50 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost vms50.rhce.cc] and IPs [192.168.26.50 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 11.003399 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.21" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node vms50.rhce.cc as control-plane by adding the labels: [node-role.kubernetes.io/master(deprecated) node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
[mark-control-plane] Marking the node vms50.rhce.cc as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 12bbdr.mp72q6cgz078h431
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Alternatively, if you are the root user, you can run:
export KUBECONFIG=/etc/kubernetes/admin.conf
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.26.50:6443 --token 12bbdr.mp72q6cgz078h431 \
--discovery-token-ca-cert-hash sha256:7b55321bf0d86969b4e2b7158c9b4551f5417208f8cd327a67e345dd41b06217
root@vms50:~#
(2) Following the hints printed by step (1), run the following on the master (the first three commands are for a regular user; as root, the export line alone is enough):
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=/etc/kubernetes/admin.conf
4. Join the other nodes to the cluster
(1) Run the following command on the other nodes (192.168.26.51, 192.168.26.52) to join them to the cluster as worker nodes:
root@vms51:~# kubeadm join 192.168.26.50:6443 --token 12bbdr.mp72q6cgz078h431 \
> --discovery-token-ca-cert-hash sha256:7b55321bf0d86969b4e2b7158c9b4551f5417208f8cd327a67e345dd41b06217
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
root@vms51:~#
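Note: the bootstrap token printed by kubeadm init expires after 24 hours by default. If it has expired before a node joins, print a fresh join command on the master:
kubeadm token create --print-join-command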
(2) Check the nodes on the master
root@vms50:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms50.rhce.cc NotReady control-plane,master 7m v1.21.3
vms51.rhce.cc NotReady <none> 28s v1.21.3
vms52.rhce.cc NotReady <none> 9s v1.21.3
root@vms50:~#
5. Install the Calico network
(1) Download calico.yaml from:
https://docs.projectcalico.org/v3.19/manifests/calico.yaml
(2) Edit the pod CIDR in calico.yaml to match the network passed to kubeadm init via --pod-network-cidr (10.244.0.0/16 here); see the snippet below.
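In the v3.19 manifest the pod CIDR is the CALICO_IPV4POOL_CIDR environment variable on the calico-node DaemonSet; it ships commented out, so uncomment it and set the value (location and name per the v3.19 manifest; verify against your copy):
- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"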
(3) On every node, pull the images referenced in calico.yaml (see the sketch below)
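The exact images depend on the manifest revision; list them with grep and pull each one on every node (the v3.19.1 tags below are illustrative assumptions, so check your own calico.yaml):
grep image: calico.yaml    # list the images the manifest references
docker pull calico/cni:v3.19.1
docker pull calico/pod2daemon-flexvol:v3.19.1
docker pull calico/node:v3.19.1
docker pull calico/kube-controllers:v3.19.1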
(4) Run kubectl apply on the master
root@vms50:~# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Warning: policy/v1beta1 PodDisruptionBudget is deprecated in v1.21+, unavailable in v1.25+; use policy/v1 PodDisruptionBudget
poddisruptionbudget.policy/calico-kube-controllers created
root@vms50:~#
(5) After a minute or so, kubectl get nodes on the master shows the nodes as Ready (the first listing below was taken before the Calico pods finished starting, the second one a minute later):
root@vms50:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms50.rhce.cc NotReady control-plane,master 46m v1.21.3
vms51.rhce.cc NotReady <none> 39m v1.21.3
vms52.rhce.cc NotReady <none> 39m v1.21.3
root@vms50:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms50.rhce.cc Ready control-plane,master 47m v1.21.3
vms51.rhce.cc Ready <none> 40m v1.21.3
vms52.rhce.cc Ready <none> 40m v1.21.3
root@vms50:~#
III. Configuration
1. Set up kubectl tab completion
Edit /etc/profile and insert this command as the first line: source <(kubectl completion bash)
root@vms50:~# cat /etc/profile
source <(kubectl completion bash)
# /etc/profile: system-wide .profile file for the Bourne shell (sh(1))
# and Bourne compatible shells (bash(1), ksh(1), ash(1), ...).
...
Apply it:
root@vms50:~# source /etc/profile
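A per-user alternative (standard kubectl completion practice, not from the original) is to append the same line to ~/.bashrc so it loads only for that user's shells:
echo 'source <(kubectl completion bash)' >> ~/.bashrc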
That completes the Kubernetes cluster deployment on Ubuntu 18.04.
You can now inspect the cluster:
root@vms50:~# kubectl cluster-info
Kubernetes control plane is running at https://192.168.26.50:6443
CoreDNS is running at https://192.168.26.50:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
root@vms50:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms50.rhce.cc Ready control-plane,master 5h47m v1.21.3
vms51.rhce.cc Ready <none> 5h40m v1.21.3
vms52.rhce.cc Ready <none> 5h40m v1.21.3
root@vms50:~# kubectl get pod -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-6b59cd85f8-hqs2t 1/1 Running 0 5h1m
kube-system calico-node-cxjch 1/1 Running 0 5h1m
kube-system calico-node-lxbcg 1/1 Running 0 5h1m
kube-system calico-node-r7fwn 1/1 Running 0 5h1m
kube-system coredns-59d64cd4d4-lphk7 1/1 Running 0 5h47m
kube-system coredns-59d64cd4d4-zt7cf 1/1 Running 0 5h47m
kube-system etcd-vms50.rhce.cc 1/1 Running 0 5h47m
kube-system kube-apiserver-vms50.rhce.cc 1/1 Running 0 5h47m
kube-system kube-controller-manager-vms50.rhce.cc 1/1 Running 0 5h47m
kube-system kube-proxy-8xnms 1/1 Running 0 5h40m
kube-system kube-proxy-wcbps 1/1 Running 0 5h47m
kube-system kube-proxy-zcmw7 1/1 Running 0 5h41m
kube-system kube-scheduler-vms50.rhce.cc 1/1 Running 0 5h47m
root@vms50:~#