Table of Contents
k8s Cluster Setup
Introduction & Goals
Environment Preparation
Software Installation
Master Node Setup
Worker Node Setup
Testing
References
k8s Cluster Setup
Introduction & Goals
Cloud native has become very popular in recent years. Work that once required a whole operations team can now be handled by a single person: we no longer have to worry about service dependencies, isolation, security, scheduling, or resource scaling. We only need to provide the machines, and the platform takes care of scheduling, securing, and monitoring the software.
This article walks through building a k8s (Kubernetes) cluster: first preparing the hosts and installing the software, then setting up the master node, then joining the worker nodes to the master, and finally deploying a test application to observe scheduling and the application lifecycle.
Environment Preparation
Host preparation
- 192.168.56.101 master
- 192.168.56.102 node1
- 192.168.56.103 node2
Set the hostname
- hostnamectl set-hostname your_new_hostname, or edit /etc/hostname with vim and change the hostname to k8s-master
- sudo vim /etc/hosts and append: 192.168.56.101 k8s-master
Then configure node1 and node2 the same way.
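To keep name resolution consistent across the cluster, it also helps to have all three entries in /etc/hosts on every machine. A minimal sketch, assuming the IPs and hostnames listed above (run on each host):
cat <<EOF >> /etc/hosts
192.168.56.101 k8s-master
192.168.56.102 node1
192.168.56.103 node2
EOF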
Network configuration
1. Disable SELinux
setenforce 0
sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
2. Enable the br_netfilter kernel module
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
echo '1' > /proc/sys/net/ipv4/ip_forward
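The modprobe and echo commands above only last until the next reboot. A small sketch to make them persistent; the file name k8s.conf is an arbitrary choice:
# load br_netfilter automatically at boot
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF
# apply the bridge and forwarding settings on every boot
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system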
3. Disable swap
swapoff -a
To make this permanent, edit /etc/fstab and comment out the swap line (a sed sketch is shown below).
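One way to comment out the swap entry without opening an editor, as a rough sketch (double-check /etc/fstab afterwards):
# comment every uncommented line that mounts swap
sed -i '/^[^#].*\sswap\s/ s/^/#/' /etc/fstab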
4. Stop the firewall
systemctl stop firewalld && systemctl disable firewalld
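If you would rather keep firewalld running, an alternative sketch is to open the ports Kubernetes uses instead; the list below follows the commonly documented defaults for the master (workers mainly need 10250 and the NodePort range), plus UDP 8472 for flannel VXLAN:
firewall-cmd --permanent --add-port=6443/tcp        # API server
firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd
firewall-cmd --permanent --add-port=10250-10252/tcp # kubelet, controller-manager, scheduler
firewall-cmd --permanent --add-port=30000-32767/tcp # NodePort services
firewall-cmd --permanent --add-port=8472/udp        # flannel VXLAN
firewall-cmd --reload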
Software Installation
1. Install Docker CE
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
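This installs the latest Docker CE, which kubeadm may warn about (see the "not on the list of validated versions" message in the init output later). If you want to stay on a validated release, a sketch, assuming a matching 18.06 package is still available in the repository:
# list the available versions, then pin one
yum list docker-ce --showduplicates | sort -r
yum install -y docker-ce-18.06.3.ce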
2. Install Kubernetes
Set up the Kubernetes yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
Install the packages
yum install -y kubelet kubeadm kubectl
3. Reboot
sudo reboot
4. Start docker and kubelet
systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet
Master Node Setup
Initialize the master node
kubeadm init --apiserver-advertise-address=192.168.56.101 --pod-network-cidr=10.244.0.0/16
Note
--apiserver-advertise-address: the IP address on which Kubernetes advertises its API server.
--pod-network-cidr: the IP address range for the Pod network. We are using the 'flannel' virtual network; if you want to use a different pod network (such as weave-net or calico), change this CIDR accordingly. For details see [https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/].
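The same flags can also be written as a kubeadm configuration file, which is easier to keep under version control. A minimal sketch, assuming the v1beta1 config API that ships with kubeadm 1.13:
cat <<EOF > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.56.101
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
networking:
  podSubnet: 10.244.0.0/16
EOF
kubeadm init --config kubeadm-config.yaml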
After the initialization completes, you will see output like the following:
[root@k8s-master ~]# kubeadm init --apiserver-advertise-address=192.168.56.101 --pod-network-cidr=10.244.0.0/16
[init] Using Kubernetes version: v1.13.4
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 18.09.3. Latest validated version: 18.06
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master localhost] and IPs [192.168.56.101 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.56.101]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 24.005860 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "k8s-master" as an annotation
[mark-control-plane] Marking the node k8s-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: bzkk1o.utt4o833fk00rzbx
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.56.101:6443 --token bzkk1o.utt4o833fk00rzbx --discovery-token-ca-cert-hash sha256:5546030cadbbaec5d85ca1a9402f6eeb1a1550c80ef638ae1753e1348c077e58
Copy the 'kubeadm join ... ... ...' command into a text editor; it will be used later on node1 and node2 to register those nodes with the master.
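Bootstrap tokens expire after 24 hours by default. If you join a node later and the saved token no longer works, you can generate a fresh join command on the master:
kubeadm token create --print-join-command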
To start using Kubernetes, we now need to run a few of the commands from that output.
Create the .kube configuration directory and copy admin.conf into it:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Next, deploy the flannel network to the Kubernetes cluster with kubectl. If you chose a different pod network, install its manifest instead; see [https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/].
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
The flannel resources are then created in the cluster.
Allow pods to be scheduled on the master node (optional, but if you only have two machines you must do this to see the effect):
kubectl taint nodes --all node-role.kubernetes.io/master-
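To confirm the taint was removed (assuming the master node is named k8s-master), the Taints field should now read <none>:
kubectl describe node k8s-master | grep Taints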
The master node is now set up; run kubectl get nodes on it to confirm.
Worker Node Setup
Run the join command saved earlier on both node1 and node2:
kubeadm join 192.168.56.101:6443 --token bzkk1o.utt4o833fk00rzbx --discovery-token-ca-cert-hash sha256:5546030cadbbaec5d85ca1a9402f6eeb1a1550c80ef638ae1753e1348c077e58
Once both nodes have joined, check the node status again from the master; all three nodes should now appear, as the commands below show.
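A quick way to check, run on the master (the -o wide flag also shows which node each system pod landed on):
kubectl get nodes -o wide
kubectl get pods -n kube-system -o wide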
Testing
Log in to the 'k8s-master' server and create a new deployment named 'nginx' with kubectl.
kubectl create deployment nginx --image=nginx
To see the details of the 'nginx' deployment specification, run the following command.
kubectl describe deployment nginx
You will get the nginx pod deployment specification.
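To actually watch the scheduling mentioned in the introduction, you can scale the deployment and see which node each replica lands on; a short sketch:
kubectl scale deployment nginx --replicas=3
kubectl get pods -o wide   # the NODE column shows where each pod was scheduled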
Next, we will expose the nginx pods so that they are reachable from outside the cluster. For that we create a new NodePort service.
Run the kubectl command below.
kubectl create service nodeport nginx --tcp=80:80
Make sure there are no errors, then check the nginx pod and the NodePort service with the kubectl commands below.
kubectl get pods
kubectl get svc
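Once the service is up, the nginx welcome page should be reachable on the allocated NodePort of any node. The port is assigned from the 30000-32767 range, so read it from the service first; a sketch:
NODEPORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
curl http://192.168.56.101:${NODEPORT}
curl http://192.168.56.102:${NODEPORT}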