Kubernetes (k8s) Cluster Deployment Steps

Prepare the environment
[root@localhost ~]# cat /etc/redhat-release
CentOS Linux release 7.8.2003 (Core)

This lab uses three machines to deploy the k8s runtime environment: one master and two nodes.
10.10.21.8 k8s-master (Master)
10.10.21.28 k8s-node1 (Node1)
10.10.21.38 k8s-node2 (Node2)

Kubernetes cluster components:
• etcd - a highly available key/value store and service-discovery system
• flannel - provides cross-host container networking
• kube-apiserver - exposes the Kubernetes cluster API
• kube-controller-manager - runs the controllers that keep cluster services in their desired state
• kube-scheduler - schedules containers, assigning them to Nodes
• kubelet - starts containers on a Node according to the pod specs it is given
• kube-proxy - provides network proxying for services

Set the hostnames (persistent across reboots)
On the Master:
hostnamectl set-hostname k8s-master
On Node1:
hostnamectl set-hostname k8s-node1
On Node2:
hostnamectl set-hostname k8s-node2

Make all three machines resolve each other's hostnames
[root@localhost ~]# vi /etc/hosts
10.10.21.8 k8s-master
10.10.21.28 k8s-node1
10.10.21.38 k8s-node2
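
If you prefer not to edit the file by hand, a minimal sketch that appends the same entries non-interactively (run on each machine):
cat >> /etc/hosts <<'EOF'
10.10.21.8 k8s-master
10.10.21.28 k8s-node1
10.10.21.38 k8s-node2
EOF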

Install dependency packages
Install these dependencies on every node
[root@k8s-master ~]# yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget net-tools git

Switch the firewall to iptables with empty rules
Run on every node: disable firewalld, enable iptables, and flush the iptables rules
[root@k8s-master ~]# systemctl stop firewalld && systemctl disable firewalld
[root@k8s-master ~]# yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save

Disable swap
Run on every node. Pods running in swap perform very poorly, so it is best to disable swap (kubelet also refuses to start with swap enabled by default).
[root@k8s-master ~]# swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
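
To confirm the change took effect, free should now report zero swap (a quick optional check):
[root@k8s-master ~]# free -h # the Swap line should show 0 across the board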

Disable SELinux
Run on every node
[root@k8s-master ~]# setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config

Tune kernel parameters for K8s
Run on every node
[root@k8s-master ~]# pwd
/root
[root@k8s-master ~]# vi kubernetes.conf
# the next two lines make bridged traffic traverse iptables; they are required
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
# avoid swap; use it only when the system is about to OOM
vm.swappiness=0
# do not check whether enough physical memory is available
vm.overcommit_memory=1
# do not panic on OOM; let the OOM killer handle it
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
# disable IPv6; this step is also required
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720

Make the settings load at boot
[root@k8s-master ~]# cp kubernetes.conf /etc/sysctl.d/kubernetes.conf

[root@k8s-master ~]# sysctl -p /etc/sysctl.d/kubernetes.conf # apply immediately

Set the system time zone
Run on every node, adjusting for your environment; if the time zone is already CST, skip this step.
# set the system time zone to Asia/Shanghai
[root@k8s-master ~]# timedatectl set-timezone Asia/Shanghai
[root@k8s-master ~]# timedatectl set-local-rtc 0 # keep the hardware clock (RTC) in UTC
# restart services that depend on the system time
[root@k8s-master ~]# systemctl restart rsyslog
[root@k8s-master ~]# systemctl restart crond
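
Since ntpdate was installed with the dependency packages earlier, the clock can also be synced once by hand; pool.ntp.org below is only an example server, substitute whatever your environment uses:
[root@k8s-master ~]# ntpdate -u pool.ntp.org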

Stop services the system does not need
Run on every node; this disables the mail service
[root@k8s-master ~]# systemctl stop postfix && systemctl disable postfix

Set up rsyslogd and systemd journald
Run on every node. Because CentOS 7 boots via systemd, the system effectively has two logging stacks; here we configure systemd journald as the one in use.
[root@k8s-master ~]# mkdir /var/log/journal # directory where logs are persisted
[root@k8s-master ~]# mkdir /etc/systemd/journald.conf.d
[root@k8s-master ~]# vi /etc/systemd/journald.conf.d/99-prophet.conf
[Journal]
# persist logs to disk
Storage=persistent

# compress historical logs
Compress=yes

SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000

# cap total disk usage at 10G
SystemMaxUse=10G

# cap each journal file at 200M
SystemMaxFileSize=200M

# retain logs for 2 weeks
MaxRetentionSec=2week

# do not forward logs to syslog
ForwardToSyslog=no

[root@k8s-master ~]# systemctl restart systemd-journald
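
To confirm journald is now persisting logs under /var/log/journal, an optional check:
[root@k8s-master ~]# journalctl --disk-usage # reports how much space the journals occupy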

Upgrade the kernel to 4.4
The stock 3.10.x kernel that ships with CentOS 7.x has bugs that make Docker and Kubernetes unstable, so add the ELRepo repository and install a long-term-support kernel:
[root@k8s-master ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# after installing, check that the new kernel's menuentry in /boot/grub2/grub.cfg contains an initrd16 line; if it does not, install again!
[root@k8s-master ~]# yum --enablerepo=elrepo-kernel install -y kernel-lt
# make the new kernel the default boot entry
[root@k8s-master ~]# grub2-set-default 'CentOS Linux (4.4.189-1.el7.elrepo.x86_64) 7 (Core)'
Then reboot.

Kernel before the upgrade:
[root@k8s-master ~]# uname -r
3.10.0-1127.10.1.el7.x86_64
Kernel after the upgrade:
[root@k8s-master ~]# uname -r
4.4.245-1.el7.elrepo.x86_64
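
If you are unsure of the exact menuentry title to pass to grub2-set-default, the installed entries can be listed first (a sketch, assuming the standard CentOS grub2 layout):
[root@k8s-master ~]# awk -F\' '$1=="menuentry " {print $2}' /boot/grub2/grub.cfg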

Install Kubernetes
Prerequisites for enabling IPVS in kube-proxy
Run on every node
[root@k8s-master ~]# modprobe br_netfilter # load the br_netfilter module
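
Once br_netfilter is loaded, the bridge sysctls set earlier can be verified (optional; these keys only exist while the module is loaded):
[root@k8s-master ~]# sysctl net.bridge.bridge-nf-call-iptables # should print ... = 1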

[root@k8s-master ~]# vi /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4

[root@k8s-master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

nf_conntrack_ipv4 20480 0
nf_defrag_ipv4 16384 1 nf_conntrack_ipv4
ip_vs_sh 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs 147456 6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack 114688 2 ip_vs,nf_conntrack_ipv4
libcrc32c 16384 2 xfs,ip_vs

Install Docker
Run on every node
[root@k8s-master ~]# yum install -y yum-utils device-mapper-persistent-data lvm2

[root@k8s-master ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

A yum update is not strictly required before installing docker-ce:
[root@k8s-master ~]# yum update -y && yum install -y docker-ce

[root@k8s-master ~]# docker version
Client: Docker Engine - Community
Version: 19.03.13
API version: 1.40
Go version: go1.13.15
Git commit: 4484c46d9d
Built: Wed Sep 16 17:03:45 2020
OS/Arch: linux/amd64

Check the kernel again: it has reverted to the old one (most likely the yum update above pulled in a kernel update and reset the default boot entry)
[root@k8s-master ~]# uname -r
3.10.0-1127.10.1.el7.x86_64

[root@k8s-master ~]# grub2-set-default 'CentOS Linux (4.4.189-1.el7.elrepo.x86_64) 7 (Core)' && reboot # set the default back to the 4.4 kernel and reboot

## Start Docker
[root@k8s-master ~]# systemctl start docker && systemctl enable docker

# Configure the daemon
[root@k8s-master ~]# vi /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  }
}

[root@k8s-master ~]# mkdir -p /etc/systemd/system/docker.service.d # create a directory for Docker drop-in unit files

# restart the Docker service
[root@k8s-master ~]# systemctl daemon-reload && systemctl restart docker && systemctl enable docker

Install kubeadm (master/node setup)
Run on every node
## Add the Aliyun repo
[root@k8s-master ~]# vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg

[root@k8s-master ~]# yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
[root@k8s-master ~]# systemctl enable kubelet.service # be sure to enable this at boot; otherwise pods will not come back up automatically after a node reboot
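
A quick optional check that the pinned versions were installed:
[root@k8s-master ~]# kubeadm version -o short # should print v1.15.1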

Initialize the nodes
I downloaded the images ahead of time, so here they are loaded straight into Docker. Since there are quite a few images, I wrote a simple script to load them; run this script on every node.
[root@k8s-master ~]# pwd
/root
[root@k8s-master ~]# tar -zxvf kubeadm-basic.images.tar.gz
kubeadm-basic.images/
kubeadm-basic.images/coredns.tar
kubeadm-basic.images/etcd.tar
kubeadm-basic.images/pause.tar
kubeadm-basic.images/apiserver.tar
kubeadm-basic.images/proxy.tar
kubeadm-basic.images/kubec-con-man.tar
kubeadm-basic.images/scheduler.tar

[root@k8s-master ~]# vi load-images.sh
#!/bin/bash
# load every image tarball under /root/kubeadm-basic.images into Docker
ls /root/kubeadm-basic.images > /tmp/image-list.txt
cd /root/kubeadm-basic.images
for i in $( cat /tmp/image-list.txt )
do
    docker load -i $i
done
rm -f /tmp/image-list.txt

[root@k8s-master ~]# chmod +x load-images.sh

[root@k8s-master ~]# ./load-images.sh
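
After the script finishes, the k8s images should be visible locally (a quick check):
[root@k8s-master ~]# docker images | grep k8s.gcr.io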

Run kubeadm init on the master
[root@k8s-master ~]# kubeadm config print init-defaults > kubeadm-config.yaml # generate a template init config
[root@k8s-master ~]# vi kubeadm-config.yaml
localAPIEndpoint:
  advertiseAddress: 10.10.21.8
kubernetesVersion: v1.15.1
networking:
  podSubnet: "10.244.0.0/16" # we have to add this line ourselves
  serviceSubnet: 10.96.0.0/12

--- # the document below also has to be added by hand; it switches the default proxy mode to ipvs
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

[root@k8s-master ~]# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
Flag --experimental-upload-certs has been deprecated, use --upload-certs instead
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.13. Latest validated version: 18.09
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR NumCPU]: the number of available CPUs 1 is less than the required 2

Fix: giving the VM 2 CPU cores is enough.

[root@k8s-master ~]# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
Error:
error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR Port-6443]: Port 6443 is in use
Fix:
[root@k8s-master ~]# kubeadm reset
[root@k8s-master ~]# kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
Your Kubernetes control-plane has initialized successfully!

[root@k8s-master ~]# cd /etc/kubernetes/pki

[root@k8s-master pki]# ls
apiserver.crt etcd
apiserver-etcd-client.crt front-proxy-ca.crt
apiserver-etcd-client.key front-proxy-ca.key
apiserver.key front-proxy-client.crt
apiserver-kubelet-client.crt front-proxy-client.key
apiserver-kubelet-client.key sa.key
ca.crt sa.pub
ca.key

Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 10.10.21.8:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:61cda12938fe5d2dc0bcd2acff29578eb45a0ec692bf77fd59cd647671be6a7d

Following the success message above, run the following on the master (using the values from your own kubeadm init output):
[root@k8s-master ~]# mkdir -p $HOME/.kube
[root@k8s-master ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master ~]# chown $(id -u):$(id -g) $HOME/.kube/config

[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master NotReady master 5m39s v1.15.1

Deploy the network
[root@k8s-master ~]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2020-11-24 12:26:53-- https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 0.0.0.0, ::
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|0.0.0.0|:443... failed: Connection refused.
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|::|:443... failed: Cannot assign requested address.
Retrying.
The download failure above was worked around by fetching the file via *** and copying it to this machine.

[root@k8s-master ~]# pwd
/root

[root@k8s-master ~]# ls
anaconda-ks.cfg kubeadm-basic.images.tar.gz kubeadm-init.log kubernetes.conf
kubeadm-basic.images kubeadm-config.yaml kube-flannel.yml load-images.sh

[root@k8s-master ~]# kubectl create -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created

[root@k8s-master ~]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-2bkf8 1/1 Running 0 21m
coredns-5c98db65d4-5mgxd 1/1 Running 0 21m
etcd-k8s-master 1/1 Running 0 20m
kube-apiserver-k8s-master 1/1 Running 0 20m
kube-controller-manager-k8s-master 1/1 Running 0 20m
kube-flannel-ds-clx8n 1/1 Running 0 116s
kube-proxy-rg2nm 1/1 Running 0 21m
kube-scheduler-k8s-master 1/1 Running 0 20m

[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 21m v1.15.1

Join the remaining nodes. Run the following on each of them (using the parameters from the kubeadm init output):
[root@k8s-node1 ~]# kubeadm join 10.10.21.8:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:61cda12938fe5d2dc0bcd2acff29578eb45a0ec692bf77fd59cd647671be6a7d

[root@k8s-node2 ~]# kubeadm join 10.10.21.8:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:61cda12938fe5d2dc0bcd2acff29578eb45a0ec692bf77fd59cd647671be6a7d
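
Join tokens default to a 24-hour TTL. If the token from the init output has expired, a fresh join command can be printed on the master and pasted onto the nodes:
[root@k8s-master ~]# kubeadm token create --print-join-command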

Result:

The basic installation is now complete; node status can be checked with kubectl get nodes:
[root@k8s-master ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 41m v1.15.1
k8s-node1 Ready <none> 12m v1.15.1
k8s-node2 Ready <none> 13m v1.15.1

Install Harbor

Install Docker (the Docker installation here can be done the same way as earlier)
Get the most up-to-date version of Docker
https://get.docker.com

This script is meant for quick & easy install via:

[root@linux-node0 ~]# curl -fsSL https://get.docker.com -o get-docker.sh
[root@linux-node0 ~]# sh get-docker.sh

[root@linux-node0 ~]# service docker start
Redirecting to /bin/systemctl start docker.service
[root@linux-node0 ~]# docker version
Client:
Version: 18.09.0
API version: 1.39
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:48:22 2018
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 18.09.0
API version: 1.39 (minimum version 1.12)
Go version: go1.10.4
Git commit: 4d60db4
Built: Wed Nov 7 00:19:08 2018
OS/Arch: linux/amd64
Experimental: false

Enable Docker at boot and start the service
[root@k8s-master ~]# systemctl enable docker
[root@k8s-master ~]# systemctl start docker

Set the hostname (persistent across reboots)
[root@localhost ~]# hostnamectl set-hostname harbor

Run on all nodes
[root@localhost ~]# vi /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://hub.atguigu.com"]
}

[root@localhost ~]# systemctl restart docker
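
To confirm Docker picked up the insecure registry setting, docker info lists it (an optional check):
[root@localhost ~]# docker info | grep -A 1 'Insecure Registries'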

[root@localhost ~]# echo "10.10.21.229 hub.atguigu.com" >> /etc/hosts
For this lab, also add the entry to the hosts file on your own Windows machine:
C:\Windows\System32\drivers\etc\hosts

Install docker-compose on the Harbor host
[root@localhost ~]# curl -L https://get.daocloud.io/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m) > /usr/local/bin/docker-compose
# make docker-compose executable
[root@localhost ~]# chmod +x /usr/local/bin/docker-compose
# verify that docker-compose installed correctly
[root@localhost ~]# docker-compose --version
docker-compose version 1.25.0, build 0a186604

Download the Harbor installer
Download URL:
https://github.com/goharbor/harbor/releases/download/v1.10.6/harbor-offline-installer-v1.10.6.tgz
Copy the downloaded file to /home/norman.
[root@localhost ~]# cd /home/norman
[root@localhost norman]# tar -zxvf harbor-offline-installer-v1.10.6.tgz
[root@localhost norman]# mv harbor /usr/local/
[root@localhost norman]# cd /usr/local/harbor
[root@localhost harbor]# vi harbor.yml
hostname: hub.atguigu.com
https:
  certificate: /data/cert/server.crt
  private_key: /data/cert/server.key

[root@localhost ~]# mkdir -p /data/cert
[root@localhost ~]# cd /data/cert

Generate the certificate's private key, server.key:
[root@localhost cert]# openssl genrsa -des3 -out server.key 2048
Generating RSA private key, 2048 bit long modulus
............................................................+++
......+++
e is 65537 (0x10001)
Enter pass phrase for server.key:
Verifying - Enter pass phrase for server.key:

With the private key generated above, create a certificate signing request server.csr (enter the passphrase set above):
[root@localhost cert]# openssl req -new -key server.key -out server.csr
Country Name (2 letter code) [XX]:CN
State or Province Name (full name) []:SH
Locality Name (eg, city) [Default City]:SH
Organization Name (eg, company) [Default Company Ltd]:atguigu
Organizational Unit Name (eg, section) []:atguigu
Common Name (eg, your name or your server's hostname) []:hub.atguigu.com
Email Address []:normanjin@163.com

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []: (leave blank)
An optional company name []: (leave blank)

Back up the private key
[root@localhost cert]# cp server.key server.key.org

Remove the passphrase from the private key
[root@localhost cert]# openssl rsa -in server.key.org -out server.key
Enter pass phrase for server.key.org:
writing RSA key

Generate the self-signed certificate
[root@localhost cert]# openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt
Signature ok
subject=/C=CN/ST=SH/L=SH/O=atguigu/OU=atguigu/CN=hub.atguigu.com/emailAddress=normanjin@163.com
Getting Private key

[root@localhost cert]# chmod -R 777 /data/cert

[root@localhost harbor]# ./install.sh

Start and stop Harbor (run these from /usr/local/harbor, where docker-compose.yml lives)
docker-compose up -d # start
docker-compose stop # stop
docker-compose restart # restart
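
For example, to check the status of the Harbor containers:
[root@localhost harbor]# cd /usr/local/harbor && docker-compose ps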

The harbor-offline-installer-v1.10.6.tgz release installs successfully. I initially tried harbor-offline-installer-v1.10.2.tgz, which kept failing with the errors below:
ERROR: for registryctl Cannot restart container de55d0f103c78e9b8dde7786305fca6c614eae226f261218fa0ebe0730e01eb4: failed to initialize logging driver: dial tcp 127.0.0.1:1514: connect: connection refused

ERROR: for redis Cannot restart container 24373a9084a1db768141feca4a0c117822347eb6245f1a9558e7f28d469b7524: failed to initialize logging driver: dial tcp 127.0.0.1:1514: connect: connection refused

ERROR: for registry Cannot restart container de09d7aaac8eb36c64af91ae3dbe5442b2d061b93806bb585dd539d28de4256d: failed to initialize logging driver: dial tcp 127.0.0.1:1514: connect: connection refused

ERROR: for harbor-db Cannot restart container d339aa6970b46d31be2a2606cb4372a8dffeaec60676e6c2feabd6a18310e1b2: failed to initialize logging driver: dial tcp 127.0.0.1:1514: connect: connection refused

Browse to https://hub.atguigu.com/

Because the certificate is self-signed, the browser will show a security warning; proceed anyway.
Default credentials: admin/Harbor12345

Test logging in from another node
[root@k8s-node1 ~]# docker login https://hub.atguigu.com
Username: admin
Password:
Error response from daemon: Get https://hub.atguigu.com/v1/users/: x509: certificate signed by unknown authority

Fix:
On the Harbor server, take /data/cert/server.crt and copy it into /etc/ssl/certs on every node.
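
A minimal sketch for distributing the certificate from the Harbor server, assuming root SSH access to the nodes:
for n in k8s-master k8s-node1 k8s-node2; do
    scp /data/cert/server.crt root@$n:/etc/ssl/certs/
done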
[root@k8s-master ~]# ls /etc/ssl/certs
ca-bundle.crt ca-bundle.trust.crt make-dummy-cert Makefile renew-dummy-cert server.crt
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl restart docker
Log in again
[root@k8s-node1 ~]# docker login https://hub.atguigu.com
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded

Pull an image, then push it to Harbor
[root@k8s-node1 ~]# docker pull wangyanglinux/myapp:v1

[root@k8s-node1 ~]# docker tag wangyanglinux/myapp:v1 hub.atguigu.com/library/myapp:v1
[root@k8s-node1 ~]# docker push hub.atguigu.com/library/myapp:v1
The push refers to a repository [hub.atguigu.com/library/myapp]
a0d2c4392b06: Pushed
05a9e65e2d53: Pushed
68695a6cfd7d: Pushed
c1dc81a64903: Pushed
8460a579ab63: Pushed
d39d92664027: Pushed
v1: digest: sha256:9eeca44ba2d410e54fccc54cbe9c021802aa8b9836a0bcf3d3229354e4c8870e size: 1569

On the master, pull the image from Harbor and run it
[root@k8s-master ~]# kubectl run nginx-development --image=hub.atguigu.com/library/myapp:v1 --port=80 --replicas=1

[root@k8s-master ~]# kubectl get deployment
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-development 1/1 1 1 105s

[root@k8s-master ~]# kubectl get rs
NAME DESIRED CURRENT READY AGE
nginx-development-999b6fb7c 1 1 1 205s

[root@k8s-master ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-development-999b6fb7c-25lft 1/1 Running 0 82s 10.244.2.2 k8s-node1 <none> <none>

Check on k8s-node1 (as long as a pod is running, there is a companion /pause container alongside it)
[root@k8s-node1 ~]# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
99ed4f791e82 d4a5e0eaa84f "nginx -g 'daemon of…" 21 minutes ago Up 21 minutes k8s_nginx-development_nginx-development-999b6fb7c-qbqcj_default_af416d91-f967-4242-b80f-5b65f6e52c14_0
56cceadea634 k8s.gcr.io/pause:3.1 "/pause" 21 minutes ago Up 21 minutes k8s_POD_nginx-development-999b6fb7c-qbqcj_default_af416

[root@k8s-master ~]# curl 10.244.2.2
Hello MyApp | Version: v1 | <a rel="nofollow" href="hostname.html">Pod Name</a>

[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-development-999b6fb7c-25lft 1/1 Running 0 2m15s

[root@k8s-master ~]# kubectl delete pod nginx-development-999b6fb7c-25lft

[root@k8s-master ~]# kubectl get pod (after the pod is deleted, a new one is created automatically because --replicas=1 was specified earlier)
NAME READY STATUS RESTARTS AGE
nginx-development-3133788093-vg4jd 1/1 Running 0 5s

Scale out:
[root@k8s-master ~]# kubectl scale --replicas=3 deployment/nginx-development
deployment.extensions/nginx-development scaled

[root@k8s-master ~]# kubectl get pod
NAME READY STATUS RESTARTS AGE
nginx-development-999b6fb7c-dbcjk 1/1 Running 0 29m
nginx-development-999b6fb7c-qbqcj 1/1 Running 0 29m
nginx-development-999b6fb7c-vtclg 1/1 Running 0 29m

Create a ClusterIP service
[root@k8s-master ~]# kubectl expose deployment nginx-development --port=30000 --target-port=80
service "nginx-development" exposed

[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 148m
nginx-development ClusterIP 10.99.91.88 <none> 30000/TCP 9s

Access the ClusterIP:
[root@k8s-master ~]# curl 10.99.91.88:30000
Hello MyApp | Version: v1 | <a rel="nofollow" href="hostname.html">Pod Name</a>

And requests are round-robined:
[root@k8s-node2 ~]# curl 10.99.91.88:30000/hostname.html
nginx-development-999b6fb7c-qbqcj
[root@k8s-node2 ~]# curl 10.99.91.88:30000/hostname.html
nginx-development-999b6fb7c-vtclg
[root@k8s-node2 ~]# curl 10.99.91.88:30000/hostname.html
nginx-development-999b6fb7c-dbcjk

[root@k8s-master ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.96.0.1:443 rr
-> 10.10.21.8:6443 Masq 1 3 0
TCP 10.96.0.10:53 rr
-> 10.244.0.6:53 Masq 1 0 0
-> 10.244.0.7:53 Masq 1 0 0
TCP 10.96.0.10:9153 rr
-> 10.244.0.6:9153 Masq 1 0 0
-> 10.244.0.7:9153 Masq 1 0 0
TCP 10.99.91.88:30000 rr
-> 10.244.1.2:80 Masq 1 0 0
-> 10.244.1.3:80 Masq 1 0 0
-> 10.244.2.3:80 Masq 1 0 0
UDP 10.96.0.10:53 rr
-> 10.244.0.6:53 Masq 1 0 0
-> 10.244.0.7:53 Masq 1 0 0

[root@k8s-master ~]# kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-development-999b6fb7c-dbcjk 1/1 Running 0 45m 10.244.1.2 k8s-node2 <none> <none>
nginx-development-999b6fb7c-qbqcj 1/1 Running 0 44m 10.244.2.3 k8s-node1 <none> <none>
nginx-development-999b6fb7c-vtclg 1/1 Running 0 44m 10.244.1.3 k8s-node2 <none> <none>

Expose the cluster for external access
[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h6m
nginx-development ClusterIP 10.99.91.88 <none> 30000/TCP 38m

Change the service type from ClusterIP to NodePort so it can be reached from outside the cluster
[root@k8s-master ~]# kubectl edit svc nginx-development
type: NodePort

[root@k8s-master ~]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 3h10m
nginx-development NodePort 10.99.91.88 <none> 30000:31231/TCP 42m
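
Equivalently, instead of editing the service interactively, the type can be switched with a one-liner (a sketch using kubectl patch):
[root@k8s-master ~]# kubectl patch svc nginx-development -p '{"spec":{"type":"NodePort"}}'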

Browse to 10.10.21.8:31231.

Open it in another browser and you can see the round-robin behavior.