Configure an offline installation environment (packages from https://centos.pkgs.org/, installed in dependency order) - CentOS 7.8 (the machine must have more than 2 CPU cores and more than 2 GB of RAM, otherwise installation will fail)

Server          Node
192.168.6.100   k8s-master
192.168.6.101   k8s-node1
192.168.6.102   k8s-node2

1. Prepare the required packages (on a CentOS server with internet access)

mkdir -p /opt/repo && cd /opt/repo
yum install --downloadonly --downloaddir=/opt/repo ntpdate wget httpd createrepo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo
yum install --downloadonly --downloaddir=/opt/repo docker-ce
cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum install --downloadonly --downloaddir=/opt/repo kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0
# Check the official Kubernetes/Calico version compatibility matrix:
# https://projectcalico.docs.tigera.io/about/about-calico
wget --no-check-certificate https://projectcalico.docs.tigera.io/v3.23/manifests/calico.yaml
wget   https://raw.githubusercontent.com/kubernetes/dashboard/v2.4.0/aio/deploy/recommended.yaml

2. Download the offline packages and create the local repository with createrepo

deltarpm-version.x86_64.rpm
python-deltarpm-version.x86_64.rpm
libxml2-python-version.x86_64.rpm
createrepo-version.noarch.rpm

Download and copy them to /opt/repo/createrepo, then cd /opt/repo/createrepo to enter that directory.
Install:

rpm -ivh /opt/repo/createrepo/deltarpm-3.6-3.el7.x86_64.rpm
rpm -ivh /opt/repo/createrepo/python-deltarpm-3.6-3.el7.x86_64.rpm
rpm -ivh /opt/repo/createrepo/libxml2-python-2.9.9-2.7.mga7.x86_64.rpm
rpm -ivh /opt/repo/createrepo/createrepo-0.9.9-28.el7.noarch.rpm

Create the local repository

cd /opt/repo
createrepo .
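
To confirm the metadata was generated (createrepo writes it into a repodata/ subdirectory), and to refresh it after adding more rpms later, a quick check:

ls /opt/repo/repodata/          # repomd.xml and related metadata files should be present
createrepo --update /opt/repo   # re-run whenever new rpms are added to the directory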

Build the local yum repository

1) Copy the local repository built with createrepo

Package the repository built in the previous step (the contents of /opt/repo on the internet-connected server), copy it to /opt/repo on the intranet server, and extract it there.
Note: the operations below are performed on the intranet server.
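
A minimal sketch of the transfer, assuming the intranet server is reachable over SSH at 192.168.6.100 (adapt hosts and paths; for a fully isolated network a USB drive works just as well):

# On the internet-connected server: pack and copy the repo
tar -czf /tmp/repo.tar.gz -C /opt repo
scp /tmp/repo.tar.gz root@192.168.6.100:/opt/
# On the intranet server: extract into /opt/repo
tar -xzf /opt/repo.tar.gz -C /opt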

2) Modify the configuration files under /etc/yum.repos.d to point the source at the local repo.

1) Enter /etc/yum.repos.d

cd /etc/yum.repos.d/

2) Back up all configuration files

rename .repo .repo.bak *

3) Edit the local configuration file

vi CentOS-Local.repo

Content:

[base]
name=CentOS-Local
baseurl=file:///opt/repo
gpgcheck=0
enabled=1

3) List the available yum repositories

yum clean all
yum repolist

The local yum repository is now ready, and yum can install any package present in it.
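
As a quick end-to-end check, try a test install restricted to the new repo (wget was among the packages downloaded in step 1; substitute any package you know is present):

yum -y install wget --disablerepo='*' --enablerepo=base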

3. Install httpd (publish the local repo: option 1)

Download the offline packages:

apr-version.el7.x86_64.rpm
apr-util-version.el7.x86_64.rpm
httpd-tools-version.el7.centos.x86_64.rpm
mailcap-version.el7.noarch.rpm
httpd-version.el7.centos.x86_64.rpm

Download and copy them to /opt/repo/httpd, then install:

rpm -ivh /opt/repo/httpd/apr-1.4.8-7.el7.x86_64.rpm
rpm -ivh /opt/repo/httpd/apr-util-1.5.2-6.el7.x86_64.rpm
rpm -ivh /opt/repo/httpd/httpd-tools-2.4.6-95.el7.centos.x86_64.rpm
rpm -ivh /opt/repo/httpd/mailcap-2.1.41-2.el7.noarch.rpm
rpm -ivh /opt/repo/httpd/httpd-2.4.6-95.el7.centos.x86_64.rpm
systemctl start httpd && systemctl enable httpd
cp -r /opt/repo/ /var/www/html/CentOS-7

You can check the published packages in a browser: open http://xxx.xxx.xxx.xxx/CentOS-7, substituting the server's IP.
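The same check from a shell, assuming the repo server's address from the table above:

curl http://192.168.6.100/CentOS-7/                                                          # should return an index listing
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.6.100/CentOS-7/repodata/repomd.xml   # expect 200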

4. Install nginx (publish the local repo: option 2)

If nginx is already available on the host, you can publish the repo through nginx instead by adding the following to its configuration file:

# Publish the local repo
location /repo/ {
    autoindex on;
    alias /opt/repo/;
    proxy_connect_timeout 3;
    proxy_send_timeout    30;
    proxy_read_timeout    30;
}
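
After editing, validate and reload nginx. Note that with this option, client repo files should point their baseurl at the /repo/ location rather than /CentOS-7:

nginx -t && nginx -s reload
# Client repo files would then use:
# baseurl=http://192.168.6.100/repo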

5. Accessing the local repo from other servers

Note: the following is configured on every server that needs to access the intranet yum repo.
1) Back up all configuration files under /etc/yum.repos.d.

cd /etc/yum.repos.d/
rename .repo .repo.bak *

2) Edit the local configuration file to point at the intranet yum repo

vi CentOS-Local.repo

Content:

[base]
name=CentOS-Local
baseurl=http://192.168.6.100/CentOS-7
gpgcheck=0
enabled=1

3) List the available yum repositories

yum clean all
yum repolist

6. Initial configuration

6.1 Environment preparation: run the following on all nodes.

# Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
# Disable SELinux
sed -i 's/enforcing/disabled/' /etc/selinux/config   # permanent
setenforce 0                                         # temporary
# Disable swap
swapoff -a                                           # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab                  # permanent
# Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system   # apply
# Time synchronization
yum install ntpdate -y
ntpdate time.windows.com
# If that fails, use: ntpdate 202.112.29.82
# Set the hostname according to the plan; run the matching command on each of the three nodes
hostnamectl set-hostname k8s-master
# Add the hosts entries on the master
cat >> /etc/hosts << EOF
192.168.6.100 k8s-master
192.168.6.101 k8s-node1
192.168.6.102 k8s-node2
EOF
# Create the offline repo on every node; see section 5, "Accessing the local repo from other servers"
cd /etc/yum.repos.d/
rename .repo .repo.bak *

cat > /etc/yum.repos.d/CentOS-Local.repo << EOF
[base]
name=CentOS-Local
baseurl=http://192.168.6.100/CentOS-7
gpgcheck=0
enabled=1
EOF

6.2 Install Docker, kubeadm, and kubelet [all nodes]

# Install Docker:
yum -y install docker-ce
systemctl enable docker && systemctl start docker
Set up a local Docker registry

1) On the internet-connected server, pull the registry image and the other required images

# Image for publishing a local registry
docker pull registry
# Kubernetes core components
docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0
docker pull registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0
docker pull registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0
docker pull registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0
docker pull registry.aliyuncs.com/google_containers/pause:3.6
docker pull registry.aliyuncs.com/google_containers/etcd:3.5.1-0
docker pull registry.aliyuncs.com/google_containers/coredns:v1.8.6
# For the Calico CNI, find the required images by searching the downloaded calico.yaml, e.g. grep image calico.yaml
docker pull docker.io/calico/cni:v3.23.2
docker pull docker.io/calico/node:v3.23.2
docker pull docker.io/calico/kube-controllers:v3.23.2

Save the images:
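
The output directories must exist before docker save redirects into them (paths taken from the commands below):

mkdir -p /www/ten/calico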

docker save registry > /www/ten/registry.tar

docker save registry.aliyuncs.com/google_containers/kube-apiserver > /www/ten/kube-apiserver.tar
docker save registry.aliyuncs.com/google_containers/kube-controller-manager > /www/ten/kube-controller-manager.tar
docker save registry.aliyuncs.com/google_containers/kube-scheduler > /www/ten/kube-scheduler.tar
docker save registry.aliyuncs.com/google_containers/kube-proxy > /www/ten/kube-proxy.tar
docker save registry.aliyuncs.com/google_containers/pause > /www/ten/pause.tar
docker save registry.aliyuncs.com/google_containers/etcd > /www/ten/etcd.tar
docker save registry.aliyuncs.com/google_containers/coredns > /www/ten/coredns.tar

docker save docker.io/calico/cni > /www/ten/calico/cni.tar
docker save docker.io/calico/node > /www/ten/calico/node.tar
docker save docker.io/calico/kube-controllers > /www/ten/calico/kube-controllers.tar
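
The load commands in the next step read from /opt/docker on the intranet master, so the tarballs saved under /www/ten need to be copied there; a sketch assuming SSH access (any offline transfer method works):

# On the intranet master: mkdir -p /opt/docker
# On the internet-connected server:
scp -r /www/ten/* root@192.168.6.100:/opt/docker/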

2) Load the downloaded images into Docker on the intranet master server

docker load -i /opt/docker/registry.tar

docker load -i /opt/docker/kube-apiserver.tar
docker load -i /opt/docker/kube-controller-manager.tar
docker load -i /opt/docker/kube-scheduler.tar
docker load -i /opt/docker/kube-proxy.tar
docker load -i /opt/docker/pause.tar
docker load -i /opt/docker/etcd.tar
docker load -i /opt/docker/coredns.tar

docker load -i /opt/docker/calico/cni.tar
docker load -i /opt/docker/calico/node.tar
docker load -i /opt/docker/calico/kube-controllers.tar
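
Equivalently, a loop over everything copied to /opt/docker:

for f in /opt/docker/*.tar /opt/docker/calico/*.tar; do
  docker load -i "$f"
done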

3) Add the private registry address to daemon.json and restart Docker

cat > /etc/docker/daemon.json << EOF
{
  "insecure-registries": ["192.168.6.100:5000"],
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker
docker info   # verify the settings took effect

4) Create the registry container and expose its port

docker run -d -p 5000:5000 --restart=on-failure:10 -v /data/registry:/var/lib/registry registry
# -p maps host port to container port; -v persists the registry data on the host
# (the registry:2 image stores its data under /var/lib/registry inside the container)

5) Tag and push the images to the local registry

Add the following on every cluster server:

vi /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.6.100:5000"],
  "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
Tag the images and push them to the local registry:
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.0 192.168.6.100:5000/kube-apiserver:v1.23.0
docker push 192.168.6.100:5000/kube-apiserver:v1.23.0

docker tag registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.0 192.168.6.100:5000/kube-controller-manager:v1.23.0
docker push 192.168.6.100:5000/kube-controller-manager:v1.23.0

docker tag registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.0 192.168.6.100:5000/kube-scheduler:v1.23.0
docker push 192.168.6.100:5000/kube-scheduler:v1.23.0

docker tag registry.aliyuncs.com/google_containers/kube-proxy:v1.23.0 192.168.6.100:5000/kube-proxy:v1.23.0
docker push 192.168.6.100:5000/kube-proxy:v1.23.0


docker tag registry.aliyuncs.com/google_containers/pause:3.6 192.168.6.100:5000/pause:3.6
docker push 192.168.6.100:5000/pause:3.6

docker tag registry.aliyuncs.com/google_containers/etcd:3.5.1-0 192.168.6.100:5000/etcd:3.5.1-0
docker push 192.168.6.100:5000/etcd:3.5.1-0

docker tag registry.aliyuncs.com/google_containers/coredns:v1.8.6 192.168.6.100:5000/coredns:v1.8.6
docker push 192.168.6.100:5000/coredns:v1.8.6
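
The Calico images loaded earlier must also be reachable from the local registry, since calico.yaml will be edited below to pull from 192.168.6.100:5000. A sketch following the same pattern (the target names assume the docker.io/calico prefix is replaced verbatim by the registry address):

docker tag docker.io/calico/cni:v3.23.2 192.168.6.100:5000/cni:v3.23.2
docker push 192.168.6.100:5000/cni:v3.23.2

docker tag docker.io/calico/node:v3.23.2 192.168.6.100:5000/node:v3.23.2
docker push 192.168.6.100:5000/node:v3.23.2

docker tag docker.io/calico/kube-controllers:v3.23.2 192.168.6.100:5000/kube-controllers:v3.23.2
docker push 192.168.6.100:5000/kube-controllers:v3.23.2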

Check that the registry was created successfully:

curl -XGET http://192.168.6.100:5000/v2/_catalog

On success it returns something like:

{"repositories":["coredns","etcd","kube-apiserver","kube-controller-manager","kube-proxy","kube-scheduler","pause"]}

Initialize kubeadm on the master node:
yum install -y kubelet-1.23.0 kubeadm-1.23.0 kubectl-1.23.0
systemctl enable kubelet
kubeadm init \
  --apiserver-advertise-address=192.168.6.100 \
  --image-repository 192.168.6.100:5000 \
  --kubernetes-version v1.23.0 \
  --service-cidr=10.96.0.0/12 \
  --pod-network-cidr=10.244.0.0/16 \
  --ignore-preflight-errors=all

--apiserver-advertise-address: the cluster advertise address
--image-repository: the default registry k8s.gcr.io is unreachable offline, so point it at the local registry instead
--kubernetes-version: the Kubernetes version, matching the packages installed above
--service-cidr: the cluster-internal virtual network (unified Service entry point for Pods)
--pod-network-cidr: the Pod network; must match the CNI component YAML deployed below
After initialization, a kubeadm join command is printed; copy it, since it is needed for the nodes to join the master.

If the installation fails, run kubeadm reset -f on all nodes and then re-run the installation.

Copy the Kubernetes admin credentials:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
# List the nodes:
kubectl get nodes
Note: the nodes report NotReady because the network plugin is not deployed yet; continue.
Configure the Kubernetes worker nodes [run on the nodes]
To add a node to the cluster, run the kubeadm join command printed by kubeadm init:
kubeadm join k8s-master:6443 --token ajslxs.tjvr3gk0m42eewen \
	--discovery-token-ca-cert-hash sha256:b8e814a4f8f887753f48157b0768c438bb7329813751b9e08e3eb5c50240ace4 

# The token is valid for 24 hours by default and cannot be used after it expires. Generate a fresh join command with:
kubeadm token create --print-join-command
Deploy the container network (run on the master)
# Calico is a pure layer-3 data center networking solution and currently the mainstream network option for Kubernetes.
# Download the YAML from the local repo server:
wget http://192.168.6.100/CentOS-7/calico.yaml
# 1) After downloading, set the Pod network (CALICO_IPV4POOL_CIDR) to the same value as --pod-network-cidr in kubeadm init.
# 2) Replace the image references: change docker.io/calico to the local registry 192.168.6.100:5000.
# 3) In calico.yaml, below the block
- name: CLUSTER_TYPE
  value: "k8s,bgp"
# add:
- name: IP_AUTODETECTION_METHOD
  value: "interface=ens.*"
After editing the file, deploy it:
cd /opt/repo/
kubectl apply -f calico.yaml
kubectl get pods -n kube-system   # it takes a while after this for all pods to reach Running

# Once all Calico pods are Running, the nodes become Ready. If a node still does not recover, restart the kubelet: systemctl restart kubelet
# Note: from here on, apply all YAML files on the master node only.
# Installation directory: /etc/kubernetes/
# Component manifest directory: /etc/kubernetes/manifests/

Problems encountered during execution

Network error: Unable to update cni config: No networks found in /etc/cni/net.d

mkdir -p /etc/cni/net.d

cat > /etc/cni/net.d/10-flannel.conflist << EOF
{
  "name": "cbr0",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "hairpinMode": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}
EOF

Run systemctl restart kubelet to restart the service, then initialize/join the cluster node again.

If you see

Unable to connect to the server: net/http: request canceled (Client.Timeout
exceeded while awaiting headers)

rebooting the machine may resolve it.