1. Problem: "Failed to create sandbox for pod" - pulling the registry.k8s.io/pause:3.6 image fails. The error reported in the logs by journalctl -xeu kubelet:

"Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to get sandbox image "registry.k8s.io/pause:3.6": failed to pull image "registry.k8s.io/pause:3.6"

Problem: the pull of the registry.k8s.io/pause:3.6 image fails, so the pod sandbox cannot be created and the error above is reported.

Solution: reconfigure the sandbox image repository, changing the default registry.k8s.io/pause:3.6 to "registry.aliyuncs.com/google_containers/pause:3.6".

## Generate containerd's default configuration file

containerd config default > /etc/containerd/config.toml
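Note: this overwrites any existing /etc/containerd/config.toml. If the file has already been customized, it may be safer to back it up first, for example:

cp /etc/containerd/config.toml /etc/containerd/config.toml.bak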


## Find the line in the file that sets the default sandbox image

grep -n "sandbox_image" /etc/containerd/config.toml
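In the default configuration the setting sits under the CRI plugin section and typically looks like the snippet below (the exact line number and pause tag depend on your containerd release):

[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"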

## Open the file with vim, locate sandbox_image, and change the repository address to registry.aliyuncs.com/google_containers/pause:3.6

vim /etc/containerd/config.toml

sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"
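If you prefer a non-interactive edit, a single sed substitution achieves the same change (assuming the file still contains the default registry.k8s.io/pause:3.6 value):

sed -i 's#registry.k8s.io/pause:3.6#registry.aliyuncs.com/google_containers/pause:3.6#' /etc/containerd/config.toml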


## Restart the containerd service

systemctl daemon-reload
systemctl restart containerd.service
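To confirm that containerd picked up the new value, you can dump its effective configuration and optionally try pulling the image directly (the crictl check assumes crictl is configured to talk to containerd's socket):

containerd config dump | grep sandbox_image
crictl pull registry.aliyuncs.com/google_containers/pause:3.6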


## Re-run the kubeadm initialization

kubeadm init --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.27.2 --apiserver-advertise-address=192.168.0.207 --pod-network-cidr=10.244.0.0/16
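If an earlier kubeadm init attempt failed partway (for example because of the sandbox image problem above), it may have left state behind that blocks a re-run; in that case reset first and then run the init command again:

kubeadm reset -f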


Output like the following indicates that initialization succeeded:

[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles

[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes

[bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key

[addons] Applied essential addon: CoreDNS

[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.0.207:6443 --token z5f5zn.bl1v73gijb0sc4kl \
    --discovery-token-ca-cert-hash sha256:0bd6d6dd3faa04cad26b34d34ba9b9b1202c99ff9e4b7ebe81a4742b4ba788ee
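If the token above has expired by the time a worker joins (bootstrap tokens are valid for 24 hours by default), a fresh join command can be generated on the control-plane node:

kubeadm token create --print-join-command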



2. Problem: The connection to the server raw.githubusercontent.com was refused - did you specify the right host or port?

Solution:


Cause: raw.githubusercontent.com cannot be reached from the host.

Fix: look up the real IP of raw.githubusercontent.com at https://www.ipaddress.com/.

Edit the hosts file (vim /etc/hosts) and add:

185.199.108.133 raw.githubusercontent.com
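You can check that the entry takes effect before retrying; getent resolves through /etc/hosts, and curl should now reach the host:

getent hosts raw.githubusercontent.com
curl -I https://raw.githubusercontent.com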

Then run again:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

and flannel installs successfully.
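Once the manifest is applied, the flannel pods should come up. The namespace differs between flannel releases (kube-system in older manifests, kube-flannel in newer ones), so a broad filter is the simplest check:

kubectl get pods -A | grep flannel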


3. After the certificates on the k8s master node are replaced, running kubectl get pods on a worker node reports Config not found: /etc/kubernetes/admin.conf

Another variant: export KUBECONFIG=/etc/kubernetes/admin.conf was never set after installation, which causes errors about being unable to connect to the apiserver at http://ip:6443.

Solution: export KUBECONFIG=/etc/kubernetes/admin.conf

Make it persistent so it is set automatically at login:

echo 'export KUBECONFIG=/etc/kubernetes/admin.conf' >> /root/.bashrc  

source /root/.bashrc
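With the variable set, a quick check that kubectl can now reach the apiserver:

kubectl get nodes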


4. Scheduling Pods onto the Kubernetes master node

For security reasons, Kubernetes does not schedule Pods onto the master node in the default configuration. If you want k8s-master to also act as a worker node, run the following commands.

Check the master's taints:

[root@master ~]# kubectl describe nodes master | grep Taints
Taints: node-role.kubernetes.io/control-plane:NoSchedule

Remove the taint:

kubectl taint node master node-role.kubernetes.io/control-plane-

Note: on my cluster the taint key is node-role.kubernetes.io/control-plane.

Or, if the taint key is node-role.kubernetes.io/master:

kubectl taint node master node-role.kubernetes.io/master-
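After removing the taint, one way to confirm that workloads can now be scheduled onto the master is to check the NODE column of running pods (assuming the node is named master, as in the commands above):

kubectl get pods -A -o wide | grep master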

To restore the taint and stop other workloads from being scheduled onto the master, re-enable it with:

kubectl taint node master node-role.kubernetes.io/control-plane=:NoSchedule

Or:

kubectl taint node master node-role.kubernetes.io/master=:NoSchedule
