CentOS 7: Setting up a single-master k8s cluster (version 1.20.1-0)

Contents:

CentOS 7: Setting up a single-master k8s cluster (version 1.20.1-0)
Machines and environment
Note
Preparing the machine environment
Setting the hostnames
Adding host mappings
Disabling the firewall
Disabling SELinux
Disabling swap
Enabling bridge-nf
Rebooting and verifying
Installing Docker
Time synchronization
Installing the basic Kubernetes components
Configuring a domestic Kubernetes yum source
Installing the components
Installing the master node
Creating the cluster
Installing the Flannel network plugin
Installing the nodes
Configuring environment variables
Joining the nodes to the master
Troubleshooting
Diagnosing nodes stuck in NotReady

Machines and environment

OS: CentOS 7.4

Machines:

192.168.131.130 k8s-master

192.168.131.141 k8s-node1

192.168.131.142 k8s-node2


Machine specs:

k8s-master: 2 cores, 4 GB RAM

k8s-node1: 4 cores, 8 GB RAM

k8s-node2: 4 cores, 8 GB RAM

Note: the minimum is 2 CPU cores and 2 GB of RAM; size your VMs according to what your hardware allows.

Note

Unless a step names a specific host, run it on every machine.

Preparing the machine environment

Setting the hostnames

On k8s-master, run: hostnamectl set-hostname k8s-master

On k8s-node1, run: hostnamectl set-hostname k8s-node1

On k8s-node2, run: hostnamectl set-hostname k8s-node2

Verify the result with: hostname

Adding host mappings

Run vi /etc/hosts and add:

192.168.131.130 k8s-master

192.168.131.141 k8s-node1

192.168.131.142 k8s-node2


Note: adjust these IPs to match your own machines.

Verify:

- ping k8s-master

- ping k8s-node1

- ping k8s-node2

Disabling the firewall

systemctl stop firewalld && systemctl disable firewalld


Disabling SELinux

Temporarily:

setenforce 0

Permanently: edit /etc/selinux/config and set SELINUX to disabled:

vi /etc/selinux/config
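The same change as a one-liner, assuming the file still contains the default SELINUX=enforcing entry:

sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config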

Disabling swap

Temporarily:

swapoff -a

Permanently:

vi /etc/fstab

Comment out the swap entry (usually the last line).

Check that it is off: run free -m; if every value in the Swap row is zero, swap is disabled.
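If you would rather not edit /etc/fstab by hand, a sed command along these lines should comment out a standard swap entry (fstab layouts vary, so check the file afterwards):

sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/#\1/' /etc/fstab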

Enabling bridge-nf

Edit /etc/sysctl.conf (vi /etc/sysctl.conf) and append the following:

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

net.bridge.bridge-nf-call-arptables = 1

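These settings take effect after the reboot in the next step. To apply them immediately instead, load the br_netfilter kernel module first (the bridge keys only exist once it is loaded), then reload the config:

modprobe br_netfilter
sysctl -p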

Rebooting and verifying

Reboot the server: reboot

Firewall off? systemctl status firewalld

SELinux off? getenforce

Swap off? free -m

Installing Docker

See the separate Docker installation post for a full walkthrough. The short version:

yum install docker -y && systemctl start docker && systemctl enable docker

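One optional step the original post skips: kubeadm warns when Docker uses the cgroupfs cgroup driver, since the kubelet works best when both use systemd. A commonly used tweak (a sketch, not required for this guide to work; it overwrites any existing /etc/docker/daemon.json, so merge by hand if you already have one):

cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload && systemctl restart docker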

Time synchronization

Install the NTP tools: yum install -y ntp ntpdate

Sync the clock once: ntpdate cn.pool.ntp.org

Start the service and enable it at boot: systemctl start ntpd && systemctl enable ntpd

Installing the basic Kubernetes components

Configuring a domestic Kubernetes yum source

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

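Before pinning versions, you can confirm the repo is reachable and that 1.20.1-0 is actually published (an illustrative check, not in the original post):

yum list kubelet kubeadm kubectl --showduplicates --disableexcludes=kubernetes | grep 1.20.1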

Installing the components

Install kubelet, kubeadm, and kubectl:

yum install -y kubelet-1.20.1-0 kubeadm-1.20.1-0 kubectl-1.20.1-0 --disableexcludes=kubernetes


Start kubelet and enable it at boot:

systemctl start kubelet && systemctl enable kubelet
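A quick sanity check that the pinned versions landed:

kubeadm version
kubelet --version

Do not worry if kubelet immediately enters a restart loop at this point; it has no configuration until kubeadm init (or kubeadm join) provides one, and that is expected.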

Installing the master node

Note: this part runs on k8s-master only.

Creating the cluster

Run the init command:

kubeadm init \
--kubernetes-version=v1.20.1 \
--pod-network-cidr=10.244.0.0/16 \
--image-repository registry.aliyuncs.com/google_containers \
--apiserver-advertise-address 192.168.131.130 \
--v=6


Parameter notes:

kubernetes-version: the version to install

pod-network-cidr: the subnet range used for pod networking

image-repository: the image registry to pull from (pulling from Aliyun avoids the problem of k8s.gcr.io being unreachable)

apiserver-advertise-address: the server IP the API server advertises (on multi-NIC machines, use this to pick the right one)

v=6: verbose logging; init prints detailed progress, which often shows exactly where a failing deployment goes wrong


Some images may still fail to pull. The command below lists exactly which image versions are needed, so you can pull them manually and re-tag them:

kubeadm config images list
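The manual pull-and-tag workaround looks like this (a sketch; the image name and tag are illustrative, substitute whatever the list command prints):

docker pull registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.1
docker tag registry.aliyuncs.com/google_containers/kube-apiserver:v1.20.1 k8s.gcr.io/kube-apiserver:v1.20.1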

If init reports errors:

Fix whatever the error log points to.

Reset the partial init: kubeadm reset

Then run init again.

After init succeeds

Add the kubeconfig to the environment:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

source ~/.bash_profile


Check component status: kubectl get cs

If scheduler and controller-manager report Unhealthy, it is usually because kube-scheduler and kube-controller-manager disable their insecure ports by default. Remove or comment out the port=0 setting in the following manifest files:

vi /etc/kubernetes/manifests/kube-scheduler.yaml

vi /etc/kubernetes/manifests/kube-controller-manager.yaml
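Equivalent sed one-liners, assuming the manifests contain the stock "- --port=0" argument (verify the files afterwards):

sed -i 's/- --port=0/#- --port=0/' /etc/kubernetes/manifests/kube-scheduler.yaml
sed -i 's/- --port=0/#- --port=0/' /etc/kubernetes/manifests/kube-controller-manager.yaml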


Restart kubelet: systemctl restart kubelet

Check the status again: kubectl get cs

Installing the Flannel network plugin

Create a directory: mkdir -p /opt/yaml

Create the yaml file:

cd /opt/yaml && touch kube-flannel.yaml && vi kube-flannel.yaml


Paste the following content into the file:

---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.13.1-rc1
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg


Swap quay.io for the USTC mirror (quay.mirrors.ustc.edu.cn):

sed -i 's#quay.io/coreos/flannel#quay.mirrors.ustc.edu.cn/coreos/flannel#' /opt/yaml/kube-flannel.yaml


Apply the manifest:

kubectl apply -f /opt/yaml/kube-flannel.yaml


Check node status: kubectl get nodes
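Nodes only go Ready once the flannel pods are running, so it helps to watch those directly (the app=flannel label comes from the manifest above):

kubectl get pods -n kube-system -l app=flannel -o wide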

Installing the nodes

Note: this part runs on the node machines, except where a step says otherwise.

Configuring environment variables

Copy the kubeconfig from the master to the nodes. Note: run these on the master:

scp /etc/kubernetes/admin.conf root@k8s-node1:/etc/kubernetes/
scp /etc/kubernetes/admin.conf root@k8s-node2:/etc/kubernetes/


Add the environment variable on the nodes. Note: run these on each node:

echo "export KUBECONFIG=/etc/kubernetes/admin.conf" >> ~/.bash_profile

source ~/.bash_profile


Joining the nodes to the master

Because some configuration changed and the master was rebooted along the way, regenerate the join token to head off errors. On the master, run:

kubeadm token create --print-join-command
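An aside not in the original post: if you ever need just the discovery hash for a hand-written join command, the kubeadm documentation derives it from the master's CA certificate like this:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'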

On each node, run:

kubeadm join 192.168.131.130:6443 --token d59oyt.zleslhkqdbtiq5tg --discovery-token-ca-cert-hash sha256:28cb37c295368b14070590b85703fadea268edef7893ee04649f7d5e4e9124ab --v=6


Note the trailing --v=6: it is not part of the printed join command and has to be added by hand.

On the master, check: kubectl get nodes

Troubleshooting

Diagnosing nodes stuck in NotReady

The usual cause is some container that never came up.

Run: kubectl get pods -o wide --all-namespaces

For any pod stuck at 0/1, run docker logs <container-id> on its node to see why it will not start.
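Digging further (standard kubectl/docker commands; the names in angle brackets are placeholders):

kubectl describe pod <pod-name> -n kube-system   # the Events section usually names the failing image or mount
kubectl describe node k8s-node1                  # check the node's Conditions and Events
docker ps -a | grep flannel                      # on the affected node, find the exited container
docker logs <container-id>                       # read why it will not start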

————————————————

Copyright notice: the article above is original work by CSDN blogger 「随丶芯」, licensed under CC 4.0 BY-SA; include the original source link and this notice when reposting.

Original link: https://blog.csdn.net/luohongtuCSDN/article/details/118730587

 




Trying it out with Nginx

Create a deployment and expose it through a NodePort service:



kubectl create deployment nginx-deploy --image=nginx
kubectl expose deployment nginx-deploy --port=80 --type=NodePort


Check the status:



kubectl get pod,svc




 


(Screenshot: output of kubectl get pod,svc, showing the nginx-deploy pod and the NodePort assigned to its service.)
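The assigned NodePort (32007 in this run) can also be read straight off the service object:

kubectl get svc nginx-deploy -o jsonpath='{.spec.ports[0].nodePort}'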


Open it in a browser:

http://10.254.193.115:32007/

Result:




 


(Screenshot: the Nginx welcome page.)