1. Prerequisites
Operating system: CentOS 7.5
- 2 GB of RAM or more per machine (less leaves little memory for your applications)
- 2 CPU cores or more
- Full network connectivity between all machines in the cluster (public or private network is fine)
- A unique hostname, MAC address, and product_uuid on every node; see the kubeadm installation docs for details.
- Certain ports open on the machines; see the kubeadm installation docs for details.
- Swap disabled. You MUST disable the swap partition for the kubelet to work properly.
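The checks above can be scripted. A minimal pre-flight sketch (assumes a standard Linux node; run it on every machine before installing):

```shell
# Quick pre-flight check; thresholds mirror the requirements above.
cpus=$(grep -c ^processor /proc/cpuinfo)                       # want >= 2
mem_mb=$(awk '/^MemTotal/ {print int($2/1024)}' /proc/meminfo) # want >= 2048
swap_kb=$(awk '/^SwapTotal/ {print $2}' /proc/meminfo)         # want 0 once swap is disabled
echo "cpus=$cpus mem_mb=$mem_mb swap_kb=$swap_kb"
# product_uuid and MAC addresses must be unique across nodes; compare these by hand:
# sudo cat /sys/class/dmi/id/product_uuid
# ip link show
```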
2. Installation Steps
- Set the hostname
hostname k8s-master
- Edit /etc/hosts
vi /etc/hosts
192.168.194.135 k8s-master
Substitute your own IP address here as appropriate.
- Disable the swap partition and SELinux
# Disable swap (also remove the swap entry from /etc/fstab so this survives a reboot),
# then set SELinux to permissive mode (effectively disabling it)
swapoff -a
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
- Configure the yum repository and install the core packages
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install Docker and the Kubernetes packages:
yum install -y docker kubelet-1.18.2-0 kubeadm-1.18.2-0 kubectl-1.18.2-0 --disableexcludes=kubernetes
- Enable Docker and the kubelet on boot
$ systemctl enable docker.service kubelet.service
$ systemctl start docker kubelet
- Initialize the master
kubeadm init --image-repository=registry.aliyuncs.com/google_containers --service-cidr=10.1.0.0/16 --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.18.2
When the command succeeds, pay attention to the following items in its output:
- Run the commands it prints:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
- Save the join command with its token:
kubeadm join 172.17.0.13:6443 --token a5tzoh.svr1xpsh2kpdjfcn --discovery-token-ca-cert-hash sha256:816577e3c2b2c3184002f49089de08963dd34e63166017bbe7edbeba15fdc2b2
- Optionally allow pods to be scheduled on the master
kubectl taint nodes --all node-role.kubernetes.io/master-
For security reasons the master node does not schedule pods by default; the command above removes that restriction.
- Install the network plugin
$ iptables -P FORWARD ACCEPT
$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Once everything has been applied, check the pods with
kubectl get pod --all-namespaces
When every pod reports the Running state, the cluster is up.
- Set up shell auto-completion
- Linux Bash
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
- Linux Zsh
source <(kubectl completion zsh)
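The two `source` commands above only affect the current session. To make completion persistent, append it to your shell's rc file (a sketch; assumes bash and that kubectl is on the PATH):

```shell
# Persist kubectl bash completion for future login sessions
echo 'source <(kubectl completion bash)' >> ~/.bashrc
# Confirm the line landed in the rc file
grep 'kubectl completion bash' ~/.bashrc
```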
3. Configuring the Dashboard
Here we again pull the dashboard image through a domestic mirror registry and retag it:
docker pull mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1
docker tag docker.io/mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.10.1 k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
The YAML file:
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ------------------- Dashboard Secret ------------------- #
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque
---
# ------------------- Dashboard Service Account ------------------- #
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Role & Role Binding ------------------- #
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system
---
# ------------------- Dashboard Deployment ------------------- #
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.1
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
---
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30005
  selector:
    k8s-app: kubernetes-dashboard
Next we create the admin role binding (and, below, one for the anonymous user) and look up the corresponding token (k8s-dashboard-admin.yaml):
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
$ kubectl create clusterrolebinding test:anonymous --clusterrole=cluster-admin --user=system:anonymous
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-token | awk '{print $1}')
The dashboard is now reachable at https://xxx.xxx.xxx:30005, and you can log in with the token shown above:
4. Configuring Heapster
4.1 heapster.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: heapster
  namespace: kube-system
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: heapster
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: heapster
    spec:
      serviceAccountName: heapster
      containers:
      - name: heapster
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-amd64:v1.5.4
        imagePullPolicy: IfNotPresent
        command:
        - /heapster
        - --source=kubernetes:https://kubernetes.default?kubeletHttps=true&kubeletPort=10250&insecure=true
        - --sink=influxdb:http://monitoring-influxdb.kube-system.svc.cluster.local:8086
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: Heapster
  name: heapster
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 8082
  selector:
    k8s-app: heapster
4.2 heapster-rbac.yaml
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:heapster
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
4.3 influx_db.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-influxdb
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: influxdb
    spec:
      containers:
      - name: influxdb
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.5.2
        volumeMounts:
        - mountPath: /data
          name: influxdb-storage
      volumes:
      - name: influxdb-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    task: monitoring
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-influxdb
  name: monitoring-influxdb
  namespace: kube-system
spec:
  ports:
  - port: 8086
    targetPort: 8086
  selector:
    k8s-app: influxdb
4.4 Fixing the permissions
After applying the three files above, the permissions have to be adjusted, otherwise Heapster gets 403 errors (heapster_modify.yaml):
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: heapster
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: heapster
  namespace: kube-system
Then run kubectl create -f heapster_modify.yaml.
5. Appendix
5.1 Changing the NodePort range
To widen the allowed port range to, for example, 1-65535, add the following flag to the apiserver startup command:
--service-node-port-range=1-65535
The configuration file is located at /etc/kubernetes/manifests/kube-apiserver.yaml.
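For example, the relevant portion of kube-apiserver.yaml would look like this after the edit (an abbreviated sketch, not the full manifest; surrounding flags are unchanged):

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (excerpt)
spec:
  containers:
  - command:
    - kube-apiserver
    - --service-node-port-range=1-65535   # added line
    # ...existing flags unchanged...
```

The kubelet watches this static-pod manifest, so saving the file restarts the apiserver automatically.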
5.2 Fixing a crash-looping DNS service
If the Kubernetes DNS service keeps restarting with an error such as:
k8s.io/dns/pkg/dns/dns.go:150: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?resourceVersion=0: dial tcp 10.96.0.1:443: getsockopt: no route to host
(observed here while deploying with Minikube), the iptables rules are most likely corrupted. Flushing them resolved it for me:
- # systemctl stop kubelet
- # systemctl stop docker
- # iptables --flush
- # iptables -t nat --flush
- # systemctl start kubelet
- # systemctl start docker
5.3 kube-flannel.yaml
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
    - configMap
    - secret
    - emptyDir
    - hostPath
  allowedHostPaths:
    - pathPrefix: "/etc/cni/net.d"
    - pathPrefix: "/etc/kube-flannel"
    - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
rules:
  - apiGroups: ['extensions']
    resources: ['podsecuritypolicies']
    verbs: ['use']
    resourceNames: ['psp.flannel.unprivileged']
  - apiGroups:
      - ""
    resources:
      - pods
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes/status
    verbs:
      - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-amd64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - amd64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm64
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm64
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-arm64
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-arm64
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-arm
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - arm
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-arm
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-arm
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-ppc64le
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - ppc64le
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-ppc64le
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds-s390x
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/os
                    operator: In
                    values:
                      - linux
                  - key: kubernetes.io/arch
                    operator: In
                    values:
                      - s390x
      hostNetwork: true
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni
        image: quay.io/coreos/flannel:v0.12.0-s390x
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.12.0-s390x
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg