[Cloud Native in Action] Installing the KubeSphere Platform: Single-Node and Multi-Node Deployment on Linux
🔎 This is [Cloud Native in Action]. Follow me and you won't get lost learning cloud native.
👍 If this post helps you, a free like is great encouragement for the author.
Likes 👍, comments, and favorites ⭐️ are all welcome.
👀 About this column
[Cloud Native in Action] currently focuses on Kubernetes. Let's learn and improve together.
👀 In this post
Mainly covers installing KubeSphere on Kubernetes.
Table of Contents
Deploying KubeSphere on a Single Linux Node
1. Provision a server
2. Install
3. Enable features after installation
Deploying KubeSphere on Multiple Linux Nodes
1. Prepare three servers
2. Create the cluster with KubeKey
Appendix
Deploying KubeSphere on a Single Linux Node
1. Provision a server
4 vCPUs / 8 GB RAM; CentOS 7.9; open ports 30000~32767 in the firewall (see the sketch below); set the hostname:
hostnamectl set-hostname node1
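If firewalld is running on the host (an assumption; on a cloud VM the security group may handle this instead), the NodePort range can be opened roughly like this:
# Minimal sketch, assuming firewalld; otherwise open 30000-32767/tcp in your cloud security group
firewall-cmd --permanent --add-port=30000-32767/tcp
firewall-cmd --reload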
2. Install
1. Prepare KubeKey
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -
chmod +x kk
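To confirm the binary was downloaded correctly, you can print its version (a quick sanity check; the exact output varies by release):
./kk version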
2. Bootstrap the cluster with KubeKey
# The following package may be required first
yum install -y conntrack
./kk create cluster --with-kubernetes v1.20.4 --with-kubesphere v3.1.1
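The installation takes a while. To follow it, you can tail the ks-installer log (the same command used in the multi-node section below) and then confirm the pods came up; a rough sketch:
# Follow the installer log until the final "Welcome to KubeSphere" summary is printed
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
# All pods across namespaces should eventually be Running or Completed
kubectl get pods -A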
3. Enable features after installation
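KubeSphere's pluggable components (DevOps, logging, events, alerting, and so on) are switched on after installation by editing the ks-installer ClusterConfiguration and flipping the relevant enabled fields to true; a minimal sketch:
# Flip enabled: false -> true for the components you want (devops, logging, events, alerting, ...)
kubectl edit clusterconfiguration ks-installer -n kubesphere-system
# Watch ks-installer apply the change
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f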
Deploying KubeSphere on Multiple Linux Nodes
1. Prepare three servers
- 4 vCPUs / 8 GB RAM (master)
- 8 vCPUs / 16 GB RAM × 2 (workers)
- CentOS 7.9
- All nodes reachable from each other on the internal network
- Each machine has its own hostname (see the sketch after this list)
- Ports 30000~32767 open in the firewall
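As one example, the hostnames used by the appendix config-sample.yaml could be set like this (master, node1, and node2 are simply the names this article's example uses):
# Run on the corresponding machine; names must match the hosts section of config-sample.yaml
hostnamectl set-hostname master   # on the 4c8g machine
hostnamectl set-hostname node1    # on the first 8c16g machine
hostnamectl set-hostname node2    # on the second 8c16g machine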
2. Create the cluster with KubeKey
1. Download KubeKey
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -
chmod +x kk
2. Create the cluster configuration file
./kk create config --with-kubernetes v1.20.4 --with-kubesphere v3.1.1
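This writes a config-sample.yaml to the current directory. Before creating the cluster, edit it so that the hosts entries (node names, IPs, SSH user and password or key) and the roleGroups section match your three machines; the appendix below shows a complete example.
# Adjust hosts and roleGroups to your environment (full example in the appendix)
vi config-sample.yaml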
3. Create the cluster
./kk create cluster -f config-sample.yaml
4. Check the progress
kubectl logs -n kubesphere-system $(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}') -f
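When the installer finishes, the web console is served on the NodePort set under console.port in the appendix (30880). A quick way to confirm the console service is up before opening http://<any-node-ip>:30880 in a browser (for KubeSphere 3.x the default account is typically admin / P@88w0rd; change it on first login):
kubectl get svc ks-console -n kubesphere-system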
Appendix
1. Example config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: master, address: 10.140.126.6, internalAddress: 10.140.126.6, user: root, password: Hello777}
  - {name: node1, address: 10.140.122.56, internalAddress: 10.140.122.56, user: root, password: Hello777}
  - {name: node2, address: 10.140.122.39, internalAddress: 10.140.122.39, user: root, password: Hello777}
  roleGroups:
    etcd:
    - master
    master:
    - master
    worker:
    - node1
    - node2
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.20.4
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    redis:
      enabled: false
    redisVolumSize: 2Gi
    openldap:
      enabled: false
    openldapVolumeSize: 2Gi
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true
    port: 30880
  alerting:
    enabled: false
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
  devops:
    enabled: false
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: false
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
        - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []