I. Deploying Ingress (the Ingress-Nginx Controller)
1. From the ingress-nginx repository, download the deploy.yaml that matches your Kubernetes version: https://github.com/kubernetes/ingress-nginx/blob/main/deploy/static/provider/baremetal/1.22/deploy.yaml
apiVersion: v1
kind: Namespace
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx
---
apiVersion: v1
automountServiceAccountToken: true
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: ingress-nginx
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - namespaces
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - configmaps
  - pods
  - secrets
  - endpoints
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resourceNames:
  - ingress-controller-leader
  resources:
  - configmaps
  verbs:
  - get
  - update
- apiGroups:
  - ""
  resources:
  - configmaps
  verbs:
  - create
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: ingress-nginx-admission
  namespace: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get
  - create
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: ingress-nginx
rules:
- apiGroups:
  - ""
  resources:
  - configmaps
  - endpoints
  - nodes
  - pods
  - secrets
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - services
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - events
  verbs:
  - create
  - patch
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses/status
  verbs:
  - update
- apiGroups:
  - networking.k8s.io
  resources:
  - ingressclasses
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: ingress-nginx-admission
rules:
- apiGroups:
  - admissionregistration.k8s.io
  resources:
  - validatingwebhookconfigurations
  verbs:
  - get
  - update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: ingress-nginx
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: ingress-nginx-admission
  namespace: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx
subjects:
- kind: ServiceAccount
  name: ingress-nginx
  namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: ingress-nginx-admission
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: ingress-nginx-admission
subjects:
- kind: ServiceAccount
  name: ingress-nginx-admission
  namespace: ingress-nginx
---
apiVersion: v1
data:
  allow-snippet-annotations: "true"
kind: ConfigMap
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - appProtocol: http
    name: http
    port: 80
    protocol: TCP
    targetPort: http
  - appProtocol: https
    name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: ingress-nginx-controller-admission
  namespace: ingress-nginx
spec:
  ports:
  - appProtocol: https
    name: https-webhook
    port: 443
    targetPort: webhook
  selector:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  type: ClusterIP
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: ingress-nginx-controller
  namespace: ingress-nginx
spec:
  minReadySeconds: 0
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: controller
      app.kubernetes.io/instance: ingress-nginx
      app.kubernetes.io/name: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/component: controller
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --election-id=ingress-controller-leader
        - --controller-class=k8s.io/ingress-nginx
        - --ingress-class=nginx
        - --configmap=$(POD_NAMESPACE)/ingress-nginx-controller
        - --validating-webhook=:8443
        - --validating-webhook-certificate=/usr/local/certificates/cert
        - --validating-webhook-key=/usr/local/certificates/key
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: LD_PRELOAD
          value: /usr/local/lib/libmimalloc.so
        image: registry.k8s.io/ingress-nginx/controller:v1.2.1@sha256:5516d103a9c2ecc4f026efbd4b40662ce22dc1f824fb129ed121460aaa5c47f8
        imagePullPolicy: IfNotPresent
        lifecycle:
          preStop:
            exec:
              command:
              - /wait-shutdown
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        name: controller
        ports:
        - containerPort: 80
          name: http
          protocol: TCP
        - containerPort: 443
          name: https
          protocol: TCP
        - containerPort: 8443
          name: webhook
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          requests:
            cpu: 100m
            memory: 90Mi
        securityContext:
          allowPrivilegeEscalation: true
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - ALL
          runAsUser: 101
        volumeMounts:
        - mountPath: /usr/local/certificates/
          name: webhook-cert
          readOnly: true
      dnsPolicy: ClusterFirst
      nodeSelector:
        kubernetes.io/os: linux
      serviceAccountName: ingress-nginx
      terminationGracePeriodSeconds: 300
      volumes:
      - name: webhook-cert
        secret:
          secretName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: ingress-nginx-admission-create
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.2.1
      name: ingress-nginx-admission-create
    spec:
      containers:
      - args:
        - create
        - --host=ingress-nginx-controller-admission,ingress-nginx-controller-admission.$(POD_NAMESPACE).svc
        - --namespace=$(POD_NAMESPACE)
        - --secret-name=ingress-nginx-admission
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
        imagePullPolicy: IfNotPresent
        name: create
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: batch/v1
kind: Job
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: ingress-nginx-admission-patch
  namespace: ingress-nginx
spec:
  template:
    metadata:
      labels:
        app.kubernetes.io/component: admission-webhook
        app.kubernetes.io/instance: ingress-nginx
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
        app.kubernetes.io/version: 1.2.1
      name: ingress-nginx-admission-patch
    spec:
      containers:
      - args:
        - patch
        - --webhook-name=ingress-nginx-admission
        - --namespace=$(POD_NAMESPACE)
        - --patch-mutating=false
        - --secret-name=ingress-nginx-admission
        - --patch-failure-policy=Fail
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        image: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:64d8c73dca984af206adf9d6d7e46aa550362b1d7a01f3a0a91b20cc67868660
        imagePullPolicy: IfNotPresent
        name: patch
        securityContext:
          allowPrivilegeEscalation: false
      nodeSelector:
        kubernetes.io/os: linux
      restartPolicy: OnFailure
      securityContext:
        fsGroup: 2000
        runAsNonRoot: true
        runAsUser: 2000
      serviceAccountName: ingress-nginx-admission
---
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: nginx
spec:
  controller: k8s.io/ingress-nginx
---
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  labels:
    app.kubernetes.io/component: admission-webhook
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
    app.kubernetes.io/version: 1.2.1
  name: ingress-nginx-admission
webhooks:
- admissionReviewVersions:
  - v1
  clientConfig:
    service:
      name: ingress-nginx-controller-admission
      namespace: ingress-nginx
      path: /networking/v1/ingresses
  failurePolicy: Fail
  matchPolicy: Equivalent
  name: validate.nginx.ingress.kubernetes.io
  rules:
  - apiGroups:
    - networking.k8s.io
    apiVersions:
    - v1
    operations:
    - CREATE
    - UPDATE
    resources:
    - ingresses
  sideEffects: None
Several images referenced in deploy.yaml (lines 414, 511, and 560 of the file) usually cannot be pulled directly with docker pull. You can point them at a mirror registry instead:
Line 414: registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.2.1
Line 511: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
Line 560: registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1
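Editing the three image lines by hand works, but a small sed helper can do the same swap (the mirror paths and tags below are the ones listed above; the function name is mine and the sketch assumes the v1.2.1/v1.1.1 images with their pinned digests):

```shell
# rewrite_images FILE — swap the registry.k8s.io images in a deploy.yaml
# for their Aliyun mirror equivalents. The @sha256 digests are dropped
# because the mirror copies are not published under the same digests.
rewrite_images() {
  sed -i.bak \
    -e 's|registry.k8s.io/ingress-nginx/controller:v1.2.1@sha256:[a-f0-9]*|registry.cn-hangzhou.aliyuncs.com/google_containers/nginx-ingress-controller:v1.2.1|' \
    -e 's|registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.1.1@sha256:[a-f0-9]*|registry.cn-hangzhou.aliyuncs.com/google_containers/kube-webhook-certgen:v1.1.1|' \
    "$1"
}
```

Run it as `rewrite_images deploy.yaml`; the original file is kept as deploy.yaml.bak.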
2. Create the ingress-nginx resources
# Apply the manifest; this creates a new namespace, "ingress-nginx", and places the related resources in it
kubectl create -f deploy.yaml
# List the resources in that namespace
kubectl get all -n ingress-nginx
II. Deploying the application services
My architecture puts a shell service in front of the microservices: the frontend calls the shell service, which calls the service gateway over the internal network. Nacos, deployed separately, provides service discovery and the configuration center; that setup is out of scope here.
Overall architecture diagram:
1. Deploying the backend shell service
1.1 Build an image from the jar
Dockerfile:
FROM adoptopenjdk:8-jdk-openj9
MAINTAINER lxw
ENV JAVA_OPTS="-Xms512m -Xmx1024m -Xmn1024m -Xss512k -XX:+UseG1GC -XX:ParallelGCThreads=8 -XX:MaxDirectMemorySize=1024m -XX:NativeMemoryTracking=detail"
ENV NACOS_ADDR="192.168.0.103:8848"
COPY shell-0.0.1-SNAPSHOT.jar shell-0.0.1-SNAPSHOT.jar
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' > /etc/timezone
EXPOSE 8090
# Adjust the jar startup command to your own project
ENTRYPOINT java -jar shell-0.0.1-SNAPSHOT.jar --spring.cloud.nacos.discovery.server-addr=${NACOS_ADDR} --spring.cloud.nacos.config.server-addr=${NACOS_ADDR} --spring.profiles.active=dev
build-docker.sh, the image build script:
IMAGE_NAME=application-shell
docker build -t ${IMAGE_NAME}:latest .
# Note: without an image registry, the image must be built locally on every K8s worker node
# I build the image locally here and have not set up a Harbor registry yet; once one exists, add a docker push step
# docker push ********
Put Dockerfile, build-docker.sh, and shell-0.0.1-SNAPSHOT.jar in the same directory and run "sh build-docker.sh" to build the image.
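The build script above hard-codes the :latest tag. A slightly more parameterized sketch (the version argument, REGISTRY prefix, and the `image_ref` helper are my assumptions, not part of the original script) separates composing the image reference from building it:

```shell
# image_ref NAME [VERSION] [REGISTRY] — compose a full image reference,
# with an optional registry prefix for a future Harbor/private registry.
image_ref() {
  name=$1; version=${2:-latest}; registry=${3:-}
  printf '%s%s:%s\n' "${registry:+$registry/}" "$name" "$version"
}
```

Usage in build-docker.sh would then be along the lines of `FULL_IMAGE=$(image_ref application-shell "$1")`, followed by `docker build -t "$FULL_IMAGE" .` and, once a registry exists, `docker push "$FULL_IMAGE"`.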
1.2 Write application-shell.yaml to create the Service and Pods.
---
apiVersion: v1
kind: Service
metadata:
  name: shell
  namespace: china
spec:
  ports:
  - port: 8090
    name: shell
  selector:
    project: china
    app: shell
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shell
  namespace: china
spec:
  replicas: 2
  selector:
    matchLabels:
      project: china
      app: shell
  template:
    metadata:
      labels:
        project: china
        app: shell
    spec:
      containers:
      - name: shell
        image: application-shell:latest # must match the image name built in step 1.1
        imagePullPolicy: IfNotPresent # do not pull when the image exists locally (best removed in production)
        ports:
        - protocol: TCP
          containerPort: 8090
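The shell container above has no health checks, so Kubernetes will route traffic to it as soon as the process starts. A sketch of probes that could be added to the container spec, assuming the service exposes Spring Boot Actuator's health endpoint on port 8090 (an assumption; it is not shown in the original):

```yaml
        # Hypothetical: requires spring-boot-starter-actuator on the classpath
        readinessProbe:
          httpGet:
            path: /actuator/health
            port: 8090
          initialDelaySeconds: 30
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /actuator/health
            port: 8090
          initialDelaySeconds: 60
          periodSeconds: 15
```

The readiness probe keeps the Pod out of the Service until the app is up; the liveness probe restarts a hung JVM.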
Note the imagePullPolicy field, which controls the image pull strategy:
- Always: always pull the image
- IfNotPresent: use the local image if present; pull only when it is missing
- Never: only ever use the local image, even if it is missing
- If imagePullPolicy is omitted, the default is IfNotPresent, except that it defaults to Always when the image tag is :latest (or no tag is given)
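Since every image in this walkthrough is tagged :latest, omitting imagePullPolicy would mean Always, which fails on nodes that only have the locally built image. For production, a pinned version tag with an explicit policy is safer; a sketch (the registry path and tag below are hypothetical):

```yaml
        image: harbor.example.com/library/application-shell:1.0.0  # hypothetical registry/tag
        imagePullPolicy: Always  # pull the pinned tag from the registry on every node
```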
Create the resources:
kubectl create -f application-shell.yaml -n china
2. Deploying the Gateway
Build the image and create the K8s resources as in step 1.
Dockerfile:
FROM adoptopenjdk:8-jdk-openj9
MAINTAINER lxw
ENV JAVA_OPTS="-Xms512m -Xmx1024m -Xmn1024m -Xss512k -XX:+UseG1GC -XX:ParallelGCThreads=8 -XX:MaxDirectMemorySize=1024m -XX:NativeMemoryTracking=detail"
ENV NACOS_ADDR="192.168.0.103:8848"
COPY gateway-0.0.1-SNAPSHOT.jar gateway-0.0.1-SNAPSHOT.jar
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' > /etc/timezone
EXPOSE 20001
# Adjust the jar startup command to your own project
ENTRYPOINT java -jar gateway-0.0.1-SNAPSHOT.jar --spring.cloud.nacos.discovery.server-addr=${NACOS_ADDR} --spring.cloud.nacos.config.server-addr=${NACOS_ADDR} --spring.profiles.active=dev
build-docker.sh, the image build script:
IMAGE_NAME=cloud-gateway
docker build -t ${IMAGE_NAME}:latest .
# Note: without an image registry, the image must be built locally on every K8s worker node
# The image is built locally only; add a docker push step once a registry exists
# docker push ********
cloud-gateway.yaml, creating the Service and Pods:
---
apiVersion: v1
kind: Service
metadata:
  name: cloud-gateway
  namespace: china
  labels:
    app: cloud-gateway
spec:
  ports:
  - port: 20001
    name: cloud-gateway
  selector:
    app: cloud-gateway
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cloud-gateway
  namespace: china
  labels:
    app: cloud-gateway
spec:
  replicas: 2
  selector:
    matchLabels:
      app: cloud-gateway
  template:
    metadata:
      labels:
        app: cloud-gateway
    spec:
      containers:
      - name: cloud-gateway
        image: cloud-gateway:latest
        imagePullPolicy: IfNotPresent
        ports:
        - protocol: TCP
          containerPort: 20001
3. Deploying Report
Build the image and create the K8s resources as in step 1.
Dockerfile:
FROM adoptopenjdk:8-jdk-openj9
MAINTAINER lxw
ENV JAVA_OPTS="-Xms512m -Xmx1024m -Xmn1024m -Xss512k -XX:+UseG1GC -XX:ParallelGCThreads=8 -XX:MaxDirectMemorySize=1024m -XX:NativeMemoryTracking=detail"
ENV NACOS_ADDR="192.168.0.103:8848"
COPY report-0.0.1-SNAPSHOT.jar report-0.0.1-SNAPSHOT.jar
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' > /etc/timezone
EXPOSE 10577
# Adjust the jar startup command to your own project
ENTRYPOINT java -jar report-0.0.1-SNAPSHOT.jar --spring.cloud.nacos.discovery.server-addr=${NACOS_ADDR} --spring.cloud.nacos.config.server-addr=${NACOS_ADDR} --spring.profiles.active=dev
build-docker.sh, the image build script:
IMAGE_NAME=application-report
docker build -t ${IMAGE_NAME}:latest .
# Note: without an image registry, the image must be built locally on every K8s worker node
# The image is built locally only; add a docker push step once a registry exists
# docker push ********
application-report.yaml, creating the Pods:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-report
  namespace: china
  labels:
    app: application-report
spec:
  replicas: 2
  selector:
    matchLabels:
      app: application-report
  template:
    metadata:
      labels:
        app: application-report
    spec:
      containers:
      - name: application-report
        image: application-report:latest
        imagePullPolicy: IfNotPresent
        ports:
        - protocol: TCP
          containerPort: 10577
4. Deploying report-read
Build the image and create the K8s resources as in step 1.
Dockerfile:
FROM adoptopenjdk:8-jdk-openj9
MAINTAINER lxw
ENV JAVA_OPTS="-Xms512m -Xmx1024m -Xmn1024m -Xss512k -XX:+UseG1GC -XX:ParallelGCThreads=8 -XX:MaxDirectMemorySize=1024m -XX:NativeMemoryTracking=detail"
ENV NACOS_ADDR="192.168.0.103:8848"
COPY report-0.0.1-SNAPSHOT.jar report-0.0.1-SNAPSHOT.jar
RUN /bin/cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime && echo 'Asia/Shanghai' > /etc/timezone
EXPOSE 11577
# Adjust the jar startup command to your own project
ENTRYPOINT java -jar report-0.0.1-SNAPSHOT.jar --spring.cloud.nacos.discovery.server-addr=${NACOS_ADDR} --spring.cloud.nacos.config.server-addr=${NACOS_ADDR} --spring.profiles.active=dev
build-docker.sh, the image build script:
IMAGE_NAME=application-report-read
docker build -t ${IMAGE_NAME}:latest .
# Note: without an image registry, the image must be built locally on every K8s worker node
# The image is built locally only; add a docker push step once a registry exists
# docker push ********
application-report-read.yaml, creating the Pods:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application-report-read
  namespace: china
  labels:
    app: application-report-read
spec:
  replicas: 2
  selector:
    matchLabels:
      app: application-report-read
  template:
    metadata:
      labels:
        app: application-report-read
    spec:
      containers:
      - name: application-report-read
        image: application-report-read:latest # the image built by the script above, not cloud-gateway
        imagePullPolicy: IfNotPresent
        ports:
        - protocol: TCP
          containerPort: 11577
5. Deploying the frontend static pages
Prerequisite: the frontend team has already produced the static dist build.
The steps are essentially the same as for the backend services; only the Dockerfile used to build the image differs.
Dockerfile:
FROM nginx:stable-alpine
MAINTAINER lxw
# dist is the folder of static files built by the frontend; its contents are copied into /usr/share/nginx/html/ in the image, nginx's default document root
COPY dist/ /usr/share/nginx/html/
# Overwrite the default config in the image with our own nginx config
# If curious, exec into the running container and inspect these paths
COPY nginx-report.conf /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]
nginx-report.conf:
server {
    listen       80;
    server_name  localhost;
    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
        client_max_body_size 1000m;
        try_files $uri $uri/ /index.html;
    }
    error_page  500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}
build-docker.sh, the image build script:
IMAGE_NAME=web-report
docker build -t ${IMAGE_NAME}:latest .
# Note: without an image registry, the image must be built locally on every K8s worker node
# The image is built locally only; add a docker push step once a registry exists
# docker push ********
web-report.yaml, creating the Service and Pods:
---
apiVersion: v1
kind: Service
metadata:
  name: web-report
  namespace: china
  labels:
    app: web-report
spec:
  ports:
  - port: 80
    name: web-report
  selector:
    app: web-report
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-report
  namespace: china
  labels:
    app: web-report
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web-report
  template:
    metadata:
      labels:
        app: web-report
    spec:
      containers:
      - name: web-report
        image: web-report:latest
        imagePullPolicy: IfNotPresent
        ports:
        - protocol: TCP
          containerPort: 80
6. Deploying the Ingress
web-report-ingress.yaml, creating the Ingress:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-report
  namespace: china
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  defaultBackend:
    service:
      name: shell
      port:
        number: 8090
  rules:
  - host: kreport.chinalxwvtown.com
    http:
      paths:
      - path: /
        pathType: Prefix # see the official docs; valid values are Exact, Prefix, and ImplementationSpecific
        backend:
          service:
            name: web-report
            port:
              number: 80
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: shell
            port:
              number: 8090
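A side note: on Kubernetes 1.22+, the kubernetes.io/ingress.class annotation used above is deprecated. The same Ingress can instead reference the IngressClass named "nginx" that deploy.yaml created, via a spec field:

```yaml
spec:
  ingressClassName: nginx  # replaces the deprecated kubernetes.io/ingress.class annotation
```

Either form works with this controller version, but new manifests should prefer spec.ingressClassName.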
7. Once all the Docker images are built, run "kubectl create -f ***.yaml -n china" for each file to create the corresponding K8s resources. The full family portrait of resources once created:
III. Open the domain in a browser and check that everything works
Accessing the domain, as shown:
Note: since the domain is made up, you need to add an entry for it in your local hosts file. Resolve it to the IP exposed by the ingress (192.168.0.242).
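Editing the hosts file can be made idempotent with a tiny helper (the function name is mine; HOSTS_FILE is parameterized so you can target /etc/hosts with sudo, or the Windows hosts file):

```shell
# add_host_entry IP HOST [FILE] — append "IP HOST" to a hosts file
# unless an entry for HOST already exists.
add_host_entry() {
  ip=$1; host=$2; file=${3:-/etc/hosts}
  grep -q " $host\$" "$file" || printf '%s %s\n' "$ip" "$host" >> "$file"
}
```

For this walkthrough: `add_host_entry 192.168.0.242 kreport.chinalxwvtown.com` (run with sudo when writing to /etc/hosts).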
Note how the Ingress configuration routes request paths under the domain:
paths matching "/api" are forwarded to the backend service "shell", which serves the API;
all other paths are forwarded to the frontend service "web-report", which, as its image build showed, is nginx serving the static files.
Coming later: using a real external domain, configuring HTTPS, and so on....
~~All done 🎉~~