Introduction
EFAK (Eagle For Apache Kafka, formerly known as Kafka Eagle) is open-source visualization and management software. It lets you query, visualize, alert on, and explore metrics no matter where they are stored. In plain terms, it gives you the tools to turn your Kafka cluster data into beautiful graphs and visualizations.
Official website: https://docs.kafka-eagle.org/
The official site only documents non-Docker deployment, but Docker deployment has become the norm, and it allows Kafka to be monitored without the Kafka cluster exposing any ports externally. The kafka-eagle image is built as follows:
1. Download the desired version of kafka-eagle. This article uses 3.0.1; the corresponding file is kafka-eagle-bin-3.0.1.tar.gz.
2. Write the Dockerfile. The author's Dockerfile is given below; it lives in the same directory as kafka-eagle-bin-3.0.1.tar.gz.
The image to be built must:
1. Run successfully.
2. Accept external environment variables to adjust the configuration file.
First, extract the archive:
tar -zxvf efak-xxx-bin.tar.gz
Create the corresponding files; when done, the directory contains:
- entrypoint.sh: the startup script and container entry point; it receives external environment variables and injects them into the configuration file
- system-config.properties: the configuration template; external environment variables are substituted into it to produce the container's runtime configuration
- dockerfile: the image build file
Build
Writing the Dockerfile
#Standalone deployment
#Base image: the lightweight OpenJDK 8 slim image (8u282),
#which helps keep the final image small.
#The downloaded package is renamed to efak inside the image.
FROM openjdk:8u282-slim
#Environment variables
#The Kafka Eagle install directory is /opt/efak
#The version to install
ENV KE_HOME=/opt/efak
ENV EAGLE_VERSION=3.0.1
#Default values for the configurable parameters
ENV EFAK_CLUSTER_ZK_LIST=zookeeper:2181
ENV EFAK_WEBUI_PORT=8048
#Create the directory inside the container
#Install the missing packages (procps for ke.sh, gettext for envsubst)
RUN mkdir -p /opt/efak &&\
apt-get update && apt-get install -y --no-install-recommends procps gettext &&\
rm -rf /var/lib/apt/lists/*
#Copy the install package into the container's /opt directory
#Copy the parameterized configuration template into /opt/efak
COPY efak-web-${EAGLE_VERSION}-bin.tar.gz /opt/
COPY system-config.properties /opt/efak
#Extract the archive
#Set the time zone
RUN tar zxvf /opt/efak-web-${EAGLE_VERSION}-bin.tar.gz -C /opt/efak --strip-components 1 &&\
rm /opt/efak-web-${EAGLE_VERSION}-bin.tar.gz &&\
ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime &&\
echo 'Asia/Shanghai' >/etc/timezone
#Copy the entrypoint.sh script into Kafka Eagle's bin directory;
#it performs initialization and parameter substitution before starting Kafka Eagle.
COPY entrypoint.sh /opt/efak/bin/entrypoint.sh
#Make the entrypoint.sh and ke.sh scripts executable
RUN chmod +x /opt/efak/bin/entrypoint.sh &&\
chmod +x /opt/efak/bin/ke.sh
#Expose the container ports used by the Kafka Eagle web service
EXPOSE 8048 8080
#Set the working directory
WORKDIR /opt/efak
#On container start, run entrypoint.sh, the container's entry point, which launches the Kafka Eagle service.
ENTRYPOINT [ "bash","/opt/efak/bin/entrypoint.sh" ]
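Optionally, a HEALTHCHECK instruction could be appended to the Dockerfile so Docker (or an orchestrator) can probe the web UI. This is a sketch, not part of the original setup: it assumes the UI listens on 8048 and uses bash's built-in /dev/tcp, since the slim image ships neither curl nor wget.

```dockerfile
#Hypothetical addition: mark the container unhealthy if the UI port stops accepting connections
HEALTHCHECK --interval=30s --timeout=5s --start-period=60s \
  CMD bash -c '</dev/tcp/localhost/8048' || exit 1
```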
Writing the config file system-config.properties
This file is largely copied from the distribution, with variable placeholders added. See the official docs for the parameter details.
######################################
# multi zookeeper & kafka cluster list
# Settings prefixed with 'kafka.eagle.' will be deprecated, use 'efak.' instead
######################################
efak.zk.cluster.alias=cluster1
cluster1.zk.list=${EFAK_CLUSTER_ZK_LIST}
######################################
# zookeeper enable acl
######################################
cluster1.zk.acl.enable=false
cluster1.zk.acl.schema=digest
cluster1.zk.acl.username=test
cluster1.zk.acl.password=test123
######################################
# broker size online list
######################################
cluster1.efak.broker.size=20
######################################
# zk client thread limit
######################################
kafka.zk.limit.size=16
######################################
# EFAK webui port
######################################
efak.webui.port=${EFAK_WEBUI_PORT}
######################################
# EFAK enable distributed
######################################
efak.distributed.enable=false
efak.cluster.mode.status=master
efak.worknode.master.host=localhost
efak.worknode.port=8085
######################################
# kafka jmx acl and ssl authenticate
######################################
cluster1.efak.jmx.acl=false
cluster1.efak.jmx.user=keadmin
cluster1.efak.jmx.password=keadmin123
cluster1.efak.jmx.ssl=false
cluster1.efak.jmx.truststore.location=/data/ssl/certificates/kafka.truststore
cluster1.efak.jmx.truststore.password=ke123456
######################################
# kafka offset storage
######################################
cluster1.efak.offset.storage=kafka
cluster2.efak.offset.storage=zk
######################################
# kafka jmx uri
######################################
cluster1.efak.jmx.uri=service:jmx:rmi:///jndi/rmi://%s/jmxrmi
######################################
# kafka metrics, 15 days by default
######################################
efak.metrics.charts=true
efak.metrics.retain=15
######################################
# kafka sql topic records max
######################################
efak.sql.topic.records.max=5000
efak.sql.topic.preview.records.max=10
######################################
# delete kafka topic token
######################################
efak.topic.token=keadmin
######################################
# kafka sasl authenticate
######################################
cluster1.efak.sasl.enable=false
cluster1.efak.sasl.protocol=SASL_PLAINTEXT
cluster1.efak.sasl.mechanism=SCRAM-SHA-256
cluster1.efak.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="kafka" password="kafka-eagle";
cluster1.efak.sasl.client.id=
cluster1.efak.blacklist.topics=
cluster1.efak.sasl.cgroup.enable=false
cluster1.efak.sasl.cgroup.topics=
cluster2.efak.sasl.enable=false
cluster2.efak.sasl.protocol=SASL_PLAINTEXT
cluster2.efak.sasl.mechanism=PLAIN
cluster2.efak.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="kafka" password="kafka-eagle";
cluster2.efak.sasl.client.id=
cluster2.efak.blacklist.topics=
cluster2.efak.sasl.cgroup.enable=false
cluster2.efak.sasl.cgroup.topics=
######################################
# kafka ssl authenticate
######################################
cluster3.efak.ssl.enable=false
cluster3.efak.ssl.protocol=SSL
cluster3.efak.ssl.truststore.location=
cluster3.efak.ssl.truststore.password=
cluster3.efak.ssl.keystore.location=
cluster3.efak.ssl.keystore.password=
cluster3.efak.ssl.key.password=
cluster3.efak.ssl.endpoint.identification.algorithm=https
cluster3.efak.blacklist.topics=
cluster3.efak.ssl.cgroup.enable=false
cluster3.efak.ssl.cgroup.topics=
######################################
# kafka sqlite jdbc driver address
######################################
#efak.driver=org.sqlite.JDBC
#efak.url=jdbc:sqlite:/hadoop/kafka-eagle/db/ke.db
#efak.username=root
#efak.password=www.kafka-eagle.org
######################################
# kafka mysql jdbc driver address
######################################
efak.driver=com.mysql.cj.jdbc.Driver
efak.url=jdbc:mysql://127.0.0.1:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
efak.username=root
efak.password=123456
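Note that 127.0.0.1 inside the container refers to the container itself, not the Docker host, so the MySQL address above will usually need adjusting. The database settings could be parameterized the same way as the ZooKeeper list; a sketch with hypothetical EFAK_DB_* variable names (matching ENV defaults would also have to be added to the Dockerfile):

```properties
#Hypothetical: inject the MySQL endpoint through environment variables as well
efak.url=jdbc:mysql://${EFAK_DB_HOST}:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
efak.username=${EFAK_DB_USER}
efak.password=${EFAK_DB_PASSWORD}
```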
Writing entrypoint.sh
The container startup script:
#!/bin/bash
#ver1.0
#yangchao
#The envsubst command substitutes environment variables in a file.
#Here it reads the /opt/efak/system-config.properties template, replaces its environment
#variables with their actual values, and writes the result to /opt/efak/conf/system-config.properties
envsubst < "/opt/efak/system-config.properties" > "/opt/efak/conf/system-config.properties"
/opt/efak/bin/ke.sh start
#Finally, tail -f /dev/null keeps the script running so the container stays alive until it is stopped manually or terminated for some other reason; this is a common pattern for keeping Docker containers active.
tail -f /dev/null
Build the image
docker build -t efak:v1 .
Run and test
#options must come before the image; 3dd3e4c2f4ce is the image ID of efak:v1
docker run --name=efak_v1 -d -e EFAK_CLUSTER_ZK_LIST=231312:2181 3dd3e4c2f4ce
docker exec -it efak_v1 bash
Deploying to Kubernetes
Push to the Harbor registry
sudo docker tag efak:v1 harbor.test.0a9c8cbe.nip.io/yunweiops/efak:v1
sudo docker push harbor.test.0a9c8cbe.nip.io/yunweiops/efak:v1
Run in Kubernetes
Set the environment variables at deploy time, expose the port, and point EFAK at Kafka.
Run it and check the logs.
After configuring a Service and exposing it externally, open the EFAK UI.
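For reference before moving to Helm, a minimal raw manifest along those lines might look like the following. This is a sketch, not the exact manifest used here: the object names, ZooKeeper address, and NodePort value are assumptions.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: efak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: efak
  template:
    metadata:
      labels:
        app: efak
    spec:
      containers:
        - name: efak
          image: harbor.test.0a9c8cbe.nip.io/yunweiops/efak:v1
          ports:
            - containerPort: 8048
          env:
            - name: EFAK_CLUSTER_ZK_LIST   # picked up by entrypoint.sh
              value: "zookeeper:2181"
---
apiVersion: v1
kind: Service
metadata:
  name: efak
spec:
  type: NodePort
  selector:
    app: efak
  ports:
    - port: 8048
      targetPort: 8048
      nodePort: 38048
```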
Configuring Helm
Now package the image into a Helm chart and upload the chart to Harbor.
The deployment template
Start from helm create:
helm create efak
#The generated directory structure:
├── Chart.yaml
├── charts
├── templates
│   ├── NOTES.txt
│   ├── _helpers.tpl
│   ├── deployment.yaml
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── service.yaml
│   ├── serviceaccount.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml
This command creates a default Helm chart scaffold; the following unused files can be deleted:
templates/tests/test-connection.yaml
templates/serviceaccount.yaml
templates/ingress.yaml
templates/hpa.yaml
templates/NOTES.txt
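The deletions can be scripted; a convenience sketch, assuming the current directory is the chart root created by helm create:

```shell
# Remove the scaffold files this chart does not use
rm -f templates/tests/test-connection.yaml \
      templates/serviceaccount.yaml \
      templates/ingress.yaml \
      templates/hpa.yaml \
      templates/NOTES.txt
# Drop the now-empty tests directory, if present
rmdir templates/tests 2>/dev/null || true
```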
Then edit the deployment.yaml template:
# templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: efak # name
  labels:
    {{- include "efak.labels" . | nindent 4 }}
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      {{- include "efak.selectorLabels" . | nindent 6 }}
  template:
    metadata:
      {{- with .Values.podAnnotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      labels:
        {{- include "efak.selectorLabels" . | nindent 8 }}
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      containers:
        - name: {{ .Chart.Name }}
          securityContext:
            {{- toYaml .Values.securityContext | nindent 12 }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
            - name: http
              containerPort: 8048 # the container port to expose
              protocol: TCP
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
      {{- with .Values.nodeSelector }}
      nodeSelector:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.affinity }}
      affinity:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      {{- with .Values.tolerations }}
      tolerations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
replicas is now templated as {{ .Values.replicaCount }}, meaning it is rendered from the replicaCount value.
The Service template
To cover multiple scenarios, we let users customize the Service type. With a NodePort (or LoadBalancer) type, the nodePort value can also be configured, but note the condition needed here: even when the type is NodePort, the user may not supply a nodePort, so the template guards it:
{{- if (and (or (eq .Values.service.type "NodePort") (eq .Values.service.type "LoadBalancer")) (not (empty .Values.service.nodePort))) }}
nodePort is rendered only when service.type is NodePort or LoadBalancer and service.nodePort is non-empty.
apiVersion: v1
kind: Service
metadata:
  name: {{ include "efak.fullname" . }}
  labels:
    {{- include "efak.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - protocol: TCP
      targetPort: 8048
      port: {{ .Values.service.port }}
      {{- if (and (or (eq .Values.service.type "NodePort") (eq .Values.service.type "LoadBalancer")) (not (empty .Values.service.nodePort))) }}
      nodePort: {{ .Values.service.nodePort }}
      {{- end }}
  selector:
    {{- include "efak.selectorLabels" . | nindent 4 }}
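With that guard in place, a user who wants a fixed node port can supply one in their values. Note that service.nodePort is not defined in the default values.yaml below; it is an optional override:

```yaml
service:
  type: NodePort
  port: 38048
  nodePort: 38048  # rendered only because type is NodePort and the value is non-empty
```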
The values configuration
# Default values for efak.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 1

image:
  repository: harbor.test.0a9c8cbe.nip.io/yunweiops/efak # the image repository
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  tag: ""

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

podAnnotations: {}

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: NodePort # expose via NodePort
  port: 38048

ingress:
  enabled: false
  className: ""
  annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  hosts:
    - host: chart-example.local
      paths:
        - path: /
          pathType: ImplementationSpecific
  tls: []
  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

autoscaling:
  enabled: false
  minReplicas: 1
  maxReplicas: 100
  targetCPUUtilizationPercentage: 80
  # targetMemoryUtilizationPercentage: 80

nodeSelector: {}

tolerations: []

affinity: {}
The Chart configuration
The generated defaults work as-is; no changes are needed.
apiVersion: v2
name: efak
description: A Helm chart for Kubernetes
# A chart can be either an 'application' or a 'library' chart.
#
# Application charts are a collection of templates that can be packaged into versioned archives
# to be deployed.
#
# Library charts provide useful utilities or functions for the chart developer. They're included as
# a dependency of application charts to inject those utilities and functions into the rendering
# pipeline. Library charts do not define any templates and therefore cannot be deployed.
type: application
# This is the chart version. This version number should be incremented each time you make changes
# to the chart and its templates, including the app version.
# Versions are expected to follow Semantic Versioning (https://semver.org/)
version: 0.1.0
# This is the version number of the application being deployed. This version number should be
# incremented each time you make changes to the application. Versions are not expected to
# follow Semantic Versioning. They should reflect the version the application is using.
# It is recommended to use it with quotes.
appVersion: "1.16.0"
Check and debug
helm template --debug /root/k8s_pkg/helm_pkg/kafk/efak
Confirm that the rendered manifests match what we expect.
Check the chart for syntax errors, then debug it:
#check for syntax errors
helm lint
============
==> Linting /root/k8s_pkg/helm_pkg/kafk/efak
[INFO] Chart.yaml: icon is recommended
1 chart(s) linted, 0 chart(s) failed
#dry-run to confirm everything renders correctly
helm install --set name=efak efak --dry-run --debug /root/k8s_pkg/helm_pkg/kafk/efak/
====
install.go:178: [debug] Original chart version: ""
install.go:195: [debug] CHART PATH: /root/k8s_pkg/helm_pkg/kafk/efak
NAME: efak
LAST DEPLOYED: Mon May 27 15:42:52 2024
NAMESPACE: default
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
name: efak

COMPUTED VALUES:
affinity: {}
autoscaling:
  enabled: false
  maxReplicas: 100
  minReplicas: 1
  targetCPUUtilizationPercentage: 80
fullnameOverride: ""
image:
  pullPolicy: IfNotPresent
  repository: harbor.test.0a9c8cbe.nip.io/yunweiops/efak
  tag: ""
imagePullSecrets: []
ingress:
  annotations: {}
  className: ""
  enabled: false
  hosts:
  - host: chart-example.local
    paths:
    - path: /
      pathType: ImplementationSpecific
  tls: []
name: efak
nameOverride: ""
nodeSelector: {}
podAnnotations: {}
podSecurityContext: {}
replicaCount: 1
resources: {}
securityContext: {}
service:
  port: 38048
  type: NodePort
serviceAccount:
  annotations: {}
  create: false
  name: ""
tolerations: []

HOOKS:
MANIFEST:
---
# Source: efak/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: efak
  labels:
    helm.sh/chart: efak-0.1.0
    app.kubernetes.io/name: efak
    app.kubernetes.io/instance: efak
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: NodePort
  ports:
    - protocol: TCP
      targetPort: 8048
      port: 38048
  selector:
    app.kubernetes.io/name: efak
    app.kubernetes.io/instance: efak
---
# Source: efak/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: efak # name
  labels:
    helm.sh/chart: efak-0.1.0
    app.kubernetes.io/name: efak
    app.kubernetes.io/instance: efak
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: efak
      app.kubernetes.io/instance: efak
  template:
    metadata:
      labels:
        app.kubernetes.io/name: efak
        app.kubernetes.io/instance: efak
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: efak
          securityContext:
            {}
          image: "harbor.test.0a9c8cbe.nip.io/yunweiops/efak:1.16.0"
          imagePullPolicy: IfNotPresent
          ports:
            - name: http
              containerPort: 8048
              protocol: TCP
          resources:
            {}
==============
Install and package
Local install
#install
helm install --set name=efak efak /root/k8s_pkg/helm_pkg/kafk/efak/
====
NAME: efak
LAST DEPLOYED: Mon May 27 15:50:16 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
#check the release status
helm status efak
=====
NAME: efak
LAST DEPLOYED: Mon May 27 15:50:16 2024
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
Package and upload
#package
helm package /root/k8s_pkg/helm_pkg/kafk/efak/
========
Successfully packaged chart and saved it to: /root/k8s_pkg/helm_pkg/kafk/efak/efak-0.1.0.tgz
[root@node1 efak]# ls
charts Chart.yaml efak-0.1.0.tgz templates values.yaml
Upload to Harbor
Problems and takeaways
- The image produced by this Dockerfile is large, 300+ MB; it should be possible to shrink it further.
- EFAK 3.0 deployed in a container responds slowly; if feasible, switch to the older 2.x kafka-eagle.
- The Helm packaging here only uses the basics.
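On the image-size point, one low-effort option would be swapping the JDK base for a JRE-only image. A sketch, with the untested assumption that EFAK runs fine on a plain JRE 8:

```dockerfile
#Hypothetical: a JRE-only base to drop the JDK tooling from the final image
FROM eclipse-temurin:8-jre-jammy
```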