Kafka 2.8 Cluster Setup and Monitoring Configuration


Background

The business needs a Kafka cluster with monitoring. Kafka depends on a ZooKeeper cluster, so the ZooKeeper setup is recorded here as well.

| IP | OS | Hostname | Role |
| --- | --- | --- | --- |
| 192.168.0.19 | CentOS 7 | zk1 | ZooKeeper cluster |
| 192.168.0.36 | CentOS 7 | zk2 | ZooKeeper cluster |
| 192.168.0.18 | CentOS 7 | zk3 | ZooKeeper cluster |
| 192.168.0.137 | CentOS 7 | kafka01 | Kafka cluster |
| 192.168.0.210 | CentOS 7 | kafka02 | Kafka cluster |
| 192.168.0.132 | CentOS 7 | kafka03 | Kafka cluster |

ZooKeeper cluster setup

Setup

Run the following on all 3 ZooKeeper nodes.

Download and extract the package:

cd /data
wget https://mirrors.cloud.tencent.com/apache/zookeeper/zookeeper-3.7.0/apache-zookeeper-3.7.0-bin.tar.gz
tar zxvf apache-zookeeper-3.7.0-bin.tar.gz
mv apache-zookeeper-3.7.0-bin zookeeper
rm -f apache-zookeeper-3.7.0-bin.tar.gz

Create the cluster configuration file:

cat <<EOF> /data/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data/
clientPort=2181
server.0=192.168.0.19:2888:3888
server.1=192.168.0.36:2888:3888
server.2=192.168.0.18:2888:3888
EOF
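The initLimit and syncLimit values in zoo.cfg are counted in ticks of tickTime milliseconds. A quick sanity check of the effective timeouts produced by the values above:

```shell
# The zoo.cfg limits are expressed in ticks; compute the effective timeouts.
tickTime=2000   # milliseconds per tick
initLimit=10    # ticks a follower may take to initially connect and sync with the leader
syncLimit=5     # ticks a follower may lag behind the leader before being dropped
echo "initial sync timeout: $((tickTime * initLimit)) ms"    # 20000 ms
echo "sync/heartbeat timeout: $((tickTime * syncLimit)) ms"  # 10000 ms
```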

Create the data directory:

mkdir -p /data/zookeeper/data/

Run on each node individually (the value written to myid must match the N of that node's server.N line in zoo.cfg):

zk1

echo 0 > /data/zookeeper/data/myid

zk2

echo 1 > /data/zookeeper/data/myid

zk3

echo 2 > /data/zookeeper/data/myid
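The id-to-host pairing above can be generated mechanically. A sketch that prints which command belongs on which node (IPs from the table above):

```shell
# myid must equal the N in the matching server.N entry of zoo.cfg,
# so the IP order here mirrors the server.0/1/2 lines.
i=0
for ip in 192.168.0.19 192.168.0.36 192.168.0.18; do
  echo "on $ip: echo $i > /data/zookeeper/data/myid"
  i=$((i + 1))
done
```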

Start the service and check its status on all 3 nodes:

cd /data/zookeeper/bin/ && ./zkServer.sh start
cd /data/zookeeper/bin/ && ./zkServer.sh status

The output will look like the following (one leader, two followers):

# ./zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
# ./zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
# ./zkServer.sh status
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /data/zookeeper/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower

ZooKeeper setup is complete.

Kafka cluster setup

Setup

Run the following on all 3 Kafka nodes.

Download and extract the package:

cd /data
wget https://mirrors.cloud.tencent.com/apache/kafka/2.8.0/kafka_2.13-2.8.0.tgz
tar zxvf kafka_2.13-2.8.0.tgz
mv kafka_2.13-2.8.0 kafka
rm -f kafka_2.13-2.8.0.tgz

Configure environment variables (the heredoc delimiter is quoted so that $PATH and $KAFKA_HOME are written literally rather than expanded at creation time):

cat <<'EOF'> /etc/profile.d/kafka.sh
export KAFKA_HOME=/data/kafka
export PATH=$PATH:$KAFKA_HOME/bin
EOF

source /etc/profile.d/kafka.sh

Create a restart script:

cat <<'EOF'> /data/kafka/restart.sh
#!/bin/bash

kafka-server-stop.sh
nohup kafka-server-start.sh /data/kafka/config/server.properties >> /data/kafka/nohup.out 2>&1 &
EOF

chmod +x /data/kafka/restart.sh
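As an alternative to the nohup-based script, the broker can be supervised by systemd so it restarts on failure. A minimal unit sketch (paths as above; the unit name and settings are illustrative, not part of this deployment), saved as e.g. /etc/systemd/system/kafka.service:

```ini
[Unit]
Description=Apache Kafka broker
After=network.target

[Service]
Type=simple
Environment=JMX_PORT=9099
ExecStart=/data/kafka/bin/kafka-server-start.sh /data/kafka/config/server.properties
ExecStop=/data/kafka/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `systemctl daemon-reload && systemctl enable --now kafka`.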

For monitoring, edit bin/kafka-server-start.sh and add JMX_PORT inside the heap-options block, which exposes more metrics over JMX:

if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
    export JMX_PORT="9099"   # added: expose JMX metrics for monitoring
fi

The key settings are:

| Key | Meaning | Example |
| --- | --- | --- |
| broker.id | Each Kafka node is one broker; the id must be unique within the cluster | broker.id=0 |
| listeners | listeners alone is enough when Kafka serves only the internal network; advertised.listeners is needed only when internal and external addresses must differ | listeners=PLAINTEXT://192.168.0.137:9092 |
| zookeeper.connect | The ZooKeeper ensemble connection string | zookeeper.connect=192.168.0.19:2181 |
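When clients must reach the brokers from outside the internal network, the listeners/advertised.listeners split looks like this sketch (the external address is a documentation example, not part of this deployment):

```properties
# Bind every local interface, but advertise the externally reachable address to clients
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://203.0.113.10:9092
```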

kafka01 configuration file /data/kafka/config/server.properties:

broker.id=0
listeners=PLAINTEXT://192.168.0.137:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.19:2181,192.168.0.36:2181,192.168.0.18:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0
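The retention-related values in the file above, converted to human units as a quick check:

```shell
# log.retention.hours and log.segment.bytes from server.properties, in human units
echo "retention: $((168 / 24)) days"                            # 7 days
echo "segment size: $((1073741824 / 1024 / 1024 / 1024)) GiB"   # 1 GiB
```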

kafka02 configuration file /data/kafka/config/server.properties:

broker.id=1
listeners=PLAINTEXT://192.168.0.210:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.19:2181,192.168.0.36:2181,192.168.0.18:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0

kafka03 configuration file /data/kafka/config/server.properties:

broker.id=2
listeners=PLAINTEXT://192.168.0.132:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.19:2181,192.168.0.36:2181,192.168.0.18:2181
zookeeper.connection.timeout.ms=18000
group.initial.rebalance.delay.ms=0

Start Kafka on all 3 nodes:

/data/kafka/restart.sh

Monitoring

Project homepage: https://github.com/smartloli/kafka-eagle

Official deployment docs: https://www.kafka-eagle.org/articles/docs/installation/linux-macos.html

Package download:

https://github.com/smartloli/kafka-eagle/archive/refs/tags/v2.0.6.tar.gz

Install and configure

mkdir -p /opt/kafka-eagle
tar zxvf kafka-eagle-bin-2.0.6.tar.gz
cd kafka-eagle-bin-2.0.6/
tar zxvf kafka-eagle-web-2.0.6-bin.tar.gz
mv kafka-eagle-web-2.0.6/* /opt/kafka-eagle/

Configuration file conf/system-config.properties:

# Fill in the ZooKeeper addresses; the Kafka brokers are discovered from them automatically
kafka.eagle.zk.cluster.alias=cluster1
cluster1.zk.list=192.168.0.19:2181,192.168.0.36:2181,192.168.0.18:2181

# The default SQLite store deadlocks easily; switch to MySQL
# Default use sqlite to store data
#kafka.eagle.driver=org.sqlite.JDBC
# It is important to note that the '/hadoop/kafka-eagle/db' path must be exist.
#kafka.eagle.url=jdbc:sqlite:/hadoop/kafka-eagle/db/ke.db
#kafka.eagle.username=root
#kafka.eagle.password=smartloli

# With MySQL, just create an empty 'ke' database; no SQL schema import is needed
kafka.eagle.driver=com.mysql.jdbc.Driver
kafka.eagle.url=jdbc:mysql://127.0.0.1:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
kafka.eagle.username=root
kafka.eagle.password=smartloli

Run

cd bin
chmod +x ke.sh
./ke.sh start

Open http://localhost:8048 in a browser. The default login is admin/123456.
