1. Configure environment variables

[]# vim /etc/profile

export JAVA_HOME=/usr/java/jdk1.8.0_212

export JRE_HOME=/usr/java/jdk1.8.0_212/jre

export PATH=$JAVA_HOME/bin:$PATH:/usr/local/mysql/bin

export CLASSPATH=.:$JAVA_HOME/lib/tools.jar

export KAFKA_HOME=/opt/kafka

export PATH=$PATH:$KAFKA_HOME/bin

[]# source /etc/profile

2. Download and extract Kafka

[]# tar xf kafka_2.13-3.4.1.tgz -C /opt/

[]# mv /opt/kafka_2.13-3.4.1 /opt/kafka

3. Edit the configuration file

[]# cp /opt/kafka/config/kraft/server.properties /opt/kafka/config/kraft/server.properties-bak

[]# vim /opt/kafka/config/kraft/server.properties

process.roles=broker,controller

node.id=1

controller.quorum.voters=1@192.168.1.53:9093,2@192.168.1.138:9093,3@192.168.1.142:9093

listeners=PLAINTEXT://:9092,CONTROLLER://:9093

inter.broker.listener.name=PLAINTEXT

advertised.listeners=PLAINTEXT://192.168.1.138:9092

log.dirs=/opt/kafka/data/kraft-combined-logs


The configuration file must be edited on all three servers; the values that differ per node are:

# Node ID: set a different value on each node

node.id=1

# Advertised IP and port: each node fills in its own IP here

advertised.listeners=PLAINTEXT://192.168.1.138:9092
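The two per-node values can also be set with `sed` instead of editing by hand. A sketch: `NODE_ID` and `NODE_IP` are example values for one node, and the demo file below stands in for /opt/kafka/config/kraft/server.properties.

```shell
# Example values for one node; change these per host.
NODE_ID=2
NODE_IP=192.168.1.138

# Demo copy of the two per-node lines; on a real node, point CONF at
# /opt/kafka/config/kraft/server.properties instead.
CONF=./server.properties.demo
cat > "$CONF" <<'EOF'
node.id=1
advertised.listeners=PLAINTEXT://192.168.1.53:9092
EOF

# Rewrite both per-node settings in place.
sed -i "s/^node\.id=.*/node.id=${NODE_ID}/" "$CONF"
sed -i "s|^advertised\.listeners=.*|advertised.listeners=PLAINTEXT://${NODE_IP}:9092|" "$CONF"

# Show the result.
grep -E '^(node\.id|advertised\.listeners)=' "$CONF"
```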

4. Generate a UUID on any one node

[]# cd /opt/kafka/bin/

[]# sh kafka-storage.sh random-uuid

eN-_WDJoTKqdO3CvmxPPsQ                # Note: the generated cluster ID


Format the Kafka storage directory with this UUID; run the following command on all three servers:

[]# sh /opt/kafka/bin/kafka-storage.sh format -t eN-_WDJoTKqdO3CvmxPPsQ -c /opt/kafka/config/kraft/server.properties
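If formatting succeeded, each data directory contains a meta.properties recording the cluster ID and node ID. A quick check (the log.dirs path is the one configured above):

```shell
# After formatting, meta.properties records cluster.id and node.id.
DIR=/opt/kafka/data/kraft-combined-logs
if [ -f "$DIR/meta.properties" ]; then
  cat "$DIR/meta.properties"
else
  echo "not formatted yet: $DIR/meta.properties is missing"
fi
```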

5. Start the cluster

[]# systemctl daemon-reload

[]# systemctl start kafka
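The systemctl commands above assume a kafka.service unit has already been created. If it hasn't, a minimal sketch could look like the following, saved as /etc/systemd/system/kafka.service; the ExecStart/ExecStop paths and JAVA_HOME are assumptions based on the paths used earlier, so adjust them to your layout:

```ini
# /etc/systemd/system/kafka.service -- minimal sketch; adjust paths and user as needed
[Unit]
Description=Apache Kafka (KRaft mode)
After=network.target

[Service]
Type=simple
Environment=JAVA_HOME=/usr/java/jdk1.8.0_212
ExecStart=/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/kraft/server.properties
ExecStop=/opt/kafka/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After saving the unit file, `systemctl daemon-reload` picks it up and `systemctl start kafka` launches the broker; `systemctl enable kafka` would additionally start it on boot.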