System environment

apt-get update
apt-get install -y default-jdk
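
Optionally, verify the JDK is in place before continuing (default-jdk pulls in the distribution's default OpenJDK):

java -version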

Download the software

take /work/soft
wget http://xxxxx/kafka_2.12-2.2.0.tgz && tar -xvf kafka_2.12-2.2.0.tgz && ln -s kafka_2.12-2.2.0 kafka

(take is a zsh helper that creates the directory and changes into it; on plain bash use mkdir -p /work/soft && cd /work/soft instead.)
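
A quick check that the download, extraction and symlink all worked (paths as created above):

ls -l /work/soft/kafka/bin/kafka-server-start.sh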

Zookeeper

ZooKeeper does not need to be installed separately; use the ZooKeeper bundled with Kafka.

Initialization

Run the following three commands on the three servers respectively (one command per server):

mkdir -p /work/soft/zookeeper/version-2 && echo 1 > /work/soft/zookeeper/myid
mkdir -p /work/soft/zookeeper/version-2 && echo 2 > /work/soft/zookeeper/myid
mkdir -p /work/soft/zookeeper/version-2 && echo 3 > /work/soft/zookeeper/myid
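
The value written to myid must match the server.N index the machine gets in zookeeper.properties below; a quick way to confirm on each host:

cat /work/soft/zookeeper/myid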

Configuration file

vi /work/soft/kafka/config/zookeeper.properties

Overwrite the file with the following configuration; only the IP addresses need to be adjusted.

# The number of milliseconds of each tick
tickTime=2000

# the port at which the clients will connect
clientPort=2181
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
dataDir=/work/soft/zookeeper

server.1=10.1.215.56:2888:3888
server.2=10.1.215.58:2888:3888
server.3=10.1.215.57:2888:3888

supervisor configuration

vim /etc/supervisor/conf.d/zookeeper.conf

[program:zookeeper]
directory=/work/soft/kafka
command=/work/soft/kafka/bin/zookeeper-server-start.sh /work/soft/kafka/config/zookeeper.properties
autostart=true
autorestart=true
startsecs=3
startretries=20
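
After saving the file, tell supervisor to load and start the new program. A minimal sketch, assuming supervisord is already running and netcat is installed; the ZooKeeper 3.4.x bundled with Kafka 2.2.0 answers the ruok four-letter command with "imok" when it is serving:

supervisorctl reread
supervisorctl update
supervisorctl status zookeeper
echo ruok | nc 127.0.0.1 2181     # optional liveness probe, expect "imok"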

Kafka

Configuration file

vim /work/soft/kafka/config/server.properties

Lines to modify:

  • broker.id=0; the id must be unique on each machine: 0, 1, 2

  • listeners=PLAINTEXT://192.168.0.90:9092; replace the IP with the machine's own internal IP

  • Message retention time; retaining messages for too long can fill up the disk

    • normal Kafka cluster: log.retention.hours=6
    • shared Kafka cluster: log.retention.hours=60 (48 + 12 = 60 hours)
  • log.dirs=/work/soft/kafka/kafka-logs

  • ZooKeeper connection info: zookeeper.connect=10.30.0.237:2181,10.30.0.235:2181,10.30.0.236:2181; replace with the internal IP addresses of the cluster machines

Lines to add:

# increase message size limits
message.max.bytes=20000000
replica.fetch.max.bytes=30000000
# replication factor
default.replication.factor=3
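
Putting the modified and added lines together, the effective changes in server.properties for the first broker would look roughly like this (values taken from the examples above; adjust broker.id, listeners and log.retention.hours per machine and per cluster type):

broker.id=0
listeners=PLAINTEXT://192.168.0.90:9092
log.retention.hours=6
log.dirs=/work/soft/kafka/kafka-logs
zookeeper.connect=10.30.0.237:2181,10.30.0.235:2181,10.30.0.236:2181
message.max.bytes=20000000
replica.fetch.max.bytes=30000000
default.replication.factor=3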

supervisor configuration

vim /etc/supervisor/conf.d/kafka.conf

[program:kafka]
directory=/work/soft/kafka
command=/work/soft/kafka/bin/kafka-server-start.sh /work/soft/kafka/config/server.properties
autostart=true
autorestart=true
startsecs=3
startretries=20
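
As with ZooKeeper, reload supervisor and confirm the broker starts and listens on its PLAINTEXT port (a sketch; replace ss with netstat if ss is not available):

supervisorctl reread
supervisorctl update
supervisorctl status kafka
ss -ltn | grep 9092     # optional: confirm the broker is listening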

Verify the deployment

Create a topic

cd /work/soft/kafka/bin

./kafka-topics.sh --create --zookeeper ip1:2181,ip2:2181,ip3:2181 --replication-factor 1 --partitions 1 --topic test       # create a topic named "test"

./kafka-topics.sh --list --zookeeper ip1:2181,ip2:2181,ip3:2181             # list all existing topics
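
An additional optional check: --describe prints the partition count, leader, replicas and ISR for a topic, which should reference the broker ids configured above.

./kafka-topics.sh --describe --zookeeper ip1:2181,ip2:2181,ip3:2181 --topic test       # show details of the "test" topic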

Producer: send messages to Kafka

./kafka-console-producer.sh --broker-list ip1:9092,ip2:9092,ip3:9092 --topic test

Consumer: consume messages

./kafka-console-consumer.sh --bootstrap-server ip1:9092,ip2:9092,ip3:9092 --topic test --from-beginning

Note: regarding the --broker-list and --bootstrap-server parameters: older Kafka versions use --broker-list, while newer versions use --bootstrap-server.

Run ./kafka-console-consumer.sh --help or ./kafka-console-producer.sh --help to see the available parameters.

The same test can also be done with kafkacat from a Docker container.

Producer:

docker run --interactive --rm \
    confluentinc/cp-kafkacat \
    kafkacat -b kafka_IP:9092 \
        -t test \
        -K: \
        -P

Consumer:

docker run --tty --interactive --rm \
    confluentinc/cp-kafkacat \
    kafkacat -b kafka_IP:9092 -C -t test -T
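
A usage sketch with hypothetical payloads: with -K:, every line typed into the producer is split at the first colon into key and value, so the input below publishes two keyed messages to the test topic, which the consumer then prints:

1:{"event":"hello"}
2:{"event":"world"}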