Table of Contents

  • Kafka Overview
  • Message Queues
  • Kafka Use Cases
  • The Two Messaging Models
  • Key Concepts in Kafka
  • Consumer Groups
  • Idempotence
  • Kafka Cluster Setup
  • Kafka cluster deployment
  • Kafka startup script
  • Kafka Command-Line Operations
  • 1. List Kafka topics
  • 2. Create a Kafka topic
  • 3. Delete a Kafka topic
  • 4. Consume messages
  • 5. Describe a Kafka topic
  • 6. Kafka stress test


Kafka Overview

Message Queues

  • A message queue is a component used to store messages.
  • Programs can put messages into the queue and fetch messages from it.
  • In most cases a message queue is not permanent storage; it serves as temporary storage (with a retention period, e.g. messages are kept in the MQ for 10 days).
  • Message queue middleware: components that implement message queues, e.g. Kafka, ActiveMQ, RabbitMQ, RocketMQ, ZeroMQ.

Kafka Use Cases

  • Asynchronous processing
  • Time-consuming operations can be offloaded to other systems: the messages that need processing are stored in the message queue, and other systems consume them from there.
  • Common examples: sending SMS verification codes, sending emails.
  • System decoupling
  • Originally one microservice calls another through an interface (HTTP); this creates tight coupling, and any change to the interface can make the system unavailable.
  • With a message queue the systems are decoupled: the first microservice puts messages into the queue, and the other microservice takes them out and processes them.
  • Traffic peak shaving
  • Because a message queue is low-latency, highly reliable, and high-throughput, it can absorb large bursts of concurrent traffic.
  • Log processing
  • A message queue can be used as temporary storage or as a communication channel.

The Two Messaging Models

  • Producer/consumer model
  • Producers write messages into the MQ.
  • Consumers fetch messages from the MQ.
  • Producers and consumers are decoupled: the producer may be one program and the consumer another.
  • Message queue modes
  • Point-to-point: a message is consumed by exactly one consumer.
  • Publish/subscribe: a message can be consumed by multiple consumers (see the sketch after this list).
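
In Kafka both modes are expressed through consumer groups: consumers that share a group.id split a topic's partitions among themselves (point-to-point behavior), while consumers in different groups each receive every message (publish/subscribe). A minimal sketch with the console consumer, assuming a topic named demo already exists on the cluster built below:

# Two consumers sharing group.id=g1: each message is delivered to only one of them (point-to-point)
bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic demo --consumer-property group.id=g1
bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic demo --consumer-property group.id=g1

# A consumer in a different group (g2) also receives every message (publish/subscribe)
bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092 --topic demo --consumer-property group.id=g2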

Key Concepts in Kafka

  • broker
  • A Kafka server process; producers and consumers both connect to brokers.
  • A cluster consists of multiple brokers, which together provide load balancing and fault tolerance for the Kafka cluster.
  • producer: produces messages
  • consumer: consumes messages
  • topic: a Kafka cluster can contain multiple topics, and a topic can contain multiple partitions.
  • A topic is a logical structure; both producing and consuming messages require specifying a topic.
  • partition: partitions are what make a Kafka cluster distributed. The messages of a topic can be spread across the topic's different partitions.
  • replica: replicas provide fault tolerance for the Kafka cluster, i.e. for each partition. A topic should normally have a replication factor greater than 1.
  • consumer group: the consumers in a consumer group jointly consume the partitions of a topic. Every consumer group has a unique name; consumers configured with the same group.id belong to the same group.
  • offset: the position, per consumer and per partition, from which data is pulled.
  • group.id: identifies a consumer group; a group can contain multiple consumers. Consumers with the same group.id belong to one group and jointly consume the data of a Kafka topic.
  • Kafka is a pull-based message queue: each consumer keeps an offset that indicates from which message to start pulling data (a small offset-inspection sketch follows this list).
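
To make offsets concrete, Kafka ships a small GetOffsetShell tool that prints the latest offset of each partition of a topic. A minimal sketch, assuming the topic_start topic used later in these notes:

# Prints topic:partition:latest-offset for every partition of topic_start
[lili@hadoop102 kafka]$ bin/kafka-run-class.sh kafka.tools.GetOffsetShell \
--broker-list hadoop102:9092 --topic topic_start --time -1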

Consumer Groups

  • A consumer group can contain multiple consumers that jointly consume the data of a topic.
  • If a topic has only one partition, that partition can be consumed by only one consumer within a given group.
  • A topic with N partitions can be consumed in parallel by at most N consumers of the same group (see the sketch after this list).
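
To see how a topic's partitions are assigned to the members of a group and how far each member has consumed, Kafka provides the kafka-consumer-groups.sh tool. A minimal sketch, assuming a group named g1 is currently consuming (the group name is a placeholder for this setup):

# List all consumer groups known to the cluster
[lili@hadoop102 kafka]$ bin/kafka-consumer-groups.sh --bootstrap-server hadoop102:9092 --list

# For group g1, show each partition's current offset, log-end offset, and lag
[lili@hadoop102 kafka]$ bin/kafka-consumer-groups.sh --bootstrap-server hadoop102:9092 --describe --group g1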

Idempotence

  • The producer duplicate-message problem
  • When a Kafka producer sends a message to a partition, Kafka saves the message and returns an ack to the producer indicating whether the operation succeeded, i.e. whether the message was stored. If the ack response is lost, the producer retries and resends the message it believes was not delivered, and Kafka stores an identical message a second time.
  • Idempotence can be enabled in Kafka to prevent this.
  • With idempotence enabled, the producer attaches a pid (a unique producer id) and a sequence number (a counter that increases per message) to each message.
  • The pid and sequence number are sent along with the message.
  • When Kafka receives the message, it stores the message together with the pid and sequence number.
  • If the ack fails and the producer retries, Kafka uses the pid and sequence number to decide whether the message needs to be stored again.
  • Decision rule: if the sequence number sent by the producer is less than or equal to the sequence number already stored for that pid and partition, the message is a duplicate and is discarded; otherwise it is stored. (See the configuration sketch after this list.)
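
Idempotence is switched on with the producer setting enable.idempotence=true (available since Kafka 0.11), which also requires acks=all. A quick way to try it from the command line is to pass the property to the producer performance tool; the topic name test here is only an example:

# Send 1,000 test records with the idempotent producer enabled
[lili@hadoop102 kafka]$ bin/kafka-producer-perf-test.sh --topic test --record-size 100 --num-records 1000 \
--throughput -1 \
--producer-props bootstrap.servers=hadoop102:9092,hadoop103:9092,hadoop104:9092 enable.idempotence=true acks=all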

Kafka Cluster Setup

Prerequisite: the ZooKeeper cluster must already be set up and running.
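
Before continuing, it is worth confirming that ZooKeeper is actually running on every node. One quick check (the ZooKeeper install path below is an assumption for this environment):

# Should report "Mode: follower" or "Mode: leader"; repeat on hadoop103 and hadoop104
[lili@hadoop102 ~]$ /opt/module/zookeeper-3.4.10/bin/zkServer.sh status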

Kafka cluster deployment

  1. Upload and extract the installation package
[lili@hadoop102 software]$ tar -zxvf kafka_2.11-0.11.0.0.tgz -C /opt/module/
  2. Rename the extracted directory
[lili@hadoop102 module]$ mv kafka_2.11-0.11.0.0/ kafka
  3. Create a logs folder under /opt/module/kafka
    It will be used as the Kafka log directory configured below (log.dirs).
[lili@hadoop102 kafka]$ mkdir logs
  4. Edit the Kafka configuration file
[lili@hadoop102 kafka]$ vim config/server.properties 

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
#Unique broker id; must not be duplicated across brokers
broker.id=0

# Switch to enable topic deletion or not, default value is false
#Enable topic deletion
delete.topic.enable=true

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from 
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set, 
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
#Number of threads that handle network requests
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
#Number of threads that handle disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
#A socket is an endpoint of network communication, identified by an IP address and a port;
#data written to a socket on one host is delivered over the network to the peer socket on the other host.
#Send buffer size (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
#Receive buffer size (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
#Maximum size of a request the socket server will accept (protects against OOM)
socket.request.max.bytes=104857600


############################# Log Basics #############################

# A comma seperated list of directories under which to store log files
#Directory where Kafka stores its log data (partition segments)
log.dirs=/opt/module/kafka/logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
#Default number of partitions per topic on this broker
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
#Number of threads per data directory used for log recovery at startup and flushing at shutdown
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended for to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to exceessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
#Maximum time a segment file is retained; it is deleted once it expires
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
#ZooKeeper cluster connection addresses
zookeeper.connect=hadoop102:2181,hadoop103:2181,hadoop104:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000


############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
  5. Distribute the kafka directory to the three servers
[lili@hadoop102 module]$ xsync kafka/

After distribution, edit /opt/module/kafka/config/server.properties on the other servers and change broker.id to 1 and 2 respectively (for example with sed, as sketched below).
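
A minimal way to do this from hadoop102, assuming password-free ssh to the other hosts (the sed one-liners are just one possible approach):

[lili@hadoop102 module]$ ssh hadoop103 "sed -i 's/^broker.id=0/broker.id=1/' /opt/module/kafka/config/server.properties"
[lili@hadoop102 module]$ ssh hadoop104 "sed -i 's/^broker.id=0/broker.id=2/' /opt/module/kafka/config/server.properties"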

  6. Configure environment variables
    Do this on each of the three servers.
[lili@hadoop102 module]$ vim /etc/profile.d/env.sh 
#KAFKA_HOME
export KAFKA_HOME=/opt/module/kafka
export PATH=$PATH:$KAFKA_HOME/bin
[lili@hadoop102 module]$ source /etc/profile.d/env.sh
  7. Start the cluster
[lili@hadoop102 kafka]$ bin/kafka-server-start.sh config/server.properties &
[lili@hadoop103 kafka]$ bin/kafka-server-start.sh config/server.properties &
[lili@hadoop104 kafka]$ bin/kafka-server-start.sh config/server.properties &

Open another shell window connected to hadoop102 and check the running processes:

[lili@hadoop102 ~]$ xcall.sh jps
----------hadoop102----------
21025 NameNode
21460 NodeManager
21142 DataNode
21543 JobHistoryServer
25946 Kafka
20826 QuorumPeerMain
23707 Application
30063 Jps
----------hadoop103----------
16547 Kafka
13589 QuorumPeerMain
13800 ResourceManager
15736 Application
17993 Jps
13918 NodeManager
13662 DataNode
----------hadoop104----------
19041 Jps
14242 DataNode
14358 SecondaryNameNode
16840 Kafka
14447 NodeManager
14175 QuorumPeerMain
[lili@hadoop102 ~]$
  8. Stop the cluster
[lili@hadoop102 kafka]$ bin/kafka-server-stop.sh stop
[lili@hadoop103 kafka]$ bin/kafka-server-stop.sh stop
[lili@hadoop104 kafka]$ bin/kafka-server-stop.sh stop

After stopping the cluster, check the processes again:

[lili@hadoop102 ~]$ xcall.sh jps
----------hadoop102----------
21025 NameNode
21460 NodeManager
21142 DataNode
21543 JobHistoryServer
30135 Jps
20826 QuorumPeerMain
23707 Application
----------hadoop103----------
18050 Jps
13589 QuorumPeerMain
13800 ResourceManager
15736 Application
13918 NodeManager
13662 DataNode
----------hadoop104----------
14242 DataNode
14358 SecondaryNameNode
19102 Jps
14447 NodeManager
14175 QuorumPeerMain
[lili@hadoop102 ~]$

Kafka startup script

  1. Write the script
[lili@hadoop102 bin]$ vim kf.sh
#!/bin/bash
case $1 in
"start"){
        for i in hadoop102 hadoop103 hadoop104
        do
                echo " --------启动 $i Kafka-------"
                ssh $i "/opt/module/kafka/bin/kafka-server-start.sh -daemon /opt/module/kafka/config/server.properties "
#A daemon process runs independently of any terminal, from system startup until shutdown, and does not
#interact with a client; unlike a normal process it keeps running after the terminal is closed.
        done
};;
"stop"){
        for i in hadoop102 hadoop103 hadoop104
        do
                echo " --------停止 $i Kafka-------"
                ssh $i "/opt/module/kafka/bin/kafka-server-stop.sh  stop"
        done
};;
esac
  2. Make the script executable
[lili@hadoop102 bin]$ chmod 777 kf.sh
  3. Start the cluster via the script
[lili@hadoop102 module]$ kf.sh start
  4. Stop the cluster via the script
[lili@hadoop102 module]$ kf.sh stop

Kafka Command-Line Operations

1. List Kafka topics

[lili@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181 --list
topic_event
topic_start

If these two topics do not appear, possible reasons are:

  1. The log-collecting Flume agent should be started before Kafka is started.
  2. There is no data source: check whether there is log data under /tmp/logs/.
  3. If the problem persists, create the Kafka topics manually (next section).

2. Create a Kafka topic

Go to the /opt/module/kafka/ directory and create the startup-log topic and the event-log topic.

  1. Create the startup-log topic (topic_start)
[lili@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181,hadoop103:2181,hadoop104:2181  --create --replication-factor 1 --partitions 1 --topic topic_start
  2. Create the event-log topic (topic_event)
[lili@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181,hadoop103:2181,hadoop104:2181  --create --replication-factor 1 --partitions 1 --topic topic_event

3. Delete a Kafka topic

  1. Delete the startup-log topic
[lili@hadoop102 kafka]$ bin/kafka-topics.sh --delete --zookeeper hadoop102:2181,hadoop103:2181,hadoop104:2181 --topic topic_start
  2. Delete the event-log topic
[lili@hadoop102 kafka]$ bin/kafka-topics.sh --delete --zookeeper hadoop102:2181,hadoop103:2181,hadoop104:2181 --topic topic_event

4. Consume messages

  1. Consume the startup-log topic
[lili@hadoop102 kafka]$ bin/kafka-console-consumer.sh \
--bootstrap-server hadoop102:9092 --from-beginning --topic topic_start

--from-beginning: reads all of the topic's existing messages from the beginning. Include it or not depending on the business scenario.

  2. Consume the event-log topic
[lili@hadoop102 kafka]$ bin/kafka-console-consumer.sh \
--bootstrap-server hadoop102:9092 --from-beginning --topic topic_event
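
If you need some test data in a topic, you can type a few messages by hand with the console producer (each line entered at the > prompt becomes one message; topic_start is the topic created above):

[lili@hadoop102 kafka]$ bin/kafka-console-producer.sh \
--broker-list hadoop102:9092 --topic topic_start
>hello kafka
>this is a test message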

5. Describe a Kafka topic

  1. Describe the startup-log topic
[lili@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181 \
--describe --topic topic_start
  2. Describe the event-log topic
[lili@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181 \
--describe --topic topic_event

6. Kafka stress test

Kafka ships official scripts for stress-testing it:

kafka-producer-perf-test.sh

kafka-consumer-perf-test.sh

  1. Kafka producer stress test
[lili@hadoop102 kafka]$ bin/kafka-producer-perf-test.sh  --topic test --record-size 100 --num-records 100000 --throughput -1 --producer-props bootstrap.servers=hadoop102:9092,hadoop103:9092,hadoop104:9092

Notes:

record-size is the size of a single record, in bytes.

num-records is the total number of records to send.

throughput is the target number of records per second; -1 means unthrottled, which measures the producer's maximum throughput.

Output:

100000 records sent, 27510.316369 records/sec (2.62 MB/sec), 1303.49 ms avg latency, 
1597.00 ms max latency, 1434 ms 50th, 1569 ms 95th, 1589 ms 99th, 1595 ms 99.9th.

In this run 100,000 records were written in total, with a throughput of 2.62 MB/sec, an average write latency of 1303.49 ms, and a maximum latency of 1597.00 ms. (My machine is really weak!)

  2. Kafka consumer stress test
[lili@hadoop102 kafka]$ bin/kafka-consumer-perf-test.sh --zookeeper hadoop102:2181 --topic test --fetch-size 10000 --messages 10000000 --threads 1

Notes:

--zookeeper specifies the ZooKeeper connection string.

--topic specifies the topic name.

--fetch-size specifies the amount of data fetched per request.

--messages is the total number of messages to consume.

Note: for the consumer test, if none of the four resources (I/O, CPU, memory, network) can be improved further, consider increasing the number of partitions to raise performance (a sketch follows at the end of this section).

Output:

start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec
2021-07-31 07:23:43:903, 2021-07-31 07:23:49:424, 19.0735, 3.4547, 200000, 36225.3215

Start time, end time, 19.0735 MB of data consumed in total, throughput of 3.4547 MB/s, 200,000 messages consumed in total, averaging 36,225.3215 messages per second.
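
As suggested in the note above, the partition count of an existing topic can be increased (but never decreased) with kafka-topics.sh --alter. A minimal sketch for the test topic used in the stress test, assuming 6 partitions is the desired count:

[lili@hadoop102 kafka]$ bin/kafka-topics.sh --zookeeper hadoop102:2181 \
--alter --topic test --partitions 6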