Table of Contents
- Operations with @KafkaListener
- Multi-threaded and single-threaded consumption
- The container factory ConcurrentKafkaListenerContainerFactory
- Batch consumption vs. single-record consumption
- Code
- References
Operations with @KafkaListener
- With @KafkaListener you can customize both batch consumption and multi-threaded consumption. By writing custom factory beans that create listener containers, you can give different listeners differently configured containers, as shown below.
Multi-threaded and single-threaded consumption
@KafkaListener(
        id = "concurrencyConsumer",
        topics = "#{'${kafka.listener.multiple.partition.topic}'.split(',')}",
        containerFactory = "ackConcurrencyContainerFactory")
public void consumerListener(List<ConsumerRecord> consumerRecords, Acknowledgment ack) {
    LogRecord.handle(consumerRecords, ack);
}

@KafkaListener(
        id = "singleConsumer",
        topics = "#{'${kafka.listener.single.partition.topic}'.split(',')}",
        containerFactory = "ackSingleContainerFactory")
public void inputPersonfileNewCluster(List<ConsumerRecord> consumerRecords, Acknowledgment ack) {
    LogRecord.handle(consumerRecords, ack);
}
- Here, id is a custom identifier and topics is read from the configuration file. The difference between the two listeners lies in the containerFactory attribute: it names a custom factory bean that creates the listener container.
The container factory ConcurrentKafkaListenerContainerFactory
- The bean below is the factory that produces containers with concurrent consumption. The key line is factory.setConcurrency(concurrency), which sets the concurrency level. This value should not exceed the topic's partition count; otherwise the extra consumer threads will sit idle with no partitions to consume from.
@Bean("ackConcurrencyContainerFactory")
public ConcurrentKafkaListenerContainerFactory ackContainerFactory() {
    ConcurrentKafkaListenerContainerFactory factory = new ConcurrentKafkaListenerContainerFactory();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory(consumerProps()));
    // Manual, immediate acknowledgment: the listener commits each batch itself.
    factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL_IMMEDIATE);
    // Deliver records to the listener in batches (List<ConsumerRecord>).
    factory.setBatchListener(true);
    // Number of consumer threads; should be <= the topic's partition count.
    factory.setConcurrency(concurrency);
    return factory;
}
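The ackSingleContainerFactory referenced by the "singleConsumer" listener is not shown above. A minimal sketch, assuming the same consumerProps() helper, might look like this; the only difference from the concurrent factory is that setConcurrency() is never called, so the default of one consumer thread applies:

```java
// Hypothetical sketch of the single-threaded factory referenced by the
// "singleConsumer" listener; assumes the same consumerProps() helper exists.
@Bean("ackSingleContainerFactory")
public ConcurrentKafkaListenerContainerFactory ackSingleContainerFactory() {
    ConcurrentKafkaListenerContainerFactory factory = new ConcurrentKafkaListenerContainerFactory();
    factory.setConsumerFactory(new DefaultKafkaConsumerFactory(consumerProps()));
    factory.getContainerProperties().setAckMode(AbstractMessageListenerContainer.AckMode.MANUAL_IMMEDIATE);
    factory.setBatchListener(true);
    // No setConcurrency() call: the default concurrency of 1 gives a single consumer thread.
    return factory;
}
```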
- Create a topic test1 with four partitions and a topic test with a single partition, and set the concurrency to 4.
The log output after startup looks like this:
INFO|2019-04-17 21:53:04.079|[singleConsumer-0-L-1 ]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test,partition = 0, offset = 0,value = fo": "{\"c,Time: Wed Apr 17 21:53:04 CST 2019
INFO|2019-04-17 21:53:04.080|[concurrencyConsumer-0-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 0, offset = 0,value = fo": "{\"c,Time: Wed Apr 17 21:53:04 CST 2019
INFO|2019-04-17 21:53:04.137|[concurrencyConsumer-2-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 2, offset = 0,value = fo": "{\"c,Time: Wed Apr 17 21:53:04 CST 2019
INFO|2019-04-17 21:53:04.153|[concurrencyConsumer-3-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 3, offset = 0,value = fo": "{\"c,Time: Wed Apr 17 21:53:04 CST 2019
INFO|2019-04-17 21:53:04.168|[concurrencyConsumer-1-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 1, offset = 0,value = fo": "{\"c,Time: Wed Apr 17 21:53:04 CST 2019
INFO|2019-04-17 21:53:07.848|[singleConsumer-0-L-1 ]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test,partition = 0, offset = 10000,value = fo": "{\"c,Time: Wed Apr 17 21:53:07 CST 2019
INFO|2019-04-17 21:53:07.914|[concurrencyConsumer-0-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 0, offset = 10000,value = fo": "{\"c,Time: Wed Apr 17 21:53:07 CST 2019
INFO|2019-04-17 21:53:07.962|[concurrencyConsumer-2-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 2, offset = 10000,value = fo": "{\"c,Time: Wed Apr 17 21:53:07 CST 2019
INFO|2019-04-17 21:53:08.200|[concurrencyConsumer-3-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 3, offset = 10000,value = fo": "{\"c,Time: Wed Apr 17 21:53:08 CST 2019
INFO|2019-04-17 21:53:08.506|[concurrencyConsumer-1-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 1, offset = 10000,value = fo": "{\"c,Time: Wed Apr 17 21:53:08 CST 2019
INFO|2019-04-17 21:53:11.559|[concurrencyConsumer-0-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 0, offset = 20000,value = fo": "{\"c,Time: Wed Apr 17 21:53:11 CST 2019
INFO|2019-04-17 21:53:11.618|[concurrencyConsumer-2-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 2, offset = 20000,value = fo": "{\"c,Time: Wed Apr 17 21:53:11 CST 2019
INFO|2019-04-17 21:53:11.862|[concurrencyConsumer-3-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 3, offset = 20000,value = fo": "{\"c,Time: Wed Apr 17 21:53:11 CST 2019
INFO|2019-04-17 21:53:12.131|[concurrencyConsumer-1-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 1, offset = 20000,value = fo": "{\"c,Time: Wed Apr 17 21:53:12 CST 2019
INFO|2019-04-17 21:53:13.311|[singleConsumer-0-L-1 ]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test,partition = 0, offset = 20000,value = fo": "{\"c,Time: Wed Apr 17 21:53:13 CST 2019
INFO|2019-04-17 21:53:14.986|[concurrencyConsumer-0-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 0, offset = 30000,value = fo": "{\"c,Time: Wed Apr 17 21:53:14 CST 2019
INFO|2019-04-17 21:53:15.047|[concurrencyConsumer-2-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 2, offset = 30000,value = fo": "{\"c,Time: Wed Apr 17 21:53:15 CST 2019
INFO|2019-04-17 21:53:15.256|[concurrencyConsumer-3-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 3, offset = 30000,value = fo": "{\"c,Time: Wed Apr 17 21:53:15 CST 2019
INFO|2019-04-17 21:53:15.676|[concurrencyConsumer-1-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 1, offset = 30000,value = fo": "{\"c,Time: Wed Apr 17 21:53:15 CST 2019
INFO|2019-04-17 21:53:17.396|[singleConsumer-0-L-1 ]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test,partition = 0, offset = 30000,value = fo": "{\"c,Time: Wed Apr 17 21:53:17 CST 2019
INFO|2019-04-17 21:53:19.376|[concurrencyConsumer-0-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 0, offset = 40000,value = fo": "{\"c,Time: Wed Apr 17 21:53:19 CST 2019
INFO|2019-04-17 21:53:19.586|[concurrencyConsumer-2-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 2, offset = 40000,value = fo": "{\"c,Time: Wed Apr 17 21:53:19 CST 2019
INFO|2019-04-17 21:53:19.708|[concurrencyConsumer-3-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 3, offset = 40000,value = fo": "{\"c,Time: Wed Apr 17 21:53:19 CST 2019
INFO|2019-04-17 21:53:20.099|[concurrencyConsumer-1-L-1]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test1,partition = 1, offset = 40000,value = fo": "{\"c,Time: Wed Apr 17 21:53:20 CST 2019
INFO|2019-04-17 21:53:21.908|[singleConsumer-0-L-1 ]|c.i.m.c.LogRecord.logRecode. 23|consumed record: topic = test,partition = 0, offset = 40000,value = fo": "{\"c,Time: Wed Apr 17 21:53:21 CST 2019
- Here you can see that the concurrent consumer group for topic test1 contains four threads (the thread ids in the log differ), while topic test is consumed by a single thread. Consumption is fast, which is why a log line is printed only once every 10,000 records.
- I also verified that with the concurrency set to 6, consumer threads 5 and 6 never print any consumption logs. Earlier in the startup output you can see lines like the following: the two extra consumers join the consumer group but never consume anything afterwards, because there are no partitions left for them. Note that both the thread names and the clientIds are numbered incrementally:
INFO|2019-04-17 21:55:04.030|[concurrencyConsumer-4-C-1]|o.a.k.c.c.i.AbstractCoordinator.sendJoinGroupRequest. 486|[Consumer clientId=consumer-5, groupId=test-group-xn-03] (Re-)joining group
INFO|2019-04-17 21:55:04.030|[concurrencyConsumer-5-C-1]|o.a.k.c.c.i.AbstractCoordinator.sendJoinGroupRequest. 486|[Consumer clientId=consumer-6, groupId=test-group-xn-03] (Re-)joining group
Batch consumption vs. single-record consumption
- The setup is similar, so it is not repeated here: call factory.setBatchListener(true) and also set the consumer property max.poll.records, which caps the number of records returned per poll and therefore the maximum batch size delivered to the listener.
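The consumerProps() helper used by the factories above is not shown in the original. A minimal sketch using plain string keys follows; the property names are standard Kafka consumer configs, but the broker address, group id, and the value of max.poll.records are placeholder assumptions:

```java
import java.util.HashMap;
import java.util.Map;

public class ConsumerPropsSketch {
    // Hypothetical consumerProps() helper; broker address and group id are placeholders.
    public static Map<String, Object> consumerProps() {
        Map<String, Object> props = new HashMap<>();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder broker
        props.put("group.id", "test-group");              // placeholder group id
        props.put("enable.auto.commit", "false");         // manual ack mode commits offsets itself
        // Cap the number of records returned by each poll(), i.e. the maximum
        // batch size handed to a batch listener in one call.
        props.put("max.poll.records", 500);
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().get("max.poll.records"));
    }
}
```

With this map passed to DefaultKafkaConsumerFactory, each call to the batch listener receives at most 500 records.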