Table of Contents

1 Basic concepts

2 Commit strategies

2.0 Duplicate and lost consumption, illustrated

2.1 Automatic commit

2.2 Manual (synchronous) commit: commitSync

2.3 Asynchronous commit: commitAsync

2.4 Combining synchronous and asynchronous commits: commitAsync() + commitSync()

2.5 Special case: committing offsets mid-batch


Note: the base code and material come from the Xiangxue Ketang course, with minor modifications made while self-studying.

1 Basic concepts

  • Commit: after a consumer finishes processing a message, it records (commits) the offset of that message.
  • __consumer_offsets: after consuming a message, the consumer writes the offset to the internal __consumer_offsets topic, which stores the committed offset for every partition.
  • Partition rebalance: when the number of consumers in a group changes, or the number of partitions in a topic changes, Kafka reassigns partitions to consumers; this is called a rebalance. It is what gives Kafka its availability and scalability, but while a rebalance is in progress consumers cannot read messages, so the group is briefly unavailable.

 


2 Commit strategies

2.0 Duplicate and lost consumption, illustrated

Duplicate consumption (diagram): the committed offset lags behind the last record actually processed; after a rebalance, the new consumer resumes from the committed offset and reprocesses the gap.

Lost consumption (diagram): the offset is committed ahead of what has actually been processed; after a rebalance, the new consumer skips the unprocessed gap.
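Both failure modes come down to where the committed offset sits relative to the last processed record when a rebalance hands the partition to another consumer. A minimal plain-Java sketch (no Kafka dependency; the offset numbers are purely illustrative):

```java
public class OffsetGapDemo {
    // After a rebalance, the new consumer resumes from the committed offset.
    // Records between the committed offset and the processed position are
    // either reprocessed (duplicates) or skipped (lost).
    public static long reprocessed(long committed, long processed) {
        return Math.max(0, processed - committed); // commit lags processing
    }

    public static long skipped(long committed, long processed) {
        return Math.max(0, committed - processed); // commit leads processing
    }

    public static void main(String[] args) {
        // Duplicate consumption: processed up to offset 120, last commit at 100
        System.out.println(reprocessed(100, 120)); // 20 records consumed twice
        // Lost consumption: offset 120 committed before processing reached it
        System.out.println(skipped(120, 100));     // 20 records never processed
    }
}
```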

2.1 Automatic commit

The consumer automatically commits the largest offset returned by poll().

Parameters:

  • enable.auto.commit=true: offsets are also committed automatically when the consumer calls close()
  • auto.commit.interval.ms: defaults to 5 s, so offsets are committed every 5 seconds. If a rebalance happens 4 seconds in, before the next commit fires, the messages from those 4 seconds will be consumed again.

Drawback:

Automatic commit is convenient, but it is purely time-based and leaves us no way to avoid processing messages twice.
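The relevant configuration can be sketched as below. The property names are the real Kafka consumer settings; the broker address and group id are placeholders:

```java
import java.util.Properties;

public class AutoCommitConfig {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "127.0.0.1:9092"); // placeholder broker
        props.put("group.id", "demo-group");              // placeholder group id
        props.put("enable.auto.commit", "true");          // commit automatically
        props.put("auto.commit.interval.ms", "5000");     // default: every 5 s
        return props;
    }

    public static void main(String[] args) {
        System.out.println(consumerProps().getProperty("auto.commit.interval.ms")); // 5000
    }
}
```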


2.2 Manual (synchronous) commit: commitSync

Parameters:

enable.auto.commit=false: disable auto-commit, then use commitSync() to commit the latest offset returned by poll()

Notes:

  1. Call commitSync() only after the business logic has finished.
  2. If a rebalance happens before commitSync() has committed the current offsets, those messages will be consumed again.
  3. commitSync() blocks until the commit succeeds, retrying on recoverable errors.

Code example:


public class CommitSync {

    public static void main(String[] args) {
        /* Consumer configuration */
        Properties properties = new Properties();
        properties.put("bootstrap.servers","127.0.0.1:9092");
        properties.put("key.deserializer", StringDeserializer.class);
        properties.put("value.deserializer", StringDeserializer.class);
        properties.put(ConsumerConfig.GROUP_ID_CONFIG,"demo-group");
        /* Disable auto-commit */
        properties.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG,false);

        KafkaConsumer<String,String> consumer = new KafkaConsumer<String, String>(properties);
        try {
            consumer.subscribe(Collections.singletonList("simple"));
            while(true){
                ConsumerRecords<String, String> records = consumer.poll(500);
                for(ConsumerRecord<String, String> record:records){
                    System.out.println(String.format(
                            "topic:%s, partition:%d, offset:%d, key:%s, value:%s",
                            record.topic(),record.partition(),record.offset(), record.key(),record.value()));
                    // Custom business logic here
                }
                // e.g. begin a transaction, write results and offsets to the
                // database, then commit the offsets
                consumer.commitSync();
            }
        } finally {
            consumer.close();
        }
    }
}


2.3 Asynchronous commit: commitAsync

Notes:

  1. commitAsync() does not retry a failed commit, because a retried (stale) commit could overwrite a newer one and cause duplicate consumption.
  2. commitAsync() also accepts a callback, which runs when the broker responds. Callbacks are commonly used to log commit errors or to produce metrics.

Code example:
public class CommitAsync {

    public static void main(String[] args) {
        /* Consumer configuration */
        Properties properties = new Properties();
        properties.put("bootstrap.servers","127.0.0.1:9092");
        properties.put("key.deserializer", StringDeserializer.class);
        properties.put("value.deserializer", StringDeserializer.class);
        properties.put("group.id","demo-group");
        /* Disable auto-commit */
        properties.put("enable.auto.commit",false);

        KafkaConsumer<String,String> consumer = new KafkaConsumer<String, String>(properties);
        try {
            consumer.subscribe(Collections.singletonList("simple"));
            while(true){
                ConsumerRecords<String, String> records = consumer.poll(500);
                for(ConsumerRecord<String, String> record:records){
                    System.out.println(String.format("topic:%s, partition:%d, offset:%d, key:%s, value:%s",
                            record.topic(),record.partition(),record.offset(), record.key(),record.value()));
                    // Custom business logic here
                }
                // Batch processed: commit asynchronously, without blocking the loop
                consumer.commitAsync();
                /* Variant with an error-logging callback: */
//                consumer.commitAsync(new OffsetCommitCallback() {
//                    @Override
//                    public void onComplete( Map<TopicPartition, OffsetAndMetadata> offsets,Exception exception) {
//                        if(exception!=null){
//                            System.out.print("Commit failed for offsets ");
//                            System.out.println(offsets);
//                            exception.printStackTrace();
//                        }
//                    }
//                });
            }
        } finally {
            consumer.close();
        }
    }
}
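If you do want to retry a failed async commit yourself, the commonly described workaround is a monotonically increasing sequence number: bump it before every commit, and in the callback retry only if no newer commit attempt has been made since. A plain-Java sketch of just the guard logic (the Kafka callback itself is omitted; class and method names are ours):

```java
import java.util.concurrent.atomic.AtomicLong;

public class AsyncCommitGuard {
    private final AtomicLong sequence = new AtomicLong();

    // Call right before each commitAsync(); remember the returned number.
    public long nextAttempt() { return sequence.incrementAndGet(); }

    // In the commit callback: retry only if ours is still the latest attempt,
    // so a stale retry can never overwrite a newer commit.
    public boolean shouldRetry(long myAttempt) {
        return sequence.get() == myAttempt;
    }

    public static void main(String[] args) {
        AsyncCommitGuard guard = new AsyncCommitGuard();
        long first = guard.nextAttempt();   // commit #1 sent
        long second = guard.nextAttempt();  // commit #2 sent before #1's callback ran
        System.out.println(guard.shouldRetry(first));  // false: a newer commit exists
        System.out.println(guard.shouldRetry(second)); // true: safe to retry
    }
}
```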

 


2.4 Combining commitAsync() and commitSync()

Rationale:

During normal operation an occasional failed async commit is harmless, since a later commit will move the offset forward anyway. But the last commit before shutdown has no later commit to fall back on. Consumers therefore generally combine the two: commitAsync() in the poll loop, and a final commitSync() before closing. The synchronous commit retries until it succeeds, while the asynchronous one may fail silently.

Code example:

public class SyncAndAsync {
    public static void main(String[] args) {
        /* Consumer configuration */
        Properties properties = new Properties();
        properties.put("bootstrap.servers","127.0.0.1:9092");
        properties.put("key.deserializer", StringDeserializer.class);
        properties.put("value.deserializer", StringDeserializer.class);
        properties.put("group.id","demo-group");
        /* Disable auto-commit */
        properties.put("enable.auto.commit",false);

        KafkaConsumer<String,String> consumer = new KafkaConsumer<String, String>(properties);
        try {
            consumer.subscribe(Collections.singletonList("simple"));
            while(true){
                ConsumerRecords<String, String> records = consumer.poll(500);
                for(ConsumerRecord<String, String> record:records){
                    System.out.println(String.format(
                            "topic:%s, partition:%d, offset:%d, key:%s, value:%s",
                            record.topic(),record.partition(),record.offset(),
                            record.key(),record.value()));
                    // Custom business logic here
                }
                // Avoid blocking the poll loop: commit asynchronously
                consumer.commitAsync();
            }
        } catch (CommitFailedException e) {
            System.out.println("Commit failed");
            e.printStackTrace();
        } finally {
            try {
                // Final commit before shutdown: synchronous, retried until it succeeds
                consumer.commitSync();
            } finally {
                consumer.close();
            }
        }
    }
}

2.5 Special case: committing offsets mid-batch

Notes:

  1. The no-argument commitSync() and commitAsync() only commit the latest offset returned by poll(), so they cannot commit partway through a batch.
  2. The consumer API also lets you pass commitSync() and commitAsync() a map of topic-partitions to offsets, which makes it possible to commit offsets before a batch has been fully consumed.

Code example:


public class CommitSpecial {
    public static void main(String[] args) {
        /* Consumer configuration */
        Properties properties = new Properties();
        properties.put("bootstrap.servers","127.0.0.1:9092");
        properties.put("key.deserializer", StringDeserializer.class);
        properties.put("value.deserializer", StringDeserializer.class);
        properties.put("group.id","demo-group");
        /* Disable auto-commit */
        properties.put("enable.auto.commit",false);

        KafkaConsumer<String,String> consumer = new KafkaConsumer<String, String>(properties);
        // Map of topic-partition -> offset to commit
        Map<TopicPartition, OffsetAndMetadata> currOffsets = new HashMap<TopicPartition, OffsetAndMetadata>();
        int count = 0;
        try {
            consumer.subscribe(Collections.singletonList("simple"));
            while(true){
                ConsumerRecords<String, String> records = consumer.poll(500);
                for(ConsumerRecord<String, String> record:records){
                    System.out.println(String.format(
                            "topic:%s, partition:%d, offset:%d, key:%s, value:%s",
                            record.topic(),record.partition(),record.offset(),
                            record.key(),record.value()));
                    // Track the offset per topic-partition as we go, instead of
                    // committing only after the whole batch is processed.
                    // offset + 1 = the next offset the consumer should read.
                    currOffsets.put(new TopicPartition(record.topic(),record.partition()),
                            new OffsetAndMetadata(record.offset()+1,"no meta"));
                    if(count%11==0){
                        // Commit mid-batch, every 11 records
                        consumer.commitAsync(currOffsets,null);
                    }
                    count++;
                }
            }
        } finally {
            consumer.close();
        }
    }
}
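Note the record.offset() + 1 in the map above: Kafka interprets a committed offset as the position of the next message to read, not the last one processed. A minimal check of that arithmetic, without the Kafka classes (helper name is ours):

```java
public class NextOffset {
    // The offset to commit after processing the record at `lastProcessed`.
    public static long toCommit(long lastProcessed) { return lastProcessed + 1; }

    public static void main(String[] args) {
        // After processing the record at offset 41, commit 42 so the next
        // consumer starts at 42 instead of reprocessing 41.
        System.out.println(toCommit(41)); // 42
    }
}
```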