The main content:

  1. Kafka topic management
  2. Using KafkaAdminClient

Topic Management

Creating a topic

bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic topicone --partitions 2 --replication-factor 1

localhost:2181 is the ZooKeeper address. --zookeeper is a required parameter; multiple ZooKeeper addresses are separated with ','.

--partitions sets the number of partitions for the topic; each consumer thread processes the data of one partition.

--replication-factor sets the number of replicas for the topic. Each replica is placed on a different node, so the replication factor cannot exceed the total number of nodes. For example, if you have only one node but specify a replication factor of 2 at creation time, the command fails with an error.
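
If you would rather create the topic from code than from the CLI, the admin client offers createTopics. Below is a minimal sketch; the class name is only for illustration, and the broker address is the one used in the KafkaAdminClient examples further down, so adjust it to your environment.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

public class CreateTopicExample {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        // Assumed broker address; replace with your own bootstrap servers
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.33.129:9092");
        AdminClient client = AdminClient.create(props);
        // Same settings as the CLI example: 2 partitions, replication factor 1
        NewTopic newTopic = new NewTopic("topicone", 2, (short) 1);
        client.createTopics(Collections.singleton(newTopic)).all().get();
        client.close();
    }
}

Note that the admin client talks to the brokers (bootstrap servers) rather than to ZooKeeper.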

Viewing topics

Viewing topic metadata

Topic metadata is stored in a ZooKeeper node (ZooKeeper CLI command):

[zk: localhost:2181(CONNECTED) 0] get /brokers/topics/topicone
{"version":1,"partitions":{"1":[0],"0":[0]}}
cZxid = 0x33
ctime = Sat Nov 06 21:04:57 PDT 2021
mZxid = 0x33
mtime = Sat Nov 06 21:04:57 PDT 2021
pZxid = 0x35
cversion = 1
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 44
numChildren = 1
Listing all topics

Run the Kafka command:

[root@localhost zookeeper-3.4.14]# cd /opt/kafka/kafka_2.12-2.2.1/
[root@localhost kafka_2.12-2.2.1]# bin/kafka-topics.sh --list -zookeeper localhost:2181
__consumer_offsets
topicone
[root@localhost kafka_2.12-2.2.1]# 
Describe a specific topic:
[root@localhost kafka_2.12-2.2.1]# bin/kafka-topics.sh --describe -zookeeper localhost:2181 --topic topicone
Topic:topicone	PartitionCount:2	ReplicationFactor:1	Configs:
	Topic: topicone	Partition: 0	Leader: 0	Replicas: 0	Isr: 0
	Topic: topicone	Partition: 1	Leader: 0	Replicas: 0	Isr: 0
[root@localhost kafka_2.12-2.2.1]# 
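The same information can also be pulled through the admin API; here is a minimal sketch using listTopics and describeTopics (class name illustrative, broker address assumed as above):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

import java.util.Collections;
import java.util.Properties;
import java.util.Set;
import java.util.concurrent.ExecutionException;

public class ListTopicsExample {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.33.129:9092"); // assumed broker
        AdminClient client = AdminClient.create(props);
        // List all topic names (equivalent to kafka-topics.sh --list)
        Set<String> names = client.listTopics().names().get();
        System.out.println("topics: " + names);
        // Describe one topic (equivalent to kafka-topics.sh --describe --topic topicone)
        TopicDescription description = client.describeTopics(Collections.singleton("topicone"))
                .all().get().get("topicone");
        System.out.println("description: " + description);
        client.close();
    }
}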

Modifying a topic

// Add a configuration override
bin/kafka-topics.sh --alter -zookeeper localhost:2181 --topic topicone --config flush.messages=1
Default: none
server.properties: log.flush.interval.messages
Explanation:
The number of messages to accumulate before the log file is fsynced to disk. Disk I/O is a slow operation, but it is also a necessary means of ensuring data durability, so this setting has to balance durability against performance. If the value is too large, each fsync takes longer (I/O blocking); if it is too small, fsync happens far more often, which adds some latency to client requests overall. If the physical server fails, any messages that have not been fsynced are lost.
More details: https://www.jianshu.com/p/c9a54a587f0e
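
The same override can be applied through the admin API with alterConfigs, mirroring the alterTopicConfig method shown later. A minimal sketch (class name illustrative, broker address assumed):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

public class AlterFlushMessagesExample {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.33.129:9092"); // assumed broker
        AdminClient client = AdminClient.create(props);
        ConfigResource resource = new ConfigResource(ConfigResource.Type.TOPIC, "topicone");
        // Apply the same override as the CLI example: flush.messages=1
        Config config = new Config(Collections.singleton(new ConfigEntry("flush.messages", "1")));
        client.alterConfigs(Collections.singletonMap(resource, config)).all().get();
        client.close();
    }
}

Be aware that alterConfigs is non-incremental: the map you pass replaces the topic's existing overrides, so any override not included may be reverted to its default.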

Deleting a topic

If delete.topic.enable=true, the topic is deleted completely right away.

If delete.topic.enable=false and the topic has never been used (no messages have been produced to it), it can still be deleted completely.

If the topic has been used (messages have been produced to it), it is not actually deleted; it is only marked for deletion and is removed after the Kafka server is restarted.

bin/kafka-topics.sh --delete -zookeeper localhost:2181 --topic topicone
topicone is marked for deletion.
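
In the admin API the equivalent call is deleteTopics; a minimal sketch follows (class name illustrative, broker address assumed, and the delete.topic.enable behaviour described above still applies):

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ExecutionException;

public class DeleteTopicExample {
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.33.129:9092"); // assumed broker
        AdminClient client = AdminClient.create(props);
        // Delete topicone; with delete.topic.enable=false the broker only marks it for deletion
        client.deleteTopics(Collections.singleton("topicone")).all().get();
        client.close();
    }
}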

Adding partitions

// Increase the number of partitions
[root@localhost kafka_2.12-2.2.1]# bin/kafka-topics.sh --alter -zookeeper localhost:2181 --topic topicone --partitions 3
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!
[root@localhost kafka_2.12-2.2.1]#

When changing the partition count, you can only increase it; attempting to reduce the number of partitions results in an error.

Using KafkaAdminClient

We are used to managing and inspecting Kafka with the script tools in Kafka's bin directory, but sometimes this management and monitoring functionality needs to be integrated into a system (such as Kafka Manager), and that requires calling the API to operate on Kafka directly.

package com.demo.kafkademo.ch4;
import org.apache.kafka.clients.admin.*;
import org.apache.kafka.common.config.ConfigResource;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutionException;
/**
 * Using KafkaAdminClient
 */
public class KafkaAdminConfigOperation {
    static String brokerList =  "192.168.33.129:9092";
    static String topic = "topicone";
    public static void main(String[] args) throws ExecutionException, InterruptedException {
        describeTopicConfig();
        //alterTopicConfig();
        //addTopicPartitions();
    }
    // Describe the topic's configuration
    public static void describeTopicConfig() throws ExecutionException,
            InterruptedException {

        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList);
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);
        AdminClient client = AdminClient.create(props);

        ConfigResource resource =
                new ConfigResource(ConfigResource.Type.TOPIC, topic);
        DescribeConfigsResult result =
                client.describeConfigs(Collections.singleton(resource));
        Config config = result.all().get().get(resource);
        System.out.println("=====================================");
        System.out.println("config:"+config);
        client.close();
    }
    // Alter the topic's configuration
    public static void alterTopicConfig() throws ExecutionException, InterruptedException {

        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList);
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);
        AdminClient client = AdminClient.create(props);

        ConfigResource resource =
                new ConfigResource(ConfigResource.Type.TOPIC, topic);
        ConfigEntry entry = new ConfigEntry("cleanup.policy", "compact");
        Config config = new Config(Collections.singleton(entry));
        Map<ConfigResource, Config> configs = new HashMap<>();
        configs.put(resource, config);
        AlterConfigsResult result = client.alterConfigs(configs);
        result.all().get();

        client.close();
    }
    // Add partitions
    public static void addTopicPartitions() throws ExecutionException, InterruptedException {

        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, brokerList);
        props.put(AdminClientConfig.REQUEST_TIMEOUT_MS_CONFIG, 30000);
        AdminClient client = AdminClient.create(props);

        NewPartitions newPartitions = NewPartitions.increaseTo(5);
        Map<String, NewPartitions> newPartitionsMap = new HashMap<>();
        newPartitionsMap.put(topic, newPartitions);
        CreatePartitionsResult result = client.createPartitions(newPartitionsMap);
        result.all().get();

        client.close();
    }
}