1. Introduction from the Apache Kafka website

http://kafka.apache.org

Publish & subscribe: read and write streams of data, like a messaging system.

Process: write scalable stream-processing applications that react to events in real time.

Store: safely store streams of data in a distributed, replicated, fault-tolerant cluster.

Kafka is used to build real-time data pipelines and streaming apps. It is horizontally scalable, highly available, fast, and already runs in production at thousands of companies.

2. CDH Kafka documentation

https://docs.cloudera.com/documentation/kafka/latest/topics/kafka.html

3. How to choose a version for production

In production, enterprise big-data platforms are overwhelmingly built on CDH, where Kafka has to be deployed as a custom add-on (see the《CDK部署课程》). If the company runs CDH 5.15.1, the bundled ZooKeeper version is fixed at zookeeper-3.4.5-cdh5.15.1 and cannot be changed. How should the Kafka version be chosen? In general, pick the latest Kafka package on the current CDH download site. Our company's classic combination is [0.10.2.0+kafka2.2.0+110], mainly for historical reasons plus the fact that 0.10 is the first Kafka version supported by Spark Streaming's Kafka integration.

CDH Kafka:
 wget http://archive.cloudera.com/kafka/kafka/4/kafka-2.2.1-kafka4.1.0.tar.gz
 wget http://archive.cloudera.com/kafka/parcels/4.1.0/KAFKA-4.1.0-1.4.1.0.p0.4-el7.parcel
Version naming:
 kafka_2.11-2.2.1-kafka-4.1.0.jar: scalaversion-kafkaversion-cdkversion
CDH Zookeeper:
 wget https://archive.cloudera.com/cdh5/cdh/5/zookeeper-3.4.5-cdh5.15.1.tar.gz
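The scalaversion-kafkaversion-cdkversion naming convention can be checked mechanically; a minimal sketch (the helper and its regex are illustrative, not a Cloudera tool):

```python
import re

def parse_cdh_kafka_jar(name: str) -> tuple:
    """Split a CDH Kafka jar name into its three version components,
    following the scalaversion-kafkaversion-cdkversion convention."""
    m = re.match(r"kafka_(\d+\.\d+)-([\d.]+)-kafka-([\d.]+)\.jar$", name)
    if m is None:
        raise ValueError(f"unexpected jar name: {name}")
    scala, kafka, cdk = m.groups()
    return scala, kafka, cdk
```

For example, `parse_cdh_kafka_jar("kafka_2.11-2.2.1-kafka-4.1.0.jar")` returns `("2.11", "2.2.1", "4.1.0")`: Scala 2.11, Kafka 2.2.1, CDK 4.1.0.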

4. Cluster deployment and startup

4.1 Prepare the packages & configure environment variables

Packages
[hadoop@ruozedata001 app]$ ll
total 0
lrwxrwxrwx 1 hadoop hadoop 50 Oct 16 14:29 kafka -> /home/hadoop/software/kafka_2.11-2.2.1-kafka-4.1.0
lrwxrwxrwx 1 hadoop hadoop 47 Oct 16 14:29 zookeeper -> /home/hadoop/software/zookeeper-3.4.5-cdh5.15.1
[hadoop@ruozedata001 app]$ 
Environment variables
[hadoop@ruozedata001 ~]$ vi .bashrc
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
 . /etc/bashrc
fi
# hadoop env
export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper
export KAFKA_HOME=/home/hadoop/app/kafka
export PATH=$ZOOKEEPER_HOME/bin:$KAFKA_HOME/bin:$PATH
Apply the changes
[hadoop@ruozedata001 ~]$ source .bashrc
Do the same on ruozedata002 and ruozedata003.

4.2 zookeeper

[hadoop@ruozedata001 ~]$ cd app/zookeeper/conf/
[hadoop@ruozedata001 conf]$ cp zoo_sample.cfg zoo.cfg
Edit the configuration file
[hadoop@ruozedata001 conf]$ vi zoo.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/home/hadoop/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=ruozedata001:2888:3888
server.2=ruozedata002:2888:3888
server.3=ruozedata003:2888:3888
Create the directory configured as dataDir
[hadoop@ruozedata001 ~]$ mkdir tmp/zookeeper
[hadoop@ruozedata001 ~]$ echo 1 > tmp/zookeeper/myid
ruozedata002: 
 zoo.cfg identical to ruozedata001;
 echo 2 > tmp/zookeeper/myid
ruozedata003: 
 zoo.cfg identical to ruozedata001;
 echo 3 > tmp/zookeeper/myid
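The host-to-myid assignment is easy to mix up across nodes; a minimal sketch of the mapping used above (the helper function is hypothetical):

```python
# Each ZooKeeper server identifies itself via the number in
# dataDir/myid, which must match its server.N line in zoo.cfg.
MYIDS = {"ruozedata001": 1, "ruozedata002": 2, "ruozedata003": 3}

def myid_content(hostname: str) -> str:
    """Return the one-line content of /home/hadoop/tmp/zookeeper/myid
    for the given host."""
    return f"{MYIDS[hostname]}\n"
```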
Start ZooKeeper
[hadoop@ruozedata001 ~]$ zkServer.sh start
[hadoop@ruozedata002 ~]$ zkServer.sh start
[hadoop@ruozedata003 ~]$ zkServer.sh start
Check the status
[hadoop@ruozedata001 ~]$ zkServer.sh status
[hadoop@ruozedata002 ~]$ zkServer.sh status
[hadoop@ruozedata003 ~]$ zkServer.sh status
One node is the leader; the other two are followers.

4.3 kafka

The three nodes use the same configuration, except for broker.id and host.name:
[hadoop@ruozedata001 ~]$ cd app/kafka/config
[hadoop@ruozedata001 config]$ vi server.properties 
broker.id=0
host.name=ruozedata001
port=9092

log.dirs=/home/hadoop/log/kafka-logs

zookeeper.connect=ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka

[hadoop@ruozedata002 ~]$ cd app/kafka/config
[hadoop@ruozedata002 config]$ vi server.properties 
broker.id=1
host.name=ruozedata002
port=9092

log.dirs=/home/hadoop/log/kafka-logs

zookeeper.connect=ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka

[hadoop@ruozedata003 ~]$ cd app/kafka/config
[hadoop@ruozedata003 config]$ vi server.properties 
broker.id=2
host.name=ruozedata003
port=9092

log.dirs=/home/hadoop/log/kafka-logs

zookeeper.connect=ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka
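Since only broker.id and host.name differ across the three files, the per-broker overrides can be rendered from one template; a sketch (the helper is illustrative, not part of Kafka):

```python
ZK_CONNECT = "ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka"

def server_overrides(broker_id: int, hostname: str) -> str:
    """Render the server.properties overrides shown above; only
    broker.id and host.name vary across the three nodes."""
    return "\n".join([
        f"broker.id={broker_id}",
        f"host.name={hostname}",
        "port=9092",
        "log.dirs=/home/hadoop/log/kafka-logs",
        f"zookeeper.connect={ZK_CONNECT}",
    ])
```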


Start the brokers
[hadoop@ruozedata001 kafka]$ pwd
/home/hadoop/app/kafka
[hadoop@ruozedata001 kafka]$ bin/kafka-server-start.sh -daemon config/server.properties

[hadoop@ruozedata002 kafka]$ pwd
/home/hadoop/app/kafka
[hadoop@ruozedata002 kafka]$ bin/kafka-server-start.sh -daemon config/server.properties

[hadoop@ruozedata003 kafka]$ pwd
/home/hadoop/app/kafka
[hadoop@ruozedata003 kafka]$ bin/kafka-server-start.sh -daemon config/server.properties

Verify that startup succeeded
[hadoop@ruozedata001 logs]$ pwd
/home/hadoop/app/kafka/logs
[hadoop@ruozedata001 logs]$ tail -200f server.log
[2019-10-26 15:09:12,384] INFO [KafkaServer id=0] started (kafka.server.KafkaServer)

[hadoop@ruozedata002 logs]$ pwd
/home/hadoop/app/kafka/logs
[hadoop@ruozedata002 logs]$ tail -200f server.log
[2019-10-26 15:37:26,883] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)

[hadoop@ruozedata003 logs]$ pwd
/home/hadoop/app/kafka/logs
[hadoop@ruozedata003 logs]$ tail -200f server.log
[2019-10-26 15:41:56,486] INFO [KafkaServer id=2] started (kafka.server.KafkaServer)

5. Basic concepts

producer: the writer side, e.g. Flume, Maxwell

consumer: the reader side, e.g. Spark Streaming (SS) / Structured Streaming (SSS) / Flink / Flume

broker: a message-handling node, i.e. a running Kafka server process

topic: a data feed to which records are published; commonly used to separate business systems. Example: the OMS order system, mysql.oms -->Maxwell-->Kafka topic oms (directory /oms); the WMS warehouse system, mysql.wms -->Maxwell-->Kafka topic wms (directory /wms).

applog---flume-->kafka topic: applog
systemlog---flume-->kafka topic: systemlog

partition: a physical grouping within a topic. A topic is divided into one or more partitions; each partition is an ordered queue and, on disk, is simply a directory. Partitions are named topic name plus ordinal, e.g. ruozedata-0, ruozedata-1, ruozedata-2.

replication-factor: the number of replicas, i.e. how many copies of each partition are kept. The design mirrors HDFS block replication: it exists for fault tolerance.
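How records are assigned to partitions determines the ordering guarantees. A sketch of the default rule (real Kafka hashes the key with murmur2; CRC32 is used here as a deterministic stand-in, and round-robin for keyless records):

```python
import zlib

def pick_partition(key, num_partitions, round_robin_counter=0):
    """Keyed records always hash to the same partition, so per-key
    order is preserved; keyless records are spread across partitions
    (here: simple round-robin)."""
    if key is None:
        return round_robin_counter % num_partitions
    return zlib.crc32(key) % num_partitions
```

So records for one business key (e.g. one order id) always land in one partition and stay in order there, while different keys spread the load.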

6. Common CLI commands

6.1. Create a topic. If a topic can be created successfully, the cluster installation is complete; you can also use jps to check whether the Kafka process is running.

bin/kafka-topics.sh --create --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka --replication-factor 3 --partitions 3 --topic ruozedata

Run:
[hadoop@ruozedata001 kafka]$ bin/kafka-topics.sh \
> --create \
> --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka \
> --replication-factor 3 \
> --partitions 3 \
> --topic ruozedata
Created topic ruozedata.
[hadoop@ruozedata001 kafka]$ pwd
/home/hadoop/app/kafka

With replication factor 3, the three partition directories appear on all three machines:
[hadoop@ruozedata001 ~]$ cd log/kafka-logs/
[hadoop@ruozedata001 kafka-logs]$ ll
total 28
-rw-rw-r-- 1 hadoop hadoop    0 Oct 26 14:59 cleaner-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop    4 Oct 26 16:13 log-start-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop   54 Oct 26 14:59 meta.properties
-rw-rw-r-- 1 hadoop hadoop   46 Oct 26 16:13 recovery-point-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop   46 Oct 26 16:13 replication-offset-checkpoint
drwxrwxr-x 2 hadoop hadoop 4096 Oct 26 16:09 ruozedata-0
drwxrwxr-x 2 hadoop hadoop 4096 Oct 26 16:09 ruozedata-1
drwxrwxr-x 2 hadoop hadoop 4096 Oct 26 16:09 ruozedata-2

[hadoop@ruozedata002 kafka-logs]$ ll
total 28
-rw-rw-r-- 1 hadoop hadoop    0 Oct 26 14:59 cleaner-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop    4 Oct 26 16:10 log-start-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop   54 Oct 26 14:59 meta.properties
-rw-rw-r-- 1 hadoop hadoop   46 Oct 26 16:10 recovery-point-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop   46 Oct 26 16:11 replication-offset-checkpoint
drwxrwxr-x 2 hadoop hadoop 4096 Oct 26 16:09 ruozedata-0
drwxrwxr-x 2 hadoop hadoop 4096 Oct 26 16:09 ruozedata-1
drwxrwxr-x 2 hadoop hadoop 4096 Oct 26 16:09 ruozedata-2
[hadoop@ruozedata002 kafka-logs]$ pwd
/home/hadoop/log/kafka-logs

[hadoop@ruozedata003 ~]$ cd log/kafka-logs/
[hadoop@ruozedata003 kafka-logs]$ ll
total 28
-rw-rw-r-- 1 hadoop hadoop    0 Oct 26 14:59 cleaner-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop    4 Oct 26 16:14 log-start-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop   54 Oct 26 14:59 meta.properties
-rw-rw-r-- 1 hadoop hadoop   46 Oct 26 16:14 recovery-point-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop   46 Oct 26 16:14 replication-offset-checkpoint
drwxrwxr-x 2 hadoop hadoop 4096 Oct 26 16:09 ruozedata-0
drwxrwxr-x 2 hadoop hadoop 4096 Oct 26 16:09 ruozedata-1
drwxrwxr-x 2 hadoop hadoop 4096 Oct 26 16:09 ruozedata-2

6.2. View the created topics with the list command:

bin/kafka-topics.sh --list --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka

Run:
[hadoop@ruozedata001 ~]$ cd app/kafka/
[hadoop@ruozedata001 kafka]$ bin/kafka-topics.sh \
> --list \
> --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka
ruozedata

6.3. Describe the created topic

bin/kafka-topics.sh --describe --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka --topic ruozedata

[hadoop@ruozedata001 kafka]$ bin/kafka-topics.sh \
> --describe \
> --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka \
> --topic ruozedata
Topic:ruozedata	PartitionCount:3	ReplicationFactor:3	Configs:
	Topic: ruozedata	Partition: 0	Leader: 0	Replicas: 0,2,1	Isr: 0,2,1
	Topic: ruozedata	Partition: 1	Leader: 1	Replicas: 1,0,2	Isr: 1,0,2
	Topic: ruozedata	Partition: 2	Leader: 2	Replicas: 2,1,0	Isr: 2,1,0

The first line summarizes the topic: name, partition count, replication factor, and configs. Each subsequent line describes one partition: its index, which broker is the leader, which brokers hold replicas, and which replicas are in sync.
Partition: the partition index
Leader: the node responsible for reads and writes of the given partition
Replicas: the list of nodes that replicate this partition's log
Isr: the "in-sync" replicas, the currently live subset of Replicas, any of which may become the Leader
We can use the bundled bin/kafka-console-producer.sh and bin/kafka-console-consumer.sh scripts to demonstrate how to publish and consume messages.
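The describe output is tab-separated "Field: value" pairs, so monitoring scripts can parse it; a minimal sketch (field names taken from the output above, the helper itself is illustrative):

```python
def parse_partition_row(line: str) -> dict:
    """Parse one partition row of `kafka-topics.sh --describe` output
    into leader / replicas / isr fields."""
    fields = dict(p.split(": ", 1) for p in line.strip().split("\t"))
    return {
        "partition": int(fields["Partition"]),
        "leader": int(fields["Leader"]),
        "replicas": [int(b) for b in fields["Replicas"].split(",")],
        "isr": [int(b) for b in fields["Isr"].split(",")],
    }
```

A partition is healthy when its isr list equals its replicas list, as in the output above.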

6.4. Delete a topic

bin/kafka-topics.sh --delete --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka --topic ruozedata
If the deletion is incomplete, check two places: 1. the topic directories on the Linux disk; 2. the /kafka chroot in ZooKeeper (ls /kafka/brokers/topics, ls /kafka/config/topics). With the default delete.topic.enable=true, nothing further is needed after running the delete command.

Run:
[hadoop@ruozedata001 kafka]$ bin/kafka-topics.sh \
> --delete \
> --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka \
> --topic ruozedata
Topic ruozedata is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.

Result: the previously created directories ruozedata-0/ruozedata-1/ruozedata-2 are all gone.
[hadoop@ruozedata001 logs]$ cd ~/log/kafka-logs/
[hadoop@ruozedata001 kafka-logs]$ ll
total 20
-rw-rw-r-- 1 hadoop hadoop  4 Oct 26 16:30 cleaner-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop  4 Oct 26 16:30 log-start-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop 54 Oct 26 14:59 meta.properties
-rw-rw-r-- 1 hadoop hadoop 18 Oct 26 16:30 recovery-point-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop 46 Oct 26 16:30 replication-offset-checkpoint
[hadoop@ruozedata002 kafka-logs]$ ll
total 20
-rw-rw-r-- 1 hadoop hadoop  4 Oct 26 16:30 cleaner-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop  4 Oct 26 16:30 log-start-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop 54 Oct 26 14:59 meta.properties
-rw-rw-r-- 1 hadoop hadoop 18 Oct 26 16:30 recovery-point-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop 46 Oct 26 16:30 replication-offset-checkpoint
[hadoop@ruozedata003 kafka-logs]$ ll
total 20
-rw-rw-r-- 1 hadoop hadoop  4 Oct 26 16:30 cleaner-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop  4 Oct 26 16:30 log-start-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop 54 Oct 26 14:59 meta.properties
-rw-rw-r-- 1 hadoop hadoop 18 Oct 26 16:30 recovery-point-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop 46 Oct 26 16:30 replication-offset-checkpoint

Deletion checklist:
1. In production, topic names should contain no punctuation: English letters plus digits, lowercase by default.
2. If the running cluster has only one topic, you can delete it with zero risk.
3. If the running cluster has multiple topics, hold back; there may be risk.
4. Never delete a topic on impulse or because its name bothers you; delete with caution!
5. Deletion is irreversible, so run delete commands carefully!
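The naming rule (lowercase English letters plus digits only) can be enforced before anyone runs a create command; a tiny validator sketch (the character set is exactly this house rule, which is stricter than Kafka's own limit):

```python
import re

# House rule: lowercase English letters plus digits, no punctuation.
_SAFE = re.compile(r"^[a-z0-9]+$")

def is_safe_topic_name(name: str) -> bool:
    """Return True if the topic name follows the production naming rule."""
    return _SAFE.fullmatch(name) is not None
```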

Notable changes in 1.0.0:
Topic deletion is now enabled by default, since the functionality is now stable. Users who wish to retain the previous behavior should set the broker config delete.topic.enable to false. Keep in mind that topic deletion removes data and the operation is not reversible (i.e. there is no "undelete" operation).

6.5. Alter a topic

With --alter you can in principle modify any configuration; some common operations are listed below. First create a single-partition, single-replica topic:
bin/kafka-topics.sh --create --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka --replication-factor 1 --partitions 1 --topic test

Run:
[hadoop@ruozedata001 kafka]$ bin/kafka-topics.sh \
> --create \
> --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka \
> --replication-factor 1 \
> --partitions 1 \
> --topic test
Created topic test.
Result: only ruozedata003 has the test-0 directory:
[hadoop@ruozedata003 kafka-logs]$ ll
total 24
-rw-rw-r-- 1 hadoop hadoop    4 Oct 26 16:30 cleaner-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop    4 Oct 26 16:53 log-start-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop   54 Oct 26 14:59 meta.properties
-rw-rw-r-- 1 hadoop hadoop   13 Oct 26 16:53 recovery-point-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop   13 Oct 26 16:53 replication-offset-checkpoint
drwxrwxr-x 2 hadoop hadoop 4096 Oct 26 16:52 test-0

Change the partition count to 3:
bin/kafka-topics.sh --alter --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka --topic test --partitions 3

Run:
[hadoop@ruozedata001 kafka]$ bin/kafka-topics.sh \
> --alter \
> --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka \
> --topic test --partitions 3
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!
Result: a test-N directory now exists on each of the three machines:
[hadoop@ruozedata001 kafka-logs]$ ll
total 24
-rw-rw-r-- 1 hadoop hadoop    4 Oct 26 16:30 cleaner-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop    4 Oct 26 16:55 log-start-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop   54 Oct 26 14:59 meta.properties
-rw-rw-r-- 1 hadoop hadoop   13 Oct 26 16:55 recovery-point-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop   13 Oct 26 16:55 replication-offset-checkpoint
drwxrwxr-x 2 hadoop hadoop 4096 Oct 26 16:55 test-1
[hadoop@ruozedata002 kafka-logs]$ ll
total 24
-rw-rw-r-- 1 hadoop hadoop    4 Oct 26 16:30 cleaner-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop    4 Oct 26 16:55 log-start-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop   54 Oct 26 14:59 meta.properties
-rw-rw-r-- 1 hadoop hadoop   13 Oct 26 16:55 recovery-point-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop   13 Oct 26 16:55 replication-offset-checkpoint
drwxrwxr-x 2 hadoop hadoop 4096 Oct 26 16:55 test-2
[hadoop@ruozedata003 kafka-logs]$ ll
total 24
-rw-rw-r-- 1 hadoop hadoop    4 Oct 26 16:30 cleaner-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop    4 Oct 26 16:55 log-start-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop   54 Oct 26 14:59 meta.properties
-rw-rw-r-- 1 hadoop hadoop   13 Oct 26 16:55 recovery-point-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop   13 Oct 26 16:55 replication-offset-checkpoint
drwxrwxr-x 2 hadoop hadoop 4096 Oct 26 16:52 test-0

Describe it:
bin/kafka-topics.sh --describe --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka --topic test

Run:
[hadoop@ruozedata001 kafka]$ bin/kafka-topics.sh --describe \
> --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka \
> --topic test
Topic:test	PartitionCount:3	ReplicationFactor:1	Configs:
	Topic: test	Partition: 0	Leader: 2	Replicas: 2	Isr: 2
	Topic: test	Partition: 1	Leader: 0	Replicas: 0	Isr: 0
	Topic: test	Partition: 2	Leader: 1	Replicas: 1	Isr: 1

6.6. Automatically migrating data to new nodes

http://kafka.apache.org/22/documentation.html#basic_ops_automigrate

7. Console example

7.1. Create a topic

bin/kafka-topics.sh --create --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka --replication-factor 3 --partitions 3 --topic g7

Run:
[hadoop@ruozedata001 kafka]$ bin/kafka-topics.sh \
> --create \
> --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka \
> --replication-factor 3 \
> --partitions 3 \
> --topic g7
Created topic g7.

7.2. Producer

[hadoop@ruozedata001 kafka]$ bin/kafka-console-producer.sh \
> --broker-list ruozedata001:9092,ruozedata002:9092,ruozedata003:9092 \
> --topic g7
>www.ruozedata.com
>1
>2
>3
>4
>5
>6
>7
>8
>9

7.3. Consumer

[hadoop@ruozedata002 kafka]$ bin/kafka-console-consumer.sh \
> --bootstrap-server ruozedata001:9092,ruozedata002:9092,ruozedata003:9092 \
> --topic g7 \
> --from-beginning
www.ruozedata.com
1
2
3
4
5
6
7
8
9

7.4 If the consumer session is interrupted and reopened with --from-beginning, the data comes back out of order. How can global ordering be guaranteed?

The second run's output is globally out of order:
[hadoop@ruozedata002 kafka]$ bin/kafka-console-consumer.sh --bootstrap-server ruozedata001:9092,ruozedata002:9092,ruozedata003:9092 --topic g7 --from-beginning
1
4
7
www.ruozedata.com
3
6
9
2
5
8
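The interleaving above is exactly what keyless production into 3 partitions predicts: each partition preserves send order, but the consumer reads the partitions in arbitrary order. A simulation sketch (round-robin distribution assumed); the classic fix for global ordering is a single partition, or a message key for per-key ordering:

```python
def produce_round_robin(messages, num_partitions):
    """Distribute keyless messages over partitions round-robin;
    within each partition, send order is preserved."""
    parts = [[] for _ in range(num_partitions)]
    for i, msg in enumerate(messages):
        parts[i % num_partitions].append(msg)
    return parts

msgs = ["www.ruozedata.com"] + [str(n) for n in range(1, 10)]
parts = produce_round_robin(msgs, 3)
# One partition holds "1", "4", "7" in order (matching a run of the
# console consumer), but reading the partitions one after another
# interleaves the global sequence.
# With a single partition, consume order equals send order:
assert produce_round_robin(msgs, 1)[0] == msgs
```

A single partition restores global order at the cost of parallelism, which is why keyed per-entity ordering is usually preferred in production.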

7.5 Deleting the g7 topic

bin/kafka-topics.sh \
 --delete \
 --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka \
 --topic g7
On disk, the topic's partition directories are renamed with a delete marker:
[hadoop@ruozedata001 kafka]$ bin/kafka-topics.sh \
> --delete \
> --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka \
> --topic g7
Topic g7 is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
[hadoop@ruozedata002 kafka-logs]$ ll
total 100
drwxrwxr-x 2 hadoop hadoop 4096 Oct 26 18:32 g7-0.140575305f754d63b7f57f49659dcb78-delete
drwxrwxr-x 2 hadoop hadoop 4096 Oct 26 18:32 g7-1.dc1a125e82f3461488a9de7e86a83fdc-delete
drwxrwxr-x 2 hadoop hadoop 4096 Oct 26 18:30 g7-2.eaa1b3144cb24784b157f286b5fbe987-delete
-rw-rw-r-- 1 hadoop hadoop    4 Oct 26 18:42 log-start-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop   54 Oct 26 14:59 meta.properties
-rw-rw-r-- 1 hadoop hadoop  395 Oct 26 18:42 recovery-point-offset-checkpoint
-rw-rw-r-- 1 hadoop hadoop  395 Oct 26 18:42 replication-offset-checkpoint

A first attempt to consume it fails; the metadata cannot be fetched:
[hadoop@ruozedata002 kafka]$ bin/kafka-console-consumer.sh --bootstrap-server ruozedata001:9092,ruozedata002:9092,ruozedata003:9092 --topic g7 --from-beginning
[2019-10-26 18:45:51,559] WARN [Consumer clientId=consumer-1, groupId=console-consumer-48841] Error while fetching metadata with correlation id 4 : {g7=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

Sometimes re-creating the topic fails. Strange!
[hadoop@ruozedata001 kafka]$ bin/kafka-topics.sh \
> --create \
> --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka \
> --replication-factor 3 \
> --partitions 3 \
> --topic g7
Error while executing topic command : Topic 'g7' already exists.
[2019-10-26 18:47:37,588] ERROR org.apache.kafka.common.errors.TopicExistsException: Topic 'g7' already exists.
 (kafka.admin.TopicCommand$)

Sometimes it succeeds. Strange!
[hadoop@ruozedata001 kafka]$ bin/kafka-topics.sh --create --zookeeper ruozedata001:2181,ruozedata002:2181,ruozedata003:2181/kafka --replication-factor 3 --partitions 3 --topic g7
Created topic g7.
A topic whose creation does succeed necessarily contains none of the old data from the identically named topic: creation only goes through once the asynchronous deletion has fully completed.
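The intermittent TopicExistsException fits the asynchronous model: the broker only marks the topic for deletion, and re-creation fails until the background cleanup finishes. A hedged retry sketch (create_fn is a hypothetical callable wrapping the real create command, assumed to raise RuntimeError while the old topic lingers):

```python
import time

def create_when_deleted(create_fn, attempts=10, delay=2.0):
    """Retry topic creation while the previous delete is still in
    flight; create_fn raises RuntimeError until the old topic is
    fully removed, then returns normally."""
    for _ in range(attempts):
        try:
            return create_fn()
        except RuntimeError:
            time.sleep(delay)
    raise TimeoutError("topic still marked for deletion")
```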

8. Troubleshooting case 1

Using Kafka across heterogeneous platforms: http://blog.itpub.net/30089851/viewspace-2152671/