1. Writing data into Kafka

First, write data into Kafka using the approach described in the earlier article on integrating Flume with Kafka.

Create the topic:

kafka-topics.sh --create \
--topic f2k \
--zookeeper hadoop01:2181,hadoop02:2181,hadoop03:2181/kafka \
--partitions 3 \
--replication-factor 3
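
To confirm the topic was created with the expected partitions and replicas, kafka-topics.sh also has a --describe option (a quick sanity check using the same ZooKeeper connection string as above):

kafka-topics.sh --describe \
--topic f2k \
--zookeeper hadoop01:2181,hadoop02:2181,hadoop03:2181/kafka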

Create the input file on hadoop01:

touch ~/access.log

Then create the configuration file:

vi /home/hadoop/apps/apache-flume-1.8.0-bin/conf/flume2kafka.conf

One small change from that article: the source now reads data from a file (via tail -F) instead of the source used there.

flume2kafka.conf

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
# Command the exec source will run
a1.sources.r1.command = tail -F /home/hadoop/access.log

# kafka's sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = hadoop01:9092,hadoop02:9092,hadoop03:9092
a1.sinks.k1.kafka.topic = f2k
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.kafka.producer.linger.ms = 1

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
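
Note that the exec source with tail -F does not track file offsets, so events can be lost or replayed if the agent restarts. As an alternative sketch (not part of the original article), Flume 1.8 also ships a TAILDIR source that persists its read position; the positionFile path below is just an example:

# Sketch: TAILDIR source as a more robust replacement for exec + tail -F
a1.sources.r1.type = TAILDIR
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /home/hadoop/access.log
# File where Flume records the last read offset (example path)
a1.sources.r1.positionFile = /home/hadoop/apps/apache-flume-1.8.0-bin/taildir_position.json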

2. Extracting the data from Kafka to HDFS

Once data is successfully flowing into Kafka, use a second Flume agent to extract the Kafka data into HDFS.

Create the target HDFS path:

hdfs dfs -mkdir -p /data/kfk

Create the configuration file on hadoop03:

kafka2flume.conf

vi /home/hadoop/apps/apache-flume-1.8.0-bin/conf/kafka2flume.conf

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
 
a1.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
a1.sources.r1.kafka.bootstrap.servers = hadoop01:9092,hadoop02:9092,hadoop03:9092
a1.sources.r1.kafka.topics = f2k
a1.sources.r1.kafka.consumer.timeout.ms = 100
a1.sources.r1.batchSize = 20

 
# Describe the sink
a1.sinks.k1.type = hdfs
# Target path in HDFS
a1.sinks.k1.hdfs.path = /data/kfk
a1.sinks.k1.hdfs.fileType = DataStream 
# Use a channel which buffers events in memory
a1.channels.c1.type = memory

 
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
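
With only hdfs.path and hdfs.fileType set, the HDFS sink uses its default roll settings (roughly every 30 seconds, 1024 bytes, or 10 events), which tends to produce many small files. If that is a problem, the roll behavior can be tuned with the standard HDFS sink properties; the values below are illustrative examples, not part of the original configuration:

# Example roll settings for the HDFS sink (values are illustrative)
a1.sinks.k1.hdfs.filePrefix = access
a1.sinks.k1.hdfs.rollInterval = 60
a1.sinks.k1.hdfs.rollSize = 134217728
a1.sinks.k1.hdfs.rollCount = 0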

Start the agent on hadoop01 (run from the Flume installation directory):

nohup bin/flume-ng agent -n a1 -c conf -f conf/flume2kafka.conf  >/dev/null 2>&1 &

Start the agent on hadoop03 (run from the Flume installation directory):

nohup bin/flume-ng agent -n a1 -c conf -f conf/kafka2flume.conf  >/dev/null 2>&1 &
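
Because both commands send all output to /dev/null, startup errors are easy to miss. For debugging, either agent can be run in the foreground with console logging instead (a standard flume-ng option, shown here for kafka2flume.conf):

bin/flume-ng agent -n a1 -c conf -f conf/kafka2flume.conf -Dflume.root.logger=INFO,console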

Then append some data to access.log on hadoop01:

echo hello >> ~/access.log
echo you >> ~/access.log

A file with a .tmp suffix should appear in the HDFS directory, which means Flume is still writing to it:

[Screenshot: HDFS directory listing showing the in-progress .tmp file]

Once the .tmp suffix disappears, the file has been rolled and the write is complete:

[Screenshot: HDFS directory listing after the file has been rolled]
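
You can also verify the result from the command line by listing and reading the output directory with the standard HDFS shell:

hdfs dfs -ls /data/kfk
hdfs dfs -cat /data/kfk/*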

Possible issues

  • No files appear under the HDFS target directory

    Check whether the data from step 1 was actually written to Kafka.

    Start a console consumer on hadoop01:

    kafka-console-consumer.sh --topic f2k --bootstrap-server hadoop01:9092,hadoop02:9092,hadoop03:9092
    

    Append data again; if the consumer shows nothing, troubleshoot Kafka first (see the producer test after this list):

    echo hello >> ~/access.log
    echo you >> ~/access.log
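
If the consumer also stays silent when you write to Kafka directly, bypassing Flume, the problem is on the Kafka side rather than in the Flume agents. A quick direct test with the console producer (the --broker-list form matches the older Kafka release implied by the --zookeeper topic creation above):

kafka-console-producer.sh --topic f2k --broker-list hadoop01:9092,hadoop02:9092,hadoop03:9092

Type a few lines into the producer; if they appear in the consumer, Kafka itself is fine and the issue lies with the flume2kafka agent or the path it is tailing.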