Table of Contents

  • Preface
  • 1. Definition
  • 2. Basic architecture (commonly used component types)
  • 3. Official example (monitor a port and send data to logger)
  • 4. Monitor a single local file to logger (exec source)
  • 5. Monitor a single local file to HDFS (exec source; Flume needs the relevant Hadoop jars)
  • 6. Monitor new files in a local directory to HDFS (spooling directory source)
  • 7. Monitor multiple appending files in a directory in real time (taildir source)
  • 8. Flume transactions
  • 9. Flume Agent internals
  • 10. Flume topologies
  • 11. Case study: connecting Flume agents
  • 12. Failover
  • 13. Load balancing
  • 14. Aggregation
  • 15. Custom interceptor (Interceptor) implementing multiplexing
  • 16. Custom Source
  • 17. Custom Sink
  • 18. Flume monitoring with Ganglia
  • Official link
  • Summary



Preface

Today I'd like to share my Flume study notes. Flume is very widely used as a big-data ingestion tool, and its configuration is quite simple (all the configuration parameters are documented on the official site).
Any feedback or suggestions are welcome.
Let's learn and improve together!


Note: the main body of the article follows; the examples below are for reference.

1. Definition

Flume is a highly available, highly reliable, distributed system for collecting, aggregating, and moving massive amounts of log data.
It is built on a streaming architecture.
Its most common job: read data from a server's local disk in real time and write it to HDFS.

(Figure: Flume data flow from local disk to HDFS)


2. Basic architecture (commonly used component types)

Agent    // a JVM process that moves data from a source to its destination; composed of Source, Channel, and Sink
Event    // the basic unit of transfer, made up of a Header and a Body
Source   //  avro, exec, spooling directory, netcat, taildir
Channel  //  Memory Channel, File Channel, Kafka Channel
Sink     //  HDFS, logger, avro, file, HBase

(Figure: Flume agent architecture — Source → Channel → Sink)


3. Official example (monitor a port and send data to logger)

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start: $ bin/flume-ng agent --conf conf --conf-file example.conf --name a1 -Dflume.root.logger=INFO,console
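To test the agent, open another terminal and send some lines to the port (assuming netcat is installed):

$ nc localhost 44444
hello flume

Each line typed should appear as an event in the agent's console output.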

4. Monitor a single local file to logger (exec source)

a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /opt/module/hive-1.2.1/logs/hive.log

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sinks.k1.type = logger

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

$ bin/flume-ng agent --conf conf --conf-file file_flume_logger.conf --name a1 -Dflume.root.logger=INFO,console
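Note: the exec source keeps no record of its read position, so if the agent restarts, anything written while it was down is lost (tail -F only re-emits the file's last few lines). The taildir source in section 7 addresses this with a position file.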

5. Monitor a single local file to HDFS (exec source; Flume needs the relevant Hadoop jars)

commons-configuration-1.6.jar、
hadoop-auth-2.7.2.jar、
hadoop-common-2.7.2.jar、
hadoop-hdfs-2.7.2.jar、
commons-io-2.4.jar、
htrace-core-3.1.0-incubating.jar
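A minimal sketch of installing the jars, assuming Flume lives at /opt/module/flume-1.9.0 and the jars have been collected into the current directory (paths will vary with your installation):

$ cp commons-configuration-1.6.jar hadoop-auth-2.7.2.jar hadoop-common-2.7.2.jar \
     hadoop-hdfs-2.7.2.jar commons-io-2.4.jar htrace-core-3.1.0-incubating.jar \
     /opt/module/flume-1.9.0/lib/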

# Name the components on this agent
a2.sources = r2
a2.sinks = k2
a2.channels = c2

# Describe/configure the source
a2.sources.r2.type = exec
a2.sources.r2.command = tail -F /opt/module/hive-1.2.1/logs/hive.log
a2.sources.r2.shell = /bin/bash -c

# Describe the sink
a2.sinks.k2.type = hdfs
a2.sinks.k2.hdfs.path = hdfs://node1:9000/flume/%Y%m%d/%H
#prefix for uploaded files
a2.sinks.k2.hdfs.filePrefix = logs-hive
#whether to roll folders by time
a2.sinks.k2.hdfs.round = true
#number of time units per new folder
a2.sinks.k2.hdfs.roundValue = 1
#the time unit itself
a2.sinks.k2.hdfs.roundUnit = hour
#whether to use the local timestamp
a2.sinks.k2.hdfs.useLocalTimeStamp = true
#number of events to accumulate before flushing to HDFS
a2.sinks.k2.hdfs.batchSize = 1000
#file type; compression is supported
a2.sinks.k2.hdfs.fileType = DataStream
#seconds before rolling a new file
a2.sinks.k2.hdfs.rollInterval = 30
#roll size per file (134217700 bytes, just under the 128 MB HDFS block size)
a2.sinks.k2.hdfs.rollSize = 134217700
#0 = rolling is independent of the number of events
a2.sinks.k2.hdfs.rollCount = 0

# Use a channel which buffers events in memory
a2.channels.c2.type = memory
a2.channels.c2.capacity = 1000
a2.channels.c2.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r2.channels = c2
a2.sinks.k2.channel = c2

$ bin/flume-ng agent --conf conf --conf-file file_flume_hdfs.conf --name a2

6. Monitor new files in a local directory to HDFS (spooling directory source)

# Name the components on this agent
a2.sources = r2
a2.sinks = k2
a2.channels = c2

# Describe/configure the source
a2.sources.r2.type = spooldir
a2.sources.r2.spoolDir = /opt/module/flume-1.9.0/upload
a2.sources.r2.fileSuffix = .COMPLETED
a2.sources.r2.fileHeader = true
#ignore all files ending in .tmp (do not upload them)
a2.sources.r2.ignorePattern = ([^ ]*\.tmp)

# Describe the sink
a2.sinks.k2.type = hdfs
a2.sinks.k2.hdfs.path = hdfs://node1:9000/flume/%Y%m%d/%H
#prefix for uploaded files
a2.sinks.k2.hdfs.filePrefix = logs-hive
#whether to roll folders by time
a2.sinks.k2.hdfs.round = true
#number of time units per new folder
a2.sinks.k2.hdfs.roundValue = 1
#the time unit itself
a2.sinks.k2.hdfs.roundUnit = hour
#whether to use the local timestamp
a2.sinks.k2.hdfs.useLocalTimeStamp = true
#number of events to accumulate before flushing to HDFS
a2.sinks.k2.hdfs.batchSize = 1000
#file type; compression is supported
a2.sinks.k2.hdfs.fileType = DataStream
#seconds before rolling a new file
a2.sinks.k2.hdfs.rollInterval = 30
#roll size per file (just under the 128 MB HDFS block size)
a2.sinks.k2.hdfs.rollSize = 134217700
#0 = rolling is independent of the number of events
a2.sinks.k2.hdfs.rollCount = 0

# Use a channel which buffers events in memory
a2.channels.c2.type = memory
a2.channels.c2.capacity = 1000
a2.channels.c2.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r2.channels = c2
a2.sinks.k2.channel = c2

bin/flume-ng agent --conf conf --conf-file jobConf/dir-flume-hdfs.conf --name a2

Notes:
    Uploaded files are renamed with a .COMPLETED suffix (by default).
    Do not keep modifying files in the monitored directory: once a file is marked .COMPLETED it is considered uploaded and is no longer watched.
    The monitored directory is scanned for changes every 500 ms.
    Subdirectories are not monitored recursively.
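To test, drop a finished file into the monitored directory (app.log here is a hypothetical file):

$ cp /tmp/app.log /opt/module/flume-1.9.0/upload/
$ ls /opt/module/flume-1.9.0/upload/
app.log.COMPLETED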

7. Monitor multiple appending files in a directory in real time (taildir source)

# Name the components on this agent
a2.sources = r2
a2.sinks = k2
a2.channels = c2

# Describe/configure the source
a2.sources.r2.type = TAILDIR
a2.sources.r2.positionFile = /opt/module/flume-1.9.0/position/taildir_position.json
a2.sources.r2.filegroups = f1 f2
a2.sources.r2.filegroups.f1 = /opt/module/flume-1.9.0/upload/log1/.*.txt
a2.sources.r2.filegroups.f2 = /opt/module/flume-1.9.0/upload/log2/.*.txt

# Describe the sink
a2.sinks.k2.type = hdfs
a2.sinks.k2.hdfs.path = hdfs://node1:9000/flume/%Y%m%d/%H
#prefix for uploaded files
a2.sinks.k2.hdfs.filePrefix = logs-hive
#whether to roll folders by time
a2.sinks.k2.hdfs.round = true
#number of time units per new folder
a2.sinks.k2.hdfs.roundValue = 1
#the time unit itself
a2.sinks.k2.hdfs.roundUnit = hour
#whether to use the local timestamp
a2.sinks.k2.hdfs.useLocalTimeStamp = true
#number of events to accumulate before flushing to HDFS
a2.sinks.k2.hdfs.batchSize = 1000
#file type; compression is supported
a2.sinks.k2.hdfs.fileType = DataStream
#seconds before rolling a new file
a2.sinks.k2.hdfs.rollInterval = 30
#roll size per file (just under the 128 MB HDFS block size)
a2.sinks.k2.hdfs.rollSize = 134217700
#0 = rolling is independent of the number of events
a2.sinks.k2.hdfs.rollCount = 0

# Use a channel which buffers events in memory
a2.channels.c2.type = memory
a2.channels.c2.capacity = 1000
a2.channels.c2.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r2.channels = c2
a2.sinks.k2.channel = c2

bin/flume-ng agent --conf conf --conf-file jobConf/dir-flume-hdfs.conf --name a2

Advantages:
    multiple directories
    resumable transfers (position tracking survives restarts)
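The resumability comes from the position file configured above: taildir records the inode, offset, and path of every tracked file as JSON. Its contents look roughly like this (inode and pos values are illustrative):

[{"inode":1705580,"pos":1234,"file":"/opt/module/flume-1.9.0/upload/log1/a.txt"},{"inode":1705581,"pos":0,"file":"/opt/module/flume-1.9.0/upload/log2/b.txt"}]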

8. Flume transactions

(Figure: Flume transaction model)

A Source writes events into a Channel inside a put transaction (doPut, then doCommit, or doRollback on failure), and a Sink reads inside a take transaction (doTake, then doCommit, or doRollback). These transactions are what give Flume its at-least-once delivery guarantee between hops; the custom Sink in section 17 shows the take transaction in code.


9. Flume Agent internals

(Figure: Agent internals — events flow from the Source through the ChannelProcessor and interceptor chain, are routed by the ChannelSelector into Channels, and are drained by Sinks under a SinkProcessor)

Key components:
ChannelSelector:
Decides which Channel an Event will be sent to. Two types: Replicating and Multiplexing.
	Replicating: sends the same Event to every Channel.
	Multiplexing: routes different Events to different Channels according to a configured rule (typically a header value).

SinkProcessor:
Three types: DefaultSinkProcessor, LoadBalancingSinkProcessor, and FailoverSinkProcessor.
	DefaultSinkProcessor: serves a single Sink.
	LoadBalancingSinkProcessor and FailoverSinkProcessor serve a Sink Group.
	LoadBalancingSinkProcessor provides load balancing;
	FailoverSinkProcessor provides failover.
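As a minimal sketch, the replicating selector (the default) only needs the source bound to several channels; multiplexing requires a header mapping, as shown in section 15:

a1.sources.r1.selector.type = replicating
a1.sources.r1.channels = c1 c2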

10. Flume topologies

In an avro hop the Source side is the server and the Sink side is the client, so the downstream Agent is generally started first.

Simple chaining:

(Figure: agents connected in series, avro sink → avro source)


Replication and multiplexing:

(Figure: one source replicated or multiplexed across multiple channels and sinks)


Load balancing and failover:

(Figure: a sink group balancing or failing over across downstream agents)


Aggregation:

(Figure: many agents converging into one aggregating agent)


11. Case study: connecting Flume agents

(Figure: one taildir agent fanning out over avro to an HDFS agent and a file-roll agent)

file-flume-flume.conf:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2

# Describe/configure the source
a1.sources.r1.type = TAILDIR
a1.sources.r1.positionFile = /opt/module/flume-1.9.0/position/taildir_position2.json
a1.sources.r1.filegroups = f1 f2
a1.sources.r1.filegroups.f1 = /opt/module/flume-1.9.0/upload/log1/.*.txt
a1.sources.r1.filegroups.f2 = /opt/module/flume-1.9.0/upload/log2/.*.txt

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = 192.168.2.201
a1.sinks.k1.port = 8888

a1.sinks.k2.type = avro
a1.sinks.k2.hostname = 192.168.2.201
a1.sinks.k2.port = 8889

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1 c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2

bin/flume-ng agent --conf conf --conf-file jobConf/file-flume-flume.conf --name a1
flume-flume-hdfs.conf:

# Name the components on this agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1

# Describe/configure the source
a2.sources.r1.type = avro
a2.sources.r1.bind = 192.168.2.201
a2.sources.r1.port = 8888

# Describe the sink
a2.sinks.k1.type = hdfs
a2.sinks.k1.hdfs.path = hdfs://node1:9000/flume-new/%Y%m%d/%H
#prefix for uploaded files
a2.sinks.k1.hdfs.filePrefix = logs-hive
#whether to roll folders by time
a2.sinks.k1.hdfs.round = true
#number of time units per new folder
a2.sinks.k1.hdfs.roundValue = 1
#the time unit itself
a2.sinks.k1.hdfs.roundUnit = hour
#whether to use the local timestamp
a2.sinks.k1.hdfs.useLocalTimeStamp = true
#number of events to accumulate before flushing to HDFS
a2.sinks.k1.hdfs.batchSize = 1000
#file type; compression is supported
a2.sinks.k1.hdfs.fileType = DataStream
#seconds before rolling a new file
a2.sinks.k1.hdfs.rollInterval = 30
#roll size per file (just under the 128 MB HDFS block size)
a2.sinks.k1.hdfs.rollSize = 134217700
#0 = rolling is independent of the number of events
a2.sinks.k1.hdfs.rollCount = 0

# Use a channel which buffers events in memory
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1

 bin/flume-ng agent --conf conf --conf-file jobConf/flume-flume-hdfs.conf --name a2
flume-flume-file.conf:

# Name the components on this agent
a3.sources = r1
a3.sinks = k1
a3.channels = c1

# Describe/configure the source
a3.sources.r1.type = avro
a3.sources.r1.bind = 192.168.2.201
a3.sources.r1.port = 8889

# Describe the sink
a3.sinks.k1.type = file_roll
a3.sinks.k1.sink.directory = /opt/data/flumeData

# Use a channel which buffers events in memory
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1

bin/flume-ng agent --conf conf --conf-file jobConf/flume-flume-file.conf --name a3
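Per the note in section 10, start the downstream agents a2 and a3 (their avro sources are the servers) before starting a1.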

12. Failover

(Figure: a netcat agent with a failover sink group feeding two logger agents)

nc-flume-flume.conf:

# Name the components on this agent
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1
a1.sinkgroups = g1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = 192.168.2.201
a1.sources.r1.port = 44444

# Channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = 192.168.2.201
a1.sinks.k1.port = 4141

a1.sinks.k2.type = avro
a1.sinks.k2.hostname = 192.168.2.201
a1.sinks.k2.port = 4142

#Sink Group
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = failover
# Routing is by priority: the higher number wins, so k2 (priority 10) is active and k1 (priority 5) is the standby.
a1.sinkgroups.g1.processor.priority.k1 = 5
a1.sinkgroups.g1.processor.priority.k2 = 10
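# maxpenalty: upper bound (in ms) on the backoff applied to a failed sink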
a1.sinkgroups.g1.processor.maxpenalty = 10000

#Bind
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c1

bin/flume-ng agent --conf conf --conf-file jobConf/nc-flume-flume.conf --name a1 -Dflume.root.logger=INFO,console
flume-flume-logger1.conf:

#Name
a2.sources = r1
a2.channels = c1
a2.sinks = k1

#Source
a2.sources.r1.type = avro
a2.sources.r1.bind = 192.168.2.201
a2.sources.r1.port = 4141

#Channel
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100

#Sink
a2.sinks.k1.type = logger

#Bind
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1

bin/flume-ng agent --conf conf --conf-file jobConf/flume-flume-logger1.conf --name a2 -Dflume.root.logger=INFO,console
flume-flume-logger2.conf:

#Name
a3.sources = r1
a3.channels = c1
a3.sinks = k1

#Source
a3.sources.r1.type = avro
a3.sources.r1.bind = 192.168.2.201
a3.sources.r1.port = 4142

#Channel
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100

#Sink
a3.sinks.k1.type = logger

#Bind
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1

bin/flume-ng agent --conf conf --conf-file jobConf/flume-flume-logger2.conf --name a3 -Dflume.root.logger=INFO,console

13. Load balancing

Almost identical to the failover setup above; only the Sink Group parameters in nc-flume-flume.conf need to change.

# Name the components on this agent
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1
a1.sinkgroups = g1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = 192.168.2.201
a1.sources.r1.port = 44444

# Channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = 192.168.2.201
a1.sinks.k1.port = 4141

a1.sinks.k2.type = avro
a1.sinks.k2.hostname = 192.168.2.201
a1.sinks.k2.port = 4142

#Sink Group
a1.sinkgroups.g1.sinks = k1 k2
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.backoff = true
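# the selector may be round_robin (the default) or random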
a1.sinkgroups.g1.processor.selector = random

#Bind
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c1

bin/flume-ng agent --conf conf --conf-file jobConf/nc-flume-flume.conf --name a1 -Dflume.root.logger=INFO,console

14. Aggregation

(Figure: multiple agents aggregated into a single downstream agent)


Case 1: both upstream agents send to the same downstream avro source (same port):

flume1.conf

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = 192.168.2.201
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = 192.168.2.201
a1.sinks.k1.port = 4141

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

bin/flume-ng agent --conf conf --conf-file jobConf/group3/flume1.conf  --name a1
flume2.conf

# Name the components on this agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1

# Describe/configure the source
a2.sources.r1.type = TAILDIR
a2.sources.r1.positionFile = /opt/module/flume-1.9.0/position/taildir_position.json
a2.sources.r1.filegroups = f1 f2
a2.sources.r1.filegroups.f1 = /opt/module/flume-1.9.0/upload/log1/.*.txt
a2.sources.r1.filegroups.f2 = /opt/module/flume-1.9.0/upload/log2/.*.txt

# Describe the sink
a2.sinks.k1.type = avro
a2.sinks.k1.hostname = 192.168.2.201
a2.sinks.k1.port = 4141

# Use a channel which buffers events in memory
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1

bin/flume-ng agent --conf conf --conf-file jobConf/group3/flume2.conf  --name a2
flume3.conf

# Name the components on this agent
a3.sources = r1
a3.sinks = k1
a3.channels = c1

a3.sources.r1.type = avro
a3.sources.r1.bind = 192.168.2.201
a3.sources.r1.port = 4141

# Describe the sink
a3.sinks.k1.type = logger

# Use a channel which buffers events in memory
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1

 bin/flume-ng agent --conf conf --conf-file jobConf/group3/flume3.conf --name a3 -Dflume.root.logger=INFO,console

Case 2: using separate sources (change the sink ports of flume1 and flume2, give flume3 two avro sources listening on those two ports, then bind both sources to the same channel):

flume1.conf

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = 192.168.2.201
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = 192.168.2.201
a1.sinks.k1.port = 4141

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

bin/flume-ng agent --conf conf --conf-file jobConf/group3/flume1.conf  --name a1
flume2.conf

# Name the components on this agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1

# Describe/configure the source
a2.sources.r1.type = TAILDIR
a2.sources.r1.positionFile = /opt/module/flume-1.9.0/position/taildir_position.json
a2.sources.r1.filegroups = f1 f2
a2.sources.r1.filegroups.f1 = /opt/module/flume-1.9.0/upload/log1/.*.txt
a2.sources.r1.filegroups.f2 = /opt/module/flume-1.9.0/upload/log2/.*.txt

# Describe the sink
a2.sinks.k1.type = avro
a2.sinks.k1.hostname = 192.168.2.201
a2.sinks.k1.port = 4142

# Use a channel which buffers events in memory
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1

bin/flume-ng agent --conf conf --conf-file jobConf/group3/flume2.conf  --name a2
flume3.conf

# Name the components on this agent
a3.sources = r1 r2
a3.sinks = k1
a3.channels = c1

a3.sources.r1.type = avro
a3.sources.r1.bind = 192.168.2.201
a3.sources.r1.port = 4141

a3.sources.r2.type = avro
a3.sources.r2.bind = 192.168.2.201
a3.sources.r2.port = 4142

# Describe the sink
a3.sinks.k1.type = logger

# Use a channel which buffers events in memory
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a3.sources.r1.channels = c1
a3.sources.r2.channels = c1
a3.sinks.k1.channel = c1

 bin/flume-ng agent --conf conf --conf-file jobConf/group3/flume3.conf --name a3 -Dflume.root.logger=INFO,console

15. Custom interceptor (Interceptor) implementing multiplexing

(Figure: events routed to different channels based on a header set by the interceptor)

pom.xml:

<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-core</artifactId>
    <version>1.9.0</version>
</dependency>

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;

/**
 * @author Mr.Xu
 * @create 2020-11-18 10:12
 * Tags each event (one event per line) with a header indicating whether its body
 * contains "hello", so the channel selector can route it to a different channel.
 * Package the class into a jar and place it under flume/lib.
 */
public class TypeInterceptor implements Interceptor {

    List<Event> eventList;

    public void initialize() {
        eventList = new ArrayList<Event>();
    }

    public Event intercept(Event event) {
        // 1. get the event's headers
        Map<String, String> headers = event.getHeaders();

        // 2. get the event body
        String body = new String(event.getBody());

        // 3. check whether the body contains "hello"
        if (body.contains("hello")) {
            // 4. tag the header accordingly
            headers.put("type", "hasHello");
        } else {
            headers.put("type", "noHello");
        }
        return event;
    }

    public List<Event> intercept(List<Event> list) {

        eventList.clear();

        for (Event event : list) {
            eventList.add(intercept(event));
        }
        return eventList;
    }

    public void close() {

    }

    public static class Builder implements Interceptor.Builder{

        public Interceptor build() {
            return new TypeInterceptor();
        }

        public void configure(Context context) {

        }
    }
}
flume1.conf

# Name the components on this agent
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = 192.168.2.201
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = 192.168.2.201
a1.sinks.k1.port = 4141

a1.sinks.k2.type = avro
a1.sinks.k2.hostname = 192.168.2.201
a1.sinks.k2.port = 4142

# Interceptor (the type must point at the custom interceptor's Builder class)
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = com.haiyi.app.TypeInterceptor$Builder

# Channel Selector (the header name and mapping values must match what the custom interceptor sets)
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = type
a1.sources.r1.selector.mapping.hasHello = c1
a1.sources.r1.selector.mapping.noHello = c2

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1 c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2

bin/flume-ng agent --conf conf --conf-file jobConf/group3/flume1.conf  --name a1
flume2.conf

# Name the components on this agent
a2.sources = r1
a2.sinks = k1
a2.channels = c1

a2.sources.r1.type = avro
a2.sources.r1.bind = 192.168.2.201
a2.sources.r1.port = 4141

# Describe the sink
a2.sinks.k1.type = logger

# Use a channel which buffers events in memory
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1

 bin/flume-ng agent --conf conf --conf-file jobConf/group3/flume2.conf --name a2 -Dflume.root.logger=INFO,console
flume3.conf

# Name the components on this agent
a3.sources = r1
a3.sinks = k1
a3.channels = c1

a3.sources.r1.type = avro
a3.sources.r1.bind = 192.168.2.201
a3.sources.r1.port = 4142

# Describe the sink
a3.sinks.k1.type = logger

# Use a channel which buffers events in memory
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a3.sources.r1.channels = c1
a3.sinks.k1.channel = c1

bin/flume-ng agent --conf conf --conf-file jobConf/group3/flume3.conf --name a3 -Dflume.root.logger=INFO,console
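To verify the routing, start a2 and a3 first, then a1, and send a few lines to the netcat source:

$ nc 192.168.2.201 44444
hello world
no match here

Lines containing "hello" should be printed by a2 (port 4141); everything else by a3 (port 4142).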

16. Custom Source

pom.xml

<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-core</artifactId>
    <version>1.9.0</version>
</dependency>

import org.apache.flume.Context;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.PollableSource;
import org.apache.flume.conf.Configurable;
import org.apache.flume.event.SimpleEvent;
import org.apache.flume.source.AbstractSource;

import java.util.HashMap;

public class MySource extends AbstractSource implements Configurable, PollableSource {
    // fields to be read from the agent configuration file
    private Long delay;
    private String field;

    // read the configuration
    public void configure(Context context) {
        delay = context.getLong("delay");
        field = context.getString("field", "Hello");  // with a default value
    }

    // called repeatedly by the framework
    public Status process() throws EventDeliveryException {
        try {
            // create the header map
            HashMap<String, String> headerMap = new HashMap<String, String>();
            // create the event
            SimpleEvent event = new SimpleEvent();
            for (int i = 0; i < 5; i++) {
                // attach the headers
                event.setHeaders(headerMap);
                // set the body
                event.setBody((field + i).getBytes());
                // hand the event to the channel processor
                getChannelProcessor().processEvent(event);
                Thread.sleep(delay);
            }
        } catch (InterruptedException e) {
            e.printStackTrace();
            return Status.BACKOFF;
        }
        return Status.READY;
    }

    public long getBackOffSleepIncrement() {
        return 0;
    }

    public long getMaxBackOffSleepInterval() {
        return 0;
    }
}
flume1.conf

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Source
a1.sources.r1.type = com.haiyi.app.MySource
a1.sources.r1.delay = 1000
a1.sources.r1.field = qaqaqaqaq

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

bin/flume-ng agent --conf conf --conf-file jobConf/group6/flume1.conf --name a1 -Dflume.root.logger=INFO,console
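With delay = 1000 and field = qaqaqaqaq, the logger sink should print events qaqaqaqaq0 through qaqaqaqaq4, one per second, over and over.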

17. Custom Sink

pom.xml

<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-core</artifactId>
    <version>1.9.0</version>
</dependency>

import org.apache.flume.Channel;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.Transaction;
import org.apache.flume.conf.Configurable;
import org.apache.flume.sink.AbstractSink;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// modeled on LoggerSink
public class MySink extends AbstractSink implements Configurable {
    // logger instance
    private static final Logger LOG = LoggerFactory.getLogger(MySink.class);

    // prefix and suffix supplied by the configuration file
    private String prefix;
    private String suffix;

    public void configure(Context context) {
        prefix = context.getString("prefix", "hello");
        suffix = context.getString("suffix", "hi");
    }

    public Status process() throws EventDeliveryException {
        // return status
        Status status;

        // the Channel this Sink is bound to
        Channel ch = getChannel();

        // obtain a transaction
        Transaction txn = ch.getTransaction();

        // the event to process
        Event event;

        // begin the transaction
        txn.begin();

        // poll the Channel until an event is available
        while (true) {
            event = ch.take();
            if (event != null) {
                break;
            }
        }
        try {
            // process the event (log it)
            LOG.info(prefix + new String(event.getBody()) + suffix);

            // commit the transaction
            txn.commit();
            status = Status.READY;
        } catch (Exception e) {

            // on error, roll the transaction back
            txn.rollback();
            status = Status.BACKOFF;
        } finally {

            // close the transaction
            txn.close();
        }
        return status;
    }
}
flume1.conf

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = 192.168.2.201
a1.sources.r1.port = 44444

# Describe the sink
a1.sinks.k1.type = com.haiyi.app.MySink
a1.sinks.k1.prefix = <<<<<
a1.sinks.k1.suffix = >>>>>

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

bin/flume-ng agent --conf conf --conf-file jobConf/group7/flume1.conf --name a1 -Dflume.root.logger=INFO,console
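Sending a line to the netcat source should produce log output wrapped in the configured prefix and suffix; typing hello, for example, yields <<<<<hello>>>>>.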

18. Flume monitoring with Ganglia

https://cloud.tencent.com/developer/article/1586042

Official link

http://flume.apache.org/

Summary

So much for my Flume study notes; at heart they are just a record of looking up configuration options on the Flume site and writing Flume configuration files. There is a lot of material here, so feel free to bookmark this page and work through it at your own pace.

If you have any questions, I'm always happy to discuss them.

Let's learn and improve together.

The slow bird flies first; practice makes perfect.