What is Spark Streaming?
Spark Streaming is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant processing of live data streams. Data can be ingested from many sources such as Kafka, Flume, Kinesis, or TCP sockets, and can be processed with complex algorithms expressed through high-level functions like map, reduce, join, and window. Finally, processed data can be pushed out to file systems, databases, and live dashboards. In fact, you can also apply Spark's MLlib machine learning and GraphX graph processing algorithms to data streams. Internally, it works as follows: Spark Streaming receives live input data streams and divides the data into batches, which the Spark engine then processes to generate the final stream of results, also in batches.
Spark Streaming provides a high-level abstraction called a discretized stream, or DStream, which represents a continuous stream of data. DStreams can be created either from input data streams from sources such as Kafka, Flume, and Kinesis, or by applying high-level operations to other DStreams. Internally, a DStream is represented as a sequence of RDDs.
Stream processing vs. batch processing
Batch processing targets bounded, high-volume, persisted static data, while stream processing targets unbounded data that is produced continuously, rapidly, and in real time, and is processed a small amount at a time. Broadly speaking, batch systems emphasize raw compute capacity, whereas stream systems put more weight on throughput (requests handled per unit of time) and latency (at least second-level).
DStream (Discretized Stream)
A discretized stream (Discretized Stream) is the high-level abstraction provided by Spark Streaming.
- A DStream represents a series of continuous RDDs
- Each RDD contains the data for one batch interval
- A DStream can be either the raw input data stream or a stream produced by transformations
- A transformation on a DStream is, in effect, a transformation on each underlying RDD (see the sketch after this list)
- Input DStream
An Input DStream is a DStream that receives stream data from a streaming source (Streaming Source).
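To make the DStream-to-RDD correspondence concrete, here is a minimal sketch (the hostname, port, and object name are placeholders, not from the examples below). The transform operation hands over the RDD behind each batch, so ordinary RDD methods can be mixed into a streaming pipeline:
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object TransformSketch extends App {
  val conf = new SparkConf().setMaster("local[*]").setAppName("TransformSketch")
  val ssc = new StreamingContext(conf, Seconds(5))
  val lines = ssc.socketTextStream("localhost", 9999)
  // transform exposes the RDD of each 5-second batch, so a plain
  // RDD operation such as sortBy can be used alongside DStream ones
  val counts = lines
    .flatMap(_.split(" "))
    .map((_, 1))
    .transform(rdd => rdd.reduceByKey(_ + _).sortBy(_._2, ascending = false))
  counts.print()
  ssc.start()
  ssc.awaitTermination()
}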
Receiving text data over a TCP socket
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object Demo1 extends App {
  val conf: SparkConf = new SparkConf().setMaster("local[*]").setAppName("Demo1")
  val ssc = new StreamingContext(conf, Seconds(5))
  // Word count with Spark Streaming; "zcy01" is the host running the text server
  val inputDstream: ReceiverInputDStream[String] = ssc.socketTextStream("zcy01", 9999)
  // Transform the input stream; each line looks like "hadoop spark kafka"
  val wordDstream: DStream[String] = inputDstream.flatMap(_.split(" "))
  val wordAndOneDstream: DStream[(String, Int)] = wordDstream.map((_, 1))
  val wordcounts: DStream[(String, Int)] = wordAndOneDstream.reduceByKey(_ + _)
  wordcounts.print()
  // start() launches data collection and processing
  ssc.start()
  // Block until the computation terminates
  ssc.awaitTermination()
}
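To drive this example, start a simple text server on the target host first, for instance with netcat (nc -lk 9999), then type space-separated words; every 5-second batch prints its word counts.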
Reading files from HDFS
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.DStream
import org.apache.spark.streaming.{Seconds, StreamingContext}

object Demo2 extends App {
  val conf: SparkConf = new SparkConf().setAppName("demo1").setMaster("local[*]")
  val ssc = new StreamingContext(conf, Seconds(5))
  // Monitor an HDFS directory for newly created files
  val line: DStream[String] = ssc.textFileStream("hdfs://192.168.174.41:9000/demo/test/")
  val wordcount: DStream[(String, Int)] = line.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
  wordcount.print()
  ssc.start()
  ssc.awaitTermination()
}
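Note that textFileStream only picks up files that appear in the monitored directory after the job has started, and the files must show up atomically (e.g. written elsewhere and then moved into the directory); appending to an existing file is not detected.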
Integrating Spark Streaming with Spark SQL
import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.{Seconds, StreamingContext}

// The case class must be defined at top level (outside the method)
// so that toDF() can derive a schema for it
case class Word(word: String)

object Demo3 extends App {
  val conf: SparkConf = new SparkConf().setMaster("local[*]").setAppName("Demo1")
  val ssc = new StreamingContext(conf, Seconds(5))
  // Create a SparkSession from the same configuration
  val spark: SparkSession = SparkSession.builder().config(conf).getOrCreate()
  import spark.implicits._
  // Word count with Spark Streaming
  val inputDstream: ReceiverInputDStream[String] = ssc.socketTextStream("zcy01", 5678)
  // Transform the input stream; each line looks like "hadoop spark kafka"
  val wordDstream: DStream[String] = inputDstream.flatMap(_.split(" "))
  val wordAndOneDstream: DStream[(String, Int)] = wordDstream.map((_, 1))
  val wordcounts: DStream[(String, Int)] = wordAndOneDstream.reduceByKey(_ + _)
  wordcounts.print()
  // Count the same words with Spark SQL; output operations such as
  // foreachRDD must be registered before ssc.start() is called
  wordDstream.foreachRDD { rdd =>
    if (!rdd.isEmpty()) {
      val df1 = rdd.map(x => Word(x)).toDF()
      df1.createOrReplaceTempView("words")
      spark.sql(
        """
          |select word, count(*) as cnt
          |from words
          |group by word
        """.stripMargin).show()
    }
  }
  // start() launches data collection and processing
  ssc.start()
  // Block until the computation terminates
  ssc.awaitTermination()
}
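foreachRDD runs on the driver once per micro-batch and hands over that batch's RDD, which is what makes the SQL integration work: each batch becomes an ordinary DataFrame that can be registered as a temporary view and queried like any static table.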
Integrating Spark Streaming with Flume
Flume dependency: org.apache.spark:spark-streaming-flume_2.11:2.x.x
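In sbt form, assuming a Scala 2.11 build (sparkVersion is a placeholder for whichever 2.x release matches your cluster), the coordinate above would be written as:
libraryDependencies += "org.apache.spark" %% "spark-streaming-flume" % sparkVersion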
Flume agent configuration file
- push
In push mode, Flume pushes the data to Spark: an Avro sink sends events to a receiver running inside the Spark application.
agent.sources = s1
agent.channels = c1
agent.sinks = sk1
# Source type is netcat, wired to channel c1
agent.sources.s1.type = netcat
agent.sources.s1.bind = hadoop101
agent.sources.s1.port = 44444
agent.sources.s1.channels = c1
agent.channels.c1.type = memory
agent.channels.c1.capacity = 1000
# The Avro sink pushes data to Spark (port 55555),
# paired with the push-based createStream on the Spark side
agent.sinks.sk1.type=avro
agent.sinks.sk1.hostname=hadoop101
agent.sinks.sk1.port=55555
agent.sinks.sk1.channel = c1
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object FlumeDemo01 extends App {
  val conf = new SparkConf().setAppName("flumeDemo01").setMaster("local[*]")
  val ssc = new StreamingContext(conf, Seconds(5))
  // Host and port must match the Avro sink's hostname/port (hadoop101:55555)
  val flumeStream = FlumeUtils.createStream(ssc, "hadoop101", 55555)
  flumeStream.map(x => new String(x.event.getBody.array()).trim)
    .flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()
  ssc.start()
  ssc.awaitTermination()
}
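With push mode the Spark application has to be up, with its receiver listening on hadoop101:55555, before the Flume agent starts; otherwise the Avro sink has nothing to connect to and its sends fail.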
- pull
In pull mode, Spark pulls the data from Flume: the agent writes events into a special SparkSink, and the Spark application polls that sink for its output.
agent.sources = s1
agent.channels = c1
agent.sinks = sk1
# Source type is netcat, wired to channel c1
agent.sources.s1.type = netcat
agent.sources.s1.bind = hadoop101
agent.sources.s1.port = 44444
agent.sources.s1.channels = c1
# SparkSink: spark-streaming-flume-sink_2.11-x.x.x.jar must be present in Flume's lib directory
agent.sinks.sk1.type=org.apache.spark.streaming.flume.sink.SparkSink
agent.sinks.sk1.hostname=hadoop101
agent.sinks.sk1.port=55555
agent.sinks.sk1.channel = c1
# Channel settings
# in-memory channel
agent.channels.c1.type = memory
agent.channels.c1.capacity = 1000
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object FlumeDemo02 extends App {
  val conf = new SparkConf().setAppName("flumeDemo02").setMaster("local[2]")
  val ssc = new StreamingContext(conf, Seconds(5))
  // Pull (poll) events from the SparkSink running inside the Flume agent
  val flumePollStream = FlumeUtils.createPollingStream(ssc, "hadoop101", 55555)
  flumePollStream.map(x => new String(x.event.getBody.array()).trim)
    .flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()
  ssc.start()
  ssc.awaitTermination()
}
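Pull mode is generally the more reliable of the two: thanks to the sink's use of transactions, events remain in the Flume channel until Spark has received and replicated them, so restarting the Spark application does not lose data and the Flume agent does not need to know when Spark is up.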
Integrating Spark Streaming with Kafka
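Kafka dependency (assuming the Kafka 0.10 direct API used below): org.apache.spark:spark-streaming-kafka-0-10_2.11:2.x.x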
import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object KafkaDemo extends App {
  val conf = new SparkConf().setMaster("local[*]").setAppName("kafkaDemo")
  val ssc = new StreamingContext(conf, Seconds(5))
  val kafkaParams = Map[String, Object](
    ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "zcy01:9092",
    ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG -> "org.apache.kafka.common.serialization.StringDeserializer",
    ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG -> "org.apache.kafka.common.serialization.StringDeserializer",
    ConsumerConfig.GROUP_ID_CONFIG -> "kafkaGroup01"
  )
  // Direct stream: executors consume the Kafka partitions without a receiver
  val message: InputDStream[ConsumerRecord[String, String]] = KafkaUtils.createDirectStream[String, String](
    ssc,
    LocationStrategies.PreferConsistent,
    ConsumerStrategies.Subscribe[String, String](Set("testPartition2"), kafkaParams)
  )
  message.map(_.value()).flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()
  ssc.start()
  ssc.awaitTermination()
}
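To test, publish a few space-separated words to the topic with the console producer that ships with Kafka, e.g. kafka-console-producer.sh --broker-list zcy01:9092 --topic testPartition2.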