A Spark Streaming job has a processing backlog, yet it keeps pulling data from Kafka.
First, let's go over a few relevant parameters.
spark.streaming.kafka.consumer.poll.ms
When Spark polls Kafka for records, the poll is bounded by this timeout. If the poll times out on both attempts, the task fails and Spark reschedules it on another executor, which adds scheduling overhead.
The default value in Spark is 512ms. If the timeout is short but Kafka takes a long time to respond, many Spark tasks will fail. If the timeout is too long, Spark sits idle for that whole window and may still time out in the end, producing large delays (the poll timeout plus the rescheduling of the task onto another executor).
If your Spark job has many failing tasks, consider increasing this value (or go look at Kafka and figure out why it takes so long to respond).
If the parameter is not set at all, Spark may fall back to spark.network.timeout (the default timeout for all network interactions, 120s by default).
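A minimal sketch of setting the timeout explicitly (the 10-second value is purely illustrative, not a recommendation):

import org.apache.spark.SparkConf

val conf = new SparkConf()
  // Give Kafka up to 10s to answer a poll before the attempt counts as a timeout
  .set("spark.streaming.kafka.consumer.poll.ms", "10000")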
spark.streaming.kafka.maxRatePerPartition
Caps how many records per second Spark Streaming pulls from each Kafka partition.
spark.streaming.backpressure.enabled
When set to true, backpressure is enabled: Spark Streaming dynamically adjusts how fast it consumes from Kafka based on the observed delay. The upper bound is still controlled by spark.streaming.kafka.maxRatePerPartition, so the two parameters are usually used together, as the sketch below shows.
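Used together they might look like this (the per-partition cap of 1000 records/sec is illustrative):

// continuing the SparkConf sketch from above
val conf = new SparkConf()
  // Let Spark adapt the ingestion rate to the observed batch delay
  .set("spark.streaming.backpressure.enabled", "true")
  // Hard ceiling per Kafka partition per second, applied even with backpressure on
  .set("spark.streaming.kafka.maxRatePerPartition", "1000")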
spark.streaming.stopGracefullyOnShutdown
When set to true, Spark stops the StreamingContext gracefully on JVM shutdown instead of immediately, letting the batch that is currently being processed finish.
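This is also just a SparkConf entry; a sketch:

// continuing the SparkConf sketch from above
val conf = new SparkConf()
  // On JVM shutdown (e.g. a SIGTERM), stop the StreamingContext gracefully
  // so the in-flight batch can complete instead of being cut off
  .set("spark.streaming.stopGracefullyOnShutdown", "true")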
reconnect.backoff.ms
If the client's connection to a Kafka broker is lost, the client waits reconnect.backoff.ms before trying to reconnect.
reconnect.backoff.max.ms
The upper bound on the reconnect backoff. On each consecutive connection failure, the backoff grows exponentially up to this maximum, and 20% random jitter is applied each time to avoid connection storms.
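Both of these are Kafka client settings, so they go into the consumer parameters rather than SparkConf. A sketch, assuming the spark-streaming-kafka-0-10 direct stream (the broker address and group id are placeholders):

import org.apache.kafka.common.serialization.StringDeserializer

val kafkaParams = Map[String, Object](
  "bootstrap.servers"        -> "broker1:9092",            // placeholder broker
  "key.deserializer"         -> classOf[StringDeserializer],
  "value.deserializer"       -> classOf[StringDeserializer],
  "group.id"                 -> "my-consumer-group",       // placeholder group id
  "reconnect.backoff.ms"     -> "50",    // initial wait before the first reconnect attempt
  "reconnect.backoff.max.ms" -> "10000"  // ceiling for the exponentially growing backoff
)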
Now for the main topic.
val ssc: StreamingContext = new StreamingContext(spark.sparkContext, Seconds(interval))
ssc.addStreamingListener(new MyStreamingListener(appName, 10))
MyStreamingListener is the custom listener class:
import org.apache.spark.streaming.scheduler._

class MyStreamingListener(private val appName: String, private val duration: Int) extends StreamingListener {

  // Called when the streaming computation starts
  override def onStreamingStarted(streamingStarted: StreamingListenerStreamingStarted): Unit = {
    super.onStreamingStarted(streamingStarted)
  }

  // Called when a receiver has started
  override def onReceiverStarted(receiverStarted: StreamingListenerReceiverStarted): Unit =
    super.onReceiverStarted(receiverStarted)

  // Called when a receiver reports an error
  override def onReceiverError(receiverError: StreamingListenerReceiverError): Unit = {
    super.onReceiverError(receiverError)
  }

  // Called when a receiver has been stopped
  override def onReceiverStopped(receiverStopped: StreamingListenerReceiverStopped): Unit = {
  }

  // Called when a batch of jobs has been submitted for processing
  override def onBatchSubmitted(batchSubmitted: StreamingListenerBatchSubmitted): Unit = {
    super.onBatchSubmitted(batchSubmitted)
  }

  // Called when processing of a batch of jobs has started
  override def onBatchStarted(batchStarted: StreamingListenerBatchStarted): Unit = {
    super.onBatchStarted(batchStarted)
  }

  // Called when processing of a batch of jobs has completed
  override def onBatchCompleted(batchCompleted: StreamingListenerBatchCompleted): Unit = {
  }

  // Called when processing of an output operation of a batch has started
  override def onOutputOperationStarted(outputOperationStarted: StreamingListenerOutputOperationStarted): Unit = {}

  // Called when processing of an output operation of a batch has completed
  override def onOutputOperationCompleted(outputOperationCompleted: StreamingListenerOutputOperationCompleted): Unit = {
  }
}
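The skeleton above registers every callback but leaves them all empty. For the backlog problem this article opened with, onBatchCompleted is the interesting hook: BatchInfo exposes schedulingDelay, which grows when batches queue up faster than they are processed. A minimal sketch of reacting to that (the 10-interval threshold and the class name BacklogStopListener are illustrative assumptions, not from the original):

import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted}

// Hypothetical listener: stop the job once batches have been queuing for too long
class BacklogStopListener(ssc: StreamingContext, batchIntervalSec: Int) extends StreamingListener {

  // Illustrative threshold: a scheduling delay above 10 batch intervals counts as backlog
  private val maxDelayMs = batchIntervalSec * 10 * 1000L

  override def onBatchCompleted(batchCompleted: StreamingListenerBatchCompleted): Unit = {
    // schedulingDelay = how long this batch waited in the queue before processing began
    val delayMs = batchCompleted.batchInfo.schedulingDelay.getOrElse(0L)
    if (delayMs > maxDelayMs) {
      // Stop from a separate thread: calling ssc.stop() synchronously inside a
      // listener callback can block the listener bus
      new Thread(new Runnable {
        override def run(): Unit = ssc.stop(stopSparkContext = true, stopGracefully = true)
      }).start()
    }
  }
}

It would be registered the same way as MyStreamingListener above, via ssc.addStreamingListener.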
The StreamingListener source code:
/*
* Licensed to the Apache Software Foundation (ASF) under one or more
* contributor license agreements. See the NOTICE file distributed with
* this work for additional information regarding copyright ownership.
* The ASF licenses this file to You under the Apache License, Version 2.0
* (the "License"); you may not use this file except in compliance with
* the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*/
package org.apache.spark.streaming.scheduler
import scala.collection.mutable.Queue
import org.apache.spark.annotation.DeveloperApi
import org.apache.spark.util.Distribution
/**
* :: DeveloperApi ::
* Base trait for events related to StreamingListener
*/
@DeveloperApi
sealed trait StreamingListenerEvent
@DeveloperApi
case class StreamingListenerStreamingStarted(time: Long) extends StreamingListenerEvent
@DeveloperApi
case class StreamingListenerBatchSubmitted(batchInfo: BatchInfo) extends StreamingListenerEvent
@DeveloperApi
case class StreamingListenerBatchCompleted(batchInfo: BatchInfo) extends StreamingListenerEvent
@DeveloperApi
case class StreamingListenerBatchStarted(batchInfo: BatchInfo) extends StreamingListenerEvent
@DeveloperApi
case class StreamingListenerOutputOperationStarted(outputOperationInfo: OutputOperationInfo)
  extends StreamingListenerEvent
@DeveloperApi
case class StreamingListenerOutputOperationCompleted(outputOperationInfo: OutputOperationInfo)
  extends StreamingListenerEvent
@DeveloperApi
case class StreamingListenerReceiverStarted(receiverInfo: ReceiverInfo)
  extends StreamingListenerEvent
@DeveloperApi
case class StreamingListenerReceiverError(receiverInfo: ReceiverInfo)
  extends StreamingListenerEvent
@DeveloperApi
case class StreamingListenerReceiverStopped(receiverInfo: ReceiverInfo)
  extends StreamingListenerEvent
/**
* :: DeveloperApi ::
* A listener interface for receiving information about an ongoing streaming
* computation.
*/
@DeveloperApi
trait StreamingListener {

  /** Called when the streaming has been started */
  def onStreamingStarted(streamingStarted: StreamingListenerStreamingStarted) { }

  /** Called when a receiver has been started */
  def onReceiverStarted(receiverStarted: StreamingListenerReceiverStarted) { }

  /** Called when a receiver has reported an error */
  def onReceiverError(receiverError: StreamingListenerReceiverError) { }

  /** Called when a receiver has been stopped */
  def onReceiverStopped(receiverStopped: StreamingListenerReceiverStopped) { }

  /** Called when a batch of jobs has been submitted for processing. */
  def onBatchSubmitted(batchSubmitted: StreamingListenerBatchSubmitted) { }

  /** Called when processing of a batch of jobs has started. */
  def onBatchStarted(batchStarted: StreamingListenerBatchStarted) { }

  /** Called when processing of a batch of jobs has completed. */
  def onBatchCompleted(batchCompleted: StreamingListenerBatchCompleted) { }

  /** Called when processing of a job of a batch has started. */
  def onOutputOperationStarted(
      outputOperationStarted: StreamingListenerOutputOperationStarted) { }

  /** Called when processing of a job of a batch has completed. */
  def onOutputOperationCompleted(
      outputOperationCompleted: StreamingListenerOutputOperationCompleted) { }
}
/**
* :: DeveloperApi ::
* A simple StreamingListener that logs summary statistics across Spark Streaming batches
* @param numBatchInfos Number of last batches to consider for generating statistics (default: 10)
*/
@DeveloperApi
class StatsReportListener(numBatchInfos: Int = 10) extends StreamingListener {
  // Queue containing latest completed batches
  val batchInfos = new Queue[BatchInfo]()

  override def onBatchCompleted(batchStarted: StreamingListenerBatchCompleted) {
    batchInfos.enqueue(batchStarted.batchInfo)
    if (batchInfos.size > numBatchInfos) batchInfos.dequeue()
    printStats()
  }

  def printStats() {
    showMillisDistribution("Total delay: ", _.totalDelay)
    showMillisDistribution("Processing time: ", _.processingDelay)
  }

  def showMillisDistribution(heading: String, getMetric: BatchInfo => Option[Long]) {
    org.apache.spark.scheduler.StatsReportListener.showMillisDistribution(
      heading, extractDistribution(getMetric))
  }

  def extractDistribution(getMetric: BatchInfo => Option[Long]): Option[Distribution] = {
    Distribution(batchInfos.flatMap(getMetric(_)).map(_.toDouble))
  }
}
The key part of the code above is the block of event definitions: the sealed trait StreamingListenerEvent and the case classes that extend it, one per listener callback.
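As a side note, the source also ships the built-in StatsReportListener (in org.apache.spark.streaming.scheduler). If you only need periodic delay statistics in the logs, you can register it directly instead of writing your own:

// keep stats over the last 20 completed batches and log their delay distributions
ssc.addStreamingListener(new StatsReportListener(numBatchInfos = 20))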