When the job is first started, the cold start can make it pull in a large backlog of data at once and drive memory usage too high. To guard against this, limit how much data the first batches are allowed to process:

spark.streaming.backpressure.enabled=true
spark.streaming.backpressure.initialRate=200

For example:

#!/bin/sh
TaskName="funnel"
UserName="hadoop"

# Run from the script's own directory so relative paths (../log) resolve.
cd "$(dirname "$0")"

# Submit the streaming job in the background as the hadoop user;
# backpressure is enabled and the first batches are capped at
# 1000 records/sec per receiver.
nohup sudo -u ${UserName} /data/bigdata/spark/bin/spark-submit \
  --name ${TaskName} \
  --class FunnelMain \
  --master yarn \
  --deploy-mode cluster \
  --executor-memory 2G \
  --num-executors 3 \
  --conf spark.streaming.backpressure.enabled=true \
  --conf spark.streaming.backpressure.initialRate=1000 \
  --files /data/apps/funnel/app/conf/conf.properties \
  /data/apps/funnel/app/target/apphadoop-1-jar-with-dependencies.jar conf.properties \
  >> ../log/${TaskName}.log 2>&1 &
exit 0

These options should be passed with --conf on the spark-submit command line, as in the script above.
http://qindongliang.iteye.com/blog/2354165
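For completeness, here is a minimal sketch of what a driver class like FunnelMain might look like with the same two options set programmatically in SparkConf instead of on the command line. The batch interval, the socket source, and the counting logic are assumptions for illustration only; the real FunnelMain is not shown in this post.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object FunnelMain {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("funnel")
      // same values as the --conf flags in the script above
      .set("spark.streaming.backpressure.enabled", "true")
      .set("spark.streaming.backpressure.initialRate", "1000")

    // 5-second batch interval is an assumption; the post does not show it
    val ssc = new StreamingContext(conf, Seconds(5))

    // hypothetical input source; the real job's input is not shown in the post
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.count().print()

    ssc.start()
    ssc.awaitTermination()
  }
}

Note that spark.streaming.backpressure.initialRate only caps receiver-based streams like the socket stream above; a direct Kafka stream would rely on spark.streaming.kafka.maxRatePerPartition as its initial cap instead.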

This time, it really feels like being trapped in a cage; a truly vivid feeling!