Spark Streaming Demo

```scala
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._

// Create a StreamingContext with a local master.
// Spark Streaming needs at least two working threads:
// one to receive data and one to process it.
val ssc = new StreamingContext("local[2]", "NetworkWordCount", Seconds(1))

// Create a DStream that will connect to serverIP:serverPort, e.g. localhost:9999
val lines = ssc.socketTextStream("localhost", 9999)

// Split each line into words
val words = lines.flatMap(_.split(" "))

// Count each word in each batch
val pairs = words.map(word => (word, 1))
val wordCounts = pairs.reduceByKey(_ + _)

// Print a few of the counts to the console
wordCounts.print()

ssc.start()             // Start the computation
ssc.awaitTermination()  // Wait for the computation to terminate
```
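The per-batch transformation the DStream pipeline applies can be sketched with plain Scala collections standing in for RDDs. This is a local illustration only; the object and method names (`WordCountBatch`, `countWords`) are hypothetical, not part of the Spark API, and `groupBy` plus `sum` here play the role of `reduceByKey(_ + _)`:

```scala
object WordCountBatch {
  // Mirrors one batch of the streaming pipeline on an in-memory Seq:
  // flatMap to words, map to (word, 1) pairs, then aggregate by key.
  def countWords(lines: Seq[String]): Map[String, Int] = {
    val words = lines.flatMap(_.split(" "))   // like lines.flatMap on the DStream
    val pairs = words.map(word => (word, 1))  // like words.map
    // Local analogue of pairs.reduceByKey(_ + _)
    pairs.groupBy(_._1).map { case (w, ps) => (w, ps.map(_._2).sum) }
  }

  def main(args: Array[String]): Unit = {
    val batch = Seq("hello spark", "hello streaming")
    println(countWords(batch))  // Map(hello -> 2, spark -> 1, streaming -> 1)
  }
}
```

In the real job, Spark applies this same logic to each one-second micro-batch received from the socket, which is why `wordCounts.print()` emits a fresh set of counts every batch interval.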
Original work © the author (51CTO blogger zongquanliu); contact the author for reprint authorization.