1. Create a Maven project. The dependencies section of the pom file is as follows:
<dependencies>
	<dependency>
	    <groupId>org.apache.storm</groupId>
	    <artifactId>storm-core</artifactId>
	    <version>1.0.3</version>
	    <scope>provided</scope>
	</dependency>
</dependencies>
  2. A program written with Storm is called a Topology. A Topology consists of one Spout task and multiple Bolt tasks: the Spout task collects the data, and the Bolt tasks process it. The data structure passed from the Spout to a Bolt, and from one Bolt to another, is the Tuple.
    Take a word-count program as an example.
    The code of the Spout task that collects the data is as follows:
package storm;

import java.util.Map;
import java.util.Random;

import org.apache.storm.spout.SpoutOutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichSpout;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Values;
import org.apache.storm.utils.Utils;

public class WordCountSpout extends BaseRichSpout {

	private static final long serialVersionUID = 1571765705181254611L;

	// Simulated input data
	private String[] data = {"I love Beijing", "I love China", "Beijing is the capital of China"};
	
	// Used to emit tuples to the downstream component
	private SpoutOutputCollector collector;
	
	public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
		this.collector = collector;
	}

	public void nextTuple() {
		// Called repeatedly by the Storm framework; in a real spout this is where
		// data would be received from an external source
		Utils.sleep(3000);
		int random = (new Random()).nextInt(3);
		String sentence = data[random];
		
		// Emit the sentence to the downstream bolt
		System.out.println("Emitting: " + sentence);
		this.collector.emit(new Values(sentence));
	}

	public void declareOutputFields(OutputFieldsDeclarer declarer) {
		declarer.declare(new Fields("sentence"));
	}
}
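    The spout above emits unanchored tuples, so Storm does not track whether they are fully processed. As a rough sketch that is not part of the original example (the message-ID handling below is an assumption), reliability could be enabled by emitting each sentence with a message ID and overriding ack/fail in the same spout:

	// Hypothetical reliable variant of nextTuple: attach a message ID so Storm
	// can call ack()/fail() once the tuple tree is fully processed or times out
	public void nextTuple() {
		Utils.sleep(3000);
		String sentence = data[(new Random()).nextInt(3)];
		String messageId = java.util.UUID.randomUUID().toString();
		this.collector.emit(new Values(sentence), messageId);
	}

	@Override
	public void ack(Object msgId) {
		System.out.println("Tuple fully processed: " + msgId);
	}

	@Override
	public void fail(Object msgId) {
		System.out.println("Tuple failed or timed out: " + msgId);
	}

    For the acknowledgement to cover the whole tuple tree, the downstream bolts would also have to anchor and ack the tuples they receive.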
  3. The code of the Bolt task that splits each sentence into words is as follows:
package storm;

import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class WordCountSplitBolt extends BaseRichBolt {

	private static final long serialVersionUID = -7399165475264468561L;

	private OutputCollector collector;
	
	public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
		this.collector = collector;
	}

	public void execute(Tuple tuple) {
		String sentence = tuple.getStringByField("sentence");
		// Split the sentence into words and emit each word with an initial count of 1
		String[] words = sentence.split(" ");
		for (String word : words) {
			this.collector.emit(new Values(word, 1));
		}
	}

	public void declareOutputFields(OutputFieldsDeclarer declarer) {
		declarer.declare(new Fields("word", "count"));
	}
}
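    The bolts in this example extend BaseRichBolt, which leaves acknowledging tuples to the developer; here no acking is done. As an alternative sketch (an assumption, not part of the original example), the same splitting logic could be written with BaseBasicBolt, which acks each input tuple automatically after execute returns:

package storm;

import org.apache.storm.topology.BasicOutputCollector;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseBasicBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

// Hypothetical alternative to WordCountSplitBolt: BaseBasicBolt acks tuples automatically
public class WordCountSplitBasicBolt extends BaseBasicBolt {

	private static final long serialVersionUID = 1L;

	public void execute(Tuple tuple, BasicOutputCollector collector) {
		String sentence = tuple.getStringByField("sentence");
		for (String word : sentence.split(" ")) {
			collector.emit(new Values(word, 1));
		}
	}

	public void declareOutputFields(OutputFieldsDeclarer declarer) {
		declarer.declare(new Fields("word", "count"));
	}
}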
  4. The code of the Bolt task that counts word frequencies is as follows:
package storm;

import java.util.HashMap;
import java.util.Map;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;

public class WordCountBoltCount extends BaseRichBolt {

	private static final long serialVersionUID = -3206516572376524950L;
	
	private Map<String, Integer> result = new HashMap<String, Integer>();
	
	public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {

	}

	public void execute(Tuple tuple) {
		String word = tuple.getStringByField("word");
		int count = tuple.getIntegerByField("count");
		
		if (result.containsKey(word)) {
			result.put(word, result.get(word) + count);
		} else {
			result.put(word, count);
		}
		// Print the running totals directly to the console
		System.out.println("Current counts: " + result);
	}

	public void declareOutputFields(OutputFieldsDeclarer declarer) {

	}
}
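    As an optional refinement (an assumption, not part of the original example), the count bolt could also override cleanup(), which Storm calls when the topology is shut down (reliably only in local mode), to print the final totals once at the end:

	// Hypothetical addition to WordCountBoltCount: print the final word counts
	// once when the topology is shut down (only guaranteed in local mode)
	@Override
	public void cleanup() {
		System.out.println("Final counts: " + result);
	}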
  5. The code of the main Topology program is as follows:
package storm;

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.generated.StormTopology;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.tuple.Fields;

public class WordCountTopology {

	public static void main(String[] args) throws Exception {
		TopologyBuilder builder = new TopologyBuilder();
		
		// Set the spout component of the topology
		builder.setSpout("wordcount_spout", new WordCountSpout());
		
		// Set the first bolt; shuffleGrouping distributes the spout's tuples randomly across its tasks
		builder.setBolt("wordcount_splitbolt", new WordCountSplitBolt()).
			shuffleGrouping("wordcount_spout");
		
		// Set the second bolt; fieldsGrouping on "word" routes the same word to the same task so counts stay consistent
		builder.setBolt("wordcount_count", new WordCountBoltCount()).
			fieldsGrouping("wordcount_splitbolt", new Fields("word"));
		
		// Create the Topology
		StormTopology wc = builder.createTopology();
		
		Config config = new Config();
		
		// Submit the topology to run in local mode
		LocalCluster localCluster = new LocalCluster();
		localCluster.submitTopology("mywordcount", config, wc);
		
		// Submit the topology to a Storm cluster instead
//		StormSubmitter.submitTopology(args[0], config, wc);
	}
}
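    Run this way, the local topology keeps running until the JVM is killed. As a rough sketch (the parallelism and timing values below are illustrative assumptions, not part of the original example), the body of main could instead set parallelism hints and shut the local cluster down after a fixed interval:

		// Illustrative variant of the main body: parallelism hints plus a timed local run
		TopologyBuilder builder = new TopologyBuilder();
		builder.setSpout("wordcount_spout", new WordCountSpout(), 1);
		builder.setBolt("wordcount_splitbolt", new WordCountSplitBolt(), 2)
			.shuffleGrouping("wordcount_spout");
		builder.setBolt("wordcount_count", new WordCountBoltCount(), 2)
			.fieldsGrouping("wordcount_splitbolt", new Fields("word"));

		Config config = new Config();
		config.setDebug(false);      // suppress per-tuple debug logging
		config.setNumWorkers(2);     // only takes effect on a real cluster

		LocalCluster localCluster = new LocalCluster();
		localCluster.submitTopology("mywordcount", config, builder.createTopology());

		Thread.sleep(30000);                      // let the topology run for 30 seconds
		localCluster.killTopology("mywordcount"); // stop the topology, then the embedded cluster
		localCluster.shutdown();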
  6. Local mode does not require a Storm installation: just right-click the WordCountTopology program and run it. Note that Eclipse must be started as administrator.
  7. Run result: the console shows the accumulating word counts (original screenshot omitted).

  8. The program can also be packaged as a jar and submitted to a Storm cluster (see the sketch after this step):
    Start nimbus and supervisor with the commands: storm nimbus & and storm supervisor &
    Submit the jar to the Storm cluster: storm jar wordcount.jar storm.WordCountTopology MyWordCount
    Kill the topology: storm kill MyWordCount -w 10
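    For a cluster run, the commented-out StormSubmitter line must be enabled, and the topology name is taken from the command line (MyWordCount in the command above). A minimal sketch, assuming the same builder setup as in WordCountTopology, of a main-method ending that picks local or cluster mode depending on whether a name argument was passed:

		// Requires: import org.apache.storm.StormSubmitter;
		StormTopology wc = builder.createTopology();
		Config config = new Config();

		if (args.length > 0) {
			// Cluster mode: the topology name (e.g. MyWordCount) comes from the command line
			StormSubmitter.submitTopology(args[0], config, wc);
		} else {
			// Local mode: run inside the current JVM
			new LocalCluster().submitTopology("mywordcount", config, wc);
		}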