1. Software Environment
The software versions I used are as follows:
1. IntelliJ IDEA 2017.1

2. Creating and Configuring the Maven Project

2.1 Creating the project

Open IDEA, File -> New -> Project, and select a Maven project in the left panel. (If you only want to run MapReduce, a plain Java project is enough and there is no need to check "Create from archetype"; check it if you want a web project or want to start from an archetype.)

After the project is created (and after a run has finished), the directory layout looks like this:

(screenshot of the project structure)

2.2 Adding Maven dependencies
Add the dependencies to pom.xml. For Hadoop 2.6.5, the required artifacts are:
• hadoop-common
• hadoop-hdfs
• hadoop-mapreduce-client-core
• hadoop-mapreduce-client-jobclient
• log4j (for logging)
The dependencies in pom.xml look like this:

<dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>4.12</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.6.5</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.6.5</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>2.6.5</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
            <version>2.6.5</version>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.17</version>
        </dependency>
    </dependencies>

2.3 Configuring log4j
Create a log4j configuration file named log4j.properties under src/main/resources with the following content:

log4j.rootLogger = debug,stdout
### print log messages to the console ###
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target = System.out
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern = [%-5p] %d{yyyy-MM-dd HH:mm:ss,SSS} method:%l%n%m%n

3. Running WordCount (reading a local file / running locally)
Create an input folder in the project root, add dream.txt under it, and write in some words:

I have a  dream
a dreamI have a  dream
       a dreamI have a  dream
              a dream


I have a  dream
a dream
I have a  dream
a dream
I have a  dream
a dream
I have a  dream
a dream

Create a package under src/main/java and add FileUtil.java with a helper that deletes the output directory, so you no longer have to delete it by hand before each run. Its content is as follows:

import java.io.File;

public class FileUtil {
    public static boolean deleteDir(String path) {
        File dir = new File(path);
        if (dir.exists()) {
            for (File f : dir.listFiles()) {
                if (f.isDirectory()) {
                    // recurse with the full path; f.getName() would resolve
                    // against the working directory and miss nested directories
                    deleteDir(f.getPath());
                } else {
                    f.delete();
                }
            }
            dir.delete();
            return true;
        } else {
            System.out.println("File (or directory) does not exist!");
            return false;
        }
    }
}
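A quick standalone sanity check of the delete logic. The helper is copied inline here so the snippet compiles on its own; the class name DeleteDirDemo and the demo-output path are just for illustration:

```java
import java.io.File;

public class DeleteDirDemo {
    // same logic as FileUtil.deleteDir, inlined so this demo is self-contained
    static boolean deleteDir(String path) {
        File dir = new File(path);
        if (!dir.exists()) return false;
        for (File f : dir.listFiles()) {
            if (f.isDirectory()) {
                deleteDir(f.getPath()); // full path, not just the name
            } else {
                f.delete();
            }
        }
        return dir.delete();
    }

    public static void main(String[] args) throws Exception {
        // build a nested directory with one file, then delete it recursively
        File nested = new File("demo-output/sub");
        nested.mkdirs();
        new File(nested, "part-r-00000").createNewFile();
        System.out.println(deleteDir("demo-output"));
        System.out.println(new File("demo-output").exists());
    }
}
```

With a nested subdirectory, the original getName()-based recursion would have failed to remove demo-output/sub; the full-path version deletes the tree bottom-up.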

Write the MapReduce program WordCount.java with the following content:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;
import java.util.Iterator;
import java.util.StringTokenizer;

/**
 * Created by YYL on 2017/6/18.
 */
public class WordCount {
    // nested class: Mapper
    // Mapper<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
    public static class WordCountMapper extends
            Mapper<Object, Text, Text, IntWritable> {

        // one is the constant count 1 emitted for every word
        public static final IntWritable one = new IntWritable(1);
        private Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            // java.util.StringTokenizer splits a string into tokens (words).
            // By default, tokens are separated by whitespace characters such
            // as spaces, tabs, and newlines.
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                this.word.set(itr.nextToken());
                // emit one <word, 1> pair per token
                context.write(this.word, one);
            }
        }

    }

    // nested class: Reducer
    // Reducer<KEYIN, VALUEIN, KEYOUT, VALUEOUT>
    // The Reducer's VALUEIN type must match the Mapper's VALUEOUT type:
    // the Reducer receives the Mapper's output after the shuffle phase.
    public static class WordCountReducer extends
            Reducer<Text, IntWritable, Text, IntWritable> {
        private IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // sum all the 1s emitted for this word
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            this.result.set(sum);
            context.write(key, this.result);
        }
    }

    public static void main(String[] args)
            throws IOException, ClassNotFoundException, InterruptedException {
        // clear the previous output directory so the job does not fail
        FileUtil.deleteDir("output");
        // load the Configuration (core-default.xml, core-site.xml)
        Configuration conf = new Configuration();
        // run locally
        String[] otherArgs = new String[]{"input/dream.txt", "output"};
        // run on the cluster
        // String[] otherArgs = new String[]{"hdfs://hdp-node-01:9000/WordCountDir/", "hdfs://hdp-node-01:9000/WordCountDir/out"};
        if (otherArgs.length != 2) {
            System.err.println("Usage: WordCount <in> <out>");
            System.exit(2);
        }
        // set up the Job
        Job job = Job.getInstance(conf, "WordCount");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WordCount.WordCountMapper.class);
        job.setReducerClass(WordCount.WordCountReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // configure input and output paths
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        // the output path must not already exist, otherwise the job fails with
        // org.apache.hadoop.mapred.FileAlreadyExistsException
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        // exit with 0 on success, 1 on failure
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
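Conceptually, the shuffle phase groups every <word, 1> pair by key before reduce sums them. The same counting logic can be sketched in plain Java with a map, no Hadoop required (the class name and sample text here are illustrative):

```java
import java.util.Map;
import java.util.StringTokenizer;
import java.util.TreeMap;

public class WordCountSketch {
    public static void main(String[] args) {
        String text = "I have a dream\na dream";
        // the map plays the role of shuffle + reduce: group by word, sum the 1s
        Map<String, Integer> counts = new TreeMap<>();
        StringTokenizer itr = new StringTokenizer(text);
        while (itr.hasMoreTokens()) {
            counts.merge(itr.nextToken(), 1, Integer::sum);
        }
        System.out.println(counts); // {I=1, a=2, dream=2, have=1}
    }
}
```

A TreeMap is used so the words come out in sorted key order, mirroring the sorted part-r-00000 output of the real job.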

Notes:
1. Be careful to import the classes from the correct packages.
2. When running locally, use a local input path:
// run locally
String[] otherArgs = new String[]{"input/dream.txt", "output"};
// run on the cluster
// String[] otherArgs = new String[]{"hdfs://hdp-node-01:9000/WordCountDir/", "hdfs://hdp-node-01:9000/WordCountDir/out"};
3. After the run finishes, an output folder appears in the project root. Open output/part-r-00000; its content is:
I 5
a 14
dream 12
dreamI 2
have 7

4. Running WordCount (reading from HDFS / running on the cluster)
4.1 Create a directory on HDFS:
[root@hdp-node-01 local]# hadoop fs -mkdir /WordCountDir

If the NameNode is in safe mode, the directory cannot be created and you will see:

mkdir: Cannot create directory /WordCountDir. Name node is in safe mode.

Run the following command to leave safe mode:

hadoop dfsadmin -safemode leave

4.2 Upload the local files:

Upload MapReduce01.jar and dream.txt to the cluster:

[root@hdp-node-01 local]# hadoop fs -put dream.txt /WordCountDir


4.3 Run on the cluster:
Change otherArgs so the input points at the file's path on HDFS:
String[] otherArgs = new String[]{"hdfs://hdp-node-01:9000/WordCountDir/", "hdfs://hdp-node-01:9000/WordCountDir/out"};

Make sure there is no out directory under WordCountDir; it is created automatically by the run:
hadoop jar MapReduce01.jar WordCount

To run again, first delete the out directory:
hadoop fs -rmr /WordCountDir/out

Example run:

(screenshot of the job output)


View the results:

hadoop fs -cat /WordCountDir/out/part-r-00000

5. Code Download
The code can be downloaded from: https://github.com/yyl424525/Hadoop/tree/master/MapReduce01

6. Appendix
The StringTokenizer class
Java provides a dedicated class for breaking strings apart: StringTokenizer (in the java.util package). It splits a string into independent tokens (words). Tokens are separated by delimiters (delim) or by typical whitespace characters such as spaces, tabs, and newlines; other characters can also be configured as delimiters.

StringTokenizer constructors:
StringTokenizer(String str) — builds a tokenizer for str using the default delimiters: space (consecutive spaces count as one), newline, carriage return, tab, etc.
StringTokenizer(String str, String delim) — builds a tokenizer for str using the characters in delim as delimiters

Main StringTokenizer methods:
String nextToken() — returns the next token (word) in the string
boolean hasMoreTokens() — returns true if the string still has tokens left, false otherwise
int countTokens() — returns how many tokens remain in the string being analyzed

Here is an example:

String s1 = "|ln|ln/sy|ln/dl|ln/as|ln/bx|";
StringTokenizer stringtokenizer1 = new StringTokenizer(s1, "|");
while (stringtokenizer1.hasMoreTokens()) {
    String s3 = stringtokenizer1.nextToken();
    System.out.println(s3);
}

Output:
ln
ln/sy
ln/dl
ln/as
ln/bx
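The countTokens method from the table above can be checked the same way (a standalone sketch; the class name is just for illustration):

```java
import java.util.StringTokenizer;

public class CountTokensDemo {
    public static void main(String[] args) {
        // default delimiters: a run of whitespace counts as one separator
        StringTokenizer st = new StringTokenizer("I have a  dream");
        System.out.println(st.countTokens()); // 4

        // custom delimiter, as in the example above
        StringTokenizer st2 = new StringTokenizer("|ln|ln/sy|", "|");
        System.out.println(st2.countTokens()); // 2
    }
}
```

Note that countTokens reports the tokens remaining, so it decreases as nextToken() is called.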