Hadoop cluster: Hadoop 2.6.0; OS: Windows 7; IDE: Eclipse

Running a MapReduce program against Hadoop from Eclipse is nothing special: it is simply an ordinary Java program that submits an MR job to the cluster.

1. First, configure the environment variables:

Under system variables, add a new variable HADOOP_HOME pointing to a local copy of the Hadoop 2.6.0 distribution.

Then append %HADOOP_HOME%\bin; to Path.
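For example (the install path below is hypothetical; it should be a local copy of the same Hadoop 2.6.0 distribution used on the cluster):

HADOOP_HOME = D:\hadoop-2.6.0
Path        = ...;%HADOOP_HOME%\bin;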

2. In the main function of the MapReduce program, set the following configuration:

Configuration conf = new Configuration();
conf.setBoolean("mapreduce.app-submission.cross-platform", true); // enable cross-platform job submission
conf.set("fs.defaultFS", "hdfs://imageHandler1:9000"); // NameNode URI
conf.set("mapreduce.framework.name", "yarn"); // run on YARN
conf.set("yarn.resourcemanager.address", "imageHandler1:8032"); // ResourceManager
conf.set("yarn.resourcemanager.scheduler.address", "imageHandler1:8030"); // ResourceManager scheduler

3. Run the main function in Eclipse:

For example, run WordCount; a full driver sketch is shown below.
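For reference, a minimal WordCount driver combining the settings from step 2 might look like the following sketch. The package and class names are illustrative (chosen to match the org.fansy.hadoop.mr.WordCount and WCMapper names that show up in the logs below), and the HDFS input/output paths are placeholders:

package org.fansy.hadoop.mr;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    public static class WCMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // Emit (token, 1) for every whitespace-separated token in the line.
            for (String token : value.toString().split("\\s+")) {
                if (!token.isEmpty()) {
                    word.set(token);
                    context.write(word, ONE);
                }
            }
        }
    }

    public static class WCReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // Sum the counts for each word.
            int sum = 0;
            for (IntWritable v : values) {
                sum += v.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setBoolean("mapreduce.app-submission.cross-platform", true);
        conf.set("fs.defaultFS", "hdfs://imageHandler1:9000");
        conf.set("mapreduce.framework.name", "yarn");
        conf.set("yarn.resourcemanager.address", "imageHandler1:8032");
        conf.set("yarn.resourcemanager.scheduler.address", "imageHandler1:8030");

        Job job = Job.getInstance(conf, "wordcount");
        // When launched from Eclipse the classes live in a bin/ directory rather
        // than a jar, so the mapper/reducer classes still have to reach the
        // cluster somehow; see step (4) below.
        job.setJarByClass(WordCount.class);
        job.setMapperClass(WCMapper.class);
        job.setCombinerClass(WCReducer.class);
        job.setReducerClass(WCReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path("/tmp/input"));    // illustrative input path
        FileOutputFormat.setOutputPath(job, new Path("/tmp/output")); // illustrative output path
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}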

(1) The first error to appear looks something like this:

2014-04-03 21:20:21,568 ERROR [main] util.Shell (Shell.java:getWinUtilsPath(303)) - Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
    at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:278)
    at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:300)
    at org.apache.hadoop.util.Shell.<clinit>(Shell.java:293)
    at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:76)
    at org.apache.hadoop.yarn.conf.YarnConfiguration.<clinit>(YarnConfiguration.java:345)
    at org.fansy.hadoop.mr.WordCount.getConf(WordCount.java:104)
    at org.fansy.hadoop.mr.WordCount.runJob(WordCount.java:84)
    at org.fansy.hadoop.mr.WordCount.main(WordCount.java:47)

This error can be ignored, or it can be fixed by placing a winutils.exe into the bin directory of the Hadoop installation that HADOOP_HOME points to.
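If editing the system environment variables is inconvenient, Hadoop's Shell class checks the hadoop.home.dir Java system property before falling back to the HADOOP_HOME environment variable, so an alternative sketch is to set it at the very top of main (the path is hypothetical):

// Alternative to setting HADOOP_HOME: the directory must contain bin\winutils.exe.
System.setProperty("hadoop.home.dir", "D:\\hadoop-2.6.0"); // hypothetical local path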

Next comes a permission error. Fix it by opening up the permissions of the HDFS directory the job writes to, here the /tmp staging directory, by running on the cluster: hdfs dfs -chmod 777 /tmp.
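The permission error arises because the job is submitted as the local Windows user (Administrator in the log below). With SIMPLE authentication, an alternative to relaxing directory permissions is to impersonate an HDFS user before any Hadoop classes are touched; UserGroupInformation checks the HADOOP_USER_NAME environment variable and then the system property of the same name. The user name here is hypothetical:

// Submit the job as the "hadoop" user instead of the local Windows account
// ("hadoop" is a placeholder for whatever user owns the HDFS directories).
System.setProperty("HADOOP_USER_NAME", "hadoop");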

(2) Then an error like the following appears:

2014-04-03 20:32:36,596 ERROR [main] security.UserGroupInformation (UserGroupInformation.java:doAs(1494)) - PriviledgedActionException as:Administrator (auth:SIMPLE) cause:java.io.IOException: Failed to run job : Application application_1396459813671_0001 failed 2 times due to AM Container for appattempt_1396459813671_0001_000002 exited with  exitCode: 1 due to: Exception from container-launch:
org.apache.hadoop.util.Shell$ExitCodeException: /bin/bash: line 0: fg: no job control

    at org.apache.hadoop.util.Shell.runCommand(Shell.java:464)
    at org.apache.hadoop.util.Shell.run(Shell.java:379)
    at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:589)
    at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:195)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:283)
    at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:79)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:724)
.Failing this attempt.. Failing the application.

At this point you are halfway there. Googling the error above leads to https://issues.apache.org/jira/browse/MAPREDUCE-5655. For Hadoop 2.6.0, only YARNRunner.java needs to be modified; MRApps.java can be left alone.

Modify YARNRunner.java as follows:

In that class, search for "// Setup the command to run the AM", comment out the line vargs.add(MRApps.crossPlatformifyMREnv(jobConf, Environment.JAVA_HOME) + "/bin/java");, and add below it:

String remoteOs = conf.get("mapred.remote.os");
vargs.add("Linux".equals(remoteOs)
        ? "$JAVA_HOME/bin/java"
        : MRApps.crossPlatformifyMREnv(jobConf, Environment.JAVA_HOME) + "/bin/java");

After recompiling the class, replace the corresponding class file inside the hadoop-mapreduce-client-jobclient-2.6.0.jar that Eclipse references.
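To splice the recompiled class back into the jar, the standard jar tool is enough. A sketch, assuming the patched YARNRunner.class (plus any YARNRunner$*.class inner classes) has been compiled into an org/apache/hadoop/mapred directory next to the jar:

jar uf hadoop-mapreduce-client-jobclient-2.6.0.jar org/apache/hadoop/mapred/YARNRunner.class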

Next, edit mapred-default.xml inside hadoop-mapreduce-client-core-2.6.0.jar (only the jar referenced by Eclipse needs to change) and add:

<property>
    <name>mapred.remote.os</name>
    <value>Linux</value>
    <description>
        Remote MapReduce framework's OS, can be either Linux or Windows
    </description>
</property>

(As an aside: after adding this property, I expected conf.get("mapred.remote.os") on a freshly created Configuration to return Linux, but it returned null. The likely reason is that a plain Configuration only loads core-default.xml and core-site.xml; mapred-default.xml is registered as a default resource only once the JobConf class has been loaded.)
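A quick way to check this, assuming the patched jar is on the Eclipse classpath:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapred.JobConf;

public class RemoteOsCheck {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // mapred-default.xml is not a default resource yet, so this prints null.
        System.out.println(conf.get("mapred.remote.os"));
        // JobConf's static initializer registers mapred-default.xml/mapred-site.xml
        // as default resources, so this prints Linux.
        JobConf jobConf = new JobConf();
        System.out.println(jobConf.get("mapred.remote.os"));
    }
}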

(3) Running the program again still fails. Open the YARN ResourceManager web UI and check the container logs, which show the following error:

Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

Following the fix described in https://issues.apache.org/jira/browse/MAPREDUCE-5655, modify mapred-default.xml and yarn-default.xml, found in hadoop-mapreduce-client-core-2.6.0.jar and hadoop-yarn-common-2.6.0.jar respectively (again, only the jars referenced by Eclipse need to change).

In mapred-default.xml, find mapreduce.application.classpath and change it to:

<property>
    <name>mapreduce.application.classpath</name>
    <value>
        $HADOOP_CONF_DIR,
        $HADOOP_COMMON_HOME/share/hadoop/common/*,
        $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
        $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
        $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
        $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,
        $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*,
        $HADOOP_YARN_HOME/share/hadoop/yarn/*,
        $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
    </value>
</property>

In yarn-default.xml, find yarn.application.classpath and change it to:

<property>
    <name>yarn.application.classpath</name>
    <value>
        $HADOOP_CONF_DIR,
        $HADOOP_COMMON_HOME/share/hadoop/common/*,
        $HADOOP_COMMON_HOME/share/hadoop/common/lib/*,
        $HADOOP_HDFS_HOME/share/hadoop/hdfs/*,
        $HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,
        $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*,
        $HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*,
        $HADOOP_YARN_HOME/share/hadoop/yarn/*,
        $HADOOP_YARN_HOME/share/hadoop/yarn/lib/*
    </value>
</property>

(4) After the above changes, running again fails with something like:

Caused by: java.lang.ClassNotFoundException: Class org.fansy.hadoop.mr.WordCount$WCMapper not found
    at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:1626)
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:1718)
    ... 8 more
The fix is to upload the WordCount jar to $HADOOP_HOME/share/hadoop/mapreduce/lib on every machine in the cluster. Run the job once more, and it succeeds.
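Copying the application jar to every node works, but a lighter alternative is to hand the client a pre-built jar so that YARN ships it with the job itself. A sketch, where the jar path is hypothetical:

// setJarByClass() finds nothing when classes live in Eclipse's bin/ directory,
// so point the job at a jar exported beforehand (File > Export > JAR in Eclipse).
Job job = Job.getInstance(conf, "wordcount");
job.setJar("D:\\workspace\\wordcount\\wordcount.jar"); // hypothetical path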