Hadoop 2.4.1 pseudo-distributed mode deployment - wrencai

Posted 2014-08-08 14:54:33

Original post: http://www.cnblogs.com/wrencai/p/3899375.html


(Continuing the configuration from the previous post on compiling and installing hadoop-2.4.1-src: http://www.cnblogs.com/wrencai/p/3897438.html)


Thanks to: http://blog.sina.com.cn/s/blog_5252f6ca0101kb3s.html

Thanks to: http://blog.csdn.net/coolwzjcool/article/details/32072157


1. Configure the Hadoop environment variables


Append the Hadoop installation directory to PATH at the end of /etc/profile:


export HADOOP_PREFIX=/opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1

export PATH=$PATH:$HADOOP_PREFIX/bin
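
To make the variables take effect in the current shell and confirm the binaries resolve, you can reload the profile (a quick sanity check; hadoop version should report 2.4.1 if PATH is right):

source /etc/profile
hadoop version    # should print Hadoop 2.4.1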

2. Edit the Hadoop configuration files


Change into the Hadoop installation directory, which here is /opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1.


Configure the files under etc/hadoop (the relevant files are hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml and yarn-site.xml), as shown in the steps below.
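
All of these files live in etc/hadoop under the installation directory. hadoop-env.sh usually needs nothing more than a valid JAVA_HOME; a quick sketch to check it (the JDK path in the comment is a placeholder, not from this setup):

cd /opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1/etc/hadoop
# in hadoop-env.sh, make sure JAVA_HOME points at a real JDK, e.g.
#   export JAVA_HOME=/usr/java/jdk1.7.0_55   (placeholder path)
grep JAVA_HOME hadoop-env.sh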


a. Configure core-site.xml


<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/hadoop-2.4.1/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/hadoop-2.4.1/dfs/data</value>
  </property>
</configuration>

Note: the hadoop in the paths above is the account name I created for the Hadoop 2.4.1 setup (the system created its home directory under /home automatically); change it to suit your own machine. Also, fs.default.name is the deprecated predecessor of fs.defaultFS; Hadoop 2.4.1 accepts either name.
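
The daemons will normally create these directories themselves when the NameNode is formatted and the DataNode first starts, but creating them up front avoids permission surprises (a sketch assuming the hadoop account and paths above):

mkdir -p /home/hadoop/hadoop-2.4.1/dfs/name /home/hadoop/hadoop-2.4.1/dfs/data
chown -R hadoop:hadoop /home/hadoop/hadoop-2.4.1/dfs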


b. Configure hdfs-site.xml


<configuration>
  <property>
    <name>dfs.replication</name>
    <!-- HDFS keeps 3 replicas of each file by default; use 1 in pseudo-distributed mode -->
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hadoop-2.4.1/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hadoop-2.4.1/dfs/data</value>
  </property>
</configuration>
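
Once the file is saved you can check that the merged configuration is picked up (assuming $HADOOP_PREFIX/bin is on PATH from step 1):

hdfs getconf -confKey dfs.replication    # expect: 1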

c. Configure mapred-site.xml. mapreduce.framework.name is a MapReduce-side property, so it is read from this file; placing it here (rather than in yarn-site.xml) ensures that job clients actually submit to YARN instead of the local runner.
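
In the 2.4.1 distribution this file ships only as a template, so create it first (from etc/hadoop):

cp mapred-site.xml.template mapred-site.xml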


<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobtracker.address</name>
    <value>localhost:9001</value>
  </property>
</configuration>

d. Configure yarn-site.xml


<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>

3. Passwordless SSH login (see http://lhflinux.blog.51cto.com/1961662/526122)


SSH connections normally require password authentication. By adding key-based authentication (a public/private key pair), logins between machines, and between the daemons on this one, no longer prompt for a password.


a. Edit the file: vi /etc/ssh/sshd_config


RSAAuthentication yes                        enable RSA key authentication
PubkeyAuthentication yes                     enable public-key authentication
AuthorizedKeysFile .ssh/authorized_keys      where public keys are stored
PasswordAuthentication no                    refuse password logins
GSSAPIAuthentication no                      avoids slow logins and GSSAPI errors
ClientAliveInterval 300                      disconnect idle sessions after 300 seconds
ClientAliveCountMax 10                       keep-alive probes allowed before the connection is dropped
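
sshd only rereads its configuration on restart, so apply the changes afterwards (a sketch assuming a RHEL/CentOS-style service manager, which the root prompts below suggest). Also note that turning PasswordAuthentication off before key login works can lock you out, so it is safer to flip that line only after step c succeeds:

service sshd restart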


b. In root's home directory, run:


ssh-keygen -t rsa -P ''

Press Enter to accept the default key location (with -P '' no passphrase is requested). When it finishes, run the next command. This machine acts as a node of the pseudo-cluster, so its own key must also be written to authorized_keys; skipping it can produce the error "agent admitted failure to sign using the key" (see http://blog.chinaunix.net/uid-28228356-id-3510267.html):


cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
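
sshd also rejects keys whose files are group- or world-accessible, so these permissions are a common requirement:

chmod 700 /root/.ssh
chmod 600 /root/.ssh/authorized_keys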

c. Run the command below; if you get a shell without a password prompt, the setup works:


[root@localhost]# ssh localhost
Last login: Fri Aug  8 13:44:42 2014 from localhost
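
A stricter check is to force batch mode, which fails outright instead of falling back to a password prompt (an optional verification sketch):

ssh -o BatchMode=yes localhost true && echo "key auth OK"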


4. Run and test Hadoop


a. From the hadoop-2.4.1 directory, run the command below to format the NameNode. The output should end with a "shutting down..." message, with no WARN or FATAL errors before it. You may see the message STARTUP_MSG: host = java.net.UnknownHostException: localhost.localdomain: localhost.localdomain; see http://lxy2330.iteye.com/blog/1112806 for a permanent fix, or temporarily rename the host with the hostname localhost command.


./bin/hadoop namenode -format
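
This spelling still works in 2.x but prints a deprecation warning; the current equivalent is:

./bin/hdfs namenode -format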

b. Start Hadoop with sbin/start-all.sh. The first attempt may not bring everything up; if so, run sbin/stop-all.sh once and then sbin/start-all.sh again. Finally, check the running processes with jps:


[root@localhost hadoop-2.4.1]# ./sbin/start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
localhost: starting namenode, logging to /opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1/logs/hadoop-root-namenode-localhost.out
localhost: starting datanode, logging to /opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1/logs/hadoop-root-datanode-localhost.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1/logs/hadoop-root-secondarynamenode-localhost.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1/logs/yarn-root-resourcemanager-localhost.out
localhost: starting nodemanager, logging to /opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1/logs/yarn-root-nodemanager-localhost.out
[root@localhost hadoop-2.4.1]# ssh localhost
Last login: Fri Aug  8 13:44:41 2014 from localhost
[root@localhost ~]# jps
28186 ResourceManager
28025 SecondaryNameNode
27743 NameNode
28281 NodeManager
29223 Jps
[root@localhost ~]#
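
Note that no DataNode appears in this jps listing. If your output looks the same, check the DataNode log named in the startup messages; a cluster-ID mismatch left over from re-formatting the NameNode is a frequent cause (a troubleshooting sketch using the paths from the output above):

# the full daemon log sits next to the .out file shown by start-all.sh
tail -n 50 /opt/hadoop-2.4.1-src/hadoop-dist/target/hadoop-2.4.1/logs/hadoop-root-datanode-localhost.log
# the NameNode web UI at http://localhost:50070 also lists live DataNodes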