1. JDK installation:

1) Create the directory /usr/java and copy jdk-6u43-linux-i586-rpm.bin into /usr/java on the CentOS machine.

2) Make the installer executable: chmod 755 jdk-6u43-linux-i586-rpm.bin

3) Run the installer: ./jdk-6u43-linux-i586-rpm.bin

4) Set the environment variables. Open /etc/profile and, at the end of the file (between the `done` and `unset i` lines), add the following:

export JAVA_HOME=/usr/java/jdk1.6.0_43
export JRE_HOME=/usr/java/jdk1.6.0_43/jre
export HADOOP_HOME=/home/u/hadoop-1.1.2
export HADOOP_HOME_WARN_SUPPRESS=1
export ANT_HOME=/usr/java/apache-ant-1.9.0
export HBASE_HOME=/home/u/hbase-0.95.0-hadoop1
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$HADOOP_HOME/lib:$CLASSPATH
export CLASSPATH=$HBASE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$JRE_HOME/bin:$HADOOP_HOME/bin:$ANT_HOME/bin:$PATH
export PATH=$HBASE_HOME/bin:$PATH
When done, reload the configuration: source /etc/profile
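A quick sanity check after sourcing the profile can catch a typo in these paths early. The helper below is a hypothetical sketch (`check_home` is not part of any tool); the directory paths are the ones assumed by the export lines above, so adjust them to your actual install locations.

```shell
# check_home: hypothetical helper that verifies a variable's target
# directory exists ($1 = variable name, $2 = path from the profile above)
check_home() {
  if [ -d "$2" ]; then
    echo "$1 ok"
  else
    echo "$1 missing: $2"
  fi
}

check_home JAVA_HOME   /usr/java/jdk1.6.0_43
check_home HADOOP_HOME /home/u/hadoop-1.1.2
check_home HBASE_HOME  /home/u/hbase-0.95.0-hadoop1
```

Any "missing" line means the corresponding export in /etc/profile points at a directory that does not exist yet.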

  2. Firewall handling

Stop the firewall with service iptables stop. Note that the firewall will come back up after a reboot; to keep it off permanently, change the firewall settings in the system configuration.
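On CentOS 5/6, assuming the stock iptables init script is in use, one way to make the change survive reboots is to disable the service at the runlevel configuration instead of only stopping the running instance:

```shell
# Stop the firewall now (this alone is lost on reboot)
service iptables stop
# Keep it from starting again at boot (persists across reboots)
chkconfig iptables off
# Verify: every runlevel should now show "off"
chkconfig --list iptables
```

These commands require root and apply to SysV-init CentOS releases; systemd-based releases manage the firewall differently.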

  3. Install the OpenSSH server

First, search the CentOS package repositories for a prebuilt SSH server package:

$ yum search ssh
$ yum install openssh-server
$ chkconfig --list sshd
sshd            0:off   1:off   2:on    3:on    4:on    5:on    6:off
$ /etc/init.d/sshd start

  4. Configure passwordless SSH login

[hadoop@master ~]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa):
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
89:24:d6:39:76:aa:61:2e:e0:3b:e8:53:98:d2:16:de hadoop@master
The key's randomart image is:
(randomart image omitted)
[hadoop@master ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@master ~]$ chmod 600 ~/.ssh/authorized_keys
[hadoop@master ~]$
[hadoop@master ~]$ scp .ssh/authorized_keys slave1:~/.ssh/
[hadoop@master ~]$ scp .ssh/authorized_keys slave2:~/.ssh

If the .ssh directory and authorized_keys have been set up but logging in still prompts for a password, the cause is most likely file permissions.

-- Note: the following commands must be run on every machine:

chmod 700 ~/.ssh/
chmod 700 /home/userName
chmod 600 ~/.ssh/authorized_keys
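As a local illustration of the modes sshd expects, the snippet below applies the same chmod values to a throwaway directory (safe to run anywhere; it does not touch your real ~/.ssh):

```shell
# Create a throwaway directory and apply the same modes as above
demo=$(mktemp -d)
mkdir "$demo/.ssh"
touch "$demo/.ssh/authorized_keys"
chmod 700 "$demo/.ssh"                  # directory: owner-only access
chmod 600 "$demo/.ssh/authorized_keys"  # key file: owner read/write only
# Show the resulting octal modes
stat -c '%a %n' "$demo/.ssh" "$demo/.ssh/authorized_keys"
rm -rf "$demo"
```

sshd's StrictModes check (enabled by default) rejects key-based logins when the home directory, ~/.ssh, or authorized_keys is writable by group or others, which is why the chmod lines above matter.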

5. Configure the master and slaves files

[hadoop@master conf]$ more master
master
[hadoop@master conf]$ more slaves
slave1
slave2
slave3

6. Before formatting the NameNode, first check that the entries in /etc/hosts are correct; once verified, run the format command:

[hadoop@master ~]$ more /etc/hosts
127.0.0.1       localhost
192.168.129.201 master
192.168.129.202 slave1
192.168.129.203 slave2
192.168.129.1   slave3
[hadoop@master ~]$ hadoop namenode -format
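The pre-format check can be scripted. The loop below (a sketch; hostnames taken from the hosts file above) verifies that each cluster hostname actually resolves:

```shell
# getent consults /etc/hosts and DNS, the same sources most network
# tools use, so it reflects what Hadoop will see
for h in master slave1 slave2 slave3; do
  getent hosts "$h" > /dev/null && echo "$h resolves" || echo "$h UNRESOLVED"
done
```

Any "UNRESOLVED" line should be fixed in /etc/hosts before running the format command.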

7. After starting Hadoop, check the related processes:

[hadoop@master conf]$ jps
3454 SecondaryNameNode
3537 JobTracker
3292 NameNode
12722 Jps
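A small loop can confirm that the expected Hadoop 1.x master-side daemons appear in the jps output (daemon names as in the listing above):

```shell
# grep -w matches the daemon name as a whole word in the jps output
for d in NameNode SecondaryNameNode JobTracker; do
  if jps 2> /dev/null | grep -qw "$d"; then
    echo "$d running"
  else
    echo "$d NOT running"
  fi
done
```

On the slave nodes the names to check would instead be DataNode and TaskTracker.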