1. Environment Overview

(1) Software versions
OS: rhel6.3-x86_64
JDK: jdk1.6.0_41
Hadoop: hadoop-1.1.1

(2) IP address plan
hserver  192.168.183.130 NameNode
hclient1 192.168.183.131 DataNode
hclient2 192.168.183.132 DataNode

2. Clear iptables (disable the firewall)

# iptables -F
# /etc/init.d/iptables stop
# chkconfig iptables off
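
The same three commands should be repeated on hclient1 and hclient2. A quick check that the firewall really is off and stays off after reboot (standard RHEL 6 service tooling):

# service iptables status
# chkconfig --list iptables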

3. Configure the hosts file for name resolution

# vim /etc/hosts
192.168.183.130 hserver
192.168.183.131 hclient1
192.168.183.132 hclient2
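
All three nodes need the same resolution entries. A simple way to push the file out from hserver, assuming root SSH access to the clients:

# scp /etc/hosts root@hclient1:/etc/hosts
# scp /etc/hosts root@hclient2:/etc/hosts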

4. Create a dedicated user for running Hadoop

# useradd hadoop -d /hadoop
# echo "redhat" |passwd hadoop --stdin

5. Configure passwordless SSH login for the hadoop user
$ ssh-keygen (press Enter at every prompt)
$ ssh-copy-id -i .ssh/id_rsa.pub [hostname] (answer yes, then enter the password)
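
For this cluster, run the following as the hadoop user on hserver; copying the key to hserver itself as well lets start-all.sh start the local SecondaryNameNode without a password (a sketch):

$ ssh-keygen -t rsa
$ ssh-copy-id -i .ssh/id_rsa.pub hadoop@hserver
$ ssh-copy-id -i .ssh/id_rsa.pub hadoop@hclient1
$ ssh-copy-id -i .ssh/id_rsa.pub hadoop@hclient2
$ ssh hclient1 date (should print the date without prompting for a password)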

6. Install the JDK
$ chmod +x jdk-6u41-linux-x64.bin
$ ./jdk-6u41-linux-x64.bin
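
hadoop-env.sh below expects the JDK at /hadoop/jdk1.6.0_41, so move the extracted directory there and export JAVA_HOME for the hadoop user (a sketch, assuming the installer was run somewhere else):

$ mv jdk1.6.0_41 /hadoop/
$ vim ~/.bash_profile
export JAVA_HOME=/hadoop/jdk1.6.0_41
export PATH=$JAVA_HOME/bin:$PATH
$ source ~/.bash_profile
$ java -version (should report 1.6.0_41)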

7. Install Hadoop
Download link: http://archive.apache.org/dist/hadoop/core/hadoop-1.1.1/hadoop-1.1.1.tar.gz
Extract the archive: $ tar zvxf hadoop-1.1.1.tar.gz
Hadoop installation directory: /hadoop/hadoop-1.1.1
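
Putting the Hadoop bin directory on the PATH lets later commands such as hadoop dfs -ls / be run from any directory (a sketch, appended to the hadoop user's ~/.bash_profile; Hadoop 1.x may print a harmless "$HADOOP_HOME is deprecated" warning when this variable is set):

export HADOOP_HOME=/hadoop/hadoop-1.1.1
export PATH=$HADOOP_HOME/bin:$PATH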

8. Configure Hadoop

Edit the following configuration files in the conf directory of the Hadoop installation: hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml, masters, slaves. The configuration also has to reach the slave nodes; see the copy sketch after subsection (6).

(1) $ vim hadoop-env.sh

export JAVA_HOME=/hadoop/jdk1.6.0_41

(2) $ vim core-site.xml

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hserver:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/hadoop/hadoop</value>
  </property>
</configuration>

(3) $ vim hdfs-site.xml

<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>

(4) $ vim mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hserver:9001</value>
  </property>
</configuration>

(5) $ vim masters

hserver

(6) $ vim slaves

hclient1
hclient2
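
The slave nodes need the same JDK, Hadoop tree, and configuration files as hserver. One way to push everything out after editing, a sketch that assumes the passwordless SSH from step 5 and enough space under /hadoop on the clients:

$ scp -r /hadoop/jdk1.6.0_41 /hadoop/hadoop-1.1.1 hadoop@hclient1:/hadoop/
$ scp -r /hadoop/jdk1.6.0_41 /hadoop/hadoop-1.1.1 hadoop@hclient2:/hadoop/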

9. Format the NameNode (on the hserver node)

$ bin/hadoop namenode -format

10. Start Hadoop

(1) Start the daemons from the NameNode (hserver node)

$ bin/start-all.sh

(2) Check the daemon status with jps

NameNode (hserver node)

4389 JobTracker
4116 NameNode
4297 SecondaryNameNode
10436 Jps

DataNode (hclient1 node)

6794 Jps
2784 DataNode
2894 TaskTracker
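
Beyond jps, the DataNodes' registration with the NameNode can be verified from hserver; with both clients up, the report should list two live datanodes:

$ bin/hadoop dfsadmin -report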


11. Using Hadoop

(1) Via the web interface
NameNode web UI: http://hserver:50070
JobTracker web UI: http://hserver:50030

(2) Via the command line
$ hadoop dfs -ls /
Found 1 items
drwxr-xr-x   - hadoop supergroup          0 2013-03-07 04:17 /hadoop
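
To exercise MapReduce as well as HDFS, the bundled example jar can be used for a small end-to-end test (a sketch; the jar sits at the top of the Hadoop 1.1.1 installation directory, and the /input and /output paths are arbitrary):

$ hadoop fs -mkdir /input
$ hadoop fs -put /hadoop/hadoop-1.1.1/conf/*.xml /input
$ hadoop jar /hadoop/hadoop-1.1.1/hadoop-examples-1.1.1.jar wordcount /input /output
$ hadoop fs -cat /output/part-*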