I recently worked through installing Hadoop; the steps are described in detail below.
I. Environment
I installed on Linux. If you want to follow along on Windows, you can either use a virtual machine or emulate a Linux environment with Cygwin.
There are three servers, assigned as follows:
10.0.1.100 NameNode
10.0.1.201 DataNode1
10.0.1.202 DataNode2
The NameNode (master) can be seen as the manager of the distributed file system: it maintains the file system namespace, the cluster configuration, and the replication of storage blocks.
A DataNode (slave) is the basic unit of file storage: it keeps blocks in its local file system along with their metadata, and periodically reports all of its blocks to the NameNode.
1. Install the JDK
Find a suitable JDK version online; I downloaded jdk-6u23-linux-i586.bin (the 32-bit Linux build). Upload it to /usr/local/java/jdk on each of the three servers (adjust the directory to your liking).
jdk-6u23-linux-i586.bin is a self-extracting file, so first make it executable:
chmod +x jdk-6u23-linux-i586.bin
./jdk-6u23-linux-i586.bin
Press Enter at the prompts, and extraction completes.
Next, edit the configuration:
vi /etc/profile
Append the following environment variable settings to the end of profile:
JAVA_HOME=/usr/local/java/jdk/jdk1.6.0_23
CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
PATH=$JAVA_HOME/bin:$PATH
export JAVA_HOME CLASSPATH PATH
Reconnect to the server over SSH, then test whether the JDK was installed successfully:
java -version
If all is well, it prints something like:
java version "1.6.0_23"
Java(TM) SE Runtime Environment (build 1.6.0_23-b05)
Java HotSpot(TM) Client VM (build 19.0-b09, mixed mode, sharing)
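A side note: /etc/profile is only read by login shells, so as an alternative to reconnecting you can load the new variables into the current shell by sourcing the file (a standard bash step, not part of the original write-up):
source /etc/profile
echo $JAVA_HOME   # should print /usr/local/java/jdk/jdk1.6.0_23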
2. Configure SSH
Most Linux distributions ship with an SSH service; if yours does not, install one yourself (not covered here).
We need to configure SSH so that the Hadoop environment can log in without a password: the NameNode must be able to SSH into each DataNode without being prompted.
On the NameNode server, run the following commands:
[root@localhost hadoop]# cd ~
[root@localhost ~]# cd .ssh/
[root@localhost .ssh]# ssh-keygen -t rsa
Press Enter through every prompt. Two new files appear in .ssh: the private key id_rsa and the public key id_rsa.pub.
Copy id_rsa.pub to authorized_keys:
[root@localhost .ssh]# cp id_rsa.pub authorized_keys
Distribute authorized_keys to each DataNode:
[root@localhost .ssh]# scp authorized_keys root@10.0.1.201:/root/.ssh/
[root@localhost .ssh]# scp authorized_keys root@10.0.1.202:/root/.ssh/
Note: if the current user's home directory has no .ssh directory, create one yourself.
Verify passwordless SSH login:
[root@localhost .ssh]# ssh root@10.0.1.201
Last login: Mon Jan 5 09:46:01 2015 from 10.0.1.100
If you see output like the above, the configuration succeeded. If you are still prompted for a password, it failed.
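A frequent cause of a lingering password prompt is file permissions: by default sshd refuses to use keys whose files are too open. Standard OpenSSH practice is to tighten them on each DataNode:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys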
II. Download and Install Hadoop
Download a suitable Hadoop release from the official site (http://hadoop.apache.org/). I chose the fairly recent 2.5.2 (2.6.0 was the latest at the time). Upload hadoop-2.5.2.tar.gz to /root/test on all three servers, switch to that directory, and extract:
tar -zvxf hadoop-2.5.2.tar.gz
On the master (10.0.1.100), enter the configuration directory: cd hadoop-2.5.2/etc/hadoop
core-site.xml
<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://10.0.1.100:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
</configuration>
Note: the two slave servers need the same change to core-site.xml (one way to push the file out is sketched below); all of the remaining configuration applies to the master only.
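A minimal sketch for distributing the edited file from the master, assuming the slaves use the same /root/test/hadoop-2.5.2 layout described above:
scp core-site.xml root@10.0.1.201:/root/test/hadoop-2.5.2/etc/hadoop/
scp core-site.xml root@10.0.1.202:/root/test/hadoop-2.5.2/etc/hadoop/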
hdfs-site.xml
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>hadoop-cluster1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>10.0.1.100:50090</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///home/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>10.0.1.100:50030</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>10.0.1.100:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>10.0.1.100:19888</value>
    </property>
</configuration>
yarn-site.xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>10.0.1.100:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>10.0.1.100:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>10.0.1.100:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>10.0.1.100:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>10.0.1.100:8088</value>
    </property>
</configuration>
slaves
10.0.1.201
10.0.1.202
hadoop-env.sh
export JAVA_HOME=/usr/local/java/jdk/jdk1.6.0_23
yarn-env.sh
export JAVA_HOME=/usr/local/java/jdk/jdk1.6.0_23
3. Format the File System
bin/hdfs namenode -format
Note: this does not format any disk; it only initializes (and cleans out) the dfs.namenode.name.dir and dfs.datanode.data.dir directories configured in the master's hdfs-site.xml.
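If those directories (and the hadoop.tmp.dir from core-site.xml) do not exist yet, it does no harm to create them first; an optional step, not in the original write-up, using the paths configured above:
mkdir -p /home/hadoop/dfs/name /home/hadoop/dfs/data /home/hadoop/tmp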
4. Start and Stop the Services
Start:
sbin/start-dfs.sh
sbin/start-yarn.sh
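Once both scripts have run, a quick smoke test confirms HDFS is writable (the /test path here is just an arbitrary example, not from the original steps):
bin/hdfs dfs -mkdir /test
bin/hdfs dfs -ls /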
Stop:
sbin/stop-dfs.sh
sbin/stop-yarn.sh
5. Check the Running Processes
jps
On the master the output looks like:
14140 ResourceManager
13795 NameNode
14399 Jps
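To confirm the slave daemons (DataNode and NodeManager) as well, you can invoke jps over the passwordless SSH set up earlier; the full JDK path is used because a non-interactive SSH command may not read /etc/profile:
ssh root@10.0.1.201 /usr/local/java/jdk/jdk1.6.0_23/bin/jps
ssh root@10.0.1.202 /usr/local/java/jdk/jdk1.6.0_23/bin/jps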
III. Access via Browser
http://10.0.1.100:50070/ (the HDFS NameNode web UI)
http://10.0.1.100:8088/ (the YARN ResourceManager web UI)
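If no desktop browser can reach these hosts, a reachability check from any shell works too (assuming curl is installed; this is not part of the original steps):
curl -sI http://10.0.1.100:50070/ | head -n 1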
---------------
One special note: the master's slaves file above uses IP addresses. In that case you must add IP-to-hostname mappings to the master's /etc/hosts, like this:
10.0.1.201 anyname1
10.0.1.202 anyname2
Otherwise, running start-dfs.sh may leave errors like the following in a slave's DataNode log:
2015-01-16 17:06:54,375 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool BP-1748412339-10.0.1.212-1420015637155 (Datanode Uuid null) service to /10.0.1.218:9000 Datanode denied communication with namenode because hostname cannot be resolved (ip=10.0.1.217, hostname=10.0.1.217): DatanodeRegistration(0.0.0.0, datanodeUuid=3ed21882-db82-462e-a71d-0dd52489d19e, infoPort=50075, ipcPort=50020, storageInfo=lv=-55;cid=CID-4237dee9-ea5e-4994-91c2-008d9e804960;nsid=358861143;c=0)
Roughly, it means the IP address could not be resolved to a hostname, so the mapping has to be supplied in /etc/hosts.
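After editing /etc/hosts you can check on the master that both directions resolve (anyname1/anyname2 are the placeholder hostnames from above):
getent hosts 10.0.1.201
ping -c 1 anyname1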