Hadoop Cluster Installation, Deployment, and Configuration - 20141119

1. Cluster Environment

Host list

Hostname   IP           Role     OS version
node1      10.0.0.101   master   rhel6.5
node2      10.0.0.102   slave    rhel6.5
node3      10.0.0.103   slave    rhel6.5


  1. JDK version: Java 1.8 (download)

  2. Hadoop version: hadoop-2.5.1 (download)

2. Cluster Environment Configuration

2.1 JDK and Hadoop Installation

a) Download the JDK and Hadoop packages and install them (by default the downloaded packages are placed under /opt)

# wget http://download.oracle.com/otn-pub/java/jdk/8u25-b17/jdk-8u25-linux-x64.tar.gz

# wget http://mirrors.hust.edu.cn/apache/hadoop/common/hadoop-2.5.1/hadoop-2.5.1.tar.gz

# rpm -qa | grep java  (if other Java versions are installed, remove them with "rpm -e <package-name>")

# mkdir /usr/java

# tar -zxf jdk-8u25-linux-x64.tar.gz -C /usr/java

# tar -zxf hadoop-2.5.1.tar.gz -C /data

# vim /etc/profile (append the following)

export HADOOP_HOME_WARN_SUPPRESS=1
export HADOOP_HOME=/data/hadoop-2.5.1
export JAVA_HOME=/usr/java/jdk1.8.0_25
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH:$HOME/bin
export HADOOP_CONF_DIR=/data/hadoop-2.5.1/etc/hadoop
export HADOOP_LOG_DIR=/data/hadoop-2.5.1/logs
export PATH=$PATH:/data/hadoop-2.5.1/bin

# source /etc/profile

# echo $JAVA_HOME; java -version (test)
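
Since /etc/profile also adds /data/hadoop-2.5.1/bin to PATH, an optional extra sanity check is to confirm the Hadoop command resolves and reports the expected release:

# hadoop version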

 

Note: the Java environment setup above must be performed once on every node.

2.2 Edit the hosts File

# vim /etc/hosts (append the following)

10.0.0.101  node1
10.0.0.102  node2
10.0.0.103  node3

 

Note: the change above must be made on every node.
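
To avoid editing the file by hand three times, one option is to push it from node1 (a sketch, assuming root SSH access to the other nodes; with the passwordless login from section 2.3 in place this runs without prompts):

# for h in node2 node3; do scp /etc/hosts root@$h:/etc/hosts; done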

2.3 Configure Passwordless SSH Login

a) Run the following commands on 10.0.0.101

# ssh-keygen -t rsa

# cat /root/.ssh/id_rsa.pub >>/root/.ssh/authorized_keys

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.0.0.102

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.0.0.103

# ssh 10.0.0.102 (test)

# ssh 10.0.0.103 (test)

 

b) Run the following commands on 10.0.0.102

# ssh-keygen -t rsa

# cat /root/.ssh/id_rsa.pub >>/root/.ssh/authorized_keys

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.0.0.101

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.0.0.103

# ssh 10.0.0.101 (test)

# ssh 10.0.0.103 (test)

 

c) Run the following commands on 10.0.0.103

# ssh-keygen -t rsa

# cat /root/.ssh/id_rsa.pub >>/root/.ssh/authorized_keys

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.0.0.101

# ssh-copy-id -i /root/.ssh/id_rsa.pub root@10.0.0.102

# ssh 10.0.0.101 (test)

# ssh 10.0.0.102 (test)
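
As a quick cross-check that every node can reach every other node without a password, a one-liner such as the following (run on each node) should print three hostnames with no password prompts, aside from first-time host-key confirmations:

# for h in node1 node2 node3; do ssh root@$h hostname; done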

2.4 Hadoop Cluster Configuration

a) Main Hadoop configuration files

File name                  Format             Description
core-site.xml              Hadoop config XML  Hadoop Core settings, e.g. I/O settings common to HDFS and MapReduce; sets the URL of the default (distributed) file system.
hdfs-site.xml              Hadoop config XML  Settings for the HDFS daemons: the namenode, the secondary namenode, and the datanodes; sets the local directories used by the nameNode and dataNode.
mapred-site.xml            Hadoop config XML  Settings for the MapReduce daemons (jobtracker and tasktrackers); configures MapReduce to run on the YARN framework.
yarn-site.xml              Hadoop config XML  Communication ports of the ResourceManager and NodeManagers, web monitoring ports, etc.
hadoop-env.sh              Bash script        Environment variables needed to run Hadoop.
yarn-env.sh                Bash script        Environment variables needed by the YARN framework.
hadoop-metrics.properties  Java properties    Properties that control how metrics are published on Hadoop.
log4j.properties           Java properties    Properties for the system log files, the namenode audit log, and the task logs of tasktracker child processes.
slaves                     Plain text         List of machines (one per line) that run a datanode and a tasktracker.


b) Change into /data/hadoop-2.5.1/etc/hadoop and apply the following configuration

====== Configuration start ======

# mkdir /data/hadoop-2.5.1/{logs,temp}

# vim core-site.xml

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/data/hadoop-2.5.1/temp</value>
        <description>A base for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node1:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
</configuration>
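
With fs.defaultFS set this way, HDFS clients resolve bare paths against hdfs://node1:9000, so once the cluster is up (section 3) the following two commands should list the same root directory:

# hdfs dfs -ls /
# hdfs dfs -ls hdfs://node1:9000/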

 

# vim hdfs-site.xml

<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>hadoop-cluster1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node1:50090</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/data/hadoop-2.5.1/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/data/hadoop-2.5.1/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>
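
Because dfs.webhdfs.enabled is true, HDFS is also reachable over the WebHDFS REST API on the namenode's HTTP port; for example, once the cluster is running, listing the root directory:

# curl -i "http://node1:50070/webhdfs/v1/?op=LISTSTATUS"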

 

# mv mapred-site.xml.template mapred-site.xml

# vim mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.http.address</name>
        <value>node1:50030</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node1:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node1:19888</value>
    </property>
</configuration>
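
Note that the two jobhistory addresses only take effect if the MapReduce JobHistory Server is actually running; start-yarn.sh does not start it, so it has to be launched separately on node1:

# /data/hadoop-2.5.1/sbin/mr-jobhistory-daemon.sh start historyserver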

 

# vim yarn-site.xml

<configuration>
<!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>node1:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>node1:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>node1:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>node1:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>node1:8088</value>
    </property>
</configuration>
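
The mapreduce_shuffle aux-service is what lets reducers fetch map output from the NodeManagers, so it must not be omitted. Once YARN is up (section 3), the registered NodeManagers can be listed from node1:

# yarn node -list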

 

# vim hadoop-env.sh (set the following)

export JAVA_HOME=/usr/java/jdk1.8.0_25

 

# vim yarn-env.sh (set the following)

export JAVA_HOME=/usr/java/jdk1.8.0_25

 

# vim slaves

node2
node3

====== Configuration end ======
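
Before distributing the tree, it can be worth catching XML typos early. If the libxml2 tools are installed (they may not be on a minimal rhel6.5 install), something like the following validates each file:

# cd /data/hadoop-2.5.1/etc/hadoop
# for f in core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do xmllint --noout $f; done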

2.5 Distribute Hadoop to the Slaves

# rsync -avz /data/hadoop-2.5.1 root@node2:/data/

# rsync -avz /data/hadoop-2.5.1 root@node3:/data/

2.6 Format the File System

# cd /data/hadoop-2.5.1

# bin/hdfs namenode -format
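
If the format succeeds, the directory set in dfs.namenode.name.dir is initialized; a quick way to confirm is to check that it now contains a current/ subdirectory with a VERSION file:

# ls /data/hadoop-2.5.1/dfs/name/current

Be aware that re-running the format on an existing cluster generates a new clusterID, after which datanodes holding the old data directories will refuse to register.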

3. Starting and Stopping Hadoop Services

3.1 Start the Services

# cd /data/hadoop-2.5.1

# sbin/start-dfs.sh

# sbin/start-yarn.sh

3.2 Stop the Services

# cd /data/hadoop-2.5.1

# sbin/stop-dfs.sh

# sbin/stop-yarn.sh

4. Verification

4.1 Check the Running Processes

# jps
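
With the configuration above, jps should typically report the following daemons (JobHistoryServer appears on node1 only if it was started separately):

node1: NameNode, SecondaryNameNode, ResourceManager
node2 / node3: DataNode, NodeManager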

4.2 Access via a Browser

Cluster HDFS: http://10.0.0.101:50070/    # default defined in hadoop-common/hadoop-hdfs

YARN ResourceManager: http://10.0.0.101:8088/    # defined in yarn-site.xml
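
As an end-to-end check from the command line, you can ask the namenode for a datanode report and run the example job shipped with the distribution (jar name as packaged in the hadoop-2.5.1 tarball):

# cd /data/hadoop-2.5.1
# bin/hdfs dfsadmin -report
# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.1.jar pi 2 10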