Configure the Base Environment

1. In a test environment, simply disable the firewall

2. Add a hosts record for the host

# vim /etc/hosts
10.2.24.17 hadoop

3. Create the hadoop user

# useradd hadoop
# passwd hadoop

4. Set up passwordless SSH login (without it, you will be prompted for a password each time the services start)

# su - hadoop
$ ssh-keygen -t rsa
$ ssh-copy-id hadoop@localhost
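To see what ssh-copy-id actually does, note that it simply appends your public key to the target account's ~/.ssh/authorized_keys. A minimal sketch of the equivalent manual steps, using a throwaway key in a scratch directory so it is safe to run anywhere:

```shell
# ssh-copy-id appends your public key to the target's ~/.ssh/authorized_keys.
# This sketch reproduces those steps with a throwaway key (illustrative only).
demo=$(mktemp -d)

# Generate a key pair non-interactively (empty passphrase, as in step 4).
ssh-keygen -t rsa -b 2048 -N "" -f "$demo/id_rsa" -q

# This is the part ssh-copy-id automates:
cat "$demo/id_rsa.pub" >> "$demo/authorized_keys"
chmod 600 "$demo/authorized_keys"

echo "installed $(wc -l < "$demo/authorized_keys") key(s)"
```

On the real host the destination file is ~/.ssh/authorized_keys for the hadoop user, and ~/.ssh must be mode 700.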

Install the JDK

1. Remove the OpenJDK bundled with the system

# yum remove "*openjdk*"

2. Install the JDK

Download the JDK, then configure the environment variables (e.g. in /etc/profile):

export JAVA_HOME=/usr/java/jdk1.8.0_131
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:$CLASSPATH
export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
export PATH=$PATH:${JAVA_PATH}
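The variables above chain together, so a typo in JAVA_HOME propagates into everything else. As a quick sanity check (pure string expansion, no JDK required), you can echo the composed path; the JDK directory is the tutorial's value, so substitute your own install path:

```shell
# Reproduce the export chain from above and print the composed bin path.
# /usr/java/jdk1.8.0_131 is the tutorial's install path; adjust as needed.
JAVA_HOME=/usr/java/jdk1.8.0_131
JRE_HOME=${JAVA_HOME}/jre
JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
echo "$JAVA_PATH"
```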

3. Reload the configuration and check the Java version

$ source /etc/profile
$ java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
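Hadoop 2.x requires Java 7 or 8, and `java -version` writes to stderr, which trips up naive scripting. A small sketch that extracts and checks the version string, demonstrated on the sample output above (in real use you would capture it with `java -version 2>&1 | head -n1`):

```shell
# Extract the quoted version from the first line of `java -version` output.
# The sample line below is the output shown in the tutorial.
ver_line='java version "1.8.0_131"'

# Strip everything up to the first quote, then the trailing quote.
ver=${ver_line#*\"}
ver=${ver%\"}
echo "$ver"

case $ver in
  1.7.*|1.8.*) echo "Java version OK for Hadoop 2.x" ;;
  *)           echo "WARNING: unexpected Java version: $ver" >&2 ;;
esac
```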

Deploy Hadoop

1. Install Hadoop

$ wget https://archive.apache.org/dist/hadoop/common/hadoop-2.10.2/hadoop-2.10.2.tar.gz
$ tar zxvf hadoop-2.10.2.tar.gz
$ mv hadoop-2.10.2 hadoop

2. Add environment variables

$ vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_131
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib:$CLASSPATH
export JAVA_PATH=${JAVA_HOME}/bin:${JRE_HOME}/bin
export HADOOP_HOME=/home/bigdata/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export PATH=$PATH:${JAVA_PATH}:${HADOOP_HOME}/bin

$ source /etc/profile

3. Configure Hadoop

The configuration files live under $HADOOP_HOME/etc/hadoop:

$ cd $HADOOP_HOME/etc/hadoop
# 1. Edit core-site.xml
$ vim core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/data/tmp</value>
  </property>
</configuration>
# 2. Edit hdfs-site.xml
$ vim hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <!-- a single node can only hold one replica -->
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/data/namenode</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/data/datanode</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop:9001</value>
  </property>
</configuration>
 
# 3. Edit mapred-site.xml
$ cp mapred-site.xml.template mapred-site.xml
$ vim mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
 
# 4. Edit hadoop-env.sh (if JAVA_HOME is not set explicitly here, startup fails with a "JAVA_HOME is not set" error)
$ vim hadoop-env.sh
export JAVA_HOME=${JAVA_HOME}
change it to
export JAVA_HOME=/usr/java/jdk1.8.0_131
 
# 5. Edit yarn-env.sh (same reason: an unset JAVA_HOME causes a startup error)
$ vim yarn-env.sh
Add near the top of the script:
export JAVA_HOME=/usr/java/jdk1.8.0_131
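The core-site.xml and hdfs-site.xml snippets above reference directories under /home/hadoop/data; creating them before formatting avoids permission errors on startup. A sketch, assuming you run it as the hadoop user (DATA_ROOT is just a helper variable for this snippet, not a Hadoop setting):

```shell
# Create the storage directories referenced in core-site.xml / hdfs-site.xml.
# The tutorial uses /home/hadoop/data, i.e. $HOME/data for the hadoop user.
DATA_ROOT=${DATA_ROOT:-$HOME/data}
mkdir -p "$DATA_ROOT/tmp" "$DATA_ROOT/namenode" "$DATA_ROOT/datanode"
ls "$DATA_ROOT"
```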

4. Format HDFS

$ hdfs namenode -format

5. Start the services

$ cd $HADOOP_HOME
$ sbin/start-dfs.sh
$ sbin/start-yarn.sh

Check that everything started:

$ jps
16321 NameNode
16958 ResourceManager
16011 SecondaryNameNode
16127 Jps
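On a fully started single node you should also see DataNode and NodeManager in the jps output. A small helper (written for this tutorial, not a Hadoop command) that scans jps output for the expected daemons:

```shell
# check_daemons: report any expected Hadoop daemon missing from jps output.
check_daemons() {
  out=$1
  status=0
  for d in NameNode SecondaryNameNode DataNode ResourceManager NodeManager; do
    # -w prevents "NameNode" from matching inside "SecondaryNameNode"
    if ! echo "$out" | grep -qw "$d"; then
      echo "missing: $d"
      status=1
    fi
  done
  return $status
}

# In real use: check_daemons "$(jps)". Against the sample output above:
sample='16321 NameNode
16958 ResourceManager
16011 SecondaryNameNode
16127 Jps'
check_daemons "$sample" || echo "some daemons are not running yet"
```

Run against the sample output above, it flags DataNode and NodeManager as missing, so you would recheck the logs under $HADOOP_HOME/logs.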

Open http://192.144.200.230:50070 in a browser (substitute your server's address) to view the NameNode web UI.

(screenshot: NameNode web UI)

Open http://192.144.200.230:8088 in a browser to view the YARN ResourceManager web UI.

(screenshot: ResourceManager web UI)