Hadoop 2.9.1 on Ubuntu 16.04

Environment setup

Three Ubuntu 16.04 virtual machines:

10.64.104.177  hadoop-master
10.64.104.178  hadoop-node1
10.64.104.179  hadoop-node2

1. Install the JDK

# Install on all three machines
sudo apt-get update
sudo apt-get install default-jdk

java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-8u181-b13-0ubuntu0.16.04.1-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)

# Add JAVA_HOME and the Hadoop variables (the quoted 'EOF' keeps $PATH from being expanded while writing the file)
cat << 'EOF' >> ~/.bashrc
#HADOOP VARIABLES START
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_INSTALL/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_INSTALL/lib"
#HADOOP VARIABLES END
EOF
source ~/.bashrc
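
A quick check that the variables took effect in the current shell (the JVM path below matches the default-jdk install on Ubuntu 16.04):

echo $JAVA_HOME                 # /usr/lib/jvm/java-8-openjdk-amd64
$JAVA_HOME/bin/java -version    # should print the same OpenJDK 1.8 version as above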

2. Create the hadoop user

sudo useradd -m hadoop -s /bin/bash   # add the user
sudo passwd hadoop                    # set its password
sudo adduser hadoop sudo              # grant it sudo rights
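
The remaining steps assume you are working as this hadoop user; a quick sanity check:

su - hadoop
sudo -v    # should prompt for the hadoop password and succeed after the adduser step above
whoami     # should print: hadoop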

3. Hostnames and /etc/hosts

# Set the hostname on each of the three machines (run the matching command on its own host)
hostnamectl  set-hostname  hadoop-master
hostnamectl  set-hostname  hadoop-node1
hostnamectl  set-hostname  hadoop-node2

# Append the host entries on all three machines (writing /etc/hosts needs root)
sudo tee -a /etc/hosts << EOF
10.64.104.177  hadoop-master
10.64.104.178  hadoop-node1   
10.64.104.179  hadoop-node2   
EOF
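
A quick way to verify that the entries resolve on every machine:

ping -c 1 hadoop-master
ping -c 1 hadoop-node1
ping -c 1 hadoop-node2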

4. Passwordless SSH

# On hadoop-master
sudo apt-get install openssh-server
ssh-keygen -t rsa    # press Enter at every prompt to accept the defaults
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys

# Copy the master's public key to both nodes (~/.ssh must already exist there, e.g. created by a prior ssh-keygen)
scp /home/hadoop/.ssh/id_rsa.pub  hadoop@10.64.104.178:.ssh/
scp /home/hadoop/.ssh/id_rsa.pub  hadoop@10.64.104.179:.ssh/

# On hadoop-node1 and hadoop-node2 (this appends the key just copied from the master)
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
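
If login still asks for a password, the usual culprit is directory permissions. A quick fix (only if needed) and a test from hadoop-master, which should log in without a prompt:

chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys   # run on each machine if needed
ssh hadoop-node1 hostname
ssh hadoop-node2 hostname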

Install Hadoop

Configure Hadoop on hadoop-master


1. Download (the current stable release is 2.9.1)
The Hadoop tarball can be downloaded from http://mirror.bit.edu.cn/apache/hadoop/common/
wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.9.1/hadoop-2.9.1.tar.gz
2. Extract
sudo tar xvf hadoop-2.9.1.tar.gz -C /usr/local
sudo mv /usr/local/hadoop-2.9.1 /usr/local/hadoop
sudo chown -R hadoop.hadoop /usr/local/hadoop/ 
3. Hadoop environment variables
cat << 'EOF' >> ~/.bashrc
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
EOF

# Apply immediately
source ~/.bashrc
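
A quick check that the unpack and PATH changes worked:

hadoop version   # should report Hadoop 2.9.1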

4. Edit the Hadoop configuration files

# Modify the following configuration files (the same changes are needed on all three machines)
$HADOOP_HOME/etc/hadoop/hadoop-env.sh
$HADOOP_HOME/etc/hadoop/core-site.xml
$HADOOP_HOME/etc/hadoop/hdfs-site.xml
$HADOOP_HOME/etc/hadoop/mapred-site.xml
$HADOOP_HOME/etc/hadoop/slaves

1) Set JAVA_HOME
vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh
## Setting (path of the Ubuntu default-jdk installed in step 1)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

2) Edit the core configuration file /usr/local/hadoop/etc/hadoop/core-site.xml: fs.defaultFS specifies the NameNode's address and port, and hadoop.tmp.dir specifies the directory Hadoop uses as the base for temporary data.

<configuration>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/hadoop/tmp</value>
        <description>Abase for other temporary directories.</description>
    </property>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop-master:9000</value>
    </property>
</configuration>

Important: if hadoop.tmp.dir is not set, the default temporary directory is /tmp/hadoop-hadoop. /tmp is cleared on every reboot, so the NameNode would have to be re-formatted after each reboot or the cluster will fail to start.
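
Since hadoop.tmp.dir above points at file:/usr/local/hadoop/tmp, it does no harm to create that directory up front (as the hadoop user):

mkdir -p /usr/local/hadoop/tmp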

3) Configure hdfs-site.xml
Edit the HDFS configuration file /usr/local/hadoop/etc/hadoop/hdfs-site.xml: dfs.replication sets the replication factor (2 here, matching the two DataNodes), dfs.name.dir sets the NameNode's storage directory, and dfs.data.dir sets the DataNode storage directory.

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.name.dir</name>
        <value>/usr/local/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.data.dir</name>
        <value>/usr/local/hadoop/hdfs/data</value>
    </property>
</configuration>
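
The dfs.name.dir and dfs.data.dir paths configured above can likewise be created ahead of time; the format step later will populate the name directory:

mkdir -p /usr/local/hadoop/hdfs/name /usr/local/hadoop/hdfs/data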

4) Configure the masters file
Edit /usr/local/hadoop/etc/hadoop/masters, which designates the master (NameNode) host. Remove localhost if present and add the master's hostname, hadoop-master. A hostname is preferable to an IP address, since IP addresses can change while hostnames usually do not.
cat << EOF >> /usr/local/hadoop/etc/hadoop/masters
hadoop-master
EOF

5) Configure the slaves file (master only)
This file specifies which hosts act as DataNode nodes. Remove the default localhost entry and add the hostnames of all DataNodes (the single > below overwrites the file rather than appending):
cat << EOF > /usr/local/hadoop/etc/hadoop/slaves
hadoop-node1
hadoop-node2
EOF 
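
The file list in step 4 also mentions mapred-site.xml, which the steps above do not fill in. If MapReduce jobs should run on YARN, a minimal sketch (mapreduce.framework.name is the standard Hadoop 2.x property; adjust as needed):

cat << EOF > /usr/local/hadoop/etc/hadoop/mapred-site.xml
<?xml version="1.0"?>
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
EOF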

Configure Hadoop on the hadoop-node machines

The steps below use hadoop-node1 as the example; repeat them on hadoop-node2.
1) Copy Hadoop to the hadoop-node1 node
# On hadoop-node1: create the target directory and hand it over to the hadoop user
sudo mkdir -p /usr/local/hadoop
sudo chown -R hadoop:hadoop /usr/local/hadoop
# On hadoop-master: copy the installation (the trailing slashes avoid a nested hadoop/hadoop directory)
rsync -avz /usr/local/hadoop/ hadoop-node1:/usr/local/hadoop/

2) Add the Hadoop environment variables (on hadoop-node1)
cat << 'EOF' >> ~/.bashrc
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
EOF
source ~/.bashrc

3) Log into hadoop-node1 and remove the slaves file (only the master needs it)

rm -rf /usr/local/hadoop/etc/hadoop/slaves

Configure the other node (hadoop-node2) in the same way.
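
A quick way to confirm the copy worked on both nodes (run from the master; relies on the passwordless SSH set up in step 4):

ssh hadoop-node1 /usr/local/hadoop/bin/hadoop version
ssh hadoop-node2 /usr/local/hadoop/bin/hadoop version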

Start Hadoop

Start the cluster
1. Format the HDFS filesystem

On the master, change into /usr/local/hadoop and run:

bin/hadoop namenode -format
This formats the NameNode. It is run once, before the first start of the cluster, and should not be run again afterwards.
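
If re-formatting ever does become necessary (for example after a failed first start), clear the old metadata first, otherwise DataNodes may be rejected because of a clusterID mismatch; a cautious sketch using the directories configured above:

rm -rf /usr/local/hadoop/tmp/* /usr/local/hadoop/hdfs/name/* /usr/local/hadoop/hdfs/data/*   # only before a re-format, on each machine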

2. Then start Hadoop:

sbin/start-all.sh
3. Check the running daemons with jps
jps
# On the master, run jps to check the processes
hadoop@hadoop-master:/usr/local/hadoop$ jps 
27893 SecondaryNameNode
28070 ResourceManager
27657 NameNode
30635 Jps
26078 ResourceManager
# On a slave node, run jps to check the processes
hadoop@hadoop-node1:~$ jps 
26832 DataNode
27956 NodeManager
28093 Jps

4. Check the cluster status from the command line

jps only confirms that the HDFS and YARN daemons started; it does not show the state of the cluster as a whole. hadoop dfsadmin -report does: it quickly shows which nodes are down, how much HDFS capacity is configured and used, and the disk usage of every node.

hadoop dfsadmin -report
Sample output:

Configured Capacity: 50108030976 (46.67 GB)
Present Capacity: 41877471232 (39.00 GB)
DFS Remaining: 41877385216 (39.00 GB)
DFS Used: 86016 (84 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
......

5. Restart Hadoop

sbin/stop-all.sh
sbin/start-all.sh

6. Check Hadoop status in the browser
http://hadoop-master:50070   # HDFS NameNode web UI
http://hadoop-master:8088    # YARN ResourceManager web UI
(use the master's IP, 10.64.104.177, if the browser host cannot resolve hadoop-master)

FAQ

Q1: hadoop-node2: Error: JAVA_HOME is not set and could not be found.

This error means Hadoop could not find the JDK environment variable; set JAVA_HOME explicitly in hadoop-env.sh.

vi /usr/local/hadoop/etc/hadoop/hadoop-env.sh
## Setting (use the same Ubuntu JDK path as in step 1)
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64