Setting Up a Hadoop Learning Environment

  • Setting up an Apache Hadoop 3.1.1 virtual machine environment
  • Preparing the tools
  • Installing the virtual machines
  • Installing and configuring Hadoop
  • Configuring mutual access between Hadoop001, Hadoop002 and Hadoop003
  • Configuring Hadoop
  • Starting Hadoop


Setting Up an Apache Hadoop 3.1.1 Virtual Machine Environment

I recently wanted to learn about big data, and Hadoop is generally considered the essential starting point, so that is where I began. The first step is to build a Hadoop environment; I chose to set up a simple Hadoop cluster in VirtualBox virtual machines, using Hadoop 3.1.1.

Preparing the Tools

Everything used in this walkthrough appears in the steps below: VirtualBox, a CentOS installation ISO, JDK 1.8 (jdk1.8.0_181), Hadoop 3.1.1, and XShell/XFtp for connecting to the Linux VMs and transferring files.

Installing the Virtual Machines

The virtual machines are built with VirtualBox. Three VMs are installed in total: one acts as the namenode and two as datanodes. The three VMs differ only in hostname and IP address, so only one installation is shown:

  1. Create a new VM named hadoop001, choose Linux as the operating system and Red Hat (64-bit) as the version, then click Next;
  2. Set the memory size and the hard disk location and size; I used 2 GB of memory and a 100 GB disk;



  3. After the VM is created, set its storage to the downloaded CentOS installation ISO, set the network adapter to bridged mode, then click Start to begin installing the operating system;

  4. The OS installation itself is routine and not covered in detail here; just note that it is easiest to set the hostname and IP address directly during installation;




  5. hadoop001 is now installed, with hostname hadoop001 and IP 192.168.0.181. Since the Minimal installation profile was used, tools such as net-tools need to be installed via yum, and a Java environment is required as well: in short, download the JDK, extract it to a directory, and set the environment variables (a setup sketch follows this list). XShell and XFtp are convenient tools for connecting to the Linux machines and transferring files.
# Java environment
export JAVA_HOME=/usr/lib/java/jdk1.8.0_181
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar
export PATH=$PATH:${JAVA_HOME}/bin
  6. Install hadoop002 (hostname: hadoop002, ip: 192.168.0.182) and hadoop003 (hostname: hadoop003, ip: 192.168.0.183) the same way as hadoop001; adjust the IP addresses to your own network environment.
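
As referenced in step 5, here is a minimal sketch of installing the basic tools and the JDK on each VM. The package list and the JDK archive name are assumptions based on the environment variables shown above; adjust them to whatever you actually downloaded.

# Install basic tools missing from the Minimal profile (assumed package names)
yum install -y net-tools vim wget

# Extract the downloaded JDK into /usr/lib/java (archive name is an assumption)
mkdir -p /usr/lib/java
tar -zxvf jdk-8u181-linux-x64.tar.gz -C /usr/lib/java

# After appending the Java environment variables above to /etc/profile, reload and verify
source /etc/profile
java -version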

At this point the virtual machine environment is ready; next comes installing and configuring Hadoop.

Hadoop Installation and Configuration

  1. First, configure passwordless access between the three virtual machines
  2. Configure Hadoop
  3. Start Hadoop

Configuring Mutual Access Between Hadoop001, Hadoop002 and Hadoop003

  • Set up the hosts file on each virtual machine
vim /etc/hosts

Apply the same entries on all three machines:

192.168.0.181 hadoop001
192.168.0.182 hadoop002
192.168.0.183 hadoop003

Make sure all three machines can ping each other.
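
For example, from hadoop001 (repeat the equivalent checks from the other two hosts):

ping -c 3 hadoop002
ping -c 3 hadoop003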


  • Generate an SSH key pair on the hadoop001 virtual machine
ssh-keygen -t rsa -P ''



Do the same on hadoop002 and hadoop003. Afterwards you can confirm that the key pair was generated with:

ls /root/.ssh/


Then merge the contents of the id_rsa.pub files from hadoop001, hadoop002 and hadoop003 into a single authorized_keys file.
First, take a look at the id_rsa.pub contents on the three machines.
hadoop001

cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC4X22DpQ8VYtLZsz0bmd8M4DvU6Ixf6YuZ1ss7Y+gJPIL4X/X2bYIk14rRTPY3/N2lG758mh0OJL0ET1c4J+48g/YZ3gXvbeo4WxIfNaVF/5Qk64hTQjrDpT8BmOB0U/rzt3TDndnkKiOLuHsbaCSthkCZEN2Hrn1IcLVYIj5BxlS5Gtb3eFhKElNfFoFsnoIulMirxuqOld0UhSYNNBOcnZoTll/GUdHufDorlR0ADwM1AxbVo4uVPnDk4i1heVEXyL5iySxM5+Z+qYqywGyKHCBNEIBx0TpnCqnBofv0fd+pDNDbNHYbulAMLZbio85WlsSzhpL5Wws8M3LcGWFD root@hadoop001

hadoop002

cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0MB7PUJYdVwEr9fKgJ1b8xaODbq4ZX0G/+YrLEj6e6/TNHo/M6U/bEPWkClT14PjI2SKXqVtHgMLDe8aShxHxV7KfXG6KS002lIGXa0bOXdbFIr74GEdJJmL1wLZvbuKEAC8h1DkIsnDLr7/pKTDReN3L1iHgcSqjUHXqXC0cd6tZTfc/oDqZ7q2xmGbdRSB/uKX1stt09+a64AAvQPrCenTS6BP4VfSnY2WCxUOu28+z33uZf2q4UxwIdYn1CmPGCeobdeLyfeMMrVZoJfhE/K47wiAUteb+bHvpsy9pS5beJjBcI0Cvri0puH4HgOoveMNhenBMVGS68Lo5fr8h root@hadoop002

hadoop003

cat ~/.ssh/id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJ3BgltB9uS8oRxoIrHwBsUOwSak2eKF7SqjKD6uacXc5iE6uXvSiZxT8BKhPHczBWQ2s2vd40U616L9SlHbN2rzsED6AdM9zNPvcPEW/SgmC2yJ4HYAJJKo+UAYLHwj9nh5IocwbHTkIxTAmBubvXcKOvjyzB2NMK4T0VAuEvLadySrxCt/cyHBGT0V0h2RAyx0IODSL5+gP/mNJsoou1wq2H18XF2TNYTUZU2aMb4HW99mcCm7Ps451LcGVSHKxoWyr08FaKZyV0vqI2va33JhohzxW7bhHbAsWK1UJE9jWhKbIL1Flmi6TS9mlnFVytoCHc+LU/AKtOS+ZonGx5 root@hadoop003

First create the authorized_keys file, then append the contents of hadoop001's id_rsa.pub:

touch ~/.ssh/authorized_keys
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Using XShell it is easy to edit authorized_keys with vim and merge the id_rsa.pub contents of hadoop002 and hadoop003 into it as well:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC4X22DpQ8VYtLZsz0bmd8M4DvU6Ixf6YuZ1ss7Y+gJPIL4X/X2bYIk14rRTPY3/N2lG758mh0OJL0ET1c4J+48g/YZ3gXvbeo4WxIfNaVF/5Qk64hTQjrDpT8BmOB0U/rzt3TDndnkKiOLuHsbaCSthkCZEN2Hrn1IcLVYIj5BxlS5Gtb3eFhKElNfFoFsnoIulMirxuqOld0UhSYNNBOcnZoTll/GUdHufDorlR0ADwM1AxbVo4uVPnDk4i1heVEXyL5iySxM5+Z+qYqywGyKHCBNEIBx0TpnCqnBofv0fd+pDNDbNHYbulAMLZbio85WlsSzhpL5Wws8M3LcGWFD root@hadoop001
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0MB7PUJYdVwEr9fKgJ1b8xaODbq4ZX0G/+YrLEj6e6/TNHo/M6U/bEPWkClT14PjI2SKXqVtHgMLDe8aShxHxV7KfXG6KS002lIGXa0bOXdbFIr74GEdJJmL1wLZvbuKEAC8h1DkIsnDLr7/pKTDReN3L1iHgcSqjUHXqXC0cd6tZTfc/oDqZ7q2xmGbdRSB/uKX1stt09+a64AAvQPrCenTS6BP4VfSnY2WCxUOu28+z33uZf2q4UxwIdYn1CmPGCeobdeLyfeMMrVZoJfhE/K47wiAUteb+bHvpsy9pS5beJjBcI0Cvri0puH4HgOoveMNhenBMVGS68Lo5fr8h root@hadoop002
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDJ3BgltB9uS8oRxoIrHwBsUOwSak2eKF7SqjKD6uacXc5iE6uXvSiZxT8BKhPHczBWQ2s2vd40U616L9SlHbN2rzsED6AdM9zNPvcPEW/SgmC2yJ4HYAJJKo+UAYLHwj9nh5IocwbHTkIxTAmBubvXcKOvjyzB2NMK4T0VAuEvLadySrxCt/cyHBGT0V0h2RAyx0IODSL5+gP/mNJsoou1wq2H18XF2TNYTUZU2aMb4HW99mcCm7Ps451LcGVSHKxoWyr08FaKZyV0vqI2va33JhohzxW7bhHbAsWK1UJE9jWhKbIL1Flmi6TS9mlnFVytoCHc+LU/AKtOS+ZonGx5 root@hadoop003

Then use XFtp to copy this file to the same path on hadoop002 and hadoop003.
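
If you prefer to stay in the shell, the same distribution can be done with scp, and it helps to make sure the SSH directory and key file have the permissions sshd expects. A short sketch, assuming everything runs as root:

# Copy the merged authorized_keys to the other two nodes (password prompts still appear at this point)
scp ~/.ssh/authorized_keys hadoop002:/root/.ssh/
scp ~/.ssh/authorized_keys hadoop003:/root/.ssh/

# sshd is strict about permissions on these paths
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys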

Now you can test passwordless login: connecting from hadoop001 directly to hadoop002 and hadoop003 over ssh should succeed without a password prompt.
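
For instance, a quick round trip from hadoop001:

[root@hadoop001 ~]# ssh hadoop002
[root@hadoop002 ~]# exit
[root@hadoop001 ~]# ssh hadoop003
[root@hadoop003 ~]# exit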


Configuring Hadoop

  • Disable the firewall
    To prevent the firewall from interfering with communication between the nodes, disable it first and reboot. Run the following commands on each of the three virtual machines:
# Stop and disable the firewall
[root@hadoop001 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@hadoop001 ~]# setenforce 0

# Change the value of SELINUX to disabled
[root@hadoop001 ~]# vim /etc/selinux/config

SELINUX=disabled

# Reboot the server
[root@hadoop001 ~]# reboot
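
After the reboot, a quick check that both the firewall and SELinux are really off (standard CentOS commands):

[root@hadoop001 ~]# systemctl is-active firewalld    # expected output: inactive
[root@hadoop001 ~]# getenforce                       # expected output: Disabled
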
  • Modify the configuration files
    First transfer the downloaded Hadoop archive to the virtual machine with XFtp and extract it, then set the environment variables. Mine are shown below; edit the profile with vim /etc/profile and append the following lines at the end. Adjust the paths to your own setup.
# Hadoop environment
export HADOOP_HOME=/opt/hadoop/hadoop-3.1.1
export PATH=$PATH:${HADOOP_HOME}/bin

Once done, be sure to reload the environment variables with the source command.
Then go into the Hadoop directory and edit the Hadoop configuration files, which all live under /opt/hadoop/hadoop-3.1.1/etc/hadoop.
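
A quick way to confirm the variables took effect (assuming the paths above):

source /etc/profile
hadoop version   # should report Hadoop 3.1.1
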
hadoop-env.sh

# The java implementation to use. By default, this environment
# variable is REQUIRED on ALL platforms except OS X!
# export JAVA_HOME=
export JAVA_HOME=/usr/lib/java/jdk1.8.0_181

core-site.xml

<configuration>
    <!-- RPC address of the HDFS master (namenode) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop001:9000</value>
    </property>
    <!-- Base directory for files Hadoop generates at runtime -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop/data/tmp</value>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <!-- HTTP address of the namenode -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>hadoop001:50070</value>
    </property>

    <!-- HTTP address of the secondary namenode -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop002:50090</value>
    </property>

    <!-- Directory where the namenode stores its metadata -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/hadoop/data/name</value>
    </property>

    <!-- Number of HDFS block replicas -->
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <!-- Directory where the datanode stores its data blocks -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/hadoop/data/datanode</value>
    </property>
    
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
</configuration>

mapred-site.xml

<configuration>
    <!-- Tell the MapReduce framework to use YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>  
        <name>mapreduce.application.classpath</name>  
        <value>  
        /opt/hadoop/hadoop-3.1.1/etc/hadoop,  
        /opt/hadoop/hadoop-3.1.1/share/hadoop/common/*,  
        /opt/hadoop/hadoop-3.1.1/share/hadoop/common/lib/*,  
        /opt/hadoop/hadoop-3.1.1/share/hadoop/hdfs/*,  
        /opt/hadoop/hadoop-3.1.1/share/hadoop/hdfs/lib/*,  
        /opt/hadoop/hadoop-3.1.1/share/hadoop/mapreduce/*,  
        /opt/hadoop/hadoop-3.1.1/share/hadoop/mapreduce/lib/*,  
        /opt/hadoop/hadoop-3.1.1/share/hadoop/yarn/*,  
        /opt/hadoop/hadoop-3.1.1/share/hadoop/yarn/lib/*  
        </value>  
    </property>
</configuration>

yarn-site.xml

<configuration>
    <property>  
        <name>yarn.nodemanager.aux-services</name>  
        <value>mapreduce_shuffle</value>  
    </property>  
    <property>  
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>  
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>  
    <property>  
        <name>yarn.resourcemanager.resource-tracker.address</name>  
        <value>hadoop001:8025</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.scheduler.address</name>  
        <value>hadoop001:8030</value>  
    </property>  
    <property>  
        <name>yarn.resourcemanager.address</name>  
        <value>hadoop001:8040</value>  
    </property>  
</configuration>

Add the following parameters at the top of both $HADOOP_HOME/sbin/start-dfs.sh and $HADOOP_HOME/sbin/stop-dfs.sh (required in Hadoop 3.x when starting the cluster as root):

HDFS_DATANODE_USER=root
HADOOP_SECURE_DN_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root



Similarly, add the following at the top of both $HADOOP_HOME/sbin/start-yarn.sh and $HADOOP_HOME/sbin/stop-yarn.sh:

YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root



masters

A masters file needs to be created to specify the secondary namenode host:

[root@hadoop001 hadoop]# touch /opt/hadoop/hadoop-3.1.1/etc/hadoop/masters
[root@hadoop001 hadoop]# vim /opt/hadoop/hadoop-3.1.1/etc/hadoop/masters
# add
hadoop002

workers

[root@hadoop001 hadoop]# vim /opt/hadoop/hadoop-3.1.1/etc/hadoop/workers
# add
hadoop002
hadoop003

Create the data directories

[root@hadoop001 hadoop]# mkdir -p /opt/hadoop/data/tmp
[root@hadoop001 hadoop]# mkdir -p /opt/hadoop/data/name
[root@hadoop001 hadoop]# mkdir -p /opt/hadoop/data/datanode

Copy everything to the other hosts

[root@hadoop001 opt]# scp -r /opt/hadoop hadoop002:/opt/
[root@hadoop001 opt]# scp -r /opt/hadoop hadoop003:/opt/

Update the environment variables on hadoop002 and hadoop003 as well, and reload them when done.
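
A sketch of doing this from hadoop001 over SSH; it assumes the same two /etc/profile lines shown earlier and that passwordless root SSH access is already in place:

# Append the Hadoop environment variables to /etc/profile on the other nodes
for host in hadoop002 hadoop003; do
  ssh "$host" 'cat >> /etc/profile' <<'EOF'
# Hadoop environment
export HADOOP_HOME=/opt/hadoop/hadoop-3.1.1
export PATH=$PATH:${HADOOP_HOME}/bin
EOF
done

# Reload and verify on one of them
ssh hadoop002 '. /etc/profile && hadoop version'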


Starting Hadoop

The first start requires formatting the namenode; after that, just run the startup script:

[root@hadoop001 opt]#  /opt/hadoop/hadoop-3.1.1/bin/hdfs namenode -format
[root@hadoop001 opt]#  /opt/hadoop/hadoop-3.1.1/sbin/start-all.sh
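
Once started, jps (bundled with the JDK) should show the expected daemons on each node; roughly, given the configuration above:

# On hadoop001: NameNode and ResourceManager should be running
[root@hadoop001 ~]# jps
# On hadoop002: DataNode, SecondaryNameNode and NodeManager
[root@hadoop002 ~]# jps
# On hadoop003: DataNode and NodeManager
[root@hadoop003 ~]# jps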

After startup, you can check the cluster status from a browser: the HDFS web UI is served at the address configured above (http://hadoop001:50070, i.e. http://192.168.0.181:50070), and the YARN ResourceManager UI is on port 8088 by default. With that, the environment setup is complete.
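
As a final smoke test of HDFS, a minimal sketch (the paths used here are arbitrary examples):

hdfs dfs -mkdir -p /test/input
hdfs dfs -put /etc/hosts /test/input
hdfs dfs -ls /test/input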