Contents
1. Installation Preparation
2. Installing Hadoop on the Master Node
3. Installing Hadoop on the Slave Nodes
4. Starting Hadoop
5. Verifying the Installation
1. Installation Preparation
1. You need three virtual machines: hadoop001 as the master node, with hadoop002 and hadoop003 as the slave nodes.
hadoop001, hadoop002, and hadoop003 are the VMs' hostnames; set a hostname with:
hostnamectl --static set-hostname hadoop001
My VMs' IP addresses are hadoop001 (192.168.17.131), hadoop002 (192.168.17.132), and hadoop003 (192.168.17.133).
You can check a VM's IP address with:
ip addr
2. Install the JDK on every VM.
3. Disable the firewall on every VM:
systemctl stop firewalld.service
systemctl disable firewalld.service
4. Configure the hostname mappings on every VM (a quick verification sketch for these preparation steps follows after this list).
Edit the hosts file:
vi /etc/hosts
Add the following lines:
192.168.17.131 hadoop001
192.168.17.132 hadoop002
192.168.17.133 hadoop003
On Windows, open the hosts file in Notepad (location: C:\Windows\System32\drivers\etc\hosts) and add the same entries:
192.168.17.131 hadoop001
192.168.17.132 hadoop002
192.168.17.133 hadoop003
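To confirm the preparation steps above took effect, here is a quick check you can run on each VM (these commands only read state; the hostnames and IPs are the ones used throughout this guide):
hostname                       # should print hadoop001 (or hadoop002/hadoop003)
java -version                  # confirms the JDK is installed
systemctl is-active firewalld  # should print "inactive"
ping -c 3 hadoop002            # confirms the hostname mapping works; repeat for the other hosts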
2. Installing Hadoop on the Master Node
1. Download hadoop-2.7.3.tar.gz.
Baidu Netdisk link: https://pan.baidu.com/s/1uQTVMzg8E5QULQTAoppdcQ
Extraction code: 58c5
2. Upload hadoop-2.7.3.tar.gz to hadoop001.
Simply dragging hadoop-2.7.3.tar.gz into the MobaXterm file panel works; an scp alternative is sketched below.
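If you are not using MobaXterm, uploading with scp from your local machine works just as well (this assumes the /tools directory used in the next step already exists on hadoop001):
scp hadoop-2.7.3.tar.gz root@192.168.17.131:/tools/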
3. Extract the archive into /training:
tar -zvxf /tools/hadoop-2.7.3.tar.gz -C /training/
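To confirm the archive landed where expected, list the installation directory:
ls /training/hadoop-2.7.3
# should show bin, etc, lib, sbin, share, and the other top-level Hadoop directories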
4. Configure the environment variables (on all three VMs):
vi ~/.bash_profile
#hadoop
export HADOOP_HOME=/training/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Reload the profile so the variables take effect:
source ~/.bash_profile
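To confirm the variables took effect, check that the hadoop command now resolves from the new PATH:
which hadoop    # should print /training/hadoop-2.7.3/bin/hadoop
hadoop version  # the first line should read "Hadoop 2.7.3"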
5. Create a tmp directory for Hadoop:
mkdir /training/hadoop-2.7.3/tmp
6. Edit the configuration files.
Go to the configuration directory:
cd /training/hadoop-2.7.3/etc/hadoop/
You can list the files with ls. Edit the following files in turn.
1)hadoop-env.sh
vi hadoop-env.sh
Just set the JDK path; my path is:
export JAVA_HOME=/training/jdk1.8.0_171
2)hdfs-site.xml
vi hdfs-site.xml
Add the following between <configuration> and </configuration> (dfs.replication is 2 to match the two DataNodes; setting dfs.permissions to false turns off HDFS permission checks for convenience):
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
</property>
3)core-site.xml
vi core-site.xml
Add the following between <configuration> and </configuration> (fs.defaultFS points clients at the NameNode on hadoop001; hadoop.tmp.dir is the directory created in step 5):
<property>
<name>fs.defaultFS</name>
<value>hdfs://hadoop001:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/training/hadoop-2.7.3/tmp</value>
</property>
4) mapred-site.xml
This file does not exist by default; create it from the bundled template, then edit it:
cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
Add the following between <configuration> and </configuration>:
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<!-- JobHistory server address -->
<property>
<name>mapreduce.jobhistory.address</name>
<value>hadoop001:10020</value>
</property>
<!-- JobHistory server web UI address -->
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>hadoop001:19888</value>
</property>
5)yarn-site.xml
vi yarn-site.xml
Add the following between <configuration> and </configuration>:
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>hadoop001</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<!-- Enable log aggregation -->
<property>
<name>yarn.log-aggregation-enable</name>
<value>true</value>
</property>
<!-- Keep aggregated logs for 7 days (604800 seconds) -->
<property>
<name>yarn.log-aggregation.retain-seconds</name>
<value>604800</value>
</property>
<!-- Log server URL -->
<property>
<name>yarn.log.server.url</name>
<value>http://hadoop001:19888/jobhistory/logs</value>
</property>
6)slaves
vi slaves
Add the following lines (these are the hosts that will run the DataNode and NodeManager daemons):
hadoop002
hadoop003
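As a sanity check that Hadoop is actually reading the edited files, you can ask it to echo a configuration key back (run on hadoop001 with the step-4 environment variables loaded):
hdfs getconf -confKey fs.defaultFS
# should print hdfs://hadoop001:9000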
7. Format the NameNode:
hdfs namenode -format
On success the log contains a line like the following (the path comes from hadoop.tmp.dir, so with the settings above it will be under /training/hadoop-2.7.3/tmp):
Storage directory /training/hadoop-2.7.3/tmp/dfs/name has been successfully formatted.
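You can also confirm the format by listing the metadata directory it created (the path follows from the hadoop.tmp.dir set above):
ls /training/hadoop-2.7.3/tmp/dfs/name/current
# should contain a VERSION file, an fsimage, and seen_txid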
3. Installing Hadoop on the Slave Nodes
1. Copy the Hadoop installation from hadoop001 to hadoop002 and hadoop003 (the environment variables from step 4 are needed on the slaves as well; see the note after the commands):
scp -r /training/hadoop-2.7.3/ root@hadoop002:/training/
scp -r /training/hadoop-2.7.3/ root@hadoop003:/training/
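Rather than editing ~/.bash_profile by hand on each slave, you can copy the master's profile over (a shortcut that assumes the profile contains nothing machine-specific):
scp ~/.bash_profile root@hadoop002:~/
scp ~/.bash_profile root@hadoop003:~/
# then run `source ~/.bash_profile` on each slave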
4. Starting Hadoop
1. On the master node hadoop001, run:
start-all.sh
To stop Hadoop later, use:
stop-all.sh
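Note that start-all.sh brings up HDFS and YARN but not the JobHistory server configured in mapred-site.xml; to get the history web UI on port 19888, start it separately with the standard 2.x daemon script (it lives in $HADOOP_HOME/sbin, which is already on PATH):
mr-jobhistory-daemon.sh start historyserver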
5. Verifying the Installation
1. On the master node the running processes should include: NameNode, ResourceManager, SecondaryNameNode.
On the slave nodes the running processes should include: DataNode, NodeManager.
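Processes are listed with the JDK's jps tool; for example, on hadoop001 you should see something like the following (the PIDs are illustrative and will differ):
jps
# 3201 NameNode
# 3398 SecondaryNameNode
# 3550 ResourceManager
# 3872 Jps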
2. Check the web UIs in a browser (the hostnames work on Windows thanks to the hosts entries added earlier):
HDFS: http://hadoop001:50070 (the NameNode web UI's default port)
YARN: http://hadoop001:8088
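As a final smoke test, you can ask HDFS for a cluster report and run one of the example jobs bundled with the distribution (the jar path below is the standard location inside Hadoop 2.7.3):
hdfs dfsadmin -report
# should report two live DataNodes (hadoop002 and hadoop003)
hadoop jar /training/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar pi 2 5
# runs a small MapReduce job that estimates pi; a finished job confirms YARN works end to end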