Hadoop must be installed on every machine. Install it on the Master server first, then repeat the same steps on the other servers.

1. First, upload the Hadoop archive and extract it.

2. Next, configure Hadoop's environment variables:
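
The variables are usually appended to /etc/profile (or ~/.bash_profile). A minimal sketch, assuming the install path /home/wangkai/app/hadoop-2.7.3 used throughout this guide:

```shell
# Append to /etc/profile (paths assume the layout used in this guide)
export HADOOP_HOME=/home/wangkai/app/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
```

Putting both bin and sbin on the PATH lets you run hadoop/hdfs commands and the start/stop scripts from any directory.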

3. After configuring, run source on the profile file so the changes take effect. Then create the directories Hadoop needs under the app directory:

mkdir -p /home/wangkai/app/tmp

mkdir -p /home/wangkai/app/var

mkdir -p /home/wangkai/app/dfs/name

mkdir -p /home/wangkai/app/dfs/data

(The dfs/name and dfs/data paths match the dfs.name.dir and dfs.data.dir values set in hdfs-site.xml below; -p creates the intermediate dfs directory automatically.)

4. Edit the hadoop-env.sh file:

5. In that file, set the JAVA_HOME path and the HADOOP_CONF_DIR path:
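
The two edited lines typically look like the sketch below; the JDK path is an assumption, so substitute the actual location of your Java installation:

```shell
# In hadoop-env.sh: JAVA_HOME must be an absolute JDK path
# (the JDK path below is a placeholder assumption)
export JAVA_HOME=/home/wangkai/app/jdk1.8.0_111
export HADOOP_CONF_DIR=/home/wangkai/app/hadoop-2.7.3/etc/hadoop
```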

6. Configure the core-site.xml file (fs.default.name is the deprecated older name for fs.defaultFS; Hadoop 2.7.x still accepts it):

<configuration>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/home/wangkai/app/tmp</value>
                <description>Abase for other temporary directories</description>
        </property>
        <property>
                <name>fs.default.name</name>
                <value>hdfs://192.168.59.131:9000</value>
        </property>
</configuration>

7. Configure the hdfs-site.xml file (dfs.name.dir and dfs.data.dir are likewise the older names for dfs.namenode.name.dir and dfs.datanode.data.dir):

<configuration>
        <property>
                <name>dfs.name.dir</name>
                <value>/home/wangkai/app/dfs/name</value>
                <description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
        </property>
        <property>
                <name>dfs.data.dir</name>
                <value>/home/wangkai/app/dfs/data</value>
                <description>Comma-separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>1</value>
        </property>
        <property>
                <name>dfs.permissions</name>
                <value>false</value>
                <description>Disable HDFS permission checking (for testing only).</description>
        </property>
</configuration>

8. Create and edit mapred-site.xml. In this version there is a template file named mapred-site.xml.template; copy it and rename the copy to mapred-site.xml:

cp /home/wangkai/app/hadoop-2.7.3/etc/hadoop/mapred-site.xml.template /home/wangkai/app/hadoop-2.7.3/etc/hadoop/mapred-site.xml

Then configure mapred-site.xml:

<configuration>
        <!-- MRv1 settings such as mapred.job.tracker are ignored once
             mapreduce.framework.name is set to yarn; the value takes the
             form host:port, not a URL -->
        <property>
                <name>mapred.job.tracker</name>
                <value>192.168.59.131:9001</value>
        </property>
        <property>
                <name>mapred.local.dir</name>
                <value>/home/wangkai/app/var</value>
        </property>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
</configuration>

9. Configure yarn-site.xml:

<configuration>
        <!-- Site specific YARN configuration properties -->
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>192.168.59.131</value>
        </property>
        <property>
                <description>The address of the applications manager interface in the RM.</description>
                <name>yarn.resourcemanager.address</name>
                <value>${yarn.resourcemanager.hostname}:8032</value>
        </property>
        <property>
                <description>The address of the scheduler interface.</description>
                <name>yarn.resourcemanager.scheduler.address</name>
                <value>${yarn.resourcemanager.hostname}:8030</value>
        </property>
        <property>
                <description>The http address of the RM web application.</description>
                <name>yarn.resourcemanager.webapp.address</name>
                <value>${yarn.resourcemanager.hostname}:8088</value>
        </property>
        <property>
                <description>The https address of the RM web application.</description>
                <name>yarn.resourcemanager.webapp.https.address</name>
                <value>${yarn.resourcemanager.hostname}:8090</value>
        </property>
        <property>
                <name>yarn.resourcemanager.resource-tracker.address</name>
                <value>${yarn.resourcemanager.hostname}:8031</value>
        </property>
        <property>
                <description>The address of the RM admin interface.</description>
                <name>yarn.resourcemanager.admin.address</name>
                <value>${yarn.resourcemanager.hostname}:8033</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
        <property>
                <name>yarn.scheduler.maximum-allocation-mb</name>
                <value>2048</value>
                <description>Maximum memory, in MB, that a single container may request (default 8192 MB).</description>
        </property>
        <property>
                <name>yarn.nodemanager.vmem-pmem-ratio</name>
                <value>2.1</value>
        </property>
        <property>
                <name>yarn.nodemanager.resource.memory-mb</name>
                <value>2048</value>
        </property>
        <property>
                <name>yarn.nodemanager.vmem-check-enabled</name>
                <value>false</value>
        </property>
</configuration>

10. In the slaves file, list the IP addresses of the worker (slave) nodes:

192.168.59.132
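
The slaves file lives at /home/wangkai/app/hadoop-2.7.3/etc/hadoop/slaves and holds one worker IP (or hostname) per line. A sketch that writes it (to the current directory here, purely for illustration):

```shell
# One worker per line; writing to ./slaves for illustration --
# the real file is $HADOOP_HOME/etc/hadoop/slaves
cat > slaves <<'EOF'
192.168.59.132
EOF
```

Additional workers are added simply by appending more lines.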

11. Start Hadoop by first initializing the NameNode.

On the master machine, go to the /home/wangkai/app/hadoop-2.7.3/bin directory and run ./hadoop namenode -format (in 2.x, ./hdfs namenode -format is the preferred equivalent).

After a successful format, the /home/wangkai/app/dfs/name/ directory contains a new current subdirectory holding a set of metadata files.

Then go to /home/wangkai/app/hadoop-2.7.3/sbin and run ./start-all.sh to start Hadoop, answering yes to the SSH host-key prompts along the way.

12. Test Hadoop.

First, shut down the firewall. On CentOS 7 and 8 the command is: systemctl stop firewalld.service (run systemctl disable firewalld.service as well to keep it off after a reboot).

Then visit the master host at http://192.168.59.131:50070/

It redirects automatically to the NameNode Overview page.


Next, open http://192.168.59.131:8088/ in a local browser.

It redirects automatically to the YARN cluster page.

If both pages load, Hadoop has been configured successfully.