1. Modify the hostname
The hostname is stored in the /etc/hostname file. To change it, edit the hostname file, enter the new name, and save the file.
It is worth pointing out that not every Linux distribution has an /etc/hostname file;
some distributions store the hostname in /etc/sysconfig/network instead.
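On CentOS 7, which this guide uses, the hostname can also be changed with hostnamectl, which rewrites /etc/hostname for you. A quick sketch, to be run on each node with that node's own name:

 hostnamectl set-hostname master
 hostname    # verify that the new name took effect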

/etc/hosts stores the mapping between names and IP addresses.
Normally each line of the hosts file describes one host and consists of three parts separated by whitespace. Lines beginning with # are comments and are not interpreted by the system.
The difference between a hostname and a domain name: a hostname is usually used inside a LAN,
where the hosts file resolves it to the corresponding IP; a domain name is usually used on the internet,
but if this machine should not rely on internet DNS for a name,
you can edit the hosts file and add your own resolution entry.
The format of the hosts file is:
IP-address hostname/domain alias
Part 1: the network IP address;
Part 2: the hostname or domain name;
Part 3: an alias for the host;
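For illustration, a standard entry with all three fields present:

 127.0.0.1    localhost.localdomain    localhost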


2. Installation environment
Five machines running Linux CentOS 7:

hostname    ip address        subnet mask      gateway
master      192.168.52.160    255.255.255.0    192.168.52.2
slave1      192.168.52.161    255.255.255.0    192.168.52.2
slave2      192.168.52.162    255.255.255.0    192.168.52.2
slave3      192.168.52.163    255.255.255.0    192.168.52.2
slave4      192.168.52.164    255.255.255.0    192.168.52.2
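So that the nodes can reach one another by name, every machine's /etc/hosts should map the whole cluster:

 192.168.52.160    master
 192.168.52.161    slave1
 192.168.52.162    slave2
 192.168.52.163    slave3
 192.168.52.164    slave4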

3. Passwordless SSH setup
On each machine, generate a key pair using the RSA algorithm:

ssh-keygen -t rsa
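Pressing Enter at every prompt accepts the default key location and an empty passphrase. The same can be done non-interactively, e.g.:

 ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa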
Create the authorization file by copying the public key into it:

 [root@server6 ~]# cd .ssh
 [root@server6 .ssh]# cp id_rsa.pub authorized_keys

Merge the keys into the authorized_keys file:
log in to each host over ssh and append its public key to authorized_keys
 ssh <remote hostname> cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
 ssh master cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
 ssh slave1 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
 ssh slave2 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
 ssh slave3 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
 ssh slave4 cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

 authorized_keys : holds the public keys for passwordless remote login; this file is what records the keys of multiple machines
 id_rsa : the generated private key file
 id_rsa.pub : the generated public key file
 known_hosts : the list of public keys of already-known hosts
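Merging on one node only gives that node passwordless access to the others. For every node to reach every other node, the combined authorized_keys file has to be copied back to all machines, and sshd requires restrictive permissions on it. A minimal sketch, assuming the keys were merged on master:

 scp ~/.ssh/authorized_keys slave1:~/.ssh/
 scp ~/.ssh/authorized_keys slave2:~/.ssh/
 scp ~/.ssh/authorized_keys slave3:~/.ssh/
 scp ~/.ssh/authorized_keys slave4:~/.ssh/
 chmod 600 ~/.ssh/authorized_keys    # needed on every node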
4. Create the file directories
 mkdir -p /opt/hadoop/tmp
 mkdir -p /opt/hadoop/dfs
 mkdir -p /opt/hadoop/dfs/data
 mkdir -p /opt/hadoop/dfs/name

Enter the hadoop-3.2.0 configuration directory: cd /opt/hadoop/hadoop-3.2.0/etc/hadoop,
then edit hadoop-env.sh, core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml, and the workers file in turn.
======================================================================================
hadoop-env.sh — the only change is to add your own JDK installation directory:
# Set JAVA_HOME to the JDK directory installed on your system
 export JAVA_HOME=/opt/jdk8
======================================================================================
core-site.xml:
<configuration>
     <property>
         <name>fs.defaultFS</name>
         <value>hdfs://master:9000</value>
         <description>The URI of HDFS: filesystem://namenode-host:port</description>
     </property>
     <property>
         <!--hadoop.tmp.dir is the base directory that the Hadoop
         filesystem depends on. By default it is under /tmp, and on
         Linux much of what is under /tmp is cleared on reboot,
         so we manually specify where these files are kept.
         The path must either not exist yet (Hadoop creates it
         automatically at startup) or be an empty directory,
         otherwise startup fails with an error.
         Local Hadoop temp directory on the namenode-->
         <name>hadoop.tmp.dir</name>
         <value>file:/opt/hadoop/tmp</value>
     </property>
     <property>
         <name>io.file.buffer.size</name>
         <value>131072</value>
     </property>
 </configuration>
======================================================================================
hdfs-site.xml:
<configuration>
     <property>
         <!--dfs.replication is the number of copies kept of each file.
         Hadoop's replication factor is how many copies of each block
         exist in the cluster; a higher factor means better redundancy
         but also more storage used
         -->
         <name>dfs.replication</name>
         <value>3</value>
     </property>
     <property>
         <!--Where the namenode stores the HDFS namespace metadata:
         the path on the local filesystem where the NameNode persists the namespace and transaction logs-->
         <name>dfs.namenode.name.dir</name>
         <value>file:/opt/hadoop/dfs/name</value>
     </property>
     <property>
         <!--Physical storage location of the data blocks on the datanode-->
         <name>dfs.datanode.data.dir</name>
         <value>file:/opt/hadoop/dfs/data</value>
     </property>
     <property>
         <!--dfs.namenode.secondary.http-address designates the secondary namenode's node-->
         <name>dfs.namenode.secondary.http-address</name>
         <value>slave1:50090</value>
     </property>
     <property>
         <name>dfs.webhdfs.enabled</name>
         <value>true</value>
     </property>
     <property>
         <!--With dfs.permissions set to false,
         files can be created on dfs without any permission check.
         Convenient, but you must then guard against accidental
         deletion, so either set it to true or simply delete
         this property node, since the default is already true-->
         <name>dfs.permissions</name>
         <value>true</value>
     </property>
     <!--Set the default HTTP port. I added this section later: without it, after starting hadoop-3.1.0 the HDFS admin UI on port 50070 cannot be reached; hadoop-2.7.7 works without it-->
   <property> 
     <name>dfs.http.address</name> 
     <value>192.168.52.160:50070</value> 
   </property>
 </configuration>
======================================================================================
mapred-site.xml:
<configuration>
     <property>
         <name>mapreduce.framework.name</name>
         <value>yarn</value>
         <description>The runtime framework for executing MapReduce jobs. Can be one of local, classic or yarn.</description>
         <final>true</final>
     </property>
     <property>
             <name>mapreduce.jobtracker.http.address</name>
             <value>master:50030</value>
     </property>
     <property>
             <name>mapreduce.jobhistory.address</name>
             <value>master:10020</value>
     </property>
     <property>
             <name>mapreduce.jobhistory.webapp.address</name>
             <value>master:19888</value>
     </property>
     <property>
             <name>mapred.job.tracker</name>
             <value>http://master:9001</value>
     </property>
 </configuration>
======================================================================================
yarn-site.xml:
<configuration>
     <property>
         <name>yarn.resourcemanager.hostname</name>
         <value>master</value>
         <description>The hostname of the RM.</description>
     </property>
     <property>
         <name>yarn.nodemanager.aux-services</name>
         <value>mapreduce_shuffle</value>
     </property>
     <property>
         <name>yarn.resourcemanager.scheduler.address</name>
         <value>master:8030</value>
     </property>
     <property>
         <name>yarn.resourcemanager.resource-tracker.address</name>
         <value>master:8031</value>
     </property>
     <property>
         <name>yarn.resourcemanager.address</name>
         <value>master:8032</value>
         <description>${yarn.resourcemanager.hostname}:8032</description>
     </property>
     <property>
         <name>yarn.resourcemanager.admin.address</name>
         <value>master:8033</value>
     </property>
     <property>
         <!--Bind to 0.0.0.0 rather than a specific hostname or the local loopback IP, so that port 8088 on this machine can be reached from outside-->
         <name>yarn.resourcemanager.webapp.address</name>
         <value>0.0.0.0:8088</value>
     </property>
 </configuration>
======================================================================================
workers:

Add all the worker (child) nodes to the workers file, by IP or hostname, one per line, as shown below.
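For the cluster above, the workers file would contain:

 slave1
 slave2
 slave3
 slave4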
======================================================================================
5. Run the Hadoop cluster
Format the namenode. The first time HDFS is used, the namenode must be formatted (this is needed only once). On Hadoop 3 the equivalent hdfs namenode -format is the preferred form, but the older command still works:

 [root@master opt]# hadoop namenode -format
 WARNING: /opt/hadoop/hadoop-3.2.0/logs does not exist. Creating.
 2019-04-19 05:49:21,691 INFO namenode.NameNode: STARTUP_MSG: 
 /************************************************************
 STARTUP_MSG: Starting NameNode
 STARTUP_MSG:   host = master/192.168.52.160
 STARTUP_MSG:   args = [-format]
 STARTUP_MSG:   version = 3.2.0
 STARTUP_MSG:   classpath = /opt/hadoop/hadoop-3.2.0/etc/hadoop:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/accessors-smart-1.2.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/asm-5.0.4.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/audience-annotations-0.5.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/avro-1.7.7.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/commons-beanutils-1.9.3.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/commons-cli-1.2.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/commons-codec-1.11.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/commons-collections-3.2.2.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/commons-configuration2-2.1.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/commons-io-2.5.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/commons-lang3-3.7.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/commons-logging-1.1.3.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/commons-math3-3.1.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/commons-net-3.6.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/commons-text-1.4.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/curator-client-2.12.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/curator-framework-2.12.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/curator-recipes-2.12.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/dnsjava-2.1.7.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/gson-2.2.4.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/guava-11.0.2.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/hadoop-annotations-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/hadoop-auth-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/htrace-core4-4.1.0-incubating.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/httpclient-4.5.2.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/httpcore-4.4.4.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jackson-annotations-2.9.5.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jackson-core-2.9.5.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jackson-databind-2.9.5.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/javax.servlet-api-3.1.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jaxb-api-2.2.11.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jcip-annotations-1.0-1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jersey-core-1.19.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jersey-json-1.19.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jersey-server-1.19.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jersey-servlet-1.19.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jettison-1.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jetty-http-9.3.24.v20180605.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jetty-io-9.3.24.v20180605.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jetty-security-9.3.24.v20180605.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jetty-server-9.3.24.v20180605.jar:/opt/hadoop/hadoop-3.2.0/share/had
oop/common/lib/jetty-servlet-9.3.24.v20180605.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jetty-util-9.3.24.v20180605.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jetty-webapp-9.3.24.v20180605.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jetty-xml-9.3.24.v20180605.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jsch-0.1.54.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/json-smart-2.3.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jsp-api-2.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jsr305-3.0.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jsr311-api-1.1.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/kerb-admin-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/kerb-client-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/kerb-common-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/kerb-core-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/kerb-crypto-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/kerb-identity-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/kerb-server-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/kerb-simplekdc-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/kerb-util-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/kerby-asn1-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/kerby-config-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/kerby-pkix-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/kerby-util-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/kerby-xdr-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/log4j-1.2.17.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/netty-3.10.5.Final.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/nimbus-jose-jwt-4.41.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/paranamer-2.3.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/re2j-1.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/slf4j-api-1.7.25.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/snappy-java-1.0.5.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/stax2-api-3.1.4.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/token-provider-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/woodstox-core-5.0.3.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/xz-1.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/zookeeper-3.4.13.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/jul-to-slf4j-1.7.25.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/lib/metrics-core-3.2.4.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/hadoop-common-3.2.0-tests.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/hadoop-common-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/hadoop-nfs-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/common/hadoop-kms-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-util-ajax-9.3.24.v20180605.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/netty-all-4.0.52.Final.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/okhttp-2.7.5.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/okio-1.6.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/hadoop-auth-3.2.0.jar:
/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-codec-1.11.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/httpclient-4.5.2.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/httpcore-4.4.4.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/nimbus-jose-jwt-4.41.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jcip-annotations-1.0-1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/json-smart-2.3.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/accessors-smart-1.2.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/asm-5.0.4.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/zookeeper-3.4.13.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/audience-annotations-0.5.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/netty-3.10.5.Final.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/curator-framework-2.12.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/curator-client-2.12.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-simplekdc-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-client-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/kerby-config-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-core-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/kerby-pkix-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/kerby-asn1-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/kerby-util-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-common-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-crypto-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-io-2.5.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-util-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/token-provider-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-admin-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-server-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/kerb-identity-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/kerby-xdr-1.0.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jersey-core-1.19.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jsr311-api-1.1.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jersey-server-1.19.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/javax.servlet-api-3.1.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/json-simple-1.1.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-server-9.3.24.v20180605.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-http-9.3.24.v20180605.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-util-9.3.24.v20180605.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-io-9.3.24.v20180605.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-webapp-9.3.24.v20180605.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-xml-9.3.24.v20180605.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-servlet-9.3.24.v20180605.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jetty-security-9.3.24.v20180605.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/hadoop-annotations-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-math3-3.1.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-net-3.6.jar:/opt
/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-collections-3.2.2.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jersey-servlet-1.19.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jersey-json-1.19.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jettison-1.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jaxb-impl-2.2.3-1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jaxb-api-2.2.11.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jackson-jaxrs-1.9.13.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jackson-xc-1.9.13.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-beanutils-1.9.3.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-configuration2-2.1.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-lang3-3.7.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-text-1.4.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/avro-1.7.7.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/paranamer-2.3.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/snappy-java-1.0.5.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/commons-compress-1.4.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/xz-1.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/re2j-1.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/gson-2.2.4.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jsch-0.1.54.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/curator-recipes-2.12.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/htrace-core4-4.1.0-incubating.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jackson-databind-2.9.5.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jackson-annotations-2.9.5.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/jackson-core-2.9.5.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/stax2-api-3.1.4.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/woodstox-core-5.0.3.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/lib/dnsjava-2.1.7.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-3.2.0-tests.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-nfs-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-client-3.2.0-tests.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-client-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.2.0-tests.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-native-client-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.2.0-tests.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-rbf-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/hdfs/hadoop-hdfs-httpfs-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/mapreduce/lib/junit-4.11.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclie
nt-3.2.0-tests.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-nativetask-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-uploader-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/HikariCP-java7-2.4.12.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/ehcache-3.3.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/fst-2.50.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/geronimo-jcache_1.0_spec-1.0-alpha-1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/guice-4.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/guice-servlet-4.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/jackson-jaxrs-base-2.9.5.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/jackson-jaxrs-json-provider-2.9.5.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/jackson-module-jaxb-annotations-2.9.5.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/java-util-1.9.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/javax.inject-1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/jersey-client-1.19.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/jersey-guice-1.19.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/json-io-2.5.1.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/metrics-core-3.2.4.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/mssql-jdbc-6.2.1.jre7.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/objenesis-1.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/snakeyaml-1.16.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/lib/swagger-annotations-1.5.4.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-api-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-client-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-common-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-registry-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-common-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-router-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-tests-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-timeline-pluginstorage-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-services-api-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-services-core-3.2.0.jar:/opt/hadoop/hadoop-3.2.0/share/hadoop/yarn/hadoop-yarn-submarine-3.2.0.jar
 STARTUP_MSG:   build = https://github.com/apache/hadoop.git -r e97acb3bd8f3befd27418996fa5d4b50bf2e17bf; compiled by 'sunilg' on 2019-01-08T06:08Z
 STARTUP_MSG:   java = 12.0.1
 ************************************************************/
 2019-04-19 05:49:21,711 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
 2019-04-19 05:49:21,862 INFO namenode.NameNode: createNameNode [-format]
 Formatting using clusterid: CID-6c05f0ac-260d-4e54-9e30-a8a87aace7e5
 2019-04-19 05:49:22,677 INFO namenode.FSEditLog: Edit logging is async:true
 2019-04-19 05:49:22,702 INFO namenode.FSNamesystem: KeyProvider: null
 2019-04-19 05:49:22,703 INFO namenode.FSNamesystem: fsLock is fair: true
 2019-04-19 05:49:22,704 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
 2019-04-19 05:49:22,774 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
 2019-04-19 05:49:22,774 INFO namenode.FSNamesystem: supergroup          = supergroup
 2019-04-19 05:49:22,774 INFO namenode.FSNamesystem: isPermissionEnabled = false
 2019-04-19 05:49:22,775 INFO namenode.FSNamesystem: HA Enabled: false
 2019-04-19 05:49:22,856 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
 2019-04-19 05:49:22,880 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
 2019-04-19 05:49:22,880 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
 2019-04-19 05:49:22,886 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
 2019-04-19 05:49:22,887 INFO blockmanagement.BlockManager: The block deletion will start around 2019 Apr 19 05:49:22
 2019-04-19 05:49:22,890 INFO util.GSet: Computing capacity for map BlocksMap
 2019-04-19 05:49:22,891 INFO util.GSet: VM type       = 64-bit
 2019-04-19 05:49:22,893 INFO util.GSet: 2.0% max memory 235.9 MB = 4.7 MB
 2019-04-19 05:49:22,894 INFO util.GSet: capacity      = 2^19 = 524288 entries
 2019-04-19 05:49:22,911 INFO blockmanagement.BlockManager: Storage policy satisfier is disabled
 2019-04-19 05:49:22,911 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
 2019-04-19 05:49:22,920 INFO Configuration.deprecation: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
 2019-04-19 05:49:22,920 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
 2019-04-19 05:49:22,921 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
 2019-04-19 05:49:22,921 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
 2019-04-19 05:49:22,921 INFO blockmanagement.BlockManager: defaultReplication         = 3
 2019-04-19 05:49:22,921 INFO blockmanagement.BlockManager: maxReplication             = 512
 2019-04-19 05:49:22,921 INFO blockmanagement.BlockManager: minReplication             = 1
 2019-04-19 05:49:22,921 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
 2019-04-19 05:49:22,921 INFO blockmanagement.BlockManager: redundancyRecheckInterval  = 3000ms
 2019-04-19 05:49:22,921 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
 2019-04-19 05:49:22,921 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
 2019-04-19 05:49:22,956 INFO namenode.FSDirectory: GLOBAL serial map: bits=29 maxEntries=536870911
 2019-04-19 05:49:22,957 INFO namenode.FSDirectory: USER serial map: bits=24 maxEntries=16777215
 2019-04-19 05:49:22,957 INFO namenode.FSDirectory: GROUP serial map: bits=24 maxEntries=16777215
 2019-04-19 05:49:22,957 INFO namenode.FSDirectory: XATTR serial map: bits=24 maxEntries=16777215
 2019-04-19 05:49:22,972 INFO util.GSet: Computing capacity for map INodeMap
 2019-04-19 05:49:22,972 INFO util.GSet: VM type       = 64-bit
 2019-04-19 05:49:22,972 INFO util.GSet: 1.0% max memory 235.9 MB = 2.4 MB
 2019-04-19 05:49:22,972 INFO util.GSet: capacity      = 2^18 = 262144 entries
 2019-04-19 05:49:22,980 INFO namenode.FSDirectory: ACLs enabled? false
 2019-04-19 05:49:22,980 INFO namenode.FSDirectory: POSIX ACL inheritance enabled? true
 2019-04-19 05:49:22,980 INFO namenode.FSDirectory: XAttrs enabled? true
 2019-04-19 05:49:22,981 INFO namenode.NameNode: Caching file names occurring more than 10 times
 2019-04-19 05:49:22,985 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
 2019-04-19 05:49:22,989 INFO snapshot.SnapshotManager: SkipList is disabled
 2019-04-19 05:49:22,998 INFO util.GSet: Computing capacity for map cachedBlocks
 2019-04-19 05:49:22,998 INFO util.GSet: VM type       = 64-bit
 2019-04-19 05:49:22,999 INFO util.GSet: 0.25% max memory 235.9 MB = 603.8 KB
 2019-04-19 05:49:22,999 INFO util.GSet: capacity      = 2^16 = 65536 entries
 2019-04-19 05:49:23,012 INFO metrics.TopMetrics: NNTop conf: .window.num.buckets = 10
 2019-04-19 05:49:23,012 INFO metrics.TopMetrics: NNTop conf: .num.users = 10
 2019-04-19 05:49:23,012 INFO metrics.TopMetrics: NNTop conf: .windows.minutes = 1,5,25
 2019-04-19 05:49:23,017 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
 2019-04-19 05:49:23,017 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
 2019-04-19 05:49:23,021 INFO util.GSet: Computing capacity for map NameNodeRetryCache
 2019-04-19 05:49:23,021 INFO util.GSet: VM type       = 64-bit
 2019-04-19 05:49:23,022 INFO util.GSet: 0.029999999329447746% max memory 235.9 MB = 72.5 KB
 2019-04-19 05:49:23,022 INFO util.GSet: capacity      = 2^13 = 8192 entries
 2019-04-19 05:49:23,061 INFO namenode.FSImage: Allocated new BlockPoolId: BP-70141562-192.168.52.160-1555667363051
 2019-04-19 05:49:23,081 INFO common.Storage: Storage directory /opt/hadoop/dfs/name has been successfully formatted.
 2019-04-19 05:49:23,100 INFO namenode.FSImageFormatProtobuf: Saving image file /opt/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
 2019-04-19 05:49:23,206 INFO namenode.FSImageFormatProtobuf: Image file /opt/hadoop/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 399 bytes saved in 0 seconds .
 2019-04-19 05:49:23,223 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
 2019-04-19 05:49:23,234 INFO namenode.NameNode: SHUTDOWN_MSG: 
 /************************************************************
 SHUTDOWN_MSG: Shutting down NameNode at master/192.168.52.160
 ************************************************************/

Start everything (from the Hadoop sbin directory):

[root@master sbin]# ./start-all.sh
 Starting namenodes on [master]
 Last login: Sat Apr 20 22:44:34 EDT 2019 on pts/0
 Starting datanodes
 Last login: Sat Apr 20 22:45:13 EDT 2019 on pts/0
 Starting secondary namenodes [slave1]
 Last login: Sat Apr 20 22:45:16 EDT 2019 on pts/0
 Starting resourcemanager
 Last login: Sat Apr 20 22:45:28 EDT 2019 on pts/0
 Starting nodemanagers
 Last login: Sat Apr 20 22:45:47 EDT 2019 on pts/0

Stop everything:
[root@master sbin]# ./stop-all.sh
 Stopping namenodes on [master]
 Last login: Sat Apr 20 22:17:03 EDT 2019 on pts/0
 Stopping datanodes
 Last login: Sat Apr 20 22:44:22 EDT 2019 on pts/0
 Stopping secondary namenodes [slave1]
 Last login: Sat Apr 20 22:44:24 EDT 2019 on pts/0
 Stopping nodemanagers
 Last login: Sat Apr 20 22:44:27 EDT 2019 on pts/0
 Stopping resourcemanager
 Last login: Sat Apr 20 22:44:31 EDT 2019 on pts/0
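After starting, it is worth checking on each node which daemons actually came up, e.g. with jps. With the configuration above one would expect roughly the following (process IDs and output order will differ):

 [root@master ~]# jps     # master: NameNode, ResourceManager, Jps
 [root@slave1 ~]# jps     # slave1: DataNode, NodeManager, SecondaryNameNode, Jps
 [root@slave2 ~]# jps     # slave2-slave4: DataNode, NodeManager, Jps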

6. Testing


After uploading a file, the admin page shows that three of the worker nodes each hold one block of it (the three replicas).
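A minimal way to run such a test from the master (test.txt is just an example file name):

 hdfs dfs -mkdir -p /input
 hdfs dfs -put test.txt /input
 hdfs dfs -ls /input
 hdfs fsck /input/test.txt -files -blocks -locations    # lists which datanodes hold each replica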
