1. Prepare the machines

Buy three ECS cloud server instances on Alibaba Cloud; pay-as-you-go billing is fine.

2. Open the three instances in Xshell

Open the three instances in Xshell and enable View -> Compose -> Compose Bar, which lets you type one command into all three sessions at the same time.


3. Add a hadoop user (on every machine); as a rule we do not operate as the root superuser

[root@hadoop001 ~]# useradd hadoop
[root@hadoop001 ~]# su - hadoop
[hadoop@hadoop001 ~]$ pwd
/home/hadoop
[hadoop@hadoop001 ~]$

4. Create the directories we will need later

Directories: software, app, data, lib, source (as the later steps show, tarballs go into software and get unpacked into app)

[hadoop@hadoop001 ~]$ mkdir software app data lib source
[hadoop@hadoop001 ~]$ ll
total 20
drwxrwxr-x 2 hadoop hadoop 4096 Nov 27 13:45 app
drwxrwxr-x 2 hadoop hadoop 4096 Nov 27 13:45 data
drwxrwxr-x 2 hadoop hadoop 4096 Nov 27 13:45 lib
drwxrwxr-x 2 hadoop hadoop 4096 Nov 27 13:45 software
drwxrwxr-x 2 hadoop hadoop 4096 Nov 27 13:45 source
[hadoop@hadoop001 ~]$

5. Install lrzsz, a package for uploading and downloading files through the terminal

[hadoop@hadoop001 ~]$ yum install -y lrzsz
Loaded plugins: fastestmirror
You need to be root to perform this command.
[hadoop@hadoop001 ~]$ 
This requires the root account:
[hadoop@hadoop001 ~]$ su - root
Password: 
[root@hadoop001 ~]# yum install -y lrzsz
Loaded plugins: fastestmirror
Setting up Install Process
Determining fastest mirrors

6. Upload the required software with rz

[root@hadoop001 ~]# rz


7. Copy these files to the other machines

[root@hadoop001 ~]# scp * root@172.18.39.22:/home/hadoop/software
The authenticity of host '172.18.39.22 (172.18.39.22)' can't be established.
RSA key fingerprint is a4:a8:be:f9:df:04:b2:ab:4b:a6:55:a3:4a:79:1c:b5.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.18.39.22' (RSA) to the list of known hosts.
root@172.18.39.22's password: 
Permission denied, please try again.
root@172.18.39.22's password: 
hadoop-2.6.0-cdh5.7.0.tar.gz                                                                                                                                                                                                                100%  297MB 297.2MB/s   00:01    
jdk-8u45-linux-x64.gz                                                                                                                                                                                                                       100%  165MB  82.6MB/s   00:02    
zookeeper-3.4.6.tar.gz                                                                                                                                                                                                                      100%   17MB  16.9MB/s   00:00    
[root@hadoop001 ~]# mv * /home/hadoop/software
[root@hadoop001 ~]#
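
The same copy can also be written as a loop; a minimal sketch, assuming the software directory already exists on each target and the third machine (172.18.39.24) gets the same files — the loop itself is illustrative, not from the original transcript:

for ip in 172.18.39.22 172.18.39.24; do
    scp ./*.gz root@"$ip":/home/hadoop/software/
done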

8. Fix the ownership of the uploaded files

[hadoop@hadoop001 software]$ su - root
Password: 
[root@hadoop001 ~]# chown -R hadoop:hadoop /home/hadoop/software/*
[root@hadoop001 ~]# 
[root@hadoop001 ~]# su - hadoop
[hadoop@hadoop001 ~]$ cd software
[hadoop@hadoop001 software]$ ll
total 490788
-rw-r--r-- 1 hadoop hadoop 311585484 Nov 27 14:17 hadoop-2.6.0-cdh5.7.0.tar.gz
-rw-r--r-- 1 hadoop hadoop 173271626 Sep 15 21:07 jdk-8u45-linux-x64.gz
-rw-r--r-- 1 hadoop hadoop  17699306 Nov 27 13:44 zookeeper-3.4.6.tar.gz
[hadoop@hadoop001 software]$

9. Configure /etc/hosts on the machines, in preparation for mutual trust between hosts

[hadoop@hadoop001 ~]$ su - root
Password: 
[root@hadoop001 ~]# vi /etc/hosts

172.18.39.23    hadoop001       hadoop001
172.18.39.22    hadoop002       hadoop002
172.18.39.24    hadoop003       hadoop003

Push the modified file to the other two machines:

[root@hadoop001 ~]# scp /etc/hosts root@172.18.39.22:/etc/hosts
root@172.18.39.22's password: 
hosts                                                                     100%  272     0.3KB/s   00:00    
[root@hadoop001 ~]# scp /etc/hosts root@172.18.39.24:/etc/hosts
root@172.18.39.24's password: 
hosts                                                                     100%  272     0.3KB/s   00:00
[hadoop@hadoop001 ~]$ cat /etc/hosts
127.0.0.1	localhost	localhost.localdomain	localhost4	localhost4.localdomain4
::1	localhost	localhost.localdomain	localhost6	localhost6.localdomain6

172.18.39.23	hadoop001	hadoop001
172.18.39.22	hadoop002	hadoop002
172.18.39.24	hadoop003	hadoop003
[hadoop@hadoop001 ~]$
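
A quick way to confirm every alias resolves from each machine; a sketch, assuming ICMP is allowed within the security group:

for h in hadoop001 hadoop002 hadoop003; do ping -c 1 "$h"; done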

10. Set up passwordless mutual trust between the machines

Generate an SSH key pair (do this on every machine):

[hadoop@hadoop001 ~]$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/hadoop/.ssh/id_rsa): 
Created directory '/home/hadoop/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /home/hadoop/.ssh/id_rsa.
Your public key has been saved in /home/hadoop/.ssh/id_rsa.pub.
The key fingerprint is:
cd:db:01:df:8f:d3:41:9c:76:59:4c:c4:e6:ee:23:7e hadoop@hadoop001
The key's randomart image is:
+--[ RSA 2048]----+
|               =+|
|              . B|
|          .    O.|
|         o o .o o|
|        S o o .o |
|           o . +o|
|          . . o.o|
|             . E.|
|            ..o .|
+-----------------+
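
If you are driving all three machines through the Compose Bar, a non-interactive form of the same command avoids the prompts; this is standard ssh-keygen usage, not something shown in the original transcript:

ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa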

Look at the key files on the first machine:

[hadoop@hadoop001 ~]$ cd .ssh
[hadoop@hadoop001 .ssh]$ ll
total 8
-rw------- 1 hadoop hadoop 1675 Nov 27 15:02 id_rsa		(private key)
-rw-r--r-- 1 hadoop hadoop  398 Nov 27 15:02 id_rsa.pub	(public key)
[hadoop@hadoop001 .ssh]$

hadoop001 is the primary and hadoop002/hadoop003 the secondaries, so hadoop002 and hadoop003 each need to send their public key file over to hadoop001:

[hadoop@hadoop002 .ssh]$ scp id_rsa.pub root@hadoop001:/home/hadoop/.ssh/id_rsa.pub2 
The authenticity of host 'hadoop001 (172.18.39.23)' can't be established.
RSA key fingerprint is 31:a2:03:77:1a:21:b6:4f:59:1b:bd:b5:24:c3:e4:d7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop001,172.18.39.23' (RSA) to the list of known hosts.
root@hadoop001's password: 
id_rsa.pub                                                                100%  398     0.4KB/s   00:00    
[hadoop@hadoop002 .ssh]$ 

[hadoop@hadoop003 ~]$ cd .ssh
[hadoop@hadoop003 .ssh]$ scp id_rsa.pub root@hadoop001:/home/hadoop/.ssh/id_rsa.pub3
The authenticity of host 'hadoop001 (172.18.39.23)' can't be established.
RSA key fingerprint is 31:a2:03:77:1a:21:b6:4f:59:1b:bd:b5:24:c3:e4:d7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop001,172.18.39.23' (RSA) to the list of known hosts.
root@hadoop001's password: 
Permission denied, please try again.
root@hadoop001's password: 
id_rsa.pub                                                                100%  398     0.4KB/s   00:00    
[hadoop@hadoop003 .ssh]$

Check the files on the first machine:

[hadoop@hadoop001 .ssh]$ ll
total 16
-rw------- 1 hadoop hadoop 1675 Nov 27 15:02 id_rsa
-rw-r--r-- 1 hadoop hadoop  398 Nov 27 15:02 id_rsa.pub
-rw-r--r-- 1 root   root    398 Nov 27 15:06 id_rsa.pub2
-rw-r--r-- 1 root   root    398 Nov 27 15:07 id_rsa.pub3
[hadoop@hadoop001 .ssh]$

Append all three public keys to authorized_keys:

[hadoop@hadoop001 .ssh]$ cat id_rsa.pub >> authorized_keys
[hadoop@hadoop001 .ssh]$ cat id_rsa.pub2 >> authorized_keys
[hadoop@hadoop001 .ssh]$ cat id_rsa.pub3 >> authorized_keys
[hadoop@hadoop001 .ssh]$ cat authorized_keys 
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAl0+3q4dd4gtjeis0mbmo/O+JRrFYqS75efmgc+K9vMYRJdzH/hJudExA+S78W04sW/WZ1V5BctMsmNiXd+LRx6rAh9DnpzB9flGDchJmPshOLPx25LnKn0MoYuCTqlXBiLHv5SIbRBq885E1KK+ZtagmKEdEIffeKXOhhd1GmydHh5n3wYb5kag5dAU/RAu2hmS/Vbo/NgEZvTPbB1ljyBDpGI53nOUTVrQYC0zHYai3/S+dF8PwsGeo2kd5mwxKsYppqjtVYTEb0SVeQGhls2HkGIXLBcpIqKa8uGu58M1iyeC+PM8L/Co09YZGVMFpWkvtwmOpVV0nGmf/cmsyvQ== hadoop@hadoop001
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4VCa/KBEYqFMhedxmEsrdTrW59VY9BRWLmnFWoV9hKYfic7M/0pb0f6qKE0O1ySIizGj1zAAcjwj1Q+LEfEhsPaw1lUT02MkTn5dSlbfSgiz4Ue637hWC2vB1ZcxfjIMOf4KFs+MmOgU27V/S1nwS9iMOWq4u4RZ8tAxH3dIHPvktx1nV9wRetuCi0PzCn1TH3j91YjKiAWFKA1YJaSo3MrATDdjDdQDR6/EYvOAuGX75W7oPv0rYovv7Z/Q7QUQqr5JhW7l67zsHE9gf2xJ+1XovA56qP77TImRVzCyPqCk+v/IulvJ9jvsgFyjRdwH3wxwWZ7pgfnLXl4dfWMs+w== hadoop@hadoop002
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwCUhdU4JstMg/l8S189e/4FbzYQMEu5NK/gGujO7abt+0wKpFtrcOfFt03IVtvD0vMt6qpznYYlm+Y9D8OLSwSxSgPPVUL6iQ40QQrlSlWFtnhPMVTg32rjmxFhP7E+M14EPqEAEhO4Kcv4+1WjWQszg4uqj8w4KHZBogae0pbBJ+CIARqVJCo2M7/dC0hJA0DzngEebFaSekRBmcxActQm8ULo2tktipoekTaHmhwpIFmXyh8iDkJA0QwApuiq7HUeRMeLwQCh5C4cFHkUGG2aTz/LPN/H8WfYakMokc8g2U8RNtn817qTy0N2AJniRijb5/KATK3j3ftOmc6EYfw== hadoop@hadoop003
[hadoop@hadoop001 .ssh]$
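
Where ssh-copy-id is available, the copy-and-append sequence above can be shortened to one command per secondary; a sketch, assuming the hadoop user on hadoop001 has a password set (run on hadoop002 and hadoop003):

ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop001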

Distribute the file to the other two machines:

[hadoop@hadoop001 .ssh]$ scp authorized_keys root@hadoop002:/home/hadoop/.ssh
[hadoop@hadoop001 .ssh]$ scp authorized_keys root@hadoop003:/home/hadoop/.ssh

Fix the ownership of .ssh on every machine

[hadoop@hadoop001 .ssh]$ exit
logout
[root@hadoop001 ~]# chown -R hadoop:hadoop /home/hadoop/.ssh/*
[root@hadoop001 ~]# chown -R hadoop:hadoop /home/hadoop/.ssh
Set .ssh to mode 700 (chmod 700 ~/.ssh) and the authorized_keys inside it to mode 600:
[hadoop@hadoop001 ~]$ cd .ssh
[hadoop@hadoop001 .ssh]$ chmod 600 authorized_keys
[hadoop@hadoop001 .ssh]$ ll
total 24
-rw------- 1 hadoop hadoop 1194 Nov 27 15:10 authorized_keys
-rw------- 1 hadoop hadoop 1675 Nov 27 15:02 id_rsa
-rw-r--r-- 1 hadoop hadoop  398 Nov 27 15:02 id_rsa.pub
-rw-r--r-- 1 hadoop hadoop  398 Nov 27 15:06 id_rsa.pub2
-rw-r--r-- 1 hadoop hadoop  398 Nov 27 15:07 id_rsa.pub3
-rw-r--r-- 1 hadoop hadoop  808 Nov 27 15:12 known_hosts
[hadoop@hadoop001 .ssh]$

SSH now works without a password; try it from every machine:

[hadoop@hadoop001 .ssh]$ ssh hadoop001 date
The authenticity of host 'hadoop001 (172.18.39.23)' can't be established.
RSA key fingerprint is 31:a2:03:77:1a:21:b6:4f:59:1b:bd:b5:24:c3:e4:d7.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop001,172.18.39.23' (RSA) to the list of known hosts.
Tue Nov 27 15:16:05 CST 2018
[hadoop@hadoop001 .ssh]$ ssh hadoop002 date
Tue Nov 27 15:16:15 CST 2018
[hadoop@hadoop001 .ssh]$ ssh hadoop003 date
Tue Nov 27 15:16:25 CST 2018

11. Deploy the JDK

[hadoop@hadoop001 software]$ java -version
-bash: java: command not found	(the JDK is not installed)
[hadoop@hadoop001 software]$ 
[hadoop@hadoop001 software]$ ll /usr/java
ls: cannot access /usr/java: No such file or directory

Create the /usr/java directory:

[hadoop@hadoop001 software]$ exit
logout
[root@hadoop001 ~]# mkdir /usr/java

Unpack the JDK tarball:

[root@hadoop001 ~]# tar -xzvf /home/hadoop/software/jdk-8u45-linux-x64.gz -C /usr/java
[root@hadoop001 ~]# cd /usr/java
[root@hadoop001 java]# ll
total 4
drwxr-xr-x 8 uucp 143 4096 Apr 11  2015 jdk1.8.0_45

Note: fix the owner and group of the unpacked JDK (it ships owned by uucp:143):

[root@hadoop001 java]# chown -R root:root jdk1.8.0_45
[root@hadoop001 java]# 
[root@hadoop001 java]# ll
total 4
drwxr-xr-x 8 root root 4096 Apr 11  2015 jdk1.8.0_45
[root@hadoop001 java]#

Configure the environment variables

[root@hadoop001 java]# vi /etc/profile
At the end of the file, add:
export JAVA_HOME=/usr/java/jdk1.8.0_45
export PATH=$JAVA_HOME/bin:$PATH
Save and exit, re-source the file, and check the JDK version (repeat this JDK setup on hadoop002 and hadoop003):
[root@hadoop001 java]# source /etc/profile
[root@hadoop001 java]# java -version
java version "1.8.0_45"
Java(TM) SE Runtime Environment (build 1.8.0_45-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.45-b02, mixed mode)
[root@hadoop001 java]#
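
Rather than repeating the JDK steps by hand on each node, they can be pushed from hadoop001; a sketch, assuming you accept the root password prompts for each host (hostnames resolve via the /etc/hosts entries from step 9):

for h in hadoop002 hadoop003; do
    scp /home/hadoop/software/jdk-8u45-linux-x64.gz root@"$h":/tmp/
    ssh root@"$h" "mkdir -p /usr/java && tar -xzf /tmp/jdk-8u45-linux-x64.gz -C /usr/java && chown -R root:root /usr/java/jdk1.8.0_45"
done

The JAVA_HOME/PATH lines still need to go into /etc/profile on each node (or push the file with scp, as was done for /etc/hosts).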

12. Unpack and install Hadoop and ZooKeeper

[hadoop@hadoop001 software]$  tar -xzvf hadoop-2.6.0-cdh5.7.0.tar.gz -C ../app
[hadoop@hadoop001 software]$  tar -xzvf zookeeper-3.4.6.tar.gz -C ../app
[hadoop@hadoop001 software]$ cd ../app
[hadoop@hadoop001 app]$ ll
total 8
drwxr-xr-x 14 hadoop hadoop 4096 Mar 24  2016 hadoop-2.6.0-cdh5.7.0
drwxr-xr-x 10 hadoop hadoop 4096 Feb 20  2014 zookeeper-3.4.6
[hadoop@hadoop001 app]$

Plan the directory layout and configure the environment variables (add these to .bash_profile on all three machines):

[hadoop@hadoop003 ~]$ vi .bash_profile
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
export ZOOKEEPER_HOME=/home/hadoop/app/zookeeper-3.4.6
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$ZOOKEEPER_HOME/bin:$PATH
Make it take effect:
[hadoop@hadoop001 ~]$ . .bash_profile
[hadoop@hadoop001 ~]$ cd $HADOOP_HOME
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$

Create the working directories

[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ mkdir $HADOOP_HOME/data && mkdir $HADOOP_HOME/logs && mkdir $HADOOP_HOME/tmp
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ chmod 777 -R $HADOOP_HOME/tmp
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ ll
total 88
drwxr-xr-x  2 hadoop hadoop  4096 Mar 24  2016 bin
drwxr-xr-x  2 hadoop hadoop  4096 Mar 24  2016 bin-mapreduce1
drwxr-xr-x  3 hadoop hadoop  4096 Mar 24  2016 cloudera
drwxrwxr-x  2 hadoop hadoop  4096 Nov 27 15:45 data
drwxr-xr-x  6 hadoop hadoop  4096 Mar 24  2016 etc
drwxr-xr-x  5 hadoop hadoop  4096 Mar 24  2016 examples
drwxr-xr-x  3 hadoop hadoop  4096 Mar 24  2016 examples-mapreduce1
drwxr-xr-x  2 hadoop hadoop  4096 Mar 24  2016 include
drwxr-xr-x  3 hadoop hadoop  4096 Mar 24  2016 lib
drwxr-xr-x  2 hadoop hadoop  4096 Mar 24  2016 libexec
-rw-r--r--  1 hadoop hadoop 17087 Mar 24  2016 LICENSE.txt
drwxrwxr-x  2 hadoop hadoop  4096 Nov 27 15:45 logs
-rw-r--r--  1 hadoop hadoop   101 Mar 24  2016 NOTICE.txt
-rw-r--r--  1 hadoop hadoop  1366 Mar 24  2016 README.txt
drwxr-xr-x  3 hadoop hadoop  4096 Mar 24  2016 sbin
drwxr-xr-x  4 hadoop hadoop  4096 Mar 24  2016 share
drwxr-xr-x 17 hadoop hadoop  4096 Mar 24  2016 src
drwxrwxrwx  2 hadoop hadoop  4096 Nov 27 15:45 tmp
Check whether the firewall is off (as root):
[root@hadoop001 ~]# service iptables status
iptables: Firewall is not running.
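
Note that service iptables status is the CentOS 6 form; on CentOS 7 and later the equivalent check would be the following (an aside, not part of the original walkthrough):

systemctl status firewalld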

13. Configure ZooKeeper

[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ cd ../zookeeper-3.4.6
[hadoop@hadoop001 zookeeper-3.4.6]$ cd conf
[hadoop@hadoop001 conf]$ ll
total 12
-rw-rw-r-- 1 hadoop hadoop  535 Feb 20  2014 configuration.xsl
-rw-rw-r-- 1 hadoop hadoop 2161 Feb 20  2014 log4j.properties
-rw-rw-r-- 1 hadoop hadoop  922 Feb 20  2014 zoo_sample.cfg
[hadoop@hadoop001 conf]$ cp zoo_sample.cfg zoo.cfg
[hadoop@hadoop001 conf]$ ll
total 16
-rw-rw-r-- 1 hadoop hadoop  535 Feb 20  2014 configuration.xsl
-rw-rw-r-- 1 hadoop hadoop 2161 Feb 20  2014 log4j.properties
-rw-rw-r-- 1 hadoop hadoop  922 Nov 27 15:50 zoo.cfg
-rw-rw-r-- 1 hadoop hadoop  922 Feb 20  2014 zoo_sample.cfg
[hadoop@hadoop001 conf]$ vi zoo.cfg
Change the data directory path:
dataDir=/home/hadoop/app/zookeeper-3.4.6/data
Add the server list:
server.1=hadoop001:2888:3888
server.2=hadoop002:2888:3888
server.3=hadoop003:2888:3888

Save and exit, then copy it to the other servers

[hadoop@hadoop001 conf]$ scp zoo.cfg hadoop002:/home/hadoop/app/zookeeper-3.4.6/conf
zoo.cfg                                                                   100% 1032     1.0KB/s   00:00    
[hadoop@hadoop001 conf]$ scp zoo.cfg hadoop003:/home/hadoop/app/zookeeper-3.4.6/conf
zoo.cfg                                                                   100% 1032     1.0KB/s   00:00    
[hadoop@hadoop001 conf]$ 
[hadoop@hadoop001 conf]$ cd ../
[hadoop@hadoop001 zookeeper-3.4.6]$ mkdir data
[hadoop@hadoop001 zookeeper-3.4.6]$ touch data/myid
[hadoop@hadoop001 zookeeper-3.4.6]$ echo 1 > data/myid
[hadoop@hadoop001 zookeeper-3.4.6]$ cat data/myid
1
[hadoop@hadoop001 zookeeper-3.4.6]$

Note: on hadoop002 and hadoop003 run echo 2 > data/myid and echo 3 > data/myid respectively; each server's myid file tells it which server.N entry in zoo.cfg it is.
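
A quick cross-check that every node got the right id; a sketch that relies on the passwordless SSH set up in step 10:

for h in hadoop001 hadoop002 hadoop003; do
    ssh "$h" 'echo -n "$(hostname): "; cat /home/hadoop/app/zookeeper-3.4.6/data/myid'
done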

14. Deploy Hadoop

[hadoop@hadoop001 zookeeper-3.4.6]$ cd ../
[hadoop@hadoop001 app]$ ll
total 8
drwxr-xr-x 17 hadoop hadoop 4096 Nov 27 15:45 hadoop-2.6.0-cdh5.7.0
drwxr-xr-x 11 hadoop hadoop 4096 Nov 27 15:57 zookeeper-3.4.6
[hadoop@hadoop001 app]$ cd hadoop-2.6.0-cdh5.7.0
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ 
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ cd etc/hadoop
[hadoop@hadoop001 hadoop]$ pwd
/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop
[hadoop@hadoop001 hadoop]$ vi hadoop-env.sh
Set the Java home inside it:
export JAVA_HOME=/usr/java/jdk1.8.0_45
Copy it to the other two servers:
[hadoop@hadoop001 hadoop]$ scp hadoop-env.sh hadoop002:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop
hadoop-env.sh                                                             100% 4233     4.1KB/s   00:00    
[hadoop@hadoop001 hadoop]$ scp hadoop-env.sh hadoop003:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop
hadoop-env.sh                                                             100% 4233     4.1KB/s   00:00    
[hadoop@hadoop001 hadoop]$ 
[hadoop@hadoop001 hadoop]$ rz

Upload the configuration files that were edited locally ahead of time; they are ready to use and can be uploaded directly with rz (distribute them to every node, as with hadoop-env.sh; a sketch of their key contents follows the list):
core-site.xml
hdfs-site.xml
mapred-site.xml
yarn-site.xml
slaves
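
The post does not reproduce those files, but for an HA setup like this one they typically carry entries along the following lines. Shown as name = value for brevity; the real files use XML <property> blocks, and "mycluster", "nn1", "nn2" are placeholder names, not values taken from this post:

core-site.xml:
fs.defaultFS = hdfs://mycluster
ha.zookeeper.quorum = hadoop001:2181,hadoop002:2181,hadoop003:2181

hdfs-site.xml:
dfs.nameservices = mycluster
dfs.ha.namenodes.mycluster = nn1,nn2
dfs.namenode.rpc-address.mycluster.nn1 = hadoop001:8020
dfs.namenode.rpc-address.mycluster.nn2 = hadoop002:8020
dfs.namenode.shared.edits.dir = qjournal://hadoop001:8485;hadoop002:8485;hadoop003:8485/mycluster
dfs.ha.automatic-failover.enabled = true
dfs.client.failover.proxy.provider.mycluster = org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider

slaves (one DataNode host per line, matching the jps output later):
hadoop001
hadoop002
hadoop003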

15. Start the ZooKeeper, HDFS, and YARN clusters

Start ZooKeeper (on all three machines):
[hadoop@hadoop001 ~]$ $ZOOKEEPER_HOME/bin/zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/app/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[hadoop@hadoop001 ~]$ 

[hadoop@hadoop001 ~]$ $ZOOKEEPER_HOME/bin/zkServer.sh status
JMX enabled by default
Using config: /home/hadoop/app/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
[hadoop@hadoop002 ~]$ $ZOOKEEPER_HOME/bin/zkServer.sh status
JMX enabled by default
Using config: /home/hadoop/app/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: follower
[hadoop@hadoop003 ~]$ $ZOOKEEPER_HOME/bin/zkServer.sh status
JMX enabled by default
Using config: /home/hadoop/app/zookeeper-3.4.6/bin/../conf/zoo.cfg
Mode: leader	(exactly one of the three is the leader)

Start the JournalNode process (on all three machines):
[hadoop@hadoop001 ~]$ cd app
[hadoop@hadoop001 app]$ cd hadoop-2.6.0-cdh5.7.0
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-journalnode-hadoop001.out
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ jps
2147 JournalNode
2198 Jps
2072 QuorumPeerMain
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ 
Format the NameNode on the first machine only:
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ hadoop namenode -format
The NameNode on the second machine must hold exactly the same metadata as the first, so do not format again on the second machine; copy the data directory over instead:
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ scp -r data hadoop002:/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
in_use.lock                                                               100%   14     0.0KB/s   00:00    
VERSION                                                                   100%  154     0.2KB/s   00:00    
seen_txid                                                                 100%    2     0.0KB/s   00:00    
fsimage_0000000000000000000.md5                                           100%   62     0.1KB/s   00:00    
fsimage_0000000000000000000                                               100%  338     0.3KB/s   00:00    
VERSION                                                                   100%  203     0.2KB/s   00:00    
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ 
Initialize ZKFC (the ZooKeeper Failover Controller) on the first machine:
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$  hdfs zkfc -formatZK
Start the HDFS distributed storage system:
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ sbin/start-dfs.sh
18/11/27 16:34:36 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop001 hadoop002]
hadoop001: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop001.out
hadoop002: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop002.out
hadoop001: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop001.out
hadoop002: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop002.out
hadoop003: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop003.out
Starting journal nodes [hadoop001 hadoop002 hadoop003]
hadoop001: starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-journalnode-hadoop001.out
hadoop002: starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-journalnode-hadoop002.out
hadoop003: starting journalnode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-journalnode-hadoop003.out
18/11/27 16:34:52 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [hadoop001 hadoop002]
hadoop001: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-zkfc-hadoop001.out
hadoop002: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-zkfc-hadoop002.out
hadoop002: Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this namenode.
hadoop002: 	at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:130)
hadoop002: 	at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:186)
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ sbin/start-dfs.sh
18/11/27 16:46:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop001 hadoop002]
hadoop001: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop001.out
hadoop002: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop002.out
hadoop001: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop001.out
hadoop002: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop002.out
hadoop003: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop003.out
Starting journal nodes [hadoop001 hadoop002 hadoop003]
hadoop001: journalnode running as process 5300. Stop it first.
hadoop002: journalnode running as process 3244. Stop it first.
hadoop003: journalnode running as process 2860. Stop it first.
18/11/27 16:46:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [hadoop001 hadoop002]
hadoop001: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-zkfc-hadoop001.out
hadoop002: starting zkfc, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-zkfc-hadoop002.out
hadoop002: Exception in thread "main" org.apache.hadoop.HadoopIllegalArgumentException: HA is not enabled for this namenode.
hadoop002: 	at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.create(DFSZKFailoverController.java:130)
hadoop002: 	at org.apache.hadoop.hdfs.tools.DFSZKFailoverController.main(DFSZKFailoverController.java:186)
This exception indicates hadoop002's hdfs-site.xml is missing the HA settings; once that node's configuration matches hadoop001's and zkfc is restarted, DFSZKFailoverController appears in the hadoop002 jps output below.
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ jps
5987 DFSZKFailoverController
5300 JournalNode
5576 NameNode
2072 QuorumPeerMain
6057 Jps
5708 DataNode
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ 

[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ jps
5217 DataNode
5345 DFSZKFailoverController
5121 NameNode
5026 JournalNode
5395 Jps
1976 QuorumPeerMain

[hadoop@hadoop003 hadoop-2.6.0-cdh5.7.0]$ jps
4161 DataNode
4264 Jps
2024 QuorumPeerMain
4076 JournalNode

Start YARN:
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop001.out
hadoop003: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop003.out
hadoop002: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop002.out
hadoop001: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop001.out
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ jps
8960 JournalNode
9217 NameNode
9859 NodeManager
9764 ResourceManager
2072 QuorumPeerMain
10186 Jps
9627 DFSZKFailoverController
9343 DataNode

[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop002.out
[hadoop@hadoop002 hadoop-2.6.0-cdh5.7.0]$ jps
5217 DataNode
5345 DFSZKFailoverController
5121 NameNode
5026 JournalNode
5683 Jps
5476 NodeManager
1976 QuorumPeerMain
5626 ResourceManager
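
With both ResourceManagers up, the HA state of each role can be queried directly; nn1/nn2 and rm1/rm2 stand for whatever ids the uploaded hdfs-site.xml and yarn-site.xml actually define (placeholders here). Each command prints active or standby:

hdfs haadmin -getServiceState nn1
yarn rmadmin -getServiceState rm1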

Configure the Alibaba Cloud ports: open the required ports yourself in the instance's security group.

Monitor the cluster

HDFS

http://120.79.18.194:50070


http://120.78.178.195:50070/


YARN

http://120.79.18.194:8088/cluster (the active ResourceManager)



http://120.78.178.195:8088/cluster/cluster (the standby ResourceManager)


Start the JobHistory server

[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$  $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver
starting historyserver, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/mapred-hadoop-historyserver-hadoop001.out
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ jps
8960 JournalNode
9217 NameNode
9859 NodeManager
9764 ResourceManager
2072 QuorumPeerMain
11465 JobHistoryServer
9627 DFSZKFailoverController
11501 Jps
9343 DataNode
[hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$

http://120.79.18.194:19888/jobhistory


Shutdown and startup order:

1. Stop Hadoop (YARN -> HDFS)

[hadoop@hadoop001 sbin]$ stop-yarn.sh
[hadoop@hadoop002 sbin]$ yarn-daemon.sh stop resourcemanager
[hadoop@hadoop001 sbin]$ stop-dfs.sh

2. Stop ZooKeeper

[hadoop@hadoop001 bin]$ zkServer.sh stop
[hadoop@hadoop002 bin]$ zkServer.sh stop
[hadoop@hadoop003 bin]$ zkServer.sh stop

1. To start again: ZooKeeper first

[hadoop@hadoop001 bin]$ zkServer.sh start
[hadoop@hadoop002 bin]$ zkServer.sh start
[hadoop@hadoop003 bin]$ zkServer.sh start

2. Start Hadoop (HDFS -> YARN)

[hadoop@hadoop001 sbin]$ start-dfs.sh
[hadoop@hadoop001 sbin]$ start-yarn.sh
[hadoop@hadoop002 sbin]$ yarn-daemon.sh start resourcemanager
[hadoop@hadoop001 ~]$ $HADOOP_HOME/sbin/mr-jobhistory-daemon.sh start historyserver
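
The whole sequence can be captured in one wrapper script; a hypothetical sketch (not from the post), assuming the install paths used throughout this walkthrough, passwordless SSH as the hadoop user, and the JDK location from step 11:

#!/bin/bash
# cluster.sh start|stop -- run as hadoop on hadoop001, in the order given above
ZK=/home/hadoop/app/zookeeper-3.4.6/bin/zkServer.sh
HD=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
J=/usr/java/jdk1.8.0_45   # non-login SSH shells do not source /etc/profile, so pass JAVA_HOME explicitly
case "$1" in
  start)
    for h in hadoop001 hadoop002 hadoop003; do ssh "$h" "JAVA_HOME=$J $ZK start"; done
    "$HD"/sbin/start-dfs.sh
    "$HD"/sbin/start-yarn.sh
    ssh hadoop002 "JAVA_HOME=$J $HD/sbin/yarn-daemon.sh start resourcemanager"
    "$HD"/sbin/mr-jobhistory-daemon.sh start historyserver
    ;;
  stop)
    "$HD"/sbin/mr-jobhistory-daemon.sh stop historyserver
    ssh hadoop002 "JAVA_HOME=$J $HD/sbin/yarn-daemon.sh stop resourcemanager"
    "$HD"/sbin/stop-yarn.sh
    "$HD"/sbin/stop-dfs.sh
    for h in hadoop001 hadoop002 hadoop003; do ssh "$h" "JAVA_HOME=$J $ZK stop"; done
    ;;
  *) echo "usage: $0 start|stop" ;;
esac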