This document describes in detail how to build a fully distributed, highly available (HA) cluster with CentOS 7.4 + ZooKeeper 3.5.8 + Hadoop 3.2.1 + HBase 2.2.5, and how to verify and operate it.


Building a Hadoop HA fully distributed cluster

CentOS 7.4 + ZooKeeper 3.5.8 + Hadoop 3.2.1 + HBase 2.2.5, fully distributed + high availability (HA)

Server environment preparation

  • Host list

    192.168.26.100 (hd100)
        Specs: 2C 4G
        OS: CentOS 7.4.1708
        Installed software: jdk-8u261-linux-x64, zookeeper-3.5.8, hadoop-3.2.1, hbase-2.2.5
        Running processes: NameNode, DataNode, ResourceManager, NodeManager, JournalNode, QuorumPeerMain, DFSZKFailoverController, HMaster, HRegionServer

    192.168.26.110 (hd110)
        Specs: 2C 4G
        OS: CentOS 7.4.1708
        Installed software: jdk-8u261-linux-x64, zookeeper-3.5.8, hadoop-3.2.1, hbase-2.2.5
        Running processes: NameNode, DataNode, NodeManager, JournalNode, QuorumPeerMain, DFSZKFailoverController, HMaster, HRegionServer

    192.168.26.120 (hd120)
        Specs: 2C 4G
        OS: CentOS 7.4.1708
        Installed software: jdk-8u261-linux-x64, zookeeper-3.5.8, hadoop-3.2.1, hbase-2.2.5
        Running processes: DataNode, ResourceManager, NodeManager, JournalNode, QuorumPeerMain, HRegionServer

  • Deployment plan

    Component        hd100 (192.168.26.100)   hd110 (192.168.26.110)   hd120 (192.168.26.120)   Notes
    JDK              deployed                 deployed                 deployed                 Java runtime
    zookeeper        deployed                 deployed                 deployed                 cluster deployment
    JournalNode      configured               configured               configured               all nodes
    NameNode         hmaster1                 hmaster2                 -                        HA
    NodeManager      configured               configured               configured               HA
    ResourceManager  rm1                      -                        rm2                      HA
    DataNode         configured               configured               configured               all nodes
    HMaster          Master                   Backup Master            -                        HA
    HRegionServer    configured               configured               configured               all nodes

  • Set the hostnames (run the matching command on each node)
hostnamectl set-hostname hd100
hostnamectl set-hostname hd110
hostnamectl set-hostname hd120
  • Configure a static server IP (hd100 as the example)
[root@hd100 ~]# ifconfig


[root@hd100 ~]# cd /etc/sysconfig/network-scripts/
[root@hd100 network-scripts]# ll


vi /etc/sysconfig/network-scripts/ifcfg-ens32
TYPE=Ethernet
BOOTPROTO=static
NAME=ens32
DEVICE=ens32
ONBOOT=yes
IPADDR=192.168.26.100
NETMASK=255.255.255.0
GATEWAY=192.168.26.2
DNS1=192.168.26.2

The key settings are BOOTPROTO=static and ONBOOT=yes.

systemctl restart network
  • Disable the firewall
systemctl status firewalld
systemctl stop firewalld
systemctl disable firewalld  #do not start the firewall at boot
  • Add the host entries to /etc/hosts on all nodes
192.168.26.100 hd100 vms100.example.com  vms100
192.168.26.110 hd110 vms110.example.com  vms110
192.168.26.120 hd120 vms120.example.com  vms120
  • Set up passwordless SSH login
ssh-keygen -N ""
ssh-copy-id hd100
ssh-copy-id hd110
ssh-copy-id hd120
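
These commands are typically repeated on every node so that each host can reach all the others without a password (sshfence below relies on this). A quick check, assuming that has been done (BatchMode makes ssh fail instead of prompting):

for h in hd100 hd110 hd120; do
  ssh -o BatchMode=yes "$h" hostname
done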
  • Configure user limits (hd100 as the example)

Check and adjust the per-user limits on open files (parameter name: open files) and processes (parameter name: max user processes). Run ulimit -a to see the limits the system currently applies to the user.

[root@hd100 ~]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 15021
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 15021
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

Change them: open files is recommended to be at least 10240, and max user processes at least 10240 as well.

[root@hd100 ~]# vi /etc/systemd/system.conf
...
#DefaultLimitNOFILE=
DefaultLimitNOFILE=10240
#DefaultLimitAS=
#DefaultLimitNPROC=
DefaultLimitNPROC=40960
...

After a reboot these two settings take effect for the root user. The open files limit also applies to regular users, but the max user processes limit for regular users is controlled by /etc/security/limits.d/20-nproc.conf.

[root@hd100 ~]# vi /etc/security/limits.d/20-nproc.conf
# Default limit for number of user's processes to prevent
# accidental fork bombs.
# See rhbz #432903 for reasoning.

*          soft    nproc     20480
root       soft    nproc     unlimited

The limits.d change takes effect for new login sessions: log out and back in (or reboot) and check with ulimit -a. (sysctl -p only reloads kernel parameters from /etc/sysctl.conf and does not apply these ulimit settings.)
  • Synchronize time and configure automatic time synchronization

Set the time zone on all nodes (the time zone used in China):

timedatectl #check the current settings and confirm Time zone: Asia/Shanghai (CST, +0800); otherwise change it with the command below
timedatectl set-timezone Asia/Shanghai #set the time zone to Asia/Shanghai

Install NTP on all nodes (hd100 as the example):

[root@hd100 ~]# rpm -q ntp
package ntp is not installed
[root@hd100 ~]# yum -y install ntp
...
[root@hd100 ~]# systemctl list-unit-files | grep chronyd
chronyd.service                               enabled
[root@hd100 ~]# systemctl disable chronyd.service   #disable chronyd from starting at boot
Removed symlink /etc/systemd/system/multi-user.target.wants/chronyd.service.
[root@hd100 ~]# systemctl list-unit-files | grep chronyd
chronyd.service                               disabled
[root@hd100 ~]# service ntpd start
Redirecting to /bin/systemctl start ntpd.service
[root@hd100 ~]# service ntpd status
...
[root@hd100 ~]# systemctl enable ntpd.service
...
[root@hd100 ~]# systemctl list-unit-files | grep ntpd   #check the ntpd autostart entries
  ntpd.service        enabled    # starts at boot (ntpd slews the clock gradually)
  ntpdate.service     disabled   # not started at boot (ntpdate steps the clock instantly)

On hd100: configure the internal NTP server

[root@hd100 ~]# vi /etc/ntp.conf
...
# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 203.107.6.88
restrict 127.0.0.1
restrict ::1

# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
restrict 192.168.26.0 mask 255.255.255.0 nomodify

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 203.107.6.88 # aliyun clock

#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst

server 127.127.1.0 # local clock
fudge 127.127.1.0 stratum 10
...

Notes:

1. Comment out the original four server lines:
  #server 0.centos.pool.ntp.org iburst
  #server 1.centos.pool.ntp.org iburst
  #server 2.centos.pool.ntp.org iburst
  #server 3.centos.pool.ntp.org iburst
2. Above them, add the Aliyun server:
  server 203.107.6.88 # aliyun clock
3. Allow access from the Aliyun server:
  restrict 203.107.6.88
4. Allow the internal network, without letting it modify this server's time parameters:
  (single IP)  restrict 192.168.26.110 nomodify
  (subnet)     restrict 192.168.26.0 mask 255.255.255.0 nomodify
5. Fall back to the local clock when the external time source is unreachable:
  server 127.127.1.0 # local clock
  fudge 127.127.1.0 stratum 10

*Note: in the configuration file, a server listed earlier takes precedence over one listed later,
    so server 203.107.6.88 is placed before server 127.127.1.0, meaning this host prefers to synchronize with 203.107.6.88.

Start the ntpd service and wait for synchronization

[root@hd100 ~]# service ntpd stop  #stop ntpd after editing the configuration file
[root@hd100 ~]# ntpdate 203.107.6.88 #ntpdate cannot be run while ntpd is running, so sync once before starting it
[root@hd100 ~]# service ntpd start #start ntpd and check whether synchronization has begun; it can take up to 15 minutes after ntpd starts
[root@hd100 ~]# ntpstat
synchronised to NTP server (203.107.6.88) at stratum 3
   time correct to within 52 ms
   polling server every 128 s
[root@hd100 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*203.107.6.88    10.137.38.86     2 u   47  128  377   41.523   17.724  12.166
 LOCAL(0)        .LOCL.          10 l  40m   64    0    0.000    0.000   0.000

Field descriptions:

remote: the IP or hostname of the upstream NTP server
    Note the leftmost symbol:
       1. '*' marks the upstream NTP server currently being used
       2. '+' marks a server that is also reachable and is a candidate for the next update
  refid: the address of the NTP server that the remote server itself references
  st: the stratum level
  when: how many seconds ago the last synchronization took place
  poll: how many seconds until the next update
  reach: a register of the most recent poll attempts against the upstream server (377 means the last eight polls all succeeded)
  delay: network round-trip delay, in milliseconds
  offset: the measured time offset, in milliseconds
  jitter: the variation between successive offset measurements, in milliseconds

On hd110 and hd120: configure the internal NTP clients

Edit /etc/ntp.conf; the main changes are:

1. Comment out the original four server lines:
  #server 0.centos.pool.ntp.org iburst
  #server 1.centos.pool.ntp.org iburst
  #server 2.centos.pool.ntp.org iburst
  #server 3.centos.pool.ntp.org iburst
2. Above them, add the master node's IP:
  server 192.168.26.100 # master clock
3. Allow access from the master server:
  restrict 192.168.26.100
4. Fall back to the local clock when the external time source is unreachable:
  server 127.127.1.0 # local clock
  fudge 127.127.1.0 stratum 10  # must not exceed stratum 15

Start ntpd and wait for synchronization (within 15 minutes)

1. Stop ntpd after editing the configuration file
  # service ntpd stop
2. Synchronize this host once with the master time server 192.168.26.100 (master clock)
  (so the gap is not too large for ntp to adjust gradually)
  # ntpdate 192.168.26.100
3. Start ntpd and check whether synchronization has begun; it usually starts within 15 minutes after ntpd starts
  # service ntpd start
  # ntpstat
  synchronised to NTP server (192.168.26.100) at stratum 4 -- synchronization has started, at stratum 4
                            -- one level below the master's ntpd (stratum 3); stratum 1 is the highest
   time correct to within 508 ms    -- time is correct to within 508 ms
   polling server every 128 s    -- the server is polled every 128 s
4. Check this NTP server against its upstream server (hd100 is 192.168.26.100)
  # [root@hd110 ~]# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
 hd100           LOCAL(0)        11 u   40   64   17    0.843   40.812   0.659
*LOCAL(0)        .LOCL.          10 l   48   64   17    0.000    0.000   0.000
  # [root@hd110 ~]# ntpq -p  #after a while it synchronizes with the NTP server hd100
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
*hd100           203.107.6.88     3 u   98  128  377    0.649   32.619   7.959
 LOCAL(0)        .LOCL.          10 l  299   64  360    0.000    0.000   0.000
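
To confirm that every node ends up synchronized, a small loop like the following can be run from hd100 (a convenience sketch, assuming passwordless SSH is already set up):

for h in hd100 hd110 hd120; do
  echo "== $h =="
  ssh "$h" "ntpstat; ntpq -p"
done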

Software download locations

https://mirrors.tuna.tsinghua.edu.cn/apache/

  • hadoop-3.2.1.tar.gz: https://mirrors.tuna.tsinghua.edu.cn/apache/hadoop/core/stable2/hadoop-3.2.1.tar.gz
  • hbase-2.2.5-bin.tar.gz: https://mirrors.tuna.tsinghua.edu.cn/apache/hbase/stable/hbase-2.2.5-bin.tar.gz
  • apache-zookeeper-3.5.8-bin.tar.gz: https://mirrors.tuna.tsinghua.edu.cn/apache/zookeeper/stable/apache-zookeeper-3.5.8-bin.tar.gz
  • jdk-8u261-linux-x64.tar.gz: https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html#license-lightbox

Download the packages and store them in /opt/src:

-rw-r--r-- 1 root root   9394700  apache-zookeeper-3.5.8-bin.tar.gz
-rw-r--r-- 1 root root 359196911  hadoop-3.2.1.tar.gz
-rw-r--r-- 1 root root 220221311  hbase-2.2.5-bin.tar.gz
-rw-r--r-- 1 root root 143111803  jdk-8u261-linux-x64.tar.gz

Official links:

https://hadoop.apache.org/releases.html

http://hbase.apache.org/

http://hbase.apache.org/book.html#quickstart

https://cwiki.apache.org/confluence/display/HADOOP/Hadoop+Java+Versions

Install the JDK

To avoid conflicts, first uninstall the OpenJDK that ships with the system.

  • Check for installed JDK packages:
rpm -qa|grep jdk
  • Uninstall the bundled OpenJDK:
rpm -e --nodeps `rpm -qa | grep java`
  • Extract the JDK
mkdir /home/hadoop
cd /opt/src
tar -zxvf jdk-8u261-linux-x64.tar.gz -C /home/hadoop/
cd /home/hadoop/
mv jdk1.8.0_261 jdk8
  • Configure: append the following to the end of /etc/profile, then run source /etc/profile
HADOOP_HOME=/home/hadoop/hadoop
JAVA_HOME=/home/hadoop/jdk8
HBASE_HOME=/home/hadoop/hbase 
PATH=.:$HBASE_HOME/bin:$HIVE_HOME/bin:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export HADOOP_HOME
export JAVA_HOME
export PATH
export CLASSPATH
export HBASE_HOME
source /etc/profile
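
After sourcing /etc/profile, the Java environment can be verified (Hadoop and HBase are not unpacked yet at this point, so only the Java checks are expected to succeed):

source /etc/profile
echo $JAVA_HOME
java -version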

Configure ZooKeeper

cd /opt/src
tar -zxvf apache-zookeeper-3.5.8-bin.tar.gz -C /home
cd /home
mv apache-zookeeper-3.5.8-bin zookeeper
[root@hd100 ~]# cd /home/zookeeper/conf
[root@hd100 conf]# cp zoo_sample.cfg zoo.cfg
[root@hd100 conf]# vi zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/hadoop/zookeeper/data
dataLogDir=/home/hadoop/zookeeper/log
clientPort=2181
# number of snapshots to retain
autopurge.snapRetainCount=3
# purge interval for logs/snapshots, in hours
autopurge.purgeInterval=1
# cluster server addresses
server.1=hd100:2888:3888
server.2=hd110:2888:3888
server.3=hd120:2888:3888

Notes:

  • tickTime is the basic time unit for heartbeats and timeouts between nodes, in milliseconds.
  • initLimit is the timeout a follower is allowed when it first connects to the leader, expressed as initLimit * tickTime.
  • syncLimit is the timeout for communication between the leader and a follower, expressed as syncLimit * tickTime; here 5*2000 = 10 seconds.
  • In server.1=hd100:2888:3888, server.1 is the node id and hd100 is the hostname (an IP address also works); 2888 is the port followers use to connect to the leader, and 3888 is the port used for leader election.
  • dataDir is the ZooKeeper data directory.
  • autopurge.purgeInterval=1 enables automatic purging of old snapshots and transaction logs (once per hour).

Create the dataDir directory and, inside it, a myid file containing the number 1 on hd100; on the other two nodes the myid is 2 (hd110) and 3 (hd120).

The myid value must match the x in the corresponding server.x line.

mkdir -p /home/hadoop/zookeeper/{data,log}
vi /home/hadoop/zookeeper/data/myid
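
Since the myid differs per node, one way to create all three from hd100 is the following sketch (assuming passwordless SSH and that ZooKeeper has been unpacked the same way on every node):

ssh hd100 'mkdir -p /home/hadoop/zookeeper/{data,log}; echo 1 > /home/hadoop/zookeeper/data/myid'
ssh hd110 'mkdir -p /home/hadoop/zookeeper/{data,log}; echo 2 > /home/hadoop/zookeeper/data/myid'
ssh hd120 'mkdir -p /home/hadoop/zookeeper/{data,log}; echo 3 > /home/hadoop/zookeeper/data/myid'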

Edit the environment file /etc/profile:

  • Append: export ZOOKEEPER_HOME=/home/zookeeper
  • Update PATH: in the PATH entry, add $ZOOKEEPER_HOME/bin:$ZOOKEEPER_HOME/conf in front of $PATH

Start the ZooKeeper service on all three machines; this command must be run on each of them:

/home/zookeeper/bin/zkServer.sh start
/home/zookeeper/bin/zkServer.sh status  #check the status
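
For convenience, the three nodes can also be started and checked from hd100 in one loop (a sketch; it assumes passwordless SSH and that JAVA_HOME is visible to non-interactive shells, e.g. also exported in ~/.bashrc):

for h in hd100 hd110 hd120; do
  ssh "$h" /home/zookeeper/bin/zkServer.sh start
done
for h in hd100 hd110 hd120; do
  echo "== $h =="
  ssh "$h" /home/zookeeper/bin/zkServer.sh status
done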

Configure Hadoop

Installation directory: /home/hadoop/hadoop

cd /opt/src
tar -zxvf hadoop-3.2.1.tar.gz -C /home/hadoop
cd /home/hadoop/
mv hadoop-3.2.1 hadoop
Configure hadoop-env.sh
vi /home/hadoop/hadoop/etc/hadoop/hadoop-env.sh

1. Set the Java installation path

...
export JAVA_HOME=/home/hadoop/jdk8
...

2. Set the NameNode heap size: -Xmx2g -Xms2g

...
export HADOOP_NAMENODE_OPTS="-Xmx2g -Xms2g -Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
...

3. Set the DataNode heap size

...
export HADOOP_DATANODE_OPTS="-Xmx2g -Xms2g -Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"
...

4. Set the path for Hadoop PID files; HADOOP_PID_DIR defaults to /tmp, which may be cleared when the OS restarts

...
export HADOOP_PID_DIR=/home/hadoop/hadoop
...
Configure core-site.xml
vi /home/hadoop/hadoop/etc/hadoop/core-site.xml
<configuration>
<!-- Set the default file system to the HDFS nameservice mycluster -->
<property>
	<name>fs.defaultFS</name>
	<value>hdfs://mycluster</value>
</property>
<!-- The size of buffer for use in sequence files. The size of this buffer should probably be a multiple of hardware page size (4096 on Intel x86), and it determines how much data is buffered during read and write operations. The default is 4096. -->
<property>
	<name>io.file.buffer.size</name>
	<value>40960</value>
</property>
<!-- Hadoop temporary directory -->
<property>
	<name>hadoop.tmp.dir</name>
	<value>/home/hadoop/hadoop/tmp/${user.name}</value>
</property>
<!-- ZooKeeper quorum addresses -->
<property>
	<name>ha.zookeeper.quorum</name>
	<value>hd100:2181,hd110:2181,hd120:2181</value>
</property>
<!-- Mitigation for the active NameNode logging "IPC's epoch [X] is less than the last promised epoch [X+1]" and briefly ending up with two active NameNodes -->
<property>
	<name>ha.health-monitor.rpc-timeout.ms</name> 
	<value>180000</value> 
</property>

</configuration>
Configure hdfs-site.xml
vi /home/hadoop/hadoop/etc/hadoop/hdfs-site.xml
<configuration>
    <!-- Set the HDFS nameservice to mycluster; it must match core-site.xml -->
	<property>
            <name>dfs.nameservices</name>
            <value>mycluster</value>
    </property>
    <!-- mycluster has two NameNodes with the logical names hmaster1 and hmaster2 (nn1/nn2 would also work); the rest of the configuration must reference these names consistently -->
    <property>
            <name>dfs.ha.namenodes.mycluster</name>
            <value>hmaster1,hmaster2</value>
    </property>
    <!-- RPC address of hmaster1 -->
    <property>
            <name>dfs.namenode.rpc-address.mycluster.hmaster1</name>
            <value>hd100:9000</value>
    </property>
    <!-- HTTP address of hmaster1 -->
    <property>
            <name>dfs.namenode.http-address.mycluster.hmaster1</name>
            <value>hd100:50070</value>
    </property>
    <!-- Service RPC address of hmaster1 -->
    <property>
            <name>dfs.namenode.servicerpc-address.mycluster.hmaster1</name>
            <value>hd100:53310</value>
    </property>
    <!-- RPC address of hmaster2 -->
    <property>
            <name>dfs.namenode.rpc-address.mycluster.hmaster2</name>
            <value>hd110:9000</value>
    </property>
    <!-- HTTP address of hmaster2 -->
    <property>
            <name>dfs.namenode.http-address.mycluster.hmaster2</name>
            <value>hd110:50070</value>
    </property>
    <!-- Service RPC address of hmaster2 -->
    <property>
            <name>dfs.namenode.servicerpc-address.mycluster.hmaster2</name>
            <value>hd110:53310</value>
    </property>
    <!-- Local directory where the NameNode stores its metadata -->
    <property>
            <name>dfs.namenode.name.dir</name>
            <value>/home/hadoop/hadoop/data01/mycluster</value>
            <final>true</final>
    </property>
    <!-- Shared edits location of the NameNode metadata on the JournalNodes; these must be the nodes where the JournalNodes are started (/home/hadoop/hadoop/sbin/hadoop-daemons.sh start journalnode) -->
    <property>
            <name>dfs.namenode.shared.edits.dir</name>
            <value>qjournal://hd100:8485;hd110:8485;hd120:8485/mycluster</value>
    </property>
    <!-- Local directory where the JournalNode stores its data -->
    <property>
            <name>dfs.journalnode.edits.dir</name>
            <value>/home/hadoop/hadoop/data01/tmp/journal</value>
    </property>
    <!-- Enable automatic failover of the NameNode -->
    <property>
            <name>dfs.ha.automatic-failover.enabled</name>
            <value>true</value>
    </property>
    <!-- Failover proxy provider implementation -->
    <property>
            <name>dfs.client.failover.proxy.provider.mycluster</name>
            <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <!-- Fencing methods; multiple methods are separated by newlines, one per line -->
    <property>
            <name>dfs.ha.fencing.methods</name>
            <value>
                    sshfence
                    shell(/bin/true)
            </value>
    </property>
    <!-- sshfence requires passwordless SSH; replace the path with the private key of the user you log in as, e.g. /home/hadoop/.ssh/id_dsa -->
    <property>
            <name>dfs.ha.fencing.ssh.private-key-files</name>
            <value>/root/.ssh/id_rsa</value>
    </property>
    <!-- sshfence connect timeout -->
    <property>
            <name>dfs.ha.fencing.ssh.connect-timeout</name>
            <value>30000</value>
    </property>
    <!-- Directories where the DataNode stores its data; mounting several disks per machine is recommended, both to increase capacity and to reduce the impact of a single-disk failure while spreading the I/O load -->
    <property>
            <name>dfs.datanode.data.dir</name>
           	<value>
           	/home/hadoop/hadoop/data01/dn,
           	/home/hadoop/hadoop/data02/dn,
           	/home/hadoop/hadoop/data03/dn,
           	/home/hadoop/hadoop/data04/dn
           	</value>
            <final>true</final>
    </property>
  <property>
            <name>dfs.namenode.checkpoint.dir.mycluster</name>
           	<value>/home/hadoop/hadoop/data01/dfs/namesecondary</value>
            <final>true</final>
    </property>
    <!-- Reserved space in bytes per volume on each DataNode, kept free for non-HDFS use; default is 0 -->
    <property>
            <name>dfs.datanode.du.reserved</name>
           	<value>102400</value>
            <final>true</final>
    </property>
    <!-- Maximum bandwidth each DataNode may use for HDFS balancing, in bytes per second -->
    <property>
            <name>dfs.datanode.balance.bandwidthPerSec</name>
           	<value>10485760000</value>
    </property>
</configuration>

Note: replace the private key path with the one belonging to the user you log in as (e.g. /home/hadoop/.ssh/id_dsa for user hadoop; root's /root/.ssh/id_rsa is used here).

Configure mapred-site.xml
vi /home/hadoop/hadoop/etc/hadoop/mapred-site.xml
<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- Expert: Set this to true to let the tasktracker send an out-of-band heartbeat on task-completion for better latency. -->
    <property>
        <name>mapreduce.tasktracker.outofband.heartbeat</name>
        <value>true</value>
    </property>
</configuration>
Configure yarn-site.xml
vi /home/hadoop/hadoop/etc/hadoop/yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
        <!-- Enable ResourceManager HA -->
        <property>
                <name>yarn.resourcemanager.ha.enabled</name>
                <value>true</value>
        </property>
        <!-- ResourceManager cluster id -->
        <property>
                <name>yarn.resourcemanager.cluster-id</name>
                <value>RM_HA_ID</value>
        </property>
        <!-- Logical ids of the ResourceManagers -->
        <property>
                <name>yarn.resourcemanager.ha.rm-ids</name>
                <value>rm1,rm2</value>
        </property>
        <!-- Hostnames of the two ResourceManagers. Since both consume significant resources, the NameNode and ResourceManager can be placed on different servers -->
        <property>
                <name>yarn.resourcemanager.hostname.rm1</name>
                <value>hd100</value>
        </property>
        <property>
                <name>yarn.resourcemanager.hostname.rm2</name>
                <value>hd120</value>
        </property>
        <property>
                <name>yarn.resourcemanager.recovery.enabled</name>
                <value>true</value>
        </property>
        <property>
                <name>yarn.resourcemanager.store.class</name>
                <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
        </property>
        <!-- ZooKeeper cluster addresses -->
        <property>
                <name>yarn.resourcemanager.zk-address</name>
                <value>hd100:2181,hd110:2181,hd120:2181</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>

        <property>
            <name>yarn.application.classpath</name>
            <value>
            /home/hadoop/hadoop/etc/hadoop,
            /home/hadoop/hadoop/share/hadoop/common/lib/*,
            /home/hadoop/hadoop/share/hadoop/common/*,
            /home/hadoop/hadoop/share/hadoop/hdfs,
            /home/hadoop/hadoop/share/hadoop/hdfs/lib/*,
            /home/hadoop/hadoop/share/hadoop/hdfs/*,
            /home/hadoop/hadoop/share/hadoop/mapreduce/lib/*,
            /home/hadoop/hadoop/share/hadoop/mapreduce/*,
            /home/hadoop/hadoop/share/hadoop/yarn,
            /home/hadoop/hadoop/share/hadoop/yarn/lib/*,
            /home/hadoop/hadoop/share/hadoop/yarn/*
            </value>
        </property>

</configuration>
Configure workers
vi /home/hadoop/hadoop/etc/hadoop/workers
hd100
hd110
hd120

The configuration above can be completed on hd100 and then copied to the other nodes:

[root@hd100 hadoop]# scp -r  /home/hadoop/hadoop/etc/hadoop hd110:/home/hadoop/hadoop/etc
[root@hd100 hadoop]# scp -r  /home/hadoop/hadoop/etc/hadoop hd120:/home/hadoop/hadoop/etc
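
If the JDK, ZooKeeper, and Hadoop directories were only unpacked on hd100, the whole directories (and /etc/profile) can be pushed to the other nodes the same way, for example (remember that each node still needs its own ZooKeeper myid as described earlier):

scp -r /home/hadoop/jdk8   hd110:/home/hadoop/
scp -r /home/hadoop/jdk8   hd120:/home/hadoop/
scp -r /home/zookeeper     hd110:/home/
scp -r /home/zookeeper     hd120:/home/
scp -r /home/hadoop/hadoop hd110:/home/hadoop/
scp -r /home/hadoop/hadoop hd120:/home/hadoop/
scp /etc/profile hd110:/etc/
scp /etc/profile hd120:/etc/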
Initialize and start the cluster

Before starting, configure the operating users:

Add the following parameters to start-dfs.sh and stop-dfs.sh:

cd /home/hadoop/hadoop/sbin
#!/usr/bin/env bash
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
HDFS_JOURNALNODE_USER=root
HDFS_ZKFC_USER=root

Add the following parameters to start-yarn.sh and stop-yarn.sh:

#!/usr/bin/env bash
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root

Note: follow the steps below strictly, in order.

1. Start the ZooKeeper cluster

/home/zookeeper/bin/zkServer.sh start
/home/zookeeper/bin/zkServer.sh status #after all 3 nodes are started, check the status

2. Start the JournalNodes. The command can be run on either a NameNode or a DataNode host; it starts the JournalNode on all three worker nodes automatically:

hdfs --workers --daemon start journalnode

or

/home/hadoop/hadoop/sbin/hadoop-daemons.sh start journalnode
[root@hd100 bin]# jps
1811 Jps
1526 QuorumPeerMain
1770 JournalNode

3. Format HDFS

Formatting is only needed before the first start; later starts do not need it. Formatting creates a directory according to the hadoop.tmp.dir setting in core-site.xml. If the cluster has been formatted before, delete that directory on all nodes first. Here it is configured as /home/hadoop/hadoop/tmp/${user.name} and the cluster had been formatted before, so the old directories are removed on all three nodes:

rm -rf /home/hadoop/hadoop/tmp
rm -rf /home/hadoop/hadoop/data*
rm -rf /home/hadoop/hadoop/logs

Run the following on hmaster1 (hd100):

hdfs namenode -format

Formatting creates the directory and metadata defined by dfs.namenode.name.dir in hdfs-site.xml; copy that metadata to hmaster2 (hd110):

...
INFO common.Storage: Storage directory /home/hadoop/hadoop/data01/mycluster has been successfully formatted.
...
scp -r /home/hadoop/hadoop/data01 hd110:/home/hadoop/hadoop/
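
Alternatively, instead of copying the directory with scp, the standby NameNode can be initialized with the standard bootstrap command (run on hd110, with the JournalNodes running and, typically, the freshly formatted NameNode on hd100 already started):

[root@hd110 ~]# hdfs namenode -bootstrapStandby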

4. Format the HA state in ZooKeeper (run on hd100 only)

Again, this is only needed before the first start.

hdfs zkfc -formatZK
...
INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
...

5. Start HDFS (run on hd100)

/home/hadoop/hadoop/sbin/start-dfs.sh

6. Start YARN (start-yarn.sh is run on a host configured as a ResourceManager in yarn-site.xml; running it on hd100 is sufficient)

/home/hadoop/hadoop/sbin/start-yarn.sh

If the NameNode and ResourceManager are on the same machine, HDFS and YARN can also be started together with a single start-all.sh; if they are on different machines, starting them in two steps is recommended.

7. Check the ResourceManager states

[root@hd100 ~]# yarn rmadmin -getServiceState rm1
active
[root@hd100 ~]# yarn rmadmin -getServiceState rm2
standby
[root@hd100 ~]# yarn rmadmin -transitionToStandby rm1 --forcemanual  #force a manual transition
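
The NameNode HA states can be checked the same way, using the NameNode ids defined in hdfs-site.xml:

[root@hd100 ~]# hdfs haadmin -getServiceState hmaster1
[root@hd100 ~]# hdfs haadmin -getServiceState hmaster2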

8. Check that everything is running

[root@hd100 ~]# jps | grep -v Jps
1968 DataNode
2848 ResourceManager
1826 NameNode
1144 QuorumPeerMain
1353 JournalNode
2986 NodeManager
2459 DFSZKFailoverController
[root@hd110 ~]# jps | grep -v Jps
1760 DFSZKFailoverController
1137 QuorumPeerMain
1506 DataNode
1891 NodeManager
1290 JournalNode
1439 NameNode
[root@hd120 ~]# jps | grep -v Jps
1617 ResourceManager
1682 NodeManager
1110 QuorumPeerMain
1256 JournalNode
1407 DataNode
Test the cluster

After startup, open the following in a browser:

  • http://192.168.26.100:50070/ -- this NameNode is in standby state


  • http://192.168.26.110:50070/ -- this NameNode is in active state


  • http://192.168.26.100:8088/


  • Test the cluster's high availability

Use kill to forcibly terminate the currently active NameNode (hd110, as seen above):

[root@hd110 ~]# jps | grep -v Jps
1760 DFSZKFailoverController
1137 QuorumPeerMain
1506 DataNode
1891 NodeManager
1290 JournalNode
1439 NameNode
[root@hd110 ~]# kill -9 1439

Refresh the 50070 page of the NameNode that was just in standby state; that node has now become active.


You can also test by running one of the bundled example jars with hadoop jar and, while it is running, killing the active NameNode with kill -9 <namenode pid>; if the job still completes with correct results, the Hadoop HA cluster has been set up successfully.

Manually start the NameNode on hd110 again:

[root@hd110 ~]# jps | grep -v Jps
1760 DFSZKFailoverController
1137 QuorumPeerMain
1506 DataNode
1891 NodeManager
1290 JournalNode
[root@hd110 ~]# hadoop-daemon.sh start namenode
[root@hd110 ~]# jps | grep -v Jps
1760 DFSZKFailoverController
1137 QuorumPeerMain
1506 DataNode
1891 NodeManager
1290 JournalNode
2701 NameNode

Observe that hd110 is now in standby state.


Active/standby switchover
[root@hd100 ~]# hdfs haadmin -failover hmaster1 hmaster2
Failover to NameNode at hd110/192.168.26.110:53310 successful

Observe: hmaster2 (hd110) is now active.


Observe: hmaster1 (hd100) is now standby.


Run a MapReduce job
  • Create a test data file
[root@hd120 /]# mkdir test
[root@hd120 /]# vi /test/test.dat
java hadoop c++ python docker kubernetes redhat centos springboot springcloud java hadoop c++ python docker kubernetes redhat centos springboot springcloud spring cloud master hadoop worker node namenode master worker
  • Create the HDFS directory /input and put the data file into it
[root@hd120 /]# hdfs dfs -mkdir /input
[root@hd120 /]# hdfs dfs -put /test/test.dat /input
[root@hd120 /]# hdfs dfs -ls /input
Found 1 items
-rw-r--r--   3 root supergroup        218 2020-10-23 11:00 /input/test.dat
[root@hd120 /]# hdfs dfs -cat /input/test.dat
2020-10-23 11:01:08,444 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
java hadoop c++ python docker kubernetes redhat centos springboot springcloud java hadoop c++ python docker kubernetes redhat centos springboot springcloud spring cloud master hadoop worker node namenode master worker
  • Run the job
hadoop jar /home/hadoop/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.1.jar wordcount /input/test.dat /output

Output during the run (if errors occur, see the troubleshooting section later in this document):

...
2020-10-23 11:05:58,672 INFO mapreduce.Job:  map 100% reduce 100%
2020-10-23 11:05:59,702 INFO mapreduce.Job: Job job_1603414833398_0005 completed successfully
2020-10-23 11:05:59,868 INFO mapreduce.Job: Counters: 54
        File System Counters
                ...
        Job Counters
                ...
        Map-Reduce Framework
                ...
        Shuffle Errors
                ...
        File Input Format Counters
                ...
        File Output Format Counters
               ...

Check the results:

[root@hd120 /]# hdfs dfs -ls /output
Found 2 items
-rw-r--r--   3 root supergroup          0 2020-10-23 11:05 /output/_SUCCESS
-rw-r--r--   3 root supergroup        151 2020-10-23 11:05 /output/part-r-00000
[root@hd120 /]# hdfs dfs -cat /output/part-r-00000
2020-10-23 11:07:43,516 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
c++     2
centos  2
cloud   1
docker  2
hadoop  3
java    2
kubernetes      2
master  2
namenode        1
node    1
python  2
redhat  2
spring  1
springboot      2
springcloud     2
worker  2
Cluster start and stop order

1. Start the cluster

First start the ZooKeeper cluster on all three machines:

zkServer.sh start

On the master machine, start HDFS first, then YARN (HBase last):

start-dfs.sh 
start-yarn.sh
start-hbase.sh

start-dfs.sh and start-yarn.sh can also be combined into a single start-all.sh, but starting them step by step is preferred. If needed, start the history server as well:

mr-jobhistory-daemon.sh start historyserver

2. Stop the cluster

On the master machine, stop YARN first, then HDFS (if HBase was started, stop HBase first):

stop-hbase.sh
stop-yarn.sh
stop-dfs.sh

stop-all.sh

Finally stop the ZooKeeper cluster on all three machines:

zkServer.sh stop
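
For day-to-day operation, the whole sequence can be wrapped in two small scripts on hd100. The names start-cluster.sh and stop-cluster.sh below are hypothetical helpers, and the sketch assumes passwordless SSH and that JAVA_HOME is visible to non-interactive shells on every node:

#!/usr/bin/env bash
# start-cluster.sh: zookeeper -> hdfs -> yarn -> hbase
for h in hd100 hd110 hd120; do
  ssh "$h" /home/zookeeper/bin/zkServer.sh start
done
/home/hadoop/hadoop/sbin/start-dfs.sh
/home/hadoop/hadoop/sbin/start-yarn.sh
/home/hadoop/hbase/bin/start-hbase.sh

#!/usr/bin/env bash
# stop-cluster.sh: hbase -> yarn -> hdfs -> zookeeper
/home/hadoop/hbase/bin/stop-hbase.sh
/home/hadoop/hadoop/sbin/stop-yarn.sh
/home/hadoop/hadoop/sbin/stop-dfs.sh
for h in hd100 hd110 hd120; do
  ssh "$h" /home/zookeeper/bin/zkServer.sh stop
done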
Error when running the bundled pi example, and how to fix it
cd /home/hadoop/hadoop/share/hadoop/mapreduce
[root@hd120 mapreduce]# hadoop jar hadoop-mapreduce-examples-3.2.1.jar pi 5 5

The job fails partway through.

Checking the job logs in the YARN web UI (http://192.168.26.100:8088) reveals the following error:

...
 ERROR [Listener at 0.0.0.0/44007] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Error starting MRAppMaster
org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.NullPointerException
	at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:178)
	at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStart(RMCommunicator.java:122)
	at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStart(RMContainerAllocator.java:280)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.serviceStart(MRAppMaster.java:979)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
	at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStart(MRAppMaster.java:1293)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$6.run(MRAppMaster.java:1761)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.initAndStartAppMaster(MRAppMaster.java:1757)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.main(MRAppMaster.java:1691)
Caused by: java.lang.NullPointerException
	at org.apache.hadoop.mapreduce.v2.app.client.MRClientService.getHttpPort(MRClientService.java:177)
	at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.register(RMCommunicator.java:159)
	... 14 more
INFO [Listener at 0.0.0.0/44007] org.apache.hadoop.util.ExitUtil: Exiting with status 1: org.apache.hadoop.yarn.exceptions.YarnRuntimeException: java.lang.NullPointerException

How to find the log in the web UI: open the applications list, click the ID of the failed application, then click the corresponding Logs link, for example:

http://hd100:8042/node/containerlogs/container_e02_1603414833398_0001_01_000001/root

If the hostname hd100 does not resolve in the browser, replace it with the IP 192.168.26.100, then open the syslog file to see the error above.

Cause: the WebApp of MRClientService apparently fails to be created, leaving the WebApp object null, so the later call to WebApp.getHttpPort() throws a NullPointerException.

Fix: one option is to patch the source and recompile the class file (cumbersome); the simple fix is to add the following to yarn-site.xml:

vi /home/hadoop/hadoop/etc/hadoop/yarn-site.xml
<property>
                <name>yarn.resourcemanager.webapp.address.rm1</name>
                <value>hd100:8088</value>
        </property>

        <property>
                 <name>yarn.resourcemanager.webapp.address.rm2</name>
                 <value>hd120:8088</value>
        </property>

hd100 and hd120 are the nodes already configured as ResourceManagers. After adding these properties the job runs normally.
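
The change only has to reach the ResourceManager nodes, but keeping the configuration identical everywhere is simpler; one way is to distribute the file and restart YARN before re-running the job:

scp /home/hadoop/hadoop/etc/hadoop/yarn-site.xml hd110:/home/hadoop/hadoop/etc/hadoop/
scp /home/hadoop/hadoop/etc/hadoop/yarn-site.xml hd120:/home/hadoop/hadoop/etc/hadoop/
/home/hadoop/hadoop/sbin/stop-yarn.sh
/home/hadoop/hadoop/sbin/start-yarn.sh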

[root@hd120 mapreduce]# hadoop jar hadoop-mapreduce-examples-3.2.1.jar pi 5 5
...
INFO mapreduce.Job:  map 100% reduce 100%
INFO mapreduce.Job: Job job_1603414833398_0002 completed successfully
 INFO mapreduce.Job: Counters: 54
        File System Counters
                ...
        Job Counters
                ...
        Map-Reduce Framework
                ...
        Shuffle Errors
                ...
        File Input Format Counters
                ...
        File Output Format Counters
                ...
Job Finished in 129.214 seconds
...
Estimated value of Pi is 3.68000000000000000000

Configure HBase

Installation directory: /home/hadoop/hbase

[root@hd100 ~]# cd /opt/src
[root@hd100 src]# tar -zxvf  hbase-2.2.5-bin.tar.gz -C /home/hadoop/
[root@hd100 src]# cd /home/hadoop/
[root@hd100 hadoop]# mv hbase-2.2.5 hbase
Configure environment variables

See the /etc/profile settings earlier in this document.

Configure hbase-env.sh

On hmaster1 (hd100):

[root@hd100 hadoop]# vi /home/hadoop/hbase/conf/hbase-env.sh
export JAVA_HOME=/home/hadoop/jdk8
export HADOOP_HOME=/home/hadoop/hadoop
export HBASE_HOME=/home/hadoop/hbase
#do not manage a built-in ZooKeeper; use the external ZooKeeper cluster
export HBASE_MANAGES_ZK=false
#The directory where pid files are stored. /tmp by default.
export HBASE_PID_DIR=/home/hadoop/hbase/pids
Configure hbase-site.xml
[root@hd100 hadoop]# vi /home/hadoop/hbase/conf/hbase-site.xml
<!-- HDFS nameservice of the Hadoop cluster -->
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://mycluster/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hd100,hd110,hd120</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
  <!-- Run in fully distributed mode -->
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <!-- Must be false for a fully distributed setup -->
  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
  <!-- Path for temporary/cache files -->
  <property>
    <name>hbase.tmp.dir</name>
    <value>/home/hadoop/hadoop/data01/hbase/hbase_tmp</value>
  </property>
  <!-- Path for ZooKeeper property data -->
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/hadoop/hadoop/data01/hbase/zookeeper_data</value>
  </property>

Note: the hbase.rootdir value in $HBASE_HOME/conf/hbase-site.xml (including host and port) must match the fs.defaultFS value in Hadoop's core-site.xml (including host and port).
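
Because hbase.rootdir points at the logical nameservice mycluster rather than a single host, HBase also needs to be able to read Hadoop's HDFS client configuration; a common way to arrange this is to copy (or symlink) the two files into HBase's conf directory:

cp /home/hadoop/hadoop/etc/hadoop/core-site.xml /home/hadoop/hbase/conf/
cp /home/hadoop/hadoop/etc/hadoop/hdfs-site.xml /home/hadoop/hbase/conf/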

Configure regionservers

Edit the file /home/hadoop/hbase/conf/regionservers.

[root@hd100 hadoop]# vi /home/hadoop/hbase/conf/regionservers

Add the IPs or hostnames of the DataNodes; this file lists the RegionServer nodes:

hd100
hd110
hd120
Configure HMaster high availability

To keep the HBase cluster highly available, HBase supports multiple backup masters. When the active master dies, a backup master automatically takes over the whole HBase cluster. The configuration is very simple: create a new file named backup-masters under $HBASE_HOME/conf/:

[root@hd100 hadoop]# vi /home/hadoop/hbase/conf/backup-masters

In it, add the hostname of the node to be used as the backup master:

hd110

Before backup-masters is configured, only one host starts an HMaster process when HBase is started;

after it is configured and the whole cluster is restarted, every host listed in backup-masters runs an HMaster process.

Handling the log4j binding conflict

Starting HBase reports the following warning:

[root@hd100 hadoop]# /home/hadoop/hbase/bin/start-hbase.sh
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
running master, logging to /home/hadoop/hbase/logs/hbase-root-master-hd100.out
hd110: running regionserver, logging to /home/hadoop/hbase/logs/hbase-root-regionserver-hd110.out
hd120: running regionserver, logging to /home/hadoop/hbase/logs/hbase-root-regionserver-hd120.out
hd100: running regionserver, logging to /home/hadoop/hbase/logs/hbase-root-regionserver-hd100.out
hd110: running master, logging to /home/hadoop/hbase/logs/hbase-root-master-hd110.out

Two slf4j-log4j jars conflict with each other; simply remove (or rename) one of them:

mv /home/hadoop/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar /home/hadoop/hbase/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar.bak
Copy the hbase directory to the other nodes

Copy the hbase directory from hd100 to hd110 and hd120:

[root@hd100 hadoop]# scp -r /home/hadoop/hbase/ hd110:/home/hadoop/
[root@hd100 hadoop]# scp -r /home/hadoop/hbase/ hd120:/home/hadoop/

At this point the HBase configuration is complete.

Start/stop HBase

With ZooKeeper, Hadoop, and HBase installed, the basic data platform is in place; the next step is to start it and put it to work. The start order must be: zookeeper --> hadoop --> hbase.

  • Start HBase

Start ZooKeeper and Hadoop first, following the Hadoop HA startup order described earlier, then:

/home/hadoop/hbase/bin/start-hbase.sh
[root@hd100 hadoop]# /home/hadoop/hbase/bin/start-hbase.sh
running master, logging to /home/hadoop/hbase/logs/hbase-root-master-hd100.out
hd120: running regionserver, logging to /home/hadoop/hbase/logs/hbase-root-regionserver-hd120.out
hd100: running regionserver, logging to /home/hadoop/hbase/logs/hbase-root-regionserver-hd100.out
hd110: running regionserver, logging to /home/hadoop/hbase/logs/hbase-root-regionserver-hd110.out
hd110: running master, logging to /home/hadoop/hbase/logs/hbase-root-master-hd110.out

The HMaster process is now running on hd100 and hd110:

[root@hd100 hadoop]# jps | grep -v Jps
1968 DataNode
2848 ResourceManager
1826 NameNode
1144 QuorumPeerMain
1353 JournalNode
2986 NodeManager
2459 DFSZKFailoverController
8683 HMaster
8844 HRegionServer
[root@hd110 ~]# jps | grep -v Jps
1760 DFSZKFailoverController
5056 HMaster
1137 QuorumPeerMain
1506 DataNode
1891 NodeManager
4918 HRegionServer
1290 JournalNode
2701 NameNode
[root@hd120 /]# jps | grep -v Jps
1617 ResourceManager
1682 NodeManager
1110 QuorumPeerMain
1256 JournalNode
4333 HRegionServer
1407 DataNode
  • Stop HBase
/home/hadoop/hbase/bin/stop-hbase.sh
  • HBase web UI: http://192.168.26.100:16010


Common HBase commands

Common
[root@hd100 hadoop]# hbase shell
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/2.0/book.html#shell
Version 2.2.5, rf76a601273e834267b55c0cda12474590283fd4c, 2020年 05月 21日 星期四 18:34:40 CST
Took 0.0033 seconds
hbase(main):001:0> help
HBase Shell, version 2.2.5, rf76a601273e834267b55c0cda12474590283fd4c, 2020年 05月 21日 星期四 18:34:40 CST
Type 'help "COMMAND"', (e.g. 'help "get"' -- the quotes are necessary) for help on a specific command.
Commands are grouped. Type 'help "COMMAND_GROUP"', (e.g. 'help "general"') for help on a command group.

COMMAND GROUPS:
  Group name: general
  Commands: processlist, status, table_help, version, whoami

  Group name: ddl
  Commands: alter, alter_async, alter_status, clone_table_schema, create, describe, disable, disable_all, drop, drop_all, enable, enable_all, exists, get_table, is_disabled, is_enabled, list, list_regions, locate_region, show_filters

  Group name: namespace
  Commands: alter_namespace, create_namespace, describe_namespace, drop_namespace, list_namespace, list_namespace_tables

  Group name: dml
  Commands: append, count, delete, deleteall, get, get_counter, get_splits, incr, put, scan, truncate, truncate_preserve

  Group name: tools
  Commands: assign, balance_switch, balancer, balancer_enabled, catalogjanitor_enabled, catalogjanitor_run, catalogjanitor_switch, cleaner_chore_enabled, cleaner_chore_run, cleaner_chore_switch, clear_block_cache, clear_compaction_queues, clear_deadservers, close_region, compact, compact_rs, compaction_state, compaction_switch, decommission_regionservers, flush, hbck_chore_run, is_in_maintenance_mode, list_deadservers, list_decommissioned_regionservers, major_compact, merge_region, move, normalize, normalizer_enabled, normalizer_switch, recommission_regionserver, regioninfo, rit, split, splitormerge_enabled, splitormerge_switch, stop_master, stop_regionserver, trace, unassign, wal_roll, zk_dump

  Group name: replication
  Commands: add_peer, append_peer_exclude_namespaces, append_peer_exclude_tableCFs, append_peer_namespaces, append_peer_tableCFs, disable_peer, disable_table_replication, enable_peer, enable_table_replication, get_peer_config, list_peer_configs, list_peers, list_replicated_tables, remove_peer, remove_peer_exclude_namespaces, remove_peer_exclude_tableCFs, remove_peer_namespaces, remove_peer_tableCFs, set_peer_bandwidth, set_peer_exclude_namespaces, set_peer_exclude_tableCFs, set_peer_namespaces, set_peer_replicate_all, set_peer_serial, set_peer_tableCFs, show_peer_tableCFs, update_peer_config

  Group name: snapshots
  Commands: clone_snapshot, delete_all_snapshot, delete_snapshot, delete_table_snapshots, list_snapshots, list_table_snapshots, restore_snapshot, snapshot

  Group name: configuration
  Commands: update_all_config, update_config

  Group name: quotas
  Commands: disable_exceed_throttle_quota, disable_rpc_throttle, enable_exceed_throttle_quota, enable_rpc_throttle, list_quota_snapshots, list_quota_table_sizes, list_quotas, list_snapshot_sizes, set_quota

  Group name: security
  Commands: grant, list_security_capabilities, revoke, user_permission

  Group name: procedures
  Commands: list_locks, list_procedures

  Group name: visibility labels
  Commands: add_labels, clear_auths, get_auths, list_labels, set_auths, set_visibility

  Group name: rsgroup
  Commands: add_rsgroup, balance_rsgroup, get_rsgroup, get_server_rsgroup, get_table_rsgroup, list_rsgroups, move_namespaces_rsgroup, move_servers_namespaces_rsgroup, move_servers_rsgroup, move_servers_tables_rsgroup, move_tables_rsgroup, remove_rsgroup, remove_servers_rsgroup, rename_rsgroup

SHELL USAGE:
Quote all names in HBase Shell such as table and column names.  Commas delimit
command parameters.  Type <RETURN> after entering a command to run it.
Dictionaries of configuration used in the creation and alteration of tables are
Ruby Hashes. They look like this:

  {'key1' => 'value1', 'key2' => 'value2', ...}

and are opened and closed with curley-braces.  Key/values are delimited by the
'=>' character combination.  Usually keys are predefined constants such as
NAME, VERSIONS, COMPRESSION, etc.  Constants do not need to be quoted.  Type
'Object.constants' to see a (messy) list of all constants in the environment.

If you are using binary keys or values and need to enter them in the shell, use
double-quote'd hexadecimal representation. For example:

  hbase> get 't1', "key\x03\x3f\xcd"
  hbase> get 't1', "key\003\023\011"
  hbase> put 't1', "test\xef\xff", 'f1:', "\x01\x33\x40"

The HBase shell is the (J)Ruby IRB with the above HBase-specific commands added.
For more on the HBase Shell, see http://hbase.apache.org/book.html
hbase(main):002:0> exit
DDL
  • Create a table: create 'firstTable','cf1','cf2'. The create command requires the table name and at least one column family name.
hbase(main):002:0> create 'firstTable','cf1', 'cf2'
Created table firstTable
Took 3.4291 seconds
=> Hbase::Table - firstTable
  • List tables: list
hbase(main):003:0> list
TABLE
firstTable
1 row(s)
Took 0.0350 seconds
=> ["firstTable"]
  • Look up a specific table: list 'firstTable'
hbase(main):004:0> list 'firstTable'
TABLE
firstTable
1 row(s)
Took 0.0263 seconds
=> ["firstTable"]
  • Show a table's details: describe 'firstTable'
hbase(main):005:0> describe 'firstTable'
Table firstTable is ENABLED
firstTable
COLUMN FAMILIES DESCRIPTION
{NAME => 'cf1', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRIT
E => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WR
ITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'tru
e', BLOCKSIZE => '65536'}

{NAME => 'cf2', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRIT
E => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WR
ITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'tru
e', BLOCKSIZE => '65536'}

2 row(s)

QUOTAS
0 row(s)
Took 0.3599 seconds
  • Disable a table: disable 'firstTable'
  • Enable a table: enable 'firstTable'
  • Check whether a table is enabled/disabled: is_enabled / is_disabled
  • Drop a table: disable it first, then drop 'tablename'
  • List namespaces: list_namespace
DML
  • Insert a record: put 'firstTable','rowkey','cf:column','value'
put 'firstTable', 'row1', 'cf1', 'row1cf1value'
put 'firstTable', 'row1', 'cf2', 'row1c21value'
put 'firstTable', 'row2', 'cf1', 'row2cf1value'
put 'firstTable', 'row2', 'cf2', 'row2c21value'
  • Get a record: get 'firstTable','rowkey','cf:column'
get 'firstTable','row1','cf1'
get 'firstTable','row2','cf1'
hbase(main):015:0> get 'firstTable','row1','cf1'
COLUMN                                 CELL
 cf1:                                  timestamp=1603442998773, value=row1cf1value
1 row(s)
Took 0.0608 seconds
hbase(main):016:0> get 'firstTable','row2','cf1'
COLUMN                                 CELL
 cf1:                                  timestamp=1603443068461, value=row2cf1value
1 row(s)
Took 0.0243 seconds
  • Scan records: scan 'firstTable'
hbase(main):014:0> scan 'firstTable'
ROW                                    COLUMN+CELL
 row1                                  column=cf1:, timestamp=1603442998773, value=row1cf1value
 row1                                  column=cf2:, timestamp=1603443024500, value=row1c21value
 row2                                  column=cf1:, timestamp=1603443068461, value=row2cf1value
 row2                                  column=cf2:, timestamp=1603443082622, value=row2c21value
2 row(s)
Took 0.3387 seconds
  • Update data: put 'table','rowkey','column','new value'
hbase(main):023:0> put 'firstTable','row1','cf1','abcd-row1cf1value'
Took 0.0327 seconds
hbase(main):024:0> get 'firstTable','row1','cf1'
COLUMN                                 CELL
 cf1:                                  timestamp=1603443850892, value=abcd-row1cf1value
1 row(s)
Took 0.0241 seconds
hbase(main):025:0> scan 'firstTable'
ROW                                    COLUMN+CELL
 row1                                  column=cf1:, timestamp=1603443850892, value=abcd-row1cf1value
 row1                                  column=cf2:, timestamp=1603443024500, value=row1c21value
 row2                                  column=cf1:, timestamp=1603443068461, value=row2cf1value
 row2                                  column=cf2:, timestamp=1603443082622, value=row2c21value
2 row(s)
Took 0.1077 seconds
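
The same DML can also be scripted non-interactively by piping commands into hbase shell -n; the table name secondTable below is only an example:

echo "create 'secondTable','cf1'
put 'secondTable','r1','cf1:c1','v1'
scan 'secondTable'
disable 'secondTable'
drop 'secondTable'" | hbase shell -n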

– OK –

Related articles in this series

Building a hadoop-3.3.0 fully distributed cluster
Installing and using Apache Phoenix, the SQL engine and client for HBase