Stop Hadoop and configure HDFS to allow append, as follows:

[hadoop@oversea-stable hadoop]$ sbin/stop-all.sh 
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [oversea-stable bus-stable]
bus-stable: stopping namenode
oversea-stable: stopping namenode
permission-stable: stopping datanode
open-stable: stopping datanode
sp-stable: stopping datanode
Stopping journal nodes [open-stable permission-stable sp-stable]
permission-stable: stopping journalnode
sp-stable: stopping journalnode
open-stable: stopping journalnode
Stopping ZK Failover Controllers on NN hosts [oversea-stable bus-stable]
bus-stable: stopping zkfc
oversea-stable: stopping zkfc
stopping yarn daemons
stopping resourcemanager
open-stable: stopping nodemanager
permission-stable: stopping nodemanager
sp-stable: stopping nodemanager
permission-stable: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
open-stable: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
sp-stable: nodemanager did not stop gracefully after 5 seconds: killing with kill -9
no proxyserver to stop
[hadoop@oversea-stable hadoop]$ 
[hadoop@oversea-stable hadoop]$ tail -9 etc/hadoop/hdfs-site.xml
  <property>
     <name>dfs.support.append</name>
     <value>true</value>
  </property>
  <property>
     <name>dfs.datanode.max.xcievers</name>
     <value>4096</value>
  </property>
</configuration>
[hadoop@oversea-stable hadoop]$ 

Start Hadoop again, as follows:

[hadoop@oversea-stable hadoop]$ sbin/start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [oversea-stable bus-stable]
bus-stable: starting namenode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-namenode-bus-stable.out
oversea-stable: starting namenode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-namenode-oversea-stable.out
open-stable: starting datanode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-datanode-open-stable.out
permission-stable: starting datanode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-datanode-permission-stable.out
sp-stable: starting datanode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-datanode-sp1-stable.out
Starting journal nodes [open-stable permission-stable sp-stable]
permission-stable: starting journalnode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-journalnode-permission-stable.out
sp-stable: starting journalnode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-journalnode-sp1-stable.out
open-stable: starting journalnode, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-journalnode-open-stable.out
Starting ZK Failover Controllers on NN hosts [oversea-stable bus-stable]
bus-stable: starting zkfc, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-zkfc-bus-stable.out
oversea-stable: starting zkfc, logging to /opt/hadoop-2.9.1/logs/hadoop-hadoop-zkfc-oversea-stable.out
starting yarn daemons
starting resourcemanager, logging to /opt/hadoop-2.9.1/logs/yarn-hadoop-resourcemanager-oversea-stable.out
permission-stable: starting nodemanager, logging to /opt/hadoop-2.9.1/logs/yarn-hadoop-nodemanager-permission-stable.out
sp-stable: starting nodemanager, logging to /opt/hadoop-2.9.1/logs/yarn-hadoop-nodemanager-sp1-stable.out
open-stable: starting nodemanager, logging to /opt/hadoop-2.9.1/logs/yarn-hadoop-nodemanager-open-stable.out
[hadoop@oversea-stable hadoop]$ 
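With the cluster back up, append support can be sanity-checked before touching HBase. (Note: dfs.support.append defaults to true on Hadoop 2.x, and dfs.datanode.max.xcievers is the legacy spelling of dfs.datanode.max.transfer.threads; the old names above are still accepted.) A minimal sketch, with illustrative paths:

echo line1 > /tmp/append-test.txt
hdfs dfs -put /tmp/append-test.txt /user/hadoop/append-test.txt
echo line2 > /tmp/append-more.txt
hdfs dfs -appendToFile /tmp/append-more.txt /user/hadoop/append-test.txt
hdfs dfs -cat /user/hadoop/append-test.txt    # expect both line1 and line2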

Download and unpack HBase

[hadoop@oversea-stable ~]$ wget http://mirrors.hust.edu.cn/apache/hbase/1.4.5/hbase-1.4.5-bin.tar.gz
--2018-06-25 18:37:55--  http://mirrors.hust.edu.cn/apache/hbase/1.4.5/hbase-1.4.5-bin.tar.gz
Resolving mirrors.hust.edu.cn (mirrors.hust.edu.cn)... 202.114.18.160
Connecting to mirrors.hust.edu.cn (mirrors.hust.edu.cn)|202.114.18.160|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 112514838 (107M) [application/octet-stream]
Saving to: ‘hbase-1.4.5-bin.tar.gz’
100%[========================================>] 112,514,838  589KB/s   in 3m 18s 
2018-06-25 18:41:13 (556 KB/s) - ‘hbase-1.4.5-bin.tar.gz’ saved [112514838/112514838]

[hadoop@oversea-stable ~]$ tar xfz hbase-1.4.5-bin.tar.gz -C /opt/
[hadoop@oversea-stable ~]$ cd /opt/
[hadoop@oversea-stable opt]$ ln -s hbase-1.4.5 hbase
[hadoop@oversea-stable opt]$ cd hbase/conf/
[hadoop@oversea-stable conf]$ ls
hadoop-metrics2-hbase.properties  hbase-env.sh      hbase-site.xml    regionservers
hbase-env.cmd                     hbase-policy.xml  log4j.properties
[hadoop@oversea-stable conf]$ 

Create the HBase data directory in HDFS

[hadoop@oversea-stable ~]$ hadoop fs -ls /user
Found 2 items
drwxrwxrwx   - hadoop supergroup          0 2018-06-26 09:53 /user/hbase
drwxr-xr-x   - hadoop supergroup          0 2018-06-15 16:13 /user/hive
[hadoop@oversea-stable ~]$
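The listing shows /user/hbase already in place. If it does not exist yet, a minimal sketch of creating it (the wide-open mode simply matches the drwxrwxrwx shown above; tighten to taste):

hadoop fs -mkdir -p /user/hbase
hadoop fs -chmod 777 /user/hbase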

Configure HBase

[hadoop@oversea-stable conf]$ vim hbase-env.sh 
[hadoop@oversea-stable conf]$ grep -Pv "^(#|$)" hbase-env.sh
export JAVA_HOME=/usr/java/latest/
export HBASE_OPTS="$HBASE_OPTS -XX:+UseConcMarkSweepGC"
export HBASE_MANAGES_ZK=false  # do not use HBase's bundled ZooKeeper
[hadoop@oversea-stable conf]$ 

[hadoop@oversea-stable conf]$ mkdir /opt/hbase/tmp
[hadoop@oversea-stable conf]$ vim  hbase-site.xml
[hadoop@oversea-stable conf]$ tail -51 hbase-site.xml
<configuration>

       <!-- Location of HBase data in HDFS; specify the NameNode hostname and port -->
   <property>  
       <name>hbase.rootdir</name>  
       <value>hdfs://bus-stable:9000/user/hbase</value>
   </property>  

       <!-- Run as a fully distributed cluster -->
   <property>  
       <name>hbase.cluster.distributed</name>  
       <value>true</value>  
   </property>  

   <property>
       <name>hbase.tmp.dir</name>
       <value>/opt/hbase/tmp</value>
   </property>

   <property>
      <name>hbase.master</name>
      <value>60000</value><!-- With multiple HMasters, only the port needs to be specified, not a hostname -->
   </property>

       <!-- Default HMaster web UI port -->
   <property>  
       <name>hbase.master.info.port</name>  
       <value>16010</value>  
    </property>  

       <!-- Default HRegionServer web UI port -->
    <property>  
       <name>hbase.regionserver.info.port</name>  
       <value>16030</value>  
    </property>  

       <!-- Use the standalone ZooKeeper ensemble instead of the bundled one -->
   <property>  
       <name>hbase.zookeeper.quorum</name>  
       <value>open-stable,permission-stable,sp-stable</value>
   </property> 
   <property>
       <name>hbase.zookeeper.property.clientPort</name>
       <value>2181</value>
   </property>
   <property>
       <name>hbase.zookeeper.property.dataDir</name>
       <value>/opt/zookeeper/zkdata</value>
   </property>

</configuration>
[hadoop@oversea-stable conf]$
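Before starting HBase, it is worth confirming that the URI in hbase.rootdir is reachable and writable; this will fail if bus-stable happens to be the standby NameNode at the time (see troubleshooting item (2) below). A sketch, with .probe as an illustrative filename:

hdfs dfs -ls hdfs://bus-stable:9000/user/
hdfs dfs -touchz hdfs://bus-stable:9000/user/hbase/.probe && hdfs dfs -rm hdfs://bus-stable:9000/user/hbase/.probe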

[hadoop@oversea-stable conf]$ vim regionservers 
[hadoop@oversea-stable conf]$ cat regionservers 
open-stable
permission-stable
sp1-stable
[hadoop@oversea-stable conf]$ 

The backup-masters file does not exist by default; create it:
[hadoop@oversea-stable hbase]$ cat conf/backup-masters 
bus-stable
oversea-stable
[hadoop@oversea-stable hbase]$ 

Sync to the other nodes: push the fully configured hbase directory to the other four servers, then create the soft link for hbase on each of them:

for((i=67;i>=64;i--)); do rsync -avzoptgl /opt/hbase-1.4.5 192.168.20.$i:/opt/ ; done
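A sketch of creating the link remotely in the same loop style (assumes the hadoop user can SSH to each node, as the rsync above already requires):

for((i=67;i>=64;i--)); do ssh 192.168.20.$i "ln -sfn /opt/hbase-1.4.5 /opt/hbase" ; done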

Configure the profile on each server:

export HBASE_HOME=/opt/hbase
PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$PATH
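Reload the profile and confirm that the hbase command resolves (whether this lives in /etc/profile or ~/.bash_profile depends on your setup):

source /etc/profile
which hbase     # expect /opt/hbase/bin/hbase
hbase version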

Start HBase

[hadoop@bus-stable hbase]$ bin/start-hbase.sh 
running master, logging to /opt/hbase/logs/hbase-hadoop-master-bus-stable.out
sp-stable: running regionserver, logging to /opt/hbase/bin/../logs/hbase-hadoop-regionserver-sp1-stable.out
open-stable: running regionserver, logging to /opt/hbase/bin/../logs/hbase-hadoop-regionserver-open-stable.out
permission-stable: running regionserver, logging to /opt/hbase/bin/../logs/hbase-hadoop-regionserver-permission-stable.out
oversea-stable: running master, logging to /opt/hbase/bin/../logs/hbase-hadoop-master-oversea-stable.out
bus-stable: master running as process 19581. Stop it first.
[hadoop@bus-stable hbase]$ jps
9426 DFSZKFailoverController
9301 NameNode
19884 Jps
19581 HMaster
[hadoop@bus-stable hbase]$ 

Verification: (1) Check whether ERROR appears in the log files

[hadoop@bus-stable logs]$ pwd
/opt/hbase/logs
[hadoop@bus-stable logs]$ ls
hbase-hadoop-master-bus-stable.log  hbase-hadoop-master-bus-stable.out  SecurityAuth.audit
[hadoop@bus-stable logs]$ 
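A quick scan of the master log for problems, assuming the default log names shown above (no output means a clean start):

grep -iE "ERROR|FATAL" hbase-hadoop-master-bus-stable.log | tail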

(2) Check the Java processes on each server

[hadoop@oversea-stable logs]$ jps
10065 Jps
9940 HMaster
31559 DFSZKFailoverController
31675 ResourceManager
31197 NameNode
[hadoop@oversea-stable logs]$ 

[hadoop@bus-stable logs]$ jps
9426 DFSZKFailoverController
20018 Jps
9301 NameNode
19581 HMaster
[hadoop@bus-stable logs]$ 

[hadoop@open-stable hbase]$ jps
18417 JournalNode
14257 Jps
14084 HRegionServer
18567 NodeManager
18300 DataNode
18125 QuorumPeerMain
[hadoop@open-stable hbase]$ 

[hadoop@permission-stable ~]$ jps
12800 QuorumPeerMain
13284 NodeManager
13013 DataNode
23226 Jps
13132 JournalNode
23069 HRegionServer
[hadoop@permission-stable ~]$ 

(3) Check HBase's registration in ZooKeeper

[hadoop@permission-stable zookeeper]$ bin/zkCli.sh

[zk: localhost:2181(CONNECTED) 7] ls /
[zookeeper, hadoop-ha, hbase]
[zk: localhost:2181(CONNECTED) 8] ls /hbase
[replication, meta-region-server, rs, splitWAL, backup-masters, table-lock, flush-table-proc, master-maintenance, region-in-transition, online-snapshot, switch, recovering-regions, draining, namespace, hbaseid, table]
[zk: localhost:2181(CONNECTED) 9] 
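From the same zkCli session, the live regionservers and the active master can also be read directly, e.g.:

ls /hbase/rs          # one znode per live regionserver
get /hbase/master     # partly binary, but the active master's hostname is visible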

Check the HMaster status in a web browser

Check the RegionServer status in a web browser
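Without a browser, the same status pages can be probed from the shell; /master-status and /rs-status are the stock HBase UI paths, on the ports set in hbase-site.xml above:

curl -s -o /dev/null -w "%{http_code}\n" http://oversea-stable:16010/master-status   # expect 200
curl -s -o /dev/null -w "%{http_code}\n" http://open-stable:16030/rs-status          # expect 200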

Stop HBase

[hadoop@oversea-stable hbase]$ bin/stop-hbase.sh 
stopping hbase.................
[hadoop@oversea-stable hbase]$ jps
10486 Jps
31559 DFSZKFailoverController
31675 ResourceManager
31197 NameNode
[hadoop@oversea-stable hbase]$ 

Troubleshooting: (1) After the installation was complete, I entered the hbase shell and found that creating a table failed. It turned out the HMaster had errored out at startup; the log showed:

2018-06-25 18:08:05,885 ERROR [master/bus-stable:16000] master.HMaster: Failed to become active master
java.lang.IllegalStateException: The procedure WAL relies on the ability to hsync for proper operation during component failures, but the underlying filesystem does not support doing so. Please check the config value of 'hbase.procedure.store.wal.use.hsync' to set the desired level of robustness and ensure the config value of 'hbase.wal.dir' points to a FileSystem mount that can provide it.
        at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.rollWriter(WALProcedureStore.java:1043)
        at org.apache.hadoop.hbase.procedure2.store.wal.WALProcedureStore.recoverLease(WALProcedureStore.java:382)
        at org.apache.hadoop.hbase.procedure2.ProcedureExecutor.start(ProcedureExecutor.java:550)
        at org.apache.hadoop.hbase.master.HMaster.startProcedureExecutor(HMaster.java:1239)
        at org.apache.hadoop.hbase.master.HMaster.startServiceThreads(HMaster.java:1158)
        at org.apache.hadoop.hbase.master.HMaster.finishActiveMasterInitialization(HMaster.java:849)
        at org.apache.hadoop.hbase.master.HMaster.startActiveMasterManager(HMaster.java:2036)
        at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:553)
        at java.lang.Thread.run(Thread.java:748)
2018-06-25 18:08:05,887 ERROR [master/bus-stable:16000] master.HMaster: Master server abort: loaded coprocessors are: []

After digging through the official documentation and other references, the cause finally surfaced: I had deployed Hadoop 2.9.1 together with hbase-2.0.1, and that combination does not provide the hsync support the procedure WAL demands, hence the error. (As the message itself says, hbase.procedure.store.wal.use.hsync is the knob that relaxes this requirement, at the cost of weaker durability.) Swapping in hbase-1.4.5 and restarting Hadoop and HBase solved the problem.

(2) At startup, the log showed:

2018-06-25 20:03:05,845 FATAL [oversea-stable:16000.activeMasterManager] master.HMaster: Unhandled exception. Starting shutdown.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby. Visit https://s.apache.org/sbnn-error
        at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:88)
        at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1983)

The NameNode being written to was in standby state, which rejects such operations; writes must go to the active NameNode. In an HA deployment this is avoided by pointing hbase.rootdir at the nameservice ID (e.g. hdfs://<nameservice>/user/hbase) rather than at one fixed NameNode, so clients always follow whichever node is active.
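To see which NameNode is currently active, a sketch (the service IDs nn1/nn2 are illustrative; use the names defined under dfs.ha.namenodes.<nameservice> in hdfs-site.xml):

hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2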

Basic HBase usage

[hadoop@oversea-stable hbase]$ hbase shell 
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase-1.4.5/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop-2.9.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
Version 1.4.5, rca99a9466415dc4cfc095df33efb45cb82fe5480, Wed Jun 13 15:13:00 EDT 2018

hbase(main):001:0> list
TABLE                                                                                                                                                     
0 row(s) in 0.3120 seconds

=> []
hbase(main):002:0> create 'user','f1','f2','f3'
0 row(s) in 1.5760 seconds

=> Hbase::Table - user
hbase(main):003:0> put 'user','1','f1:name','meteor'
0 row(s) in 0.2330 seconds

hbase(main):004:0> scan 'user'
ROW                                     COLUMN+CELL                                                                                                       
 1                                      column=f1:name, timestamp=1529982057221, value=meteor                                                             
1 row(s) in 0.0270 seconds

hbase(main):005:0> put 'user','row2','f1:name','qianfeng'
0 row(s) in 0.1010 seconds

hbase(main):006:0> put 'user','row3','f1:name','tianyun'
0 row(s) in 0.0130 seconds

hbase(main):007:0> scan 'user'
ROW                                     COLUMN+CELL                                                                                                       
 1                                      column=f1:name, timestamp=1529982057221, value=meteor                                                             
 row2                                   column=f1:name, timestamp=1529982408171, value=qianfeng                                                           
 row3                                   column=f1:name, timestamp=1529982437171, value=tianyun                                                            
3 row(s) in 0.0160 seconds


hbase(main):008:0> get 'user','row2'
COLUMN                                  CELL                                                                                                              
 f1:name                                timestamp=1529982408171, value=qianfeng                                                                           
1 row(s) in 0.0370 seconds

hbase(main):009:0> 
hbase(main):010:0> disable 'user'
0 row(s) in 2.3360 seconds

hbase(main):011:0> enable 'user'
0 row(s) in 1.2630 seconds

hbase(main):012:0> drop 'user' 

ERROR: Table user is enabled. Disable it first.

Here is some help for this command:
Drop the named table. Table must first be disabled:
  hbase> drop 't1'
  hbase> drop 'ns1:t1'

hbase(main):013:0> list 
TABLE                                                                                                                                                     
user                                                                                                                                                      
1 row(s) in 0.0100 seconds

=> ["user"]
hbase(main):014:0> disable 'user'
0 row(s) in 2.2440 seconds

hbase(main):015:0> drop 'user'
0 row(s) in 1.2550 seconds

hbase(main):016:0> list
TABLE                                                                                                                                                     
0 row(s) in 0.0080 seconds

=> []
hbase(main):017:0> quit
[hadoop@oversea-stable hbase]$
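For scripted use, the same commands can also run non-interactively, either piped into the shell or read from a file (path illustrative):

echo "list" | hbase shell
hbase shell /path/to/commands.rb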