
Note: before this article, the three-node Hadoop cluster had already been built, and the desktop machine's network, yum repository, firewall (disabled), and so on had already been configured. See the first and second articles in this column for details.

HBase installation

1. Upload the HBase package and extract it


hadoop@ddai-master:~$ tar xzvf /home/hadoop/hbase-2.2.6-bin.tar.gz -C /opt/

2. Edit the configuration files (4 of them)

hbase-env.sh

hadoop@ddai-master:~$ vim /opt/hbase-2.2.6/conf/hbase-env.sh 
# add
export JAVA_HOME=/opt/jdk1.8.0_221
export HBASE_MANAGES_ZK=false
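`HBASE_MANAGES_ZK=false` tells HBase to use the external ZooKeeper ensemble installed below instead of its bundled one, and a missed line here is a common source of startup failures. A minimal check sketch; it works on a demo copy of the file (point `HBASE_ENV` at the real `conf/hbase-env.sh` and drop the demo heredoc to check a live install):

```shell
# Sketch: verify hbase-env.sh contains the two required export lines.
HBASE_ENV=/tmp/hbase-env-demo.sh   # demo path; the real file is conf/hbase-env.sh

# Demo copy mirroring the settings above (omit when checking the real file).
cat > "$HBASE_ENV" <<'EOF'
export JAVA_HOME=/opt/jdk1.8.0_221
export HBASE_MANAGES_ZK=false
EOF

check_hbase_env() {
  grep -q '^export JAVA_HOME=' "$HBASE_ENV" || { echo "missing JAVA_HOME"; return 1; }
  grep -q '^export HBASE_MANAGES_ZK=false' "$HBASE_ENV" || { echo "missing HBASE_MANAGES_ZK"; return 1; }
  echo "hbase-env.sh ok"
}
check_hbase_env
```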


hbase-site.xml

hadoop@ddai-master:~$ vim /opt/hbase-2.2.6/conf/hbase-site.xml
<configuration>
<property>
<name>hbase.rootdir</name>
<value>hdfs://ddai-master:9000/hbase</value>
</property>
<property>
<name>hbase.unsafe.stream.capability.enforce</name>
<value>false</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>ddai-master,ddai-slave1,ddai-slave2</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/opt/apache-zookeeper-3.5.8-bin/data/</value>
</property>
<property>
<name>hbase.regionserver.handler.count</name>
<value>20</value>
</property>
<property>
<name>hbase.regionserver.maxlogs</name>
<value>64</value>
</property>
<property>
<name>hbase.hregion.max.filesize</name>
<value>10485760</value>
</property>
</configuration>
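The name/value pairs above can be audited quickly without opening the XML by hand. A sketch that flattens `<name>`/`<value>` pairs into `name=value` lines with awk; it runs against a demo copy containing part of the configuration above (point `CONF` at the real `conf/hbase-site.xml` on a node):

```shell
# Sketch: flatten hbase-site.xml <name>/<value> pairs into name=value lines.
CONF=/tmp/hbase-site-demo.xml   # demo path; the real file is conf/hbase-site.xml

# Demo copy of part of the configuration above.
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://ddai-master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
EOF

list_props() {
  # Split on angle brackets; remember the <name> line, print on the <value> line.
  awk -F'[<>]' '/<name>/{n=$3} /<value>/{print n"="$3}' "$CONF"
}
list_props
```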

regionservers

hadoop@ddai-master:~$ vim /opt/hbase-2.2.6/conf/regionservers 

# add
ddai-slave1
ddai-slave2

backup-masters

hadoop@ddai-master:~$ vim /opt/hbase-2.2.6/conf/backup-masters

# add
ddai-slave1

3. Copy to the other nodes

hadoop@ddai-master:~$ scp -r /opt/hbase-2.2.6 hadoop@ddai-slave1:/opt
hadoop@ddai-master:~$ scp -r /opt/hbase-2.2.6 hadoop@ddai-slave2:/opt
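With more workers the per-node scp lines grow linearly, so a loop keeps them in one place. A dry-run sketch using this cluster's hostnames: it only prints the commands until the leading `echo` is removed (assumes passwordless ssh between the nodes, as in the earlier articles):

```shell
# Sketch: distribute a directory to every worker node.
# Dry-run: prints each scp command; delete the "echo" to execute for real.
NODES="ddai-slave1 ddai-slave2"

distribute() {
  src="$1"
  for node in $NODES; do
    echo scp -r "$src" "hadoop@$node:/opt"
  done
}
distribute /opt/hbase-2.2.6
```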

4. On all nodes, set the HBase environment variables and apply them

Add:
export HBASE_HOME=/opt/hbase-2.2.6
export PATH=$PATH:$HBASE_HOME/bin

export ZOOKEEPER_HOME=/opt/apache-zookeeper-3.5.8-bin
export PATH=$PATH:$ZOOKEEPER_HOME/bin
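Appending these exports blindly each time the step is repeated duplicates them in the profile; a guard keeps it idempotent. A sketch against a demo file (on a real node the target is ~/.profile, followed by `source ~/.profile`):

```shell
# Sketch: append an export line to the profile only if it is not already there.
PROFILE=/tmp/demo-profile   # demo path; the real file is ~/.profile
: > "$PROFILE"              # demo: start from an empty profile

add_line() {
  # -x matches the whole line, -F treats it literally (no regex surprises).
  grep -qxF "$1" "$PROFILE" || echo "$1" >> "$PROFILE"
}
add_line 'export HBASE_HOME=/opt/hbase-2.2.6'
add_line 'export PATH=$PATH:$HBASE_HOME/bin'
add_line 'export HBASE_HOME=/opt/hbase-2.2.6'   # repeated call is a no-op
```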


5. If "mycluster" cannot be found (all nodes)

hadoop@ddai-master:~$ cp /opt/hadoop-2.8.5/etc/hadoop/core-site.xml /opt/hbase-2.2.6/conf/
hadoop@ddai-master:~$ cp /opt/hadoop-2.8.5/etc/hadoop/hdfs-site.xml /opt/hbase-2.2.6/conf/
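HBase resolves `hbase.rootdir` through the HDFS client, so it needs Hadoop's `core-site.xml` and `hdfs-site.xml` on its classpath; without them it cannot resolve HDFS-level names (hence the "mycluster"-style error). The two copies above can be written as a loop. The sketch below uses demo directories so it can run anywhere; on the cluster the source is /opt/hadoop-2.8.5/etc/hadoop and the target is /opt/hbase-2.2.6/conf:

```shell
# Sketch: copy the HDFS client configs into HBase's conf directory.
# Demo dirs stand in for /opt/hadoop-2.8.5/etc/hadoop and /opt/hbase-2.2.6/conf.
HADOOP_CONF=/tmp/demo-hadoop-conf
HBASE_CONF=/tmp/demo-hbase-conf
mkdir -p "$HADOOP_CONF" "$HBASE_CONF"
touch "$HADOOP_CONF/core-site.xml" "$HADOOP_CONF/hdfs-site.xml"   # demo files

for f in core-site.xml hdfs-site.xml; do
  cp "$HADOOP_CONF/$f" "$HBASE_CONF/"
done
ls "$HBASE_CONF"
```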

ZooKeeper installation

1. Upload the ZooKeeper package and extract it


hadoop@ddai-master:~$ tar xzvf /home/hadoop/apache-zookeeper-3.5.8-bin.tar.gz -C /opt/

2. Edit zoo.cfg

hadoop@ddai-master:/opt/apache-zookeeper-3.5.8-bin/conf$ mv zoo_sample.cfg zoo.cfg
hadoop@ddai-master:/opt/apache-zookeeper-3.5.8-bin/conf$ vim zoo.cfg 


# comment everything out, then add
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/apache-zookeeper-3.5.8-bin/data
clientPort=2181
maxClientCnxns=60
autopurge.snapRetainCount=3
autopurge.purgeInterval=1

server.0=ddai-master:2888:3888
server.1=ddai-slave1:2888:3888
server.2=ddai-slave2:2888:3888
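The same zoo.cfg is needed on every node, so generating it from a heredoc is less error-prone than editing three copies by hand. A sketch writing it to a demo path (the real target is /opt/apache-zookeeper-3.5.8-bin/conf/zoo.cfg):

```shell
# Sketch: generate zoo.cfg from a heredoc, then sanity-check it.
ZOOCFG=/tmp/demo-zoo.cfg   # demo path; the real one is .../conf/zoo.cfg
cat > "$ZOOCFG" <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/apache-zookeeper-3.5.8-bin/data
clientPort=2181
maxClientCnxns=60
autopurge.snapRetainCount=3
autopurge.purgeInterval=1
server.0=ddai-master:2888:3888
server.1=ddai-slave1:2888:3888
server.2=ddai-slave2:2888:3888
EOF
grep -c '^server\.' "$ZOOCFG"   # a 3-node ensemble must list 3 servers
```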

3. Create myid

hadoop@ddai-master:~$ mkdir /opt/apache-zookeeper-3.5.8-bin/data
hadoop@ddai-master:~$ echo 0 > /opt/apache-zookeeper-3.5.8-bin/data/myid

4. Copy ZooKeeper to the other nodes

hadoop@ddai-master:~$ scp -r /opt/apache-zookeeper-3.5.8-bin hadoop@ddai-slave1:/opt
hadoop@ddai-master:~$ scp -r /opt/apache-zookeeper-3.5.8-bin hadoop@ddai-slave2:/opt

5. Create myid on each of the other nodes

hadoop@ddai-slave1:~$ echo 1 > /opt/apache-zookeeper-3.5.8-bin/data/myid 
hadoop@ddai-slave2:~$ echo 2 > /opt/apache-zookeeper-3.5.8-bin/data/myid
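Each node's myid must match the `server.N` line that names it in zoo.cfg (ddai-master→0, ddai-slave1→1, ddai-slave2→2); a mismatch breaks leader election. A local sketch of that mapping, writing into per-node demo directories (on a real node the target is /opt/apache-zookeeper-3.5.8-bin/data/myid):

```shell
# Sketch: derive each node's myid from its position in the server list.
BASE=/tmp/demo-zk   # demo root; each node really writes only its own data/myid
id=0
for node in ddai-master ddai-slave1 ddai-slave2; do
  mkdir -p "$BASE/$node/data"
  echo "$id" > "$BASE/$node/data/myid"
  id=$((id + 1))
done
cat "$BASE/ddai-slave2/data/myid"
```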

6. Start ZooKeeper on all nodes

Every node must be started, otherwise the status check will report an error.

/opt/apache-zookeeper-3.5.8-bin/bin/zkServer.sh start


To stop ZooKeeper:

/opt/apache-zookeeper-3.5.8-bin/bin/zkServer.sh stop

7. Check ZooKeeper status

Errors here are usually caused by mistakes in the configuration file. When everything is correct, zkServer.sh status reports Mode: leader on one node and Mode: follower on the others.

HBase startup

Start Hadoop first.

1. Start HBase and check the processes

hadoop@ddai-master:~$ start-hbase.sh

Check with jps on each node: ddai-master shows HMaster, ddai-slave1 shows HRegionServer plus the backup HMaster, and ddai-slave2 shows HRegionServer (every node also runs ZooKeeper's QuorumPeerMain).

2. Web UI check

Open the HBase Master web UI in a browser (port 16010 by default) to confirm that the master, the backup master, and both region servers are registered.

hbase shell

Test commands

The usual start order is Hadoop, then ZooKeeper, then HBase.
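The order matters because HBase registers in ZooKeeper and stores its data in HDFS, so both must be up first. A dry-run sketch of the full sequence (it only prints the commands; remove the `echo`s to execute, and it assumes Hadoop's stock start-dfs.sh script and passwordless ssh):

```shell
# Sketch: start the stack in dependency order (dry-run: just prints the plan).
ZK_NODES="ddai-master ddai-slave1 ddai-slave2"

start_stack() {
  echo start-dfs.sh                 # 1. HDFS first
  for node in $ZK_NODES; do         # 2. ZooKeeper on every node
    echo ssh "$node" /opt/apache-zookeeper-3.5.8-bin/bin/zkServer.sh start
  done
  echo start-hbase.sh               # 3. HBase last
}
start_stack
```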


hadoop@ddai-master:~$ hbase shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hadoop-2.8.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hbase-2.2.6/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
For Reference, please visit: http://hbase.apache.org/2.0/book.html#shell
Version 2.2.6, r88c9a386176e2c2b5fd9915d0e9d3ce17d0e456e, Tue Sep 15 17:36:14 CST 2020
Took 0.0072 seconds



hbase(main):001:0> create 'test','cf'
Created table test
Took 1.9827 seconds
=> Hbase::Table - test



hbase(main):002:0> list 'test'
TABLE
test
1 row(s)
Took 0.0461 seconds
=> ["test"]



hbase(main):004:0> describe 'test'
Table test is ENABLED
test
COLUMN FAMILIES DESCRIPTION
{NAME => 'cf', VERSIONS => '1', EVICT_BLOCKS_ON_CLOSE => 'false', NEW_VERSION_BEHAVIOR => 'false', KEEP_DELETED_CELLS => 'FALSE', CACHE_DATA_ON_WRITE => 'false', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', MIN_VERSIONS => '0', REPLICATION_SCOPE => '0', BLOOMFILTER => 'ROW', CACHE_INDEX_ON_WRITE => 'false', IN_MEMORY => 'false', CACHE_BLOOMS_ON_WRITE => 'false', PREFETCH_BLOCKS_ON_OPEN => 'false', COMPRESSION => 'NONE', BLOCKCACHE => 'true', BLOCKSIZE => '65536'}
1 row(s)
QUOTAS
0 row(s)
Took 0.3199 seconds



hbase(main):005:0> put 'test','row1','cf:a','value1'
Took 0.1477 seconds
hbase(main):006:0> put 'test','row2','cf:b','value2'
Took 0.0136 seconds
hbase(main):007:0> put 'test','row3','cf:c','value3'
Took 0.0109 seconds



hbase(main):008:0> scan 'test'
ROW COLUMN+CELL
row1 column=cf:a, timestamp=1628589131227, value=value1
row2 column=cf:b, timestamp=1628589151620, value=value2
row3 column=cf:c, timestamp=1628589165039, value=value3
3 row(s)
Took 0.1208 seconds



hbase(main):009:0> get 'test','row1'
COLUMN CELL
cf:a timestamp=1628589131227, value=value1
1 row(s)
Took 0.0598 seconds



hbase(main):010:0> create 'score','name','class','course'
Created table score
Took 1.2638 seconds
=> Hbase::Table - score
hbase(main):011:0> list
TABLE
score
test
2 row(s)
Took 0.0327 seconds
=> ["score", "test"]



hbase(main):013:0> put 'score','610213','name:','Tom'
Took 0.0402 seconds
hbase(main):014:0> put 'score','610213','class:class','163Cloud'
Took 0.0154 seconds
hbase(main):015:0> put 'score','610213','course:python','79'
Took 0.0069 seconds
hbase(main):016:0> put 'score','610215','name','John'
Took 0.0117 seconds
hbase(main):017:0> put 'score','610215','class:class','173BigData'
Took 0.0073 seconds
hbase(main):018:0> put 'score','610215','course:java','70'
Took 0.0073 seconds
hbase(main):019:0> put 'score','610215','course:java','80'
Took 0.0341 seconds
hbase(main):020:0> put 'score','610215','course:python','86'



hbase(main):029:0> get 'score','610215'
COLUMN CELL
class:class timestamp=1628589541910, value=173BigData
course:java timestamp=1628589560187, value=80
course:python timestamp=1628589566187, value=86
name: timestamp=1628589533493, value=John
1 row(s)
Took 0.0240 seconds
hbase(main):030:0> get 'score','610215','course'
COLUMN CELL
course:java timestamp=1628589560187, value=80
course:python timestamp=1628589566187, value=86
1 row(s)
Took 0.0354 seconds
hbase(main):031:0> get 'score','610215','course:java'
COLUMN CELL
course:java timestamp=1628589560187, value=80
1 row(s)
Took 0.0153 seconds
hbase(main):032:0> scan 'score','610215'
ROW COLUMN+CELL



hbase(main):033:0> scan 'score'
ROW COLUMN+CELL
610213 column=class:class, timestamp=1628589517565, value=163Cloud
610213 column=course:python, timestamp=1628589525397, value=79
610213 column=name:, timestamp=1628589509092, value=Tom
610215 column=class:class, timestamp=1628589541910, value=173BigData
610215 column=course:java, timestamp=1628589560187, value=80
610215 column=course:python, timestamp=1628589566187, value=86
610215 column=name:, timestamp=1628589533493, value=John
2 row(s)
Took 0.0275 seconds
hbase(main):034:0> scan 'score',{COLUMNS=>'course'}
ROW COLUMN+CELL
610213 column=course:python, timestamp=1628589525397, value=79
610215 column=course:java, timestamp=1628589560187, value=80
610215 column=course:python, timestamp=1628589566187, value=86
2 row(s)
Took 0.0214 seconds
hbase(main):035:0> scan 'score',{COLUMN=>'course:java'}
ROW COLUMN+CELL
610215 column=course:java, timestamp=1628589560187, value=80
1 row(s)
Took 0.0138 seconds



hbase(main):036:0> alter 'score',NAME=>'address'
Updating all regions with the new schema...
1/1 regions updated.
Done.
Took 3.0328 seconds
hbase(main):037:0> alter 'score',NAME=>'address',METHOD=>'delete'
Updating all regions with the new schema...
1/1 regions updated.
Done.
Took 2.7102 seconds



(1) Add a column family
alter 'score',NAME=>'address'
(2) Delete a column family
alter 'score',NAME=>'address',METHOD=>'delete'
(3) Delete a table
disable 'score'
drop 'score'


stop-hbase.sh  # stop HBase

Using HBase with data tables

To use HBase from the desktop machine, copy over the HBase, ZooKeeper, Hadoop, and JDK packages, configure the environment variables, and finally start the services on the three node machines one by one.

1. Copy the required packages from ddai-master and configure the environment variables

hadoop@ddai-desktop:/opt$ scp -r hadoop@ddai-master:/opt/hbase-2.2.6 /opt
hadoop@ddai-desktop:/opt$ scp -r hadoop@ddai-master:/opt/apache-zookeeper-3.5.8-bin /opt
hadoop@ddai-desktop:~$ vim /home/hadoop/.profile 
hadoop@ddai-desktop:~$ source /home/hadoop/.profile


2. Programming in Eclipse

Create a SearchScore project and add the jar packages.


Add all the jars under $HBASE_HOME/lib to the project's build path.

Create a SearchScore class under the project's src folder.


Write the code:

import java.io.IOException;
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.CompareFilter.CompareOp;
import org.apache.hadoop.hbase.filter.Filter;
import org.apache.hadoop.hbase.filter.RegexStringComparator;
import org.apache.hadoop.hbase.filter.RowFilter;
import org.apache.hadoop.hbase.util.Bytes;
public class SearchScore {
    static HBaseAdmin hbaseAdmin;
    static Configuration hbaseConf;

    // Get a single cell value. Parameters: table, row key, column family, column.
    public Result getColumnRecord(String tableName, String rowkey,
            String family, String qualifier) throws IOException {
        Configuration conf = new Configuration();
        conf.set("hbase.zookeeper.quorum", "ddai-master,ddai-slave1,ddai-slave2");
        hbaseConf = HBaseConfiguration.create(conf);
        Connection connection = ConnectionFactory.createConnection(hbaseConf);
        // Use the tableName parameter instead of a hard-coded table.
        Table table = connection.getTable(TableName.valueOf(tableName));
        Get get = new Get(rowkey.getBytes());
        get.addColumn(family.getBytes(), qualifier.getBytes());
        Result rs = table.get(get);
        return rs;
    }

    // Filter by row key. Parameters: table, row-key regex string.
    public List<Result> getFilterRecord(String tableName, String rowRegexString)
            throws IOException {
        Configuration conf = new Configuration();
        conf.set("hbase.zookeeper.quorum", "ddai-master,ddai-slave1,ddai-slave2");
        hbaseConf = HBaseConfiguration.create(conf);
        Connection connection = ConnectionFactory.createConnection(hbaseConf);
        Table table = connection.getTable(TableName.valueOf(tableName));
        Scan scan = new Scan();
        Filter filter = new RowFilter(CompareOp.EQUAL,
                new RegexStringComparator(rowRegexString));
        scan.setFilter(filter);
        ResultScanner scanner = table.getScanner(scan);
        List<Result> list = new ArrayList<Result>();
        for (Result r : scanner) {
            list.add(r);
        }
        scanner.close();
        return list;
    }

    // Null-safe cell decoding: row 610213 has no course:java cell, so
    // getValue can return null and new String(null) would throw an NPE.
    private static String valueOrDash(Result rs, String family, String qualifier) {
        byte[] v = rs.getValue(Bytes.toBytes(family), Bytes.toBytes(qualifier));
        return v == null ? "-" : new String(v);
    }

    public static void main(String[] args) throws IOException {
        Configuration conf = new Configuration();
        conf.set("hbase.zookeeper.quorum", "ddai-master,ddai-slave1,ddai-slave2");
        hbaseConf = HBaseConfiguration.create(conf);
        SearchScore searchScore = new SearchScore();
        Result rs = searchScore.getColumnRecord("score", "610215", "class", "class");
        for (Cell cell : rs.rawCells()) {
            System.out.print("Rowkey:" + Bytes.toString(rs.getRow()));
            System.out.print(" Family-Qualifier:"
                    + Bytes.toString(CellUtil.cloneFamily(cell)) + "-"
                    + Bytes.toString(CellUtil.cloneQualifier(cell)));
            System.out.println(" Value:" + Bytes.toString(CellUtil.cloneValue(cell)));
        }
        System.out.println(" Class of 610215: " + Bytes.toString(rs.value()));
        List<Result> list = searchScore.getFilterRecord("score", "610213");
        Iterator<Result> it = list.iterator();
        while (it.hasNext()) {
            rs = it.next();
            String name = valueOrDash(rs, "name", "");
            String class1 = valueOrDash(rs, "class", "class");
            String java = valueOrDash(rs, "course", "java");
            String python = valueOrDash(rs, "course", "python");
            System.out.println("name:" + name + " class:" + class1
                    + " java:" + java + " python:" + python);
        }
    }
}

Run the code; the console prints the queried cell for row 610215 and the filtered record for row 610213.

In the same way, create an HBaseDemo project with an HBaseDemo class, and add the jars from hbase/lib.

The code is as follows:

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.MasterNotRunningException;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.ZooKeeperConnectionException;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;

import org.apache.hadoop.hbase.util.Bytes;

public class HBaseDemo {
    public static Configuration conf = new Configuration();
    static {
        conf = HBaseConfiguration.create();
        conf.addResource("hbase-site.xml");
    }

    public static void createTable(String tablename, String columnFamily)
            throws MasterNotRunningException, IOException, ZooKeeperConnectionException {
        Connection conn = ConnectionFactory.createConnection(conf);
        Admin admin = conn.getAdmin();
        try {
            if (admin.tableExists(TableName.valueOf(tablename))) {
                System.out.println(tablename + " already exists");
            } else {
                TableName tableName = TableName.valueOf(tablename);
                HTableDescriptor tableDesc = new HTableDescriptor(tableName);
                tableDesc.addFamily(new HColumnDescriptor(columnFamily));
                admin.createTable(tableDesc);
                System.out.println(tablename + " created successfully");
            }
        } finally {
            admin.close();
            conn.close();
        }
    }

    public static void putData(String tableName, String row, String columnFamily,
            String column, String data) throws IOException {
        Connection conn = ConnectionFactory.createConnection(conf);
        Table table = conn.getTable(TableName.valueOf(tableName));
        try {
            Put put = new Put(Bytes.toBytes(row));
            put.addColumn(Bytes.toBytes(columnFamily), Bytes.toBytes(column), Bytes.toBytes(data));
            table.put(put);
        } finally {
            table.close();
            conn.close();
        }
    }

    public static void putFamily(String tableName, String columnFamily) throws IOException {
        Connection conn = ConnectionFactory.createConnection(conf);
        Admin admin = conn.getAdmin();
        try {
            if (!admin.tableExists(TableName.valueOf(tableName))) {
                System.out.println(tableName + " does not exist");
            } else {
                admin.disableTable(TableName.valueOf(tableName));
                HColumnDescriptor cfl = new HColumnDescriptor(columnFamily);
                admin.addColumn(TableName.valueOf(tableName), cfl);
                admin.enableTable(TableName.valueOf(tableName));
                System.out.println(TableName.valueOf(tableName) + "," + columnFamily + " added successfully");
            }
        } finally {
            admin.close();
            conn.close();
        }
    }

    public static void getData(String tableName, String row, String columnFamily,
            String column) throws IOException {
        Connection conn = ConnectionFactory.createConnection(conf);
        Table table = conn.getTable(TableName.valueOf(tableName));
        try {
            Get get = new Get(Bytes.toBytes(row));
            Result result = table.get(get);
            byte[] rb = result.getValue(Bytes.toBytes(columnFamily), Bytes.toBytes(column));
            String value = new String(rb, "UTF-8");
            System.out.println(value);
        } finally {
            table.close();
            conn.close();
        }
    }

    public static void scanAll(String tableName) throws IOException {
        Connection conn = ConnectionFactory.createConnection(conf);
        Table table = conn.getTable(TableName.valueOf(tableName));
        try {
            Scan scan = new Scan();
            ResultScanner resultScanner = table.getScanner(scan);
            for (Result result : resultScanner) {
                List<Cell> cells = result.listCells();
                for (Cell cell : cells) {
                    String row = new String(result.getRow(), "UTF-8");
                    String family = new String(CellUtil.cloneFamily(cell), "UTF-8");
                    String qualifier = new String(CellUtil.cloneQualifier(cell), "UTF-8");
                    String value = new String(CellUtil.cloneValue(cell), "UTF-8");
                    System.out.println("[row:" + row + "],[family:" + family
                            + "],[qualifier:" + qualifier + "],[value:" + value + "]");
                }
            }
        } finally {
            table.close();
            conn.close();
        }
    }

    public static void delData(String tableName, String rowKey) throws IOException {
        Connection conn = ConnectionFactory.createConnection(conf);
        Table table = conn.getTable(TableName.valueOf(tableName));
        try {
            List<Delete> list = new ArrayList<Delete>();
            Delete del = new Delete(rowKey.getBytes());
            list.add(del);
            table.delete(list);
            System.out.println("delete record " + rowKey + " ok");
        } finally {
            table.close();
            conn.close();
        }
    }

    public static void deleteColumn(String tableName, String rowKey, String familyName,
            String columnName) throws IOException {
        Connection conn = ConnectionFactory.createConnection(conf);
        Table table = conn.getTable(TableName.valueOf(tableName));
        try {
            Delete del = new Delete(Bytes.toBytes(rowKey));
            del.addColumn(Bytes.toBytes(familyName), Bytes.toBytes(columnName));
            List<Delete> list = new ArrayList<Delete>(1);
            list.add(del);
            table.delete(list);
            System.out.println("[table:" + tableName + "],[row:" + rowKey
                    + "],[family:" + familyName + "],[qualifier:" + columnName + "]");
        } finally {
            table.close();
            conn.close();
        }
    }

    public static void deleteFamily(String tableName, String rowKey, String familyName)
            throws IOException {
        Connection conn = ConnectionFactory.createConnection(conf);
        Table table = conn.getTable(TableName.valueOf(tableName));
        try {
            Delete del = new Delete(Bytes.toBytes(rowKey));
            del.addFamily(Bytes.toBytes(familyName));
            List<Delete> list = new ArrayList<Delete>(1);
            list.add(del);
            table.delete(list);
            System.out.println("[table:" + tableName + "],[row:" + rowKey
                    + "],[family:" + familyName + "]");
        } finally {
            table.close();
            conn.close();
        }
    }

    public static void deleteTable(String tableName)
            throws IOException, MasterNotRunningException, ZooKeeperConnectionException {
        Connection conn = ConnectionFactory.createConnection(conf);
        Admin admin = conn.getAdmin();
        try {
            admin.disableTable(TableName.valueOf(tableName));
            admin.deleteTable(TableName.valueOf(tableName));
            System.out.println("delete table " + tableName + " ok");
        } finally {
            admin.close();
            conn.close();
        }
    }

    public static void main(String[] args) {
        conf.set("hbase.zookeeper.quorum", "ddai-master,ddai-slave1,ddai-slave2");
        System.out.println("start...");
        String tableName = "bus_load";
        try {
            createTable(tableName, "load");
            putData(tableName, "row_1", "load", "class", "183bigdata");
            putData(tableName, "row_1", "load", "data", "2020-10-28");
            putData(tableName, "row_1", "load", "name", "zhangsan");
            putData(tableName, "row_1", "load", "telephone", "13888888888");
            scanAll(tableName);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
        System.err.println("end...");
    }
}