一、Environment Overview

My understanding:

ZooKeeper can form a cluster on its own, whereas a fully distributed HBase cluster cannot stand alone and must be integrated with Hadoop/HDFS.

The cluster needs at least 3 nodes (i.e., 3 servers): 1 master and 2 slaves, connected over a LAN and able to ping each other. The example IP assignment used below is:

IP              Role

192.168.1.228 master

192.168.1.229 slave1

192.168.1.230 slave2

All three nodes run RHEL 6.5. For easier maintenance, use the same username, the same password, and the same Hadoop, HBase and ZooKeeper directory layout on every node.

Note:

The hostname should ideally match the role above; if it differs, that is fine as long as the mapping is configured in /etc/hosts.

The hostname can be changed by editing /etc/sysconfig/network.
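
A minimal sketch of that change on RHEL 6, assuming the node at 192.168.1.228 is to be named master (adjust per node):

vi /etc/sysconfig/network
    NETWORKING=yes
    HOSTNAME=master

hostname master    # apply immediately without a reboot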

Software packages to prepare:

hadoop-2.7.3.tar.gz

hbase-1.2.5-bin.tar.gz

zookeeper-3.4.6.tar.gz

jdk-8u111-linux-x64.rpm

Because this is a test environment, everything below is done as root. In production, use a dedicated user such as hadoop and give it ownership of the installation directory:

chown -R hadoop.hadoop /data/yunva
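
A rough sketch of setting up such a user (the username hadoop and the password step are illustrative, not from the original notes):

useradd hadoop                       # create the user on every node
passwd hadoop                        # set its password
mkdir -p /data/yunva
chown -R hadoop:hadoop /data/yunva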

二、Preparation
2.1 Install the JDK

First check which Java packages are already installed (the OpenJDK and GCJ packages marked below will be removed):

[root@slave1 yunva]#  rpm -qa | grep java

tzdata-java-2016c-1.el6.noarch

java-1.8.0-openjdk-1.8.0.91-1.b14.el6.x86_64            # remove

java_cup-0.10k-5.el6.x86_64

pki-java-tools-9.0.3-49.el6.noarch

java-1.8.0-openjdk-headless-1.8.0.91-1.b14.el6.x86_64   # remove

java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64                  # remove

java-1.7.0-openjdk-1.7.0.99-2.6.5.1.el6.x86_64          # remove

libvirt-java-0.4.9-1.el6.noarch

libguestfs-java-1.20.11-17.el6.x86_64

java-1.6.0-openjdk-1.6.0.38-1.13.10.4.el6.x86_64        # remove

 

 

[root@master hadoop-2.7.3]# yum -y remove java-1.6.0-openjdk-1.6.0.38-1.13.10.4.el6.x86_64

[root@master hadoop-2.7.3]# yum -y remove java-1.7.0-openjdk-1.7.0.99-2.6.5.1.el6.x86_64

[root@master hadoop-2.7.3]# yum -y remove java-1.8.0-openjdk-headless-1.8.0.91-1.b14.el6.x86_64

[root@master hadoop-2.7.3]# yum -y remove java-1.8.0-openjdk-1.8.0.91-1.b14.el6.x86_64    # this removal failed

 

[root@master hadoop-2.7.3]# yum -y remove java-1.5.0-gcj-1.5.0.0-29.1.el6.x86_64

 

[root@master hadoop-2.7.3]# rpm -qa | grep java

tzdata-java-2016c-1.el6.noarch

[root@master yunva]# rpm -ivh jdk-8u111-linux-x64.rpm
Edit the configuration file with vim /etc/profile and add:

export JAVA_HOME=/usr/java/jdk1.8.0_111/

export PATH=$JAVA_HOME/bin:$PATH

export HADOOP_HOME=/data/yunva/hadoop-2.7.3

export HADOOP_INSTALL=$HADOOP_HOME

export HADOOP_MAPRED_HOME=$HADOOP_HOME

export HADOOP_COMMON_HOME=$HADOOP_HOME

export HADOOP_HDFS_HOME=$HADOOP_HOME

export YARN_HOME=$HADOOP_HOME

export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native

export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

#export HADOOP_SSH_OPTS="-p 48490"   # uncomment only if sshd listens on a non-default port (48490 here) instead of 22

Then reload the profile so the changes take effect. (My test machines could not connect on port 48490, so the HADOOP_SSH_OPTS line above is commented out for now and the default port 22 is used.)
# source /etc/profile 
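
As a quick check that the JDK is picked up (not part of the original notes), verify the version and JAVA_HOME:

java -version          # should report 1.8.0_111
echo $JAVA_HOME        # should print /usr/java/jdk1.8.0_111/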

2.2 Add hosts mappings
Add the hosts mappings on all three nodes:
# vim /etc/hosts
Append the following:

192.168.1.228 master

192.168.1.229 slave1

192.168.1.230 slave2

2.3 Passwordless SSH between cluster nodes

RHEL installs SSH by default; if it is missing, install it first.

The cluster relies on passwordless SSH: every node must be able to ssh to itself without a password, and master and slaves must be able to log in to each other in both directions without a password; slave-to-slave access is not required.

Generate keys on every node:

cd ~

mkdir ~/.ssh

chmod 700 ~/.ssh

ssh-keygen -t rsa    # press Enter at every prompt (default path, empty passphrase)

ssh-keygen -t dsa

Collect the public keys of all nodes on master (a loop equivalent is sketched after these commands):

touch ~/.ssh/authorized_keys

cd ~/.ssh

 ssh master cat ~/.ssh/id_rsa.pub >> authorized_keys

 ssh slave1 cat ~/.ssh/id_rsa.pub >> authorized_keys

 ssh slave2 cat ~/.ssh/id_rsa.pub >> authorized_keys

 ssh master cat ~/.ssh/id_dsa.pub >> authorized_keys

 ssh slave1 cat ~/.ssh/id_dsa.pub >> authorized_keys

 ssh slave2 cat ~/.ssh/id_dsa.pub >> authorized_keys
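
The same collection can be written as a loop on master (a sketch; each ssh call will prompt for that node's root password):

for h in master slave1 slave2; do
    ssh $h cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
    ssh $h cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
done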

From master, copy the authorized_keys file that now holds all the public keys to slave1 and slave2:

[root@master .ssh]# cd /root/.ssh

[root@master .ssh]# scp authorized_keys slave1:'/root/.ssh'

root@slave1's password:

authorized_keys                               100% 4762     4.7KB/s   00:00   

[root@master .ssh]# scp authorized_keys slave2:'/root/.ssh'

root@slave2's password:

authorized_keys                               100% 4762     4.7KB/s   00:00   

[root@master .ssh]#

Set the permissions of the authorized_keys file. Run on every node:

chmod 600 ~/.ssh/authorized_keys

Load the private keys into the SSH agent with ssh-add:

[root@master .ssh]# exec /usr/bin/ssh-agent $SHELL

[root@master .ssh]# ssh-add

Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)

Identity added: /root/.ssh/id_dsa (/root/.ssh/id_dsa)

Verify that the SSH configuration works.

As root, run the following on every node:

ssh master date

ssh slave1 date

ssh slave2 date

Note: the first connection to each host asks you to confirm its key, for example:

[root@slave1 ~]# ssh slave2 date

The authenticity of host 'slave2 (192.168.1.230)' can't be established.

RSA key fingerprint is 87:fb:ce:65:da:8e:d4:de:62:cb:5a:d0:22:b4:90:5a.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added 'slave2,192.168.1.230' (RSA) to the list of known hosts.

 

三、Hadoop Cluster Installation and Configuration

The hadoop, hbase and zookeeper packages are all extracted (and, if necessary, renamed) under /data/yunva/.

On all three nodes:

mkdir -p /data/yunva/

Copy all the downloaded packages into this directory.

chmod -R 777 /data/yunva/

cd /data/yunva/

chmod -R 777 *

tar -zxvf hadoop-2.7.3.tar.gz    # on the master node

The installation directories will be:

/data/yunva/hadoop-2.7.3

/data/yunva/hbase-1.2.5

/data/yunva/zookeeper-3.4.6

3.1 Modify the Hadoop configuration

All configuration files live under /data/yunva/hadoop-2.7.3/etc/hadoop/:

[root@master yunva]# cd /data/yunva/hadoop-2.7.3/etc/hadoop/

3.1.1 core-site.xml

Edit on master:

vi core-site.xml

<configuration>

    <property>

        <name>fs.default.name</name>

        <value>hdfs://master:9000</value>

    </property>

</configuration>
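
A side note, not from the original write-up: fs.default.name is the legacy Hadoop 1.x key. Hadoop 2.7.3 still honours it (with a deprecation warning); the modern equivalent would be:

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
</property>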

 

3.1.2 hadoop-env.sh

[root@master ~]# vi /data/yunva/hadoop-2.7.3/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_111/

 

3.1.3 hdfs-site.xml

Edit on master:

[root@master yunva]# cd /data/yunva/hadoop-2.7.3/etc/hadoop/

vi hdfs-site.xml

# First create the Hadoop name and data directories

# mkdir -p /data/yunva/hadoop-2.7.3/hadoop/name

# mkdir -p /data/yunva/hadoop-2.7.3/hadoop/data

 

 

<configuration>

   <property>

        <name>dfs.name.dir</name>

        <value>/data/yunva/hadoop-2.7.3/hadoop/name</value>

    </property>

    <property>

        <name>dfs.data.dir</name>

        <value>/data/yunva/hadoop-2.7.3/hadoop/data</value>

    </property>

    <property>

        <name>dfs.replication</name>

        <value>3</value>

    </property>

</configuration>

 

3.1.4 mapred-site.xml

Edit on master:

cd /data/yunva/hadoop-2.7.3/etc/hadoop/

# mv mapred-site.xml.template mapred-site.xml

vi mapred-site.xml

<configuration>

    <property>

        <name>mapred.job.tracker</name>

        <value>master:9001</value>

    </property>

</configuration>
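
A caveat worth adding (not in the original notes): mapred.job.tracker is an MRv1/JobTracker setting and is ignored when MapReduce runs on YARN, which is the normal mode for Hadoop 2.x. To submit MapReduce jobs to YARN, the usual property is:

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>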

 

3.1.5 Edit the slaves file (on master), replacing localhost with:

# vi /data/yunva/hadoop-2.7.3/etc/hadoop/slaves

slave1

slave2

Note: all three machines must end up with the same configuration under the same path.

Use scp to copy files between local and remote hosts:

 

3.1.6 Distribute the Hadoop directory to the slaves

scp -r /data/yunva/hadoop-2.7.3/     slave1:/data/yunva

scp -r /data/yunva/hadoop-2.7.3/     slave2:/data/yunva

 

3.2 Start the Hadoop cluster

On master, go to /data/yunva/hadoop-2.7.3/ and run:

cd /data/yunva/hadoop-2.7.3/

# bin/hadoop namenode -format

This formats the NameNode. It is run only once, before the first start; do not run it again afterwards.

 

Then start Hadoop. If the firewall is still enabled you may hit errors such as "connect to address 127.0.0.1: Connection refused".

Check the firewall settings: either open the required ports or disable the firewall:

service iptables status

service iptables stop

chkconfig iptables off 

 

cd /data/yunva/hadoop-2.7.3/

# sbin/start-all.sh

or:

cd /data/yunva/hadoop-2.7.3/sbin

./start-all.sh

The latter method leaves a startup log behind.

 

Running jps on master should show 3 processes in addition to Jps itself:

 [root@master hadoop-2.7.3]# jps

6640 NameNode

6837 SecondaryNameNode

7336 Jps

7035 ResourceManager
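
As an extra sanity check (not from the original write-up), confirm that the worker daemons came up and that the DataNodes registered with the NameNode:

ssh slave1 jps                # expect DataNode (and NodeManager) in the output
ssh slave2 jps
bin/hdfs dfsadmin -report     # on master; should report 2 live datanodes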

 

 

 

 

 

四、ZooKeeper-3.4.6 Cluster Deployment on RHEL 6.5

 

[System] RHEL 6.5 cluster

[Software] A working JDK (jdk1.8.0_111 here) and zookeeper-3.4.6.tar.gz

[Steps]

1. Prerequisites

If you have internal DNS or public domain names, use the domain names directly.

Otherwise add the mappings to /etc/hosts, or simply use IP addresses.

Cluster plan:

IP address      Hostname

192.168.1.228 master

192.168.1.229 slave1

192.168.1.230 slave2

Note: because ZooKeeper elects a leader among its servers and needs a majority (quorum) to operate, deploy an odd number of servers; an even count adds no extra fault tolerance and makes the ensemble more fragile.

2. Installation

Download: http://archive.apache.org/dist/zookeeper/zookeeper-3.4.6/

Extract the archive:

tar -zxf zookeeper-3.4.6.tar.gz 

cd /data/yunva/zookeeper-3.4.6

Copy the sample configuration file; once edited it will be distributed to the other nodes:

[root@master zookeeper-3.4.6]# cd /data/yunva/zookeeper-3.4.6/conf/

 

cp zoo_sample.cfg zoo.cfg

vi /data/yunva/zookeeper-3.4.6/conf/zoo.cfg

or   vi zoo.cfg

 

tickTime=2000

initLimit=10

syncLimit=5

dataDir=/data/yunva/zookeeper-3.4.6/data

dataLogDir=/data/yunva/zookeeper-3.4.6/logs

clientPort=2181

server.1=master:2888:3888

server.2=slave1:2888:3888

server.3=slave2:2888:3888

The main parameters are:

tickTime -> the basic heartbeat interval, in milliseconds

initLimit -> the maximum number of tick intervals a follower may take to connect to and sync with the leader during startup

syncLimit -> the maximum number of tick intervals allowed between a request and a response when the leader and a follower exchange messages; here that is 5 * 2000 ms = 10 seconds

dataDir -> the ZooKeeper data directory

dataLogDir -> the ZooKeeper transaction-log directory

clientPort -> the port clients use to connect to ZooKeeper

server.1=master:2888:3888 -> "server" is a fixed keyword; 1 is the id of the node named master (this id matters below), and master may also be given as an IP address. 2888 is the port used for leader/follower communication (the default, changeable), and 3888 is the port used to elect a leader when there is none.

 

3. Create the data and logs directories on every node

[root@master zookeeper-3.4.6]# mkdir -p /data/yunva/zookeeper-3.4.6/data

[root@master zookeeper-3.4.6]# mkdir -p /data/yunva/zookeeper-3.4.6/logs

Pitfall: I once created the directory with an extra trailing space after "logs", and ZooKeeper then refused to start no matter what.

 

4. Create a myid file in the directory pointed to by dataDir in zoo.cfg.

For example, create myid under $ZK_INSTALL/data and put 1 in it; that node is then server.1.

A node configured as server.2 gets a myid containing 2, and so on.

[root@master zookeeper-3.4.6]# cd /data/yunva/zookeeper-3.4.6/data

[root@master data]# vi myid     # file content: 1

 

scp -r /data/yunva/zookeeper-3.4.6/ root@192.168.1.229:/data/yunva/

scp -r /data/yunva/zookeeper-3.4.6/ root@192.168.1.230:/data/yunva/

 

On slave1 (file content: 2):

[root@slave1 zookeeper-3.4.6]# cd /data/yunva/zookeeper-3.4.6/data

[root@slave1 data]# vi myid     # file content: 2

 

On slave2 (file content: 3):

[root@slave2 zookeeper-3.4.6]# cd /data/yunva/zookeeper-3.4.6/data

[root@slave2 data]# vi myid     # file content: 3
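
If you prefer not to use an editor, the same can be done non-interactively (a sketch):

echo 1 > /data/yunva/zookeeper-3.4.6/data/myid    # on master
echo 2 > /data/yunva/zookeeper-3.4.6/data/myid    # on slave1
echo 3 > /data/yunva/zookeeper-3.4.6/data/myid    # on slave2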

 

Check the firewall settings: either open the required ports or disable the firewall:

service iptables status

service iptables stop

chkconfig iptables off 

 

Start: run the following on every host in the cluster

cd /data/yunva/zookeeper-3.4.6

bin/zkServer.sh start 

or

cd /data/yunva/zookeeper-3.4.6/bin

./zkServer.sh start

The latter method leaves a startup log behind.

Here slave1 and slave2 were started first and master last. Until enough servers are up to form a quorum, the nodes already running will log connection errors like the one below; these warnings are harmless and stop once the other servers join:

java.net.ConnectException: 拒绝连接 (Connection refused)

              at java.net.PlainSocketImpl.socketConnect(Native Method)

              at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350)

              at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206)

              at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188)

              at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)

              at java.net.Socket.connect(Socket.java:589)

              at org.apache.zookeeper.server.quorum.QuorumCnxManager.connectOne(QuorumCnxManager.java:368)

              at org.apache.zookeeper.server.quorum.QuorumCnxManager.toSend(QuorumCnxManager.java:341)

              at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.process(FastLeaderElection.java:449)

              at org.apache.zookeeper.server.quorum.FastLeaderElection$Messenger$WorkerSender.run(FastLeaderElection.java:430)

              at java.lang.Thread.run(Thread.java:745)

2019-01-16 08:52:14,767 [myid:1] - WARN  [WorkerSender[myid=1]:QuorumCnxManager@382] - Cannot open channel to 3 at election address slave2/192.168.1.230:3888

cd /data/yunva/zookeeper-3.4.6/bin

./zkServer.sh start 

This way of starting writes its log to /data/yunva/zookeeper-3.4.6/bin/zookeeper.out.

Check the status; one server should report itself as leader and the other two as followers:

./zkServer.sh status

 

Leader:

./zkServer.sh status

JMX enabled by default

Using config: /data/yunva/zookeeper-3.4.6/bin/../conf/zoo.cfg

Mode: leader

Follower:

./zkServer.sh status

JMX enabled by default

Using config: /data/yunva/zookeeper-3.4.6/bin/../conf/zoo.cfg

Mode: follower
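
An alternative health check (not in the original notes) uses ZooKeeper's built-in four-letter commands over the client port; it requires nc (netcat) to be installed:

echo ruok | nc master 2181    # a healthy server answers "imok"
echo stat | nc master 2181    # shows the mode (leader/follower) and connected clients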

 

Stop:

bin/zkServer.sh stop

 

Connect with the CLI (any of the three servers works):

bin/zkCli.sh -server master:2181 

bin/zkCli.sh -server slave1:2181 

bin/zkCli.sh -server slave2:2181 

[root@master bin]# ./zkCli.sh -server slave1:2181

Connecting to slave1:2181

2019-01-16 09:07:32,560 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT

2019-01-16 09:07:32,563 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=master

2019-01-16 09:07:32,563 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_111

2019-01-16 09:07:32,565 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation

2019-01-16 09:07:32,565 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/java/jdk1.8.0_111/jre

2019-01-16 09:07:32,565 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/data/yunva/zookeeper-3.4.6/bin/../build/classes:/data/yunva/zookeeper-3.4.6/bin/../build/lib/*.jar:/data/yunva/zookeeper-3.4.6/bin/../lib/slf4j-log4j12-1.6.1.jar:/data/yunva/zookeeper-3.4.6/bin/../lib/slf4j-api-1.6.1.jar:/data/yunva/zookeeper-3.4.6/bin/../lib/netty-3.7.0.Final.jar:/data/yunva/zookeeper-3.4.6/bin/../lib/log4j-1.2.16.jar:/data/yunva/zookeeper-3.4.6/bin/../lib/jline-0.9.94.jar:/data/yunva/zookeeper-3.4.6/bin/../zookeeper-3.4.6.jar:/data/yunva/zookeeper-3.4.6/bin/../src/java/lib/*.jar:/data/yunva/zookeeper-3.4.6/bin/../conf:

2019-01-16 09:07:32,565 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib

2019-01-16 09:07:32,565 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp

2019-01-16 09:07:32,566 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>

2019-01-16 09:07:32,566 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux

2019-01-16 09:07:32,566 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64

2019-01-16 09:07:32,566 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=2.6.32-431.el6.x86_64

2019-01-16 09:07:32,566 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root

2019-01-16 09:07:32,566 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root

2019-01-16 09:07:32,566 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/data/yunva/zookeeper-3.4.6/bin

2019-01-16 09:07:32,568 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=slave1:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@7aec35a

Welcome to ZooKeeper!

2019-01-16 09:07:32,613 [myid:] - INFO  [main-SendThread(slave1:2181):ClientCnxn$SendThread@975] - Opening socket connection to server slave1/192.168.1.229:2181. Will not attempt to authenticate using SASL (unknown error)

JLine support is enabled

2019-01-16 09:07:32,672 [myid:] - INFO  [main-SendThread(slave1:2181):ClientCnxn$SendThread@852] - Socket connection established to slave1/192.168.1.229:2181, initiating session

[zk: slave1:2181(CONNECTING) 0] 2019-01-16 09:07:32,707 [myid:] - INFO  [main-SendThread(slave1:2181):ClientCnxn$SendThread@1235] - Session establishment complete on server slave1/192.168.1.229:2181, sessionid = 0x26854303e720000, negotiated timeout = 30000

 

WATCHER::

 

WatchedEvent state:SyncConnected type:None path:null

 

[zk: slave1:2181(CONNECTED) 0]

 

 

Issues encountered:

(1) The startup order described above (slave1 and slave2 first, then master). (2) Only slave1 or slave2 ever became the leader, which at first looked like a problem; it is in fact expected, because once slave1 and slave2 form a quorum one of them is elected leader, and master simply joins later as a follower.

 

五、HBase Cluster Installation and Configuration

tar -zxvf hbase-1.2.5-bin.tar.gz

The configuration directory is /data/yunva/hbase-1.2.5/conf:

cd /data/yunva/hbase-1.2.5/conf

5.1 hbase-env.sh

vi hbase-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_111/  # adjust if your JDK path differs

export HBASE_CLASSPATH=/data/yunva/hadoop-2.7.3/etc/hadoop/

export HBASE_MANAGES_ZK=false             # use the external ZooKeeper ensemble, not the one bundled with HBase

#export HBASE_SSH_OPTS="-p 48490"  # uncomment only if sshd listens on a non-default port instead of 22

 

5.2 hbase-site.xml (keep it identical on all nodes)

vi hbase-site.xml

<configuration>

    <property>

        <name>hbase.rootdir</name>

        <value>hdfs://master:9000/hbase</value>

    </property>

    <property>

        <name>hbase.master</name>

        <value>master</value>

    </property>

    <property>

        <name>hbase.cluster.distributed</name>

        <value>true</value>

    </property>

    <property>

        <name>hbase.zookeeper.property.clientPort</name>

        <value>2181</value>

    </property>

    <property>

        <name>hbase.zookeeper.quorum</name>

        <value>master,slave1,slave2</value>

    </property>

    <property>

        <name>zookeeper.session.timeout</name>

        <value>60000000</value>

    </property>

    <property>

        <name>dfs.support.append</name>

        <value>true</value>

    </property>

</configuration>
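
One remark not in the original notes: zookeeper.session.timeout is set to 60,000,000 ms here (roughly 16.7 hours), while HBase's default is 90 s. A value this large delays detection of failed RegionServers, and the ZooKeeper server caps the negotiated session timeout anyway (by default at 20 * tickTime = 40 s with the zoo.cfg above), so double-check whether it is really intended.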

 

5.3 Edit regionservers

vi regionservers

List the slaves in the regionservers file:

slave1

slave2

 

5.4 Distribute the installation

Copy the whole HBase installation directory to every slave server:

$ scp  -r /data/yunva/hbase-1.2.5  slave1:/data/yunva/

$ scp  -r /data/yunva/hbase-1.2.5  slave2:/data/yunva/

 

六、Start the Cluster

1. Start Hadoop (on the master node)

cd /data/yunva/hadoop-2.7.3/

# sbin/start-all.sh

or:

cd /data/yunva/hadoop-2.7.3/sbin

./start-all.sh

The latter method leaves a startup log behind.

 

To stop Hadoop:

cd /data/yunva/hadoop-2.7.3/

# sbin/stop-all.sh

or

cd /data/yunva/hadoop-2.7.3/sbin

./stop-all.sh

The latter form leaves a log behind.

 

2. Start ZooKeeper (on all nodes)

Run the following on every host in the cluster:

cd /data/yunva/zookeeper-3.4.6

bin/zkServer.sh start 

or

cd /data/yunva/zookeeper-3.4.6/bin

./zkServer.sh start

 

To stop ZooKeeper:

cd /data/yunva/zookeeper-3.4.6

bin/zkServer.sh stop

or

cd /data/yunva/zookeeper-3.4.6/bin

./zkServer.sh stop

 

3. Start HBase (on the master node)

/data/yunva/hbase-1.2.5/bin/start-hbase.sh

or

cd /data/yunva/hbase-1.2.5/bin/

./start-hbase.sh

To stop HBase (on the master node):

cd /data/yunva/hbase-1.2.5/bin/

./stop-hbase.sh

[root@master bin]# ./start-hbase.sh

starting master, logging to /data/yunva/hbase-1.2.5/bin/../logs/hbase-root-master-master.out

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0

Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0

slave1: starting regionserver, logging to /data/yunva/hbase-1.2.5/bin/../logs/hbase-root-regionserver-slave1.out

slave2: starting regionserver, logging to /data/yunva/hbase-1.2.5/bin/../logs/hbase-root-regionserver-slave2.out

slave1: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0

slave1: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0

slave2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0

slave2: Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0

 

4. Process lists on master and the slaves after startup

[root@master ~]# jps

7761 ZooKeeperMain   # ZooKeeper CLI client (zkCli.sh session)

3748 SecondaryNameNode   # Hadoop

6359 QuorumPeerMain   # ZooKeeper server

3914 ResourceManager  # Hadoop

9531 Jps

7021 HMaster  # HBase master

3534 NameNode   # Hadoop

 

[root@slave1 bin]# jps

5360 HRegionServer   # HBase RegionServer

3440 DataNode  # Hadoop DataNode

7237 Jps

4812 QuorumPeerMain   # ZooKeeper server

5. Verify with the HBase shell

[root@master ~]# cd /data/yunva/hbase-1.2.5

[root@master hbase-1.2.5]# bin/hbase shell

2019-01-16 09:32:57,266 WARN  [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/data/yunva/hbase-1.2.5/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/data/yunva/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]

HBase Shell; enter 'help<RETURN>' for list of supported commands.

Type "exit<RETURN>" to leave the HBase Shell

Version 1.2.5, rd7b05f79dee10e0ada614765bb354b93d615a157, Wed Mar  1 00:34:48 CST 2017

 

hbase(main):001:0> list

TABLE                                                                                                                                                                     

0 row(s) in 0.2070 seconds

 

=> []

hbase(main):002:0> create 'scores', 'grade', 'course'

0 row(s) in 2.3980 seconds

 

=> Hbase::Table - scores

hbase(main):003:0>
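
To go one step further (not part of the original session), a few more shell commands confirm that reads and writes reach the RegionServers; the row key and column qualifier below are made-up examples:

put 'scores', 'tom', 'grade:final', 'A'     # write one cell
get 'scores', 'tom'                         # read the row back
scan 'scores'                               # scan the whole table
disable 'scores'
drop 'scores'                               # clean up the test table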

 

6. Verify with the ZooKeeper shell

[root@master yunva]# cd /data/yunva/zookeeper-3.4.6

[root@master zookeeper-3.4.6]# ls

bin          conf     dist-maven       ivy.xml      logs                  README.txt  zookeeper-3.4.6.jar      zookeeper-3.4.6.jar.sha1

build.xml    contrib  docs             lib          NOTICE.txt            recipes     zookeeper-3.4.6.jar.asc  zookeeper.out

CHANGES.txt  data     ivysettings.xml  LICENSE.txt  README_packaging.txt  src         zookeeper-3.4.6.jar.md5

[root@master zookeeper-3.4.6]# bin/zkCli.sh -server    # no host was given here, so the client falls back to localhost:2181

Error: no argument found for option -server

Connecting to localhost:2181

2019-01-16 09:35:44,723 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.6-1569965, built on 02/20/2014 09:09 GMT

2019-01-16 09:35:44,726 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=master

2019-01-16 09:35:44,726 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.8.0_111

2019-01-16 09:35:44,728 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation

2019-01-16 09:35:44,728 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/java/jdk1.8.0_111/jre

2019-01-16 09:35:44,728 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/data/yunva/zookeeper-3.4.6/bin/../build/classes:/data/yunva/zookeeper-3.4.6/bin/../build/lib/*.jar:/data/yunva/zookeeper-3.4.6/bin/../lib/slf4j-log4j12-1.6.1.jar:/data/yunva/zookeeper-3.4.6/bin/../lib/slf4j-api-1.6.1.jar:/data/yunva/zookeeper-3.4.6/bin/../lib/netty-3.7.0.Final.jar:/data/yunva/zookeeper-3.4.6/bin/../lib/log4j-1.2.16.jar:/data/yunva/zookeeper-3.4.6/bin/../lib/jline-0.9.94.jar:/data/yunva/zookeeper-3.4.6/bin/../zookeeper-3.4.6.jar:/data/yunva/zookeeper-3.4.6/bin/../src/java/lib/*.jar:/data/yunva/zookeeper-3.4.6/bin/../conf:

2019-01-16 09:35:44,728 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib

2019-01-16 09:35:44,728 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp

2019-01-16 09:35:44,728 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>

2019-01-16 09:35:44,729 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux

2019-01-16 09:35:44,729 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64

2019-01-16 09:35:44,729 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=2.6.32-431.el6.x86_64

2019-01-16 09:35:44,729 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root

2019-01-16 09:35:44,729 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root

2019-01-16 09:35:44,729 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/data/yunva/zookeeper-3.4.6

2019-01-16 09:35:44,730 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@7aec35a

Welcome to ZooKeeper!

2019-01-16 09:35:44,759 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@975] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)

JLine support is enabled

2019-01-16 09:35:44,824 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@852] - Socket connection established to localhost/127.0.0.1:2181, initiating session

2019-01-16 09:35:44,834 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1235] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x16854303e4a0002, negotiated timeout = 30000

 

WATCHER::

 

WatchedEvent state:SyncConnected type:None path:null

[zk: localhost:2181(CONNECTED) 0] ls /

[zookeeper, hbase]

[zk: localhost:2181(CONNECTED) 1] ls /hbase

[replication, meta-region-server, rs, splitWAL, backup-masters, table-lock, flush-table-proc, region-in-transition, online-snapshot, master, running, recovering-regions, draining, namespace, hbaseid, table]

[zk: localhost:2181(CONNECTED) 2]

The cluster can also be inspected through the default HTTP management pages:
hadoop:
http://localhost:8088/cluster/cluster

http://192.168.1.228:8088/cluster/cluster

(screenshot of the YARN cluster web UI)

hbase:
http://localhost:16010/master-status

http://192.168.1.228:16010/master-status

(screenshot of the HBase master-status web UI)

 

hdfs:

http://localhost:50070/dfshealth.html#tab-overview

http://192.168.1.228:50070/dfshealth.html#tab-overview

(screenshot of the HDFS NameNode web UI)