Installing Ambari 2.7.5 + HDP 3.1.5 on CentOS 7
- Preparation
- 1. Installation packages
- 2. Server configuration
- 3. Configure static IP
- 4. Configure hostnames
- 5. Disable the firewall and SELinux
- 6. Configure passwordless SSH
- 7. Install pssh (optional)
- 8. Configure NTP time sync
- 9. Set swappiness
- 10. Disable transparent huge pages
- 11. Install the HTTP service
- 12. Configure a local OS repo
- 13. Install Java
- Install Maven
- Install Ambari & HDP
- 1. Configure local Ambari, HDP, and libtirpc-devel repos
- 2. Install MariaDB
- 3. Install and configure ambari-server
- 4. Start the Ambari service
- 5. Install ambari-agent on all nodes
- 6. Install libtirpc-devel on all nodes
- Deploy the cluster
- Testing
Preparation
1. Installation packages
Ambari 2.7.5, HDP 3.1.5, libtirpc-devel:
Link: https://pan.baidu.com/s/1kB55rOWSkak1TWwrQKMAcA  extraction code: dc46
CentOS 7 ISO:
Link: https://pan.baidu.com/s/1vzvio5rAMsbYF0-0vBs5dw  extraction code: xij9
2. Server configuration
CentOS 7 environment (note: with these sizes, Spark would not start after installing Kerberos; to run every service, give the master 16 GB and the slaves 8 GB)
hadoop101  5 GB memory + 60 GB disk
hadoop102  4 GB memory + 50 GB disk
hadoop103  4 GB memory + 50 GB disk
3. Configure static IP
vim /etc/sysconfig/network-scripts/ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="a91fc218-477c-4842-a475-488a59d11835"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.10.103
NETMASK=255.255.255.0
GATEWAY=192.168.10.2
DNS1=192.168.10.2
Fields that need to be changed:
BOOTPROTO="static"
IPADDR=192.168.10.103
NETMASK=255.255.255.0
GATEWAY=192.168.10.2
DNS1=192.168.10.2
Restart the network service to apply the configuration:
systemctl restart network
Run ip a to verify the change took effect.
Configure the IPs of the other servers in the same way.
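Before restarting the network on each box, it can help to sanity-check that the ifcfg file actually defines every field a static setup needs. A minimal sketch (the field list and the expectation that BOOTPROTO is static follow this guide; adjust the path to your own interface):

```shell
#!/bin/sh
# check_ifcfg: verify an ifcfg file defines every field a static IP needs.
# Usage: check_ifcfg /etc/sysconfig/network-scripts/ifcfg-ens33
check_ifcfg() {
    file="$1"
    missing=0
    for key in BOOTPROTO IPADDR NETMASK GATEWAY DNS1 ONBOOT; do
        if ! grep -q "^${key}=" "$file"; then
            echo "missing: $key"
            missing=1
        fi
    done
    # BOOTPROTO must be static (quotes are optional in ifcfg files)
    grep -q '^BOOTPROTO="\?static"\?' "$file" || { echo "BOOTPROTO is not static"; missing=1; }
    [ "$missing" -eq 0 ] && echo "ok: $file"
    return "$missing"
}
```

Run it against each node's ifcfg file before systemctl restart network; a nonzero exit status means something is off.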
4. Configure hostnames
- Edit the hostname
vim /etc/hostname
Set the hostnames of the three servers to hadoop101.kang.com, hadoop102.kang.com, and hadoop103.kang.com respectively.
- Edit the hosts file
vim /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.10.100 hadoop100.kang.com hadoop100
192.168.10.101 hadoop101.kang.com hadoop101
192.168.10.102 hadoop102.kang.com hadoop102
192.168.10.103 hadoop103.kang.com hadoop103
Make this change on all three machines.
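A quick way to confirm the hosts file is consistent on a machine is to look each hostname up in it. This small helper (hypothetical, for illustration; the file path is a parameter so it can be tested against any hosts-format file) prints the IP a name maps to:

```shell
#!/bin/sh
# lookup_host: print the IP mapped to a hostname in a hosts-format file.
# Scans every field after the IP so both short and FQDN forms match.
# Usage: lookup_host hadoop101 [/etc/hosts]
lookup_host() {
    name="$1"; file="${2:-/etc/hosts}"
    awk -v n="$name" '$1 !~ /^#/ { for (i = 2; i <= NF; i++) if ($i == n) { print $1; exit } }' "$file"
}
```

For example, lookup_host hadoop102 should print 192.168.10.102 on every node once the file is synced.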
5. Disable the firewall and SELinux
On each of the three servers, stop the firewall and disable it at boot:
systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service
Disable SELinux on all three servers.
To disable it temporarily (no reboot required):
setenforce 0
To keep it disabled after a reboot, edit the following file:
vim /etc/sysconfig/selinux
and set
SELINUX=disabled
After a reboot, verify it is disabled:
[root@hadoop101 ~]# sestatus -v
SELinux status: disabled
6. Configure passwordless SSH
- On hadoop101, run ssh-keygen and press Enter at every prompt:
[root@hadoop101 ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:3CgxWUPFDOYeJe8CVD4Spo32n5DAE/+Ir0IV7yllBwk root@hadoop101.kang.com
The key's randomart image is:
+---[RSA 2048]----+
| E.++O=o |
| ..OoB =o |
| BoO.= . |
| ..=+@.* |
| ..+*oS o |
| . ..o+ o |
| . .. o |
| . . |
| .. |
+----[SHA256]-----+
- Copy the SSH public key to every node (ssh-copy-id installs the public key):
ssh-copy-id hadoop101
ssh-copy-id hadoop102
ssh-copy-id hadoop103
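The three ssh-copy-id calls generalize to a loop. This sketch adds a dry-run mode (an assumption of mine, not part of the original steps) that just prints the commands, so the host list can be confirmed before keys are actually pushed:

```shell
#!/bin/sh
# copy_keys: run ssh-copy-id against each cluster host.
# With DRY_RUN=1, only print the commands that would run.
copy_keys() {
    for host in hadoop101 hadoop102 hadoop103; do
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo "would run: ssh-copy-id $host"
        else
            ssh-copy-id "$host"
        fi
    done
}
```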
- Verify passwordless login works:
[root@hadoop101 ~]# ssh hadoop101
Last login: Wed Nov 10 14:45:40 2021 from 192.168.10.1
[root@hadoop101 ~]# logout
Connection to hadoop101 closed.
[root@hadoop101 ~]# ssh hadoop102
Last login: Wed Nov 10 14:45:44 2021 from 192.168.10.1
[root@hadoop102 ~]# logout
Connection to hadoop102 closed.
[root@hadoop101 ~]# ssh hadoop103
Last login: Wed Nov 10 14:45:49 2021 from 192.168.10.1
7. Install pssh (optional)
pssh is a Python-based tool for running commands on many servers in parallel. It supports parallel file copying, parallel remote command execution, killing processes on remote hosts, and more. This section covers installation and common usage.
- Put the installation packages in /opt/bao:
mkdir /opt/bao
cd /opt/bao
wget http://peak.telecommunity.com/dist/ez_setup.py
wget https://pypi.python.org/packages/60/9a/8035af3a7d3d1617ae2c7c174efa4f154e5bf9c24b36b623413b38be8e4a/pssh-2.3.1.tar.gz
- Extract into /opt/src:
mkdir /opt/src
tar -zxvf pssh-2.3.1.tar.gz -C /opt/src
- Rename the directory and cd into it:
cd /opt/src
mv pssh-2.3.1 pssh
cd pssh
- Build and install:
python setup.py build
python setup.py install
- Verify the installation:
pssh --version
2.3.1
- Create a node list containing the servers to run batch commands on. For later convenience, create it in the root directory; the filename is arbitrary.
vim /node.list
root@192.168.10.101:22
root@192.168.10.102:22
root@192.168.10.103:22
- Example usage:
[root@hadoop101 pssh]# pssh -h /node.list -i -P 'date'
192.168.10.101: Wed Nov 10 15:31:04 CST 2021
[1] 15:31:04 [SUCCESS] root@192.168.10.101:22
Wed Nov 10 15:31:04 CST 2021
192.168.10.102: Wed Nov 10 15:31:04 CST 2021
[2] 15:31:04 [SUCCESS] root@192.168.10.102:22
Wed Nov 10 15:31:04 CST 2021
192.168.10.103: Wed Nov 10 15:31:04 CST 2021
[3] 15:31:04 [SUCCESS] root@192.168.10.103:22
Wed Nov 10 15:31:04 CST 2021
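The node.list entries above (root@IP:22) can be generated from the same IP list used for /etc/hosts rather than typed by hand; a minimal sketch (IPs follow this guide, redirect the output to your list file of choice):

```shell
#!/bin/sh
# gen_node_list: print pssh host entries (root@IP:22) for the cluster nodes.
# Usage: gen_node_list > /node.list
gen_node_list() {
    for ip in 192.168.10.101 192.168.10.102 192.168.10.103; do
        echo "root@${ip}:22"
    done
}
```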
8. Configure NTP time sync
- Remove the preinstalled chrony:
yum -y remove chrony
- Install NTP on all nodes:
pssh -h /node.list -i 'yum -y install ntp'
- Use hadoop101 as the NTP server
Edit /etc/ntp.conf (vim /etc/ntp.conf) and change the following:
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
to:
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 127.127.1.0
fudge 127.127.1.0 stratum 10
- On the other nodes, configure as follows:
vim /etc/ntp.conf
Change
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
to:
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 192.168.10.101
- Start ntpd on every server and enable it at boot:
[root@hadoop101 pssh]# pssh -h /node.list -i 'systemctl restart ntpd'
[1] 15:55:08 [SUCCESS] root@192.168.10.102:22
[2] 15:55:08 [SUCCESS] root@192.168.10.101:22
[3] 15:55:08 [SUCCESS] root@192.168.10.103:22
[root@hadoop101 pssh]# pssh -h /node.list -i 'systemctl enable ntpd.service'
[1] 15:55:17 [SUCCESS] root@192.168.10.101:22
Stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[2] 15:55:17 [SUCCESS] root@192.168.10.102:22
Stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
[3] 15:55:17 [SUCCESS] root@192.168.10.103:22
Stderr: Created symlink from /etc/systemd/system/multi-user.target.wants/ntpd.service to /usr/lib/systemd/system/ntpd.service.
- Verify that NTP sync succeeded on all nodes:
[root@hadoop101 pssh]# pssh -h /node.list -i 'ntpq -p'
[1] 15:55:29 [SUCCESS] root@192.168.10.101:22
remote refid st t when poll reach delay offset jitter
==============================================================================
*LOCAL(0) .LOCL. 10 l 20 64 1 0.000 0.000 0.000
[2] 15:55:29 [SUCCESS] root@192.168.10.102:22
remote refid st t when poll reach delay offset jitter
==============================================================================
hadoop101.kang. .INIT. 16 u 20 64 0 0.000 0.000 0.000
[3] 15:55:29 [SUCCESS] root@192.168.10.103:22
remote refid st t when poll reach delay offset jitter
==============================================================================
hadoop101.kang. LOCAL(0) 11 u 20 64 1 0.219 21.408 0.000
Each server points at hadoop101.kang.com (the hostname is truncated in the output), which means synchronization succeeded.
9. Set swappiness
[root@hadoop101 pssh]# pssh -h /node.list -i 'echo vm.swappiness = 1 >> /etc/sysctl.conf'
[root@hadoop101 pssh]# pssh -h /node.list -i 'sysctl vm.swappiness=1'
[root@hadoop101 pssh]# pssh -h /node.list -i 'sysctl -p'
Check /etc/sysctl.conf (the first command above already appended the line; verify it is present):
vim /etc/sysctl.conf
vm.swappiness = 1
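To confirm the setting landed in the config file, a small parser like this can help; compare its output with the running value in /proc/sys/vm/swappiness. (The file path is a parameter purely so the sketch is testable anywhere.)

```shell
#!/bin/sh
# get_swappiness: read the vm.swappiness value from a sysctl.conf-style file.
# Usage: get_swappiness /etc/sysctl.conf
get_swappiness() {
    awk -F= '/^[[:space:]]*vm\.swappiness[[:space:]]*=/ { gsub(/[[:space:]]/, "", $2); print $2 }' "$1"
}
```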
10. Disable transparent huge pages
Transparent huge pages are known to cause unexpected node restarts and performance problems with RAC, so Oracle strongly recommends disabling them.
[root@hadoop101 pssh]# pssh -h /node.list -i "echo never > /sys/kernel/mm/transparent_hugepage/defrag "
[root@hadoop101 pssh]# pssh -h /node.list -i "echo never > /sys/kernel/mm/transparent_hugepage/enabled"
To keep them disabled at boot, add the following script to /etc/rc.d/rc.local and sync it to all nodes:
vim /etc/rc.d/rc.local
if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
    echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi
Sync to all nodes:
pscp -h /node.list /etc/rc.d/rc.local /etc/rc.d/rc.local
Make /etc/rc.d/rc.local executable on every server so it runs at boot:
pssh -h /node.list -i "chmod +x /etc/rc.d/rc.local"
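After a reboot you can confirm THP stayed off: the kernel reports the active mode as the bracketed word in the status file, e.g. "always madvise [never]". A tiny helper to extract it (the file path is a parameter so the sketch is testable without the real sysfs file):

```shell
#!/bin/sh
# thp_mode: extract the active mode (the bracketed word) from a THP status line,
# e.g. "always madvise [never]" -> "never".
# Usage: thp_mode /sys/kernel/mm/transparent_hugepage/enabled
thp_mode() {
    sed -n 's/.*\[\(.*\)\].*/\1/p' "$1"
}
```

On a correctly configured node, thp_mode on both the enabled and defrag files should print never.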
11. Install the HTTP service
Apache httpd serves the local yum repos for the OS, Ambari, and HDP. Pick one server in the cluster and install httpd on it:
[root@hadoop101 pssh]# yum -y install httpd
Start/stop it and enable it at boot:
[root@hadoop101 pssh]# systemctl start httpd
[root@hadoop101 pssh]# systemctl stop httpd
[root@hadoop101 pssh]# systemctl enable httpd.service
Created symlink from /etc/systemd/system/multi-user.target.wants/httpd.service to /usr/lib/systemd/system/httpd.service.
[root@hadoop101 pssh]# systemctl start httpd
Open http://hostname in a browser; if the Apache test page appears, the service is working.
12. Configure a local OS repo
- Upload the CentOS 7 ISO to /media:
ls /media
CentOS-7-x86_64-DVD-1804.iso
- Mount the ISO:
[root@hadoop101 ~]# cd /media/
[root@hadoop101 media]# mkdir iso
[root@hadoop101 media]# mount -o loop CentOS-7-x86_64-DVD-1804.iso /media/iso/
mount: /dev/loop0 is write-protected, mounting read-only
Verify the mount:
[root@hadoop101 media]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 58G 5.5G 53G 10% /
devtmpfs 477M 0 477M 0% /dev
tmpfs 488M 0 488M 0% /dev/shm
tmpfs 488M 7.7M 480M 2% /run
tmpfs 488M 0 488M 0% /sys/fs/cgroup
/dev/sda1 297M 107M 191M 36% /boot
tmpfs 98M 0 98M 0% /run/user/0
/dev/loop0 4.2G 4.2G 0 100% /media/iso
- Create an iso directory under /var/www/html and copy the contents of /media/iso into it:
[root@hadoop101 media]# mkdir /var/www/html/iso
[root@hadoop101 media]# cp -r /media/iso/* /var/www/html/iso/
[root@hadoop101 media]# ls /var/www/html/iso/
CentOS_BuildTag EFI EULA GPL images isolinux LiveOS Packages repodata RPM-GPG-KEY-CentOS-7 RPM-GPG-KEY-CentOS-Testing-7 TRANS.TBL
- Browse to http://hadoop101/iso/ to see the files in the iso directory.
- Add the config file /etc/yum.repos.d/redhat7.6.repo to set up the OS yum repo:
[root@hadoop101 media]# cat /etc/yum.repos.d/redhat7.6.repo
[redhat_os_repo]
name=redhat7.6_repo
baseurl=http://hadoop101/iso/
enabled=true
gpgcheck=false
Sync it to the other nodes:
pscp -h /node.list /etc/yum.repos.d/redhat7.6.repo /etc/yum.repos.d/redhat7.6.repo
- Check whether the yum repo is configured correctly; if so, redhat7.6.repo shows up in the repo list:
pssh -h /node.list -i 'yum clean all'
pssh -h /node.list -i 'yum repolist'
I have run this install many times without trouble, but recently it kept failing at this point.
Running yum repolist manually on hadoop101:
[root@hadoop101 media]# yum repolist
Loaded plugins: fastestmirror
Existing lock /var/run/yum.pid: another copy is running as pid 11391.
Another app is currently holding the yum lock; waiting for it to exit...
The other application is: yum
Memory : 24 M RSS (887 MB VSZ)
Started: Wed Nov 10 18:33:43 2021 - 01:30 ago
State : Sleeping, pid: 11391
^C
yum is locked. Kill the yum processes on all nodes and rerun pssh -h /node.list -i 'yum repolist'; then it works.
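When yum reports an existing lock, the pid file it names tells you which process holds it; before killing anything, it is worth checking whether the holder is still alive or the lock is merely stale. A sketch (the pid-file path is a parameter; kill -0 only probes for existence):

```shell
#!/bin/sh
# yum_lock_status: report whether a yum-style pid file points at a live process.
# Usage: yum_lock_status /var/run/yum.pid
yum_lock_status() {
    pidfile="$1"
    [ -f "$pidfile" ] || { echo "no lock"; return 0; }
    pid=$(cat "$pidfile")
    if kill -0 "$pid" 2>/dev/null; then
        echo "locked by pid $pid"
    else
        echo "stale lock (pid $pid not running)"
    fi
}
```

A stale lock can simply be removed; a live holder is the process to inspect or kill.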
13. Install Java
- Extract the uploaded JDK archive and move it into place:
tar -zxvf jdk-8u212-linux-x64.tar.gz
mkdir /usr/local/java
mv jdk1.8.0_212/* /usr/local/java/
- Copy the JDK to every node in the cluster:
pscp -r -h /node.list /usr/local/java /usr/local/
- Configure the Java environment variables (all nodes):
vim /root/.bashrc
export JAVA_HOME=/usr/local/java
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JRE_HOME=$JAVA_HOME/jre
- Apply the config file and verify (all nodes):
source /root/.bashrc
[root@hadoop101 ~]# java -version
java version "1.8.0_212"
Java(TM) SE Runtime Environment (build 1.8.0_212-b10)
Java HotSpot(TM) 64-Bit Server VM (build 25.212-b10, mixed mode)
Install Maven
- Extract the Maven archive and move it into place:
tar -zxvf apache-maven-3.8.3-bin.tar.gz
mkdir -p /opt/src/maven
mv apache-maven-3.8.3/* /opt/src/maven
- Configure the environment variable and apply it:
vim /root/.bashrc
# set maven home
export PATH=$PATH:/opt/src/maven/bin
source /root/.bashrc
- Verify:
[root@hadoop101 ~]# mvn -version
Apache Maven 3.8.3 (ff8e977a158738155dc465c6a97ffaf31982d739)
Maven home: /opt/src/maven
Java version: 1.8.0_212, vendor: Oracle Corporation, runtime: /usr/local/java/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "3.10.0-862.el7.x86_64", arch: "amd64", family: "unix"
Install Ambari & HDP
1. Configure local Ambari, HDP, and libtirpc-devel repos
Extract the archives:
tar -zxvf ambari-2.7.5.0-centos7.tar.gz -C /var/www/html/
tar -zxvf HDP-3.1.5.0-centos7-rpm.tar.gz -C /var/www/html/
tar -zxvf HDP-GPL-3.1.5.0-centos7-gpl.tar.gz -C /var/www/html/
tar -zxvf HDP-UTILS-1.1.0.22-centos7.tar.gz -C /var/www/html/
Check:
[root@hadoop101 html]# ll /var/www/html/
total 0
drwxr-xr-x 3 root root 21 Nov 10 21:58 ambari
drwxr-xr-x 3 1001 users 21 Dec 18 2019 HDP
drwxr-xr-x 3 1001 users 21 Dec 18 2019 HDP-GPL
drwxr-xr-x 3 1001 users 21 Aug 13 2018 HDP-UTILS
drwxr-xr-x 8 root root 220 Nov 10 18:26 iso
Set ownership and permissions:
[root@hadoop101 html]# cd /var/www/html/
[root@hadoop101 html]# chown -R root:root HDP*
[root@hadoop101 html]# chmod -R 755 HDP*
[root@hadoop101 html]# ll
total 0
drwxr-xr-x 3 root root 21 Nov 10 21:58 ambari
drwxr-xr-x 3 root root 21 Dec 18 2019 HDP
drwxr-xr-x 3 root root 21 Dec 18 2019 HDP-GPL
drwxr-xr-x 3 root root 21 Aug 13 2018 HDP-UTILS
drwxr-xr-x 8 root root 220 Nov 10 18:26 iso
Create the local libtirpc-devel repo:
mkdir /var/www/html/libtirpc
cd /var/www/html/libtirpc
wget http://mirror.centos.org/centos/7/os/x86_64/Packages/libtirpc-0.2.4-0.16.el7.x86_64.rpm
wget http://mirror.centos.org/centos/7/os/x86_64/Packages/libtirpc-devel-0.2.4-0.16.el7.x86_64.rpm
createrepo .
My VMs are minimal installs, so createrepo has to be installed first:
[root@hadoop101 libtrpc]# createrepo .
-bash: createrepo: command not found
Install createrepo:
yum -y install createrepo
Set up the local repos:
- Configure ambari.repo:
vim /etc/yum.repos.d/ambari.repo
[Ambari-2.7.5.0]
name=Ambari-2.7.5.0
baseurl=http://hadoop101/ambari/centos7/2.7.5.0-72/
gpgcheck=0
enabled=1
priority=1
- Configure HDP, HDP-UTILS, and HDP-GPL:
vim /etc/yum.repos.d/HDP.repo
[HDP-3.1.5.0]
name=HDP Version - HDP-3.1.5.0
baseurl=http://hadoop101/HDP/centos7/3.1.5.0-152/
gpgcheck=0
enabled=1
priority=1
[HDP-UTILS-1.1.0.22]
name=HDP-UTILS Version - HDP-UTILS-1.1.0.22
baseurl=http://hadoop101/HDP-UTILS/centos7/1.1.0.22/
gpgcheck=0
enabled=1
priority=1
[HDP-GPL-3.1.5.0]
name=HDP-GPL Version - HDP-GPL-3.1.5.0
baseurl=http://hadoop101/HDP-GPL/centos7/3.1.5.0-152
gpgcheck=0
enabled=1
priority=1
- Configure libtirpc.repo:
vim /etc/yum.repos.d/libtirpc.repo
[libtirpc_repo]
name=libtirpc-0.2.4-0.16
baseurl=http://hadoop101/libtirpc/
gpgcheck=0
enabled=1
priority=1
- Copy the repo files to the other nodes:
pscp -h /node.list /etc/yum.repos.d/* /etc/yum.repos.d/
- Check the repos:
pssh -h /node.list -i 'yum clean all'
pssh -h /node.list -i 'yum repolist'
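The .repo files above differ only in their id, name, and baseurl; a small helper (hypothetical, for illustration) can emit them uniformly instead of editing each by hand:

```shell
#!/bin/sh
# make_repo: print a yum .repo stanza in the layout used throughout this guide
# (gpgcheck off, enabled, priority 1).
# Usage: make_repo <repo-id> <name> <baseurl> > /etc/yum.repos.d/<repo-id>.repo
make_repo() {
    cat <<EOF
[$1]
name=$2
baseurl=$3
gpgcheck=0
enabled=1
priority=1
EOF
}
```

For example, make_repo Ambari-2.7.5.0 Ambari-2.7.5.0 http://hadoop101/ambari/centos7/2.7.5.0-72/ reproduces the ambari.repo contents shown above.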
2. Install MariaDB
List any existing mysql and mariadb packages:
rpm -qa |grep -i mysql
rpm -qa |grep -i mariadb
Remove old versions:
rpm -e --nodeps <old-package>
Install the MariaDB server:
yum install mariadb-server
Start it and enable it at boot:
systemctl enable mariadb
systemctl start mariadb
Initialize it:
/usr/bin/mysql_secure_installation
[...]
Enter current password for root (enter for none):
OK, successfully used password, moving on...
[...]
Set root password? [Y/n] Y
New password:123456
Re-enter new password:123456
[...]
Remove anonymous users? [Y/n] Y
[...]
Disallow root login remotely? [Y/n] N
[...]
Remove test database and access to it [Y/n] Y
[...]
Reload privilege tables now? [Y/n] Y
[...]
All done! If you've completed all of the above steps, your MariaDB installation should now be secure.
Thanks for using MariaDB!
Install the MySQL JDBC driver for MariaDB:
tar -zxvf mysql-connector-java-5.1.40.tar.gz
cd mysql-connector-java-5.1.40
mkdir /usr/share/java/
mv mysql-connector-java-5.1.40-bin.jar /usr/share/java/mysql-connector-java.jar
Create the required databases (the CREATE statements appear below, after ambari-server setup).
If you need Ranger, edit the following file and add this line (under the [mysqld] section):
vim /etc/my.cnf
log_bin_trust_function_creators = 1
Restart the database and log in:
systemctl restart mariadb
3. Install and configure ambari-server
Install ambari-server:
yum -y install ambari-server
Copy the MySQL JDBC driver to /var/lib/ambari-server/resources/:
cp /usr/share/java/mysql-connector-java.jar /var/lib/ambari-server/resources/
Edit /etc/ambari-server/conf/ambari.properties and add the following line:
vim /etc/ambari-server/conf/ambari.properties
server.jdbc.driver.path=/usr/share/java/mysql-connector-java.jar
Run:
ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar
Initialize ambari-server:
ambari-server setup
(1) When prompted whether to customize settings, enter y:
Customize user account for ambari-server daemon [y/n] (n)? y
(2) The ambari-server account.
Enter user account for ambari-server daemon (root):
Pressing Enter accepts the default root user. If you enter an already-created user instead, it shows:
Enter user account for ambari-server daemon (root):ambari
Adjusting ambari-server permissions and ownership...
(3) Firewall check.
Adjusting ambari-server permissions and ownership...
Checking firewall...
WARNING: iptables is running. Confirm the necessary Ambari ports are accessible. Refer to the Ambari documentation for more details on ports.
OK to continue [y/n] (y)?
Press Enter.
(4) JDK selection. Enter 2:
Checking JDK...
Do you want to change Oracle JDK [y/n] (n)? y
[1] Oracle JDK 1.8 + Java Cryptography Extension (JCE) Policy Files 8
[2] Custom JDK
==============================================================================
Enter choice (1): 2
Having chosen the custom JDK option above, you must provide JAVA_HOME. Enter /usr/local/java:
WARNING: JDK must be installed on all hosts and JAVA_HOME must be valid on all hosts.
WARNING: JCE Policy files are required for configuring Kerberos security. If you plan to use Kerberos,please make sure JCE Unlimited Strength Jurisdiction Policy Files are valid on all hosts.
Path to JAVA_HOME: /usr/local/java
Validating JDK on Ambari Server...done.
Completing setup...
(5) Database configuration. Enter y:
Configuring database...
Enter advanced database configuration [y/n] (n)? y
(6) Database type. Enter 3:
Configuring database...
==============================================================================
Choose one of the following options:
[1] - PostgreSQL (Embedded)
[2] - Oracle
[3] - MySQL/ MariaDB
[4] - PostgreSQL
[5] - Microsoft SQL Server (Tech Preview)
[6] - SQL Anywhere
==============================================================================
Enter choice (3): 3
(7) Enter the database connection details. If a value matches the one in parentheses, just press Enter; otherwise type the new value.
Hostname (localhost):hadoop101
Port (3306): 3306
Database name (ambari): ambari
Username (ambari): ambari
Enter Database Password (bigdata):ambari123
Re-Enter password: ambari123
(8) Import the Ambari database schema.
WARNING: Before starting Ambari Server, you must run the following DDL against the database to create the schema: /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql (this SQL script is imported into the database below)
Proceed with configuring remote database connection properties [y/n] (y)? y
Log in to MariaDB and create the databases Ambari needs.
The accounts created here must match the credentials given during ambari-server setup!
mysql -u root -p
CREATE DATABASE ambari;
use ambari;
CREATE USER 'ambari'@'%' IDENTIFIED BY 'ambari123';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'%';
CREATE USER 'ambari'@'localhost' IDENTIFIED BY 'ambari123';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'localhost';
CREATE USER 'ambari'@'hadoop101' IDENTIFIED BY 'ambari123';
GRANT ALL PRIVILEGES ON *.* TO 'ambari'@'hadoop101';
source /var/lib/ambari-server/resources/Ambari-DDL-MySQL-CREATE.sql
show tables;
use mysql;
select host,user from user where user='ambari';
CREATE DATABASE hive;
use hive;
CREATE USER 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'%';
CREATE USER 'hive'@'localhost' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'localhost';
CREATE USER 'hive'@'hadoop101' IDENTIFIED BY 'hive';
GRANT ALL PRIVILEGES ON *.* TO 'hive'@'hadoop101';
CREATE DATABASE oozie;
use oozie;
CREATE USER 'oozie'@'%' IDENTIFIED BY 'oozie';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'%';
CREATE USER 'oozie'@'localhost' IDENTIFIED BY 'oozie';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'localhost';
CREATE USER 'oozie'@'hadoop101' IDENTIFIED BY 'oozie';
GRANT ALL PRIVILEGES ON *.* TO 'oozie'@'hadoop101';
FLUSH PRIVILEGES;
4. Start the Ambari service
[root@hadoop101 ~]# ambari-server start
Using python /usr/bin/python
Starting ambari-server
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start............................................
Server started listening on 8080
DB configs consistency check: no errors and warnings were found.
Ambari Server 'start' completed successfully.
5. Install ambari-agent on all nodes
pssh -h /node.list -i 'yum -y install ambari-agent'
pssh -h /node.list -i 'systemctl start ambari-agent'
6. Install libtirpc-devel on all nodes
pssh -h /node.list -i 'yum -y install libtirpc-devel'
Deploy the cluster
- Log in at the web UI: http://hadoop101:8080
First, add the hosts to C:\Windows\System32\drivers\etc\hosts on your workstation:
192.168.10.101 hadoop101 hadoop101.kang.com
192.168.10.102 hadoop102 hadoop102.kang.com
192.168.10.103 hadoop103 hadoop103.kang.com
Log in with the default administrator account: username admin, password admin.
- Choose the version and configure the yum repos
1) Click Launch Install Wizard
2) Set the cluster name
3) Choose the version and point it at the local repos:
Select HDP-3.1 (Default Version Definition);
select Use Local Repository;
select redhat7:
HDP-3.1:http://hadoop101/HDP/centos7/3.1.5.0-152/
HDP-3.1-GPL: http://hadoop101/HDP-GPL/centos7/3.1.5.0-152/
HDP-UTILS-1.1.0.22: http://hadoop101/HDP-UTILS/centos7/1.1.0.22/
- Configure nodes and the SSH key
Download /root/.ssh/id_rsa from the master node and upload it (or simply cat /root/.ssh/id_rsa and paste the contents). Click Next to reach the host-confirmation page.
- Key verification succeeded:
- Select the services to install (with limited memory, I selected only the essential ones)
- Accept the defaults for assigning masters (details omitted)
- Assign slaves (select according to your own needs)
- Set the passwords for the relevant services (leave other settings at their defaults):
Grafana Admin: admin
Hive Database: hive
Activity Explorer's Admin: admin
- Connect the databases
- Edit the configuration; the defaults are fine
- Wait for the install and tests to finish
- Installation succeeded (the services should all be green; mine were not only because the VMs were left at 3 GB of memory, which I forgot to raise; not a big deal)
- I shut down the services I did not need, since I only wanted to test HBase with Kerberos.
Testing
This cluster exists mainly to test HBase with Kerberos, so before installing Kerberos, try the hbase shell:
[root@hadoop101 ~]# hbase shell
# list existing tables
hbase(main):001:0> list
TABLE
0 row(s)
Took 0.4744 seconds
=> []
# create a table
hbase(main):002:0> create 'Student', {NAME => 'Stulnfo', VERSIONS => 3}, {NAME =>'Grades', BLOCKCACHE => true}
Created table Student
Took 1.4011 seconds
=> Hbase::Table - Student
# verify the table was created
hbase(main):003:0> list
TABLE
Student
1 row(s)
Took 0.0149 seconds
=> ["Student"]
hbase(main):004:0>
OK, all is well!