Table of Contents

  • I. VMware
  • II. Creating the Virtual Machines
  • 1. Install CentOS 7
  • 2. Configure a static IP and change the hostname
  • 3. Clone three more machines from c0
  • 4. Set up hosts on all four machines (c0 shown below)
  • 5. Configure passwordless SSH login
  • 6. Disable the firewall
  • 7. Install NTP
  • III. Installing the Software
  • 1. Create the installation directories
  • 2. Download the software used in this article
  • 3. Set environment variables
  • 4. Install Oracle JDK 1.8.0


I. VMware

Downloading and installing VMware:
see any up-to-date, step-by-step VMware download and installation guide.

VMware-workstation-full-15.1.0-13591040.exe and a license key on Baidu Cloud:
Link: https://pan.baidu.com/s/10IiV0chbvdr3EcfjzHt1BA  Access code: z2wk

II. Creating the Virtual Machines

1. Install CentOS 7

① Open VMware and click "Create a New Virtual Machine".
② Choose the Typical configuration and click Next.
③ Choose "I will install the operating system later" and click Next.
④ Guest operating system: Linux; version: CentOS 7 64-bit.
⑤ Name the virtual machine c0 and choose a location for it.
⑥ Keep the default 20 GB disk size.
⑦ Leave the hardware configuration unchanged and finish creating the virtual machine.
⑧ Select the newly created virtual machine and click "Edit virtual machine settings".
⑨ Remove the USB controller, sound card, and printer (this matters for cloning: if they stay in, the clones may hit conflicts and fail to boot).
⑩ Select CD/DVD, choose "Use ISO image file", and browse to the ISO's location.

Then power on the virtual machine and install the OS:
a) Choose Chinese as the language and set the time.
b) Under Software Selection, pick the GNOME Desktop.
c) Set the root password (123456 is suggested for this lab).
d) Reboot, accept the license, and set the time zone to Asia/Shanghai.

2. Configure a static IP and change the hostname

① Set the virtual machine's network connection to NAT mode.

② In VMware, open "Edit" → "Virtual Network Editor"; I am using NAT mode here.



The one thing to note down here is your gateway address: 192.168.157.2. The next step is to power on the virtual machine.

③ Edit the network configuration.
Commands:
Switch to root first, so you don't have to retype the password over and over:

su - root
cd /etc/sysconfig/network-scripts/
ls

Check which ifcfg-ensXXX file your machine has (the name can differ from machine to machine), then open it, for example:

sudo gedit ifcfg-ens32

Originally the file uses BOOTPROTO=dhcp. After editing it should look like this:

TYPE=Ethernet
# change BOOTPROTO from the default dhcp to static
BOOTPROTO=static
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6INIT=yes
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
IPV6_ADDR_GEN_MODE=stable-privacy
NAME=ens33
UUID=fa3a4df3-e833-410a-9c95-94e3ee645d12
DEVICE=ens33
# change ONBOOT to yes so the interface comes up at boot
ONBOOT=yes

# everything below is newly added
# pick any IP on the same subnet as the gateway, e.g. 192.168.157.xxx
IPADDR=192.168.157.11
PREFIX=24
# subnet mask
NETMASK=255.255.255.0
# gateway
GATEWAY=192.168.157.2
DNS1=114.114.114.114
DNS2=8.8.8.8
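Since the same ifcfg edits are repeated on every clone with only IPADDR changing, the files can be generated in a loop. A hedged sketch, writing into a scratch directory rather than /etc/sysconfig/network-scripts, using the .11–.14 addresses chosen above (the per-node file names are hypothetical):

```shell
#!/bin/bash
# Sketch: generate one ifcfg-ens33 per node into a scratch directory.
# Point OUTDIR at /etc/sysconfig/network-scripts on the real machines.
OUTDIR=$(mktemp -d)
GATEWAY=192.168.157.2

i=0
for ip in 192.168.157.11 192.168.157.12 192.168.157.13 192.168.157.14; do
  cat > "$OUTDIR/ifcfg-ens33.c$i" <<EOF
TYPE=Ethernet
BOOTPROTO=static
NAME=ens33
DEVICE=ens33
ONBOOT=yes
IPADDR=$ip
PREFIX=24
GATEWAY=$GATEWAY
DNS1=114.114.114.114
DNS2=8.8.8.8
EOF
  i=$((i + 1))
done

ls "$OUTDIR"
```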



After making the changes above, restart the network service. Once it comes back up, the static IP configuration is complete.

systemctl restart network

Check the network configuration:

ifconfig

④ Change the hostname.
Check the current name with hostname, then change it to c0 (pick your own hostnames here):

hostnamectl set-hostname c0

Then reopen the terminal and the new name takes effect.

3. Clone three more machines from c0

On each of the three clones, repeat the static IP and hostname steps above.


The four machines' IPs map to hostnames as follows:

192.168.157.11	c0
192.168.157.12	c1
192.168.157.13	c2
192.168.157.14	c3

(Note: these are the IPs and names I chose. When doing your own setup, make sure each of your four machines' IPs and hostnames correspond; they will be needed later.)

4. Set up hosts on all four machines (c0 shown below)

sudo gedit /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.157.11	c0
192.168.157.12	c1
192.168.157.13	c2
192.168.157.14	c3

(These entries are the IPs and hostnames you chose earlier, added to the hosts file. Remember to do this on all four machines.)
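The hosts additions can also be scripted so that re-running them never creates duplicate lines. A minimal sketch (add_host is a hypothetical helper; HOSTS points at a scratch copy, not the real /etc/hosts):

```shell
#!/bin/bash
# Sketch: idempotently append the cluster entries to a hosts file.
# HOSTS is a scratch copy here; on the real machines it would be /etc/hosts.
HOSTS=$(mktemp)
printf '127.0.0.1   localhost\n' > "$HOSTS"

add_host() {
  # $1 = IP, $2 = hostname; append only if the name is not already present
  grep -qw "$2" "$HOSTS" || printf '%s\t%s\n' "$1" "$2" >> "$HOSTS"
}

for pair in "192.168.157.11 c0" "192.168.157.12 c1" \
            "192.168.157.13 c2" "192.168.157.14 c3"; do
  add_host $pair            # unquoted on purpose: splits into IP + name
done
add_host 192.168.157.11 c0  # running it again adds nothing

cat "$HOSTS"
```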

5. Configure passwordless SSH login

① Generate a key pair on each machine individually; run this on every machine, and finish step ① on all four before moving on to step ②:

ssh-keygen

Press Enter through all of the prompts.

② Copy the key generated by ssh-keygen to the other three machines. c0 is shown below.
[Run this on every machine.]

  • Commands:
rm -rf ~/.ssh/known_hosts
clear
ssh-copy-id c0
ssh-copy-id c1
ssh-copy-id c2
ssh-copy-id c3
  • Full example on c0:
[root@c0 ~]# rm -rf ~/.ssh/known_hosts
[root@c0 ~]# clear
[root@c0 ~]# ssh-copy-id c0
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c0 (10.0.0.100)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c0's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c0'"
and check to make sure that only the key(s) you wanted were added.

[root@c0 ~]# ssh-copy-id c1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c1 (10.0.0.101)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c1'"
and check to make sure that only the key(s) you wanted were added.

[root@c0 ~]# ssh-copy-id c2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c2 (10.0.0.102)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c2's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c2'"
and check to make sure that only the key(s) you wanted were added.

[root@c0 ~]# ssh-copy-id c3
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'c3 (10.0.0.103)' can't be established.
ECDSA key fingerprint is SHA256:O8y8TBSZfBYiHPvJPPuAd058zkfsOfnBjvnf/3cvOCQ.
ECDSA key fingerprint is MD5:da:3c:29:65:f2:86:e9:61:cb:39:57:5b:5e:e2:77:7c.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@c3's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'c3'"
and check to make sure that only the key(s) you wanted were added.

To verify that the keys are set up correctly, run the following on any machine:

for N in $(seq 0 3); do ssh c$N hostname; done;


6. Disable the firewall

Run the following on every machine:

systemctl stop firewalld && systemctl disable firewalld

Example:

# c0
[root@c0 ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

7. Install NTP

Install the NTP time synchronization tool and start it:

for N in $(seq 0 3); do ssh c$N yum install ntp -y; done;

On every machine, enable NTP at boot:

systemctl enable ntpd && systemctl start ntpd

Check the time on each machine in turn:

for N in $(seq 0 3); do ssh c$N date; done;
[root@c0 ~]# for N in $(seq 0 3); do ssh c$N date; done;
Sat Feb  9 18:11:48 CST 2019
Sat Feb  9 18:11:48 CST 2019
Sat Feb  9 18:11:49 CST 2019
Sat Feb  9 18:11:49 CST 2019
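Rather than eyeballing the four date lines, the drift can be quantified by collecting epoch seconds from each node and reporting the spread. A minimal local sketch (max_drift is a hypothetical helper; the sample timestamps stand in for real ssh output):

```shell
#!/bin/bash
# Sketch: report the spread, in seconds, across a list of epoch timestamps.
# On the cluster you would collect real values with:
#   for N in $(seq 0 3); do ssh c$N date +%s; done
max_drift() {
  local min="" max="" t
  for t in "$@"; do
    [ -z "$min" ] || [ "$t" -lt "$min" ] && min=$t
    [ -z "$max" ] || [ "$t" -gt "$max" ] && max=$t
  done
  echo $((max - min))
}

# sample timestamps standing in for the four nodes' output
max_drift 1549707108 1549707108 1549707109 1549707109   # prints 1
```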

III. Installing the Software

1. Create the installation directories

Create the directory structure we will use:

  • all programs go under /home/work/_app,
  • all downloaded sources under /home/work/_src,
  • all data under /home/work/_data,
  • all logs under /home/work/_logs.

[Type these on c0 only:]

for N in $(seq 0 3); do ssh c$N mkdir /home/work/{_src,_app,_logs,_data} -p; done;
for N in $(seq 0 3); do ssh c$N mkdir /home/work/_data/{hadoop-3.1.2,zookeeper-3.4.14} -p; done;
for N in $(seq 0 3); do ssh c$N mkdir /home/work/_logs/{hadoop-3.1.2,zookeeper-3.4.14} -p; done;

for N in $(seq 0 1); do ssh c$N mkdir /home/work/_data/hadoop-3.1.2/{journalnode,ha-name-dir-shared} -p; done;

for N in $(seq 0 3); do ssh c$N mkdir /home/work/_logs/hbase-1.4.10 -p; done;
for N in $(seq 0 3); do ssh c$N mkdir /home/work/_data/hbase-1.4.10 -p; done;

for N in $(seq 0 3); do ssh c$N mkdir /home/work/_data/hive-2.3.4/{scratchdir,tmpdir} -p; done;
for N in $(seq 0 3); do ssh c$N mkdir /home/work/_logs/hive-2.3.4 -p; done;
# Create the directories needed by Hadoop 3.1.2 and Zookeeper 3.4.14
[root@c0 ~]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/{_src,_app,_logs,_data} -p; done;
[root@c0 ~]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/_data/{hadoop-3.1.2,zookeeper-3.4.14} -p; done;
[root@c0 ~]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/_logs/{hadoop-3.1.2,zookeeper-3.4.14} -p; done;

## Create the HA shared directories on the Hadoop 3.1.2 NameNodes
[root@c0 ~]# for N in $(seq 0 1); do ssh c$N mkdir /home/work/_data/hadoop-3.1.2/{journalnode,ha-name-dir-shared} -p; done;

# Create the directories needed by HBase 1.4.10
[root@c0 ~]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/_logs/hbase-1.4.10 -p; done;
[root@c0 ~]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/_data/hbase-1.4.10 -p; done;

# Create the directories needed by Hive 2.3.4
[root@c0 _src]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/_data/hive-2.3.4/{scratchdir,tmpdir} -p; done;
[root@c0 _src]# for N in $(seq 0 3); do ssh c$N mkdir /home/work/_logs/hive-2.3.4 -p; done;
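All of the mkdir loops above rely on shell brace expansion to fan one short command out into many directories. As a sanity check, the same layout can be built under a scratch root locally (a sketch; $ROOT stands in for /home/work):

```shell
#!/bin/bash
# Sketch: build the same layout under a scratch root instead of /home/work,
# to preview the tree each node ends up with.
ROOT=$(mktemp -d)

mkdir -p "$ROOT"/{_src,_app,_logs,_data}
mkdir -p "$ROOT"/_data/{hadoop-3.1.2,zookeeper-3.4.14}
mkdir -p "$ROOT"/_logs/{hadoop-3.1.2,zookeeper-3.4.14}
mkdir -p "$ROOT"/_data/hadoop-3.1.2/{journalnode,ha-name-dir-shared}
mkdir -p "$ROOT"/_data/hbase-1.4.10 "$ROOT"/_logs/hbase-1.4.10
mkdir -p "$ROOT"/_data/hive-2.3.4/{scratchdir,tmpdir} "$ROOT"/_logs/hive-2.3.4

find "$ROOT" -type d | sort
```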

2. Download the software used in this article

For this lab I have already put the downloaded packages into the _src folder of the files handed out to you; just drag them from Windows into the CentOS virtual machine and paste them.
All of the software used in this article comes pre-built, so nothing needs compiling: extract each archive, mv it to the appropriate directory, and it can be started directly.

Link: https://pan.baidu.com/s/1FSX8KcUY_jLNw9ewMMD-6Q  Access code: ub2l

[Operate on c0 only.]
The following steps take place in the /home/work/_src directory.

Installation steps

Suppose the files were dropped into /home/v1 (for some reason I could not drag them straight into /home/work/_src):

cd /home/v1
sudo mv *.gz *.jar *.rpm /home/work/_src

Hadoop 3.1.2:

cd /home/work/_src
tar -xzvf hadoop-3.1.2.tar.gz
mv hadoop-3.1.2 /home/work/_app/

HBase 1.4.10:

cd /home/work/_src
tar -xzvf hbase-1.4.10-bin.tar.gz
mv hbase-1.4.10 /home/work/_app/

Hive 2.3.4:

cd /home/work/_src
tar -xzvf apache-hive-2.3.4-bin.tar.gz
mv apache-hive-2.3.4-bin /home/work/_app/hive-2.3.4

Zookeeper 3.4.14:

cd /home/work/_src
tar -xzvf zookeeper-3.4.14.tar.gz
mv zookeeper-3.4.14 /home/work/_app/
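The four archives above all follow one pattern: extract in _src, then move into _app under a clean name. A hedged sketch of that pattern, using scratch directories and a stand-in tarball so it runs anywhere (unpack is a hypothetical helper, not part of the handed-out files):

```shell
#!/bin/bash
# Sketch: one helper for the repeated "extract, then move into _app" pattern.
# SRC/APP are scratch dirs here; on the cluster they would be
# /home/work/_src and /home/work/_app.
SRC=$(mktemp -d); APP=$(mktemp -d)

unpack() {
  # $1 = tarball in $SRC, $2 = directory it extracts to, $3 = final name
  tar -xzf "$SRC/$1" -C "$SRC"
  mv "$SRC/$2" "$APP/$3"
}

# build a stand-in tarball so the sketch is runnable anywhere
mkdir -p "$SRC/apache-hive-2.3.4-bin"
touch "$SRC/apache-hive-2.3.4-bin/LICENSE"
tar -czf "$SRC/apache-hive-2.3.4-bin.tar.gz" -C "$SRC" apache-hive-2.3.4-bin
rm -r "$SRC/apache-hive-2.3.4-bin"

unpack apache-hive-2.3.4-bin.tar.gz apache-hive-2.3.4-bin hive-2.3.4
ls "$APP"
```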

3. Set environment variables

[On every machine,] set the environment variables by running:

# Hadoop 3.1.2
echo "export HADOOP_HOME=/home/work/_app/hadoop-3.1.2" >> /etc/bashrc
echo "export HADOOP_LOG_DIR=/home/work/_logs/hadoop-3.1.2" >> /etc/bashrc
echo "export HADOOP_MAPRED_HOME=\$HADOOP_HOME" >> /etc/bashrc
echo "export HADOOP_COMMON_HOME=\$HADOOP_HOME" >> /etc/bashrc
echo "export HADOOP_HDFS_HOME=\$HADOOP_HOME" >> /etc/bashrc
echo "export HADOOP_CONF_DIR=\$HADOOP_HOME/etc/hadoop" >> /etc/bashrc

# Zookeeper 3.4.14
echo "export ZOOKEEPER_HOME=/home/work/_app/zookeeper-3.4.14" >> /etc/bashrc

# JAVA 
echo "export JAVA_HOME=/opt/jdk1.8.0_221" >> /etc/bashrc
echo "export JRE_HOME=/opt/jdk1.8.0_221/jre" >> /etc/bashrc

# HBase 1.4.10
echo "export HBASE_HOME=/home/work/_app/hbase-1.4.10" >> /etc/bashrc

# Hive 2.3.4
echo "export HIVE_HOME=/home/work/_app/hive-2.3.4" >> /etc/bashrc
echo "export HIVE_CONF_DIR=\$HIVE_HOME/conf" >> /etc/bashrc

# Path
echo "export PATH=\$PATH:\$JAVA_HOME/bin:\$JRE_HOME/bin:\$HADOOP_HOME/bin:\$HADOOP_HOME/sbin:\$ZOOKEEPER_HOME/bin:\$HBASE_HOME/bin:\$HIVE_HOME/bin:\$SCALA_HOME/bin:\$SPARK_HOME/bin:\$SPARK_HOME/sbin" >> /etc/bashrc
source /etc/bashrc
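What those echo lines accomplish can be checked in isolation: the exports land in a profile file, and sourcing it makes the derived variables expand. A sketch that writes to a scratch file instead of /etc/bashrc (only a subset of the variables is shown):

```shell
#!/bin/bash
# Sketch: write a subset of the exports to a scratch profile instead of
# /etc/bashrc, source it, and confirm the derived variables expand.
PROFILE=$(mktemp)
cat >> "$PROFILE" <<'EOF'
export HADOOP_HOME=/home/work/_app/hadoop-3.1.2
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export JAVA_HOME=/opt/jdk1.8.0_221
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
EOF
source "$PROFILE"
echo "$HADOOP_CONF_DIR"   # prints /home/work/_app/hadoop-3.1.2/etc/hadoop
```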

4. Install Oracle JDK 1.8.0

[On c0] copy the JDK archive from c0 to the other machines:

for N in $(seq 1 3); do scp -r /home/work/_src/jdk-8u221-linux-x64.tar.gz c$N:/home/work/_src/; done;

[Install on every machine:]

cd /home/work/_src
tar -xzvf jdk-8u221-linux-x64.tar.gz
mv jdk1.8.0_221 /opt/

Next, let's configure Java on the system using the alternatives command.

alternatives --install /usr/bin/java java /opt/jdk1.8.0_221/bin/java 2
alternatives --config java

The JDK under /opt is listed third, so type 3 and press Enter (check which number it is on your machine; it varies from system to system).

[root@c0 _src]# alternatives --config java

There are 3 programs which provide 'java'.

  Selection    Command
-----------------------------------------------
 + 1           java-1.7.0-openjdk.x86_64 (/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.221-2.6.18.1.el7.x86_64/jre/bin/java)
*  2           java-1.8.0-openjdk.x86_64 (/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.222.b03-1.el7.x86_64/jre/bin/java)
   3           /opt/jdk1.8.0_221/bin/java

Enter to keep the current selection[+], or type selection number: 3

Java 8 is now installed on the system. It is also recommended to set up the javac and jar command paths via alternatives:

alternatives --install /usr/bin/jar jar /opt/jdk1.8.0_221/bin/jar 2
alternatives --install /usr/bin/javac javac /opt/jdk1.8.0_221/bin/javac 2
alternatives --set jar /opt/jdk1.8.0_221/bin/jar
alternatives --set javac /opt/jdk1.8.0_221/bin/javac

The java and javac binaries are now available via the PATH environment variable and can be used from anywhere on the system.
Let's check the installed Java Runtime Environment (JRE) version by running:

java -version
[root@c0 _src]# java -version
java version "1.8.0_221"
Java(TM) SE Runtime Environment (build 1.8.0_221-b09)
Java HotSpot(TM) 64-Bit Server VM (build 25.221-b09, mixed mode)
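When checking all four nodes by script, the version string can be parsed rather than read by eye. A small sketch (the sample string stands in for the real output of `java -version 2>&1 | head -1`):

```shell
#!/bin/bash
# Sketch: pull the quoted version number out of a `java -version` banner.
sample='java version "1.8.0_221"'
ver=$(echo "$sample" | sed -n 's/.*"\(.*\)".*/\1/p')
echo "$ver"   # prints 1.8.0_221
```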