Table of Contents

I. Environment Preparation
1. Spark official site
2. Download location
3. Official documentation
4. Passwordless SSH setup
5. Installing Scala 2.12
6. Building the Hadoop cluster on Docker

II. Extract and Install
1. Download Spark
2. Extract the archive
3. Create a symlink

III. Edit the Configuration Files
1. slaves configuration
2. spark-env.sh configuration
3. metrics.properties configuration
4. spark-defaults.conf configuration

IV. Environment Variables
1. Set the variables
2. Apply them immediately

V. Upload the Spark Jars to HDFS
1. Create the HDFS jar path
2. Upload the Spark jars

VI. Start Spark
1. Start the Spark master (hadoop01)
2. Start the standby master (hadoop02)
3. Start the history server on the master (hadoop01)

VII. Test the Spark Environment
1. The spark-shell command
2. Local mode test
3. Specifying the master
4. Running in Spark on YARN mode
5. Killing a Spark application
6. Master Web UI
7. HistoryServer Web UI


I. Environment Preparation

1. Spark official site

Apache Spark™ - Unified Engine for large-scale data analytics: https://spark.apache.org/

2. Download location

Index of /dist/spark: https://archive.apache.org/dist/spark/

3. Official documentation

Overview - Spark 3.2.0 Documentation: https://spark.apache.org/docs/3.2.0/

4. Passwordless SSH setup

Getting started with big data: SSH passwordless login (qq262593421's blog, CSDN)

5. Installing Scala 2.12

Installing Scala 2.12.11 on Linux (qq262593421's blog, CSDN)

6. Building the Hadoop cluster on Docker

II. Extract and Install

1. Download Spark

Spark 2.4.0: https://archive.apache.org/dist/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz

Spark 3.0.0: https://archive.apache.org/dist/spark/spark-3.0.0/spark-3.0.0-bin-hadoop3.2.tgz

Note: Spark 2.4.0 is built against Scala 2.11, while Spark 3.0.0 is built against Scala 2.12; the steps below work for both 2.4.0 and 3.0.0.
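Before downloading, it may help to confirm which Scala version is on the PATH (assuming the Scala install from I.5):

scala -version
# e.g. "Scala code runner version 2.12.11" pairs with Spark 3.0.0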

# -P (uppercase) sets the download directory; lowercase -p means --page-requisites
wget -P /usr/local/hadoop/ https://archive.apache.org/dist/spark/spark-2.4.0/spark-2.4.0-bin-hadoop2.7.tgz
wget -P /usr/local/hadoop/ https://archive.apache.org/dist/spark/spark-3.0.0/spark-3.0.0-bin-hadoop3.2.tgz
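Optionally, verify the archive against the checksum Apache publishes next to it (the .sha512 file name below assumes the usual archive.apache.org convention of tarball name plus .sha512):

cd /usr/local/hadoop/
wget https://archive.apache.org/dist/spark/spark-3.0.0/spark-3.0.0-bin-hadoop3.2.tgz.sha512
# compute the local digest and compare it with the published one
sha512sum spark-3.0.0-bin-hadoop3.2.tgz
cat spark-3.0.0-bin-hadoop3.2.tgz.sha512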

2. Extract the archive

tar zxpf spark-2.4.0-bin-hadoop2.7.tgz -C /usr/local/hadoop
tar zxpf spark-3.0.0-bin-hadoop3.2.tgz -C /usr/local/hadoop

3. Create a symlink

# -sfn replaces an existing link, so you can switch releases; keep exactly one active
ln -sfn /usr/local/hadoop/spark-2.4.0-bin-hadoop2.7 /usr/local/hadoop/spark
# ln -sfn /usr/local/hadoop/spark-3.0.0-bin-hadoop3.2 /usr/local/hadoop/spark
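A quick check that the link resolves to the release you intend to run:

readlink -f /usr/local/hadoop/spark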


III. Edit the Configuration Files

1. slaves configuration

echo 'hadoop01
hadoop02
hadoop03' > /usr/local/hadoop/spark/conf/slaves

2. spark-env.sh configuration

cp /usr/local/hadoop/spark/conf/spark-env.sh.template /usr/local/hadoop/spark/conf/spark-env.sh
echo '
export JAVA_HOME=/usr/java/jdk1.8
export SCALA_HOME=/usr/local/hadoop/scala
export MYSQL_HOME=/usr/local/mysql
export CLASSPATH=.:/usr/java/jdk1.8/lib/dt.jar:/usr/java/jdk1.8/lib/tools.jar
export SPARK_HOME=/usr/local/hadoop/spark
export HADOOP_HOME=/usr/local/hadoop/hadoop
export HBASE_HOME=/usr/local/hadoop/hbase
export GEOMESA_HBASE_HOME=/usr/local/hadoop/geomesa-hbase
export ZOO_HOME=/usr/local/hadoop/zookeeper

export SPARK_WORKER_MEMORY=16G
# export SPARK_MASTER_IP=hadoop01
export HADOOP_CONF_DIR=/usr/local/hadoop/hadoop/etc/hadoop/
export YARN_CONF_DIR=/usr/local/hadoop/hadoop/etc/hadoop/
export SPARK_LOCAL_DIRS=/home/spark/tmp

export SPARK_HISTORY_OPTS="
-Dspark.history.ui.port=18080
-Dspark.history.fs.logDirectory=hdfs://ns1/spark/directory
-Dspark.history.retainedApplications=30"

export SPARK_MASTER_WEBUI_PORT=8989

export SPARK_DAEMON_JAVA_OPTS="
-Dspark.deploy.recoveryMode=ZOOKEEPER
-Dspark.deploy.zookeeper.url=hadoop01,hadoop02,hadoop03
-Dspark.deploy.zookeeper.dir=/spark" ' >> /usr/local/hadoop/spark/conf/spark-env.sh
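spark-env.sh points SPARK_LOCAL_DIRS at /home/spark/tmp, which Spark does not create by itself; a minimal sketch that creates it on every node over the passwordless SSH from I.4 (hostnames assumed to match the slaves file):

for host in hadoop01 hadoop02 hadoop03; do
  ssh $host 'mkdir -p /home/spark/tmp'
done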

3. metrics.properties configuration

cp /usr/local/hadoop/spark/conf/metrics.properties.template /usr/local/hadoop/spark/conf/metrics.properties
echo '*.sink.csv.directory=/home/spark/tmp/csv/' >> /usr/local/hadoop/spark/conf/metrics.properties
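The directory above is only consulted once the CSV sink itself is enabled; a hedged example of switching it on (the sink class name comes from Spark's bundled metrics.properties.template) and pre-creating the directory:

echo '*.sink.csv.class=org.apache.spark.metrics.sink.CsvSink' >> /usr/local/hadoop/spark/conf/metrics.properties
mkdir -p /home/spark/tmp/csv/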

4. spark-defaults.conf configuration

cp /usr/local/hadoop/spark/conf/spark-defaults.conf.template /usr/local/hadoop/spark/conf/spark-defaults.conf
echo '
spark.local.dir /home/spark/tmp

spark.eventLog.enabled true
spark.eventLog.dir hdfs://ns1/spark/directory
spark.yarn.jars hdfs://ns1/spark/jars/*.jar
spark.serializer org.apache.spark.serializer.KryoSerializer' >> /usr/local/hadoop/spark/conf/spark-defaults.conf
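Each worker reads its own copy of conf/, so the edits above have to reach hadoop02 and hadoop03 as well; a minimal sketch, assuming the same install path on every node:

for host in hadoop02 hadoop03; do
  scp -r /usr/local/hadoop/spark/conf/ $host:/usr/local/hadoop/spark/
done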

IV. Environment Variables

1. Set the variables

echo '
## spark config
export SPARK_HOME=/usr/local/hadoop/spark
export PATH=$PATH:$SPARK_HOME/bin' >> /etc/profile

2. Apply them immediately

source /etc/profile
echo $SPARK_HOME
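A quick sanity check that the shell now resolves the Spark binaries:

which spark-submit
spark-submit --version   # prints the Spark version banner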

V. Upload the Spark Jars to HDFS

1. Create the HDFS jar path

hadoop fs -mkdir -p /spark/jars

2. Upload the Spark jars

hadoop fs -put $SPARK_HOME/jars/* /spark/jars/
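To confirm the upload is complete, compare the local and HDFS jar counts:

ls $SPARK_HOME/jars/ | wc -l
hadoop fs -ls /spark/jars/ | grep -c '\.jar$'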

VI. Start Spark

1. Start the Spark master (hadoop01)

$SPARK_HOME/sbin/start-all.sh
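start-all.sh launches a Master on the local host plus one Worker on every host listed in conf/slaves; jps should now show the daemons:

jps | grep -E 'Master|Worker'
# expect Master and Worker on hadoop01; a Worker only on hadoop02/hadoop03 at this point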

2. Start the standby master (hadoop02)

$SPARK_HOME/sbin/start-master.sh
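With ZooKeeper recovery enabled, the second master should come up in STANDBY state. One hedged way to confirm this is the standalone master UI's JSON view (served at /json on the web UI port configured above):

curl -s http://hadoop02:8989/json/ | grep -i '"status"'
# expect STANDBY here and ALIVE on hadoop01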

3. Start the history server on the master (hadoop01)

hadoop fs -mkdir -p /spark/directory
$SPARK_HOME/sbin/start-history-server.sh

VII. Test the Spark Environment

1. The spark-shell command

spark-shell
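A minimal smoke test inside the REPL; the sum of 1..100 should come back as 5050.0:

spark-shell --master local[*]
# once at the scala> prompt:
#   sc.parallelize(1 to 100).sum    // returns 5050.0
#   :quit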

2. Local mode test

spark-submit \
--class org.apache.spark.examples.SparkPi \
--master local[*] \
${SPARK_HOME}/examples/jars/spark-examples_*.jar \
10
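The trailing 10 is the number of partitions SparkPi spreads its sampling over; on success the driver output includes a line like "Pi is roughly 3.14...".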

3. Specifying the master

spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://hadoop01:7077 \
${SPARK_HOME}/examples/jars/spark-examples_*.jar \
10
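Since recovery runs through ZooKeeper, both masters can be listed in the URL so that a submission survives a master failover (comma-separated master lists are supported by standalone HA):

spark-submit \
--class org.apache.spark.examples.SparkPi \
--master spark://hadoop01:7077,hadoop02:7077 \
${SPARK_HOME}/examples/jars/spark-examples_*.jar \
10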

4. Running in Spark on YARN mode

spark-submit \
--class org.apache.spark.examples.SparkPi \
--master yarn \
--deploy-mode cluster \
${SPARK_HOME}/examples/jars/spark-examples_*.jar \
10
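In cluster deploy mode the driver runs inside YARN, so the "Pi is roughly" line lands in the container logs rather than the local console:

yarn logs -applicationId <applicationId> | grep 'Pi is roughly'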

5. Killing a Spark application

yarn application -kill <applicationId>
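To find the id to kill, list the running applications first:

yarn application -list -appStates RUNNING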

6. Master Web UI

http://hadoop01:8989 (mapped to http://127.0.0.1:8991 on the Docker host)

[Screenshot: Spark Master Web UI on hadoop01]

http://hadoop02:8989 (mapped to http://127.0.0.1:8992 on the Docker host)

[Screenshot: Spark Master Web UI on hadoop02]

7. HistoryServer Web UI

http://hadoop01:18080 (mapped to http://127.0.0.1:18081 on the Docker host)

[Screenshot: Spark HistoryServer Web UI]