Spark is a fast, general-purpose, scalable big-data computing engine. As an alternative to MapReduce, it is compatible with HDFS and Hive, so it fits into the Hadoop ecosystem and addresses MapReduce's shortcomings.

Key features of Spark:

1. Fast: compared with MapReduce, Spark's in-memory computation can be up to 100x faster.

2. Easy to use: it provides APIs for Java, Python, and Scala, along with more than 80 high-level operators.

3. General: it supports batch processing, interactive queries (Spark SQL), real-time stream processing (Spark Streaming), machine learning (Spark MLlib), and graph computation (GraphX).

4. Compatible: it integrates easily with other open-source products.

I. Cluster Installation

1. Disable the firewall

2. Install the JDK

3. Upload and extract the Spark installation package

mkdir /bigData
tar -zxvf spark-1.6.1-bin-hadoop2.6.tgz -C /bigData

4. Rename and edit spark-env.sh.template

cd conf/
mv spark-env.sh.template spark-env.sh
vi spark-env.sh

Add the following settings to this file:

export JAVA_HOME=/usr/java/jdk1.7.0_80
export SPARK_MASTER_IP=hadoop01
export SPARK_MASTER_PORT=7077

5. Rename and edit the slaves.template file

mv slaves.template slaves
vi slaves

List the child (Worker) nodes in this file, one hostname per line:

hadoop02
hadoop03

6. Copy the configured Spark directory to the other nodes

scp -r /bigData/ hadoop02:/
scp -r /bigData/ hadoop03:/

7. Start the cluster

On hadoop01, from Spark's sbin directory:

./start-all.sh
jps

hadoop01 should now show a Master process, while hadoop02 and hadoop03 each show a Worker process.

8. Open the Spark web UI (http://hadoop01:8080 by default) to check the cluster status on the master node

[Screenshot: the Spark master web UI showing the cluster status]

II. Starting the Spark Shell

spark-shell is Spark's interactive shell. It makes exploratory programming convenient: you can write Spark programs in Scala directly at the prompt.

/bigData/spark-1.6.1-bin-hadoop2.6/bin/spark-shell --master spark://hadoop01:7077 --executor-memory 1g --total-executor-cores 1

--master spark://hadoop01:7077 specifies the master's address

--executor-memory 1g sets the memory available to each executor to 1 GB

--total-executor-cores 1 caps the total number of CPU cores the application uses across the cluster at 1
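
Once the shell is up, a quick smoke test confirms the cluster is executing jobs (a minimal sketch; the numbers are arbitrary):

// Inside spark-shell, the SparkContext is already bound to `sc`.
// Distribute the range 1..100 across the cluster and sum it.
val rdd = sc.parallelize(1 to 100)
val total = rdd.reduce(_ + _)   // total: Int = 5050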

Writing WordCount in the Spark shell

Start the HDFS cluster from /home/hadoop-2.6.4/sbin:

./start-dfs.sh

Upload an input file to HDFS at hdfs://hadoop01:9000/wordcount/input (the path the job below reads from)

[Screenshot: uploading the input file to HDFS]

Write the program at the spark-shell prompt:

sc.textFile("hdfs://hadoop01:9000/wordcount/input").flatMap(_.split(" "))
.map((_,1)).reduceByKey(_+_).saveAsTextFile("hdfs://hadoop01:9000/wordcount/out")
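
To inspect the counts directly in the shell instead of writing them to HDFS, you can collect them to the driver (a small sketch, suitable only for small result sets):

sc.textFile("hdfs://hadoop01:9000/wordcount/input") // read the input as an RDD of lines
  .flatMap(_.split(" "))                            // split each line into words
  .map((_, 1))                                      // pair each word with a count of 1
  .reduceByKey(_ + _)                               // sum the counts per word
  .collect()                                        // bring the results to the driver
  .foreach(println)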

III. Spark Programs

1. Estimate Pi with the Monte Carlo method, using the bundled SparkPi example (the trailing argument, 100, is the number of slices to sample over); a sketch of the underlying idea follows the command:

/bigData/spark-1.6.1-bin-hadoop2.6/bin/spark-submit --class org.apache.spark.examples.SparkPi --master spark://hadoop01:7077 --executor-memory 1g --total-executor-cores 1 /bigData/spark-1.6.1-bin-hadoop2.6/lib/spark-examples-1.6.1-hadoop2.6.0.jar 100
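
The idea behind SparkPi: sample random points in the square [-1, 1] x [-1, 1]; the fraction landing inside the unit circle approaches pi/4. Here is a minimal sketch of that logic, runnable in spark-shell (the sample size is arbitrary; this is not the bundled example's exact code):

// Estimate pi by random sampling: P(point inside unit circle) = pi / 4.
val n = 1000000
val inside = sc.parallelize(1 to n).map { _ =>
  val x = math.random * 2 - 1   // random coordinates in [-1, 1]
  val y = math.random * 2 - 1
  if (x * x + y * y <= 1) 1 else 0
}.reduce(_ + _)
println(s"Pi is roughly ${4.0 * inside / n}")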

2. Write a WordCount program

1. Create a Maven project

2. Configure the Maven pom.xml:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>cn.itcast.spark</groupId>
    <artifactId>spark-mvn</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>1.7</maven.compiler.source>
        <maven.compiler.target>1.7</maven.compiler.target>
        <encoding>UTF-8</encoding>
        <scala.version>2.10.6</scala.version>
        <scala.compat.version>2.10</scala.compat.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>1.5.2</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.10</artifactId>
            <version>1.5.2</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.6.2</version>
        </dependency>
    </dependencies>

    <build>
        <sourceDirectory>src/main/scala</sourceDirectory>
        <testSourceDirectory>src/test/scala</testSourceDirectory>
        <plugins>
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.0</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                        <configuration>
                            <args>
                                <arg>-make:transitive</arg>
                                <arg>-dependencyfile</arg>
                                <arg>${project.build.directory}/.scala_dependencies</arg>
                            </args>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>2.18.1</version>
                <configuration>
                    <useFile>false</useFile>
                    <disableXmlReport>true</disableXmlReport>
                    <includes>
                        <include>**/*Test.*</include>
                        <include>**/*Suite.*</include>
                    </includes>
                </configuration>
            </plugin>

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.3</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>
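
Note: the _2.10 suffix on the Spark artifact IDs must match the Scala binary version declared in scala.compat.version, and it is safest to keep the Spark dependency version aligned with the cluster's Spark version (1.6.1 in this guide); mismatched Scala or Spark versions tend to surface as binary-incompatibility errors at runtime.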

3. Rename src/main/java and src/test/java to src/main/scala and src/test/scala, matching the sourceDirectory settings in the pom.xml

4. Create a new Scala class, choosing the Object kind

5. Write the Spark program:

package cn.itcast.spark

import org.apache.spark.{SparkContext, SparkConf}

object WordCount {
  def main(args: Array[String]) {
    // Create a SparkConf and set the application name
    val conf = new SparkConf().setAppName("WC")
    // Create the SparkContext, the entry point for submitting a Spark app
    val sc = new SparkContext(conf)
    // Build an RDD and run the transformations and action: split lines into words,
    // count each word, sort by count descending, and save the result to HDFS
    sc.textFile(args(0))
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _, 1)
      .sortBy(_._2, false)
      .saveAsTextFile(args(1))
    // Stop the SparkContext to end the job
    sc.stop()
  }
}
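
For quick debugging from the IDE before packaging, a local-mode variant can be handy (a sketch; the object name and the input path are placeholders, not part of the original project):

package cn.itcast.spark

import org.apache.spark.{SparkConf, SparkContext}

object WordCountLocal {
  def main(args: Array[String]) {
    // local[*] runs Spark inside this JVM, using all available cores
    val conf = new SparkConf().setAppName("WC-local").setMaster("local[*]")
    val sc = new SparkContext(conf)
    sc.textFile("data/input.txt")   // placeholder path to a small local text file
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .collect()                    // small data only: results come back to the driver
      .foreach(println)
    sc.stop()
  }
}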

6. Package the project (e.g. with mvn package)

7. Start HDFS and the Spark cluster

8. Submit the Spark application with spark-submit:

/bigData/spark-1.6.1-bin-hadoop2.6/bin/spark-submit --class cn.itcast.spark.WordCount --master spark://hadoop01:7077 --executor-memory 1G --total-executor-cores 2 /root/spark-mvn-1.0-SNAPSHOT.jar hdfs://hadoop01:9000/input hdfs://hadoop01:9000/wordcount/out2

Note that the output directory must not already exist; the job fails with a FileAlreadyExistsException if it does.