To run a program on the Spark cluster, first make sure the following processes are up:

  • ZooKeeper
  • HDFS
  • Spark
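
On a typical installation these can be brought up with commands along the following lines; the script locations and the SPARK_HOME path are assumptions, so adjust them to your own environment:

# ZooKeeper (run on each ZooKeeper node)
zkServer.sh start

# HDFS (run on the NameNode; assumes Hadoop's sbin directory is on PATH)
start-dfs.sh

# Spark standalone master and workers (SPARK_HOME location assumed below)
/home/hadoop/apps/spark/sbin/start-all.sh

# verify with jps: QuorumPeerMain, NameNode/DataNode, Master/Worker should be listed
jps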

First, the dependencies in the parent POM:

    <properties>
        <scala.version>2.11.8</scala.version>
        <spark.version>2.2.2</spark.version>
        <hadoop.version>2.7.6</hadoop.version>
    </properties>
    <modules>
        <module>spark-core-study</module>
        <module>spark-common</module>
    </modules>
    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>junit</groupId>
                <artifactId>junit</artifactId>
                <version>4.12</version>
            </dependency>

            <!-- scala-library omitted: spark-core already brings it in transitively
            <dependency>
                <groupId>org.scala-lang</groupId>
                <artifactId>scala-library</artifactId>
                <version>${scala.version}</version>
            </dependency>  -->
            <!-- sparkcore -->
            <dependency>
                <groupId>org.apache.spark</groupId>
                <artifactId>spark-core_2.11</artifactId>
                <version>${spark.version}</version>
            </dependency>
            <!-- sparksql -->
            <dependency>
                <groupId>org.apache.spark</groupId>
                <artifactId>spark-sql_2.11</artifactId>
                <version>${spark.version}</version>
            </dependency>
            <!-- sparkstreaming -->
            <dependency>
                <groupId>org.apache.spark</groupId>
                <artifactId>spark-streaming_2.11</artifactId>
                <version>${spark.version}</version>
            </dependency>
        </dependencies>
    </dependencyManagement>

Then the dependencies in the child module:

    <dependencies>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.11</artifactId>
            <!--
                Maven scopes:
                    compile:    the default; needed at source, compile, and run time
                    provided:   needed only while coding and compiling, not at run time, because the cluster already provides it
                    test:       effective only under src/test/
                    runtime:    not needed to compile the source, only at run time, e.g. JDBC drivers
            -->
            <scope>provided</scope>
        </dependency>

        <!-- spark-core already includes scala-library transitively, so there is no need to import it again
            <dependency>
                <groupId>org.scala-lang</groupId>
                <artifactId>scala-library</artifactId>
            </dependency>-->
    </dependencies>
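
A quick way to confirm that the scopes resolved as intended is Maven's dependency tree, assuming Maven is on the PATH and the command is run from the child module's directory:

cd spark-core-study
mvn dependency:tree | grep spark-core
# expect a line ending in :provided, e.g. org.apache.spark:spark-core_2.11:jar:2.2.2:provided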

WordCount

package blog

import org.apache.log4j.{Level, Logger}
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

/**
  * WordCount
  */
object WordCount {
  def main(args: Array[String]): Unit = {
    // validate the arguments: if no input path is given, print usage and exit the JVM
    if (args == null || args.length < 1) {
      println(
        """
          |parameter errors! Usage: <input>
          |input: input file path
        """.stripMargin
        // stripMargin strips the leading '|' so the usage text keeps its line breaks
      )
      System.exit(-1)
    }
    val Array(input) = args
    // set the log level of the noisier packages
    Logger.getLogger("org.apache.hadoop").setLevel(Level.INFO)
    Logger.getLogger("org.apache.spark").setLevel(Level.INFO)
    Logger.getLogger("org.spark_project").setLevel(Level.INFO)
    val conf = new SparkConf()
      // the application name can be anything
      .setAppName("WordCount")
    // uncomment the line below to run on a single local node; the job will then not appear in the cluster Web UI
    //.setMaster("local[*]")
    // SparkContext is the entry point of a Spark application
    val sc = new SparkContext(conf)
    // read the input file as an RDD of lines
    val lines: RDD[String] = sc.textFile(input)
    // check how many partitions Spark split the file into
    println("##############################partition num of lines is:" + lines.getNumPartitions)
    // flatMap splits each line into words; \\s+ matches one or more whitespace characters
    val words: RDD[String] = lines.flatMap(line => line.split("\\s+"))
    // map each word to a (word, 1) pair
    val pairs: RDD[(String, Int)] = words.map(word => count(word))
    // sum the counts of each word on the reduce side
    val retRDD: RDD[(String, Int)] = pairs.reduceByKey((v1, v2) => v1 + v2)
    // collect the result to the driver and print it to the console
    retRDD
      .collect()
      .foreach(println)
    sc.stop()

  }

  // each occurrence of a word is mapped to (word, 1)
  def count(word: String): (String, Int) = (word, 1)
}
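
Once the jar from the next section is built, the logic can also be smoke-tested on a single machine by submitting with a local master, without touching the cluster. A minimal sketch, assuming SPARK_HOME points at your Spark installation; the jar and input paths are placeholders:

${SPARK_HOME}/bin/spark-submit \
--class blog.WordCount \
--master "local[*]" \
/home/hadoop/jars/spark/spark-wc.jar \
file:///home/hadoop/data/hello.txt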

Next, build the jar. I am using IDEA here:

  • Click File in the top-left corner and choose Project Structure.
  • Create an empty jar under Artifacts.
  • Give the jar a name.
  • Select the code to be packaged.
  • Add it to the left-hand panel, then click OK.
  • Go to Build -> Build Artifacts.
  • Then just click Build.
  • A new out folder appears in IDEA's project window on the left.
  • Open the jar's location on disk.
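
Before uploading, it is worth checking that the class actually ended up in the jar. The path below assumes IDEA's default artifact output directory and the jar name used later in the submit script, so adjust it to your own setup:

# path is an assumption; use your artifact's actual output directory
jar tf out/artifacts/spark_wc/spark-wc.jar | grep blog/WordCount
# should list blog/WordCount.class and blog/WordCount$.class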
Upload the jar to the Linux machine, then create a script in any directory:

vi spark-submit-wc-standalone.sh

Insert the following content; the file paths and the other parameters can be adjusted as needed:

#!/bin/sh

SPARK_HOME=/home/hadoop/apps/spark

${SPARK_HOME}/bin/spark-submit \
--class blog.WordCount \
--master spark://hadoop01:7077 \
--deploy-mode client \
--total-executor-cores 2 \
--executor-cores 1 \
--executor-memory 600M \
/home/hadoop/jars/spark/spark-wc.jar \
hdfs://bd1906/data/spark/hello.txt
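
This submits in client deploy mode, so the driver runs on the machine where the script is executed and the collected word counts are printed to that terminal. As a sketch of the alternative (not needed for this walkthrough), the driver can instead be launched inside the cluster with cluster deploy mode; the application jar must then be reachable from the worker nodes, for example via an HDFS path, and the println output ends up in the driver's logs rather than in your terminal. The jar location below is an assumption:

${SPARK_HOME}/bin/spark-submit \
--class blog.WordCount \
--master spark://hadoop01:7077 \
--deploy-mode cluster \
--total-executor-cores 2 \
--executor-memory 600M \
hdfs://bd1906/jars/spark/spark-wc.jar \
hdfs://bd1906/data/spark/hello.txt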

This is the content of the input file; make sure it exists on HDFS:

hello you
hello you
hello me
hello   you
hello you
hello me
This page outlines the steps for getting a Storm cluster up and running

If it is not there yet, upload it first:

hdfs dfs -put hello.txt /data/spark/hello.txt
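
You can confirm the file is in place before submitting:

hdfs dfs -ls /data/spark/
hdfs dfs -cat /data/spark/hello.txt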

Then run the script from the current directory:

./spark-submit-wc-standalone.sh

If a permission-denied error appears, run the following command and then run the script again; if not, this step can be skipped:

chmod 777 ./spark-submit-wc-standalone.sh

The job runs successfully.
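For the sample hello.txt above, the driver console should print the following pairs after the partition-count line (the order of the tuples is not guaranteed):

(hello,6)
(you,4)
(me,2)
(This,1)
(page,1)
(outlines,1)
(the,1)
(steps,1)
(for,1)
(getting,1)
(a,1)
(Storm,1)
(cluster,1)
(up,1)
(and,1)
(running,1)
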
We can also visit the Spark Web UI at http://hadoop01:8080 and see the application listed under Completed Applications.
Click into the application; the detailed logs can be found in its stderr.