Spark Notes (5)

1. Introduction to Shark

Shark is a SQL execution engine built on top of the Spark compute framework that is compatible with Hive syntax. Because the underlying computation runs on Spark, it is generally more than 2x faster than MapReduce-based Hive, and more than 10x faster when all of the data can be loaded into memory, so Shark can serve interactive query applications. Beyond the Spark-based performance, Shark is fully compatible with Hive syntax, table structures, UDFs, and so on, so existing HiveQL can be migrated to Shark directly. However, Shark relies on Hive's parser and query optimizer, and this tight coupling of its overall architecture to Hive made long-term development difficult: for example, it could not integrate well with the other Spark components and could not satisfy Spark's goal of a one-stack solution for big data processing.

2. Introduction to Spark SQL

2.1 Overview


Hive is the predecessor of Shark, and Shark is the predecessor of Spark SQL. The fundamental reason Spark SQL was created was to break free of Hive's limitations entirely.

  • Spark SQL supports querying native RDDs. The RDD is the core abstraction of the Spark platform and the foundation of Spark's ability to process big data efficiently across many scenarios.
  • SQL statements can be written inside Scala (Java, etc.). Spark SQL supports basic SQL syntax checking, and you can write Hive statements in Scala to access Hive data and get the results back as an RDD.
2.2 Spark on Hive vs. Hive on Spark
  • Spark on Hive: Hive acts only as the storage layer; Spark handles SQL parsing, optimization and execution.
  • Hive on Spark: Hive acts as both the storage layer and the SQL parser/optimizer; Spark only handles execution.
2.3 DataFrame


A DataFrame is also a distributed data container. It is similar to an RDD, but a DataFrame is more like a two-dimensional table in a traditional database: besides the data itself, it also keeps the structural information of the data, i.e. the schema. Like Hive, a DataFrame supports nested data types (struct, array and map). From an API usability point of view, the DataFrame API provides a set of high-level relational operations that are friendlier and have a lower barrier to entry than the functional RDD API.

Under the hood, a DataFrame wraps an RDD; the only difference is that the RDD's element type is Row.
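A minimal Scala sketch of this point (using the Spark 1.6-style API of the rest of these notes; the sample data is made up), showing that the DataFrame keeps a schema and that df.rdd exposes the underlying RDD of Row:

package com.shsxt.spark.sql.scala

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.{SparkConf, SparkContext}

object DataFrameIsRowRDD {

  def main(args: Array[String]): Unit = {

    val conf = new SparkConf()
    conf.setMaster("local").setAppName("dfBasics")

    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    // Build a small DataFrame from a tuple RDD
    val df = sc.parallelize(Seq(("zhangsan", 18), ("lisi", 19))).toDF("name", "age")

    // The schema (structure information) is kept alongside the data
    df.printSchema()

    // The DataFrame is backed by an RDD whose elements are Row
    val rows: RDD[Row] = df.rdd
    rows.foreach(r => println(r.getAs[String]("name") + " -> " + r.getAs[Int]("age")))

    sc.stop()
  }
}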

2.4 Spark SQL Data Sources

Spark SQL data sources include JSON strings, JDBC, Parquet, Hive, HDFS, and so on.


2.5 Spark SQL Architecture

Spark SQL first parses the SQL text into an unresolved logical plan. The analyzer turns it into an analyzed logical plan, a set of optimization rules then transforms that into an optimized logical plan, the SparkPlanner strategies convert it into physical plans, and finally, through the execution model, it is turned into Spark jobs that are executed.
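Each of these stages can be inspected by calling explain(true) on a DataFrame, which prints the parsed, analyzed and optimized logical plans as well as the physical plan. A minimal Scala sketch, assuming the same local setup and sample JSON data used elsewhere in these notes (Spark 1.6-style API):

package com.shsxt.spark.sql.scala

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

object ExplainPlanDemo {

  def main(args: Array[String]): Unit = {

    val conf = new SparkConf()
    conf.setMaster("local").setAppName("explainPlan")

    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    val df = sqlContext.read.json("./data/json")
    df.registerTempTable("people")

    // explain(true) prints: Parsed Logical Plan -> Analyzed Logical Plan ->
    // Optimized Logical Plan -> Physical Plan
    sqlContext.sql("select name, age from people where age > 20").explain(true)

    sc.stop()
  }
}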


2.6 Predicate Pushdown


Predicate pushdown means pushing filter predicates down so they are applied as early as possible, close to the data source, so that operations that produce large amounts of data, such as joins, run last on already-filtered input; this is what yields the performance improvement.
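A minimal Scala sketch of the effect (Spark 1.6-style API; the tables and data are made up): the age filter is written after the join in the SQL text, but the optimized logical plan printed by explain(true) shows it applied to the people table before the join:

package com.shsxt.spark.sql.scala

import org.apache.spark.sql.SQLContext
import org.apache.spark.{SparkConf, SparkContext}

object PredicatePushdownDemo {

  def main(args: Array[String]): Unit = {

    val conf = new SparkConf()
    conf.setMaster("local").setAppName("pushdown")

    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val people = sc.parallelize(Seq(("zhangsan", 18), ("lisi", 25))).toDF("name", "age")
    val scores = sc.parallelize(Seq(("zhangsan", 100), ("lisi", 200))).toDF("name", "score")

    people.registerTempTable("people")
    scores.registerTempTable("scores")

    // The "p.age > 20" predicate is written after the join, but the optimizer
    // pushes it down so the filter runs before the join in the optimized plan.
    sqlContext.sql(
      "select p.name, p.age, s.score from people p join scores s on p.name = s.name where p.age > 20"
    ).explain(true)

    sc.stop()
  }
}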

3. Creating a DataFrame

3.1 Creating a DataFrame from a JSON file

Notes:

  • The JSON data in the file must not contain nested JSON objects
  • A DataFrame is an RDD whose elements are of type Row
  • A JSON file can be read in either of two ways (both shown in the code below)
  • df.show() displays 20 rows by default
  • The native DataFrame API can also be used to operate on a DataFrame (less convenient than SQL)
  • When registered as a temporary table, the columns are displayed in ASCII order by default

Java code:

package com.shsxt.spark.sql.java;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.rdd.RDD;
import org.apache.spark.sql.Column;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SQLContext;

public class DataFrame_Json {

    public static void main(String args[]){

        SparkConf conf = new SparkConf();
        conf.setMaster("local").setAppName("json");

        JavaSparkContext sc = new JavaSparkContext(conf);

        SQLContext sqlContext = new SQLContext(sc);

//        DataFrame dataFrame = sqlContext.read().json("./data/json");

        DataFrame dataFrame = sqlContext.read().format("json").load("./data/json");

//        dataFrame.show();

//        System.out.println("===================");

//        dataFrame.printSchema();

//        dataFrame.select("age").show();

//        dataFrame.registerTempTable("people");

//        dataFrame.filter(Column.unapply("age"))

//        dataFrame.select(dataFrame.col("name"),dataFrame.col("age").plus(10).alias("addage")).show();

//        dataFrame.select(dataFrame.col("name"),dataFrame.col("age")).
//                where(dataFrame.col("age").gt(19)).show();
//
//        dataFrame.groupBy(dataFrame.col("name")).count().show();


        dataFrame.registerTempTable("people");

        DataFrame df = sqlContext.sql("select age from people where age >20");

        df.show();

        sc.stop();

    }
}

Scala code:

package com.shsxt.spark.sql.scala

import org.apache.spark.sql.{DataFrame, SQLContext}
import org.apache.spark.{SparkConf, SparkContext}


object DataFrame_JsonFile {

  def main(args: Array[String]): Unit = {

    val conf = new SparkConf()
    conf.setMaster("local").setAppName("jsonFile")

    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    val df = sqlContext.read.json("./data/json")

//    df.show()

   // df.select(df.col("age")).where(df.col("age").gt(20)).show()

//    df.groupBy(df.col("name")).count().show()

    //Register as a temporary table
    df.registerTempTable("user")

    val df1: DataFrame = sqlContext.sql("select name, count(1) from user group by name")

    df1.show()

    sc.stop()

  }

}
3.2 Creating a DataFrame from an RDD of JSON strings

Java code:

package com.shsxt.spark.sql.java;


import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;

import java.util.Arrays;

public class DF_JsonRDD {

    public static void main(String args[]){

        SparkConf conf = new SparkConf();
        conf.setMaster("local").setAppName("json");

        JavaSparkContext sc = new JavaSparkContext(conf);

        SQLContext sqlContext = new SQLContext(sc);

        JavaRDD<String> nameRDD = sc.parallelize(Arrays.asList(
                "{\"name\":\"zhangsan\",\"age\":\"18\"}",
                "{\"name\":\"lisi\",\"age\":\"19\"}",
                "{\"name\":\"wangwu\",\"age\":\"20\"}"
        ));

        JavaRDD<String> scoreRDD = sc.parallelize(Arrays.asList(
                "{\"name\":\"zhangsan\",\"score\":\"100\"}",
                "{\"name\":\"lisi\",\"score\":\"200\"}",
                "{\"name\":\"wangwu\",\"score\":\"300\"}"
        ));

        DataFrame nameDF = sqlContext.read().json(nameRDD);
        DataFrame scoreDF = sqlContext.read().json(scoreRDD);

        nameDF.registerTempTable("name");
        scoreDF.registerTempTable("score");

        DataFrame resultDF = sqlContext.sql("select name.name, name.age, score.score from " +
                "name,score where name.name=score.name");

        resultDF.show();

        sc.stop();
    }
}

Scala code:

package com.shsxt.spark.sql.scala

import org.apache.spark.sql.{DataFrame, SQLContext}
import org.apache.spark.{SparkConf, SparkContext}

object DataFrame_JsonRDD {

  def main(args: Array[String]): Unit = {

    val conf = new SparkConf()
    conf.setMaster("local").setAppName("jsonRDD")

    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    //Build RDDs of JSON strings
    val nameRDD = sc.parallelize(Seq(
      "{\"name\":\"zhangsan\",\"age\":18}",
      "{\"name\":\"lisi\",\"age\":19}",
      "{\"name\":\"wangwu\",\"age\":20}"
    ))

    val scoreRDD = sc.parallelize(Seq(
      "{\"name\":\"zhangsan\",\"score\":100}",
      "{\"name\":\"lisi\",\"score\":200}",
      "{\"name\":\"wangwu\",\"score\":300}"
    ))

    //Read the JSON RDDs into DataFrames
    val nameDF: DataFrame = sqlContext.read.json(nameRDD)
    val scoreDF: DataFrame = sqlContext.read.json(scoreRDD)

    //Register temporary tables and join them with SQL
    nameDF.registerTempTable("name")
    scoreDF.registerTempTable("score")

    val result: DataFrame = sqlContext.sql(
      "select name.name, name.age, score.score from name, score where name.name = score.name")

    result.show()

    sc.stop()
  }

}
3.3 Creating a DataFrame from a non-JSON RDD
3.3.1 Converting a non-JSON RDD into a DataFrame via reflection (not recommended)
  • The custom class must be serializable
  • The custom class must have public access
  • After the RDD is converted to a DataFrame, the fields are ordered by ASCII code according to the mapping
  • When converting the DataFrame back to an RDD, fields can be retrieved in two ways: by index, e.g. row.getInt(0), or by column name, e.g. row.getAs("columnName")

Java code:

package com.shsxt.spark.sql.java;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SQLContext;

/**
 * Convert a non-JSON RDD into a DataFrame via reflection
 * Note: this approach is not recommended
 */
public class CreateDFWithReflect {

    public static void main(String args[]){

        /**
         * Notes:
         * 1. The custom class must implement the Serializable interface
         * 2. The custom class must be public
         * 3. When the RDD is converted to a DataFrame, the custom class's fields are ordered by ASCII code
         */

        SparkConf conf = new SparkConf();
        conf.setMaster("local").setAppName("json");

        JavaSparkContext sc = new JavaSparkContext(conf);

        SQLContext sqlContext = new SQLContext(sc);

        JavaRDD<String> linesRDD = sc.textFile("./data/person.txt");

        JavaRDD<Person> mapRDD = linesRDD.map(new Function<String, Person>() {
            @Override
            public Person call(String line) throws Exception {
                Person p1 = new Person();
                p1.setId(line.split(",")[0]);
                p1.setName(line.split(",")[1]);
                p1.setAge(Integer.valueOf(line.split(",")[2]));

                return p1;
            }
        });

        DataFrame df = sqlContext.createDataFrame(mapRDD, Person.class);

//        df.show();
//        df.printSchema();
//
//        System.out.println("============================");
//
//        df.select(df.col("name"),df.col("age")).where(df.col("age").gt(18)).show();


        /**
         * Convert the DataFrame into a JavaRDD
         * Notes:
         * 1. row.getInt(0)... retrieves values from the returned Row by index, but watch the column order
         * 2. row.getAs("columnName") retrieves the value of the named column
         */
        JavaRDD<Row> javaRDD = df.javaRDD();

        javaRDD.map(new Function<Row, Person>() {

            @Override
            public Person call(Row row) throws Exception {
                Person p1 = new Person();
                p1.setId((String) row.getAs("id"));
                p1.setName((String) row.getAs("name"));
                p1.setAge((Integer) row.getAs("age"));

                return p1;
            }
        }).foreach(new VoidFunction<Person>() {
            @Override
            public void call(Person person) throws Exception {
                System.out.println(person);
            }
        });


        sc.stop();

    }
}

Scala code:

package com.shsxt.spark.sql.scala

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{Row, SQLContext}

case class Person(id:String,name:String,age:Integer)

object CreateDFWithReflect {

  def main(args: Array[String]): Unit = {

    val conf = new SparkConf()
    conf.setMaster("local").setAppName("reflect")

    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    val linesRDD = sc.textFile("./data/person.txt")

    //Import implicits so the RDD can be converted to a DataFrame
    import sqlContext.implicits._

    val mapRDD: RDD[Person] = linesRDD.map(x=>{
      val p1 = new Person(x.split(",")(0),x.split(",")(1),Integer.valueOf(x.split(",")(2)))
      p1
    })

    val df = mapRDD.toDF()

    df.show()

    val rdd: RDD[Row] = df.rdd

    rdd.map(x=>{
      Person(x.getAs("id"),x.getAs("name"),x.getAs("age"))
    }).foreach(println)

    sc.stop()
  }

}
3.3.2 Converting a non-JSON RDD into a DataFrame by building the schema dynamically

Java code:

package com.shsxt.spark.sql.java;


import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.types.DataType;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

import java.util.Arrays;
import java.util.List;

/**
 * Build the schema dynamically to convert a non-JSON RDD into a DataFrame
 */
public class CreateDataFrameWithStruct {

    public static void main(String args[]){

        SparkConf conf = new SparkConf();
        conf.setMaster("local").setAppName("json");

        JavaSparkContext sc = new JavaSparkContext(conf);

        SQLContext sqlContext = new SQLContext(sc);

        JavaRDD<String> linesRDD = sc.textFile("./data/person.txt");

        JavaRDD<Row> mapRDD = linesRDD.map(new Function<String, Row>() {

            @Override
            public Row call(String s) throws Exception {

                return RowFactory.create(s.split(",")[0],
                        s.split(",")[1], Integer.valueOf(s.split(",")[2]));
            }
        });

        /**
         * Build the DataFrame metadata dynamically. The field definitions typically come from strings,
         * but they can also come from an external database.
         */
        List<StructField> asList = Arrays.asList(
                DataTypes.createStructField("id", DataTypes.StringType, true),
                DataTypes.createStructField("name", DataTypes.StringType, true),
                DataTypes.createStructField("age", DataTypes.IntegerType, true)
        );

        StructType schema = DataTypes.createStructType(asList);

        DataFrame df = sqlContext.createDataFrame(mapRDD, schema);

        df.printSchema();

        df.show();

        System.out.println("================================");

        JavaRDD<Row> javaRDD = df.javaRDD();

        javaRDD.foreach(new VoidFunction<Row>() {
            @Override
            public void call(Row row) throws Exception {
                System.out.println((String) row.getAs("id"));
                System.out.println((String) row.getAs("name"));
                System.out.println((Integer) row.getAs("age"));
                System.out.println(row);
            }
        });

        sc.stop();
    }
}

Scala code:

package com.shsxt.spark.sql.scala

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.types.{DataTypes, StructType}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{DataFrame, Row, RowFactory, SQLContext}


object CreateDataFrameWithSchema {

  def main(args: Array[String]): Unit = {

    val conf = new SparkConf()
    conf.setMaster("local").setAppName("jsonRDD")

    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    val linesRDD = sc.textFile("./data/person.txt")

    val rowRDD = linesRDD.map(x=>{
      RowFactory.create(x.split(",")(0),x.split(",")(1),Integer.valueOf(x.split(",")(2)))
    })

    val arr = Array(
      DataTypes.createStructField("id",DataTypes.StringType,true),
      DataTypes.createStructField("name",DataTypes.StringType,true),
      DataTypes.createStructField("age",DataTypes.IntegerType,true)
    )

    val structType: StructType = DataTypes.createStructType(arr)

    val dataFrame: DataFrame = sqlContext.createDataFrame(rowRDD,structType)

    dataFrame.printSchema()

    dataFrame.show()

    println("==========================")

    val rdd: RDD[Row] = dataFrame.rdd

    rdd.foreach(println)

    sc.stop()

  }

}
3.4 Creating a DataFrame by reading Parquet files

Notes:

  • A DataFrame can be stored as Parquet files. There are two ways to save it:
df.write().mode(SaveMode.Overwrite).format("parquet")
 .save("./sparksql/parquet");
df.write().mode(SaveMode.Overwrite).parquet("./sparksql/parquet");
  • SaveMode specifies the mode used when saving the file:
  1. Overwrite: overwrite existing data
  2. Append: append to existing data
  3. ErrorIfExists: fail if the target already exists
  4. Ignore: do nothing if the target already exists

Java code:

package com.shsxt.spark.sql.java;


import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.SaveMode;

public class CreateDataFrameWithParquet {

    public static void main(String args[]){

        SparkConf conf = new SparkConf();
        conf.setMaster("local").setAppName("json");

        JavaSparkContext sc = new JavaSparkContext(conf);

        SQLContext sqlContext = new SQLContext(sc);

        JavaRDD<String> liensRDD = sc.textFile("./data/json");

        DataFrame df = sqlContext.read().json(liensRDD);

        //Save the DataFrame as Parquet files
        df.write().format("parquet").mode(SaveMode.Overwrite).save("./data/parquet");

        sc.stop();


    }
}

Scala code:

package com.shsxt.spark.sql.scala

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{DataFrame, SQLContext, SaveMode}

object CreateDataFrameWithParquet {

  def main(args: Array[String]): Unit = {

    val conf = new SparkConf()
    conf.setMaster("local").setAppName("jsonRDD")

    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    val jsonRDD = sc.textFile("./data/json")

    val df: DataFrame = sqlContext.read.json(jsonRDD)

    //Save the DataFrame as Parquet files
    //df.write.mode(SaveMode.Overwrite).parquet("./data/parquet")
//    df.write.mode(SaveMode.Overwrite).format("parquet").save("./data/parquet")

    //Read the Parquet files back
    val df1: DataFrame = sqlContext.read.parquet("./data/parquet")
//    sqlContext.read.format("parquet").load("./data/parquet")

    df1.printSchema()
    df1.show()

    sc.stop()

  }
}
3.5 Creating a DataFrame from JDBC data (MySQL as an example)

Java code:

package com.shsxt.spark.sql.java;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.SaveMode;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class CreateDataFrameWithMySql {

    public static void main(String args[]){

        SparkConf conf = new SparkConf();
        conf.setMaster("local").setAppName("mysql");
        conf.set("spark.sql.shuffle.partitions","1");

        JavaSparkContext sc = new JavaSparkContext(conf);

        SQLContext sqlContext = new SQLContext(sc);

        Map<String, String> options = new HashMap<>();
        options.put("url","jdbc:mysql://127.0.0.1:3306/spark");
        options.put("driver","com.mysql.jdbc.Driver");
        options.put("user","root");
        options.put("password","123456");
        options.put("dbtable","person");

        DataFrame df = sqlContext.read().format("jdbc").options(options).load();

//        df.printSchema();
//        df.show();

        //Save the DataFrame result back into MySQL

        Properties prop = new Properties();
        prop.setProperty("user","root");
        prop.setProperty("password","123456");

        /**
         * SaveMode:
         * Overwrite      overwrite existing data
         * Append         append to existing data
         * ErrorIfExists  fail if the target already exists
         * Ignore         do nothing if the target already exists
         */

        df.write().mode(SaveMode.Append).jdbc("jdbc:mysql://127.0.0.1:3306/spark","person",prop);
        sc.stop();

    }
}

Scala code:

package com.shsxt.spark.sql.scala

import java.util.Properties

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{DataFrame, SQLContext, SaveMode}

object CreateDataFrameWithMySql {

  def main(args: Array[String]): Unit = {

    val conf = new SparkConf()
    conf.setMaster("local").setAppName("mysql")

    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)

    val reader = sqlContext.read.format("jdbc")
    reader.option("url","jdbc:mysql://127.0.0.1:3306/spark")
    reader.option("driver","com.mysql.jdbc.Driver")
    reader.option("user","root")
    reader.option("password","123456")
    reader.option("dbtable","person")

    val person: DataFrame = reader.load()

    person.registerTempTable("person")

    val result: DataFrame = sqlContext.sql("select * from person where id=1")

    result.show()

    //Write the DataFrame into a MySQL table
    val props = new Properties()
    props.setProperty("user","root")
    props.setProperty("password","123456")

    result.write.mode(SaveMode.Append).jdbc("jdbc:mysql://127.0.0.1:3306/spark","person",props)

    sc.stop()

  }

}
3.6 Loading data from Hive into a DataFrame
  • HiveContext is a subclass of SQLContext; HiveContext is the recommended way to connect to Hive.

To connect to Hive from code in IDEA, enable Hive's remote metastore mode (client and server separated). The server-side configuration is shown in the figure below.

[Figure: Hive server-side configuration]

IDEA-side configuration (the Hive client configuration) is shown in the figure below.

[Figure: Hive client-side configuration in the IDEA project]

Contents of hive-site.xml:

<configuration>  
  
<property>  
  <name>hive.metastore.uris</name>  
  <value>thrift://192.168.150.103:9083</value>
</property>
  
</configuration>

Hive's data is stored on HDFS (its metadata is stored in MySQL), so the HDFS configuration files also need to be copied into the IDEA project.

  • Steps to start the program

1. Start MySQL (according to your own setup)

2. Start HDFS (start the ZooKeeper cluster first)

3. Start the Hive metastore service:

./hive --service metastore

4. Run the code

The code is as follows:

package com.shsxt.spark.sql.java;


import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.hive.HiveContext;

public class CreateDataFrameWithHive {

    public static void main(String args[]){

        SparkConf conf = new SparkConf();
        conf.setMaster("local").setAppName("localHive");

        JavaSparkContext sc = new JavaSparkContext(conf);

        //HiveContext is a subclass of SQLContext
        HiveContext hiveSC = new HiveContext(sc);
		
        //The 'spark' database in Hive must be created manually beforehand
        hiveSC.sql("use spark");
        hiveSC.sql("drop table if exists student_infos");

        //Create the student_infos table in Hive
        hiveSC.sql("CREATE TABLE IF NOT EXISTS student_infos (name STRING,age INT) " +
                "row format delimited fields terminated by '\t' ");
        hiveSC.sql("load data local inpath './data/student_infos' into table student_infos");

        hiveSC.sql("drop table if exists student_scores");
        hiveSC.sql("create table if not exists student_scores (name STRING, score INT) " +
                "row format delimited fields terminated by '\t'");

        hiveSC.sql("load data local inpath './data/student_scores' into table student_scores");

        /**
         * Query the tables to produce a DataFrame
         */
        DataFrame goodStuDF = hiveSC.sql("select s1.name, s1.age, s2.score from student_infos s1 join " +
                "student_scores s2 on s1.name=s2.name " +
                "where s2.score >= 80");

        hiveSC.sql("drop table if exists good_stu_infos");

        goodStuDF.registerTempTable("goodStu");

        DataFrame result = hiveSC.sql("select * from goodStu");

        result.show();


        /**
         * Save the result into the Hive table good_stu_infos
         */
        goodStuDF.write().mode(SaveMode.Overwrite).saveAsTable("good_stu_infos");

        Row[] goodStuInfos = hiveSC.table("good_stu_infos").collect();

        for (Row row : goodStuInfos){
            System.out.println(row);
        }

        sc.stop();

    }
}

4. Storing a DataFrame

  • Store the DataFrame as Parquet files
  • Store the DataFrame into a JDBC database
  • Store the DataFrame into a Hive table (all three paths are sketched together below)
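
A minimal Scala sketch combining the three paths, reusing the write APIs shown in the examples above; the paths, JDBC URL, credentials and table names are placeholders:

package com.shsxt.spark.sql.scala

import java.util.Properties

import org.apache.spark.sql.{DataFrame, SaveMode}

object SaveDataFrameSketch {

  def save(df: DataFrame): Unit = {

    //1. Store as Parquet files
    df.write.mode(SaveMode.Overwrite).parquet("./data/parquet")

    //2. Store into a JDBC database (MySQL here)
    val props = new Properties()
    props.setProperty("user", "root")
    props.setProperty("password", "123456")
    df.write.mode(SaveMode.Append).jdbc("jdbc:mysql://127.0.0.1:3306/spark", "person", props)

    //3. Store as a Hive table (the DataFrame must come from a HiveContext)
    df.write.mode(SaveMode.Overwrite).saveAsTable("good_stu_infos")
  }
}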

5. User-defined functions: UDF and UDAF

5.1 UDF: user-defined function
package com.shsxt.spark.sql.udf_udaf;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.api.java.UDF2;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class UDF {

    public static void main(String args[]){

        SparkConf conf = new SparkConf();
        conf.setMaster("local").setAppName("udf");

        JavaSparkContext sc = new JavaSparkContext(conf);

//        HiveContext hiveSc = new HiveContext(sc);

        SQLContext sqlSC = new SQLContext(sc);

        JavaRDD<String> listRDD = sc.parallelize(Arrays.asList("zhangsan", "lisi", "zhangsan", "wangwu",
                "zhangsan", "lisi", "wangwu"));

        JavaRDD<Row> rowRDD = listRDD.map(new Function<String, Row>() {
            @Override
            public Row call(String v1) throws Exception {
                return RowFactory.create(v1);
            }
        });

        /**
         * Build the DataFrame by creating the schema dynamically
         */
        List<StructField> list = new ArrayList<StructField>();

        list.add(DataTypes.createStructField("name",DataTypes.StringType,true));
        StructType schema = DataTypes.createStructType(list);

        DataFrame df = sqlSC.createDataFrame(rowRDD, schema);

        df.registerTempTable("user");

        /**
         * The number of UDF arguments determines which interface to implement: UDF1, UDF2, UDF3, ...
         */
        sqlSC.udf().register("StrLen", new UDF2<String, Integer, Integer>() {

            @Override
            public Integer call(String s, Integer integer) throws Exception {
                return s.length() - integer;
            }
        },DataTypes.IntegerType);

        sqlSC.sql("select name, StrLen(name,2) as length from user").show();

        sc.stop();
    }
}
5.2 UDAF: user-defined aggregate function
  • To implement a UDAF, define a class that extends UserDefinedAggregateFunction.
package com.shsxt.spark.sql.udf_udaf;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.RowFactory;
import org.apache.spark.sql.SQLContext;
import org.apache.spark.sql.expressions.MutableAggregationBuffer;
import org.apache.spark.sql.expressions.UserDefinedAggregateFunction;
import org.apache.spark.sql.types.DataType;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * UDAF: user-defined aggregate function
 */
public class UDAF {

    public static void main(String args[]){

        SparkConf conf = new SparkConf();
        conf.setMaster("local").setAppName("UDAF");
        conf.set("spark.sql.shuffle.partitions","1");

        JavaSparkContext sc = new JavaSparkContext(conf);
        SQLContext sqlSc = new SQLContext(sc);

        JavaRDD<String> parallelize = sc.parallelize(
                Arrays.asList("zhangsan", "lisi", "wangwu", "zhangsan", "zhangsan", "lisi","zhangsan",
                        "lisi", "wangwu", "zhangsan", "zhangsan", "lisi"),2);

        JavaRDD<Row> rowRDD = parallelize.map(new Function<String, Row>() {

            @Override
            public Row call(String v1) throws Exception {
                return RowFactory.create(v1);
            }
        });

        List<StructField> fields = new ArrayList<StructField>();
        fields.add(DataTypes.createStructField("name", DataTypes.StringType, true));
        StructType schema = DataTypes.createStructType(fields);
        DataFrame df = sqlSc.createDataFrame(rowRDD, schema);
        df.registerTempTable("user");

        /**
         * Register a UDAF that counts the occurrences of identical values.
         * Note: defining a separate class that extends UserDefinedAggregateFunction works equally well.
         */
        sqlSc.udf().register("StringCount", new UserDefinedAggregateFunction() {

            /**
             * Specifies the input field name(s) and type(s)
             * @return
             */
            @Override
            public StructType inputSchema() {
                return DataTypes.createStructType(Arrays.asList(DataTypes.createStructField("name",
                        DataTypes.StringType,true)));
            }

            /**
             * Specifies the result type returned by the UDAF
             * @return
             */
            @Override
            public DataType dataType() {
                return DataTypes.IntegerType;
            }

            /**
             * Consistency flag, usually true: marks whether the UDAF always produces the same result for the same input
             * @return
             */
            @Override
            public boolean deterministic() {
                return true;
            }

            /**
             * update: the values within a group are passed in one by one and accumulated into the buffer
             * buffer.getInt(0) is the value produced by the previous partial aggregation
             * This is comparable to a map-side combiner: each map task pre-aggregates its own output,
             * and the large-scale aggregation happens on the reduce side.
             * In other words: it defines how the per-group partial aggregate is updated whenever a new value arrives
             * @param buffer
             * @param input
             */
            @Override
            public void update(MutableAggregationBuffer buffer, Row input) {
                buffer.update(0,buffer.getInt(0)+1);
                System.out.println("update.....buffer"+buffer.toString()+"  |row"+input.toString());
            }

            /**
             * The schema of the intermediate buffer used during aggregation
             * @return
             */
            @Override
            public StructType bufferSchema() {
                return DataTypes.createStructType(Arrays.asList(DataTypes.
                        createStructField("buffer",DataTypes.IntegerType,true)));
            }

            /**
             * merge: an update may only cover part of a group's data on one node, while the rest of that group
             * is processed on other nodes. merge combines the partial buffers built on the different nodes.
             * buffer1.getInt(0): the value accumulated so far in the global aggregation
             * buffer2.getInt(0): the partial result produced by update and passed into this merge
             * In other words: a global-level merge is performed after the distributed partial aggregations finish
             */
            @Override
            public void merge(MutableAggregationBuffer buffer1, Row buffer2) {
                buffer1.update(0,buffer1.getInt(0)+buffer2.getInt(0));
                System.out.println("merge.......buffer"+buffer1.toString()+"   |row"+buffer2.toString());
            }

            /**
             * Initializes the internal buffer; each group's result is initialized before aggregation starts
             * @param buffer
             */
            @Override
            public void initialize(MutableAggregationBuffer buffer) {
                buffer.update(0,0);
            }

            /**
             * Returns the UDAF's final result; its type must match the one declared in dataType()
             * @param row
             * @return
             */
            @Override
            public Object evaluate(Row row) {
                return row.getInt(0);
            }
        });

        sqlSc.sql("select name, StringCount(name) as number from user group by name").show();

        sc.stop();
    }
}

6. Window functions

The row_number() window function partitions rows by one column and orders them by another, letting you take the first few rows of each group, which amounts to a per-group top-N.

If a SQL statement uses a window function, it must be executed with a HiveContext; by default a HiveContext cannot be created locally.

Window function syntax: row_number() over (partition by xxx order by yyy)

package com.shsxt.spark.sql.rowNumber;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.hive.HiveContext;

public class RowNumber {

    public static void main(String args[]){

        SparkConf conf = new SparkConf();
        conf.setMaster("local").setAppName("windowfun");

        JavaSparkContext sc = new JavaSparkContext(conf);

        HiveContext hiveSc = new HiveContext(sc);

        hiveSc.sql("use spark");
        hiveSc.sql("drop table if exists sales");

        hiveSc.sql("create table if not exists sales (riqi string,leibie string,jine Int) "
                + "row format delimited fields terminated by '\t'");

        hiveSc.sql("load data local inpath './data/sales.txt' into table sales");

        /**
         * Window function syntax:
         * row_number() over (partition by xxx order by yyy)
         */
        DataFrame result = hiveSc.sql("select riqi, leibie, jine from (" +
                "select riqi, leibie, jine, row_number() over " +
                "(partition by leibie order by jine desc) rank from sales) as t " +
                "where t.rank <= 3");
        result.show();

        sc.stop();

    }
}

sales.txt

1	A	1
1	A	2
1	B	3
1	A	4
1	C	5
1	B	6
1	C	7
1	A	8
1	D	9
1	F	10
2	B	11
2	D	12
2	A	13
2	E	14
2	F	15
2	F	16
2	A	17
2	G	18
2	B	19
2	C	20
3	B	21
3	F	22
3	A	23
3	E	24
3	G	25
3	E	26
3	F	27
3	A	28
3	C	29
3	G	30
4	F	31
4	B	32
4	D	33
4	E	34
4	E	35
4	B	36
4	A	37
4	F	38
4	F	39
4	E	40
5	G	41
5	B	42
5	F	43
5	F	44
5	A	45
5	E	46
5	G	47
5	E	48
5	F	49
5	B	50
6	C	51
6	F	52
6	B	53
6	A	54
6	D	55
6	F	56
6	B	57
6	A	58
6	F	59
6	B	60
7	A	61
7	F	62
7	E	63
7	G	64
7	G	65
7	G	66
7	B	67
7	F	68
7	E	69
7	F	70
8	E	71
8	F	72
8	A	73
8	C	74
8	G	75
8	D	76
8	A	77
8	E	78
8	D	79
8	A	80
9	C	81
9	B	82
9	E	83
9	F	84
9	E	85
9	C	86
9	F	87
9	A	88
9	G	89
9	E	90
1	G	91
2	A	92
3	D	93
4	E	94
5	C	95
6	F	96
7	E	97
8	B	98
9	A	99

The program is started in the same way as in section 3.6 (loading data from Hive into a DataFrame).