Flink and Its Components

  • environment
  • Source
  • flink + kafka (Flink consumes data from Kafka)
  • Transform
  • Introduction to Transformations
  • More complex operations
  • Sink
  • Kafka Sink
  • Redis Sink
  • Elasticsearch
  • JDBC custom sink



environment

  1. getExecutionEnvironment
          Creates an execution environment that represents the context of the current program. If the program is invoked standalone, this method returns a local execution environment; if the program is submitted to a cluster from the command-line client, it returns that cluster's execution environment. In other words, getExecutionEnvironment decides which execution environment to return based on how the program is being run, which makes it the most commonly used way to create an execution environment.
val env: ExecutionEnvironment = ExecutionEnvironment.getExecutionEnvironment

      If no parallelism is set explicitly, the value configured in flink-conf.yaml is used; the default is 1.
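To override that value for a single job, the parallelism can also be set directly on the environment; a minimal sketch (the value 4 is arbitrary):
// override the parallelism from flink-conf.yaml for this job (the value is arbitrary)
env.setParallelism(4)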


  2. createLocalEnvironment
          Returns a local execution environment; the default parallelism has to be specified when it is called.
val env = StreamExecutionEnvironment.createLocalEnvironment(1)
  3. createRemoteEnvironment
          Returns a cluster execution environment and submits the Jar to the remote server. The JobManager's hostname/IP and port have to be specified when it is called, together with the Jar to run on the cluster.
val env = ExecutionEnvironment.createRemoteEnvironment("jobmanager-hostname", 6123,"C://jar//flink//wordcount.jar")

Source

flink + kafka (Flink consumes data from Kafka)

  1. Start ZooKeeper and Kafka.
  2. Create the topic and start a console producer.

bin/kafka-topics.sh --create --partitions 3 --replication-factor 2 --topic testnew --zookeeper vm0:2181,vm1:2181,vm2:2181
bin/kafka-console-producer.sh --broker-list vm0:9092,vm1:9092,vm2:9092 --topic testnew

  3. pom file
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.8_2.11</artifactId>
    <version>1.6.1</version>
</dependency>
  4. Code implementation
import java.util.Properties

import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer08

/**
  * Flink reads data from Kafka
  */
object KafkaCousumerToflink {
  def main(args: Array[String]): Unit = {
    // run the demo
    demo01
  }
  def demo01: Unit = {
    val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment
    val properties = new Properties()
    properties.setProperty("bootstrap.servers", "vm2:9092")
    // only required for Kafka 0.8
    properties.setProperty("zookeeper.connect", "vm2:2181")
    properties.setProperty("group.id", "test")
    // consume topic "mdj" as plain strings and print every record
    env
      .addSource(new FlinkKafkaConsumer08[String]("mdj", new SimpleStringSchema(), properties))
      .print()
    env.execute("KafkaCousumerToflink")
  }
}

Transform

Introduction to Transformations

Flink provides a large number of operators.

1. Map

Takes one element and produces one element; map applies a transformation function to each input element.
val streamMap = stream.map { x => x * 2 }

2. flatMap

Takes one element and produces zero, one, or more elements; typically used for splitting input.
val streamFlatMap = stream.flatMap {
  x => x.split(" ")
}

3. filter

Evaluates a boolean predicate for each element and keeps only the elements for which it returns true.
val streamFilter = stream.filter {
  x => x == 1
}

4. KeyBy

DataStream → KeyedStream: logically splits a stream into disjoint partitions; each partition contains all elements with the same key. Internally this is implemented with hash partitioning. When keying by field position (e.g. keyBy(0)), the input must be a Tuple type.
Note: the following types cannot be used as keys:

  1. POJO classes that do not override hashCode
  2. arrays of any kind
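
A minimal keyBy sketch that keys a stream of (channel, count) pairs by the first field and keeps a running sum (the sample elements are assumptions):
private def myKeyBy(env: StreamExecutionEnvironment): Unit = {
  val clicks: DataStream[(String, Int)] = env.fromElements(("app", 1), ("web", 1), ("app", 1))
  // keyBy(0) groups by the first tuple field; sum(1) keeps a running total per key
  val keyed: KeyedStream[(String, Int), Tuple] = clicks.keyBy(0)
  keyed.sum(1).print()
  env.execute()
}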

5. Distinct

Removes duplicate elements (a DataSet API operator).

6. join and outerJoin

Joins two data sets on a key (inner and outer variants).

7. cross

Computes the Cartesian product of two data sets.
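
distinct, join, and cross are batch (DataSet API) operators; a minimal sketch with assumed sample data and a batch ExecutionEnvironment:
import org.apache.flink.api.scala._

object BatchOperatorsDemo {
  def main(args: Array[String]): Unit = {
    val benv = ExecutionEnvironment.getExecutionEnvironment
    val users: DataSet[(Int, String)] = benv.fromElements((1, "tom"), (2, "jerry"), (2, "jerry"))
    val orders: DataSet[(Int, Double)] = benv.fromElements((1, 9.9), (2, 19.9))

    // distinct: drop the duplicate (2, "jerry")
    users.distinct().print()

    // join: match users and orders on the user id (field 0 of both data sets)
    users.join(orders).where(0).equalTo(0).print()

    // cross: Cartesian product of the two data sets
    users.cross(orders).print()
  }
}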

8. reduce

A rolling aggregation on a keyed stream: combines the current element with the previously reduced value and emits the new value.
// accumulate the running total per key
val value: DataStream[(Int, Int)] = env.fromElements((1, 2), (1, 3))
val kst: KeyedStream[(Int, Int), Tuple] = value.keyBy(0)
kst.reduce { (t1, t2) => (t1._1, t1._2 + t2._2) }.print().setParallelism(1)
env.execute()

9. fold

A rolling fold on a keyed stream: starts from an initial value and combines it with each incoming element.
private def myFold(env: StreamExecutionEnvironment): Unit = {
  val value: DataStream[(Int, Int)] = env.fromElements((1, 2), (1, 3))
  val kst: KeyedStream[(Int, Int), Tuple] = value.keyBy(0)
  // fold every element of a key into a "-"-separated string, starting from the empty string
  val ds: DataStream[String] = kst.fold("")((str, i) => {
    str + "-" + i
  })
  ds.print()
  env.execute()
}

More complex operations

  1. aggregation       KeyedStream → DataStream: rolling aggregations on a keyed stream. The difference between min and minBy is that min returns only the minimum value, while minBy returns the whole element that contains the minimum value in that field (the same applies to max and maxBy); see the sketch after the union example below.
  2. window       KeyedStream → WindowedStream: windows are defined on an already partitioned KeyedStream and group the data of each key according to some characteristic (for example, the data that arrived within the last 5 s).
  3. windowAll       DataStream → AllWindowedStream: windows can also be defined on a regular DataStream and group the whole stream according to some characteristic (for example, the data that arrived within the last 5 s). In most cases this is not a parallel operation: all records are collected into a single windowAll task.
  4. window apply       WindowedStream → DataStream, AllWindowedStream → DataStream: applies a general function to the window as a whole.
  5. window reduce        WindowedStream → DataStream: applies a reduce function to the window and returns the reduced result.
  6. window fold        WindowedStream → DataStream: applies a fold function to the window and returns the folded result.
  7. aggregations on windows        WindowedStream → DataStream: aggregates the elements of a window; again, min returns only the minimum value, while minBy returns the element containing the minimum value in that field (the same applies to max and maxBy).
  8. union


      DataStream → DataStream: unions two or more DataStreams into a new DataStream that contains all elements of all input streams. Note: if you union a DataStream with itself, each element will appear twice in the resulting DataStream.

private def myUnion(env: StreamExecutionEnvironment): Unit = {
  //myConnAndCoMap(env)
  val dsm: DataStream[Int] = env.fromElements(1, 3, 5)
  val dsm01: DataStream[Int] = env.fromElements(2, 4, 6)
  val unit: DataStream[Int] = dsm.union(dsm01)
  unit.print()
  env.execute()
}
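
Before continuing the list, a minimal sketch of the min/minBy distinction from item 1 (the 3-tuple sample data is an assumption): min only updates the minimum of the chosen field and keeps the other fields of the first element seen, while minBy emits the whole element that currently holds the minimum.
private def minVsMinBy(env: StreamExecutionEnvironment): Unit = {
  val ds: DataStream[(String, Int, Int)] = env.fromElements(("a", 3, 100), ("a", 1, 200), ("a", 2, 300))
  // min(1): running minimum of field 1; the other non-key fields stay those of the first element
  ds.keyBy(0).min(1).print()
  // minBy(1): the entire element that currently holds the minimum of field 1
  ds.keyBy(0).minBy(1).print()
  env.execute()
}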
  9. window join       DataStream, DataStream → DataStream: joins two DataStreams on a given key and a common window.
  10. window coGroup       DataStream, DataStream → DataStream: coGroups two DataStreams on a given key and a common window.
  11. connect


      DataStream, DataStream → ConnectedStreams: connects two data streams while preserving their respective types.

12. coMap, coFlatMap


      ConnectedStreams → DataStream: operates on a connected stream and works like map and flatMap, with a separate function for each of the two inputs.

// print the merged result after connecting the two streams
private def myConnAndCoMap(env: StreamExecutionEnvironment): Unit = {
  env.setParallelism(1)
  val src: DataStream[Int] = env.fromElements(1, 3, 5)
  val stringMap: DataStream[String] = src.map(line => "x " + line)
  val result = stringMap.connect(src).map(new CoMapFunction[String, Int, String] {
    override def map2(value: Int): String = {
      "x " + (value + 1)
    }
    override def map1(value: String): String = {
      value
    }
  })
  result.print()
  env.execute()
}
  13. split

          DataStream → SplitStream: splits one DataStream into two or more DataStreams according to some criteria.
  14. select


      SplitStream → DataStream: selects one or more DataStreams from a SplitStream.

private def selectAndSplit(env: StreamExecutionEnvironment): Unit = {
  val dsm: DataStream[Long] = env.fromElements(1L, 2L, 3L, 4L)
  // tag each element as "even" or "odd"
  val split: SplitStream[Long] = dsm.split(new OutputSelector[Long] {
    override def select(out: Long): java.lang.Iterable[String] = {
      val list = new java.util.ArrayList[String]()
      if (out % 2 == 0) {
        list.add("even")
      } else {
        list.add("odd")
      }
      list
    }
  })
  // keep only the elements tagged "odd"
  split.select("odd").print().setParallelism(1)
  env.execute()
}
  15. iterate      DataStream → IterativeStream → DataStream: creates a feedback loop in the dataflow by redirecting the output of one operator back to an earlier operator; useful for algorithms that continuously update a model.
  16. extract timestamps      DataStream → DataStream: extracts timestamps from the records so that windows that work with event time can use them; see the sketch below.
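
A minimal sketch that ties timestamp extraction (item 16) together with keyBy, a 5-second event-time window (item 2) and a window reduce (item 5); the (key, value, timestampMillis) element layout, the sample data, and the window size are assumptions:
import org.apache.flink.streaming.api.TimeCharacteristic
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

private def eventTimeWindowDemo(env: StreamExecutionEnvironment): Unit = {
  env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
  // (key, value, event timestamp in milliseconds) -- sample data is assumed
  val src: DataStream[(String, Int, Long)] =
    env.fromElements(("a", 1, 1000L), ("a", 2, 2000L), ("b", 3, 3000L))
  src
    .assignAscendingTimestamps(_._3)   // extract timestamps (item 16)
    .keyBy(_._1)                       // key by the first field
    .timeWindow(Time.seconds(5))       // 5 s event-time window (item 2)
    .reduce((t1, t2) => (t1._1, t1._2 + t2._2, math.max(t1._3, t2._3))) // window reduce (item 5)
    .print()
  env.execute()
}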

Sink


      Flink has no foreach-style method like Spark's that lets users iterate over the results directly. All output to external systems has to go through a Sink, and the final output of a job is wired up in a form similar to the following:

myDstream.addSink(new MySink(xxxx))

      The framework ships with sinks for a number of systems out of the box; for anything else, a user-defined sink has to be implemented, as sketched below.
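
For cases not covered by the provided connectors, a user-defined sink can be implemented by extending RichSinkFunction; a minimal sketch (the class name is hypothetical and println stands in for real connection and write logic):
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction

// hypothetical custom sink: open() would create a connection, invoke() writes each record
class MyPrintlnSink extends RichSinkFunction[String] {
  override def open(parameters: Configuration): Unit = {
    // set up connections / clients here
  }
  override def invoke(value: String): Unit = {
    println("sinking: " + value)
  }
  override def close(): Unit = {
    // release resources here
  }
}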


Kafka Sink

Create a Maven project and add the pom dependency.

<!-- https://mvnrepository.com/artifact/org.apache.flink/flink-connector-kafka-0.8_2.11 -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-kafka-0.8_2.11</artifactId>
    <version>1.6.1</version>
</dependency>

Code implementation

import java.util.Properties

import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer08

/**
  * Write data to Kafka
  */
object KafkaProducerFromFlink {
  def main(args: Array[String]): Unit = {
    val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment
    val dsm: DataStream[String] = env.fromElements("1", "2")
    val prop: Properties = new Properties()
    prop.setProperty("bootstrap.servers", "vm2:9092")
    // write every element to the Kafka topic "mdj" as a plain string
    val producer: FlinkKafkaProducer08[String] = new FlinkKafkaProducer08("mdj", new SimpleStringSchema(), prop)
    dsm.addSink(producer)
    env.execute()
  }
}

Redis Sink

Create a Maven project and add the pom dependency.

<!-- https://mvnrepository.com/artifact/org.apache.bahir/flink-connector-redis -->
<dependency>
    <groupId>org.apache.bahir</groupId>
    <artifactId>flink-connector-redis_2.11</artifactId>
    <version>1.0</version>
</dependency>

Code implementation

import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.flink.streaming.connectors.redis.RedisSink
import org.apache.flink.streaming.connectors.redis.common.config.FlinkJedisPoolConfig
import org.apache.flink.streaming.connectors.redis.common.mapper.{RedisCommand, RedisCommandDescription, RedisMapper}
import org.apache.flink.streaming.api.scala._

object MyRedisUtil {
  def main(args: Array[String]): Unit = {
    val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment
    env.fromCollection(List(("flink", "redis")))
      .map(x => (x._1, x._2 + ""))
      .addSink(MyRedisUtil.getRedisSink())
      .setParallelism(1)
    env.execute("redissink")
  }
  val conf = new FlinkJedisPoolConfig.Builder().setHost("192.168.44.127").setPort(6379).build()

  def getRedisSink(): RedisSink[(String,String)] ={
    new RedisSink[(String,String)](conf,new MyRedisMapper)
  }

  class MyRedisMapper extends RedisMapper[(String,String)]{
    override def getCommandDescription: RedisCommandDescription = {
      // use the Redis SET command for plain key/value pairs; the commented HSET variant would write into a hash instead
      //      new RedisCommandDescription(RedisCommand.HSET, "channel_count")
      new RedisCommandDescription(RedisCommand.SET, "myset")
    }

    override def getValueFromData(t: (String, String)): String = t._2

    override def getKeyFromData(t: (String, String)): String = t._1
  }
}

Elasticsearch

Add the pom dependencies

<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-connector-elasticsearch6_2.11</artifactId>
    <version>1.7.0</version>
</dependency>

<dependency>
    <groupId>org.apache.httpcomponents</groupId>
    <artifactId>httpclient</artifactId>
    <version>4.5.3</version>
</dependency>

Add MyEsUtil

import java.util
import com.alibaba.fastjson.{JSON, JSONObject}
import org.apache.flink.api.common.functions.RuntimeContext
import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import org.apache.flink.streaming.connectors.elasticsearch.{ElasticsearchSinkFunction, RequestIndexer}
import org.apache.flink.streaming.connectors.elasticsearch6.ElasticsearchSink
import org.apache.http.HttpHost
import org.elasticsearch.action.index.IndexRequest
import org.elasticsearch.client.Requests
import org.apache.flink.api.scala._
/**
  * Sink Flink data into Elasticsearch
  */

object MyEsUtil {
  def main(args: Array[String]): Unit = {
    val esSink: ElasticsearchSink[String] = MyEsUtil.getElasticSearchSink("gmall0503_startup")
    // get the execution environment
    val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment
    // the elements must be valid JSON strings, otherwise JSON.parseObject in the sink function fails
    val ds: DataStream[String] = env.fromCollection(List("""{"key1":"value1"}"""))
    ds.addSink(esSink)
    env.execute()
  }
  val httpHosts = new util.ArrayList[HttpHost]
  httpHosts.add(new HttpHost("vm0", 9200, "http"))
  httpHosts.add(new HttpHost("vm1", 9200, "http"))
  httpHosts.add(new HttpHost("vm2", 9200, "http"))
  def getElasticSearchSink(indexName: String): ElasticsearchSink[String] = {
    val esFunc = new ElasticsearchSinkFunction[String] {
      override def process(element: String, ctx: RuntimeContext, indexer: RequestIndexer): Unit = {
        println("试图保存:" + element)
        val jsonObj: JSONObject = JSON.parseObject(element)
        val indexRequest: IndexRequest = Requests.indexRequest().index(indexName).`type`("_doc").source(jsonObj)
        indexer.add(indexRequest)
        println("保存1条")
      }
    }
    val sinkBuilder = new ElasticsearchSink.Builder[String](httpHosts, esFunc)
    // maximum number of buffered actions before a bulk flush
    sinkBuilder.setBulkFlushMaxActions(10)
    sinkBuilder.build()
  }
}

JDBC custom sink

Add the pom dependencies

<!-- https://mvnrepository.com/artifact/mysql/mysql-connector-java -->
<dependency>
    <groupId>mysql</groupId>
    <artifactId>mysql-connector-java</artifactId>
    <version>5.1.44</version>
</dependency>

<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>druid</artifactId>
    <version>1.1.10</version>
</dependency>

Add MyJdbcSink

import java.sql.{Connection, DriverManager, PreparedStatement}

import com.bw.StreamSink.Stuents
import org.apache.flink.api.common.functions.MapFunction
import org.apache.flink.configuration.Configuration
import org.apache.flink.streaming.api.datastream.{DataStream, SingleOutputStreamOperator}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment
import org.apache.flink.streaming.api.functions.sink.RichSinkFunction
object MyJdbcSink {
  def main(args: Array[String]): Unit = {

    val env: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment
   // val source: DataStream[String] = env.socketTextStream("vm2", 9999)
    val source: DataStream[String] = env.readTextFile("d:/person.txt")
    // parse each "id,name,age" line into a Stuents object
    val map: SingleOutputStreamOperator[Stuents] = source.map(new MapFunction[String, Stuents]() {
      override def map(value: String): Stuents = {
        val split: Array[String] = value.split(",")
        val stu: Stuents = new Stuents
        println(split(0))
        stu.setId(split(0))
        stu.setName(split(1))
        stu.setAge(split(2).toInt)
        stu
      }
    })
    map.addSink(new SinkToMySql())

    env.execute("MyJdbcSink")
  }
  case class student(id: String, name: String,age:String)

  class SinkToMySql() extends RichSinkFunction[Stuents] {

    var conn: Connection = null
    var ps: PreparedStatement = null

    val driver = "com.mysql.jdbc.Driver"
    val url: String = "jdbc:mysql://vm2:3306/myflink"

    val username = "root"
    val password = "123456"
    // maxActive would only matter with a Druid connection pool; it is unused in this simple example
    val maxActive = "20"

    // one-time initialization: load the driver, open the connection, and prepare the statement
    override def open(parameters: Configuration): Unit = {
      super.open(parameters)
      Class.forName(driver)
      conn = DriverManager.getConnection(url, username, password)
      conn.setAutoCommit(false)
      // prepare the insert once and reuse it for every record (JDBC parameter indices start at 1)
      ps = conn.prepareStatement("insert into student(name,age) values(?,?)")
    }
    // called once for every incoming record
    override def invoke(value: Stuents): Unit = {
      ps.setString(1, value.getName)
      ps.setString(2, value.getAge.toString)
      ps.execute()
      conn.commit()
    }
    override def close(): Unit = {
      super.close()
      if (ps != null) {
        ps.close()
      }
      if (conn != null) {
        conn.close()
      }
    }
  }
}
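
For reference, a hypothetical d:/person.txt matching the id,name,age parsing above could look like this:
1,tom,18
2,jerry,20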