Flink CDC

1. Introduction to CDC

1.1 What is CDC

CDC is short for Change Data Capture. The core idea is to monitor and capture changes to a database (inserts, updates, and deletes of rows or tables), record those changes completely in the order they occur, and write them to a message broker for other services to subscribe to and consume.

1.2 Types of CDC

CDC approaches fall into two main categories: query-based and binlog-based. The table below summarizes the differences between them:

| | Query-based CDC | Binlog-based CDC |
| --- | --- | --- |
| Open-source products | Sqoop, Kafka JDBC Source | Canal, Maxwell, Debezium (embedded by Flink CDC) |
| Execution mode | Batch | Streaming |
| Captures every data change | No | Yes |
| Latency | High | Low |
| Extra load on the database | Yes | No |

1.3 Flink-CDC

The Flink community has developed the flink-cdc-connectors component, a source connector that can read both full snapshot data and incremental change data directly from databases such as MySQL and PostgreSQL.

It is open source and available at: https://github.com/ververica/flink-cdc-connectors

2. Flink CDC in Practice

2.1 Environment Setup

# 1. Open the MySQL configuration file
vim /etc/my.cnf

# 2. Add the following settings; cdc_test is the database to enable binlog for
server_id=1 #required when enabling binlog on MySQL 5.7
log_bin=mysql-bin #enables binlog and sets the binlog file base name
binlog_format=row #default format
expire_logs_days=7 #number of days to keep binlog files
binlog-do-db=cdc_test #database to write binlog for

# 3. Save and exit, then restart MySQL (the command below is for CentOS 6.9)
service mysqld restart

# 4. Go to /var/lib/mysql and run the following command to check for binlog files
ll | grep mysql-bin

# 5. If any files are listed, binlog is enabled
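
After the restart you can also confirm inside the mysql client with SHOW VARIABLES LIKE 'log_bin'; (it should report ON) and SHOW VARIABLES LIKE 'binlog_format'; (it should report ROW).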

2.2 Using the DataStream API

2.2.1 Dependencies

<dependencies>
	<dependency>
 		<groupId>org.apache.flink</groupId>
 		<artifactId>flink-java</artifactId>
 		<version>1.12.0</version>
 	</dependency>
 	
    <dependency>
 		<groupId>org.apache.flink</groupId>
 		<artifactId>flink-streaming-java_2.12</artifactId>
 		<version>1.12.0</version>
 	</dependency>
 
    <dependency>
 		<groupId>org.apache.flink</groupId>
 		<artifactId>flink-clients_2.12</artifactId>
 		<version>1.12.0</version>
 	</dependency>
 
    <dependency>
 		<groupId>org.apache.hadoop</groupId>
 		<artifactId>hadoop-client</artifactId>
 		<version>3.1.3</version>
 	</dependency>
 
    <dependency>
 		<groupId>mysql</groupId>
 		<artifactId>mysql-connector-java</artifactId>
 		<version>5.1.49</version>
	</dependency>

    <dependency>
 		<groupId>org.apache.flink</groupId>
 		<artifactId>flink-table-planner-blink_2.12</artifactId>
 		<version>1.12.0</version>
	</dependency>

    <!-- mysql-cdc -->
    <dependency>
    	<groupId>com.ververica</groupId>
        <artifactId>flink-connector-mysql-cdc</artifactId>
 		<version>2.0.0</version>
	</dependency>
 
    <dependency>
        <groupId>com.alibaba</groupId>
     	<artifactId>fastjson</artifactId>
     	<version>1.2.75</version>
 	</dependency>
</dependencies>
<build>
 	<plugins>
 		<plugin>
 			<groupId>org.apache.maven.plugins</groupId>
 			<artifactId>maven-assembly-plugin</artifactId>
 			<version>3.0.0</version>
 			<configuration>
 				<descriptorRefs>
 					<descriptorRef>jar-with-dependencies</descriptorRef>
 				</descriptorRefs>
 			</configuration>
 			<executions>
 				<execution>
 					<id>make-assembly</id>
 					<phase>package</phase>
 					<goals>
						<goal>single</goal>
 					</goals>
 				</execution>
 			</executions>
 		</plugin>
 	</plugins>
</build>

2.2.2 Writing the Code

A simple test:

package com.pzb;

import com.ververica.cdc.connectors.mysql.MySqlSource;
import com.ververica.cdc.connectors.mysql.table.StartupOptions;
import com.ververica.cdc.debezium.DebeziumSourceFunction;
import com.ververica.cdc.debezium.StringDebeziumDeserializationSchema;
import org.apache.flink.streaming.api.datastream.DataStreamSource;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

/**
 * @author 海绵先生
 * @Description TODO
 * @date 2023/1/12-20:03
 */
public class FlinkCDC {
    public static void main(String[] args) throws Exception {
        //1. Get the Flink execution environment
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(1);

        //2. Build the SourceFunction with Flink CDC
        /*
        * Pattern: MySqlSource.<String>builder().[options].build()
        * */
        DebeziumSourceFunction<String> sourceFunction = MySqlSource.<String>builder()//<String> is the output type; the default deserializer emits String
                .hostname("hadoop111")
                .port(3306)
                .username("root")
                .password("xxxx")
                .databaseList("cdc_test")// database(s) to monitor; if only databaseList is set, all of its tables are captured
                .tableList("cdc_test.user_info")// table(s) to monitor, in database.table format, since databaseList can cover multiple databases
                .deserializer(new StringDebeziumDeserializationSchema())
                .startupOptions(StartupOptions.initial())// startupOptions supports 5 modes: initial, earliest, latest, specificOffset, timestamp
                .build();

        // Read the data
        DataStreamSource<String> dataStreamSource = env.addSource(sourceFunction);

        // Print the data
        dataStreamSource.print();

        env.execute("FlinkCDC");
    }
}
  • initial: on the first start, read the table's existing historical data (operation type READ) and then keep writing checkpoints; on a restart, point the job at a specific checkpoint file so it can resume from that point. Even if Flink crashes, the restarted job reads from the last recorded offset rather than from latest. This only becomes useful after the job is packaged and deployed, because only then can the checkpoint location be specified.
  • earliest: read from the first entry of the binlog; this only works as intended if binlog was enabled before the database and its data were created, so that the log contains the full history.
  • latest: read only the latest changes, starting from the moment the Flink job starts.
  • specificOffset: specify a binlog file and the offset to start reading from; rarely used in practice, because the offset is not stored locally and it is hard to know which offset to resume from.
  • timestamp: read binlog data starting from a given point in time. (The sketch below shows how each of these modes is set on the builder.)
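
For reference, the other startup modes are chosen the same way through the builder. A minimal sketch, assuming the StartupOptions factory methods of flink-connector-mysql-cdc 2.0 (the binlog file name, position, and timestamp are placeholder values, not taken from this setup):

        // Alternatives for the .startupOptions(...) call in the builder above
        StartupOptions fromEarliest = StartupOptions.earliest();                            // from the first binlog entry
        StartupOptions fromLatest = StartupOptions.latest();                                // only changes made after the job starts
        StartupOptions fromOffset = StartupOptions.specificOffset("mysql-bin.000001", 4);   // placeholder binlog file and position
        StartupOptions fromTimestamp = StartupOptions.timestamp(1673530792000L);            // placeholder epoch milliseconds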

Add, modify, and delete rows in the monitored table and observe the output:

<!-- Read (snapshot) -->
SourceRecord{sourcePartition={server=mysql_binlog_source}, sourceOffset={ts_sec=1673530792, file=mysql-bin.000001, pos=444, snapshot=true}} ConnectRecord{topic='mysql_binlog_source.cdc_test.user_info', kafkaPartition=null, key=Struct{id=1001}, keySchema=Schema{mysql_binlog_source.cdc_test.user_info.Key:STRUCT}, value=Struct{after=Struct{id=1001,name=zhangsan,sex=male},source=Struct{version=1.5.2.Final,connector=mysql,name=mysql_binlog_source,ts_ms=1673530792529,snapshot=true,db=cdc_test,table=user_info,server_id=0,file=mysql-bin.000001,pos=444,row=0},op=r,ts_ms=1673530792532}, valueSchema=Schema{mysql_binlog_source.cdc_test.user_info.Envelope:STRUCT}, timestamp=null, headers=ConnectHeaders(headers=)}

<!-- one record omitted here -->

<!-- Insert -->
SourceRecord{sourcePartition={server=mysql_binlog_source}, sourceOffset={transaction_id=null, ts_sec=1673531033, file=mysql-bin.000001, pos=509, row=1, server_id=1, event=2}} ConnectRecord{topic='mysql_binlog_source.cdc_test.user_info', kafkaPartition=null, key=Struct{id=1003}, keySchema=Schema{mysql_binlog_source.cdc_test.user_info.Key:STRUCT}, value=Struct{after=Struct{id=1003,name=wangwu,sex=famale},source=Struct{version=1.5.2.Final,connector=mysql,name=mysql_binlog_source,ts_ms=1673531033000,db=cdc_test,table=user_info,server_id=1,file=mysql-bin.000001,pos=649,row=0},op=c,ts_ms=1673531029606}, valueSchema=Schema{mysql_binlog_source.cdc_test.user_info.Envelope:STRUCT}, timestamp=null, headers=ConnectHeaders(headers=)}

<!-- Update -->
SourceRecord{sourcePartition={server=mysql_binlog_source}, sourceOffset={transaction_id=null, ts_sec=1673531342, file=mysql-bin.000001, pos=803, row=1, server_id=1, event=2}} ConnectRecord{topic='mysql_binlog_source.cdc_test.user_info', kafkaPartition=null, key=Struct{id=1003}, keySchema=Schema{mysql_binlog_source.cdc_test.user_info.Key:STRUCT}, value=Struct{before=Struct{id=1003,name=wangwu,sex=famale},after=Struct{id=1003,name=wangwu,sex=male},source=Struct{version=1.5.2.Final,connector=mysql,name=mysql_binlog_source,ts_ms=1673531342000,db=cdc_test,table=user_info,server_id=1,file=mysql-bin.000001,pos=943,row=0},op=u,ts_ms=1673531338605}, valueSchema=Schema{mysql_binlog_source.cdc_test.user_info.Envelope:STRUCT}, timestamp=null, headers=ConnectHeaders(headers=)}

<!-- Delete -->
SourceRecord{sourcePartition={server=mysql_binlog_source}, sourceOffset={transaction_id=null, ts_sec=1673531548, file=mysql-bin.000001, pos=1119, row=1, server_id=1, event=2}} ConnectRecord{topic='mysql_binlog_source.cdc_test.user_info', kafkaPartition=null, key=Struct{id=1002}, keySchema=Schema{mysql_binlog_source.cdc_test.user_info.Key:STRUCT}, value=Struct{before=Struct{id=1002,name=lisi,sex=male},source=Struct{version=1.5.2.Final,connector=mysql,name=mysql_binlog_source,ts_ms=1673531548000,db=cdc_test,table=user_info,server_id=1,file=mysql-bin.000001,pos=1259,row=0},op=d,ts_ms=1673531544601}, valueSchema=Schema{mysql_binlog_source.cdc_test.user_info.Envelope:STRUCT}, timestamp=null, headers=ConnectHeaders(headers=)}


In the records above, a snapshot read has op=r, an insert op=c, an update op=u, and a delete op=d.

With the initial startup mode, simply restarting the program (without restoring from a checkpoint) consumes the historical data again, all as READ records.
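
To actually resume from the recorded binlog offset after a restart, checkpointing has to be enabled and the restarted job has to be pointed at a checkpoint. A minimal sketch, assuming an HDFS state backend (the interval, HDFS path, and jar name are placeholders, not values from this setup):

        // requires org.apache.flink.streaming.api.CheckpointingMode and org.apache.flink.runtime.state.filesystem.FsStateBackend
        env.enableCheckpointing(5000L);// example: checkpoint every 5 seconds
        env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
        env.setStateBackend(new FsStateBackend("hdfs://hadoop111:8020/flink-cdc/ck"));// placeholder checkpoint path

        // after packaging and deploying, restart from a retained checkpoint, for example:
        // bin/flink run -s hdfs://hadoop111:8020/flink-cdc/ck/<job-id>/chk-xx -c com.pzb.FlinkCDC flink-cdc.jar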

2.3 Using the Flink SQL API

// Get the execution environment and create a streaming table environment
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.setParallelism(1);
StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

// Register the MySQL table as a CDC source with Flink SQL
tableEnv.executeSql("CREATE TABLE user_info ( " +
        " id STRING, " +
        " name STRING, " +
        " sex STRING, " +
        " PRIMARY KEY (id) NOT ENFORCED " + // Flink requires primary keys to be declared NOT ENFORCED
        " ) WITH ( " +
        " 'connector' = 'mysql-cdc', " +
        " 'scan.startup.mode' = 'initial', " +
        " 'hostname' = 'hadoop111', " +
        " 'port' = '3306', " +
        " 'username' = 'root', " +
        " 'password' = '1234', " +
        " 'database-name' = 'cdc_test', " +
        " 'table-name' = 'user_info' " +
        ")");

// Query the table and convert the result to a retract stream for output
Table table = tableEnv.sqlQuery("select * from user_info");
DataStream<Tuple2<Boolean, Row>> retractStream = tableEnv.toRetractStream(table, Row.class);
retractStream.print();

env.execute("FlinkSQLCDC");
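
Because a CDC table produces updates and deletes, the result has to be converted with toRetractStream rather than toAppendStream; the Boolean in each Tuple2 marks whether the row is being added (true) or retracted (false).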

Results:

/********** Read **********/
(true,+I[1001, zhangsan, male])
(true,+I[1003, wangwu, female])
/********** Insert **********/
(true,+I[1002, lisi, male])
/********** Update **********/
(false,-U[1002, lisi, male])
(true,+U[1002, lisi, female])
/********** Delete **********/
(false,-D[1002, lisi, female])

2.4 Custom Deserialization Schema

package com.pzb;

import com.alibaba.fastjson.JSONObject;
import com.ververica.cdc.debezium.DebeziumDeserializationSchema;
import io.debezium.data.Envelope;
import org.apache.flink.api.common.typeinfo.BasicTypeInfo;
import org.apache.flink.api.common.typeinfo.TypeInformation;
import org.apache.flink.util.Collector;
import org.apache.kafka.connect.data.Field;
import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.Struct;
import org.apache.kafka.connect.source.SourceRecord;

import java.util.List;


/**
 * @author 海绵先生
 * @Description TODO Custom deserialization for the DataStream-style CDC source
 * @date 2023/1/13-21:26
 */
/* The default String-format record produced by the built-in StringDebeziumDeserializationSchema:
SourceRecord{sourcePartition={server=mysql_binlog_source}, sourceOffset={transaction_id=null, ts_sec=1673531342, file=mysql-bin.000001, pos=803, row=1, server_id=1, event=2}} ConnectRecord{topic='mysql_binlog_source.cdc_test.user_info', kafkaPartition=null, key=Struct{id=1003}, keySchema=Schema{mysql_binlog_source.cdc_test.user_info.Key:STRUCT}, value=Struct{before=Struct{id=1003,name=wangwu,sex=famale},after=Struct{id=1003,name=wangwu,sex=male},source=Struct{version=1.5.2.Final,connector=mysql,name=mysql_binlog_source,ts_ms=1673531342000,db=cdc_test,table=user_info,server_id=1,file=mysql-bin.000001,pos=943,row=0},op=u,ts_ms=1673531338605}, valueSchema=Schema{mysql_binlog_source.cdc_test.user_info.Envelope:STRUCT}, timestamp=null, headers=ConnectHeaders(headers=)}
* */
public class CustomerDeserializationSchema implements DebeziumDeserializationSchema<String> {

    /*TODO Decide on the desired output format first:
    * {
    *   "db":"",
    *   "tableName":"",
    *   "before":{"id":"1001","name":""...},
    *   "after":{"id":"1001","name":""...},
    *   "op":""
    * }
    * */

    @Override
    public void deserialize(SourceRecord sourceRecord, Collector<String> collector) throws Exception {
        //Create a JSON object to hold the result
        JSONObject result = new JSONObject();

        //Get the database name & table name
        String topic = sourceRecord.topic();//values are read off sourceRecord with the accessors matching the sample record above
        //Result: topic='mysql_binlog_source.cdc_test.user_info'
        String[] fields = topic.split("\\.");//split on `.` (the dot has to be escaped)

        //Add the database and table name as key-value pairs
        result.put("db",fields[1]);
        result.put("tableName",fields[2]);

        //Get the "before" data
        Struct value = (Struct) sourceRecord.value();//a cast is needed; note the import is org.apache.kafka.connect.data.Struct
        Struct before = value.getStruct("before");//fetch the data of the "before" field
        JSONObject beforeJson = new JSONObject();
        if (before != null){// "before" can be null (e.g. snapshot reads [op=r], inserts [op=c]...), so check it first
            Schema schema = before.schema();//get the schema of "before"
            List<Field> fieldList = schema.fields();//get all fields inside "before"

            for (Field field : fieldList){
                // field.name() gives the column name; before.get(field) gives the value for that column
                beforeJson.put(field.name(), before.get(field));
            }
        }
        result.put("before",beforeJson);//add the "before" data

        //Get "after" in the same way
        Struct after = value.getStruct("after");
        JSONObject afterJson = new JSONObject();
        if (after != null){
            Schema schema = after.schema();
            List<Field> fieldList = schema.fields();

            for (Field field : fieldList){
                afterJson.put(field.name(), after.get(field));
            }
        }
        result.put("after",afterJson);

        //Get the operation type (it cannot be read directly from sourceRecord)
        Envelope.Operation operation = Envelope.operationFor(sourceRecord);//note the import: io.debezium.data.Envelope
        //Add the operation to the result
        result.put("op",operation);

        //Emit the record
        collector.collect(result.toJSONString());
    }

    @Override
    public TypeInformation<String> getProducedType() {
        // the produced type
        return BasicTypeInfo.STRING_TYPE_INFO;
    }
}
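
To use it, plug the custom schema into the DataStream job from 2.2.2 in place of the default String deserializer; only the deserializer(...) line of the builder changes:

        DebeziumSourceFunction<String> sourceFunction = MySqlSource.<String>builder()
                .hostname("hadoop111")
                .port(3306)
                .username("root")
                .password("xxxx")
                .databaseList("cdc_test")
                .tableList("cdc_test.user_info")
                .deserializer(new CustomerDeserializationSchema())// emit the JSON format defined above instead of the default String
                .startupOptions(StartupOptions.initial())
                .build();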

Output with the custom deserializer:

{"op":"READ","before":{},"after":{"sex":"male","name":"zhangsan","id":"1001"},"db":"cdc_test","tableName":"user_info"}
{"op":"READ","before":{},"after":{"sex":"male","name":"wangwu","id":"1003"},"db":"cdc_test","tableName":"user_info"}
{"op":"UPDATE","before":{"sex":"male","name":"wangwu","id":"1003"},"after":{"sex":"female","name":"wangwu","id":"1003"},"db":"cdc_test","tableName":"user_info"}
{"op":"CREATE","before":{},"after":{"sex":"male","name":"lisi","id":"1002"},"db":"cdc_test","tableName":"user_info"}
{"op":"DELETE","before":{"sex":"male","name":"lisi","id":"1002"},"after":{},"db":"cdc_test","tableName":"user_info"}

Before writing a custom deserializer, be clear about exactly what output format you want.