〇、Related Resources
一、DataX
1.1 Full Sync
1、PostgreSQL -> ClickHouse
{
"job": {
"content": [
{
"reader": {
"name": "postgresqlreader",
"parameter": {
"username": "postgres",
"password": "admin123456",
"connection": [
{
"querySql": [
"select * from ods.tb_name"
],
"jdbcUrl": [
"jdbc:postgresql://192.168.56.83:5432/bigdata"
]
}
]
}
},
"writer": {
"name": "clickhousewriter",
"parameter": {
"username": "default",
"password": "admin123456",
"column": [
"system_version",
"latest"
],
"connection": [
{
"table": [
"test_ljh.tb_name"
],
"jdbcUrl": "jdbc:clickhouse://192.168.56.81:8123/test_ljh"
}
]
}
}
}
],
"setting": {
"speed": {
"channel": 1,
"record": 1000
},
"errorLimit": {
"record": 0,
"percentage": 0
}
}
}
}
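For large tables, the reader can also run in table/column mode with a splitPk instead of querySql, letting DataX split the read across channels (splitPk only takes effect when channel > 1). A minimal sketch, reusing the connection details above and assuming a hypothetical integer primary key named id:
"reader": {
  "name": "postgresqlreader",
  "parameter": {
    "username": "postgres",
    "password": "admin123456",
    "column": ["id", "system_version", "latest"],
    "splitPk": "id",
    "connection": [
      {
        "table": ["ods.tb_name"],
        "jdbcUrl": ["jdbc:postgresql://192.168.56.83:5432/bigdata"]
      }
    ]
  }
}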
1.2 Incremental Sync
1、PostgreSQL -> MySQL
{
"job": {
"setting": {
"speed": {
"channel": 1,
"record": 500,
"byte": 1000
},
"errorLimit": {
"percentage": 0,
"record": 0
}
},
"content": [
{
"reader": {
"name": "postgresqlreader",
"parameter": {
"connection": [
{
"jdbcUrl": [
"jdbc:postgresql://1sss6:5432/qqq"
],
"querySql": [
"select xx"
]
}
],
"password": "wae321",
"username": "sdwa"
}
},
"writer": {
"name": "mysqlwriter",
"parameter": {
"username": "234was",
"password": "Z2323dfsd1",
"column": [
"user_account",
"user_name",
"email",
"del_flag",
"status"
],
"connection": [
{
"jdbcUrl": "jdbc:mysql://1sad3215:2112/bouldeawfasnter?useUnicode=true&characterEncoding=utf8&allowLoadLocalInfile=false&autoDeserialize=false&allowLocalInfile=false&allowUrlInLocalInfile=false",
"table": [
"qwe2.12e3ea"
]
}
],
"writeMode": "update(user_account)"
}
}
}
]
}
}
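The querySql above is where the incremental condition goes. A minimal sketch, assuming the source table (hypothetical name src_schema.src_table) has an update_time watermark column; ${start_time} is a DataX job variable supplied at submit time via -p "-Dstart_time=...":
"querySql": [
  "select user_account, user_name, email, del_flag, status from src_schema.src_table where update_time > '${start_time}'"
]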
1.3 Other Operations
1、Scheduler integration
Provide the input SQL, the pre-statements & post-statements, and the statement delimiter (see the preSql/postSql sketch below)
Custom job template
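A minimal sketch of pre-/post-statements as they would sit inside a writer's parameter block (mysqlwriter shown; the target table name is hypothetical). preSql runs before the data is written, postSql after the load completes:
"preSql": ["truncate table target_db.target_table"],
"postSql": ["analyze table target_db.target_table"]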
2、Using a transformer
{
"job": {
"content": [
{
"reader": {
"name": "postgresqlreader",
"parameter": {
"username": "postgres",
"password": "ew5t43y65565hrftg71",
"connection": [
{
"querySql": [
"select * from xxx"
],
"jdbcUrl": [
"jdbc:postgresql://198.151.51.44:5432/metabase"
]
}
]
}
},
"transformer": [
{
"name": "dx_replace",
"parameter": {
"columnIndex": 17,
"paras":["1","1","sedew"]
}
}
],
"writer": {
"name": "clickhousewriter",
"parameter": {
"username": "default",
"password": "admin123456",
"column": [
"system_version",
"latest"
],
"connection": [
{
"table": [
"wf4.fgdszgwe"
],
"jdbcUrl": "jdbc:clickhouse://198.151.51.44:8123/efes"
}
]
}
}
}
],
"setting": {
"speed": {
"channel": 1
},
"errorLimit": {
"record": 0
}
}
}
}
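In dx_replace, paras are (start offset, length to replace, replacement string), so the example above rewrites one character of column 17 with "sedew". Records can also be dropped with the built-in dx_filter; a minimal sketch, assuming column 0 holds a numeric value and rows matching the condition should be filtered out:
"transformer": [
  {
    "name": "dx_filter",
    "parameter": {
      "columnIndex": 0,
      "paras": [">", "100"]
    }
  }
]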
Custom transformer walkthrough: DataX二次开发-自定义transformer-字段汇总转json_哥们要飞的blog的技术博客_51CTO博客
二、Kafka Connect
2.1 Confluent (commercial edition)
1、JDBC source & sink
Reference: kafka JDBC/DB connector配置-Confluent&Debezium_哥们要飞的blog的技术博客_51CTO博客
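A minimal Confluent JDBC source sketch to pair with the reference above (timestamp+incrementing mode; host, credentials, table, and column names are placeholders, not values from this document):
{
  "name": "JdbcSourceConnector-ljh",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "tasks.max": "1",
    "connection.url": "jdbc:postgresql://<host>:5432/<db>",
    "connection.user": "<user>",
    "connection.password": "<password>",
    "table.whitelist": "public.tb_name",
    "mode": "timestamp+incrementing",
    "timestamp.column.name": "update_time",
    "incrementing.column.name": "id",
    "topic.prefix": "pg-",
    "poll.interval.ms": "5000"
  }
}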
2、RabbitMQ source
{
"name": "RabbitMQSourceConnector1-ljh",
"config": {
"connector.class": "io.confluent.connect.rabbitmq.RabbitMQSourceConnector",
"tasks.max": "1",
"key.converter": "org.apache.kafka.connect.storage.StringConverter",
"value.converter": "org.apache.kafka.connect.converters.ByteArrayConverter",
"confluent.topic.bootstrap.servers": "advertised.listeners:9092",
"kafka.topic": "rabbitmq-test-ljh",
"rabbitmq.queue": "gemenyaofei",
"rabbitmq.host": "1ip8",
"rabbitmq.username": "admin",
"rabbitmq.password": "admin",
"nums.partition": 1,
"topic.creation.default.replication.factor": 1,
"topic.creation.default.partitions": 1,
"confluent.topic.replication.factor": 1
}
}
Related walkthrough: 基于Confluent实现发送RabbitMQ消息自动建表拉取数据_哥们要飞的blog的技术博客_51CTO博客
2.2 Debezium (log-based CDC)
1、JDBC sink
Reference: kafka JDBC/DB connector配置-Confluent&Debezium_哥们要飞的blog的技术博客_51CTO博客
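A minimal JDBC sink sketch (Confluent JdbcSinkConnector; URL, topic, and key column are placeholders). When the topic carries Debezium change events, an unwrap transform is typically added so the sink sees flat rows, assuming the Debezium ExtractNewRecordState SMT jar is on the plugin path:
{
  "name": "JdbcSinkConnector-ljh",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "1",
    "topics": "pg-tb_name",
    "connection.url": "jdbc:mysql://<host>:3306/<db>",
    "connection.user": "<user>",
    "connection.password": "<password>",
    "insert.mode": "upsert",
    "pk.mode": "record_key",
    "pk.fields": "id",
    "auto.create": "true",
    "auto.evolve": "true",
    "transforms": "unwrap",
    "transforms.unwrap.type": "io.debezium.transforms.ExtractNewRecordState"
  }
}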
2、PostgreSQL source
Reference: kafka JDBC/DB connector配置-Confluent&Debezium_哥们要飞的blog的技术博客_51CTO博客
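A minimal Debezium PostgreSQL source sketch (Debezium 2.x naming, pgoutput plugin; host, credentials, slot, and table list are placeholders; older 1.x versions use database.server.name instead of topic.prefix):
{
  "name": "PostgresSourceConnector-ljh",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "tasks.max": "1",
    "plugin.name": "pgoutput",
    "database.hostname": "<host>",
    "database.port": "5432",
    "database.user": "<user>",
    "database.password": "<password>",
    "database.dbname": "bigdata",
    "slot.name": "debezium_slot",
    "table.include.list": "ods.tb_name",
    "topic.prefix": "pgcdc"
  }
}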
三、Flume
Log collection
四、Kettle
4.1 Batch-syncing multiple tables
Reference: Kettle:跨库(SQLServer->PostgreSQL)同步多张表数据的详细设计过程_哥们要飞的blog的技术博客_51CTO博客
五、Dbswitch
Domestic (Chinese-vendor) data sources
六、Sqoop
Data transfer between Hadoop and RDBMSs
七、OGG
Oracle data synchronization (Oracle GoldenGate)