Preface
DataX is an offline data synchronization tool for heterogeneous data sources that is widely used inside Alibaba Group. It aims to provide stable and efficient data synchronization between all kinds of heterogeneous sources, including relational databases (MySQL, Oracle, etc.), HDFS, Hive, MaxCompute (formerly ODPS), HBase, and FTP.
As an offline synchronization framework, DataX is built on a Framework + plugin architecture: reading from and writing to data sources are abstracted into Reader/Writer plugins that slot into the synchronization framework. A fairly complete plugin ecosystem is already in place, covering the mainstream RDBMSs, NoSQL stores, and big-data computing systems.
Design Philosophy
To solve the problem of synchronizing heterogeneous data sources, DataX turns the complex mesh of point-to-point sync links into a star-shaped topology, with DataX itself acting as the central transport hub that connects every data source. When a new data source needs to be supported, it only has to be integrated with DataX once; it can then synchronize seamlessly with all data sources that are already connected.
Framework Design
As an offline synchronization framework, DataX is built on a Framework + plugin architecture: reading from and writing to data sources are abstracted into Reader/Writer plugins that slot into the synchronization framework.
- Reader: the data collection module. It reads data from the source and hands it to the Framework.
- Writer: the data writing module. It continuously pulls data from the Framework and writes it to the destination.
- Framework: connects Reader and Writer as the data transfer channel between them, and handles the core concerns of buffering, flow control, concurrency, and data conversion.
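This plugin model maps directly onto the job configuration: every DataX job is a single JSON file containing one reader, one writer, and a shared setting block. A minimal skeleton (the plugin names here are only examples, and the empty parameter objects are placeholders to be filled in per plugin) looks like this:
{
    "job": {
        "setting": {
            "speed": { "channel": 1 }
        },
        "content": [
            {
                "reader": { "name": "mysqlreader", "parameter": {} },
                "writer": { "name": "mysqlwriter", "parameter": {} }
            }
        ]
    }
}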
Advantages
1. Reliable data quality monitoring (data reaches the destination complete and intact).
2. Rich data transformation features.
3. Precise speed control.
DataX 3.0 offers three flow-control modes: channel (concurrency), record rate, and byte rate, so a job's speed can be tuned freely and pushed to the best throughput the databases involved can sustain (see the sketch after this list).
4. Strong synchronization performance.
Every reader plugin ships one or more split strategies that divide a job into multiple Tasks executed in parallel; with the single-process, multi-threaded execution model, DataX throughput scales roughly linearly with concurrency.
5. Robust fault tolerance (multi-level local/global retries).
6. Minimal-friction usage.
Download and run, with detailed log output.
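As a sketch of the flow-control knobs (the numbers are purely illustrative): all three modes live under job.setting.speed, where channel sets the number of concurrent channels, record caps the records-per-second rate, and byte caps the bytes-per-second rate (10485760 ≈ 10 MB/s).
"setting": {
    "speed": {
        "channel": 5,
        "record": 100000,
        "byte": 10485760
    }
}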
Job JSON files (based on the official docs)
JSON (MySQL ==> MySQL):
{
"job": {
"content": [
{
"reader": {
"name": "mysqlreader",
"parameter": {
"username": "root",
"password": "123456",
"column": ["name","age"],
"where": "age<100",
"connection": [
{
"table": [
"person"
],
"jdbcUrl": [
"jdbc:mysql://127.0.0.1:3306/test?characterEncoding=utf8"
]
}
]
}
},
"writer": {
"name": "mysqlwriter",
"parameter": {
"username": "root",
"password": "123456",
"column": ["name","age_true"],
"connection": [
{
"table": [
"person"
],
"jdbcUrl":"jdbc:mysql://127.0.0.1:3306/test?characterEncoding=utf8"
}
]
}
}
}
],
"setting": {
"speed": {
"channel": 1,
"byte": 104857600
},
"errorLimit": {
"record": 10,
"percentage": 0.05
}
}
}
}
Components:
A job file consists of three parts: the reader, the writer, and the common settings.
Reader section
Writer section
setting section
job.setting.speed (flow control)
The job supports user-defined speed control: the channel value controls the degree of concurrency during synchronization, and the byte value caps the transfer rate.
job.setting.errorLimit (dirty-data control)
The job supports user-defined monitoring and alerting on dirty data, via a maximum dirty-record count (the record value) and/or a maximum dirty-record ratio (the percentage value). If the dirty data produced during the transfer exceeds the user-specified count/percentage, the DataX job fails and exits.
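For instance, a zero-tolerance job that fails on the very first dirty record can be configured like this (a sketch; the threshold is up to you):
"errorLimit": {
    "record": 0
}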
JSON (MySQL ==> HDFS):
{
"job": {
"setting": {
"speed": {
"channel": 10,
"byte": 1000000,
"record": 100000
}
},
"content": [
{
"reader": {
"name": "mysqlreader",
"parameter": {
"username": "root",
"password": "root",
"connection": [
{
"querySql": [
"select db_id,on_line_flag from xxxx where db_id < 10;"
],
"jdbcUrl": [
"jdbc:mysql://127.0.0.1:3306/database"
]
}
]
}
},
"writer": {
"name": "hdfswriter",
"parameter": {
"defaultFS": "hdfs://shdc/",
"fileType": "text",
"path": "/user/hive/warehouse/ods_db.db/ods_zbt_ots_coin_account_record/dt=20191126/",
"fileName": "ods_zbt_ots_coin_account_record",
"column": [
{
"name": "id",
"type": "bigint"
},
{
"name": "userId",
"type": "bigint"
},
{
"name": "createTime",
"type": "bigint"
},
{
"name": "channel",
"type": "STRING"
},
{
"name": "coinType",
"type": "bigint"
},
{
"name": "uuid",
"type": "STRING"
},
{
"name": "bizCode",
"type": "string"
},
{
"name": "coinChangeNum",
"type": "bigint"
},
{
"name": "source",
"type": "STRING"
},
{
"name": "coinLeftNum",
"type": "bigint"
},
{
"name": "ext",
"type": "STRING"
},
{
"name": "updateTime",
"type": "bigint"
}
],
"writeMode": "append",
"fieldDelimiter": "\t",
"hadoopConfig": {
"dfs.nameservices": "shdc",
"dfs.ha.namenodes.shdc": "nn1,nn2",
"dfs.namenode.rpc-address.shdc.nn1": "dc-nn-01:8020",
"dfs.namenode.rpc-address.shdc.nn2": "dc-nn-02:8020",
"dfs.client.failover.proxy.provider.shdc": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
}
}
}
}]
}
}
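One operational note, based on general HdfsWriter behaviour rather than anything stated in this post: the writer expects the target directory to already exist, so the partition directory is typically created (and the Hive partition registered) before the job runs, e.g.:
hadoop fs -mkdir -p /user/hive/warehouse/ods_db.db/ods_zbt_ots_coin_account_record/dt=20191126/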
JSON (HDFS ==> MySQL):
{
"job": {
"setting": {
"speed": {
"channel": 3
}
},
"content": [{
"reader": {
"name": "hdfsreader",
"parameter": {
"path": "/user/hive/warehouse/ods_db.db/rpt_zbt_act_coins/dt=${vardate}/*",
"defaultFS": "hdfs://xxxx/",
"column": ["*"],
"fileType": "text",
"encoding": "UTF-8",
"fieldDelimiter": "\t",
"nullFormat": "\\N",
"hadoopConfig": {
"dfs.nameservices": "xxxx",
"dfs.ha.namenodes.xxxx": "nn1,nn2",
"dfs.namenode.rpc-address.xxxx.nn1": "dc-nn-01:8020",
"dfs.namenode.rpc-address.xxxx.nn2": "dc-nn-02:8020",
"dfs.client.failover.proxy.provider.xxxx": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
}
}
},
"writer": {
"name": "mysqlwriter",
"parameter": {
"writeMode": "insert",
"username": "name",
"password": "password",
"batchSize":"1024",
"column": ["logdt","act_code","act_name","coins","uv"],
"session": [],
"connection": [{
"table": ["xxxx"],
"jdbcUrl": "jdbc:mysql://xxx.xxx.xxx.xxx:3306/xx"
}]
}
}
}]
}
}
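Note that the ${vardate} placeholder in the reader's path is not resolved by DataX on its own; it has to be supplied when the job is launched (see the launch example at the end of this post).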
JSON (Hive ==> HBase):
{
"job": {
"setting": {
"speed": {
"channel": 5
}
},
"content": [
{
"reader": {
"name": "hdfsreader",
"parameter": {
"path": "/user/hive/warehouse/tmp.db/zbt_open/*",
"defaultFS": "hdfs://shdct",
"column": [
{
"index": 0,
"type": "String"
},
{
"index": 1,
"type": "String"
},
{
"index": 2,
"type": "String"
}
],
"fileType": "text",
"encoding": "UTF-8",
"fieldDelimiter": "\t",
"nullFormat": "\\N",
"hadoopConfig": {
"dfs.nameservices": "shdct",
"dfs.ha.namenodes.shdct": "nn1,nn2",
"dfs.namenode.rpc-address.shdct.nn1": "test-dc-nn-01:8020",
"dfs.namenode.rpc-address.shdct.nn2": "test-dc-nn-02:8020",
"dfs.client.failover.proxy.provider.shdct": "org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider"
}
}
},
"writer": {
"name": "hbase11xwriter",
"parameter": {
"hbaseConfig": {
"hbase.rootdir": "hdfs://shdct/hbase",
"hbase.cluster.distributed": "true",
"hbase.zookeeper.quorum": "test-dc-dn-01:2181,test-dc-dn-02:2181,test-dc-dn-03:2181"
},
"table": "zbt_open",
"mode": "normal",
"rowkeyColumn": [
{
"index":1,
"type":"string"
}
],
"column": [
{
"index":0,
"name": "data:uid",
"type": "string"
},
{
"index":1,
"name": "data:device",
"type": "string"
},
{
"index":2,
"name": "data:qid",
"type": "string"
}
],
"encoding": "utf-8"
}
}
}
]
}
}
(Note: Hive 3.1.2, HBase 2.1.7, Hadoop 3.1.2, DataX)
Launching:
Example:
python /home/hadoop/datax/bin/datax.py /home/hadoop/Jerry/job.json/test.json
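For jobs that use placeholders such as ${vardate} (as in the HDFS ==> MySQL example above), the value can be passed on the command line through datax.py's -p option, which forwards -D properties to the engine. A sketch, with an illustrative date:
python /home/hadoop/datax/bin/datax.py -p "-Dvardate=20191126" /home/hadoop/Jerry/job.json/test.json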
Source: CSDN
Author: 夜古诚
Link: https://blog.csdn.net/Jerry_991/article/details/102780051