DataX: Working with MySQL
Posted by 嘣嘣嚓
一、Reading from MySQL
Introduction
The MysqlReader plugin reads data from MySQL. Under the hood, MysqlReader connects to the remote MySQL database over JDBC and executes the appropriate SQL statements to SELECT the data out of the database.
Unlike the readers for other relational databases, MysqlReader does not support FetchSize.
How it works
In short, MysqlReader connects to the remote MySQL database through a JDBC connector, generates a SELECT statement from the user's configuration, and sends it to the remote MySQL database; it then assembles the returned result set into DataX's abstract record format using DataX's own data types and passes the records to the downstream Writer.
When the user configures table, column, and where, MysqlReader splices them into a SQL statement and sends it to MySQL; when the user configures querySql, MysqlReader sends that SQL to MySQL as-is.
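The splicing described above can be sketched in a few lines of Python. This is an illustration only; `build_query` is a made-up helper, not DataX's actual implementation:

```python
def build_query(parameter: dict) -> str:
    """Assemble the SELECT the way MysqlReader is described to:
    use querySql verbatim if present, otherwise splice column/table/where."""
    if "querySql" in parameter:               # querySql takes precedence outright
        return parameter["querySql"]
    sql = "select {} from {}".format(
        ",".join(parameter["column"]),
        parameter["connection"][0]["table"][0],
    )
    if parameter.get("where"):                # where is optional
        sql += " where " + parameter["where"]
    return sql

# Mirrors the reader block of the job JSON below
param = {
    "column": ["id", "name"],
    "connection": [{"table": ["datax_test"]}],
}
print(build_query(param))  # select id,name from datax_test
```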
The job JSON is as follows:

{
    "job": {
        "setting": {
            "speed": { "channel": 3 },
            "errorLimit": { "record": 0, "percentage": 0.02 }
        },
        "content": [{
            "reader": {
                "name": "mysqlreader",
                "parameter": {
                    "username": "root",
                    "password": "123456",
                    "column": ["id", "name"],
                    "splitPk": "id",
                    "connection": [{
                        "table": ["datax_test"],
                        "jdbcUrl": ["jdbc:mysql://192.168.1.123:3306/test"]
                    }]
                }
            },
            "writer": {
                "name": "streamwriter",
                "parameter": { "print": true }
            }
        }]
    }
}
Parameter reference
--jdbcUrl
Description: the JDBC connection information for the source database, written as a JSON array; one database may list multiple connection addresses. A JSON array is used because Alibaba's internal deployments support probing multiple IPs: when several are configured, MysqlReader probes them in order until it finds a usable one, and if all of them fail to connect, MysqlReader reports an error. Note that jdbcUrl must be nested inside a connection element. For use outside Alibaba, a single JDBC URL in the array is enough.
jdbcUrl follows the official MySQL format and may carry additional connection-control options.
Required: yes
Default: none
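The probe-in-order behavior can be sketched as follows. This is illustrative Python, not DataX code; `pick_jdbc_url` and the `can_connect` predicate are made-up names, and the first address is a hypothetical extra replica:

```python
def pick_jdbc_url(urls, can_connect):
    """Return the first address that accepts a connection, in config order;
    raise if none does (MysqlReader reports an error in that case)."""
    for url in urls:
        if can_connect(url):
            return url
    raise ConnectionError("no reachable jdbcUrl in: %s" % urls)

urls = [
    "jdbc:mysql://10.0.0.1:3306/test",       # hypothetical first replica
    "jdbc:mysql://192.168.1.123:3306/test",  # the address used in this article
]
# Pretend only the second address is reachable
print(pick_jdbc_url(urls, lambda u: "192.168.1.123" in u))
```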
--username
Description: username for the data source.
Required: yes
Default: none
--password
Description: password for the specified username.
Required: yes
Default: none
--table
Description: the table(s) to synchronize, written as a JSON array, so multiple tables can be extracted at once. When several tables are configured, the user must ensure they all share the same schema; MysqlReader does not check whether they are logically the same table. Note that table must be nested inside a connection element.
Required: yes
Default: none
--column
Description: the set of columns to synchronize from the configured table, written as a JSON array. Use * to select all columns, e.g. ["*"].
Column pruning is supported: you may export only a subset of the columns.
Column reordering is supported: columns need not be exported in schema order.
Constants are supported, written in MySQL SQL syntax, e.g. ["id", "`table`", "1", "'bazhen.csy'", "null", "to_char(a + 1)", "2.3", "true"], where id is an ordinary column name, `table` is a column whose name is a reserved word, 1 is an integer constant, 'bazhen.csy' is a string constant, null is a null value, to_char(a + 1) is an expression, 2.3 is a floating-point constant, and true is a boolean.
Required: yes
Default: none
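Because the column entries are spliced into the SELECT list verbatim, the mixed example above renders like this (a Python illustration of the splicing, not DataX code):

```python
# Every entry of the column array appears in the SELECT list exactly as written,
# so constants, expressions, and quoted identifiers all pass straight through.
columns = ["id", "`table`", "1", "'bazhen.csy'", "null", "to_char(a + 1)", "2.3", "true"]
select_list = ",".join(columns)
print("select %s from datax_test" % select_list)
# select id,`table`,1,'bazhen.csy',null,to_char(a + 1),2.3,true from datax_test
```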
--splitPk
Description: if splitPk is specified, MysqlReader shards the extraction on that field, and DataX launches concurrent tasks to synchronize the data, which can greatly improve throughput.
It is recommended to use the table's primary key as splitPk, since primary keys are usually evenly distributed, so the resulting shards are unlikely to contain data hot spots.
Currently splitPk supports sharding on integer fields only; floating-point, string, date, and other types are not supported, and MysqlReader reports an error if one of them is specified.
If splitPk is absent or its value is empty, DataX synchronizes the table through a single channel.
Required: no
Default: empty
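The sharding can be sketched as follows. `split_ranges` is an illustrative helper, not DataX's SingleTableSplitUtil; real DataX derives the shard count from the channel setting, and always adds an IS NULL shard so rows with a NULL split key are not lost:

```python
def split_ranges(pk, min_val, max_val, n):
    """Split [min_val, max_val] into n integer ranges on pk, plus an
    IS NULL shard, producing WHERE clauses like those in the run log."""
    step = max(1, (max_val - min_val) // n)
    clauses, lo = [], min_val
    for i in range(n):
        hi = max_val if i == n - 1 else lo + step
        op = "<=" if i == n - 1 else "<"       # last range is closed on the right
        clauses.append("(%d <= %s AND %s %s %d)" % (lo, pk, pk, op, hi))
        lo = hi
    clauses.append("%s IS NULL" % pk)          # NULL split keys get their own task
    return clauses

# With MIN(id)=1 and MAX(id)=5, four ranges plus the NULL shard
for c in split_ranges("id", 1, 5, 4):
    print("select id,name from datax_test where %s" % c)
```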
--where
Description: the filter condition. MysqlReader splices the configured column, table, and where into a SQL statement and extracts data with that SQL. A common business scenario is synchronizing only the current day's data, with where set to something like gmt_create > $bizdate. Note: where cannot be set to limit 10; limit is not a valid SQL WHERE clause.
The where condition is an effective way to do incremental synchronization. If where is absent (no key or no value), DataX synchronizes the full table.
Required: no
Default: none
--querySql
Description: in some business scenarios, where is not expressive enough, and this option lets the user supply a custom filtering SQL. When it is configured, DataX ignores the table and column options and filters the data with this SQL directly; for example, to synchronize the result of a multi-table join, use select a,b from table_a join table_b on table_a.id = table_b.id.
When querySql is configured, MysqlReader ignores the table, column, and where settings; querySql takes precedence over all three.
Required: no
Default: none
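For the join example, querySql is written as an array inside the connection element, in place of table. This is a sketch that reuses this article's connection details; the join SQL is the example from the description above:

```json
"reader": {
    "name": "mysqlreader",
    "parameter": {
        "username": "root",
        "password": "123456",
        "connection": [{
            "querySql": ["select a,b from table_a join table_b on table_a.id = table_b.id"],
            "jdbcUrl": ["jdbc:mysql://192.168.1.123:3306/test"]
        }]
    }
}
```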
mysqlreader type conversion table

DataX internal type | MySQL data type
Long    | int, tinyint, smallint, mediumint, bigint
Double  | float, double, decimal
String  | varchar, char, tinytext, text, mediumtext, longtext, year
Date    | date, datetime, timestamp, time
Boolean | bit, bool
Bytes   | tinyblob, mediumblob, blob, longblob, varbinary

Please note:
--Types other than those listed above are not supported.
--tinyint(1) is treated by DataX as an integer.
--year is treated by DataX as a string.
--bit is undefined behavior in DataX.
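The notes above can be captured in a small lookup. This is a simplified sketch; `DATAX_TYPE` and `map_type` are illustrative names covering only part of the table, not DataX source code:

```python
# Partial mapping of MySQL column types to DataX internal types (see the
# full conversion table above); the special cases get explicit handling.
DATAX_TYPE = {
    "int": "Long", "tinyint": "Long", "smallint": "Long",
    "mediumint": "Long", "bigint": "Long",
    "float": "Double", "double": "Double", "decimal": "Double",
    "varchar": "String", "char": "String", "text": "String",
    "date": "Date", "datetime": "Date", "timestamp": "Date", "time": "Date",
    "blob": "Bytes", "varbinary": "Bytes",
}

def map_type(mysql_type: str) -> str:
    t = mysql_type.lower()
    if t == "tinyint(1)":      # treated as an integer, not a boolean
        return "Long"
    if t == "year":            # treated as a string
        return "String"
    if t == "bit":             # undefined behavior in DataX
        raise ValueError("bit is undefined behavior in DataX")
    return DATAX_TYPE[t.split("(")[0]]   # strip length, e.g. varchar(255)

print(map_type("tinyint(1)"))  # Long
```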
Run:

FengZhendeMacBook-Pro:bin FengZhen$ ./datax.py /Users/FengZhen/Desktop/Hadoop/dataX/json/mysql/reader_all.json

DataX (DATAX-OPENSOURCE-3.0), From Alibaba !
Copyright (C) 2010-2017, Alibaba Group. All Rights Reserved.

2018-11-18 16:22:04.599 [main] INFO  VMInfo - VMInfo# operatingSystem class => sun.management.OperatingSystemImpl
2018-11-18 16:22:04.612 [main] INFO  Engine - the machine info =>
    osInfo:  Oracle Corporation 1.8 25.162-b12
    jvmInfo: Mac OS X x86_64 10.13.4
    cpu num: 4
    totalPhysicalMemory: -0.00G
    freePhysicalMemory: -0.00G
    maxFileDescriptorCount: -1
    currentOpenFileDescriptorCount: -1
    GC Names [PS MarkSweep, PS Scavenge]
    MEMORY_NAME            | allocation_size | init_size
    PS Eden Space          | 256.00MB        | 256.00MB
    Code Cache             | 240.00MB        | 2.44MB
    Compressed Class Space | 1,024.00MB      | 0.00MB
    PS Survivor Space      | 42.50MB         | 42.50MB
    PS Old Gen             | 683.00MB        | 683.00MB
    Metaspace              | -0.00MB         | 0.00MB
2018-11-18 16:22:04.638 [main] INFO  Engine - {"content":[{"reader":{"name":"mysqlreader","parameter":{"column":["id","name"],"connection":[{"jdbcUrl":["jdbc:mysql://192.168.1.123:3306/test"],"table":["datax_test"]}],"password":"******","splitPk":"id","username":"root"}},"writer":{"name":"streamwriter","parameter":{"print":true}}}],"setting":{"errorLimit":{"percentage":0.02,"record":0},"speed":{"channel":3}}}
2018-11-18 16:22:04.673 [main] WARN  Engine - prioriy set to 0, because NumberFormatException, the value is: null
2018-11-18 16:22:04.678 [main] INFO  PerfTrace - PerfTrace traceId=job_-1, isEnable=false, priority=0
2018-11-18 16:22:04.678 [main] INFO  JobContainer - DataX jobContainer starts job.
2018-11-18 16:22:04.681 [main] INFO  JobContainer - Set jobId = 0
2018-11-18 16:22:05.323 [job-0] INFO  OriginalConfPretreatmentUtil - Available jdbcUrl:jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true.
2018-11-18 16:22:05.478 [job-0] INFO  OriginalConfPretreatmentUtil - table:[datax_test] has columns:[id,name].
2018-11-18 16:22:05.490 [job-0] INFO  JobContainer - jobContainer starts to do prepare ...
2018-11-18 16:22:05.491 [job-0] INFO  JobContainer - DataX Reader.Job [mysqlreader] do prepare work .
2018-11-18 16:22:05.492 [job-0] INFO  JobContainer - DataX Writer.Job [streamwriter] do prepare work .
2018-11-18 16:22:05.493 [job-0] INFO  JobContainer - jobContainer starts to do split ...
2018-11-18 16:22:05.493 [job-0] INFO  JobContainer - Job set Channel-Number to 3 channels.
2018-11-18 16:22:05.618 [job-0] INFO  SingleTableSplitUtil - split pk [sql=SELECT MIN(id),MAX(id) FROM datax_test] is running...
2018-11-18 16:22:05.665 [job-0] INFO  SingleTableSplitUtil - After split(), allQuerySql=[
    select id,name from datax_test where (1 <= id AND id < 2)
    select id,name from datax_test where (2 <= id AND id < 3)
    select id,name from datax_test where (3 <= id AND id < 4)
    select id,name from datax_test where (4 <= id AND id <= 5)
    select id,name from datax_test where id IS NULL
].
2018-11-18 16:22:05.666 [job-0] INFO  JobContainer - DataX Reader.Job [mysqlreader] splits to [5] tasks.
2018-11-18 16:22:05.667 [job-0] INFO  JobContainer - DataX Writer.Job [streamwriter] splits to [5] tasks.
2018-11-18 16:22:05.697 [job-0] INFO  JobContainer - jobContainer starts to do schedule ...
2018-11-18 16:22:05.721 [job-0] INFO  JobContainer - Scheduler starts [1] taskGroups.
2018-11-18 16:22:05.744 [job-0] INFO  JobContainer - Running by standalone Mode.
2018-11-18 16:22:05.758 [taskGroup-0] INFO  TaskGroupContainer - taskGroupId=[0] start [3] channels for [5] tasks.
2018-11-18 16:22:05.765 [taskGroup-0] INFO  Channel - Channel set byte_speed_limit to -1, No bps activated.
2018-11-18 16:22:05.766 [taskGroup-0] INFO  Channel - Channel set record_speed_limit to -1, No tps activated.
2018-11-18 16:22:05.790 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[0] attemptCount[1] is started
2018-11-18 16:22:05.795 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[1] attemptCount[1] is started
2018-11-18 16:22:05.796 [0-0-0-reader] INFO  CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (1 <= id AND id < 2)] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:05.796 [0-0-1-reader] INFO  CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (2 <= id AND id < 3)] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:05.820 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[2] attemptCount[1] is started
2018-11-18 16:22:05.821 [0-0-2-reader] INFO  CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (3 <= id AND id < 4)] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:05.981 [0-0-0-reader] INFO  CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (1 <= id AND id < 2)] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
1 test1
2018-11-18 16:22:06.030 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[0] is successed, used[241]ms
2018-11-18 16:22:06.033 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[3] attemptCount[1] is started
2018-11-18 16:22:06.034 [0-0-3-reader] INFO  CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where (4 <= id AND id <= 5)] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:06.041 [0-0-2-reader] INFO  CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (3 <= id AND id < 4)] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
3 test3
2018-11-18 16:22:06.137 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[2] is successed, used[326]ms
2018-11-18 16:22:06.139 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[4] attemptCount[1] is started
2018-11-18 16:22:06.139 [0-0-4-reader] INFO  CommonRdbmsReader$Task - Begin to read record by Sql: [select id,name from datax_test where id IS NULL] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:06.157 [0-0-1-reader] INFO  CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (2 <= id AND id < 3)] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2 test2
2018-11-18 16:22:06.243 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[1] is successed, used[449]ms
2018-11-18 16:22:11.295 [0-0-3-reader] INFO  CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where (4 <= id AND id <= 5)] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
4 test4
5 test5
2018-11-18 16:22:11.393 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[3] is successed, used[5360]ms
2018-11-18 16:22:15.784 [job-0] INFO  StandAloneJobContainerCommunicator - Total 0 records, 0 bytes | Speed 0B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 0.00%
2018-11-18 16:22:25.166 [0-0-4-reader] INFO  CommonRdbmsReader$Task - Finished read record by Sql: [select id,name from datax_test where id IS NULL] jdbcUrl:[jdbc:mysql://192.168.1.123:3306/test?yearIsDateType=false&zeroDateTimeBehavior=convertToNull&tinyInt1isBit=false&rewriteBatchedStatements=true].
2018-11-18 16:22:25.413 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] taskId[4] is successed, used[19274]ms
2018-11-18 16:22:25.417 [taskGroup-0] INFO  TaskGroupContainer - taskGroup[0] completed it's tasks.
2018-11-18 16:22:25.786 [job-0] INFO  StandAloneJobContainerCommunicator - Total 5 records, 30 bytes | Speed 3B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2018-11-18 16:22:25.786 [job-0] INFO  AbstractScheduler - Scheduler accomplished all tasks.
2018-11-18 16:22:25.787 [job-0] INFO  JobContainer - DataX Writer.Job [streamwriter] do post work.
2018-11-18 16:22:25.788 [job-0] INFO  JobContainer - DataX Reader.Job [mysqlreader] do post work.
2018-11-18 16:22:25.788 [job-0] INFO  JobContainer - DataX jobId [0] completed successfully.
2018-11-18 16:22:25.791 [job-0] INFO  HookInvoker - No hook invoked, because base dir not exists or is a file: /Users/FengZhen/Desktop/Hadoop/dataX/datax/hook
2018-11-18 16:22:25.796 [job-0] INFO  JobContainer -
    [total cpu info] =>
    averageCpu | maxDeltaCpu | minDeltaCpu
    -1.00%     | -1.00%      | -1.00%
    [total gc info] =>
    NAME         | totalGCCount | maxDeltaGCCount | minDeltaGCCount | totalGCTime | maxDeltaGCTime | minDeltaGCTime
    PS MarkSweep | 0            | 0               | 0               | 0.000s      | 0.000s         | 0.000s
    PS Scavenge  | 0            | 0               | 0               | 0.000s      | 0.000s         | 0.000s
2018-11-18 16:22:25.797 [job-0] INFO  JobContainer - PerfTrace not enable!
2018-11-18 16:22:25.798 [job-0] INFO  StandAloneJobContainerCommunicator - Total 5 records, 30 bytes | Speed 1B/s, 0 records/s | Error 0 records, 0 bytes | All Task WaitWriterTime 0.000s | All Task WaitReaderTime 0.000s | Percentage 100.00%
2018-11-18 16:22:25.799 [job-0] INFO  JobContainer -
任务启动时刻 : 2018-11-18 16:22:04
任务结束时刻 : 2018-11-18 16:22:25
任务总计耗时 : 21s
任务平均流量 : 1B/s
记录写入速度 : 0rec/s
读出记录总数 : 5
读写失败总数 : 0
The result output can be seen in the console.
二、Reading from MySQL with a filter condition
The job JSON is as follows (the original post breaks off partway through this config; the remainder is restored from the section-1 example, with an illustrative where condition added):

{
    "job": {
        "setting": {
            "speed": { "channel": 1 }
        },
        "content": [{
            "reader": {
                "name": "mysqlreader",
                "parameter": {
                    "username": "root",
                    "password": "123456",
                    "column": ["id", "name"],
                    "where": "id > 2",
                    "connection": [{
                        "table": ["datax_test"],
                        "jdbcUrl": ["jdbc:mysql://192.168.1.123:3306/test"]
                    }]
                }
            },
            "writer": {
                "name": "streamwriter",
                "parameter": { "print": true }
            }
        }]
    }
}