
Apache Paimon: Ingesting Data with MySQL CDC

Paimon supports synchronizing changes from different databases using change data capture (CDC). This feature requires Flink and its CDC connectors.

Prepare the CDC Bundled Jar

flink-sql-connector-mysql-cdc-*.jar

Synchronizing Tables

By using MySqlSyncTableAction in a Flink DataStream job, or through flink run, you can synchronize one or more tables from MySQL into a single Paimon table.

<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.7.0-incubating.jar \
    mysql_sync_table \
    --warehouse <warehouse-path> \
    --database <database-name> \
    --table <table-name> \
    [--partition_keys <partition_keys>] \
    [--primary_keys <primary-keys>] \
    [--type_mapping <option1,option2...>] \
    [--computed_column <'column-name=expr-name(args[, ...])'> [--computed_column ...]] \
    [--metadata_column <metadata-column>] \
    [--mysql_conf <mysql-cdc-source-conf> [--mysql_conf <mysql-cdc-source-conf> ...]] \
    [--catalog_conf <paimon-catalog-conf> [--catalog_conf <paimon-catalog-conf> ...]] \
    [--table_conf <paimon-table-sink-conf> [--table_conf <paimon-table-sink-conf> ...]]
--warehouse
    The path to Paimon warehouse.
--database
    The database name in Paimon catalog.
--table
    The Paimon table name.
--partition_keys
    The partition keys for Paimon table. If there are multiple partition keys, connect them with comma, for example "dt,hh,mm".
--primary_keys
    The primary keys for Paimon table. If there are multiple primary keys, connect them with comma, for example "buyer_id,seller_id".
--type_mapping
    It is used to specify how to map MySQL data type to Paimon type. Supported options:
    • "tinyint1-not-bool": maps MySQL TINYINT(1) to TINYINT instead of BOOLEAN.
    • "to-nullable": ignores all NOT NULL constraints (except for primary keys). This is used to solve the problem that Flink cannot accept the MySQL 'ALTER TABLE ADD COLUMN column type NOT NULL DEFAULT x' operation.
    • "to-string": maps all MySQL types to STRING.
    • "char-to-string": maps MySQL CHAR(length)/VARCHAR(length) types to STRING.
    • "longtext-to-bytes": maps MySQL LONGTEXT types to BYTES.
    • "bigint-unsigned-to-bigint": maps MySQL BIGINT UNSIGNED, BIGINT UNSIGNED ZEROFILL, SERIAL to BIGINT. You should ensure overflow won't occur when using this option.
--computed_column
    The definitions of computed columns. The argument field is from MySQL table field name. See here for a complete list of configurations.
--metadata_column
    It is used to specify which metadata columns to include in the output schema of the connector. Metadata columns provide additional information related to the source data, for example: --metadata_column table_name,database_name,op_ts. See its document for a complete list of available metadata.
--mysql_conf
    The configuration for Flink CDC MySQL sources. Each configuration should be specified in the format "key=value". hostname, username, password, database-name and table-name are required configurations, others are optional. See its document for a complete list of configurations.
--catalog_conf
    The configuration for Paimon catalog. Each configuration should be specified in the format "key=value". See here for a complete list of catalog configurations.
--table_conf
    The configuration for Paimon table sink. Each configuration should be specified in the format "key=value". See here for a complete list of table configurations.

If the specified Paimon table does not exist, it will be created automatically. Its schema will be derived from all of the specified MySQL tables. If the Paimon table already exists, its schema will be compared against the schemas of all specified MySQL tables.

Example 1: synchronize tables into one Paimon table

<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.7.0-incubating.jar \
    mysql_sync_table \
    --warehouse hdfs:///path/to/warehouse \
    --database test_db \
    --table test_table \
    --partition_keys pt \
    --primary_keys pt,uid \
    --computed_column '_year=year(age)' \
    --mysql_conf hostname=127.0.0.1 \
    --mysql_conf username=root \
    --mysql_conf password=123456 \
    --mysql_conf database-name='source_db' \
    --mysql_conf table-name='source_table1|source_table2' \
    --catalog_conf metastore=hive \
    --catalog_conf uri=thrift://hive-metastore:9083 \
    --table_conf bucket=4 \
    --table_conf changelog-producer=input \
    --table_conf sink.parallelism=4

As the example above shows, the table-name in mysql_conf supports regular expressions, so the job can monitor multiple tables that match the expression. The schemas of all matched tables will be merged into one Paimon table schema.
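The matching behavior can be sketched in plain shell. The helper and table names below are purely illustrative (the real matching happens inside the Flink CDC MySQL source); the point is that table-name is applied as a regular expression against each candidate table name.

```shell
# Illustrative sketch: the MySQL CDC source treats table-name as a
# regular expression and captures every table whose name matches it.
matches_table() {
  # $1: candidate table name, $2: table-name pattern
  echo "$1" | grep -Eq "^($2)$"
}

for t in source_table1 source_table2 other_table; do
  if matches_table "$t" 'source_table1|source_table2'; then
    echo "captured: $t"
  fi
done
```

Here source_table1 and source_table2 are captured, while other_table is not.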

Example 2: synchronize shards into one Paimon table

You can set database-name with a regular expression to capture multiple databases. A typical scenario: table 'source_table' is split across databases 'source_db1', 'source_db2', ..., and the data of every 'source_table' shard is synchronized into one Paimon table.

<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.7.0-incubating.jar \
    mysql_sync_table \
    --warehouse hdfs:///path/to/warehouse \
    --database test_db \
    --table test_table \
    --partition_keys pt \
    --primary_keys pt,uid \
    --computed_column '_year=year(age)' \
    --mysql_conf hostname=127.0.0.1 \
    --mysql_conf username=root \
    --mysql_conf password=123456 \
    --mysql_conf database-name='source_db.+' \
    --mysql_conf table-name='source_table' \
    --catalog_conf metastore=hive \
    --catalog_conf uri=thrift://hive-metastore:9083 \
    --table_conf bucket=4 \
    --table_conf changelog-producer=input \
    --table_conf sink.parallelism=4

Synchronizing Databases

By using MySqlSyncDatabaseAction in a Flink DataStream job, or through flink run, you can synchronize an entire MySQL database into one Paimon database.

<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.7.0-incubating.jar \
    mysql_sync_database \
    --warehouse <warehouse-path> \
    --database <database-name> \
    [--ignore_incompatible <true/false>] \
    [--merge_shards <true/false>] \
    [--table_prefix <paimon-table-prefix>] \
    [--table_suffix <paimon-table-suffix>] \
    [--including_tables <mysql-table-name|name-regular-expr>] \
    [--excluding_tables <mysql-table-name|name-regular-expr>] \
    [--mode <sync-mode>] \
    [--metadata_column <metadata-column>] \
    [--type_mapping <option1,option2...>] \
    [--mysql_conf <mysql-cdc-source-conf> [--mysql_conf <mysql-cdc-source-conf> ...]] \
    [--catalog_conf <paimon-catalog-conf> [--catalog_conf <paimon-catalog-conf> ...]] \
    [--table_conf <paimon-table-sink-conf> [--table_conf <paimon-table-sink-conf> ...]]
--warehouse
    The path to Paimon warehouse.
--database
    The database name in Paimon catalog.
--ignore_incompatible
    It is default false; in this case, if a MySQL table name exists in Paimon and their schemas are incompatible, an exception will be thrown. You can specify it to true explicitly to ignore the incompatible tables and the exception.
--merge_shards
    It is default true; in this case, if some tables in different databases have the same name, their schemas will be merged and their records will be synchronized into one Paimon table. Otherwise, each table's records will be synchronized to a corresponding Paimon table, and the Paimon table will be named 'databaseName_tableName' to avoid potential name conflicts.
--table_prefix
    The prefix of all Paimon tables to be synchronized. For example, if you want all synchronized tables to have "ods_" as prefix, you can specify "--table_prefix ods_".
--table_suffix
    The suffix of all Paimon tables to be synchronized. The usage is the same as "--table_prefix".
--including_tables
    It is used to specify which source tables are to be synchronized. You must use '|' to separate multiple tables. Because '|' is a special character, a comma is required, for example: 'a|b|c'. Regular expression is supported; for example, specifying "--including_tables test|paimon.*" means to synchronize table 'test' and all tables starting with 'paimon'.
--excluding_tables
    It is used to specify which source tables are not to be synchronized. The usage is the same as "--including_tables". "--excluding_tables" has higher priority than "--including_tables" if you specify both.
--mode
    It is used to specify the synchronization mode. Possible values:
    • "divided" (the default mode if you haven't specified one): start a sink for each table; synchronizing a new table requires restarting the job.
    • "combined": start a single combined sink for all tables; new tables will be automatically synchronized.
--metadata_column
    It is used to specify which metadata columns to include in the output schema of the connector. Metadata columns provide additional information related to the source data, for example: --metadata_column table_name,database_name,op_ts. See its document for a complete list of available metadata.
--type_mapping
    It is used to specify how to map MySQL data type to Paimon type. Supported options:
    • "tinyint1-not-bool": maps MySQL TINYINT(1) to TINYINT instead of BOOLEAN.
    • "to-nullable": ignores all NOT NULL constraints (except for primary keys). This is used to solve the problem that Flink cannot accept the MySQL 'ALTER TABLE ADD COLUMN column type NOT NULL DEFAULT x' operation.
    • "to-string": maps all MySQL types to STRING.
    • "char-to-string": maps MySQL CHAR(length)/VARCHAR(length) types to STRING.
    • "longtext-to-bytes": maps MySQL LONGTEXT types to BYTES.
    • "bigint-unsigned-to-bigint": maps MySQL BIGINT UNSIGNED, BIGINT UNSIGNED ZEROFILL, SERIAL to BIGINT. You should ensure overflow won't occur when using this option.
--mysql_conf
    The configuration for Flink CDC MySQL sources. Each configuration should be specified in the format "key=value". hostname, username, password, database-name and table-name are required configurations, others are optional. See its document for a complete list of configurations.
--catalog_conf
    The configuration for Paimon catalog. Each configuration should be specified in the format "key=value". See here for a complete list of catalog configurations.
--table_conf
    The configuration for Paimon table sink. Each configuration should be specified in the format "key=value". See here for a complete list of table configurations.
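The interaction between --including_tables and --excluding_tables can be sketched in shell. This is only an illustration under the assumption that a table is synchronized when it matches the including pattern and does not match the excluding pattern (excluding wins); the helper below is not part of Paimon.

```shell
# Illustrative sketch of table filtering: excluding_tables takes
# priority over including_tables when both patterns match.
should_sync() {
  # $1: table name, $2: including regex, $3: excluding regex
  echo "$1" | grep -Eq "^($2)$" || return 1   # must match including
  echo "$1" | grep -Eq "^($3)$" && return 1   # must not match excluding
  return 0
}

for t in test paimon_orders paimon_tmp users; do
  if should_sync "$t" 'test|paimon.*' 'paimon_tmp'; then
    echo "sync: $t"
  fi
done
```

With --including_tables 'test|paimon.*' and --excluding_tables 'paimon_tmp', the tables test and paimon_orders would be synchronized, while paimon_tmp and users would not.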

Only tables with primary keys will be synchronized.

For each MySQL table to be synchronized, if the corresponding Paimon table does not exist, it will be created automatically. Its schema will be derived from all of the specified MySQL tables. If the Paimon table already exists, its schema will be compared against the schemas of all specified MySQL tables.

Example 1: synchronize an entire database

<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.7.0-incubating.jar \
    mysql_sync_database \
    --warehouse hdfs:///path/to/warehouse \
    --database test_db \
    --mysql_conf hostname=127.0.0.1 \
    --mysql_conf username=root \
    --mysql_conf password=123456 \
    --mysql_conf database-name=source_db \
    --catalog_conf metastore=hive \
    --catalog_conf uri=thrift://hive-metastore:9083 \
    --table_conf bucket=4 \
    --table_conf changelog-producer=input \
    --table_conf sink.parallelism=4

Example 2: synchronize newly added tables in a database

First, suppose a Flink job is synchronizing the tables [product, user, address] under database source_db.

<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.7.0-incubating.jar \
    mysql_sync_database \
    --warehouse hdfs:///path/to/warehouse \
    --database test_db \
    --mysql_conf hostname=127.0.0.1 \
    --mysql_conf username=root \
    --mysql_conf password=123456 \
    --mysql_conf database-name=source_db \
    --catalog_conf metastore=hive \
    --catalog_conf uri=thrift://hive-metastore:9083 \
    --table_conf bucket=4 \
    --table_conf changelog-producer=input \
    --table_conf sink.parallelism=4 \
    --including_tables 'product|user|address'

Later, suppose you want the same job to also synchronize the tables [order, custom], including their historical data. You can achieve this by restoring from a previous snapshot (savepoint) of the job, thereby reusing its existing state. The restored job will first take a snapshot of the newly added tables, and then automatically continue reading the changelog from the previous position.

The command to restore from a previous savepoint and add new tables to the synchronization is as follows:

<FLINK_HOME>/bin/flink run \
    --fromSavepoint savepointPath \
    /path/to/paimon-flink-action-0.7.0-incubating.jar \
    mysql_sync_database \
    --warehouse hdfs:///path/to/warehouse \
    --database test_db \
    --mysql_conf hostname=127.0.0.1 \
    --mysql_conf username=root \
    --mysql_conf password=123456 \
    --mysql_conf database-name=source_db \
    --catalog_conf metastore=hive \
    --catalog_conf uri=thrift://hive-metastore:9083 \
    --table_conf bucket=4 \
    --including_tables 'product|user|address|order|custom'

Note: you can start the job with --mode combined so that newly added tables are synchronized automatically, without restarting the job.

Example 3: synchronize and merge multiple shards

Suppose there are multiple database shards db1, db2, ..., each containing the tables tbl1, tbl2, .... You can synchronize all tables matching db.+.tbl.+ into the tables test_db.tbl1, test_db.tbl2, and so on.

<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.7.0-incubating.jar \
    mysql_sync_database \
    --warehouse hdfs:///path/to/warehouse \
    --database test_db \
    --mysql_conf hostname=127.0.0.1 \
    --mysql_conf username=root \
    --mysql_conf password=123456 \
    --mysql_conf database-name='db.+' \
    --catalog_conf metastore=hive \
    --catalog_conf uri=thrift://hive-metastore:9083 \
    --table_conf bucket=4 \
    --table_conf changelog-producer=input \
    --table_conf sink.parallelism=4 \
    --including_tables 'tbl.+'

By setting database-name to a regular expression, the synchronization job captures all tables under the matching databases and merges tables with the same name into one Paimon table.

Set --merge_shards false to prevent merging shards. Each synchronized table will then be named 'databaseName_tableName' to avoid potential name conflicts.
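The resulting naming scheme with --merge_shards false can be sketched as follows (the shard and table names are hypothetical, matching database-name 'db.+' and tables 'tbl.+'):

```shell
# Illustrative sketch: with --merge_shards false, each source table gets
# its own Paimon table named 'databaseName_tableName'.
paimon_table_name() {
  # $1: source database name, $2: source table name
  echo "${1}_${2}"
}

for db in db1 db2; do
  for tbl in tbl1 tbl2; do
    echo "${db}.${tbl} -> test_db.$(paimon_table_name "$db" "$tbl")"
  done
done
```

So db1.tbl1 and db2.tbl1 land in test_db.db1_tbl1 and test_db.db2_tbl1 respectively, instead of being merged into one test_db.tbl1.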

FAQ

  1. Chinese characters in data extracted from MySQL are garbled
  • Set env.java.opts: -Dfile.encoding=UTF-8 in flink-conf.yaml (since Flink 1.17, this option has been renamed to env.java.opts.all).
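A minimal flink-conf.yaml fragment for this fix; only one of the two keys applies, depending on your Flink version:

```yaml
# Flink < 1.17
env.java.opts: -Dfile.encoding=UTF-8

# Flink >= 1.17 (the option was renamed; use this key instead)
# env.java.opts.all: -Dfile.encoding=UTF-8
```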
