Provides a Docker image for users who are new to chunjun or want to get it running quickly.

Image name:

```
dtopensource/chunjun-master
```

The following options are available:

1. Start directly. By default it runs the chunjun-examples/json/stream/stream.json job in standalone mode:

```
docker run -p 8081:8081 dtopensource/chunjun-master
```

2. Specify a job file. /Users/kunni/IdeaProjects/chunjun/chunjun-examples/json/stream/stream.json is a file on your local machine; inside the container it must be mounted under /opt/flink/job. The job type is inferred automatically from the file name: for example, stream.json is a sync job and stream.sql is a SQL job (see the sketch after this list):

```
docker run -p 8081:8081 -v /Users/kunni/IdeaProjects/chunjun/chunjun-examples/json/stream/stream.json:/opt/flink/job/stream.json dtopensource/chunjun-master
```

3. Specify the mode...
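Building on option 2, a hedged sketch of running a SQL job by mounting a .sql file instead; the local .sql path below is hypothetical, and the behavior simply follows the file-name inference rule stated above:

```
docker run -p 8081:8081 -v /Users/kunni/IdeaProjects/chunjun/chunjun-examples/sql/stream/stream.sql:/opt/flink/job/stream.sql dtopensource/chunjun-master
```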
As stevenzwu pointed out in https://github.com/apache/iceberg/pull/5050, the tables in TestFlinkUpsert were not actually partitioned by date, which caused confusion, so they need to be refactored.
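For illustration, a minimal sketch (the schema and column names are assumptions, not the ones in TestFlinkUpsert) of an Iceberg spec that really does partition by a date column:

```java
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.Schema;
import org.apache.iceberg.types.Types;

public class DatePartitionedSpec {
  public static void main(String[] args) {
    // Schema with a date column to partition on.
    Schema schema = new Schema(
        Types.NestedField.required(1, "id", Types.IntegerType.get()),
        Types.NestedField.required(2, "data", Types.StringType.get()),
        Types.NestedField.required(3, "dt", Types.DateType.get()));

    // Identity-partition the table by the date column, so test rows with
    // different dates actually land in different partitions.
    PartitionSpec spec = PartitionSpec.builderFor(schema)
        .identity("dt")
        .build();
    System.out.println(spec);
  }
}
```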
# [Community] Who is using chunjun?

Hi, all! Thanks to everyone who has participated in this open source project. We appreciate your contributions to making this community better.

## Original intention...
error message:

```
Caused by: java.lang.NullPointerException: Cannot find source column: 3
	at org.apache.iceberg.relocated.com.google.common.base.Preconditions.checkNotNull(Preconditions.java:953) ~[iceberg-bundled-guava-0.13.2.jar:na]
	at org.apache.iceberg.PartitionSpec$Builder.add(PartitionSpec.java:503) ~[iceberg-api-0.13.2.jar:na]
	at org.apache.iceberg.PartitionSpecParser.buildFromJsonFields(PartitionSpecParser.java:155) ~[iceberg-core-0.13.2.jar:na]
	at org.apache.iceberg.PartitionSpecParser.fromJson(PartitionSpecParser.java:78) ~[iceberg-core-0.13.2.jar:na]
	at org.apache.iceberg.TableMetadataParser.fromJson(TableMetadataParser.java:357) ~[iceberg-core-0.13.2.jar:na]
	at org.apache.iceberg.TableMetadataParser.fromJson(TableMetadataParser.java:288) ~[iceberg-core-0.13.2.jar:na]
```

...
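For context, a minimal sketch that triggers the same check; the schema and spec JSON here are illustrative, not taken from the affected table. The spec references source-id 3, but the schema it is parsed against has no field with that id, so the `Preconditions.checkNotNull` inside `PartitionSpec$Builder.add` throws:

```java
import org.apache.iceberg.PartitionSpec;
import org.apache.iceberg.PartitionSpecParser;
import org.apache.iceberg.Schema;
import org.apache.iceberg.types.Types;

public class SourceColumnNpe {
  public static void main(String[] args) {
    // Schema whose highest field id is 2; there is no column with id 3.
    Schema schema = new Schema(
        Types.NestedField.required(1, "id", Types.LongType.get()),
        Types.NestedField.required(2, "data", Types.StringType.get()));

    // Partition spec JSON referencing the missing source column id 3.
    String specJson = "{\"spec-id\":0,\"fields\":["
        + "{\"name\":\"dt\",\"transform\":\"identity\","
        + "\"source-id\":3,\"field-id\":1000}]}";

    // Throws java.lang.NullPointerException: Cannot find source column: 3
    PartitionSpec spec = PartitionSpecParser.fromJson(schema, specJson);
    System.out.println(spec);
  }
}
```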
This closes https://github.com/ververica/flink-cdc-connectors/issues/2691.

* Supports the `debezium-json` and `canal-json` value formats.
* The Kafka topic written to is the `namespace.schemaName.tableName` string of the TableId; this can be changed using the `route` function of...
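As a small illustration of that default topic naming, a hypothetical helper (not the connector's actual code) that joins the TableId parts:

```java
public class TopicNaming {
  /** Hypothetical helper showing the default topic naming rule:
   *  namespace.schemaName.tableName. */
  static String defaultTopic(String namespace, String schemaName, String tableName) {
    return String.join(".", namespace, schemaName, tableName);
  }

  public static void main(String[] args) {
    // Prints "inventory.public.orders".
    System.out.println(defaultTopic("inventory", "public", "orders"));
  }
}
```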
### Search before asking

- [X] I searched in the [issues](https://github.com/ververica/flink-cdc-connectors/issues) and found nothing similar.

### Motivation

Currently, there is no clear description of how to run a...
Reference: [Oracle doc](https://docs.oracle.com/en/database/oracle/oracle-database/19/refrn/ALL_TABLES.html#GUID-6823CD28-0681-468E-950B-966C6F71325D). This closes https://github.com/ververica/flink-cdc-connectors/issues/1737 and https://github.com/ververica/flink-cdc-connectors/issues/2287.

`WHERE TABLESPACE_NAME IS NOT NULL AND TABLESPACE_NAME NOT IN ('SYSTEM','SYSAUX')` will filter out partitioned tables, since TABLESPACE_NAME means:

| Column | Datatype | Description |
| --- | --- | --- |
| TABLESPACE_NAME | VARCHAR2(30) | Name of the... |
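For context, a minimal JDBC sketch of such a discovery query; the connection details are placeholder assumptions. Because ALL_TABLES reports a NULL TABLESPACE_NAME for a partitioned table at the table level, every partitioned table is silently missing from this result set:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ListOracleTables {
  public static void main(String[] args) throws Exception {
    // The IS NOT NULL predicate below is what excludes partitioned tables.
    String sql = "SELECT OWNER, TABLE_NAME FROM ALL_TABLES"
        + " WHERE TABLESPACE_NAME IS NOT NULL"
        + " AND TABLESPACE_NAME NOT IN ('SYSTEM','SYSAUX')";
    // Connection URL and credentials are placeholders.
    try (Connection conn = DriverManager.getConnection(
            "jdbc:oracle:thin:@//localhost:1521/ORCLCDB", "user", "password");
         Statement stmt = conn.createStatement();
         ResultSet rs = stmt.executeQuery(sql)) {
      while (rs.next()) {
        System.out.println(rs.getString("OWNER") + "." + rs.getString("TABLE_NAME"));
      }
    }
  }
}
```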
This closes https://github.com/ververica/flink-cdc-connectors/issues/2859. Adds e2e tests for two scenarios: syncing a whole database, and syncing sharded tables using the route function. To simplify the complexity of the different source/sink combinations, I plan...
This closes https://github.com/ververica/flink-cdc-connectors/issues/2856. Some code is inspired by [FlinkCdcMultiTableSink](https://github.com/apache/incubator-paimon/blob/9f151ab7258f05e8fbda8ad6cc5c92e241411de9/paimon-flink/paimon-flink-cdc/src/main/java/org/apache/paimon/flink/sink/cdc/FlinkCdcMultiTableSink.java#L59) in the Paimon repo, and this adds a Sink V2 implementation.
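For orientation, a minimal sketch of the Flink Sink V2 shape this builds on; `String` stands in for the real change-event type, and the writer body is a placeholder rather than the class added in the PR:

```java
import java.io.IOException;

import org.apache.flink.api.connector.sink2.Sink;
import org.apache.flink.api.connector.sink2.SinkWriter;

public class MultiTableSinkSketch implements Sink<String> {

  @Override
  public SinkWriter<String> createWriter(InitContext context) throws IOException {
    return new SinkWriter<String>() {
      @Override
      public void write(String element, Context context) {
        // A multi-table sink would route each element to its target table here.
      }

      @Override
      public void flush(boolean endOfInput) {
        // Flush buffered writes, e.g. on checkpoint.
      }

      @Override
      public void close() {
        // Release any held resources.
      }
    };
  }
}
```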
This closes https://github.com/ververica/flink-cdc-connectors/issues/2940. Uses `fromSavepoint` to stay consistent with the [Flink CLI](https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/cli/#starting-a-job-from-a-savepoint). Some code is inspired by [CliFrontendParser](https://github.com/apache/flink/blob/6e78eb18524ead3abd60da0ca41751b45e0e2482/flink-clients/src/main/java/org/apache/flink/client/cli/CliFrontendParser.java#L681-L700).
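A minimal sketch of how such an option can be declared with Apache Commons CLI, the library CliFrontendParser uses; the short name and description here are illustrative, not the exact ones merged in the PR:

```java
import org.apache.commons.cli.CommandLine;
import org.apache.commons.cli.DefaultParser;
import org.apache.commons.cli.Option;
import org.apache.commons.cli.Options;

public class SavepointOption {
  // Long option name follows the PR; short name and description are illustrative.
  static final Option FROM_SAVEPOINT = Option.builder("s")
      .longOpt("fromSavepoint")
      .hasArg()
      .argName("savepointPath")
      .desc("Path of a savepoint to restore the job from")
      .build();

  public static void main(String[] args) throws Exception {
    Options options = new Options().addOption(FROM_SAVEPOINT);
    CommandLine cmd = new DefaultParser().parse(options, args);
    if (cmd.hasOption(FROM_SAVEPOINT.getLongOpt())) {
      System.out.println("restore from: " + cmd.getOptionValue(FROM_SAVEPOINT.getLongOpt()));
    }
  }
}
```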