Hongshun Wang

Results: 19 issues by Hongshun Wang

## Search before asking
- [X] I searched in the [issues](https://github.com/ververica/flink-cdc-connectors/issues) and found nothing similar.
## Flink version
1.18
## Flink CDC version
3.0
## Database and its version
anyway...

bug

As shown in https://github.com/ververica/flink-cdc-connectors/issues/3071, unlike flink-table-common, flink-connector-base, and flink-core, whose Maven scope is provided, flink-shaded-guava and flink-shaded-force-shading are included in the final jar package, which may cause dependency conflicts. Now...
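A minimal sketch of the kind of fix the issue suggests, assuming the standard Maven scope mechanism; the version property is a placeholder and this is not the actual patch:

```xml
<!-- Illustrative only: declaring the shaded artifact with provided scope
     keeps it out of the final jar, matching flink-table-common etc. -->
<dependency>
    <groupId>org.apache.flink</groupId>
    <artifactId>flink-shaded-guava</artifactId>
    <version>${flink.shaded.version}</version>
    <scope>provided</scope>
</dependency>
```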

docs
composer
common
runtime
mongodb-cdc-connector
build
mysql-cdc-connector
base
oracle-cdc-connector
postgres-cdc-connector
sqlserver-cdc-connector
cli
vitess-cdc-connector
mysql-pipeline-connector
starrocks-pipeline-connector
tidb-cdc-connector
Stale

Add 'scan.incremental.snapshot.backfill.skip' to docs of MySQL, Postgres, Oracle, MongoDB, SQL Server

docs
Stale
waiting for review

### Search before asking
- [X] I searched in the [issues](https://github.com/ververica/flink-cdc-connectors/issues) and found nothing similar.
### Flink version
1.18
### Flink CDC version
3.0.1
### Database and its version
anyone...

bug

In the current com.ververica.cdc.connectors.mysql.source.reader.MySqlSplitReader#pollSplitRecords, if currentReader == null (for example, there is no split), a NullPointerException will be thrown here:
```java
private MySqlRecords pollSplitRecords() throws InterruptedException {
    Iterator dataIt;
    if (currentReader == null)...
```

mysql-cdc-connector
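A hypothetical sketch of the guard the issue implies (the class and field names are simplified stand-ins, not the actual Flink CDC source): when no split has been assigned yet, return an empty batch instead of dereferencing the null reader.

```java
import java.util.Collections;
import java.util.Iterator;

class SplitReaderSketch {
    private Iterator<String> currentReader; // null until a split is assigned

    Iterator<String> pollSplitRecords() {
        if (currentReader == null) {
            // Previously this path fell through and threw NullPointerException;
            // an empty iterator lets the caller poll again safely.
            return Collections.emptyIterator();
        }
        return currentReader;
    }
}
```

Calling `pollSplitRecords()` before any split arrives now yields an empty iterator rather than an NPE.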

### Reason
Currently, we emit the record first and only then determine whether it is out of bounds, so data beyond the high_watermark may be emitted.
### Minimal reproduce step
```java
@Test
public void...
```

mysql-cdc-connector
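The intended ordering can be sketched as follows; this is an illustrative simplification (offsets stand in for binlog positions), not the actual connector code: check the position against the high watermark before emitting.

```java
import java.util.ArrayList;
import java.util.List;

class WatermarkFilterSketch {
    // Emit offsets in order, stopping BEFORE the first one past the
    // high watermark, so no record beyond the boundary is sent downstream.
    static List<Long> emitUpTo(List<Long> offsets, long highWatermark) {
        List<Long> emitted = new ArrayList<>();
        for (long offset : offsets) {
            if (offset > highWatermark) {
                break; // bound check happens before the emit, not after
            }
            emitted.add(offset);
        }
        return emitted;
    }
}
```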

## Search before asking
- [X] I searched in the [issues](https://github.com/ververica/flink-cdc-connectors/issues) and found nothing similar.
## Flink version
1.18
## Flink CDC version
3.0
## Reason
### overview
At first,...

bug

Although java.sql.DatabaseMetaData#getTables provides a catalog parameter, this function never uses it. Suppose a database instance contains many databases, including databases A and B, and I open a JDBCConnection to...
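For contrast, this is how the standard JDBC call scopes discovery to one database when the catalog argument is actually passed through; the snippet is illustrative (it needs a live `Connection`, and the catalog name "A" is an assumption from the issue text):

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;

class TableDiscoverySketch {
    static void listTables(Connection conn, String catalog) throws Exception {
        DatabaseMetaData meta = conn.getMetaData();
        // Passing the catalog (e.g. "A") instead of null restricts the
        // result set to that database's tables.
        try (ResultSet rs = meta.getTables(catalog, null, "%", new String[] {"TABLE"})) {
            while (rs.next()) {
                System.out.println(rs.getString("TABLE_CAT") + "." + rs.getString("TABLE_NAME"));
            }
        }
    }
}
```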

In the current modification, the option class is changed from `org.apache.rocketmq.flink.common.RocketMQOptions` to `org.apache.flink.connector.rocketmq.source.RocketMQSourceOptions`. However, each key's name has been given a prefix of `rocketmq.source.`, which means that the same SQL DDL will throw an exception...
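A sketch of the breakage: the exact key names below are assumptions for illustration; only the `rocketmq.source.` prefix is stated in the issue.

```sql
-- Old DDL (RocketMQOptions), hypothetical key name:
--   'topic' = 'my-topic'
-- After the rename (RocketMQSourceOptions) the same key must carry the
-- prefix, otherwise the old DDL fails with an unsupported-option error:
--   'rocketmq.source.topic' = 'my-topic'
```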

Fix an exception like this:
```java
Caused by: java.lang.NullPointerException
    at org.apache.flink.cdc.common.configuration.ConfigurationUtils.convertToString(ConfigurationUtils.java:133) ~[?:?]
    at org.apache.flink.cdc.common.configuration.Configuration.toMap(Configuration.java:138) ~[?:?]
```

common
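A minimal sketch of the kind of null guard that stops that stack trace; the real method lives in org.apache.flink.cdc.common.configuration.ConfigurationUtils, and this simplified version only shows the shape of the fix:

```java
class ConfigUtilsSketch {
    // Convert a config value to a String, tolerating null entries.
    static String convertToString(Object value) {
        if (value == null) {
            return null; // previously value.toString() threw NullPointerException
        }
        return value.toString();
    }
}
```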