starrocks-connector-for-apache-flink
Bug when using Stream Load to synchronize data to StarRocks through SeaTunnel
Steps to reproduce the behavior (Required)
- source engine : GreenPlum
- seatunnel config info (note: the original `nodeUrls` value was an invalid quoted string; it is fixed to a JSON array here):

```json
{
  "sink": [{
    "plugin_name": "StarRocks",
    "base-url": "jdbc:mysql://x.x.x.x:19030/dwd",
    "nodeUrls": ["x.x.x.x:18030"],
    "database": "dwd",
    "table": "xxx",
    "username": "xxx",
    "password": "xxx",
    "batch_max_rows": 64000,
    "batch_max_bytes": 94371840,
    "batch_interval_ms": 10000,
    "max_retries": 10
  }],
  "source": [{
    "plugin_name": "Jdbc",
    "driver": "org.postgresql.Driver",
    "url": "jdbc:postgresql://x.x.x.x:xxx/xxx",
    "user": "xxx",
    "password": "xxx",
    "query": "select column as Column from xxx_v"
  }],
  "env": {
    "job.mode": "BATCH",
    "execution.parallelism": "1"
  }
}
```
- seatunnel engine is flink
- flink run
Expected behavior (Required)
All data is synchronized to StarRocks.
Real behavior (Required)
Every column whose name contains uppercase letters (as defined when creating the StarRocks table) ends up empty after the load. When we imported the same data through `insert into`, all of it arrived correctly, so we suspect this is a bug in the Stream Load method.
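One way to check whether the column-name casing is lost in the HTTP request is to issue a Stream Load manually with an explicit `columns` header, which tells the FE exactly how to map input fields to table columns. The sketch below only builds the request headers using the documented Stream Load header names; the user, password, label, and column names are placeholders for this issue's setup, not values from it.

```python
import base64

def stream_load_headers(user, password, columns, label):
    """Build HTTP headers for a StarRocks Stream Load request.

    Passing the exact-case column names in the `columns` header makes the
    field-to-column mapping explicit, instead of relying on names inferred
    elsewhere -- one place where uppercase letters could be lost.
    """
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {
        "Authorization": f"Basic {token}",
        "Expect": "100-continue",      # required: FE redirects the load to a BE
        "label": label,                # idempotency label for the load job
        "column_separator": ",",
        "columns": ",".join(columns),  # e.g. "id,Column" keeps the exact case
    }

# hypothetical usage mirroring the query `select column as Column ...`
headers = stream_load_headers("xxx", "xxx", ["id", "Column"], "dwd_debug_1")
print(headers["columns"])  # → id,Column
```

If a manual load with these headers fills the uppercase-named column while the connector's load does not, that would narrow the bug down to how the connector builds its request.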
StarRocks version (Required)
StarRocks version 2.5.13
I encounter the same situation when importing through the StarRocks connector with Flink CDC and Flink: Stream Load requests sent over HTTP leave those columns empty as well.
This is the StarRocks connector version we use: