
Bug when using Stream Load to synchronize data to StarRocks through SeaTunnel

Open cyd257666 opened this issue 1 year ago • 2 comments

Steps to reproduce the behavior (Required)

  1. Source engine: Greenplum
  2. SeaTunnel config:

     ```json
     {
       "env": {
         "job.mode": "BATCH",
         "execution.parallelism": "1"
       },
       "source": [{
         "plugin_name": "Jdbc",
         "driver": "org.postgresql.Driver",
         "url": "jdbc:postgresql://x.x.x.x:xxx/xxx",
         "user": "xxx",
         "password": "xxx",
         "query": "select column as Column from xxx_v"
       }],
       "sink": [{
         "plugin_name": "StarRocks",
         "base-url": "jdbc:mysql://x.x.x.x:19030/dwd",
         "nodeUrls": ["x.x.x.x:18030"],
         "database": "dwd",
         "table": "xxx",
         "username": "xxx",
         "password": "xxx",
         "batch_max_rows": 64000,
         "batch_max_bytes": 94371840,
         "batch_interval_ms": 10000,
         "max_retries": 10
       }]
     }
     ```
  3. SeaTunnel engine: Flink
  4. Run the job with `flink run`
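The steps above drive Stream Load through SeaTunnel, but the request can also be assembled by hand to isolate the problem from the SeaTunnel layer. Below is a minimal sketch of building the URL and headers for a Stream Load PUT request, with an explicit `columns` header that backtick-quotes the mixed-case name. The host, database, table, and credentials are placeholders from the config above, and whether backtick quoting in the `columns` header preserves case end to end is an assumption to verify, not confirmed behavior:

```python
import base64

def build_stream_load_request(fe_host, db, table, columns, user, password):
    """Assemble the URL and headers for a StarRocks Stream Load PUT request."""
    url = f"http://{fe_host}/api/{db}/{table}/_stream_load"
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    headers = {
        "Authorization": f"Basic {token}",
        "Expect": "100-continue",  # required by the Stream Load protocol
        "format": "csv",
        "column_separator": ",",
        # Backtick-quote each name so a mixed-case column such as
        # `Column` is passed through verbatim rather than case-folded.
        "columns": ",".join(f"`{c}`" for c in columns),
    }
    return url, headers

# Placeholder values mirroring the config in this report.
url, headers = build_stream_load_request(
    "x.x.x.x:18030", "dwd", "xxx", ["Column"], "xxx", "xxx")
```

Sending this body with `curl -XPUT -T data.csv` (or `requests.put`) against the FE node and comparing the loaded rows with and without the quoted `columns` header would show whether the case folding happens before or inside the Stream Load request.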

Expected behavior (Required)

All data is synchronized to StarRocks.

Real behavior (Required)

Every field whose name contains uppercase letters ends up empty in the StarRocks table. When we imported the same data through `INSERT INTO`, all of the data was correct, so we suspect this is a bug in the Stream Load path.
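One plausible (unconfirmed) explanation: somewhere in the Stream Load path the incoming column names are case-folded before being matched against the target schema, so a mixed-case name like `Column` never matches and the field is loaded as NULL. The suspected effect can be sketched as:

```python
def match_columns(incoming, schema_columns, fold_case):
    """Map incoming field names onto schema columns.

    With fold_case=True (the suspected buggy behavior), each incoming
    name is lowercased before lookup, so a schema column created with
    a mixed-case name is never found and the field maps to None.
    """
    schema = set(schema_columns)
    result = []
    for name in incoming:
        key = name.lower() if fold_case else name
        result.append(key if key in schema else None)
    return result

schema = ["id", "Column"]  # table created with a mixed-case column name

# Folding turns "Column" into "column", which misses the schema entry.
print(match_columns(["id", "Column"], schema, fold_case=True))

# Case-sensitive matching (as `INSERT INTO` effectively behaves here)
# maps every field.
print(match_columns(["id", "Column"], schema, fold_case=False))
```

This toy model would also explain why `INSERT INTO` works: the SQL path resolves the column names against the schema directly, while the HTTP path apparently loses the original case.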

StarRocks version (Required)

StarRocks version 2.5.13

cyd257666 avatar Dec 19 '23 06:12 cyd257666

I encounter the same situation when importing through the StarRocks connector using Flink CDC and Flink; the Stream Load requests sent through HTTP will not load those columns.

cyd257666 avatar Dec 19 '23 06:12 cyd257666

This is the StarRocks connector version:

```xml
<dependency>
  <groupId>com.starrocks</groupId>
  <artifactId>flink-connector-starrocks</artifactId>
  <version>1.2.7_flink-1.13_${scala.version}</version>
</dependency>
```

cyd257666 avatar Dec 19 '23 06:12 cyd257666