
Results: 77 starrocks-connector-for-apache-flink issues
Sort by recently updated

Does the CDC-to-StarRocks sink have an option to ignore deletes from the source tables, e.g. something like `sink.deletion.ignored = true`, so the source database can periodically purge log-style data while StarRocks keeps the full history? Supporting this should be fairly simple (the StarRocks sink just needs to filter on the `__op` type; Debezium itself has the option `debezium.skipped.operations = 'c, u, d'`, but for some reason it becomes ineffective in the various Flink source connector implementations, so the filtering cannot be done at the source). For users, however, achieving the same effect themselves is cumbersome (it requires custom development or regular manual cleanup via temporary tables; we have been waiting two weeks for this feature). Related: https://github.com/StarRocks/starrocks/issues/14559. Other suggestions: [https://docs.starrocks.io/zh-cn/latest/loading/PrimaryKeyLoad#upsert](https://docs.starrocks.io/zh-cn/latest/loading/PrimaryKeyLoad#upsert) does support loading upserts and deletes separately, but that is not support in the CDC connector (so it does not match this need); moreover, even that offline loading path could be improved: if Stream Load from files etc. also supported `__op`...
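A workaround sketch only, not an existing connector option: assuming the CDC events arrive as a `DataStream<Row>` whose change type is carried in `RowKind`, the delete events could be filtered out before the stream reaches the StarRocks sink. The class and method names below are illustrative.

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.types.Row;
import org.apache.flink.types.RowKind;

public class IgnoreDeletesSketch {

    // Hypothetical workaround until an option such as `sink.deletion.ignored` exists:
    // drop DELETE / UPDATE_BEFORE change events so that periodic purges in the source
    // database do not remove the corresponding rows in StarRocks.
    public static DataStream<Row> keepUpsertsOnly(DataStream<Row> cdcStream) {
        return cdcStream.filter(row ->
                row.getKind() != RowKind.DELETE
                        && row.getKind() != RowKind.UPDATE_BEFORE);
    }
}
```

The filtered stream would then be handed to the StarRocks sink in place of the raw CDC stream; when using the Table/SQL CDC connectors, the stream would first have to be converted to a DataStream for this to apply.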

Flink connects to StarRocks to export data; the program runs normally, but at some point it suddenly crashes with an error like `Failed to get next from be -> ip:[10.151.217.3] NOT_FOUND msg:[context_id: be9a2c1f-2119-4331-ad6b-23dd0698f555 not found]`. The IP is random and differs on every run, yet the data can still be exported correctly.
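Purely as a hedged guess, not a confirmed diagnosis for this report: a `context_id ... not found` from a BE often means the scan context on that BE was released before the next batch was fetched. If that is the cause, the source's keep-alive / timeout settings may be worth increasing. A sketch follows, where the endpoints are placeholders and the option names `scan.params.keep-alive-min` / `scan.params.query-timeout-s` should be verified against the connector documentation for the version in use.

```java
import com.starrocks.connector.flink.table.source.StarRocksSourceOptions;

public class SourceScanTimeoutSketch {

    // Illustrative only: source options with a longer keep-alive and query timeout,
    // assuming the BE scan context is being cleaned up mid-read.
    public static StarRocksSourceOptions buildOptions() {
        return StarRocksSourceOptions.builder()
                .withProperty("scan-url", "fe_host:8030")              // placeholder
                .withProperty("jdbc-url", "jdbc:mysql://fe_host:9030") // placeholder
                .withProperty("username", "user")
                .withProperty("password", "pass")
                .withProperty("database-name", "db")
                .withProperty("table-name", "tbl")
                .withProperty("scan.params.keep-alive-min", "30")      // assumed option name
                .withProperty("scan.params.query-timeout-s", "600")    // assumed option name
                .build();
    }
}
```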

Are there documented compilation and packaging steps? Could you share them? The following classes cannot be found: `com.starrocks.shade.org.apache.thrift.TException`, `com.starrocks.shade.org.apache.thrift.protocol.TBinaryProtocol`, `com.starrocks.shade.org.apache.thrift.protocol.TProtocol`, `com.starrocks.shade.org.apache.thrift.transport.TSocket`, `com.starrocks.shade.org.apache.thrift.transport.TTransportException`.

Version: flink-connector-starrocks 1.2.3_flink-1.15, Flink 1.15.0. Environment: IDEA. Question: when dynamic (partial) column update is specified, does the CSV format still need `sink.properties.columns` to be set? The columns are set according to the field names in flink-connector-starrocks...
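For reference, a minimal sketch of sink options for a partial column update in CSV format; the `sink.properties.*` entries are passed through to Stream Load, and the exact property names (`partial_update`, `columns`) as well as the import path are assumptions to verify against the connector version in use.

```java
import com.starrocks.connector.flink.table.sink.StarRocksSinkOptions; // package path may differ by version

public class PartialUpdateSinkSketch {

    // Illustrative sink options for writing only a subset of columns (k1,k2,v1 here)
    // of a primary-key table via CSV; properties under `sink.properties.*` are
    // forwarded to Stream Load.
    public static StarRocksSinkOptions buildOptions() {
        return StarRocksSinkOptions.builder()
                .withProperty("jdbc-url", "jdbc:mysql://fe_host:9030")   // placeholder
                .withProperty("load-url", "fe_host:8030")                // placeholder
                .withProperty("username", "user")
                .withProperty("password", "pass")
                .withProperty("database-name", "db")
                .withProperty("table-name", "tbl")
                .withProperty("sink.properties.format", "csv")
                .withProperty("sink.properties.partial_update", "true")  // assumed property name
                .withProperty("sink.properties.columns", "k1,k2,v1")     // columns being written
                .build();
    }
}
```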

StarRocksSink supports a JSON object as the element of the stream source, but when the element of the stream source is a JSON array such as `[{"a":"a","b":"b"},{"c":"c","d":"d"}]`, it is not supported and an error is thrown...
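A possible workaround sketch (not connector functionality): split each JSON-array string into individual JSON objects with a flatMap before the sink, so that every stream element is a single object. The use of fastjson and the class/method names are assumptions for illustration.

```java
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.JSONArray;
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.util.Collector;

public class JsonArrayFlattenSketch {

    // Turns a stream of JSON-array strings such as [{"a":"a","b":"b"},{"c":"c","d":"d"}]
    // into a stream of single JSON-object strings, which the StarRocks sink accepts.
    public static DataStream<String> flatten(DataStream<String> jsonArrays) {
        return jsonArrays.flatMap(new FlatMapFunction<String, String>() {
            @Override
            public void flatMap(String value, Collector<String> out) {
                JSONArray array = JSON.parseArray(value);
                for (int i = 0; i < array.size(); i++) {
                    out.collect(array.getJSONObject(i).toJSONString());
                }
            }
        });
    }
}
```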

The Flink task consumes from Kafka and writes to StarRocks, implemented via flink-connector-starrocks. The task has been running for 3 months. On one node of the YARN cluster, it was found that this task created...

- Background: the connector doesn't support partial update and throws an exception like "Fields count of test_tbl mismatch. flinkSchema[8]:k1,k2,v1,v2,v3,v4,v5,v6\n realTab[7]:k1,k2,v1,v2,v3,v4,v5" - Support new option `sink.partial.update` - Update README - Add test case

![image](https://user-images.githubusercontent.com/13082598/201814589-2490e835-a2ef-47a3-8617-bfc370183469.png) After going through the source code, I found that this is indeed not adapted. ![image](https://user-images.githubusercontent.com/13082598/201814692-4dc8d92c-63f9-4db3-9175-f6626a4de165.png)

When the project contains the following list of JARs: connect-api-2.7.0.jar fastjson-1.2.68.jar flink-connector-kafka_2.12-1.13.6.jar flink-connector-starrocks-1.2.3_flink-1.13_2.11.jar flink-csv-1.13.6.jar flink-dist_2.12-1.13.6.jar flink-json-1.13.6.jar flink-s3-fs-hadoop-1.14.2.jar flink-shaded-hadoop-uber-3.1.2.jar flink-shaded-zookeeper-3.4.14.jar flink-sql-connector-mysql-cdc-2.2.0.jar flink-table_2.12-1.13.6.jar flink-table-blink_2.12-1.13.6.jar kafka-clients-2.4.1.jar log4j-1.2-api-2.17.1.jar log4j-api-2.17.1.jar log4j-core-2.17.1.jar log4j-slf4j-impl-2.17.1.jar mysql-connector-java-8.0.16.jar flink-connector-starrocks-1.2.3_flink-1.13_2.11.jar and the configuration contains the following settings: s3a.endpoint: http://XXX.XXX.XXX s3a.access.key: XXX s3a.secret.key: XXX s3a.path.style.access: true...

## What type of PR is this: - [x] bugfix - [ ] feature - [ ] enhancement - [ ] refactor - [ ] others ## Which issues of...