
Request: Flink connector enhancements

Open · davecromberge opened this issue on Feb 20, 2024 · 3 comments

What needs to be done?

  • Upgrade Flink version: The Flink version in the Pinot project pom should be updated from 1.14.6 to the latest release, 1.18.1.

  • Authentication: The org.apache.pinot.controller.helix.ControllerRequestClient does not currently accept an org.apache.pinot.spi.auth.AuthProvider, and neither does org.apache.pinot.connector.flink.sink.PinotSinkFunction<T>. A sketch of what this could look like follows this list.

  • Error handling: Verify that the underlying SegmentWriter handles errors when appending records to a segment and that those errors propagate correctly to the Flink runtime. Likewise, check that the SegmentUploader handles transmission errors and propagates them correctly.

  • Schema and Data Type Mapping: Correctly convert Flink timestamps with time zones to their Pinot equivalents (see this comment), and handle double/decimal conversions (see this comment). A conversion sketch also follows this list.
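
On the authentication point, neither class currently takes an AuthProvider, so there is no way to attach the Authorization header that Basic-auth-protected controllers require. Below is a minimal sketch of how credentials could be threaded through. The PinotSinkFunction overload shown in the comment is hypothetical (it is exactly what this issue proposes adding), and StaticTokenAuthProvider is assumed to be the pinot-common helper that replays a fixed token as the Authorization header on every request:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import org.apache.pinot.common.auth.StaticTokenAuthProvider;
import org.apache.pinot.spi.auth.AuthProvider;

public final class BasicAuthSketch {

  // Build an AuthProvider carrying a Basic auth token. StaticTokenAuthProvider
  // (assumed API) simply replays the given value as the Authorization header.
  static AuthProvider basicAuth(String user, String password) {
    String token = "Basic " + Base64.getEncoder()
        .encodeToString((user + ":" + password).getBytes(StandardCharsets.UTF_8));
    return new StaticTokenAuthProvider(token);
  }

  // Hypothetical usage -- this constructor overload does not exist today:
  //
  //   PinotSinkFunction<Row> sink =
  //       new PinotSinkFunction<>(rowConverter, tableConfig, schema,
  //                               basicAuth("admin", "secret"));
}
```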
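
On the type-mapping point, the value-level conversions are small but easy to get wrong. Here is a minimal sketch under stated assumptions: Pinot TIMESTAMP columns store epoch milliseconds as long, and Flink surfaces TIMESTAMP_LTZ as java.time.Instant and plain TIMESTAMP as java.time.LocalDateTime (the Table API's default conversion classes). The class and method names are illustrative only:

```java
import java.math.BigDecimal;
import java.time.Instant;
import java.time.LocalDateTime;
import java.time.ZoneOffset;

public final class PinotValueConversions {

  // TIMESTAMP_LTZ: an Instant is an absolute point in time, so the conversion
  // to an epoch-millis representation is unambiguous.
  static long toEpochMillis(Instant instant) {
    return instant.toEpochMilli();
  }

  // TIMESTAMP (no zone): a zone must be chosen explicitly. UTC is used here,
  // but this is precisely the decision the connector should expose as
  // configuration rather than hard-code.
  static long toEpochMillis(LocalDateTime localDateTime) {
    return localDateTime.toInstant(ZoneOffset.UTC).toEpochMilli();
  }

  // DECIMAL: Pinot's BIG_DECIMAL column type accepts BigDecimal directly;
  // routing the value through double would silently lose precision.
  static BigDecimal toPinotDecimal(BigDecimal value) {
    return value;
  }
}
```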

Nice to haves:

  • Segment size parameter: Currently segmentFlushMaxNumRecords controls when a segment is flushed, based on the number of ingested rows. A complementary desiredSegmentSize parameter could flush segments when the byte count approaches or exceeds a size threshold; see the sketch after this list.

  • Checkpoint support: Understand the limitations behind supporting only batch-mode execution in Flink. Can the current segment writer be serialized, and is there support for resuming from the serialized state?

  • Connector assembly: Package the connector as a single assembly with shaded dependencies so that it can be used within the Flink SQL environment, as is done for other connectors such as Delta Lake and Google BigQuery.
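
For the segment size parameter, a rough sketch of the proposed dual trigger follows. All names here are hypothetical (FlushPolicy, desiredSizeBytes, onRow); the existing connector only counts rows:

```java
// The true on-disk segment size is only known once the segment is sealed, so a
// running estimate of serialized record sizes is one cheap approximation for
// the byte threshold.
final class FlushPolicy {
  private final long maxRows;          // mirrors segmentFlushMaxNumRecords
  private final long desiredSizeBytes; // the proposed desiredSegmentSize
  private long rows;
  private long bytes;

  FlushPolicy(long maxRows, long desiredSizeBytes) {
    this.maxRows = maxRows;
    this.desiredSizeBytes = desiredSizeBytes;
  }

  // Record one row; estimatedBytes is the caller's approximation of its
  // serialized size. Returns true when a flush should be triggered.
  boolean onRow(long estimatedBytes) {
    rows++;
    bytes += estimatedBytes;
    return rows >= maxRows || bytes >= desiredSizeBytes;
  }

  // Called after the segment has been flushed and a new one started.
  void reset() {
    rows = 0;
    bytes = 0;
  }
}
```

The sink's invoke() would call onRow(...) per record and flush and reset whenever it returns true; setting desiredSizeBytes to Long.MAX_VALUE preserves the current row-count-only behavior.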

Other questions:

  • Is it possible to upload directly to a deep store and push metadata to the controller, or do we need the controller to implement the two-phase commit protocol?

Why the feature is needed

Our use case involves pre-aggregating data with Apache DataSketches before ingestion into Pinot. The sketches are serialized as binary, can be on the order of megabytes, and are appended to a Delta Lake table. The idea is to stream records continuously from Delta Lake using the Flink Delta Connector, with fine-grained control over Pinot segment generation, and to upload the resulting segments directly to Pinot. Our Pinot controllers are secured with Basic Authentication.

It would be possible to clone the existing connector and make these modifications privately, but some of these enhancements might benefit other users, so discussing them here seems better.

Initial idea/proposal: Discuss the points above and collaborate on an implementation.

davecromberge · Feb 20 '24 14:02