
Apache Paimon is a lake format that enables building a Realtime Lakehouse Architecture with Flink and Spark for both streaming and batch operations.

Results: 540 paimon issues (sorted by recently updated)

### Purpose

Linked issue: close #xxx

When querying with paimon_incremental_between_timestamp, we want to switch among different scan modes:

* delta or changelog, if every single change is needed.
* diff, ...
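For context, this is roughly the query shape involved, sketched in Spark SQL against Paimon's incremental table-valued function; the table name and timestamp values here are hypothetical, and the exact argument format may differ by Paimon version:

```sql
-- Hypothetical table and timestamps: reads the changes committed
-- between the two instants from a Paimon table.
SELECT *
FROM paimon_incremental_between_timestamp(
    'default.orders',
    '2024-01-01 00:00:00',
    '2024-01-02 00:00:00'
);
```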

### Purpose

Linked issue: close #5079

Validate aggregation functions that don't exist before creating the sink and source, instead of throwing an exception when inserting and reading data.

### Tests

PreAggregationITCase.NotExistAggregationFunctionITCase
PartiallUpdateITCase.testSequenceGroupWithNotExistAgg...

### Purpose

Linked issue: close #5214

### Tests

MergeTreeCompactManagerTest#testSyncOrphanFiles()
KeyValueFileStoreWriteTest#testMultiWriteModeEnabled()
KeyValueFileStoreWriteTest#testMultiWriteModeDisabled()

### API and Format

### Documentation

### Search before asking

- [X] I searched in the [issues](https://github.com/apache/paimon/issues) and found nothing similar.

### Paimon version

0.9

### Compute Engine

hive: 2.1-cdh-6.3-1

### Minimal reproduce step

create paimon...

bug

### Search before asking

- [x] I searched in the [issues](https://github.com/apache/paimon/issues) and found nothing similar.

### Motivation

Flink 2.0 has been released recently, and Paimon's Flink version should be updated...

enhancement

### Search before asking

- [x] I searched in the [issues](https://github.com/apache/paimon/issues) and found nothing similar.

### Motivation

I want to implement incremental data reading of snapshots using the Java API. I...

enhancement

### Search before asking

- [X] I searched in the [issues](https://github.com/apache/paimon/issues) and found nothing similar.

### Motivation

Currently, when we define the `bucket-key` in the Paimon table, the bucket is...

enhancement
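For reference, `bucket-key` is declared as a table option at creation time; a minimal Flink SQL sketch (the table schema here is hypothetical):

```sql
-- Hypothetical table: rows are distributed into 4 buckets
-- by hashing the bucket-key column user_id.
CREATE TABLE user_events (
    user_id BIGINT,
    event_time TIMESTAMP(3),
    payload STRING
) WITH (
    'bucket' = '4',
    'bucket-key' = 'user_id'
);
```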

### Purpose

Linked issue: close #xxx

This PR is for computing columns when using kafka_sync_database. There would be multiple different tables to sync, so it's hard to list the exact...

### Purpose

Linked issue: part of #4816

Support the Spark DataSource V2 write path to reduce write serialization overhead and accelerate writing to primary key tables in Spark. Currently...

### Purpose

When the `scan.manifest.parallelism` parameter is reduced, it does not take effect.

### Tests

### API and Format

### Documentation
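One way such a per-query option is typically supplied is through Flink SQL dynamic table options; a hedged sketch (the table name is hypothetical), which is the kind of setting the fix needs to honor:

```sql
-- Hypothetical table: attempts to cap manifest scanning
-- at 2 threads for this query via a dynamic table option.
SELECT *
FROM orders /*+ OPTIONS('scan.manifest.parallelism' = '2') */;
```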