paimon
Apache Paimon is a lake format that enables building a Realtime Lakehouse Architecture with Flink and Spark for both streaming and batch operations.
### Purpose
1. Add `name` in interface `Catalog`.
2. Make fileStoreTable's `fullName` return `catalog.db.table` if catalogName is present.
   - This is beneficial for systems like Kyuubi to obtain the catalogName...
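The naming rule above can be sketched as follows; this is a minimal illustration, and the function name and signature are hypothetical, not Paimon's actual Java API:

```python
def full_name(table, database, catalog=None):
    """Return `catalog.db.table` when a catalog name is present, else `db.table`."""
    parts = [catalog, database, table] if catalog else [database, table]
    return ".".join(parts)

print(full_name("orders", "sales", "my_catalog"))  # my_catalog.sales.orders
print(full_name("orders", "sales"))                # sales.orders
```

With a catalog name attached, downstream systems such as Kyuubi can resolve which catalog a table belongs to from its full name alone.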
### Search before asking
- [x] I searched in the [issues](https://github.com/apache/paimon/issues) and found nothing similar.

### Paimon version
master

### Compute Engine
all

### Minimal reproduce step
## Summary
When...
### Purpose
We set `sync-all-properties=false` but found that all properties still sync to HMS; we should check this against the Paimon table options.

### Tests

### API and Format

### Documentation
### Purpose
The primary keys are NOT NULL, unique, and naturally sorted in ascending order; when deletion-vector is enabled, the following rules allow a TopN to be converted to a limit.
1. The preceding sort keys...
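The optimization above rests on a simple observation: if rows are already stored in ascending key order (and deletion vectors ensure no deleted rows are visible), a TopN ordered by a prefix of those keys is just the first N rows. A minimal sketch, with illustrative data rather than Paimon internals:

```python
rows = [(1, "a"), (2, "b"), (3, "c"), (4, "d")]  # already sorted ascending by key

def top_n_via_sort(rows, n):
    # Generic TopN: sort by the key, then take the first n rows.
    return sorted(rows, key=lambda r: r[0])[:n]

def top_n_via_limit(rows, n):
    # Optimized form: the data is already ordered, so TopN degenerates to LIMIT n.
    return rows[:n]

assert top_n_via_sort(rows, 2) == top_n_via_limit(rows, 2)
```

The limit form skips the sort entirely, which is the whole benefit: the reader can stop after emitting N rows.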
### Purpose
- In Paimon 1.0, if compact_manifest failed, legacy manifest files would be deleted; PR https://github.com/apache/paimon/pull/5776 removes this deletion.
- Add a default commit user-prefix for the Spark procedure.

### Tests...
### Search before asking
- [x] I searched in the [issues](https://github.com/apache/paimon/issues) and found nothing similar.

### Paimon version
1.1.1

### Compute Engine
flink 1.20.1

### Minimal reproduce step
CREATE TABLE...
### Search before asking
- [x] I searched in the [issues](https://github.com/apache/paimon/issues) and found nothing similar.

### Motivation
In offline model inference and table-writing scenarios, it is common to conduct...
### Search before asking
- [x] I searched in the [issues](https://github.com/apache/paimon/issues) and found nothing similar.

### Motivation
When we need to improve query performance, one of the methods is to switch...
- When reading Parquet, Hive converts the timestamp to the local timezone while querying, which causes a result mismatch against Paimon.
- Add support to do the same when querying through Paimon...
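The mismatch above comes down to whether a stored UTC instant is rendered as-is or shifted into a session timezone. A minimal illustration of that shift; the timestamp and zone are illustrative and not tied to any Hive session:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A timestamp stored as a UTC instant (as Parquet timestamps commonly are).
utc_ts = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)

# Rendering it in a local/session timezone shifts the wall-clock value.
local = utc_ts.astimezone(ZoneInfo("Asia/Shanghai"))
print(utc_ts.hour, local.hour)  # 12 20
```

Both values denote the same instant; the mismatch is purely in presentation, so both engines must agree on whether to apply the session-timezone conversion.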
### Purpose
Support dropping a partition at the top level in Spark when the partition spec has more than one field. Example: ``` CREATE TABLE tbl (id int, data string) USING paimon PARTITIONED...
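The matching semantics being requested can be sketched as follows: a partial spec that names only the top-level field should drop every partition whose values match that field, regardless of the remaining fields. Partition values and field names here are illustrative, not Paimon internals:

```python
partitions = [
    {"dt": "2024-01-01", "hr": "00"},
    {"dt": "2024-01-01", "hr": "01"},
    {"dt": "2024-01-02", "hr": "00"},
]

def drop_partitions(partitions, spec):
    # Keep only the partitions that do NOT match every field in the (possibly partial) spec.
    return [p for p in partitions
            if not all(p.get(k) == v for k, v in spec.items())]

# Dropping by the top-level field alone removes both hourly sub-partitions of that day.
remaining = drop_partitions(partitions, {"dt": "2024-01-01"})
```

Here `remaining` keeps only the `2024-01-02` partition, which is the behavior the spec-prefix drop is asking for.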