
Upserts, Deletes And Incremental Processing on Big Data.

906 hudi issues, sorted by recently updated

## Tips
*Thank you very much for contributing to Apache Hudi.*
*Please review https://hudi.apache.org/contribute/how-to-contribute before opening a pull request.*
## What is the purpose of the pull request
...

priority:blocker

**Describe the problem you faced**
When I use config 1, exception A occurred in one of the two Spark jobs writing the same table; when I use config 2...

priority:major
spark
multi-writer
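The multi-writer label on the report above suggests optimistic concurrency control is in play. For context, a minimal Scala sketch of the OCC settings Hudi documents for concurrent writers; the table path, field names, and ZooKeeper address are placeholders, and `df` is assumed to be an existing DataFrame:

```scala
// Minimal sketch: two Spark jobs writing the same Hudi table need
// optimistic concurrency control plus a shared external lock provider.
// All names, paths, and addresses below are placeholders.
df.write.format("hudi").
  option("hoodie.table.name", "my_table").
  option("hoodie.datasource.write.recordkey.field", "uuid").
  option("hoodie.datasource.write.precombine.field", "ts").
  // Enable OCC so concurrent writers detect conflicting commits.
  option("hoodie.write.concurrency.mode", "optimistic_concurrency_control").
  // Under multi-writer, failed writes must be cleaned lazily.
  option("hoodie.cleaner.policy.failed.writes", "LAZY").
  // Lock provider shared by both writers.
  option("hoodie.write.lock.provider",
    "org.apache.hudi.client.transaction.lock.ZookeeperBasedLockProvider").
  option("hoodie.write.lock.zookeeper.url", "zk-host").
  option("hoodie.write.lock.zookeeper.port", "2181").
  option("hoodie.write.lock.zookeeper.lock_key", "my_table").
  option("hoodie.write.lock.zookeeper.base_path", "/hudi_locks").
  mode("append").
  save("s3://bucket/path/my_table")
```

Whether "config 1" and "config 2" in the report correspond to these settings is not stated in the excerpt.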

## What is the purpose of the pull request
Add a new module `hudi-metastore`.
## Brief change log
- HoodieMetastoreBasedTimeline
- HoodieMetastoreFileSystemView
The metastore has three parts:
- client, which connects with the server by...

priority:critical
big-needle-movers
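Since the excerpt is truncated, here is a purely illustrative Scala sketch of the client/server split it describes; apart from the class names quoted in the change log, every name below is hypothetical and not the actual `hudi-metastore` API:

```scala
// Hypothetical sketch of a metastore client: the timeline is fetched
// from a server instead of listed from the .hoodie/ directory.
trait HoodieMetastoreClient {
  // Ask the server for the active instants of a table.
  def getActiveTimeline(db: String, table: String): Seq[String]
  // Register a new commit instant with the server.
  def createNewInstant(db: String, table: String, instantTime: String): Unit
}

// A timeline backed by the client, mirroring in spirit the
// HoodieMetastoreBasedTimeline named in the change log above.
class MetastoreBackedTimeline(client: HoodieMetastoreClient,
                              db: String, table: String) {
  def instants: Seq[String] = client.getActiveTimeline(db, table)
}
```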

**Describe the problem you faced**
Currently using the DeltaStreamer to ingest from one S3 bucket to another. In Hudi v10 I would use the upsert operation in the DeltaStreamer. When...

schema-and-data-types
priority:major
deltastreamer
on-call-triaged
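For context on the operation being discussed: the DeltaStreamer's `--op UPSERT` has a datasource-API equivalent. A minimal Scala sketch, assuming a plain Parquet source; bucket names, the table name, and field names are placeholders:

```scala
// Minimal sketch of an S3-to-S3 upsert via the Spark datasource API,
// matching the semantics of DeltaStreamer's --op UPSERT.
// Paths and field names are placeholders.
val df = spark.read.parquet("s3://source-bucket/input/")

df.write.format("hudi").
  option("hoodie.table.name", "target_table").
  option("hoodie.datasource.write.recordkey.field", "id").
  option("hoodie.datasource.write.precombine.field", "ts").
  // "upsert" inserts new keys and updates rows whose keys already exist.
  option("hoodie.datasource.write.operation", "upsert").
  mode("append").
  save("s3://target-bucket/target_table")
```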

### Change Logs
_Describe context and summary for this change. Highlight if any code was copied._
### Impact
_Describe any public API or user-facing feature change or any performance impact._
...

priority:critical
index
rfc
size:M

Follow-up to #6344.
### Change Logs
### Impact
none
### Contributor's checklist
- [ ] Read through [contributor's guide](https://hudi.apache.org/contribute/how-to-contribute)
- [ ] Change Logs and Impact were stated clearly
- [...

**_Tips before filing an issue_**
- Have you gone through our [FAQs](https://hudi.apache.org/learn/faq/)?
- Join the mailing list to engage in conversations and get faster support at [email protected].
- If you...

priority:minor
spark

Obviously, an operator precedence problem.

When we incrementally query a Hudi table, if:
1. files in the metadata have been deleted;
2. we read from earliest;
3. the start commit is archived;
...
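For reference, a minimal Scala sketch of the read path in question: an incremental query that starts from the earliest commit by passing an instant time that sorts before all real ones (the table path is a placeholder):

```scala
// Minimal sketch of an incremental query reading from earliest.
// If the start commit is archived and metadata-listed files were
// deleted, this is the scenario described above.
val incDf = spark.read.format("hudi").
  option("hoodie.datasource.query.type", "incremental").
  // "000" sorts before every real instant time, i.e. read from earliest.
  option("hoodie.datasource.read.begin.instanttime", "000").
  load("s3://bucket/path/my_table")

incDf.show()
```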

**Describe the problem you faced**
After I updated Hudi from 0.8 to 0.11, using `spark.table(fullTableName)` to read a Hudi table no longer works; the table has been synced to Hive...

priority:major
spark-sql
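A minimal repro sketch of the setup described, in Scala; the table and path names are placeholders, and `spark` is an existing SparkSession:

```scala
// Read a Hive-synced Hudi table through the session catalog, which is
// the call reported to stop working after the 0.8 -> 0.11 upgrade.
val fullTableName = "mydb.hudi_table" // placeholder
spark.table(fullTableName).show()

// Path-based read that bypasses the catalog entirely, useful for
// checking whether the data itself is still readable.
spark.read.format("hudi").load("s3://bucket/path/hudi_table").show()
```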