[HUDI-7441] Move `getWritePartitionPaths` method to common module to …
As HUDI-7441 mentions, a plain Flink streaming read of a Hudi table should not require a Hive dependency. However, the `IncrementalInputSplits#getReadPartitions` method calls `HoodieInputFormatUtils.getWritePartitionPaths`, and `HoodieInputFormatUtils` imports Hive classes, so we currently have to include the Hive dependency even in use cases without any Hive integration.
Change Logs
Move the `IncrementalInputSplits#getReadPartitions` method to `HoodieCommitMetadataUtils#getReadPartitions`
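To illustrate the idea behind the move: the logic that collects the partition paths touched by a set of commits only needs the commit metadata itself, not anything Hive-specific, so it can live in a Hive-free common module. The sketch below models that with a plain `Map` standing in for Hudi's commit metadata (partition path to write stats); the class and method names here are illustrative, not Hudi's actual API.

```java
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;
import java.util.stream.Collectors;

// Simplified, Hive-free model of collecting write partition paths from
// commit metadata. Each Map stands in for one commit's metadata
// (partition path -> write stats); the real Hudi types are omitted so
// this compiles with no extra dependencies.
public class WritePartitionPathsSketch {

    static Set<String> getWritePartitionPaths(List<Map<String, List<String>>> metadataList) {
        // Union of all partition paths written across the given commits,
        // sorted for deterministic output.
        return metadataList.stream()
                .flatMap(metadata -> metadata.keySet().stream())
                .collect(Collectors.toCollection(TreeSet::new));
    }

    public static void main(String[] args) {
        Map<String, List<String>> commit1 = Map.of("dt=2024-01-01", List.of("file1"));
        Map<String, List<String>> commit2 = Map.of(
                "dt=2024-01-02", List.of("file2"),
                "dt=2024-01-01", List.of("file3"));
        System.out.println(getWritePartitionPaths(List.of(commit1, commit2)));
        // prints [dt=2024-01-01, dt=2024-01-02]
    }
}
```

Because the helper depends only on the metadata, callers such as the Flink `IncrementalInputSplits` path can resolve read partitions without pulling Hive classes onto the classpath.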
Impact
No
Risk level (write none, low medium or high below)
Low
Documentation Update
No
Contributor's checklist
- [ ] Read through contributor's guide
- [ ] Change Logs and Impact were stated clearly
- [ ] Adequate tests were added if applicable
- [ ] CI passed
CI report:
- ba1ae3434440bf73d4a32465de630ceffaffcc97 Azure: SUCCESS
Bot commands
@hudi-bot supports the following commands:
- `@hudi-bot run azure` re-run the last Azure build
@danny0405 do we need to cherry-pick this to the 0.x or 0.14 branch?
I don't think so, it is just a code refactoring right?
It also solves the problem that we have to include a Hive dependency jar (e.g. flink-hive-connector) when Flink streaming-reads a normal Hudi table, even though the Flink job doesn't integrate with Hive at all.
Sure, my fault, this is a critical fix for flink streaming read, let's cherry pick it into the branch-0.x.
Can you also update our site to remove the key requirements: https://hudi.apache.org/docs/flink-quick-start-guide#streaming-query?