[SUPPORT] Hudi's bucket index cannot be used to improve query engine operations such as join and filter
According to parisni in [HUDI-6150] Support bucketing for each hive client (https://github.com/apache/hudi/pull/8657):

"So I assume the Hudi way of doing it (which is not compliant with either Hive or Spark) cannot be used to improve query engine operations such as join and filter. Then this leads to all of the below being wrong:
- the current config https://hudi.apache.org/docs/configurations/#hoodiedatasourcehive_syncbucket_sync
- this current PR
- the RFC statement about support of Hive bucketing https://cwiki.apache.org/confluence/display/HUDI/RFC+-+29%3A+Hash+Index"
Do you have any update on this?
Hi Danny0405,
I think support for bucket-optimized Spark sort-merge joins between two Hudi tables is an important feature.
Currently, if we join two Hudi tables, the bucket index's bucket information is not usable by Spark, so a shuffle is always needed. As explained in #8657, the hashing, file naming, file numbering, and file sorting all differ from Spark's native bucketing.
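For illustration, here is a minimal sketch of how the shuffle shows up in the physical plan. The table paths and the join key (`customer_id`) are hypothetical, and it assumes the Hudi Spark bundle is on the classpath:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("hudi-bucket-join-check")
  .getOrCreate()

// Two Hudi tables, both written with the bucket index on customer_id (paths are hypothetical).
val orders    = spark.read.format("hudi").load("/warehouse/hudi/orders")
val customers = spark.read.format("hudi").load("/warehouse/hudi/customers")

// Join on the bucket key. Because Spark does not recognize Hudi's on-disk hash
// distribution, the physical plan shows Exchange (shuffle) nodes feeding the
// SortMergeJoin on both sides, even though the data is already hashed by key.
orders.join(customers, Seq("customer_id")).explain(true)
```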
Unfortunately, according to https://issues.apache.org/jira/browse/SPARK-19256, Spark bucketing is not yet compatible with Hive bucketing. So if we have to choose between Spark and Hive, I think Spark should have higher priority.
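For comparison, here is a hedged sketch of Spark's native bucketing, which is the layout Spark's planner can actually exploit. The table names, source paths, bucket count (32), and key are made up for illustration:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("spark-native-bucketing")
  .getOrCreate()

// Write two plain Parquet tables with Spark's native bucketing and sorting on the join key.
// bucketBy/sortBy require saveAsTable so the bucket spec is recorded in the catalog.
spark.read.parquet("/staging/orders")
  .write
  .bucketBy(32, "customer_id")
  .sortBy("customer_id")
  .format("parquet")
  .saveAsTable("orders_bucketed")

spark.read.parquet("/staging/customers")
  .write
  .bucketBy(32, "customer_id")
  .sortBy("customer_id")
  .format("parquet")
  .saveAsTable("customers_bucketed")

// Joining the two bucketed tables on the bucket key can plan as a SortMergeJoin
// without Exchange nodes (with broadcast joins disabled so the optimization is visible).
spark.table("orders_bucketed")
  .join(spark.table("customers_bucketed"), Seq("customer_id"))
  .explain(true)
```

Tables written this way carry bucket metadata that Spark's planner understands; per SPARK-19256 this layout is not compatible with Hive bucketing, and Hudi's bucket index layout differs from both, which is why neither engine can exploit it today.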
> So if we have to choose between Spark and Hive, I think Spark should have higher priority
I agree. Do you have the energy to complete that suspended PR?
I'm a newbie. It took me a while to understand why bucket join does not work.
This is a really useful feature to have. We want to use Hudi at work, but unfortunately we have a couple of bucketed/sorted tables, and this is definitely a blocker for our migration to Hudi.
@KnightChess do you have interest in pushing this feature forward?
> @KnightChess do you have interest in pushing this feature forward?
@danny0405 yes, I will follow up on this problem.