To avoid the warning above, use the following configuration so that the user's group information is obtained from the Ranger user store: `ranger.plugin.$getServiceType.use.usergroups.from.userstore.enabled = true`
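As a minimal illustration, the property would look like this in the plugin's configuration file, with the service type substituted in place of the `$getServiceType` placeholder (`hive` here is only a hypothetical example, not a value from this thread):

```properties
# Hypothetical example: service type "hive" substituted for $getServiceType.
# Fetch the user's groups from the Ranger user store instead of resolving
# them locally, avoiding the warning above.
ranger.plugin.hive.use.usergroups.from.userstore.enabled = true
```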
I also hit this issue with Hudi 0.14.1, Hadoop 3.2.2, Spark 3.4.2, HBase 2.4.5. Does anyone have a solution?
> [@wardlican](https://github.com/wardlican), thanks for reporting this. Would you like to fix it?

OK, I will fix it.
> From the error message, it seems that the thrift deserialization uses an incompatible thrift protocol to deserialize the parquet metadata. If the thrift protocol used to write the parquet...
A partial-submission feature similar to the spark-procedure one could be used; we also plan to implement this feature.
Please conduct a code review of the implementation here.
Please perform a code review of these implementations.
Please help review whether this fix plan is feasible. Fix results:

- When the total size of the input files is less than `targetSize`, return `Long.MAX_VALUE` (existing logic).
- When...
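The first case above could be sketched as follows. This is only a minimal illustration of the described behavior; the class and method names are hypothetical and do not correspond to Hudi's actual API:

```java
public class TargetSizeSketch {

    /**
     * Hypothetical helper illustrating the fix plan: when the combined
     * input is already smaller than targetSize, keep the existing logic
     * and return Long.MAX_VALUE (i.e. no effective per-file limit, so
     * the inputs can be merged into one output file).
     */
    public static long resolveFileLimit(long totalInputSize, long targetSize) {
        if (totalInputSize < targetSize) {
            // Existing logic: no effective limit.
            return Long.MAX_VALUE;
        }
        // Placeholder for the remaining (truncated) cases of the plan;
        // here we simply cap at the configured target size.
        return targetSize;
    }

    public static void main(String[] args) {
        System.out.println(resolveFileLimit(50L, 100L));
        System.out.println(resolveFileLimit(500L, 100L));
    }
}
```

Whether returning `Long.MAX_VALUE` is safe is exactly the concern raised in the reply below, since it removes any upper bound on a single output file.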
> @wardlican I have the same concern: could simply setting it to the maximum value result in a single file being excessively large? Could we perhaps refer to Spark's estimation...