Adding New Configuration To Support ZSTD Level
**Tips before filing an issue**

- Have you gone through our FAQs?
- Join the mailing list to engage in conversations and get faster support at [email protected].
- If you have triaged this as a bug, then file an issue directly.
**Describe the problem you faced**

A clear and concise description of the problem.
**To Reproduce**

Steps to reproduce the behavior:
**Expected behavior**

A new config, `hoodie.parquet.zstd.compression.level`, to support setting the ZSTD compression level for Spark/Parquet writes.
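If the requested config existed, usage might look like the sketch below. Note that `hoodie.parquet.zstd.compression.level` is only the proposal in this issue, not an existing Hudi config; the table name and path are illustrative, while `hoodie.parquet.compression.codec` is an existing Hudi config:

```scala
// Hypothetical sketch: writing a Hudi table with zstd and an explicit level,
// assuming the proposed hoodie.parquet.zstd.compression.level config existed.
df.write.format("hudi").
  option("hoodie.table.name", "my_table").                // illustrative table name
  option("hoodie.parquet.compression.codec", "zstd").     // existing Hudi codec config
  option("hoodie.parquet.zstd.compression.level", "22").  // proposed config from this issue
  mode("append").
  save("/tmp/hudi/my_table")                              // illustrative path
```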
**Environment Description**

- Hudi version :
- Spark version : 3.3
- Hive version :
- Hadoop version :
- Storage (HDFS/S3/GCS..) :
- Running on Docker? (yes/no) :
**Additional context**

Add any other context about the problem here.

**Stacktrace**

Add the stacktrace of the error.
In Flink, you can use the `parquet.` prefix for any property that you want to customize on the Parquet writer; I am not sure whether Spark has a similar mechanism.
@Amar1404 With Spark, did you try passing the config along with the DataFrame write?

- `.option("parquet.compression.codec.zstd.level", "22")`
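For a plain Spark Parquet write (outside Hudi), the property suggested above is a parquet-mr setting that is read from the Hadoop configuration, so it may be more reliable to set it there than as a writer option; a sketch, with the output path illustrative:

```scala
// Sketch: set the parquet-mr zstd level on the Hadoop configuration
// before writing; the DataFrame `df` and path are illustrative.
spark.sparkContext.hadoopConfiguration.
  set("parquet.compression.codec.zstd.level", "22")

df.write.
  option("compression", "zstd").  // Spark's Parquet codec option
  parquet("/tmp/zstd_test")
```

Whether the level actually takes effect depends on which ZSTD codec implementation the Parquet/Hadoop stack in use resolves to.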
@Amar1404 Were you able to check this? Does it work?