zstd compressor and decompressor use the same configuration
I use Spark to rewrite Parquet files that are compressed with zstd; the Parquet version is 1.12.2. I want to read Parquet files compressed at level 3 and recompress them at a different level, but the level cannot be changed. After checking the source, I found that the codec is cached, so the updated configuration is never picked up: https://github.com/apache/parquet-mr/blob/master/parquet-hadoop/src/main/java/org/apache/parquet/hadoop/CodecFactory.java#L144
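For illustration, here is a simplified sketch of the caching pattern in `CodecFactory.getCodec` (class and field names are illustrative, not the exact parquet-mr source): the codec is cached by its class name alone, so the `Configuration` captured when the codec was first created wins, and a later change to the zstd level key is never seen.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.compress.CompressionCodec;
import org.apache.hadoop.util.ReflectionUtils;

class CachingCodecFactorySketch {
  // Static, JVM-wide cache keyed only by codec class name. It survives across
  // write jobs running in the same executor JVM.
  private static final Map<String, CompressionCodec> CODEC_BY_NAME =
      Collections.synchronizedMap(new HashMap<String, CompressionCodec>());

  private final Configuration configuration;

  CachingCodecFactorySketch(Configuration configuration) {
    this.configuration = configuration;
  }

  CompressionCodec getCodec(String codecClassName) {
    CompressionCodec codec = CODEC_BY_NAME.get(codecClassName);
    if (codec != null) {
      // Cache hit: this.configuration (possibly carrying a new
      // parquet.compression.codec.zstd.level) is ignored entirely.
      return codec;
    }
    try {
      Class<?> codecClass = Class.forName(codecClassName);
      // The codec is configured exactly once, with whatever Configuration
      // happens to be current at first use.
      codec = (CompressionCodec) ReflectionUtils.newInstance(codecClass, configuration);
      CODEC_BY_NAME.put(codecClassName, codec);
      return codec;
    } catch (ClassNotFoundException e) {
      throw new RuntimeException("Class " + codecClassName + " was not found", e);
    }
  }
}
```

A fix along these lines would include level-sensitive settings such as the zstd level in the cache key (or skip caching for such codecs), so that a new `Configuration` yields a freshly configured codec.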
I think this problem is important. I found it while trying to use a different level to compact the files in an Iceberg table. Asynchronously rewriting files at a higher level can yield a higher compression ratio, which matters for saving storage costs.
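For reference, here is a minimal Spark sketch of the rewrite I am attempting (the paths are placeholders; `parquet.compression.codec.zstd.level` is the parquet-mr setting for the zstd level, which defaults to 3):

```java
import org.apache.spark.sql.SparkSession;

public class RewriteZstdLevel {
  public static void main(String[] args) {
    SparkSession spark = SparkSession.builder()
        .appName("rewrite-zstd-level")
        .getOrCreate();

    // First job in this application: the ZstandardCodec is created and
    // cached with a Configuration carrying level 3.
    spark.sparkContext().hadoopConfiguration()
        .set("parquet.compression.codec.zstd.level", "3");
    spark.read().parquet("/data/in")
        .write().option("compression", "zstd").parquet("/data/level3");

    // Later in the same JVMs: raise the level and rewrite for compaction.
    spark.sparkContext().hadoopConfiguration()
        .set("parquet.compression.codec.zstd.level", "9");
    spark.read().parquet("/data/level3")
        .write().option("compression", "zstd").parquet("/data/level9");

    // Expected: /data/level9 is noticeably smaller than /data/level3.
    // Observed: roughly the same size, because CodecFactory returns the
    // cached ZstandardCodec that still holds the level-3 Configuration.
    spark.stop();
  }
}
```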
Reporter: Peidian Li
Note: This issue was originally created as PARQUET-2152. Please see the migration documentation for further details.
The issue still exists in Apache Spark 4.0-preview1 as well. Please fix it.