parquet-java
PARQUET-2416: Use 'mapreduce.outputcommitter.factory.class' in ParquetOutputFormat
see https://issues.apache.org/jira/browse/PARQUET-2416
This PR fixes a bug in ParquetOutputFormat, which hard-codes "new ParquetOutputCommitter(output, context)", where "class ParquetOutputCommitter extends FileOutputCommitter".
The fix uses the "delegate instead of extend" design pattern. Technically, it would be possible to declare "class ParquetOutputCommitter extends PathOutputCommitter" and delegate to an underlying PathOutputCommitter, but that would break the binary compatibility of the class. Therefore, the class still "extends FileOutputCommitter" and overrides all public methods to delegate.
Keeping "extends FileOutputCommitter" is also necessary for Spark, because Spark hard-codes the check "if (classOf[FileOutputCommitter].isAssignableFrom(clazz))". A sketch of the delegating shape is below.
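For illustration only, here is a minimal sketch of that delegating shape (the class and field names are hypothetical, not the actual PR code): the class keeps the FileOutputCommitter parent, so Spark's isAssignableFrom check still passes, while the commit protocol is forwarded to a PathOutputCommitter chosen elsewhere.

```java
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.FileOutputCommitter;
import org.apache.hadoop.mapreduce.lib.output.PathOutputCommitter;

// Hypothetical sketch: extends FileOutputCommitter for compatibility,
// but every public method forwards to a delegate PathOutputCommitter.
public class DelegatingParquetOutputCommitter extends FileOutputCommitter {
  private final PathOutputCommitter delegate;

  public DelegatingParquetOutputCommitter(Path outputPath, TaskAttemptContext context,
                                          PathOutputCommitter delegate) throws IOException {
    super(outputPath, context);
    this.delegate = delegate;
  }

  @Override
  public void setupJob(JobContext context) throws IOException {
    delegate.setupJob(context);
  }

  @Override
  public void commitJob(JobContext context) throws IOException {
    // The real ParquetOutputCommitter additionally writes the summary metadata files here.
    delegate.commitJob(context);
  }

  @Override
  public void setupTask(TaskAttemptContext context) throws IOException {
    delegate.setupTask(context);
  }

  @Override
  public boolean needsTaskCommit(TaskAttemptContext context) throws IOException {
    return delegate.needsTaskCommit(context);
  }

  @Override
  public void commitTask(TaskAttemptContext context) throws IOException {
    delegate.commitTask(context);
  }

  @Override
  public void abortTask(TaskAttemptContext context) throws IOException {
    delegate.abortTask(context);
  }

  @Override
  public Path getWorkPath() throws IOException {
    return delegate.getWorkPath();
  }
}
```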
Changing parquet-mr without a matching change in Spark would give an error at runtime:
java.lang.NoSuchMethodException: org.apache.parquet.hadoop.ParquetOutputCommitter.<init>()
at java.lang.Class.getConstructor0(Unknown Source) ~[?:?]
at java.lang.Class.getDeclaredConstructor(Unknown Source) ~[?:?]
at org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol.setupCommitter(SQLHadoopMapReduceCommitProtocol.scala:63) ~[spark-sql_2.12-3.3.3.jar:3.3.3]
at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.setupJob(HadoopMapReduceCommitProtocol.scala:187) ~[spark-core_2.12-3.3.3.jar:3.3.3]
To clean up this temporary code in parquet-mr, we need to wait for another PR on the Spark side that removes the "classOf[FileOutputCommitter].isAssignableFrom" check.
Could you please make CI happy?
spark already ships with a workaround for all of this, primarily so we could avoid having to fix up parquet too.
getting something choreographed across spark and parquet would be lovely.
ideally:
- spark should just look for a path output committer, and use its factory
- parquet shouldn't insist on its committers always being a subclass of ParquetOutputCommitter.
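As a rough sketch of that direction on the Parquet side (method and class names below are illustrative, not the actual ParquetOutputFormat code), the committer could be obtained from Hadoop's PathOutputCommitterFactory, which honours "mapreduce.outputcommitter.factory.class" and the per-scheme "mapreduce.outputcommitter.factory.scheme.*" keys, instead of hard-coding the committer class:

```java
import java.io.IOException;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.output.PathOutputCommitter;
import org.apache.hadoop.mapreduce.lib.output.PathOutputCommitterFactory;

class CommitterSelectionSketch {
  static PathOutputCommitter createCommitter(Path outputPath, TaskAttemptContext context)
      throws IOException {
    // Look up the configured committer factory for this output path
    // (falling back to the default FileOutputCommitter factory when none is set),
    // rather than doing "new ParquetOutputCommitter(outputPath, context)".
    return PathOutputCommitterFactory
        .getCommitterFactory(outputPath, context.getConfiguration())
        .createOutputCommitter(outputPath, context);
  }
}
```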