Steve Loughran
The failure is because the spark 3.4 runtimes are built against older hadoop releases.
@danielcweeks so here we are:
1. no reflection
2. forced update of hadoop version on spark 3.4 and 3.5

That forced update isn't great, but if iceberg compiles with hadoop...
Aah, this got closed while I was off on a European-length vacation. PITA.
@nastra thanks. I think I'll split the "move everything to hadoop 3.4.1 libs" change from the code changes, so that the build changes which highlight the spark 3.4 issues can go in on their own.
Prepared parquet for this by renaming the vectorio package to `org.apache.parquet.hadoop.util.wrappedio`.
@mukund-thakur this PR renames `bulkDelete_PageSize` to `bulkDelete_pageSize` to be consistent with everything else. My iceberg PR https://github.com/apache/iceberg/pull/10233 looks for the new name; it is now dynamic and should build link...
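Because downstream code binds to these methods dynamically by name rather than compiling against them, a rename like this is only safe if callers probe for the method and fall back gracefully. A minimal sketch of that probe-by-reflection pattern, using a hypothetical stub class in place of the real `WrappedIO` (the stub, its return value, and the helper names are illustrative assumptions, not the actual Hadoop or Iceberg code):

```java
import java.lang.reflect.Method;

// Hypothetical stand-in for the real wrapped-IO class; only the
// renamed lower-camel-case method exists, mirroring this PR.
class WrappedIOStub {
    public static int bulkDelete_pageSize(String path) {
        return 250; // placeholder page size, not a real Hadoop value
    }
}

public class DynamicLookup {
    // Resolve a public static method by name, returning null when absent,
    // the way a dynamic binding probes for an optional capability.
    static Method find(Class<?> cls, String name, Class<?>... argTypes) {
        try {
            return cls.getMethod(name, argTypes);
        } catch (NoSuchMethodException e) {
            return null;
        }
    }

    public static void main(String[] args) throws Exception {
        // Probe the new name first, then fall back to the old one,
        // so the caller links against either hadoop release.
        Method m = find(WrappedIOStub.class, "bulkDelete_pageSize", String.class);
        if (m == null) {
            m = find(WrappedIOStub.class, "bulkDelete_PageSize", String.class);
        }
        if (m == null) {
            System.out.println("bulk delete unavailable");
        } else {
            System.out.println("page size = " + m.invoke(null, "s3a://bucket/path"));
        }
    }
}
```

The point of the two-name probe is that a single iceberg binary can run against runtimes from before and after the rename without a compile-time dependency on either.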
Thread memory problems are triggering test failures.
test failure is `org.apache.hadoop.hdfs.server.namenode.TestFSNamesystemLockReport.test`
javadocs:
```
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/functional/FunctionalIO.java:83: warning: no @param for
```
checkstyles are all about use of `_` in method names, except for one:
```
./hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/wrappedio/impl/TestWrappedStatistics.java:257: snapshot.setCounter( "c1", 10);:24: '(' is followed by...
```