Steve Loughran
The latest update runs the tests against the local FS, parameterized on using/not using bulk delete; the library settings have been modified to use Hadoop 3.4.1-SNAPSHOT for this. It works, and...
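For illustration, a minimal sketch of what that parameterization could look like with JUnit 5; the class and test names here are hypothetical, not the actual test code in this PR:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.nio.file.Files;
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BulkDelete;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.ValueSource;

class TestLocalFsBulkDelete {

  // Each case runs twice: once through the bulk delete API,
  // once through the classic FileSystem.delete() path.
  @ParameterizedTest
  @ValueSource(booleans = {true, false})
  void deleteFile(boolean useBulkDelete) throws Exception {
    FileSystem fs = FileSystem.getLocal(new Configuration());
    Path dir = new Path(Files.createTempDirectory("bulk-delete-test").toUri());
    Path file = new Path(dir, "data.txt");
    fs.create(file, true).close();

    if (useBulkDelete) {
      // Hadoop 3.4.1+ API; the local FS reports a page size of 1.
      try (BulkDelete bulk = fs.createBulkDelete(dir)) {
        // The returned list holds (path, error) entries for anything not deleted.
        assertTrue(bulk.bulkDelete(List.of(file)).isEmpty());
      }
    } else {
      assertTrue(fs.delete(file, false));
    }
    assertFalse(fs.exists(file));
  }
}
```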
+ going to create a separate test repository to run an Iceberg build against S3/ABFS etc. with a different Hadoop build; this will do end-to-end testing.
For anyone watching this, there's a full integration test suite in the Hadoop test code: https://github.com/apache/hadoop/pull/7285 All is good, though as it's the first Java 17 code and depends on an...
Closing as-is because now that Hadoop 3.4.1 is the Hadoop version, this can be done without reflection.(*) Doing that in a new PR; this one exists to show the...
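A rough sketch of the reflection-free call path against the Hadoop 3.4.1 bulk delete API; the helper class here is hypothetical, not the code in the follow-up PR:

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.fs.BulkDelete;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

/** Hypothetical helper: delete paths under a common base, page by page. */
final class BulkDeleter {

  static List<Map.Entry<Path, String>> deleteAll(
      FileSystem fs, Path base, List<Path> paths) throws IOException {
    List<Map.Entry<Path, String>> failures = new ArrayList<>();
    // With Hadoop 3.4.1 on the compile and runtime classpath this links
    // directly: no reflection, no NoSuchMethodError handling.
    try (BulkDelete bulk = fs.createBulkDelete(base)) {
      int pageSize = bulk.pageSize(); // 1 on most stores, larger on S3A
      for (int i = 0; i < paths.size(); i += pageSize) {
        List<Path> page = paths.subList(i, Math.min(i + pageSize, paths.size()));
        // Returned entries are (path, error) pairs for undeleted files.
        failures.addAll(bulk.bulkDelete(page));
      }
    }
    return failures;
  }
}
```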
The failures here are due to mocking: if a mocked Hadoop FS lacks the new bulk delete API, hadoop-common's attempt to invoke it will fail to link even though...
@danielcweeks

> I feel like in HadoopFileIO we can just use reflection to check if the bulk methods exist and use bulk delete if supported. I'm not sure we need...
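For illustration only, the kind of reflection probe being proposed might look like this (a sketch, not the PR's code; the class name is a stand-in):

```java
import java.lang.reflect.Method;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

final class BulkDeleteSupport {

  /** True iff hadoop-common on the classpath has the 3.4.1+ bulk delete API. */
  static boolean isAvailable() {
    try {
      Method probe = FileSystem.class.getMethod("createBulkDelete", Path.class);
      return probe != null;
    } catch (NoSuchMethodException e) {
      // Older hadoop-common: fall back to one-by-one deletes.
      return false;
    }
  }
}
```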
yeah, classpaths are a mess in places, primarily due to downstream things:
* spark pulling in 3.3.6. These need to be excluded and then the new versions pulled in.
* hive-metastore...
> The fallback logic is unnecessary complexity, we should just rely on the bulk delete if available and log/throw if individual deletes fail

> There are a lot of comments...
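A hedged sketch of what "log/throw instead of falling back" could look like; the helper name and exception type are stand-ins, since Iceberg's real code paths have their own failure-handling conventions:

```java
import java.util.List;
import java.util.Map;

import org.apache.hadoop.fs.Path;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

final class DeleteFailureHandling {
  private static final Logger LOG =
      LoggerFactory.getLogger(DeleteFailureHandling.class);

  /** Log each path the store refused to delete, then throw once. */
  static void logAndThrow(List<Map.Entry<Path, String>> failures) {
    if (failures.isEmpty()) {
      return;
    }
    failures.forEach(e ->
        LOG.warn("Failed to delete {}: {}", e.getKey(), e.getValue()));
    throw new RuntimeException(
        failures.size() + " file(s) could not be bulk deleted");
  }
}
```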
@danielcweeks there's too much patch history in there for the chained patch application to handle... it's complaining about a file which was later deleted. Proposed: squash all but the changes...
not sure what's up with flink; rebasing and testing locally to make sure it is unrelated to my PR.