Cheng Pan
It's by design: the engine (in your case, the Spark app running on YARN) and the server manage their lifecycles independently, so in this case you should kill the app manually.
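For the YARN case above, killing the app manually can be done with the standard YARN CLI (a sketch; the application ID below is a placeholder):

```shell
# Find the Spark application's ID among running apps.
yarn application -list -appStates RUNNING

# Kill it by ID (placeholder shown).
yarn application -kill application_1700000000000_0001
```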
> Is everyone with a risc-v setup able to test this? To reviewers, https://github.com/apache/hadoop/pull/7924 may help you set up a dev box on an x86 or aarch64 platform by leveraging...
Actually, I found that nearly all tests in `parquet-hadoop` started to fail since Hadoop 3.3.5, and the errors are very similar: ``` closeAllocator:175 » LeakedByteBuffer 4 ByteBuffer object(s) is/are remained unreleased...
> Tested by running locally. Hi @muskan1012, thanks for working on this area. As the CI workflow is not ready for Java 17 yet, please describe how you tested it locally...
The issue is unrelated to Kyuubi; you would see the same behavior with `spark-submit` in cluster mode. The root cause is that Iceberg does not implement the `HadoopDelegationTokenProvider`, so...
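Since Iceberg ships no `HadoopDelegationTokenProvider`, one common workaround is to let the driver log in from a keytab instead of relying on delegation tokens. A minimal config sketch (the principal and keytab path are placeholders, not values from the original thread):

```properties
# spark-defaults.conf sketch: driver performs its own Kerberos login
# rather than depending on pre-obtained delegation tokens.
spark.kerberos.principal  user@EXAMPLE.COM
spark.kerberos.keytab     /path/to/user.keytab
```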
> ... kyuubi cannot query the iceberg table of the cluster When you say something does not work, provide the concrete configuration and stack trace; otherwise, you should not expect active...
I just noticed you are using client mode; the KSHC workaround only takes effect in cluster mode. The key points:
- `hive.metastore.uris` should point to your local HMS
- ...
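A minimal sketch of the first point for client mode (the host and port are assumptions; 9083 is only the conventional HMS thrift port):

```properties
# Point Spark's Hive client at the local HMS (assumed to listen on the default port).
spark.hadoop.hive.metastore.uris  thrift://localhost:9083
```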
@ayushtkn thanks for checking! BTW, are you familiar with the Hadoop CI infra? I'm stuck setting up Java 17 tests; any help is appreciated: https://github.com/apache/hadoop/pull/6914
@steveloughran https://github.com/apache/hadoop/pull/6930 has been opened for the 3.4 backport
It's by design