xiaohu-liu

Results: 9 comments by xiaohu-liu

It seems like a bug. If `alluxio.proxy.s3.v2.version.enabled` and `alluxio.proxy.s3.v2.async.processing.enabled` are set to `false`, the proxy REST API functionality can be restored. You can give it a try. @TCGOGOGO
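A minimal sketch of the corresponding entries in `alluxio-site.properties` (the property names are taken from the comment above; the file's location depends on your installation):

```
alluxio.proxy.s3.v2.version.enabled=false
alluxio.proxy.s3.v2.async.processing.enabled=false
```

Restart the proxy process after changing these so the settings take effect.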

`org/apache/flink/runtime/fs/hdfs/HadoopRecoverableWriter.java` — as we can see, the code in Flink looks like the following:

```java
public HadoopRecoverableWriter(org.apache.hadoop.fs.FileSystem fs) {
    this.fs = checkNotNull(fs);
    // This writer is only supported on a subset...
}
```
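The truncated comment above hints at the constraint: Flink's recoverable writer only accepts file systems it recognizes as HDFS-like. The following is an illustrative sketch of that kind of scheme check, not Flink's actual source; the class and method names here are invented for the example:

```java
public class SchemeCheckSketch {
    // Hypothetical helper (not Flink's API): mirrors the kind of URI-scheme
    // gate a writer can apply to the wrapped Hadoop FileSystem.
    public static boolean isSupportedScheme(String scheme) {
        return "hdfs".equalsIgnoreCase(scheme);
    }

    public static void main(String[] args) {
        System.out.println(isSupportedScheme("hdfs")); // accepted
        System.out.println(isSupportedScheme("s3a"));  // rejected
    }
}
```

This is why exposing the UFS through an HDFS-compatible scheme matters for Flink compatibility.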

It doesn't seem to be a bug in Alluxio, but in order to be compatible with Flink, Alluxio needs to make some internal adaptations.

In the Alluxio namespace, Alluxio can enable this support as long as the UFS file system is compatible with the HDFS protocol.

Have you put `alluxio-client-2.9.3.jar` into the `lib` directory of the Hive installation? @jiang320

And then you should restart the Hive metastore service and the HiveServer2 service, one after the other.
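A sketch of the two steps above as a deployment fragment. The paths and environment variables (`ALLUXIO_HOME`, `HIVE_HOME`) are assumptions; adjust them to your installation:

```shell
# Copy the Alluxio client jar into Hive's lib directory (path is hypothetical).
cp "${ALLUXIO_HOME}/client/alluxio-client-2.9.3.jar" "${HIVE_HOME}/lib/"

# Restart the metastore first, then HiveServer2.
hive --service metastore &
hive --service hiveserver2 &
```

The order matters because HiveServer2 depends on the metastore being available.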

It seems that the JVM versions are incompatible. What JVM version are you using to run Hive? First, unify the JVM version used to compile Alluxio with the one used to...

Check this PR; it may resolve the issue: https://github.com/Alluxio/alluxio/pull/18266/files

You can also configure the `URIStatus` with Kryo serialization and try again.