AoShen
I want to do some computation with the `key_point` rather than just observing it. Any advice on how to get the information in a form I can actually use? Thanks a lot!
I’m using the `ZIO-HTTP` framework, and I found that the `ZIO-AMQP` `Connection` is a scoped resource, so it is only usable while its `Scope` is open. Suppose I want a single global connection for the whole lifetime of the program after it starts,...
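One common way to get an application-wide connection is to wrap the scoped resource in a `ZLayer.scoped`, so the connection is acquired once at startup and released at shutdown. A minimal sketch, assuming zio-amqp exposes an `Amqp.connect(factory)` returning `ZIO[Scope, Throwable, Connection]` (package and method names may differ across versions, so treat the imports as an assumption):

```scala
import zio._
import zio.amqp._                          // assumed package for zio-amqp
import com.rabbitmq.client.ConnectionFactory

object AmqpLayer {
  // Hypothetical broker URI; replace with your own settings.
  private val factory: ConnectionFactory = {
    val f = new ConnectionFactory()
    f.setUri("amqp://guest:guest@localhost:5672")
    f
  }

  // ZLayer.scoped ties the connection's Scope to the layer's lifetime,
  // so one connection is opened at startup and closed when the app exits.
  val live: ZLayer[Any, Throwable, Connection] =
    ZLayer.scoped(Amqp.connect(factory))   // Amqp.connect is assumed API
}

object Main extends ZIOAppDefault {
  // Any effect that needs the connection asks for it from the environment.
  val program: ZIO[Connection, Throwable, Unit] =
    ZIO.serviceWithZIO[Connection] { conn =>
      ZIO.unit // use `conn` here, e.g. to open channels
    }

  // Providing the layer once makes the same connection shared everywhere.
  override def run = program.provide(AmqpLayer.live)
}
```

With this shape, every part of the program that declares `Connection` in its environment gets the same shared instance, instead of each call site opening and closing its own scope.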
My Hadoop warehouse is s3a://XXXXXX, and I add `--source-catalog-hadoop-conf fs.s3a.access.key=$AWS_ACCESS_KEY_ID,fs.s3a.secret.key=$AWS_SECRET_ACCESS_KEY,fs.s3a.endpoint=$AWS_S3_ENDPOINT`. It then fails with:
```
com.amazonaws.AmazonClientException: Unable to unmarshall response (Failed to parse XML document with handler class com.amazonaws.services.s3.model.transform.XmlResponsesSaxParser$ListBucketHandler)....
```
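For reference, a minimal sketch of applying the same S3A settings through Spark's Hadoop configuration. The `fs.s3a.path.style.access` line is an assumption, not something from the post above: it is frequently needed when `fs.s3a.endpoint` points at a non-AWS, S3-compatible store and the bucket listing response cannot be parsed otherwise.

```scala
import org.apache.spark.sql.SparkSession

// Spark session only for illustration; in the CLI tool these would go
// through --source-catalog-hadoop-conf instead.
val spark = SparkSession.builder()
  .appName("s3a-config-sketch")
  .master("local[*]")
  .getOrCreate()

val hadoopConf = spark.sparkContext.hadoopConfiguration
hadoopConf.set("fs.s3a.access.key", sys.env("AWS_ACCESS_KEY_ID"))
hadoopConf.set("fs.s3a.secret.key", sys.env("AWS_SECRET_ACCESS_KEY"))
hadoopConf.set("fs.s3a.endpoint", sys.env("AWS_S3_ENDPOINT"))
// Assumption: custom endpoints usually need path-style bucket access.
hadoopConf.set("fs.s3a.path.style.access", "true")
```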
### Issue description

```
16:39:33.385 ERROR o.a.s.s.e.d.v2.AppendDataExec : Data source write support IcebergBatchWrite(table=xxxxxxxx, format=PARQUET) aborted.
org.apache.spark.SparkException: Writing job aborted
	at org.apache.spark.sql.errors.QueryExecutionErrors$.writingJobAbortedError(QueryExecutionErrors.scala:767)
	at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2(WriteToDataSourceV2Exec.scala:409)
	at org.apache.spark.sql.execution.datasources.v2.V2TableWriteExec.writeWithV2$(WriteToDataSourceV2Exec.scala:353)
	at org.apache.spark.sql.execution.datasources.v2.AppendDataExec.writeWithV2(WriteToDataSourceV2Exec.scala:244)
	at org.apache.spark.sql.execution.datasources.v2.V2ExistingTableWriteExec.run(WriteToDataSourceV2Exec.scala:332)
	at...
```