meatheadmike
Same here :(
This is still broken. Does the gogogate integration work for anyone at this point, or is it just abandoned?
I'm seeing this with Flink using the Hive metastore for locking as well. The worst part is that it seems to have wiped my table in the process!
Yes, this seems to happen with multiple writers. I do specify the lock config:
```
'compaction.delta_commits'='2',
'connector' = 'hudi',
'hive_sync.db'='{hudi_db}',
'hive_sync.enable'='true',
'hive_sync.jdbcurl'='{hive_jdbc_url}',
'hive_sync.metastore.uris'='{hive_thrift_url}',
'hive_sync.mode'='hms',
'hive_sync.password'='',
'hive_sync.table'='{hudi_table}',
'hive_sync.username'='hive',
'hoodie.cleaner.policy.failed.writes'...
```
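For reference, the multi-writer lock options I'd expect alongside that look roughly like the sketch below (the Flink connector passes `hoodie.*` settings through); the database and table names are placeholders, not values from my actual config.
```
-- Hedged sketch: typical optimistic-concurrency lock options for Hudi
-- multi-writer with the Hive metastore lock provider. Names are placeholders.
'hoodie.cleaner.policy.failed.writes'='LAZY',
'hoodie.write.concurrency.mode'='optimistic_concurrency_control',
'hoodie.write.lock.provider'='org.apache.hudi.hive.transaction.lock.HiveMetastoreBasedLockProvider',
'hoodie.write.lock.hivemetastore.database'='{hudi_db}',
'hoodie.write.lock.hivemetastore.table'='{hudi_table}'
```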
Certainly:
```bash
spark.broadcast.compress true
spark.checkpoint.compress true
#spark.driver.log.dfsDir s3a://XXXXXXXX
#spark.driver.log.persistToDfs.enabled true
spark.driver.maxResultSize 2g
spark.dynamicAllocation.shuffleTracking.enabled true
spark.eventLog.compress true
spark.eventLog.compression.codec snappy
#spark.eventLog.dir s3a://XXXXXXXX
spark.eventLog.enabled false
spark.eventLog.rolling.enabled true
spark.eventLog.rolling.maxFileSize 20m
spark.executor.memoryOverhead 2g
spark.hadoop.fs.s3a.aws.credentials.provider com.amazonaws.auth.WebIdentityTokenCredentialsProvider
...
```
I should mention that I turned on debug logging in Spark. The actual call to the Nessie server does not appear to contain the bearer token. When I attempt the...
You are correct. It works when I set `spark.sql.catalog.iceberg.token`. BUT - I have to set BOTH `spark.sql.catalog.iceberg.token` and `spark.sql.catalog.iceberg.authentication.token` or I get an error message!
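In other words, the working setup ends up with the token configured twice for the same catalog, roughly like the sketch below (the catalog name `iceberg` matches my config; the token value is obviously a placeholder):
```bash
# Hedged sketch: both token properties set for the same catalog, since
# setting only one of them produced an error for me. Token is a placeholder.
spark.sql.catalog.iceberg.token XXXXXXXX
spark.sql.catalog.iceberg.authentication.token XXXXXXXX
```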
This is what I thought initially too. So then I deployed a bone-stock Spark image (i.e. the Dockerhub image published by Apache) and used the `--packages` flag:
```
export SPARK_CONF_DIR=/opt/spark/conf...
```
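Roughly, the launch looked like the sketch below; the package coordinate is a stand-in (`GROUP:ARTIFACT:VERSION`), not the actual bundle from my command, since that part was cut off above.
```bash
# Hedged sketch: stock apache/spark image, pointing SPARK_CONF_DIR at the
# mounted config and resolving the bundle at launch via --packages.
export SPARK_CONF_DIR=/opt/spark/conf
/opt/spark/bin/spark-sql \
  --packages GROUP:ARTIFACT:VERSION
```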
UPDATE: This appears to be a regression with 1.0.0-beta2. I tried spinning up a container with Spark 3.4 to see if downgrading Spark would help. No dice. I got the...
I can certainly attempt partitioning again, but doesn't that just exacerbate the file group problem? My last attempt at partitioning made the batches take waaaaaaaay too long. If the writing...