shubhamn21
@ad1happy2go I did try increasing the connection maximum to 300 and thread.max to 30, but it still failed with the same error.
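For anyone following along, the pool sizes mentioned above are usually tuned through the Hadoop S3A client properties; the exact property names below are an assumption on my part, since the comment only says "connection maximum" and "thread.max":

```properties
# Hedged sketch (spark-defaults.conf fragment), assuming the S3A filesystem:
# property names are from hadoop-aws and may differ for other S3 connectors.
spark.hadoop.fs.s3a.connection.maximum=300
spark.hadoop.fs.s3a.threads.max=30
```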
I also see subsequent rollback failures causing the job to fail. The job often halts after the failure warning for the clean operation. I tried with different S3 buckets and also...
```
23/12/04 08:00:23 WARN CleanActionExecutor: Failed to perform previous clean operation, instant: [==>20231204075005981__clean__INFLIGHT]
java.lang.IllegalArgumentException
	at org.apache.hudi.common.util.ValidationUtils.checkArgument(ValidationUtils.java:31)
```
Hi @nsivabalan, tagging you here as I had seen you as an assignee for...
Disabling the automatic clean action by setting `hoodie.clean.automatic` to `false` has helped for now. I'll create a daemon/cron job that can clean the cold metadata in parallel but not touch the...
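For anyone landing here later, a minimal sketch of what the writer-side options might look like with inline cleaning turned off. Only `hoodie.clean.automatic` comes from this thread; the table name, key fields, and helper function are hypothetical placeholders:

```python
# Hedged sketch: Hudi writer options with automatic (inline) cleaning disabled.
# Only hoodie.clean.automatic is taken from this thread; everything else
# (table name, record/precombine keys) is a hypothetical placeholder.
def hudi_write_options(table_name, record_key, precombine_key):
    return {
        "hoodie.table.name": table_name,
        "hoodie.datasource.write.recordkey.field": record_key,
        "hoodie.datasource.write.precombine.field": precombine_key,
        # Stop the writer from running the clean action inline; cleaning is
        # then expected to run out-of-band (e.g. a separate cron-driven job).
        "hoodie.clean.automatic": "false",
    }

opts = hudi_write_options("events", "event_id", "ts")
# Usage with a Spark DataFrame writer would look roughly like:
#   df.write.format("hudi").options(**opts).mode("append").save(base_path)
```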
Hi @ad1happy2go, thanks for responding. Yes, I did have multiple executors writing to the table at one point. But I recently limited my deployments to one executor since I realized...
Thank you! Closing this issue - I'll set up Hive Metastore to avoid these issues.