Sudhir Saxena

15 comments by Sudhir Saxena

The tgt_db value is the same, datalake_dev1_entp_cds, which I am using for both the insert and upsert operations. Currently I am facing another issue, `Caused by: java.lang.StackOverflowError`, so I have increased...
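For reference, one common way to address a JVM `StackOverflowError` in a Spark job is to enlarge the thread stack size via `-Xss`. A minimal sketch of what "increased" could look like; the 16m value is purely an illustrative assumption, not the setting actually used in this job:

```python
# Sketch: enlarge the JVM thread stack to work around StackOverflowError.
# The -Xss value (16m) is an assumption; tune it for the actual workload.
spark_conf = {
    "spark.driver.extraJavaOptions": "-Xss16m",
    "spark.executor.extraJavaOptions": "-Xss16m",
}

# These would typically be applied when building the SparkSession:
#   builder = SparkSession.builder
#   for key, value in spark_conf.items():
#       builder = builder.config(key, value)
```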

Hi @soumilshah1995, yes, we are currently using the Bloom index (Hudi configuration given below) on a lower Hudi version (0.10), and it works but with somewhat slow performance, so we are trying to use...
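For context, a rough sketch of the two index configurations being compared. The option keys below are from the Hudi configuration reference (the Record Level Index requires the metadata table and, as far as I know, Hudi 0.14+); treat the exact values as assumptions rather than the job's real config:

```python
# Current setup on Hudi 0.10: Bloom index (working, but slow for this workload).
bloom_index_opts = {
    "hoodie.index.type": "BLOOM",
}

# Target setup: Record Level Index, which is backed by the metadata table,
# so the metadata table and the record index partition must both be enabled.
record_index_opts = {
    "hoodie.index.type": "RECORD_INDEX",
    "hoodie.metadata.enable": "true",
    "hoodie.metadata.record.index.enable": "true",
}
```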

I have made some configuration changes, so the job is no longer failing on **hoodie.database.name: datalake_dev1_entp_cds**, but now the job runs for a long time and never completes. It gets stuck while trying to...

Hi @soumilshah1995, I am not using any existing Hudi table to transition from the Bloom index to the Record Level Index. Since it's a new version, I have to test...

So you mean I shouldn't set the bulk_insert operation option (hudi_operation = "bulk_insert") in the job, and should just run it against a fresh table to check? **One question:** for testing upsert only for new...
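For clarity, the two write operations being discussed, expressed as Hudi write options. This is only a sketch of how the `hudi_operation` variable would map onto the datasource option; the rest of the job's configuration is omitted:

```python
# Initial load into a fresh table: bulk_insert skips index lookups entirely.
bulk_insert_opts = {
    "hoodie.datasource.write.operation": "bulk_insert",
}

# Subsequent incremental writes: upsert, which exercises the index
# (and is what the Record Level Index test is really about).
upsert_opts = {
    "hoodie.datasource.write.operation": "upsert",
}
```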

@soumilshah1995 @ad1happy2go I ran this job as an **upsert operation** for the first time to load the data into a fresh Hudi table, and it completed in 7 minutes (screenshot attached). I have verified...

@soumilshah1995 @ad1happy2go @nsivabalan Just sharing hoodie.properties and metadata/hoodie.properties. I see that by default it takes **hoodie.table.type=MERGE_ON_READ** in .hoodie/metadata/.hoodie/hoodie.properties and hoodie.table.type=COPY_ON_WRITE in .hoodie/hoodie.properties. Please have a look and let me know if there are any changes...
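As I understand it, the internal metadata table under `.hoodie/metadata` is MERGE_ON_READ by design, independent of the data table's own type, so that split between the two hoodie.properties files may well be expected. To pin the data table's type explicitly anyway, the write option would look roughly like this (a sketch, not the job's actual config):

```python
# Pin the data table type explicitly; the internal metadata table's own
# MERGE_ON_READ type is managed by Hudi and is not affected by this option.
hudi_options = {
    "hoodie.datasource.write.table.type": "COPY_ON_WRITE",
}
```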

Hi @ad1happy2go, @soumilshah1995, @nsivabalan, I am trying to see where the job gets stuck. I see that the driver, shown in the Executor ID summary (screenshot below), has been running for more...

Where can I find the driver logs? I don't see any option to retrieve them. While checking all the logs, I noticed that 9 executors were created, but...

@ad1happy2go, it's printing the same output I posted above, even after 2-3 hours: creating tasks and finishing tasks, but there is no actual execution happening, and there is no...