Aditya Goenka
@smileyboy2019 Closing this issue, as the stack trace points to a Spark version compatibility issue. Please reopen if you still see the issue.
@zyclove Also, is there any reason you are setting DataSourceReadOptions.EXTRACT_PARTITION_VALUES_FROM_PARTITION_PATH? This config extracts the partition values from the physical partition path.
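To illustrate what that config does conceptually, here is a plain-Python sketch (not Hudi's implementation) of deriving partition column values from a Hive-style partition path. The function name and sample path are made up for illustration:

```python
def extract_partition_values(partition_path: str) -> dict:
    """Parse a Hive-style partition path like 'region=us/dt=2024-01-01'
    into a dict of partition column -> value."""
    values = {}
    for segment in partition_path.strip("/").split("/"):
        # Split only on the first '=' so values may contain dashes etc.
        key, _, value = segment.partition("=")
        values[key] = value
    return values

print(extract_partition_values("region=us/dt=2024-01-01"))
# {'region': 'us', 'dt': '2024-01-01'}
```

With the config enabled, Hudi fills those columns from the path instead of reading them out of the data files, which is why it matters whether your physical layout actually encodes the values you expect.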
@zhangjw123321 The number of input partitions/Spark tasks is derived from the input dataset. How many tasks are being created, and what is the nature of the source dataset?
@zhangjw123321 This config sets the number of partitions after the shuffle stage (mainly used with a sort mode or custom partitioning, as mentioned in the docs). Can you share the Spark DAG...
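For reference, these are the Hudi write configs that typically control post-shuffle parallelism; the values below are placeholders, so check the configuration docs for your Hudi version before copying them:

```properties
# Illustrative values only -- tune to your data volume and cluster size
hoodie.bulkinsert.shuffle.parallelism=200
hoodie.insert.shuffle.parallelism=200
hoodie.upsert.shuffle.parallelism=200
```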
@zhangjw123321 It's spending time deduplicating records. Bulk insert does not dedup with the default configs. Are you setting any other configs?
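As background on the dedup behavior mentioned above, here is a plain-Python sketch (not Hudi code) of the combine-before-insert semantic: keep one record per record key, choosing the row with the largest precombine value. The field names (`id`, `ts`, `v`) are illustrative:

```python
def combine_before_insert(records, key_field="id", precombine_field="ts"):
    """Keep one record per key, preferring the highest precombine value."""
    best = {}
    for rec in records:
        key = rec[key_field]
        if key not in best or rec[precombine_field] > best[key][precombine_field]:
            best[key] = rec
    return list(best.values())

rows = [
    {"id": 1, "ts": 10, "v": "a"},
    {"id": 1, "ts": 20, "v": "b"},  # later ts wins for id=1
    {"id": 2, "ts": 5,  "v": "c"},
]
print(combine_before_insert(rows))
```

If a job that should be a plain bulk insert shows a dedup/combine step in its DAG, some config is opting it into this behavior.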
@Amar1404 Can you give more details, like the table/writer configurations you are using? I tried a simple scenario, and schema evolution from long to double works fine. ``` schema1 = StructType(...
Thanks for the details. I will check and try to triage it.
@Amar1404 Sorry for the delay here; I was OOO. Can you ping me on Slack so we can work on this together?
@lei-su-awx I tried this code with 0.14.1 and it worked fine. With 0.14.0 I can see the error. @lei-su-awx @Amar1404 Can you guys try with 0.14.1 and let me know...
@qidian99 Let us know in case you face any issues while trying this. Feel free to close this issue if it worked. Thanks.