maheshguptags
@xicm Yes, I agree, but it would not affect it by 10 times. Let's say I have 100 partitions and each partition has 10 sub-partitions with 16 buckets; then the total...
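For what it's worth, the file-group count implied by those numbers multiplies out directly, since each (partition, sub-partition) pair gets its own set of bucket file groups. A quick sketch (the counts are the ones quoted above; the variable names are mine):

```python
# Each (partition, sub-partition) pair carries its own full set of buckets,
# and with the bucket index each bucket maps to one file group.
partitions = 100
sub_partitions = 10
buckets = 16

total_file_groups = partitions * sub_partitions * buckets
print(total_file_groups)  # 16000
```

So even a modest bucket count fans out quickly once there is a second partition level.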
@xicm Let me reduce the number of buckets and test with the same number of records to check the processing time. Can you tell me how to check the number of file groups?
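One rough way to check this without the Hudi CLI: since Hudi base files are named `<fileId>_<writeToken>_<instantTime>.parquet`, counting the distinct `fileId` prefixes in a partition directory approximates the file-group count for that partition. A minimal sketch (assumes a locally readable path and base files only; log-only file groups in an MOR table would be missed):

```python
import os

def count_file_groups(partition_path):
    """Approximate the number of Hudi file groups in one partition directory.

    Base files are named <fileId>_<writeToken>_<instantTime>.parquet, so the
    distinct fileId prefixes give the file-group count for base files.
    `partition_path` is a hypothetical local path, for illustration only.
    """
    file_ids = set()
    for name in os.listdir(partition_path):
        if name.endswith(".parquet"):
            file_ids.add(name.split("_")[0])
    return len(file_ids)
```

Summing this over the partitions touched by a commit gives a ballpark for the file groups that commit wrote.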
@xicm I reduced the number of buckets (it makes sense to reduce the bucket count since we have a second-level partition), but it is still taking 45-50 min to execute...
Yes, I am trying to test different combinations of bucket numbers.
Hi @xicm, I tried the below combinations with the same number of records. Please find the details related to file groups below. After testing several times, I noticed that the 8,4 bucket combination looks good...
I already have 20 tasks writing the data; please check the screenshot below. Do you want me to increase it further?
Yes, it is 20. It starts from 0 and ends at 19.
@xicm Let me try to increase the number of write tasks for the load and test the performance. Is there a way to control the number of file groups for a particular...
@danny0405 I am asking about the number of file groups added for a particular commit. I am already using the bucket index. The number of file groups is more than 2000 for a commit.
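For reference, with the bucket index the file-group count per partition is pinned to the bucket count, so the file groups touched by a commit are roughly (partitions written in that commit) × (buckets). A minimal sketch of the relevant options, assuming a Flink SQL writer (the table name, columns, and path are hypothetical; the option keys are the ones I understand from the Hudi Flink docs):

```sql
-- Sketch only: bucket index with 8 buckets per partition and 20 write tasks.
CREATE TABLE hudi_sink (
  id STRING PRIMARY KEY NOT ENFORCED,
  ts TIMESTAMP(3),
  part STRING
) PARTITIONED BY (part) WITH (
  'connector' = 'hudi',
  'path' = 'file:///tmp/hudi_sink',
  'index.type' = 'BUCKET',
  'hoodie.bucket.index.num.buckets' = '8',
  'write.tasks' = '20'
);
```

So lowering the bucket count, or writing fewer partitions per commit, are the two levers that reduce the file groups a single commit adds.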
Hi @xicm and @danny0405, I tried to increase the parallelism as @xicm suggested, but it is trying to consume the data in a single commit, i.e. it accumulates the data...
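One possible reason the data piles into one commit: with the Flink writer, Hudi commits once per checkpoint, so the checkpoint interval effectively caps how much data each commit accumulates. A minimal sketch for the Flink SQL client (the 60 s value is illustrative, not a recommendation):

```sql
-- Commit roughly every 60 s instead of letting one commit absorb everything.
SET 'execution.checkpointing.interval' = '60 s';
```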