How to make data transfer faster when using databricks-sql-go
I am using the driver to run a data migration. When dealing with a table of 24 million rows and 9 columns, the performance is excellent when I fetch 10 thousand rows. When I increase the fetch size to 100 thousand rows, the transfer speed is still good. However, when fetching 1 million rows or more, the data transfer becomes very slow. A quick test made me think that the driver tries to fetch all the data in a single batch. Is there a way to improve this process?
This is the connection string:
"token:xxx@host:443$xxx-path?catalog=sample&database=big_table&useCloudFetch=true&maxRows=10000"
I've tried these two settings, but data transfer is still slow for big tables.
- useCloudFetch=true
- maxRows=10000
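For context, a minimal sketch of how such a DSN is assembled and passed to `database/sql`. The parameter names (`useCloudFetch`, `maxRows`) are the ones from the thread; the host, token, and HTTP path are placeholders, and `buildDSN` is a hypothetical helper, not part of the driver:

```go
package main

import (
	"fmt"
	// In a real program the driver is registered via a blank import:
	//   _ "github.com/databricks/databricks-sql-go"
	// and the DSN is passed to sql.Open("databricks", dsn).
)

// buildDSN assembles a databricks-sql-go connection string from its parts.
// Replace token, host, and httpPath with your workspace's values.
func buildDSN(token, host, httpPath string, maxRows int, cloudFetch bool) string {
	return fmt.Sprintf("token:%s@%s:443/%s?useCloudFetch=%t&maxRows=%d",
		token, host, httpPath, cloudFetch, maxRows)
}

func main() {
	dsn := buildDSN("xxx", "host", "xxx-path", 10000, false)
	fmt.Println(dsn)
	// db, err := sql.Open("databricks", dsn)
	// ...then query with db.QueryContext and stream the rows.
}
```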
In forums, users suggest changing spark.driver.maxResultSize?
@calebeaires Considering you enabled CloudFetch, your assumption is most likely correct: the driver actually tries to download all the data into memory. That's what we discovered recently and are trying to fix right now - databricks/databricks-sql-go#234. Try disabling CloudFetch; this has helped some other users. Then you can play with maxRows to see whether it has any effect in your case.
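Independently of the driver settings, the client side can at least avoid holding the whole result set at once by processing rows in fixed-size batches as they stream in. Below is a sketch of that pattern; `flushInBatches` is a hypothetical helper (in real code the channel would be fed by `sql.Rows` via `rows.Next()`/`rows.Scan`):

```go
package main

import "fmt"

// flushInBatches accumulates rows up to batchSize, hands each full batch to
// sink, then reuses the buffer, so memory stays bounded by one batch.
// Note: sink must copy the batch if it retains it past the call, because
// the backing array is reused.
func flushInBatches(rows <-chan []any, batchSize int, sink func(batch [][]any) error) error {
	batch := make([][]any, 0, batchSize)
	for row := range rows {
		batch = append(batch, row)
		if len(batch) == batchSize {
			if err := sink(batch); err != nil {
				return err
			}
			batch = batch[:0]
		}
	}
	if len(batch) > 0 {
		return sink(batch) // final partial batch
	}
	return nil
}

func main() {
	// Simulate 25 streamed rows; in practice these come from the driver.
	rows := make(chan []any)
	go func() {
		for i := 0; i < 25; i++ {
			rows <- []any{i}
		}
		close(rows)
	}()
	batches := 0
	_ = flushInBatches(rows, 10, func(b [][]any) error {
		batches++
		fmt.Println("flushed", len(b), "rows")
		return nil
	})
	fmt.Println("total batches:", batches)
}
```

This keeps the migration's memory footprint proportional to one batch (here 10 rows) rather than the full 24 million.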
@kravets-levko I've changed the settings to useCloudFetch=false and I'm still getting errors. Sometimes this error comes up:
execution error: failed to execute query: unexpected operation state ERROR_STATE: Total size of serialized results of 382 tasks (4.0 GiB) is bigger than spark.driver.maxResultSize 4.0 GiB. Code: 108
Topic: Spark Driver maxResultSize exceeds 4.0 GiB
Hope we find a solution!
@calebeaires for this particular error, please reach out to an administrator of your workspace, or to Databricks support. This error is related to your workspace configuration, which is not something that can be handled in the library.
Thank you so much. Please help me explain the issue to the administrator. Is there some config the admin can set concerning the maximum result size?
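For reference, the limit in the error message corresponds to the spark.driver.maxResultSize Spark property mentioned earlier in the thread. If the administrator agrees, it can be raised in the cluster's Spark configuration; the value below is only an illustration, not a recommendation:

```
spark.driver.maxResultSize 8g
```

Note that raising this limit increases driver memory pressure, so the admin may also need to size the driver accordingly.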