
Querying large amounts of data without creating a destination table

Mikerah opened this issue Sep 30 '19 · 11 comments

I'm trying to load data from BigQuery into a data frame. This is a lot of data, initially requiring the use of allowLargeResults. However, every time I query, I use resources that I shouldn't have to be using. Is there a way to query large amounts of data from BigQuery without having to create a destination table every time and just store the result in a data frame?

I am using a jupyter notebook on an AWS instance, if this matters.

Mikerah avatar Sep 30 '19 16:09 Mikerah

Hi @Mikerah, I recommend you try out the use_bqstorage_api=True option, which uses the BigQuery Storage API to download results (currently in Beta). Creating a destination table shouldn't be required.

See: https://pandas-gbq.readthedocs.io/en/latest/reading.html#improving-download-performance for the full instructions.
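
For example, something along these lines should work (the query, dataset, and project ID below are placeholders):

import pandas_gbq

# Placeholder query and billing project; with use_bqstorage_api=True the
# BigQuery Storage API streams the results instead of requiring you to
# manage a destination table yourself.
data_frame = pandas_gbq.read_gbq(
    'SELECT * FROM `your_dataset.your_table`',
    project_id='your-project-id',
    use_bqstorage_api=True,
)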

Closing for now, but let me know if you have any questions or this doesn't work.

tswast avatar Oct 08 '19 23:10 tswast

Are you using legacy SQL? https://cloud.google.com/bigquery/docs/writing-results#limitations

The workarounds I suggest are:

  • Use standard SQL dialect (recommended)
  • OR create a dataset with a default table expiration of a few hours. That way, when you write destination tables to it, they are only kept around for a limited time (a minimal sketch follows below).
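
A minimal sketch of the second option with the google-cloud-bigquery client (the project and dataset IDs are placeholders):

from google.cloud import bigquery

client = bigquery.Client(project="your-project-id")

# Placeholder dataset to hold query result tables; anything written here
# is deleted automatically once the default expiration passes.
dataset = bigquery.Dataset("your-project-id.query_results")
dataset.default_table_expiration_ms = 6 * 60 * 60 * 1000  # 6 hours
client.create_dataset(dataset, exists_ok=True)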

tswast avatar Oct 09 '19 16:10 tswast

Hi @tswast,

I am executing a query that should be returning ~150GB of data and I am getting the following error even when using "use_bqstorage_api=True" in the pandas "read_gbq" function.

Error: "google.api_core.exceptions.Forbidden: 403 Response too large to return. Consider specifying a destination table in your job configuration. For more details, see https://cloud.google.com/bigquery/troubleshooting-errors"

I believe this is failing in the query stage since I am not specifying a destination table to save the data to. From this issue ticket, it seems like I should not have to specify a destination table if I am using the BQ storage API. Is that true?

If this method does not work, how would you suggest executing python/pandas queries that can regularly expect large amounts of data returned? Is there a way to change BQ settings to allow large temporary tables to be created? My automated workflow will return results of variable size, so always creating tables to read data from seems like a lot of extra work if I'm unsure the result size requires it.

Thanks!

Edit: I'm actually surprised I ran into this issue, because I believe some of the other code I have written should have already exceeded this limit. Maybe it's possible there was a lot of repeat data in that query response and the total compressed size was under the 10GB limit? I am not sure. See this for reference: https://github.com/googleapis/python-bigquery/issues/1252

jlynchMicron avatar Jul 12 '22 19:07 jlynchMicron

@jlynchMicron I think you are encountering a different error from the one described in https://github.com/googleapis/python-bigquery/issues/1252

I'm not seeing a relevant limit listed here https://cloud.google.com/bigquery/quotas#query_jobs but you might be returning too many results to write to an anonymous results table. https://cloud.google.com/bigquery/docs/cached-results

Are you doing a SELECT * query? If so, you can work around this by reading from the table directly.

import pandas_gbq

# Read the table directly instead of running a SELECT * query against it.
data_frame = pandas_gbq.read_gbq(
    'data-project-id.dataset_id.table_id',
    project_id='billing-project-id',
)

If not, as a workaround, you can specify a destination table with the configuration argument. https://pandas-gbq.readthedocs.io/en/latest/reading.html#advanced-configuration

import pandas_gbq

configuration = {
    'query': {  # https://cloud.google.com/bigquery/docs/reference/rest/v2/Job#jobconfigurationquery
        'destinationTable': {
            # https://cloud.google.com/bigquery/docs/reference/rest/v2/TableReference
            "projectId": "destination-project-id",
            "datasetId": "destination_dataset_id",
            "tableId": "destination_table_id",
        }
    }
}
data_frame = pandas_gbq.read_gbq(
    'SELECT * FROM `test_dataset.test_table` WHERE stuff > 3',
    project_id='billing-project-id',
    configuration=configuration,
)

tswast avatar Jul 13 '22 16:07 tswast

Hi @tswast

You are right it is a different error, I was just showing an example of the amount of data I could expect to receive from a past issue ticket.

As for your suggestion, is there a way for this python code to run a test query beforehand to know if a destination table will need to be created or not? The way I have query code setup, it is basically an SQLalchemy query wrapper so it can have varying result sizes from very small amounts of data to the entire table being returned (1TB+). Is there a way to dry run the query to see how much data it is expected to return to see if a destination table needs to be written or not?

Thanks!

jlynchMicron avatar Jul 13 '22 16:07 jlynchMicron

@jlynchMicron I don't know of a good way to know the result size without actually running the query.

In the google-cloud-bigquery library, you can run the query and wait for it to finish without downloading the results. At that point, you can check the result size before downloading it, but that still doesn't help know if you need to create a destination table or not.
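
Roughly, with google-cloud-bigquery (the query and project ID are placeholders):

from google.cloud import bigquery

client = bigquery.Client(project="your-project-id")
job = client.query("SELECT * FROM `your_dataset.your_table`")  # placeholder query
job.result()  # block until the query finishes; rows are not iterated here

# Inspect the (possibly anonymous) results table before deciding how to download it.
destination = client.get_table(job.destination)
print(destination.num_rows, "rows,", destination.num_bytes, "bytes")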

For your use case, I might suggest always creating a destination table but also setting an expiration time so that the result tables don't stay around for much longer than they are needed.
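
One way to do that (the table and project IDs are placeholders) is to set expires on the destination table after the query completes:

import datetime

from google.cloud import bigquery

client = bigquery.Client(project="your-project-id")

# Placeholder destination table written by the query job.
table = client.get_table("your-project-id.query_results.result_table")
table.expires = datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(hours=6)
client.update_table(table, ["expires"])

Writing results into a dataset that already has a default table expiration (as sketched earlier in this thread) achieves the same thing without per-table cleanup.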

tswast avatar Jul 13 '22 18:07 tswast

Hi @tswast,

I tried your first code suggestion as a test, but it fails to work: data_frame = pandas_gbq.read_gbq( 'data-project-id.dataset_id.table_id', project_id='billing-project-id', )

Error: "Reason: 400 Syntax error: Expected end of input but got identifier" ... it goes on with my table information.

Looking at the query in BigQuery, it thinks I'm trying to execute that table identifier as a SQL query and does not know what to do with it. Based on this, I don't think you can download a table just by putting in its table identifier string.

This seems like it's a problem, especially if I want to do a "SELECT *" on an existing table. Does that mean my only course of action is to save the result of a table "SELECT *" to another table, which the query will then download from?

jlynchMicron avatar Jul 13 '22 19:07 jlynchMicron

Hi @jlynchMicron the feature to read directly from a table was added in pandas-gbq version 0.17.0 and later. https://github.com/googleapis/python-bigquery-pandas/blob/main/CHANGELOG.md#0170-2022-01-19 That may explain why you are encountering this error.
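
A quick way to confirm which version is installed (standard library only, Python 3.8+):

import importlib.metadata

print(importlib.metadata.version("pandas-gbq"))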

tswast avatar Jul 14 '22 23:07 tswast

Hi @tswast, ah looks like I am using version 0.16.0. Thanks!

jlynchMicron avatar Jul 15 '22 15:07 jlynchMicron

Hi @tswast,

If I know my query will exceed the "max query response size" and I will have to specify a destination table to write the results to, do you know if there is any way to specify that the result table should be a temporary table?

The type of table defined in these places:

  • https://cloud.google.com/bigquery/docs/writing-results#temporary_and_permanent_tables
  • https://cloud.google.com/bigquery/docs/multi-statement-queries#temporary_tables

If this was possible, do you think this process could be automated into the read_gbq function as an argument flag? I am just trying to think of a nice way to download very large results for a python/pandas data scientist who does not want to deal with extra table management just to execute a query on a big dataset.

jlynchMicron avatar Aug 16 '22 22:08 jlynchMicron

It's not possible to manually create a temporary table, but you can emulate one by creating a table with a default expiration time.

If this was possible, do you think this process could be automated into the read_gbq function as an argument flag? I am just trying to think of a nice way to download very large results for a python/pandas data scientist who does not want to deal with extra table management just to execute a query on a big dataset.

I'd like to learn a little more about the cases in which this error can occur, as I didn't realize it could happen when using Standard SQL syntax. Perhaps one of my teammates can investigate as a feature request.

If we were to implement this, I'd expect us to create a _pandas_gbq_results dataset or similar with a default table expiration for use in emulating temporary tables.

tswast avatar Aug 24 '22 14:08 tswast