
SNOW-242177: AttributeError: 'NoneType' object has no attribute 'fetch'

Open petehanssens opened this issue 4 years ago • 21 comments

Please answer these questions before submitting your issue. Thanks!

  1. What version of Python are you using (python --version)?

Python 3.8.2

  2. What operating system and processor architecture are you using (python -c 'import platform; print(platform.platform())')?

Using python:3.8.2 image from DockerHub

  3. What are the component versions in the environment (pip freeze)?

```
snowflake-connector-python==2.1.1
azure-storage-blob==2.1.0
jinja2==2.11.2
```
  4. What did you do?

```python
results = execute_snowflake_query(snowflake_database, None, query, context, verbose)
existing_privileges = []
for cursor in results:
    cursor_results = cursor.fetchmany(1000000)
    cursor_results_array = list(itertools.chain.from_iterable(cursor_results))
    print(cursor_results_array[0])
    if cursor_results_array[0] != 'Statement executed successfully.':
        existing_privileges.extend(cursor_results_array)
return existing_privileges
```
  5. What did you expect to see?

No error

  6. What did you see instead?

AttributeError: 'NoneType' object has no attribute 'fetch'
  7. Can you set logging to DEBUG and collect the logs?
Statement executed successfully.

```
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/json_result.py", line 76, in __next__
    row = next(self._current_chunk_row)
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/chunk_downloader.py", line 270, in __next__
    return next(self._it)
StopIteration

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  ... (not showing these frames as they are my code)
    cursor_results = cursor.fetchmany(1000000)
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/cursor.py", line 844, in fetchmany
    row = self.fetchone()
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/cursor.py", line 823, in fetchone
    return next(self._result)
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/json_result.py", line 82, in __next__
    next_chunk = self._chunk_downloader.next_chunk()
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/chunk_downloader.py", line 187, in next_chunk
    raise self._downloader_error
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/chunk_downloader.py", line 125, in _download_chunk
    result_data = self._fetch_chunk(self._chunks[idx].url, headers)
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/chunk_downloader.py", line 254, in _fetch_chunk
    return self._connection.rest.fetch(
AttributeError: 'NoneType' object has no attribute 'fetch'
```

petehanssens avatar Dec 11 '20 07:12 petehanssens

Is there a 10k limit on the row count that you will return?

petehanssens avatar Dec 11 '20 07:12 petehanssens

I get the same error. In my case I read 100K records with a cursor in chunks of 100, and the error is thrown when it reaches 36800 rows.

pburner avatar Dec 11 '20 19:12 pburner

@peterburnash would you be able to reply with the code you've used to make that happen - are you using a fetchmany() inside a for loop?

petehanssens avatar Dec 13 '20 11:12 petehanssens

```python
con = snowflake.connector.connect(**options)
cursor = con.execute_string("select * from testtable")[-1]
```

Then if I check the cursor.row_count property (going by memory) it shows 36000, given that testtable has 1M+ rows. Looping with fetchmany or fetchone throws an error when it crosses the 36000 boundary.

pburner avatar Dec 28 '20 18:12 pburner

Hey folks! We experienced this same issue again on version 2.3.7.

I bumped the connector to 2.6.0 and am now hitting a similar, but not identical, issue. It downloads some of the chunks of data until it just stops with the following stack trace.

```
DEBUG:snowflake.connector.CArrowIterator:Current batch index: 2, rows in current batch: 309
DEBUG:snowflake.connector.result_set:user requesting to consume result batch 1
DEBUG:snowflake.connector.result_batch:started downloading result batch id: data_0_0_1
ERROR:snowflake.connector.result_batch:Failed to fetch the large result set batch data_0_0_1 for the 2 th time, backing off for 4s for the reason: ''NoneType' object has no attribute '_use_requests_session''
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/result_batch.py", line 294, in _download
    with connection._rest._use_requests_session() as session:
AttributeError: 'NoneType' object has no attribute '_use_requests_session'
DEBUG:snowflake.connector.result_batch:started downloading result batch id: data_0_0_1
ERROR:snowflake.connector.result_batch:Failed to fetch the large result set batch data_0_0_1 for the 3 th time, backing off for 8s for the reason: ''NoneType' object has no attribute '_use_requests_session''
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/result_batch.py", line 294, in _download
    with connection._rest._use_requests_session() as session:
AttributeError: 'NoneType' object has no attribute '_use_requests_session'
DEBUG:snowflake.connector.result_batch:started downloading result batch id: data_0_0_1
ERROR:snowflake.connector.result_batch:Failed to fetch the large result set batch data_0_0_1 for the 4 th time, backing off for 16s for the reason: ''NoneType' object has no attribute '_use_requests_session''
```

This stack trace repeats until the 9th attempt.

Our logic uses fetchmany() like below. We retry the same call after the library fails on the 9th attempt, but it just returns None.

```python
cursor_results = cursor.fetchmany(5000)
```

Help would be appreciated :)

Cheers, Pavan

pavan-intellify avatar Sep 01 '21 07:09 pavan-intellify

@petehanssens can you provide the function definition of execute_snowflake_query? @pavan-intellify information on how you're calling the function would also provide some insight into this.

sfc-gh-jbahk avatar Nov 11 '21 04:11 sfc-gh-jbahk

I will mark this as closed as I am unable to reproduce. Please re-open if need be with more details of the query.

sfc-gh-jbahk avatar Dec 09 '21 21:12 sfc-gh-jbahk

@sfc-gh-jbahk

```python
def execute_snowflake_query(database, schema, query, context, verbose):
    con = snowflake.connector.connect(
        user=context['user'],
        account=context['account'],
        role=context['role'],
        warehouse=context['warehouse'],
        database=database,
        schema=schema,
        region=context['region'],
        authenticator=context['authenticator'],
        password=context['password']
    )
    if verbose:
        print("SQL query: %s" % query)
    try:
        return con.execute_string(query)
    finally:
        con.close()
```
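The `finally` block above closes the connection before the returned cursors are ever iterated, which matches the `NoneType` tracebacks earlier in the thread: large result sets download their remaining chunks lazily and need the live connection. A minimal sketch of a variant that materializes every row before closing; `execute_snowflake_query_eager` and `flatten_results` are illustrative names, not part of the original code:

```python
import itertools


def execute_snowflake_query_eager(database, schema, query, context, verbose):
    # Hypothetical variant of the function above. Imported inside the
    # function so the sketch stays self-contained where the connector
    # is not installed.
    import snowflake.connector

    con = snowflake.connector.connect(
        user=context['user'],
        account=context['account'],
        role=context['role'],
        warehouse=context['warehouse'],
        database=database,
        schema=schema,
        region=context['region'],
        authenticator=context['authenticator'],
        password=context['password'],
    )
    if verbose:
        print("SQL query: %s" % query)
    try:
        # fetchall() forces every chunk to download while the connection is
        # still alive; only fully materialized row lists leave the function.
        return [cursor.fetchall() for cursor in con.execute_string(query)]
    finally:
        con.close()


def flatten_results(results):
    # Flatten the per-cursor row lists into a single list of column values,
    # mirroring the itertools.chain usage in the snippet at the top of
    # this issue.
    return list(itertools.chain.from_iterable(
        itertools.chain.from_iterable(results)))
```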

krutisfood avatar Dec 16 '21 00:12 krutisfood

Hi @sfc-gh-jbahk can you re-open this ticket and look at the above please?

petehanssens avatar Dec 16 '21 12:12 petehanssens

@petehanssens can you provide the query itself? The query ID?

sfc-gh-jbahk avatar Dec 17 '21 00:12 sfc-gh-jbahk

@petehanssens @krutisfood please get back to me with the information requested above so I can proceed; I am unable to reproduce the issue. On another note, a fix for something similar was merged some time ago, so I suggest upgrading the driver as well. Thanks.

sfc-gh-jbahk avatar Jan 18 '22 22:01 sfc-gh-jbahk

I am also hitting a similar error; the traceback is below. I have tried the latest Python Snowflake connector and still get the same failure. The table the query pulls from contains ~900 rows. Could this be checked?

```
Traceback (most recent call last):
  File "c:\folders\schemachange-master\schemachange\cli.py", line 759, in <module>
    main()
  File "c:\folders\schemachange-master\schemachange\cli.py", line 756, in main
    deploy_command(config)
  File "c:\folders\schemachange-master\schemachange\cli.py", line 199, in deploy_command
    r_scripts_checksum = fetch_r_scripts_checksum(change_history_table, snowflake_session_parameters, config['autocommit'], config['verbose'])
  File "c:\folders\schemachange-master\schemachange\cli.py", line 618, in fetch_r_scripts_checksum
    for row in cursor:
  File "C:\Users\ram\Anaconda3\lib\site-packages\snowflake\connector\cursor.py", line 1082, in _result_iterator
    for _next in self._result:
  File "C:\Users\ram\Anaconda3\lib\site-packages\snowflake\connector\result_set.py", line 99, in result_set_iterator
    batch_iterator = future.result()
  File "C:\Users\ram\Anaconda3\lib\concurrent\futures\_base.py", line 445, in result
    return self.__get_result()
  File "C:\Users\ram\Anaconda3\lib\concurrent\futures\_base.py", line 390, in __get_result
    raise self._exception
  File "C:\Users\ram\Anaconda3\lib\concurrent\futures\thread.py", line 52, in run
    result = self.fn(*self.args, **self.kwargs)
  File "C:\Users\ram\Anaconda3\lib\site-packages\snowflake\connector\result_batch.py", line 734, in create_iter
    return self._create_iter(iter_unit=iter_unit, connection=connection)
  File "C:\Users\ram\Anaconda3\lib\site-packages\snowflake\connector\result_batch.py", line 653, in _create_iter
    response = self._download(connection=connection)
  File "C:\Users\ram\Anaconda3\lib\site-packages\snowflake\connector\result_batch.py", line 340, in _download
    raise e
  File "C:\Users\ram\Anaconda3\lib\site-packages\snowflake\connector\result_batch.py", line 308, in _download
    with connection._rest._use_requests_session() as session:
AttributeError: 'NoneType' object has no attribute '_use_requests_session'
```

kumar-sf avatar Feb 03 '22 17:02 kumar-sf

@sfc-gh-jbahk, Looks like the issue reported is not fixed yet. could this issue be re-opened ?

kumar-sf avatar Feb 03 '22 19:02 kumar-sf

Re-opened. I'm happy to look into this, but I generally need more information than just the error statement, especially when there's no clear and consistent method of reproduction. Thanks all for your patience.

sfc-gh-jbahk avatar Feb 03 '22 19:02 sfc-gh-jbahk

Let me know what info you may require.

kumar-sf avatar Feb 03 '22 19:02 kumar-sf

@kumar-sf can you provide the query you ran? The query ID? What version of the connector are you on? Like I mentioned above, just sharing the error logs does little to aid the investigation process. If you'd rather not share it in a public forum, you are welcome to raise a ticket with our support team.

sfc-gh-jbahk avatar Feb 09 '22 18:02 sfc-gh-jbahk

@sfc-gh-jbahk , I think a Snowflake support case has been created.

kumar-sf avatar Feb 09 '22 22:02 kumar-sf

@sfc-gh-jbahk I'm running into the same error as @kumar-sf.

```
  File "/usr/local/lib/python3.9/site-packages/snowflake/connector/result_batch.py", line 295, in _download
    with connection._rest._use_requests_session() as session:
AttributeError: 'NoneType' object has no attribute '_use_requests_session'
```

I've installed https://sfc-repo.snowflakecomputing.com/snowsql/bootstrap/1.2/linux_x86_64/snowflake-snowsql-1.2.21-1.x86_64.rpm.

The query is `select to_json(col_x) from table_x limit 1`, where col_x contains a ~8 MB file.

It works if I wait for 10 minutes between each script execution. If I execute my script twice without waiting then I consistently get this error.

courtney-miro avatar Apr 14 '22 15:04 courtney-miro

@kumar-sf we have been experiencing the same issue as you when running schemachange pipelines. I was able to trace it to the situation where the number of R-scripts in the versionhistory table (and the length of those script names) is large enough that the cursor contains more than one "chunk" (i.e. result batch). When iterating through the rows in the cursor, it consistently fails when trying to transition from the first chunk to the second.

On a whim I tried the latest version of snowflake-connector-python (2.7.8, just recently released) and it no longer failed. You mentioned above that you thought a Snowflake case had been created - sounds like maybe this was resolved recently? If you have an update that could be shared here, that would be helpful.

I simplified code based on the relevant schemachange code to pinpoint where the failure was occurring:

```python
import snowflake.connector
from pandas import DataFrame

con = snowflake.connector.connect(
    user='',
    account='',
    role='',
    warehouse='',
    database='',
    password='',
    application='',
)

query = "SELECT DISTINCT SCRIPT, FIRST_VALUE(CHECKSUM) OVER (PARTITION BY SCRIPT ORDER BY INSTALLED_ON DESC) \
FROM DEV.CONTROL.SCHEMACHANGE_VERSIONHISTORY WHERE SCRIPT_TYPE = 'R' AND STATUS = 'Success'"

res = con.execute_string(query)
con.close()

# from fetch_r_scripts_checksum method
# Collect all the results into a dict
d_script_checksum = DataFrame(columns=['script_name', 'checksum'])
script_names = []
checksums = []
for cursor in res:
    for row in cursor:  # FAILS HERE - after the last row in the first chunk, having dug into chunk sizes within the cursor instance
        script_names.append(row[0])
        checksums.append(row[1])

d_script_checksum['script_name'] = script_names
d_script_checksum['checksum'] = checksums

d_script_checksum.to_csv('schemachange_testing.csv')
```

I also just submitted a proposed update to schemachange to handle this more gracefully when not on the latest snowflake-connector-python version. All it changes is first returning the batches using cursor.get_result_batches(), then iterating through each batch to access the rows.
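That batch-based approach can be sketched like this; `rows_from_batches` is a hypothetical helper name (not part of schemachange or the connector), while `cursor.get_result_batches()` is the connector API mentioned above, available in recent connector versions:

```python
def rows_from_batches(cursor):
    # Drain a cursor via its result batches instead of iterating it directly.
    # Each ResultBatch carries its own download metadata, so in newer
    # connector versions a batch can be fetched even when the originating
    # connection is no longer usable.
    batches = cursor.get_result_batches()
    if batches is None:
        # The statement produced no result set (e.g. DDL).
        return []
    return [row for batch in batches for row in batch]
```

Any cursor-like object whose `get_result_batches()` returns iterables of rows works the same way, which also makes the pattern easy to unit test without a live connection.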

gcnuss avatar Jun 14 '22 04:06 gcnuss

@gcnuss , yes, that is the exact root cause I found when I was troubleshooting. I believe the issue was resolved as of snowflake-connector-python version 2.7.5.

kumar-sf avatar Jun 14 '22 12:06 kumar-sf

Great to get that confirmation! Thanks for the quick reply

gcnuss avatar Jun 14 '22 21:06 gcnuss

Can we close this issue?

iamontheinet avatar Dec 12 '22 23:12 iamontheinet