snowflake-connector-python
SNOW-242177: AttributeError: 'NoneType' object has no attribute 'fetch'
Please answer these questions before submitting your issue. Thanks!
- What version of Python are you using (`python --version`)?

Python 3.8.2
- What operating system and processor architecture are you using (`python -c 'import platform; print(platform.platform())'`)?

Using the python:3.8.2 image from Docker Hub
- What are the component versions in the environment (`pip freeze`)?

```
snowflake-connector-python==2.1.1
azure-storage-blob==2.1.0
jinja2==2.11.2
```
- What did you do?
```python
results = execute_snowflake_query(snowflake_database, None, query, context, verbose)
existing_privileges = []
for cursor in results:
    cursor_results = cursor.fetchmany(1000000)
    cursor_results_array = list(itertools.chain.from_iterable(cursor_results))
    print(cursor_results_array[0])
    if cursor_results_array[0] != 'Statement executed successfully.':
        existing_privileges.extend(cursor_results_array)
return existing_privileges
```
- What did you expect to see?
No error
- What did you see instead?
AttributeError: 'NoneType' object has no attribute 'fetch'
- Can you set logging to DEBUG and collect the logs?
```
Statement executed successfully.

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/json_result.py", line 76, in __next__
    row = next(self._current_chunk_row)
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/chunk_downloader.py", line 270, in __next__
    return next(self._it)
StopIteration

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  ... (not showing these frames as they are my code)
    cursor_results = cursor.fetchmany(1000000)
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/cursor.py", line 844, in fetchmany
    row = self.fetchone()
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/cursor.py", line 823, in fetchone
    return next(self._result)
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/json_result.py", line 82, in __next__
    next_chunk = self._chunk_downloader.next_chunk()
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/chunk_downloader.py", line 187, in next_chunk
    raise self._downloader_error
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/chunk_downloader.py", line 125, in _download_chunk
    result_data = self._fetch_chunk(self._chunks[idx].url, headers)
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/chunk_downloader.py", line 254, in _fetch_chunk
    return self._connection.rest.fetch(
AttributeError: 'NoneType' object has no attribute 'fetch'
```
Is there a 10k limit on the row count that you will return?
I get the same error. In my case I read 100K records with a cursor in chunks of 100, and the error is thrown when it reaches 36,800 rows.
@peterburnash would you be able to reply with the code you've used to make that happen - are you using a fetchmany() inside a for loop?
```python
con = snowflake.connector.connect(**options)
cursor = con.execute_string("select * from testtable")[-1]
```

Then if I check the cursor's row-count property (going by memory) it shows 36000, even though testtable has 1M+ rows. Looping with fetchmany or fetchone throws an error when it crosses the 36000 boundary.
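A reconstruction of the loop being described, for anyone trying to follow along (`options` and `testtable` are hypothetical stand-ins, not from the original report):

```python
import snowflake.connector

# Placeholder connection parameters; substitute real credentials.
options = {"user": "...", "password": "...", "account": "..."}

con = snowflake.connector.connect(**options)
cursor = con.execute_string("select * from testtable")[-1]

rows_read = 0
while True:
    rows = cursor.fetchmany(100)  # "chunks of 100" as described above
    if not rows:
        break
    rows_read += len(rows)
# On the affected versions, the AttributeError reportedly surfaced once the
# loop crossed out of the first downloaded result chunk (~36,000 rows here).
print(rows_read)
con.close()
```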
Hey folks! We experienced this same issue again on version 2.3.7.
I bumped the version to 2.6.0 and am experiencing a similar, but not exactly the same, issue: it downloads some of the chunks of data and then just stops with the following stacktrace.
```
DEBUG:snowflake.connector.CArrowIterator:Current batch index: 2, rows in current batch: 309
DEBUG:snowflake.connector.result_set:user requesting to consume result batch 1
DEBUG:snowflake.connector.result_batch:started downloading result batch id: data_0_0_1
ERROR:snowflake.connector.result_batch:Failed to fetch the large result set batch data_0_0_1 for the 2 th time, backing off for 4s for the reason: ''NoneType' object has no attribute '_use_requests_session''
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/result_batch.py", line 294, in _download
    with connection._rest._use_requests_session() as session:
AttributeError: 'NoneType' object has no attribute '_use_requests_session'
DEBUG:snowflake.connector.result_batch:started downloading result batch id: data_0_0_1
ERROR:snowflake.connector.result_batch:Failed to fetch the large result set batch data_0_0_1 for the 3 th time, backing off for 8s for the reason: ''NoneType' object has no attribute '_use_requests_session''
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/snowflake/connector/result_batch.py", line 294, in _download
    with connection._rest._use_requests_session() as session:
AttributeError: 'NoneType' object has no attribute '_use_requests_session'
DEBUG:snowflake.connector.result_batch:started downloading result batch id: data_0_0_1
ERROR:snowflake.connector.result_batch:Failed to fetch the large result set batch data_0_0_1 for the 4 th time, backing off for 16s for the reason: ''NoneType' object has no attribute '_use_requests_session''
```
This stacktrace repeats until the 9th retry.
Our logic uses fetchmany() like below. After the library fails for the 9th time we retry the same call, but it just returns None.

```python
cursor_results = cursor.fetchmany(5000)
```
Help would be appreciated :)
Cheers, Pavan
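As an aside, a common way to write this kind of consumption loop (a generic sketch, not the reporter's actual code) treats an empty batch as exhaustion, which also happens to guard against the `None` return described above:

```python
def drain(cursor, batch_size=5000):
    """Yield every row from the cursor in fixed-size fetchmany() batches.

    `if not rows` is true for both an empty list and None, so the loop
    terminates cleanly even when a failed fetch returns None.
    """
    while True:
        rows = cursor.fetchmany(batch_size)
        if not rows:
            break
        yield from rows
```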
@petehanssens can you provide the function definition of execute_snowflake_query? @pavan-intellify information on how you're calling the function would also provide some insight into this.
I will mark this as closed as I am unable to reproduce. Please re-open if need be with more details of the query.
@sfc-gh-jbahk
```python
def execute_snowflake_query(database, schema, query, context, verbose):
    con = snowflake.connector.connect(
        user = context['user'],
        account = context['account'],
        role = context['role'],
        warehouse = context['warehouse'],
        database = database,
        schema = schema,
        region = context['region'],
        authenticator = context['authenticator'],
        password = context['password']
    )
    if verbose:
        print("SQL query: %s" % query)
    try:
        return con.execute_string(query)
    finally:
        con.close()
```
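One observation on the function above (a guess at the cause, not a confirmed diagnosis): `execute_string` returns cursors whose large result sets are downloaded lazily, chunk by chunk, through the connection's REST client, and `close()` tears that client down. Since the `finally` block closes the connection as soon as the cursors are returned, any chunk fetched afterwards finds `connection.rest` set to `None`, which is exactly the attribute the tracebacks above complain about. A minimal sketch of a variant that materializes the rows before closing (same `context` keys as above; note it returns lists of rows rather than live cursors, so the calling fetchmany loop would change):

```python
import snowflake.connector

def execute_snowflake_query_eager(database, schema, query, context, verbose):
    # Hypothetical variant of execute_snowflake_query above.
    con = snowflake.connector.connect(
        user=context['user'],
        account=context['account'],
        role=context['role'],
        warehouse=context['warehouse'],
        database=database,
        schema=schema,
        region=context['region'],
        authenticator=context['authenticator'],
        password=context['password']
    )
    try:
        if verbose:
            print("SQL query: %s" % query)
        # fetchall() forces every remote chunk to download while `con` is
        # still open, so nothing touches con.rest after close().
        return [cur.fetchall() for cur in con.execute_string(query)]
    finally:
        con.close()
```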
Hi @sfc-gh-jbahk, can you re-open this ticket and look at the above, please?
@petehanssens can you provide the query itself? The query ID?
@petehanssens @krutisfood please get back to me with the above asks for me to proceed; I am unable to reproduce the issue. On another note, a fix that deals with something similar has been merged some time ago so I suggest upgrading the driver as well. Thanks.
I am also hitting a similar error; here's the traceback. I have tried the latest Python Snowflake connector and still got the same failure message. The query is pulling data from a table containing ~900 rows. Could this be checked?
```
Traceback (most recent call last):
  File "c:\folders\schemachange-master\schemachange\cli.py", line 759, in
```
@sfc-gh-jbahk, it looks like the reported issue is not fixed yet. Could this issue be re-opened?
Re-opened. I'm happy to look into this, but I generally need more information than just the error statement, especially if there's no clear and consistent method of reproduction. Thanks all for your patience.
Let me know what info you may require.
@kumar-sf can you provide the query you ran? The query ID? What version of the connector are you on? Like I mentioned above, just sharing the error logs does little to aid the investigation process. If you'd rather not share it in a public forum, you are welcome to raise a ticket with our support team.
@sfc-gh-jbahk, I think a Snowflake case has been created.
@sfc-gh-jbahk I'm running into the same error as @kumar-sf.
File "/usr/local/lib/python3.9/site-packages/snowflake/connector/result_batch.py", line 295, in _download
with connection._rest._use_requests_session() as session:
AttributeError: 'NoneType' object has no attribute '_use_requests_session'
I've installed https://sfc-repo.snowflakecomputing.com/snowsql/bootstrap/1.2/linux_x86_64/snowflake-snowsql-1.2.21-1.x86_64.rpm.
The query is `select to_json(col_x) from table_x limit 1`, where col_x contains a ~8 MB file.
It works if I wait for 10 minutes between each script execution. If I execute my script twice without waiting then I consistently get this error.
@kumar-sf we have been experiencing the same issue as you when running schemachange pipelines. I was able to trace the issue to the situation where the number of R-scripts we have in the versionhistory table (and the length of those script names) is large enough that the cursor contains more than one "chunk" (i.e. result batch). When iterating through the rows in the cursor, it consistently fails when trying to transition from the first chunk to the second. On a whim I tried the latest version of the snowflake-connector-python (2.7.8, just recently released) and it no longer failed. You mentioned above that you thought a snowflake case had been created - sounds like maybe this was resolved recently? If you have an update that could be shared here that would be helpful.
I simplified code based on the relevant schemachange code to pinpoint where the failure was occurring:
```python
import snowflake.connector
from pandas import DataFrame

con = snowflake.connector.connect(
    user = '',
    account = '',
    role = '',
    warehouse = '',
    database = '',
    password = '',
    application = '',
)

query = "SELECT DISTINCT SCRIPT, FIRST_VALUE(CHECKSUM) OVER (PARTITION BY SCRIPT ORDER BY INSTALLED_ON DESC) \
FROM DEV.CONTROL.SCHEMACHANGE_VERSIONHISTORY WHERE SCRIPT_TYPE = 'R' AND STATUS = 'Success'"

res = con.execute_string(query)
con.close()

# from fetch_r_scripts_checksum method
# Collect all the results into a dict
d_script_checksum = DataFrame(columns=['script_name', 'checksum'])
script_names = []
checksums = []

for cursor in res:
    for row in cursor:  # FAILS HERE - after the last row in the first chunk,
                        # having dug into chunk sizes within the cursor instance
        script_names.append(row[0])
        checksums.append(row[1])

d_script_checksum['script_name'] = script_names
d_script_checksum['checksum'] = checksums
d_script_checksum.to_csv('schemachange_testing.csv')
```
I did also just submit a proposed update to schemachange to handle this more gracefully for installs that aren't on the latest snowflake-connector-python version: all it changes is first returning the batches using cursor.get_result_batches(), then iterating through each batch to access the rows (a sketch of that pattern follows).
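For reference, a minimal sketch of that batch-based iteration (an adaptation of the repro code above, not the actual schemachange patch; it assumes `res` holds the cursors from `con.execute_string(query)` and a connector version that provides `get_result_batches()`):

```python
script_names = []
checksums = []
for cursor in res:
    for batch in cursor.get_result_batches():
        # create_iter() yields the rows of this batch; each ResultBatch
        # downloads its own chunk rather than going through the cursor's
        # shared chunk iterator.
        for row in batch.create_iter():
            script_names.append(row[0])
            checksums.append(row[1])
```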
@gcnuss, yes, that is the exact root cause I found when troubleshooting. I think the issue was resolved as of snowflake-connector-python version 2.7.5.
Great to get that confirmation! Thanks for the quick reply
Can we close this issue?