
aiobotocore or aiohttp stuck after many requests

Open dimaqq opened this issue 6 years ago • 36 comments

Describe the bug

It's a bug that happens rarely in production. We've not been able to reproduce it in an isolated environment yet.

The code in question is a background worker: it pulls a job from SQS and runs it. The job in question was to delete hundreds of objects, which means ~800 concurrent coroutines, each of which issues one to several concurrent delete_item calls.

gather(800 × obj), where each obj = gather(1+ × delete_item())

In this case, it just got stuck on object 799.
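For concreteness, the fan-out shape described above can be sketched roughly as follows (a hypothetical reconstruction, not the actual worker code; the client, table, and key names are placeholders):

```python
import asyncio

async def delete_object(client, table, keys):
    # each object issues 1..several concurrent delete_item calls
    await asyncio.gather(
        *(client.delete_item(TableName=table, Key=key) for key in keys)
    )

async def run_job(client, table, objects):
    # ~800 concurrent coroutines, one per object
    await asyncio.gather(
        *(delete_object(client, table, keys) for keys in objects)
    )
```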

Checklist

  • [x] I have reproduced this in an environment where pip check passes without errors
  • [x] I have provided pip freeze results
  • [ ] I have provided sample code or a detailed way to reproduce
  • [ ] I have tried the same code in botocore to ensure this is an aiobotocore-specific issue

pip freeze results

aiobotocore==0.10.3
aiodns==2.0.0
aiodynamo==19.9
aiohttp==3.6.2
boto3==1.9.189
botocore==1.12.189
requests==2.22.0
requests-toolbelt==0.9.1
urllib3==1.25.6

Environment:

  • Python Version: 3.7.5
  • OS name and version: 3.7.5-alpine10 (official Docker image + deps + our code)

Additional context

Thanks to https://github.com/dimaqq/awaitwhat I have a nice coroutine trace from when the process got stuck (screenshot: aiobotocore-stuck).

dimaqq avatar Oct 24 '19 08:10 dimaqq

One thing to note is that the code in question does essentially:

return await self._aio_session.request(...)

While official aiohttp documentation appears to recommend:

async with session.request(...):
    return xxx

dimaqq avatar Oct 24 '19 08:10 dimaqq

The session is __aenter__'d here: https://github.com/aio-libs/aiobotocore/blob/master/aiobotocore/endpoint.py#L98

You need to write your code like the following so that __aexit__ is called:

response = await client.method(...)
async with response['Body'] as stream:
    pass
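A minimal sketch of that pattern for a streaming API such as S3 GetObject (assuming an already-created aiobotocore client; the bucket and key names are placeholders). Entering the body as an async context manager ensures __aexit__ runs, so the underlying aiohttp connection goes back to the pool:

```python
async def read_object(s3_client, bucket, key):
    response = await s3_client.get_object(Bucket=bucket, Key=key)
    async with response['Body'] as stream:  # __aexit__ releases the connection
        return await stream.read()
```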

I've seen issues with newer aiohttp; could you try 3.2.2 with Python 3.6? Another long-standing issue: if you wait long enough, botocore will eventually attempt to refresh your credentials, and that whole section hasn't been async'ified yet: https://github.com/aio-libs/aiobotocore/issues/619

At our company we have code that periodically refreshes the credentials; I'll see if I can open-source it.

thehesiod avatar Oct 24 '19 16:10 thehesiod

OK, added: https://gist.github.com/thehesiod/05298c1c89c7b5a38da0abc4ccbed7b7

thehesiod avatar Oct 24 '19 16:10 thehesiod

On the other hand, this looks like https://github.com/aio-libs/aiohttp/issues/4258

thehesiod avatar Oct 24 '19 16:10 thehesiod

Re: credential refresh, I'm not well-versed in the deployment... Do I understand it right that if we don't use AWS_PROFILE, then there's nothing to refresh? As far as I can tell we don't use that.

dimaqq avatar Oct 25 '19 01:10 dimaqq

I've poked the instance where container is running:

  • security-credentials are updated every hour
  • they are valid for 8h30min
  • process got stuck 40 minutes after latest update, that's 20 minutes before next 🤔
  • process was running (mostly idle) 20h45min before it got stuck while executing a job

My hunch is, this is unrelated to credentials rollover.

It's something else, maybe connection pool or DNS or whatnot...

dimaqq avatar Oct 25 '19 05:10 dimaqq

I worked hard with 3.2.2 to ensure connect/read timeouts worked correctly; DNS lookups also have a timeout in aiohttp. Can you try the version suggestions above?

thehesiod avatar Oct 25 '19 05:10 thehesiod

I'm curious as well, as we want to move to the new aiohttp and Python releases, but we cannot if socket timeouts are broken.

thehesiod avatar Oct 25 '19 05:10 thehesiod

Edit: 100 was a red herring.

Was: 99 done tasks + 1 stuck task == 100, which was suspicious.

dimaqq avatar Oct 25 '19 05:10 dimaqq

Sounds like an aiohttp issue. max_pool_connections defaults to 10: https://github.com/boto/botocore/blob/develop/botocore/endpoint.py#L35, which we obey: https://github.com/aio-libs/aiobotocore/blob/master/aiobotocore/endpoint.py#L176
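For anyone hitting the default limit, a hedged sketch of raising it via aiobotocore's AioConfig (the value 100 is an arbitrary example, and exact client-creation details vary across aiobotocore versions, so treat this as a sketch rather than copy-paste):

```python
import aiobotocore
from aiobotocore.config import AioConfig

# botocore defaults max_pool_connections to 10; raise it explicitly
config = AioConfig(max_pool_connections=100)
session = aiobotocore.get_session()
client = session.create_client('s3', config=config)
```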

thehesiod avatar Oct 25 '19 06:10 thehesiod

This really sounds like https://github.com/aio-libs/aiohttp/issues/4258; could you report it there? Note from the linked bug: "We run aiohttp 3.6.2 before on Hass.io 190. After we had some issues with ingress, we detect that request going lost or more specific was blocked and never response inside aiohttp core."

thehesiod avatar Oct 25 '19 06:10 thehesiod

Actually, first I would come up with a reproducible test case. You can use moto to mock the backend.

thehesiod avatar Oct 25 '19 06:10 thehesiod

100 was a red herring, sorry. It was batching in our code. Original comment edited.

dimaqq avatar Oct 25 '19 06:10 dimaqq

I've updated the issue template again to help narrow down future issues.

thehesiod avatar Oct 28 '19 21:10 thehesiod

I wonder if something like https://github.com/aio-libs/aiobotocore/pull/724 might fix it... I'll post an update when 0.10.4 is thoroughly tested.

dimaqq avatar Oct 29 '19 04:10 dimaqq

That was a no-op; before, it was returning a future/coroutine that could be awaited.

thehesiod avatar Oct 30 '19 07:10 thehesiod

I've faced the problem too while uploading 1000 objects to S3 with code similar to this:

session = aiobotocore.get_session(loop=event_loop)
async with session.create_client('s3') as s3_client:
    awaitables = []
    for instance in instances:
        awaitable = s3_client.put_object(
            Bucket=s3_bucket_name,
            Key=instance.key,
            Body=instance.body,
        )
        awaitables.append(awaitable)
    await asyncio.gather(*awaitables)

It happens in around two of five tries: it uploads 999 files and then gets stuck.

Python 3.7 and aiobotocore 0.11.1

illia-v avatar Feb 03 '20 14:02 illia-v

Still getting stuck for me with the following versions:

  • Python 3.8.1
  • aiobotocore 0.11.1
  • aiodns 2.0.0
  • aiohttp 3.6.2
  • botocore 1.13.14
  • boto3 1.10.14

It's kinda reproducible (not every time) in prod and staging against the real AWS DynamoDB. However, it's not reproducible locally against dynalite.

I'm beginning to suspect it's somehow more complicated. For example, I'd guess that the DynamoDB endpoint resolves to multiple IP addresses; perhaps re-connection, retry, and/or switching to another IP address is the key? 🤔

dimaqq avatar Feb 04 '20 02:02 dimaqq

Can you try aiohttp 3.3.2?

thehesiod avatar Feb 05 '20 05:02 thehesiod

Can you try aiohttp 3.3.2?

Did something significant change in subsequent versions, or what?

dimaqq avatar Feb 05 '20 06:02 dimaqq

Our company has been using 3.3.2 for some time and has never run into this issue, and I've seen aiohttp bugs about newer versions hanging, so I want to see if this is an aiohttp regression.

thehesiod avatar Feb 05 '20 08:02 thehesiod

Hello, I am experiencing a similar issue.

My code is fetching messages from SQS and invoking a lambda function asynchronously. I can reproduce this with a sample of 200k messages in SQS.

A few requests to the lambda client will hang and be released only when I cancel the task (log message with stack trace below). The error rate is around 0.5%.

The code that hangs looks like this:

from asyncio import Queue, ensure_future, gather

from aiobotocore import get_session

session = get_session()
aio_lambda = session.create_client('lambda')
q = Queue()
producers = [ensure_future(produce(q, context)) for _ in range(num_producers)]
consumers = [ensure_future(consume(q, context)) for _ in range(num_consumers)]
await gather(*producers)
...

# this is called 1800 times in 3 minutes inside a consumer task; around 10 calls will hang
response = await aio_lambda.invoke(
    FunctionName=environ['WORKER_FUNCTION_ARN'],
    InvocationType='Event',
    Payload=payload,
)

Python version: 3.6. All dependencies came from pip:

  • aioboto3
  • aiobotocore 0.12.0
  • aiohttp 3.6.2

Stack trace of the error when cancelling the task: originally posted as an escaped JSON log blob; see the re-formatted version in the next comment.

marcioemiranda avatar Mar 30 '20 23:03 marcioemiranda

Re-formatted stack trace from @marcioemiranda

Traceback (most recent call last):
  File "/var/task/orchestrator/lambda_function.py", line 242, in invoke_worker
    Payload=payload
  File "/opt/python/aiobotocore/client.py", line 77, in _make_api_call
    operation_model, request_dict, request_context)
  File "/opt/python/aiobotocore/client.py", line 96, in _make_request
    return await self._endpoint.make_request(operation_model, request_dict)
  File "/opt/python/aiobotocore/endpoint.py", line 74, in _send_request
    exception):
  File "/opt/python/aiobotocore/endpoint.py", line 176, in _needs_retry
    caught_exception=caught_exception, request_dict=request_dict)
  File "/opt/python/botocore/hooks.py", line 356, in emit
    return self._emitter.emit(aliased_event_name, **kwargs)
  File "/opt/python/botocore/hooks.py", line 228, in emit
    return self._emit(event_name, kwargs)
  File "/opt/python/botocore/hooks.py", line 211, in _emit
    response = handler(**kwargs)
  File "/opt/python/botocore/retryhandler.py", line 183, in __call__
    if self._checker(attempts, response, caught_exception):
  File "/opt/python/botocore/retryhandler.py", line 251, in __call__
    caught_exception)
  File "/opt/python/botocore/retryhandler.py", line 269, in _should_retry
    return self._checker(attempt_number, response, caught_exception)
  File "/opt/python/botocore/retryhandler.py", line 317, in __call__
    caught_exception)
  File "/opt/python/botocore/retryhandler.py", line 223, in __call__
    attempt_number, caught_exception)
  File "/opt/python/botocore/retryhandler.py", line 359, in _check_caught_exception
    raise caught_exception
  File "/opt/python/aiobotocore/endpoint.py", line 137, in _do_get_response
    http_response = await self._send(request)
  File "/opt/python/aiobotocore/endpoint.py", line 218, in _send
    request.method, url=url, headers=headers_, data=data, proxy=proxy)
  File "/opt/python/aiohttp/client.py", line 483, in _request
    timeout=real_timeout
  File "/opt/python/aiohttp/connector.py", line 499, in connect
    raise e
  File "/opt/python/aiohttp/connector.py", line 490, in connect
    await fut
concurrent.futures._base.CancelledError

dimaqq avatar Mar 31 '20 01:03 dimaqq

In case it helps anyone, we ran into this issue as well recently and were able to work around it with an asyncio Semaphore to ensure we never attempt to make an aiohttp call when there is not an available connection pool slot.

Our specific scenario was:

Uploading thousands of files to S3 from a realtime messaging system. We set max_pool_connections in AioConfig to an N between 100 and 500, and would then launch a coroutine for every incoming message we received that would await s3_client.put_object(), using the max_pool_connections limit to ultimately put backpressure on the file-upload process and rate-limit everything back to the message-broker queue (RabbitMQ).

We found that after uploading ~10-30k files we would reliably find a small number that hung "forever" inside put_object(), and based on reading this issue and linked ones, likely ultimately inside aiohttp.

For reference, this was tested on:

Windows 10
Selector event loop
aiobotocore == 1.0.4
aiodns == 2.0.0
aiohttp == 3.6.2
aioitertools == 0.7.0

Note: We also tried downgrading to aiohttp == 3.3.2 and the problem remained unchanged.

While we haven't yet found the underlying issue, we were able to work around it reliably by using an external semaphore, with its limit set to the same max_pool_connections value we passed to aiobotocore, to block concurrent calls into put_object() when we know they would block on the connection pool.

For example:

max_connection_pool = 100

# BEFORE WORKAROUND - THIS WOULD HANG EVENTUALLY
# Note that this is simplified for clarity; we have a loop launching coroutines
# that each call s3_client.put_object() once.
await s3_client.put_object(Bucket=bucket, Key=path, Body=body_contents)


# WORKAROUND - THIS DOES NOT HANG
# shared state between all coroutines
upload_semaphore = asyncio.BoundedSemaphore(max_connection_pool)

# inside each coroutine
async with upload_semaphore:
    await s3_client.put_object(Bucket=bucket, Key=path, Body=body_contents)

This doesn't resolve the underlying problem, but in case someone else is urgently blocked on this issue and needs a robust workaround, we found the above to work very reliably. It hasn't been in production long, but we've gone from hitting this issue 100% of the time to 0% (so far), and it's an easy change.

Note that this also lets you put an overall timeout on s3_client.put_object() if needed as a last resort, since you know it will never block waiting for a connection-pool slot, only on actually connecting and sending data.
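That last-resort timeout could be sketched as follows (a hypothetical combination of the semaphore above with asyncio.wait_for; the helper name and the 30-second value are illustrative, not from the original workaround). Because the semaphore guarantees a free pool slot, the timeout only covers the actual network work:

```python
import asyncio

async def put_with_timeout(s3_client, semaphore, bucket, key, body, timeout=30.0):
    async with semaphore:
        # last-resort guard: cancel the call if it hangs inside aiohttp
        return await asyncio.wait_for(
            s3_client.put_object(Bucket=bucket, Key=key, Body=body),
            timeout=timeout,
        )
```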

timburke avatar Jun 04 '20 18:06 timburke

In case it helps anyone, we ran into this issue as well recently and were able to work around it with an asyncio Semaphore to ensure we never attempt to make an aiohttp call when there is not an available connection pool slot

I am trying that. It has only been running for a few hours, but the workaround seems to prevent tasks from getting stuck. I will share more after a few days.

AyWa avatar Jun 09 '20 10:06 AyWa

In case it helps anyone, we ran into this issue as well recently and were able to work around it with an asyncio Semaphore to ensure we never attempt to make an aiohttp call when there is not an available connection pool slot.

Yes, using a semaphore that way should fix the problem. When I was investigating the bug, this fix in the aiohttp code worked for me https://github.com/aio-libs/aiohttp/pull/4562/files#diff-7f25afde79f309e2f8722c26cf1f10ad.

Unfortunately, it seems that aiohttp is not actively maintained at the moment, so there is only a small chance that a new version will be released soon :slightly_frowning_face:

illia-v avatar Jun 09 '20 13:06 illia-v

~~I've been wondering about trying https://www.python-httpx.org/ as a backend~~ (ping me privately)

thehesiod avatar Jun 09 '20 16:06 thehesiod

Hi @thehesiod, it's sad to hear that aiohttp is not as maintained as it used to be, after all of the giant effort put in by the authors and the community.

The connector pool has received many changes fixing several issues. From what I can see, one of the last ones spotted was this one [1], but it seems it was fixed in version 3.6.0, so it should not be related to what you are presenting here.

It seems that @illia-v might have caught the bug, so it would be nice if someone from aiohttp could review the PR. If I have time, I will do it.

In any case, something that is not really clear to me from reading the code (correct me if I'm wrong): the connection is never released unless you use a streaming response; otherwise, the object returned to the user is a simple dictionary which does not provide access to the methods for closing or releasing the connection. Am I wrong?

[1] https://github.com/aio-libs/aiohttp/pull/3640

pfreixes avatar Jun 17 '20 21:06 pfreixes

@pfreixes aiohttp since its inception has sort of been counter-culture: it refused to behave like all the other HTTP libraries, and just getting it to support connect/read timeouts was a battle. There have been many questionable design decisions IMO; bugs like these should not be happening in a library at this level of maturity, and it speaks to some fundamental issues. In terms of your last paragraph, are you saying it's never releasing the connection on the aiobotocore side? aiobotocore has code that detects whether the API has a streaming response. If it doesn't have a streaming response, it reads the body completely before returning the response dict to the client. This follows the botocore behavior; see https://github.com/aio-libs/aiobotocore/blob/ec0b676eeaa76ce9e181f3327058b0019bf0073b/aiobotocore/endpoint.py#L57

thehesiod avatar Jun 17 '20 22:06 thehesiod