aiohttp.client_exceptions.ClientOSError: [Errno None] Can not write request body
Describe the bug
File "/usr/local/lib/python3.7/site-packages/aiohttp/streams.py", line 604, in read
await self._waiter
aiohttp.client_exceptions.ClientOSError: [Errno None] Can not write request body for
To Reproduce
import logging

import aiohttp
from aiochclient import ChClient

from config import (
    CLICKHOUSE_USER,
    CLICKHOUSE_PORT,
    CLICKHOUSE_PASSWORD,
    CLICKHOUSE_HOST,
)

LOGGER = logging.getLogger(__name__)


class ClickhouseConnection:
    _session = None
    _client = None

    @classmethod
    async def create_connection(cls):
        connector = aiohttp.TCPConnector(limit=30)
        cls._session = aiohttp.ClientSession(connector=connector)
        cls._client = ChClient(
            session=cls._session,
            url=f"http://{CLICKHOUSE_HOST}:{CLICKHOUSE_PORT}",
            user=CLICKHOUSE_USER,
            password=CLICKHOUSE_PASSWORD,
            database="cliff",
        )

    @classmethod
    async def create_intermediate_roll_up_table(cls, table_name, dimensions, measures):
        create_table_query = f"MY QUERY"
        await cls._client.execute(create_table_query)

    @classmethod
    async def add_bulk_data_to_rollup_table(cls, columns, table_name, data_list):
        insert_statement = "MY STATEMENT"
        await cls._client.execute(insert_statement, *data_list)

    @classmethod
    async def execute_query(cls, query, execute_many=False, as_dict=False):
        if execute_many:
            return await cls._client.fetch(query=query, json=as_dict)
        return await cls._client.fetchrow(query=query)

    @classmethod
    async def optimize_clickhouse_table(cls, table_name: str):
        optimize_query = f"OPTIMIZE TABLE {table_name} FINAL DEDUPLICATE;"
        await cls._client.execute(optimize_query)

    @classmethod
    async def graceful_close_clickhouse_connection(cls):
        await cls._session.close()
        await cls._client.close()
        LOGGER.info("Closed all connections")
Expected behavior
I'm using these methods again and again inside a for loop.
These work most of the time but sometimes aiohttp throws an error.
Logs/tracebacks
File "/source/taa_utils/clickhouse_utils.py", line 100, in create_intermediate_roll_up_table
await cls._client.execute(create_table_query)
File "/usr/local/lib/python3.7/site-packages/aiochclient/client.py", line 230, in execute
query, *args, json=json, query_params=params, query_id=query_id
File "/usr/local/lib/python3.7/site-packages/aiochclient/client.py", line 189, in _execute
url=self.url, params=params, data=data
File "/usr/local/lib/python3.7/site-packages/aiochclient/http_clients/aiohttp.py", line 38, in post_no_return
async with self._session.post(url=url, params=params, data=data) as resp:
File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 1117, in __aenter__
self._resp = await self._coro
File "/usr/local/lib/python3.7/site-packages/aiohttp/client.py", line 544, in _request
await resp.start(conn)
File "/usr/local/lib/python3.7/site-packages/aiohttp/client_reqrep.py", line 890, in start
message, payload = await self._protocol.read() # type: ignore
File "/usr/local/lib/python3.7/site-packages/aiohttp/streams.py", line 604, in read
await self._waiter
aiohttp.client_exceptions.ClientOSError: [Errno None] Can not write request body for
Python Version
$ python --version
Python 3.7.0
aiohttp Version
$ python -m pip show aiohttp
Name: aiohttp
Version: 3.7.4.post0
multidict Version
$ python -m pip show multidict
Name: multidict
Version: 5.1.0
Summary: multidict implementation
yarl Version
$ python -m pip show yarl
Name: yarl
Version: 1.6.3
Summary: Yet another URL library
OS
Linux Debian
Related component
Client
Additional context
No response
Code of Conduct
- [X] I agree to follow the aio-libs Code of Conduct
I'm having the exact same issue with Python 3.7.4, aiohttp 3.5.4, multidict 4.5.2, yarl 1.3.0.
Is there any solution?
This only happens when we query the database inside a for loop...
Anyhow, I switched it back to the official clickhouse python driver. Which is synchronous in nature, but gets the job done.
This doesn't happen to me while using ClickHouse. I get the exact same error, also using ClientSession, but with regular HTTP requests (session.post). It also happens only to a portion of the requests.
aiohttp 3.7 is EOL and won't get any update. Is this happening under aiohttp 3.8?
Also, try asking that library you use (aiochclient). Maybe they pass invalid args to aiohttp.
File "/usr/local/lib/python3.7/site-packages/aiochclient/http_clients/aiohttp.py", line 38, in post_no_return
async with self._session.post(url=url, params=params, data=data) as resp:
There's not enough information provided to guess what's happening but w/o understanding what exactly is passed, it's a lost cause. We need an aiohttp-only reproducer demonstrating that this problem actually exists. Without that, we'll probably have to just close this as it does not demonstrate a bug the way it is reported.
Current judgment — this is likely a problem in that third-party library, maybe they misuse aiohttp.
I wasn't using aiochclient, but straightforward aiohttp. With it, I would send HTTP requests to an nginx that proxies me to different containers (FaaS).
I was able to solve the issue by looking at the nginx logs at the same time I received those exceptions in my app, where I saw these errors:

[alert] 7#7: 1024 worker_connections are not enough
[alert] 7#7: *55279 1024 worker_connections are not enough while connecting to upstream

To solve this, with a little help from Google, I added to my nginx.conf file:

events { worker_connections 10000; }
Thanks anyways!
I'm also getting this error, although only for a small portion of requests inside a for loop. I'm using aiohttp 3.8.1.
Hello, we are currently facing this issue: we have repeating jobs that run at intervals, and each job makes some requests (mostly POST requests).
This has been happening ever since we migrated to aiohttp. A workaround was to use aiohttp.TCPConnector(force_close=True) or HTTP/1.0 via aiohttp.ClientSession(version=http.HttpVersion10), but we would like to reuse connections without force-closing for every request.
From my investigation on the network side, the client fails to return an accompanying ACK packet after already exchanging FIN and FIN ACK packets with the server, which results in the server sending a RST packet as a way to gracefully close the connection.
version: aiohttp==3.8.1
Any help to resolving this would be appreciated.
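For reference, the force-close workaround mentioned above can be sketched like this (a minimal sketch; `make_session` is a hypothetical helper name, not part of any project code here, and it should be called from inside a running event loop):

```python
import aiohttp

def make_session() -> aiohttp.ClientSession:
    # force_close=True disables keep-alive reuse: every request gets a fresh
    # TCP connection, which avoids writing a request body into a connection
    # the server has already half-closed, at the cost of extra handshakes.
    connector = aiohttp.TCPConnector(force_close=True)
    return aiohttp.ClientSession(connector=connector)
```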
We are also facing this issue, it happens from time to time. We haven't investigated as far as @beesaferoot.
Version: aiohttp==3.8.1
Python: 3.10.4
Hello, we also have the problem in our application (~20 req/s), for roughly 1 in every 500 to 1000 requests. Setting the TCPConnector options and/or the HTTP version didn't solve the issue. The fix for us, for now, was to catch the exception and retry.
Python 3.9 and aiohttp 3.8.1
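The catch-and-retry approach described above can be sketched with a small stdlib-only helper (`post_with_retry` and its parameters are illustrative names, not aiohttp API; note that aiohttp.client_exceptions.ClientOSError subclasses OSError, so catching OSError covers it without importing aiohttp):

```python
import asyncio

async def post_with_retry(send, max_retries=3, delay=1.0):
    # `send` is any zero-argument coroutine function that performs the request.
    # On an OS-level connection error (ClientOSError is a subclass of OSError),
    # wait `delay` seconds and retry, up to `max_retries` extra attempts.
    for attempt in range(max_retries + 1):
        try:
            return await send()
        except OSError:
            if attempt == max_retries:
                raise
            await asyncio.sleep(delay)
```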
import asyncio
import io
import os

import aiohttp
from tqdm.asyncio import tqdm

URL = 'http://your-ip:3000/upload'

async def chunks(data, chunk_size):
    with tqdm.wrapattr(io.BytesIO(data), 'read', total=len(data)) as f:
        chunk = f.read(chunk_size)
        while chunk:
            yield chunk
            chunk = f.read(chunk_size)

async def download(session, chunk_size):
    data_to_send = os.urandom(30_000_000)
    data_generator = chunks(data_to_send, chunk_size)
    await session.post(URL, data=data_generator)

async def main():
    async with aiohttp.ClientSession() as session:
        tasks = []
        for _ in range(5):
            t = asyncio.create_task(download(session, 4096))
            tasks.append(t)
        await asyncio.gather(*tasks)

asyncio.run(main())
I am trying to make a CLI client for OpenSpeedTest-Server and I am getting the same error. To reproduce this, use our Docker image or Android app, then make a POST request to "http://your-ip:3000/upload". Issues: with the Docker image it will only send the first chunk; with the Android app it will throw an error like this.
Traceback (most recent call last):
File "r.py", line 35, in <module>
asyncio.run(main())
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/Applications/Xcode.app/Contents/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
return future.result()
File "r.py", line 32, in main
await asyncio.gather(*tasks)
File "r.py", line 23, in download
await session.post(URL, data=data_generator)
File "/Users/goredplanet/Library/Python/3.8/lib/python/site-packages/aiohttp/client.py", line 559, in _request
await resp.start(conn)
File "/Users/goredplanet/Library/Python/3.8/lib/python/site-packages/aiohttp/client_reqrep.py", line 898, in start
message, payload = await protocol.read() # type: ignore[union-attr]
File "/Users/goredplanet/Library/Python/3.8/lib/python/site-packages/aiohttp/streams.py", line 616, in read
await self._waiter
aiohttp.client_exceptions.ClientOSError: [Errno 32] Broken pipe
It works fine when using the Electron apps of OpenSpeedTest-Server (Windows, Mac, and Linux GUI server apps), which use an Express server.
The mobile apps use the iOnic web server; for Android it's a NanoHTTP server and for iOS it is GDC WebServer. For Docker we use the Nginx web server. Configuration posted on my profile.
Same.
python: 3.10
aiohttp: 3.8.3
aiochclient: 2.2.0
@asvetlov any news ?
It's been 3 years; can we get any update?
> Hello, we are currently facing this issue where we have repeating jobs that run at intervals; each job makes some request (mostly POST requests).
> This has been happening ever since we migrated to aiohttp, a fix was to use aiohttp.TCPConnector(force_close=True) or by using http1.0 aiohttp.ClientSession(version=http.HttpVersion10), but we would like to reuse connections without force closing for every request.
> From my investigation on the network side it shows that the client fails to return an accompanying ACK packet after already exchanging a FIN and FIN ACK packet with the server, which results in the server sending a RST packet as a way to gracefully close the connection.
> version: aiohttp==3.8.1
> Any help to resolving this would be appreciated.
@beesaferoot could you provide me with the reproduction code? I will try to make a PR fixing this if I can solve the issue, but for that I need code that reproduces it consistently.
There is no update. If someone can create a PR with a test that reproduces the error, then we can look into it, but we really don't have the time to try and figure anything out from the above comments.
https://github.com/aio-libs/aiohttp/issues/6138#issuecomment-1009164970 suggests that the receiving end ran out of connections and so the connection got rejected (if that's the case, I'm not really sure there's a bug here...).
While https://github.com/aio-libs/aiohttp/issues/6138#issuecomment-1171170516 suggests that there could be an issue with keep-alive connections (which makes it sound like a different issue to the previous comment...). If we can get a test that reproduces these steps, then maybe we can fix something..
So in my case this error was not from this library; it was Cloudflare, which enforces a maximum file size per upload request.
I think, for whoever is getting this error: if the website you are making the POST request to is behind Cloudflare, then its upload limit applies too.
I was getting this issue when repeating requests in a short period of time.
In my case, manually closing the session after every request helped.
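That session-per-request workaround can be sketched like so (a sketch; `post_once` is a hypothetical helper name, and opening a fresh session per request trades away connection reuse for robustness):

```python
import aiohttp

async def post_once(url, payload):
    # A fresh session per request means no stale keep-alive connection can be
    # reused after the server has silently closed its end; the session (and
    # its connector) are torn down as soon as the response is consumed.
    async with aiohttp.ClientSession() as session:
        async with session.post(url, json=payload) as resp:
            resp.raise_for_status()
            return await resp.text()
```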
I worked around this bug by wrapping the call in a try/except block inside a while loop, with sleep and retry:
# init
conn = aiohttp.TCPConnector(limit_per_host=30)
self.__session = aiohttp.ClientSession(
    self.__url,
    # timeout=self.__timeout,
    raise_for_status=True,
    connector=conn,
)

# method
ids_info = None
retries = 0
while not ids_info:
    try:
        async with (
            self.__session.get(
                self.__path, json={"ids": ids}
            ) as response
        ):
            if response.status == 200:
                data = await response.json(content_type="text/plain")
                ids_info = data["info"]
                if not ids_info:
                    return dict()
                else:
                    return ids_info
            # if not 200
            else:
                return dict()
    except ClientOSError as e:
        logger.exception(f"retry number={retries} with error: {e}")
        retries += 1
        if retries >= self.__max_retries:
            return dict()
        await asyncio.sleep(1)
but I do not think it is the proper way. The main thing I have noticed is that this error occurs at random times, so I cannot reproduce it.
I faced this issue while trying to proxy my requests to a server, and I found that the proxy server wasn't able to handle that volume of requests. Others could be facing the same kind of issue. Maybe try rate limiting your requests.