"ClientPayloadError: Response payload is not completed" with https://www.тв-програма.bg
import aiohttp
import pytest


async def fetch():
    async with aiohttp.ClientSession() as session:
        remote_resp = await session.request("GET", "https://www.тв-програма.bg")
        print(remote_resp)
        await remote_resp.text()


@pytest.mark.asyncio
async def test():
    await fetch()
Results in:
platform linux -- Python 3.7.3, pytest-5.0.1, py-1.8.0, pluggy-0.12.0 -- …/Vcs/aiohttp/.venv/bin/python
cachedir: .pytest_cache
rootdir: …/Vcs/aiohttp, inifile: pytest.ini
plugins: asyncio-0.10.0
collected 1 item
t-connection.py::test FAILED [100%]
=================================== FAILURES ===================================
_____________________________________ test _____________________________________
@pytest.mark.asyncio
async def test():
> await fetch()
t-connection.py:14:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
t-connection.py:9: in fetch
await remote_resp.text()
aiohttp/client_reqrep.py:952: in text
await self.read()
aiohttp/client_reqrep.py:916: in read
self._body = await self.content.read()
aiohttp/streams.py:347: in read
block = await self.readany()
aiohttp/streams.py:369: in readany
await self._wait('readany')
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
self = <StreamReader e=ClientPayloadError('Response payload is not completed')>
func_name = 'readany'
async def _wait(self, func_name: str) -> None:
# StreamReader uses a future to link the protocol feed_data() method
# to a read coroutine. Running two read coroutines at the same time
# would have an unexpected behaviour. It would not possible to know
# which coroutine would get the next data.
if self._waiter is not None:
raise RuntimeError('%s() called while another coroutine is '
'already waiting for incoming data' % func_name)
waiter = self._waiter = self._loop.create_future()
try:
if self._timer:
with self._timer:
> await waiter
E aiohttp.client_exceptions.ClientPayloadError: Response payload is not completed
aiohttp/streams.py:297: ClientPayloadError
----------------------------- Captured stdout call -----------------------------
<ClientResponse(https://www.xn----8sbafg9clhjcp.bg) [200 OK]>
<CIMultiDictProxy('Date': 'Sun, 14 Jul 2019 18:32:22 GMT', 'Server': 'Apache', 'Expires': 'Thu, 19 Nov 1981 08:52:00 GMT', 'Cache-Control': 'no-store, no-cache, must-revalidate', 'Pragma': 'no-cache', 'Set-Cookie': 'PHPSESSID=XXX; path=/; domain=.xn----8sbafg9clhjcp.bg', 'Upgrade': 'h2', 'Connection': 'Upgrade', 'Vary': 'Accept-Encoding', 'Content-Encoding': 'gzip', 'Transfer-Encoding': 'chunked', 'Content-Type': 'text/html; charset=utf-8')>
----------------------------- Captured stderr call -----------------------------
Exception ignored in: <function _SSLProtocolTransport.__del__ at 0x7fdc87bf32f0>
Traceback (most recent call last):
File "/usr/lib/python3.7/asyncio/sslproto.py", line 322, in __del__
source=self)
ResourceWarning: unclosed transport <asyncio.sslproto._SSLProtocolTransport object at 0x7fdc87053f98>
=========================== 1 failed in 5.39 seconds ===========================
URL: https://www.тв-програма.bg
It works fine using a browser or curl.
There are no response headers related to content length:
HTTP/2 200
date: Sun, 14 Jul 2019 18:33:24 GMT
server: Apache
expires: Thu, 19 Nov 1981 08:52:00 GMT
cache-control: no-store, no-cache, must-revalidate
pragma: no-cache
set-cookie: PHPSESSID=XXX; path=/; domain=.xn----8sbafg9clhjcp.bg
content-type: text/html; charset=utf-8
Using aiohttp master (95ead73).
aiohttp does not support HTTP/2.0
Oh, I see. Found https://github.com/aio-libs/aiohttp/issues/320 and https://github.com/aio-libs/aiohttp/issues/863 now.
Might be worth having a clearer error here - I assume it looks for "HTTP/1" or something similar, and could detect "HTTP/2" as an unsupported version.
This definitely could use some help with improving the UX.
I have the same error, and I don't use HTTP/2.
Python 3.7.3, aiohttp 3.5.4
Works OK (without pytest though)
$ openssl s_client -crlf -connect www.xn----8sbafg9clhjcp.bg:443
GET / HTTP/1.1
host: www.xn----8sbafg9clhjcp.bg
HTTP/1.1 200 OK
Date: Mon, 29 Jul 2019 11:36:32 GMT
Server: Apache
Expires: Thu, 19 Nov 1981 08:52:00 GMT
Cache-Control: no-store, no-cache, must-revalidate
Pragma: no-cache
Set-Cookie: PHPSESSID=8beb1b3201371704ba1fea323e2552fc; path=/; domain=.xn----8sbafg9clhjcp.bg
Upgrade: h2
Connection: Upgrade
Vary: Accept-Encoding
Transfer-Encoding: chunked
Content-Type: text/html; charset=utf-8
11ff8
.....
(chunks)
....
0
Looks like everything is OK at the HTTP level.
@socketpair
Upgrade: h2
Connection: Upgrade
But does it go ahead and upgrade to HTTP/2.0 as instructed?
@innocencex logs or it didn't happen
@webknjaz It should be 101, actually:
https://svn.tools.ietf.org/svn/wg/httpbis/specs/rfc7230.html#header.upgrade
https://en.wikipedia.org/wiki/HTTP/1.1_Upgrade_header
But we get 200 instead, with an Upgrade header. WHAT?
The server should not respond with an Upgrade header if the client did not ask for it.
Anyway, the server closes the connection after sending a full reply (which also breaks the rules, since HTTP/1.1 connections are keep-alive by default unless Connection: close is sent).
It seems we are facing a broken reverse proxy. Possibly the upstream is HTTP/2, and the reverse proxy passes these headers through to clients.
I do not remember what an RFC-compliant client should do in that case.
I'm facing the same issue in my company.
Our app downloads large image files from AWS S3 buckets (HTTP/1.1) with aiohttp. It seems that when I overload the app with Locust and then stop the Locust connections, the app raises this exception.
@bourdeau
It seems that when I overload the app with Locust, if I stop the Locust connections, the app raises this Exception.
Looks like a different issue / cause. Please create a new issue with more details (traceback etc).
I've also encountered this. httpstatusgoats.net is apparently PHP, but it looks like there's a common factor of Apache here...
% curl -I https://httpstatusgoats.net/img/200.jpg
HTTP/2 200
date: Wed, 08 Apr 2020 15:16:20 GMT
server: Apache/2
last-modified: Sun, 30 Oct 2016 16:12:55 GMT
etag: "117d2-540175d29cfc0"
accept-ranges: bytes
content-length: 71634
content-type: image/jpeg
% http HEAD https://httpstatusgoats.net/img/200.jpg
HTTP/1.1 200 OK
accept-ranges: bytes
connection: Upgrade
content-length: 71634
content-type: image/jpeg
date: Wed, 08 Apr 2020 15:17:31 GMT
etag: "117d2-540175d29cfc0"
last-modified: Sun, 30 Oct 2016 16:12:55 GMT
server: Apache/2
upgrade: h2,h2c
@auscompgeek I have the same bug with Apache & HTTP/2. I've handled it by sending a Connection: close header with the request.
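A minimal sketch of that workaround (the URL is just a placeholder for an affected server; aiohttp respects a Connection header passed per request):

import asyncio

import aiohttp


async def main():
    async with aiohttp.ClientSession() as session:
        # Ask the server to close the connection after responding, which
        # reportedly sidesteps the bogus Upgrade handshake on some Apache setups.
        async with session.get(
            'https://example.com',  # placeholder for the affected URL
            headers={'Connection': 'close'},
        ) as resp:
            print(resp.status, len(await resp.text()))


asyncio.run(main())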
Interestingly, I wasn't able to reproduce this issue locally. Eventually I discovered that I had aiohttp without the Cython extensions on my server (as I'm running Python 3.8, so pip grabbed the pure-Python wheel from PyPI), whereas on my laptop I have aiohttp installed from the Arch Linux repos (which does have the Cython extensions).
After reinstalling aiohttp with the Cython extensions, I'm no longer able to reproduce this issue with httpstatusgoats.net.
Oh, so the issue is only with the pure Python implementation. Can anybody confirm this with their cases?
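One quick way to check which flavor is installed (a heuristic sketch; aiohttp._http_parser is an internal Cython extension module, not a public API, so treat this as a debugging aid only):

# Detect whether aiohttp's C (Cython) extensions were built into this install.
try:
    import aiohttp._http_parser  # noqa: F401
except ImportError:
    print('pure-Python parser (no C extensions)')
else:
    print('Cython/C extensions available')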
This seems to go away when I downgrade from 3.6.2 to 3.6.1
Update: I had this problem again, googled it and found myself :)
I have now re-verified this with Python 3.8.0 and 3.8.3 on Mac. The problem is not present in aiohttp 3.6.1 but appears in 3.6.2.
It's not entirely clear; there might be some Python version component here too. This fails with aiohttp 3.6.1 on Python 3.8.0 on Linux, but works with aiohttp 3.6.1 on Python 3.8.5 on Linux.
I ran into this error on Python 3.8.2 in a Docker image using the python:3.8.2 image. When I downgraded it to python:3.7.7 the error went away.
The error is also absent when I use 3.7.3 natively on my Mac.
I originally encountered this with http://spacejam.com but I get the exact same results against https://www.тв-програма.bg with all versions.
I ran into this error on Python 3.8.2 in a Docker image using the python:3.8.2 image. When I downgraded it to python:3.7.7 the error went away. The error is also absent when I use 3.7.3 natively on my Mac.
Under the assumption that my hypothesis is correct, this is not surprising, as there are wheels available for CPython 3.7 for both Linux and macOS.
I originally encountered this with http://spacejam.com
This is useful. This server is providing a more specific Server header, so maybe someone else will be able to reproduce this with a local Apache.
% http -v HEAD https://spacejam.com
HEAD / HTTP/1.1
Accept: */*
Accept-Encoding: gzip, deflate
Connection: keep-alive
Host: spacejam.com
User-Agent: HTTPie/2.1.0
HTTP/1.1 200 OK
Accept-Ranges: bytes
Connection: Upgrade, Keep-Alive
Content-Encoding: gzip
Content-Length: 2136
Content-Type: text/html; charset=UTF-8
Date: Fri, 08 May 2020 12:13:27 GMT
ETag: "1f01-5a4034fb45953-gzip"
Keep-Alive: timeout=5, max=100
Last-Modified: Fri, 24 Apr 2020 06:16:52 GMT
Server: Apache/2.4.41 () OpenSSL/1.0.2k-fips
Strict-Transport-Security: max-age=15768000
Upgrade: h2,h2c
Vary: Accept-Encoding
Adding these request headers seems to solve the HTTP/2 issue:
Connection: Upgrade
Upgrade: http/1.1
But apparently there are other causes of the "Response payload is not completed" error, which happens occasionally and is hard to reproduce.
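A sketch of applying those headers session-wide rather than per request (the URL is a placeholder; session-level default headers are merged into every request made through the session):

import asyncio

import aiohttp


async def main():
    headers = {'Connection': 'Upgrade', 'Upgrade': 'http/1.1'}
    async with aiohttp.ClientSession(headers=headers) as session:
        async with session.get('https://example.com') as resp:  # placeholder URL
            print(resp.status, len(await resp.text()))


asyncio.run(main())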
I had the same issue with a server that responds with "Connection: Upgrade".
The Requests library could fetch the page without problems, and the main difference was that Requests sends a "Connection: keep-alive" header, which makes the server respond with "Connection: Upgrade, keep-alive".
So I used aiohttp.ClientSession(headers={'Connection': 'keep-alive'}) to simulate that, and aiohttp seems to be able to deal with it without issues so far.
I am using Python 3.7 on Google Cloud Functions and tried all the suggested options in this thread to get rid of the "aiohttp.client_exceptions.ClientPayloadError: Response payload is not completed" error:
- Downgrading the aiohttp version to 3.5.4
- Downgrading the aiohttp version to 3.6.1
- Using aiohttp.ClientSession(headers={'Connection': 'keep-alive'})
I think this issue is not yet resolved; can someone please explain the root cause and any possible workaround for this problem?
P.S. - this behaviour is intermittent. Sometimes it passes, sometimes it fails.
Update: below are the response headers.
accept-ranges → bytes
cache-control → no-store, no-cache
connection → Keep-Alive
content-encoding → gzip
content-security-policy → default-src 'none'
content-type → application/json
date → Thu, 19 Nov 2020 15:01:23 GMT
expires → Thu, 01 Jan 1970 00:00:00 GMT
keep-alive → timeout=5, max=100
pragma → No-cache
rocketpowered → 1
server → bean
transfer-encoding → chunked
vary → accept-encoding
x-clacks-overhead → GNU Terry Pratchett
x-request-id → RK2CSVfczkkd
x-varnish → 338093738
Note: the Response payload is not completed error can also happen if the connection is unexpectedly interrupted for some reason. That includes unstable connections, or server-side measures against (potentially unintended) DoS attacks or automation (e.g. bots) that might be in place.
If it only happens from time to time, it sounds more like one of those situations, rather than the specific issue with the unwanted protocol Upgrade that was discussed here.
You can try to inspect the headers attribute of your response (before awaiting the content) to see if the Upgrade: h2,h2c header is present. If it's not present, it's likely not the same issue as discussed here.
Edit: nvm. It's possible that the Upgrade header is also present in some other cases... but the Connection header might only contain Upgrade if the server wants to upgrade...
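A minimal sketch of that header check (the URL is just a placeholder for an affected server):

import asyncio

import aiohttp


async def main():
    async with aiohttp.ClientSession() as session:
        async with session.get('https://example.com') as resp:
            # Inspect the headers before awaiting the body, so the check
            # runs even if reading the payload fails afterwards.
            print('Upgrade:', resp.headers.get('Upgrade'))
            print('Connection:', resp.headers.get('Connection'))
            await resp.text()


asyncio.run(main())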
I'm running into ClientPayloadError: Response payload is not completed as well (aiohttp 3.7.3, Python 3.7). There's no Upgrade header present in my response. The error is non-deterministic, but manifests pretty reliably after a couple minutes of requests (in my case, to Azure blob storage). It seems like this is a fairly common issue for users of aiohttp, from a quick grep, at least: #4581 #2954 #4843 #4397
My attempted debugging of this issue is really similar to what @DusanMadar saw in https://github.com/aio-libs/aiohttp/issues/2954#issuecomment-460088764, except that the exception that causes the ClientPayloadError that's caught at https://github.com/aio-libs/aiohttp/blob/5d1a75e68d278c641c90021409f4eb5de1810e5e/aiohttp/client_proto.py#L83 is ContentLengthError for me, not TransferEncodingError.
Also cc @luvvien, you mention you found a solution in https://github.com/aio-libs/aiohttp/issues/2954#issuecomment-472645611, but that link is dead (and wayback machine hasn't scraped it). Any chance you remember? :-)
My issue was caused by a proxy between the client and the server that was dropping the connection when no packet was received for longer than 5 minutes.
curl was working fine because curl sends keep-alive TCP probes while waiting for data.
The aiohttp client doesn't do this, so it gets disconnected.
Here is what I did to fix it (Python 3.8.0, aiohttp==3.7.3):

import socket

from aiohttp import ClientSession
from aiohttp.client_reqrep import ClientRequest


class KeepAliveClientRequest(ClientRequest):
    async def send(self, conn: "Connection") -> "ClientResponse":
        # Enable TCP keep-alive on the connection's socket: start probing
        # after 60s of idle time, probe every 2s, give up after 5 probes.
        sock = conn.protocol.transport.get_extra_info("socket")
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 2)
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 5)
        return await super().send(conn)


async def main(url):
    async with ClientSession(request_class=KeepAliveClientRequest) as session:
        response = await session.post(
            url,
            headers={'Connection': 'keep-alive'},
        )
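One caveat with this snippet: socket.TCP_KEEPIDLE, TCP_KEEPINTVL, and TCP_KEEPCNT are not defined on every platform (macOS, for example, exposes the idle-time setting under a different constant), so as written it is effectively Linux-specific.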
@iameugenejo your solution works for me! Thanks
Hey everyone, I just released v3.9.4rc0. It'll hopefully help us get more complete tracebacks, and that, in turn, will let us know the underlying causes for this exception being raised; there may be several distinct behaviors underneath, and we need to discover what they are.
If you have the ability to do so, please try out this pre-release in environments where you can reproduce your problems and come back with the logs with complete tracebacks. Let's figure this out together!