httpx
Improve async performance.
There seem to be some performance issues in httpx (0.27.0), as it has much worse performance than aiohttp (3.9.4) with concurrently running requests (on Python 3.12). The following benchmark shows that running 20 requests concurrently is over 10x slower with httpx compared to aiohttp. The benchmark uses very basic httpx usage for doing multiple GET requests with limited concurrency. The script outputs a figure showing that the duration of each GET request has a huge variance with httpx.
# requirements.txt:
# httpx == 0.27.0
# aiohttp == 3.9.4
# matplotlib == 3.9.0
#
# 1. start server: python bench.py server
# 2. run client test: python bench.py client
import asyncio
import sys
from typing import Any, Coroutine, Iterator

import aiohttp
import time
import httpx
from aiohttp import web
import matplotlib.pyplot as plt

PORT = 1234
URL = f"http://localhost:{PORT}/req"
RESP = "a" * 2000
REQUESTS = 100
CONCURRENCY = 20


def run_web_server():
    async def handle(_request):
        return web.Response(text=RESP)

    app = web.Application()
    app.add_routes([web.get('/req', handle)])
    web.run_app(app, host="localhost", port=PORT)


def duration(start: float) -> int:
    return int((time.monotonic() - start) * 1000)


async def run_requests(axis: plt.Axes):
    async def gather_limited_concurrency(coros: Iterator[Coroutine[Any, Any, Any]]):
        sem = asyncio.Semaphore(CONCURRENCY)

        async def coro_with_sem(coro):
            async with sem:
                return await coro

        return await asyncio.gather(*(coro_with_sem(c) for c in coros))

    async def httpx_get(session: httpx.AsyncClient, timings: list[int]):
        start = time.monotonic()
        res = await session.request("GET", URL)
        assert len(await res.aread()) == len(RESP)
        assert res.status_code == 200, f"status_code={res.status_code}"
        timings.append(duration(start))

    async def aiohttp_get(session: aiohttp.ClientSession, timings: list[int]):
        start = time.monotonic()
        async with session.request("GET", URL) as res:
            assert len(await res.read()) == len(RESP)
            assert res.status == 200, f"status={res.status}"
            timings.append(duration(start))

    async with httpx.AsyncClient() as session:
        # warmup
        await asyncio.gather(*(httpx_get(session, []) for _ in range(REQUESTS)))

        timings = []
        start = time.monotonic()
        await gather_limited_concurrency((httpx_get(session, timings) for _ in range(REQUESTS)))
        axis.plot([*range(REQUESTS)], timings, label=f"httpx (tot={duration(start)}ms)")

    async with aiohttp.ClientSession() as session:
        # warmup
        await asyncio.gather(*(aiohttp_get(session, []) for _ in range(REQUESTS)))

        timings = []
        start = time.monotonic()
        await gather_limited_concurrency((aiohttp_get(session, timings) for _ in range(REQUESTS)))
        axis.plot([*range(REQUESTS)], timings, label=f"aiohttp (tot={duration(start)}ms)")


def main(mode: str):
    assert mode in {"server", "client"}, f"invalid mode: {mode}"
    if mode == "server":
        run_web_server()
    else:
        fig, ax = plt.subplots()
        asyncio.run(run_requests(ax))
        plt.legend(loc="upper left")
        ax.set_xlabel("# request")
        ax.set_ylabel("[ms]")
        plt.show()
        print("DONE", flush=True)


if __name__ == "__main__":
    assert len(sys.argv) == 2, f"Usage: {sys.argv[0]} server|client"
    main(sys.argv[1])
I found the following issue, but it seems it's not related, as the workaround doesn't make a difference here: https://github.com/encode/httpx/issues/838#issuecomment-1291224189
Found some related discussions:
- https://github.com/encode/httpx/discussions/3100
- https://github.com/encode/httpx/discussions/3206
Opening a proper issue is warranted to get better visibility for this, so the issue is easier for others to find. In its current state httpx is not a good option for highly concurrent applications. Hopefully the issue gets fixed, as otherwise the library is great, so thanks for it!
Oh, interesting. There's some places I can think of where we might want to be digging into here...
- A comparison of threaded performance would also be worthwhile: requests compared against httpx, with multithreaded requests.
- A comparison of performance against a remote server would be more representative than performance against localhost.

Possibly points of interest here...

- Do we have the same socket options as aiohttp? Are we sending simple GET requests across more than one TCP packet unnecessarily, either due to socket options or due to our flow in writing the request to the stream, or both? Eg. see https://brooker.co.za/blog/2024/05/09/nagle.html
- We're currently using h11 for our HTTP construction and parsing. This is the best Python option for careful spec correctness, though it has more CPU overhead than eg. httptools.
- We're currently using anyio for our async support. We did previously have a native asyncio backend; there might be some CPU overhead to be saved here, which in this localhost case might be outweighing network overheads.
- Also worth noting here that aiohttp currently supports DNS caching where httpx does not, although that's not relevant in this particular case.
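For the socket-options point above, the kind of thing to verify can be sketched directly with the socket module. This is illustrative only: whether httpx sets this option in a given version is exactly what would need checking.

```python
import socket

# Nagle's algorithm delays and coalesces small writes; disabling it via
# TCP_NODELAY sends small GET requests immediately in a single packet.
# (aiohttp sets this on its connections.)
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
print(sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY) != 0)  # True
sock.close()
```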
Also, the tracing support in both aiohttp and in httpx are likely to be extremely valuable to us here.
Thank you for the good points!
A comparison of performance against a remote server would be more representative than performance against localhost.
My original benchmark hit AWS S3. There I got very similar results, where httpx had a huge variance in request timings with concurrent requests. This investigation was due to us observing some strange request durations when servers were under heavy load in production. For now we have switched to aiohttp and it seems to have fixed the issue.
My original benchmark hit AWS S3. There I got very similar results [...]
Okay, thanks. Was that also testing small GET requests / similar approach to above?
Okay, thanks. Was that also testing small GET requests / similar approach to above?
Yes, pretty much: GET of a file with a size of a couple KB. In the real system the sizes of course vary a lot.
We're currently using anyio for our async support. We did previously have a native asyncio backend, there might be some CPU overhead to be saved here, which in this localhost case might be outweighing network overheads.
@tomchristie you were right, this is the issue ^!
When I just do a simple patch in httpcore to replace anyio.Lock with asyncio.Lock, the performance improves greatly. Why does httpcore use AnyIO there instead of asyncio? It seems AnyIO may have some performance issues.
With asyncio: [benchmark figure]
With anyio: [benchmark figure]
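For reference, the patch described above amounts to something like the following sketch. This is not httpcore's actual code; the real lock lives in httpcore's _synchronization.py and its interface may differ between versions. The point is just to provide the async context-manager shape the connection pool expects, backed by a plain asyncio.Lock instead of anyio.Lock.

```python
import asyncio

class AsyncioLock:
    # Minimal async context-manager lock, as a drop-in for the anyio-based one.
    def __init__(self) -> None:
        self._lock = asyncio.Lock()

    async def __aenter__(self) -> "AsyncioLock":
        await self._lock.acquire()
        return self

    async def __aexit__(self, exc_type=None, exc_value=None, traceback=None) -> None:
        self._lock.release()

async def demo() -> int:
    lock = AsyncioLock()
    counter = 0

    async def work():
        nonlocal counter
        async with lock:
            current = counter
            await asyncio.sleep(0)  # yield while holding the lock
            counter = current + 1

    # Without mutual exclusion the read/yield/write pattern would race.
    await asyncio.gather(*(work() for _ in range(100)))
    return counter

print(asyncio.run(demo()))  # 100
```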
There is another hot spot in AsyncHTTP11Connection.has_expired, which is called heavily, e.g. from AsyncConnectionPool. It checks the connection status via the is_readable logic, which seems to be a particularly heavy check.
The logic in the connection pool is quite heavy, as it rechecks all of the connections every time requests are assigned to connections. It might be possible to skip the is_readable checks on the pool side if we just take a connection from the pool and take another if the picked one turns out not to be healthy, instead of checking them all every time. What do you think?
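The take-one-and-retry idea could look roughly like this toy sketch. Names like `acquire` and the `Connection` class are illustrative, not httpcore's actual pool API; the point is that only the candidate connection gets health-checked, not the whole pool on every request.

```python
import asyncio
from typing import Awaitable, Callable

class Connection:
    """Toy stand-in for a pooled HTTP connection."""
    def __init__(self, healthy: bool = True) -> None:
        self.healthy = healthy

    def is_readable(self) -> bool:
        # In httpcore this is a socket poll; here just a flag. An *idle*
        # connection becoming readable means the peer closed it (stale).
        return not self.healthy

async def acquire(pool: list[Connection],
                  create: Callable[[], Awaitable[Connection]]) -> Connection:
    # Pop candidates one at a time and only health-check the candidate,
    # instead of polling every connection in the pool on each request.
    while pool:
        conn = pool.pop()
        if not conn.is_readable():  # still healthy: reuse it
            return conn
        # stale: drop it and try the next one
    return await create()
```

The per-request cost drops from O(pool size) socket polls to (usually) one.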
It would probably be a good idea to add some performance tests to the httpx/httpcore CI.
I can probably help with a PR if you give me pointers about how to proceed :)
I could, e.g., replace the synchronization primitives with native asyncio ones.
Why does httpcore use AnyIO there instead of asyncio?
See https://github.com/encode/httpcore/issues/344, https://github.com/encode/httpx/discussions/1511, and https://github.com/encode/httpcore/pull/345 for where/why we switched over to anyio.
I can probably help with a PR if you give me pointers about how to proceed
A good first pass at this would be to add an asyncio.py backend, without switching the default over.
You might want to work from the last version that had an asyncio native backend, although I think the backend API has probably changed slightly.
Docs... https://www.encode.io/httpcore/network-backends/
Other context...
- https://github.com/encode/httpcore/pull/169 for where/why we added anyio.
- https://github.com/encode/httpcore/pull/420 for when we dropped the asyncio-native backend completely.
Thanks @tomchristie
What about the case I pointed out:
When I just do a simple patch into httpcore to replace anyio.Lock with asyncio.Lock the performance improves greatly
There, switching the network backend won't help, as the lock is not defined by the network implementation; the lock implementation is a global one. Should we just change the synchronization to use asyncio?
I'm able to push the performance of httpcore to be exactly on par with aiohttp:
Previously (in httpcore master) the performance is not great and the latency behaves very randomly:
You can see the benchmark here.
Here are the changes. There are 3 things required to improve the performance to get it as fast as aiohttp (in separate commits):
- Commit 1. Change the synchronization primitives (in _synchronization.py) to use asyncio and not anyio.
- Commit 2. Bring back the asyncio-based backend which was removed in the past (AsyncIOStream).
- Commit 3. Optimize AsyncConnectionPool to avoid calling the socket poll every time the pool is used. Also fix idle connection checking to have lower time complexity.
I'm happy to open a PR from these. What do you think @tomchristie?
@MarkusSintonen - Nice one. Let's work through those as individual PRs.
Is it worth submitting a PR where we add a scripts/benchmark?
Is it worth submitting a PR where we add a scripts/benchmark?
I think it would be beneficial to have benchmarks run in CI so we would see the difference. I have previously contributed to Pydantic, and they use codspeed. It outputs benchmark diffs to the PR when the benchmarked behaviour changes. It should be free for open-source projects.
That's an interesting idea. I'd clearly be in agreement with adding a scripts/benchmark. I'm uncertain on if we'd want the extra CI runs every time or not. Suggest proceeding with the uncontroversial progression to start with, and then afterwards figuring out if/how to tie it into CI. (Reasonable?)
@tomchristie I have now opened the 2 fix PRs:
- https://github.com/encode/httpcore/pull/922
- https://github.com/encode/httpcore/pull/924
Maybe I'll open the network backend addition after these, as it's the most complex one.
Maybe you can refer to the implementation of aiohttp:
- https://docs.aiohttp.org/en/stable/http_request_lifecycle.html#why-is-aiohttp-client-api-that-way
- https://stackoverflow.com/questions/78516655/httpx-vs-requests-vs-aiohttp
Isn't usage of http.CookieJar a part of the problem?
https://github.com/encode/httpx/blob/db9072f998b53ff66d50778bf5edee8e2cc8ede1/httpx/_models.py#L1020
https://github.com/python/cpython/blob/68e279b37aae3019979a05ca55f462b11aac14be/Lib/http/cookiejar.py#L1266
Isn't usage of http.CookieJar a part of the problem?
@rafalkrupinski I haven't run benchmarks where requests/responses use cookies, but at least it doesn't cause performance issues in general. I ran similar benchmarks from the httpcore side with httpx. Performance is at similar levels as with aiohttp and urllib3 when using the performance fixes from the PRs:
- https://github.com/encode/httpcore/pull/922
- https://github.com/encode/httpcore/pull/928
- https://github.com/encode/httpcore/pull/929
- https://github.com/encode/httpcore/pull/930
(Waiting for review from @tomchristie)
Async (httpx vs aiohttp): [benchmark figure]
Sync (httpx vs urllib3): [benchmark figure]
TBH I'm surprised by httpx ditching anyio. Sure anyio comes with performance overhead, but this is breaking compatibility with Trio.
TBH I'm surprised by httpx ditching anyio. Sure anyio comes with performance overhead, but this is breaking compatibility with Trio.
I'm not aware of it ditching it completely. It will still be supported, it's just optional. Trio will also be supported by httpcore.
@rafalkrupinski I haven't run benchmarks where requests/responses use cookies, but at least it doesn't cause performance issues in general
These are really cool speed-ups. Can't wait for httpx to overtake aiohttp ;)
Since the benchmark seems to be using HTTP, I think the issue below is also related, where creation of the SSL context in httpx had some overhead compared to aiohttp.
Ref : https://github.com/encode/httpx/issues/838
Hi, any movement on the PRs? We're having to use both aiohttp and httpx in our project for this reason, whereas we'd like to have only one set of APIs.
Hi, any movement on the PRs? We're having to use both aiohttp and httpx in our project for this reason, whereas we'd like to have only one set of APIs.
I use aiohttp to encapsulate a chain-call style client, which I personally feel is pretty good.
url = "https://juejin.cn/"
resp = await AsyncHttpClient().get(url).execute()
# json_data = await AsyncHttpClient().get(url).json()
text_data = await AsyncHttpClient(new_session=True).get(url).text()
byte_data = await AsyncHttpClient().get(url).bytes()
Example: https://github.com/HuiDBK/py-tools/blob/master/demo/connections/http_client_demo.py
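A minimal sketch of such a chain-call wrapper follows. This is hypothetical; the real AsyncHttpClient linked above is more featureful. The idea is that `.get(url)` records the pending request and returns `self`, so the terminal `.text()` / `.bytes()` call executes it.

```python
class AsyncHttpClient:
    """Toy chain-call wrapper in the spirit of the example above."""
    def __init__(self) -> None:
        self._method = "GET"
        self._url = ""

    def get(self, url: str) -> "AsyncHttpClient":
        # Record the pending request and return self so calls can chain.
        self._method, self._url = "GET", url
        return self

    async def text(self) -> str:
        # Execute the recorded request and return the decoded body.
        import aiohttp  # deferred so building the chain needs no session

        async with aiohttp.ClientSession() as session:
            async with session.request(self._method, self._url) as rsp:
                return await rsp.text()
```

Usage would then be `await AsyncHttpClient().get(url).text()`, mirroring the example.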
Is there any progress on this issue?
In what scenarios might httpx encounter performance bottlenecks? Is there a more general explanation?
Hello guys, I think I've encountered the same issue. However, our production code heavily relies on httpx, and our tests depend on respx, making it difficult to migrate to aiohttp. For anyone facing similar challenges, I think there's a workaround: take advantage of httpx's custom transport capability and use aiohttp for the actual requests:
import asyncio
import statistics
import time
import typing

import aiohttp
import httpx
from aiohttp import ClientSession

ADDRESS = "https://www.baidu.com"


async def request_with_aiohttp(session):
    async with session.get(ADDRESS) as rsp:
        return await rsp.text()


async def request_with_httpx(client):
    rsp = await client.get(ADDRESS)
    return rsp.text


# Benchmark functions
async def benchmark_aiohttp(n):
    async with ClientSession() as session:
        # make sure code is right
        print(await request_with_aiohttp(session))
        start = time.time()
        tasks = []
        for i in range(n):
            tasks.append(request_with_aiohttp(session))
        await asyncio.gather(*tasks)
        return time.time() - start


async def benchmark_httpx(n):
    async with httpx.AsyncClient(
        timeout=httpx.Timeout(
            timeout=10,
        ),
    ) as client:
        # make sure code is right
        print(await request_with_httpx(client))
        start = time.time()
        tasks = []
        for i in range(n):
            tasks.append(request_with_httpx(client))
        await asyncio.gather(*tasks)
        return time.time() - start


class AiohttpTransport(httpx.AsyncBaseTransport):
    def __init__(self, session: typing.Optional[aiohttp.ClientSession] = None):
        self._session = session or aiohttp.ClientSession()
        self._closed = False

    async def handle_async_request(self, request: httpx.Request) -> httpx.Response:
        if self._closed:
            raise RuntimeError("Transport is closed")

        # Convert headers
        headers = dict(request.headers)

        # Prepare request parameters
        method = request.method
        url = str(request.url)
        content = request.content

        async with self._session.request(
            method=method,
            url=url,
            headers=headers,
            data=content,
            allow_redirects=False,
        ) as aiohttp_response:
            # Read the response body
            content = await aiohttp_response.read()

            # Convert headers
            headers = [(k.lower(), v) for k, v in aiohttp_response.headers.items()]

            # Build the httpx.Response
            return httpx.Response(
                status_code=aiohttp_response.status,
                headers=headers,
                content=content,
                request=request
            )

    async def aclose(self):
        if not self._closed:
            self._closed = True
            await self._session.close()


async def benchmark_httpx_with_aiohttp_transport(n):
    async with httpx.AsyncClient(
        timeout=httpx.Timeout(
            timeout=10,
        ),
        transport=AiohttpTransport(),
    ) as client:
        start = time.time()
        tasks = []
        for i in range(n):
            tasks.append(request_with_httpx(client))
        await asyncio.gather(*tasks)
        return time.time() - start


async def run_benchmark(requests=1000, rounds=3):
    aiohttp_times = []
    httpx_times = []
    httpx_aio_times = []
    print(f"Starting test with {requests} concurrent requests...")
    for i in range(rounds):
        print(f"\nRound {i + 1}:")

        # aiohttp test
        aiohttp_time = await benchmark_aiohttp(requests)
        aiohttp_times.append(aiohttp_time)
        print(f"aiohttp took: {aiohttp_time:.2f} s")

        # brief pause to let the system cool down
        await asyncio.sleep(1)

        # httpx test
        httpx_time = await benchmark_httpx(requests)
        httpx_times.append(httpx_time)
        print(f"httpx took: {httpx_time:.2f} s")

        # brief pause to let the system cool down
        await asyncio.sleep(1)

        # httpx (aiohttp transport) test
        httpx_time = await benchmark_httpx_with_aiohttp_transport(requests)
        httpx_aio_times.append(httpx_time)
        print(f"httpx (aiohttp transport) took: {httpx_time:.2f} s")

    print("\nSummary of results:")
    print(f"aiohttp average: {statistics.mean(aiohttp_times):.2f} s")
    print(f"httpx average: {statistics.mean(httpx_times):.2f} s")
    print(f"httpx (aiohttp transport) average: {statistics.mean(httpx_aio_times):.2f} s")


if __name__ == '__main__':
    # Run the benchmark
    asyncio.run(run_benchmark(512))
Summary of results:
aiohttp average: 0.49 s
httpx average: 1.55 s
httpx (aiohttp transport) average: 0.51 s
In what scenarios might httpx encounter performance bottlenecks? Is there a more general explanation?
We encountered an issue with httpx sending requests to our self-hosted embedding API, where it sometimes returned a status code 500 without any clear reason. We've switched to aiohttp to see if the issue persists.
In what scenarios might httpx encounter performance bottlenecks? Is there a more general explanation?
We encountered an issue with httpx sending requests to our self-hosted embedding API, where it sometimes returned a status code 500 without any clear reason. We've switched to aiohttp to see if the issue persists.
Although 500 is usually related to a server-side error, it seems that httpx still has shortcomings in handling various corner cases compared to mature libraries like requests, and we've also encountered similar strange issues: https://github.com/encode/httpx/discussions/3269
In what scenarios might httpx encounter performance bottlenecks? Is there a more general explanation?
We encountered an issue with httpx sending requests to our self-hosted embedding API, where it sometimes returned a status code 500 without any clear reason. We've switched to aiohttp to see if the issue persists.

Although 500 is usually related to a server-side error, it seems that httpx still has shortcomings in handling various corner cases compared to mature libraries like requests, and we've also encountered similar strange issues. #3269
That's what we thought at first, but it's just a simple encode-and-return endpoint. Nothing there should be able to go wrong.