
Only one active stream even when using multiple streams | Head-of-Line Blocking

Open qyynuaa opened this issue 3 years ago • 11 comments

I'm trying to download multiple resources over multiple streams using http3_client.py and http3_server.py. The purpose is to understand the benefit of avoiding Head-of-Line blocking with QUIC.

I run the HTTP3 server:

python3 examples/http3_server.py --certificate tests/ssl_cert.pem --private-key tests/ssl_key.pem

I run the example client to perform four HTTP/3 requests:

python examples/http3_client.py --ca-certs tests/pycacert.pem --quic-log http3 https://localhost:4433/100000 https://localhost:4433/100000 https://localhost:4433/100000 https://localhost:4433/100000 

The result shows that the HTTP requests from the client are sent to the server in parallel, but the server will respond with only one active stream at any time.

[screenshot: qlog/qvis view showing only one active stream at any time]

I think that, based on the QUIC protocol, we can have multiple active streams at the same time, right? I also found a public example at https://qvis.edm.uhasselt.be/#/files --> Option 4 "Load a massive demo file".

[screenshot: qvis multiplexing view from the demo file, showing interleaved streams]

I double-checked the source code and found that the current implementation writes the entire response body directly into a byte array; it seems to keep sending content from this bytearray until it is empty, and only then switches to sending another stream.
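The drain-until-empty behaviour described above can be sketched roughly like this (a simplified illustration with invented names such as `fill_packet_sequential`; this is not aioquic's actual code):

```python
from collections import deque

def fill_packet_sequential(streams: deque, budget: int) -> list:
    """Sketch of a sender that keeps draining the first stream's buffer
    and only moves on to the next stream once that buffer is empty."""
    frames = []
    while streams and budget > 0:
        stream = streams[0]                    # always the same stream first
        chunk = stream["buffer"][:budget]
        stream["buffer"] = stream["buffer"][len(chunk):]
        frames.append((stream["id"], chunk))
        budget -= len(chunk)
        if not stream["buffer"]:               # next stream only gets a turn now
            streams.popleft()
    return frames

streams = deque([
    {"id": 0, "buffer": b"a" * 3000},
    {"id": 4, "buffer": b"b" * 3000},
])
# A 1200-byte packet budget is consumed entirely by stream 0, so each
# packet on the wire carries data for a single stream.
frames = fill_packet_sequential(streams, 1200)
```

Until stream 0's buffer is empty, stream 4 is never scheduled, which matches the one-active-stream pattern visible in the qvis plot.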

Did I miss something or are there some configuration parameters I should look at?

Thanks!

qyynuaa avatar Jul 20 '20 22:07 qyynuaa

You are right, the current send loop will drain a stream as much as possible before moving on to the next one.

I had once written a naive patch which shuffled the streams here:

https://github.com/aiortc/aioquic/blob/d381ef9825b1d36ae1884a0b43452f89bbb67ad7/src/aioquic/quic/connection.py#L2514

.. however performance took a significant hit. You are welcome to suggest an alternative implementation!
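For anyone curious what an alternative scheduler might look like, here is a hypothetical round-robin sketch (invented names; this is not the unpublished patch referred to above):

```python
from collections import deque

def fill_packet_round_robin(streams: deque, budget: int,
                            chunk_size: int = 1200) -> list:
    """Hypothetical scheduler: take at most one chunk per stream, then
    rotate the stream to the back of the queue so streams interleave."""
    frames = []
    while streams and budget > 0:
        stream = streams.popleft()
        take = min(chunk_size, budget, len(stream["buffer"]))
        frames.append((stream["id"], stream["buffer"][:take]))
        stream["buffer"] = stream["buffer"][take:]
        budget -= take
        if stream["buffer"]:
            streams.append(stream)             # back of the queue
    return frames

streams = deque([
    {"id": 0, "buffer": b"a" * 3000},
    {"id": 4, "buffer": b"b" * 3000},
])
# A 2400-byte budget now yields one chunk from each stream.
frames = fill_packet_round_robin(streams, 2400)
```

Rotating after every chunk interleaves streams per packet, but the extra bookkeeping and the smaller, more numerous STREAM frames are plausible reasons such a change can cost throughput.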

jlaine avatar Oct 15 '20 06:10 jlaine

Please don't pollute this issue with unrelated matters, open a new issue. I'll let you copy over your text then I'll delete this comment.

jlaine avatar Mar 09 '21 16:03 jlaine

Hello,

I got the same results as qyynuaa. Any update on multiplexing? The strange thing is that Fig. 1 of the paper "Resource Multiplexing and Prioritization in HTTP/2 over TCP versus HTTP/3 over QUIC" by Robin Marx et al. suggests that aioquic has some sort of multiplexing, but qyynuaa's results and mine do not show that.

NooshinEghbal avatar Aug 28 '21 11:08 NooshinEghbal

My comment https://github.com/aiortc/aioquic/issues/125#issuecomment-708924832 still stands, you are welcome to put together a PR for a performant implementation.

jlaine avatar Sep 02 '21 06:09 jlaine

I've been looking at this for a while now, and I'm thoroughly confused...

I don't have an easy setup to test this at the moment, but using the H3 test from interop.seemann.io, I get this: https://qvis.quictools.info/#/multiplexing?file=https%3A%2F%2Finterop.seemann.io%2Flogs%2F2021-09-28T08%3A05%2Faioquic_quic-go%2Fhttp3%2Fclient%2Fqlog%2F43efa12aa2437e6fe377c4.qlog

[screenshot: qvis multiplexing visualization of the interop.seemann.io qlog]

That does seem to show more or less sequential sending, with probably an empty send buffer for the yellow stream somewhere along the middle, and some flushing at the start for the H3 headers, maybe?

However, in my original tests, dating from 13 January 2020, I did get -very- clear per-packet multiplexing results for 10 x 1MB files (example file: run1parallel_10files_1MB_0ms_aioquic.zip).

[screenshot: qvis multiplexing visualization from the January 2020 tests, showing per-packet interleaving]

I've been looking at the code and can't immediately find major differences in the send-loop logic between then and now that would explain this... maybe (probably?) the send buffer was filled differently back then? There's also a difference in how pacing/cwnd adherence is handled, but I don't think that should matter here...

Maybe @jlaine has an idea of what might have caused my earlier experiments to produce clear multiplexing? I'm personally guessing it has to do with how he put data in the stream send buffers at that time.


Regardless of all this, I strongly feel that the current behaviour (mostly sequential sending of streams) is the best default, especially when considering Web page loading performance. In that typical use case, it's unlikely that multiplexing and the slightly better potential for removing HOL blocking would help overall performance much. See also my work on this at https://h3.edm.uhasselt.be/.

rmarx avatar Sep 30 '21 09:09 rmarx

@rmarx What is the canonical procedure for producing an indefinite/infinite datagram or stream from server to client, for example a live jam session or a web radio station? So far I have neither successfully achieved that use case myself nor found a solution in the wild.

guest271314 avatar Nov 28 '21 18:11 guest271314

Hey @guest271314,

I am far from an expert in how to program with aioquic... I've mainly used the more top-level APIs to run tests. You'd be better off asking @jlaine as the main implementer here.

Additionally, I don't think your question really has anything to do with the main subject of this issue (as it pertains to how to multiplex different streams, not having a single indefinite stream), so I would recommend you open a new issue for that and provide some more context on what exactly you're trying to do ("an indefinite/infinite datagram or stream" is a very vague description to me).

With best regards, Robin

rmarx avatar Nov 29 '21 09:11 rmarx

@rmarx I searched for issues as close to my requirements as possible. An indefinite/infinite stream means a web radio station that streams perpetually; a live musical jam session that could run 2 hours or 2 days; capturing audio and video of a live event without a predetermined end; etc. I am able to achieve that using Native Messaging with a Python script and, separately, fetch() with PHP passthru(). I have asked the author of this repository previously; I ask again in https://github.com/aiortc/aioquic/issues/242.

guest271314 avatar Nov 29 '21 13:11 guest271314

@rmarx FWIW I read the linked tests you performed re QUIC and assumed you had performed such a live-streaming test. So far I have not located any tests in the wild which address live-streaming using QUIC, specifically with WebTransport. From my perspective that is a use case which needs testing. I am not an expert in Python either; all the Python I use I learned while testing WebTransport, hence the question posed to individuals who at least have more experience with QUIC and Python than me. I don't care whether one or multiple streams are used; what matters is that the stream is contiguous at the client. Thanks for your reply.

guest271314 avatar Nov 29 '21 15:11 guest271314

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

github-actions[bot] avatar Jul 19 '22 03:07 github-actions[bot]

Let's keep this issue active.

jlaine avatar Jul 19 '22 20:07 jlaine


> You are right, the current send loop will drain a stream as much as possible before moving on to the next one.
>
> I had once written a naive patch which shuffled the streams here:
>
> https://github.com/aiortc/aioquic/blob/d381ef9825b1d36ae1884a0b43452f89bbb67ad7/src/aioquic/quic/connection.py#L2514
>
> .. however performance took a significant hit. You are welcome to suggest an alternative implementation!

Hi @jlaine

Could you please share that patch here? Some users might not care about performance and would be very interested in testing with multiple streams.

Karthikdasari0423 avatar May 18 '23 15:05 Karthikdasari0423


Things should be fairer now.

rthalley avatar Feb 27 '24 22:02 rthalley