jetty.project
Support decrement of max_concurrent_streams
Target Jetty version(s): 10.0.x
Enhancement Description:
Currently, if a server advertises that it supports 100 concurrent streams, then a client is entitled to open that many.
If the server decrements max_concurrent_streams to e.g. 5 while the client is busy sending requests, there is a natural race where the client still thinks it is allowed to send 100 concurrent streams.
So, for example, 15 concurrent threads may pass the check and be about to create the streams with the requests to send to the server, when max_concurrent_streams is updated to 5 by an incoming, concurrent SETTINGS frame from the server.
The client may enforce the max_concurrent_streams check and fail 10 out of the 15 streams locally.
This is the current implementation.
Or it may send the streams anyway, and the server may reply with a RST_STREAM(REFUSED_STREAM_ERROR), so the client knows that the stream has not been processed by the server.
Or the client may detect the situation and queue the extra requests, etc.
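To make the race concrete, here is a minimal Java sketch of the first two options. The class and method names (ClientStreamGate, tryCreateStream, onResetStream, etc.) are hypothetical, not Jetty's actual API; the sketch only shows the local check against the last advertised max_concurrent_streams and the handling of a server RST_STREAM with REFUSED_STREAM.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical client-side sketch; names are illustrative, not Jetty's HTTP2Session API.
public class ClientStreamGate
{
    // Last value advertised by the server's SETTINGS frame.
    private final AtomicInteger maxConcurrentStreams = new AtomicInteger(100);
    // Streams currently open on this session.
    private final AtomicInteger openStreams = new AtomicInteger();

    // Called when a SETTINGS frame arrives; may shrink the limit at any time.
    public void onSettingsMaxConcurrentStreams(int newMax)
    {
        maxConcurrentStreams.set(newMax);
    }

    // Option 1: enforce the limit locally and fail the stream early.
    // The check races with the SETTINGS update, so a thread that passed the
    // check a moment ago may now be over the new, smaller limit.
    public boolean tryCreateStream()
    {
        while (true)
        {
            int open = openStreams.get();
            if (open >= maxConcurrentStreams.get())
                return false; // fail locally, e.g. complete the request exceptionally
            if (openStreams.compareAndSet(open, open + 1))
                return true;  // proceed to send HEADERS for the new stream
        }
    }

    // Option 2: if the stream was sent anyway, the server may refuse it.
    public void onResetStream(int streamId, long errorCode)
    {
        openStreams.decrementAndGet();
        // REFUSED_STREAM guarantees the server did not process the request,
        // so the client could safely retry it on this or another connection.
    }
}
```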
This issue needs a discussion of what is the best option.
Frankly, I think the current implementation is enough.
Whether the streams fail locally or because the server refused them, it is a failure for the client either way, so failing early seems more efficient to me.
Trying to queue the extra concurrent streams seems like a lot of work for a very unlikely use case -- a server advertising "send me 100 concurrent requests", but then changing its mind to "oops no, I'm overloaded, you can only send 5 now", must be prepared to refuse the extra ones (for example, those that are sent while the SETTINGS frame is in-flight towards the client).
@gregw @lorban thoughts?
I don't think this is a high priority, but I think it is possible that we will eventually encounter a h2 impl that does decrement the max streams.
I think the solution is possible, as we just need to make sure that once we have checked that the creation of a stream is OK against the current max, we don't send the ack for any subsequent SETTINGS frame until the HEADERS frames of all the approved streams have been sent. This could be done with a pending stream count, or maybe just by noting the stream ID for which we have approved a stream creation and delaying the SETTINGS ack until we see that ID in use.
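A minimal sketch of that idea, using hypothetical hook names (onStreamApproved, onHeadersSent, onSettings) that are not Jetty's actual HTTP2Session API: the SETTINGS ack is deferred while streams approved against the old limit still have unsent HEADERS.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch only, assuming hypothetical session hooks.
public class DelayedSettingsAck
{
    // Streams approved against the old limit whose HEADERS are not yet sent.
    private final AtomicInteger pendingApprovedStreams = new AtomicInteger();
    private volatile Runnable deferredSettingsAck;

    // Called after the max_concurrent_streams check has passed,
    // before the HEADERS frame is queued for writing.
    public void onStreamApproved()
    {
        pendingApprovedStreams.incrementAndGet();
    }

    // Called once the HEADERS frame for an approved stream has been written.
    public void onHeadersSent()
    {
        if (pendingApprovedStreams.decrementAndGet() == 0)
            flushDeferredAck();
    }

    // Called when a SETTINGS frame arrives from the server.
    // The ack is withheld until every already-approved stream has been sent,
    // so the server cannot assume the new limit applies to those streams.
    public void onSettings(Runnable sendAck)
    {
        if (pendingApprovedStreams.get() == 0)
            sendAck.run();
        else
            deferredSettingsAck = sendAck;
    }

    private void flushDeferredAck()
    {
        Runnable ack = deferredSettingsAck;
        if (ack != null)
        {
            deferredSettingsAck = null;
            ack.run();
        }
    }
}
```

In a real implementation the check in onSettings would need to be atomic with stream approval; the sketch only shows the bookkeeping.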
So whilst I don't think we should make this a top priority, we do need to ensure that our designs at least consider that we might implement this in the future. Hence putting multiplexing state/logic in the pool is the wrong thing to do and we should put it in the HTTP2Session.
This issue has been automatically marked as stale because it has been a full year without activity. It will be closed if no further activity occurs. Thank you for your contributions.