[Feature request] Concurrency limiting
Hi,
It would be nice to be able to limit the number of requests that a client can make not just in a time range, but concurrently.
For example, once a client has reached the maximum number of allowed simultaneous requests, any additional request would be rejected with a 429 until at least one of the in-flight requests finishes, after which the client can try again.
Thanks
I think this kind of feature is better implemented as a listener_wrapper than as an HTTP handler.
The benefit of it being an HTTP handler is it can respond with an HTTP 429.
The problem is that a listener wrapper still has to accept the connection to know the IP address, and all it can do is unkindly close the connection after that. (Unless I'm missing something. Quite possible. Running on like 4 hours of sleep today.)
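To illustrate, here's a rough sketch of what a per-IP cap at the listener level could look like, assuming the limit applies to open connections per remote IP. The caddy.ListenerWrapper plumbing (WrapListener) and config parsing are omitted, and names like ipLimitListener and maxPerIP are just placeholders:

```go
package limitlistener

import (
	"net"
	"sync"
)

// ipLimitListener is a hypothetical net.Listener wrapper that caps the
// number of open connections per remote IP. Connections over the cap are
// accepted (we need the remote address first) and then immediately closed.
type ipLimitListener struct {
	net.Listener
	maxPerIP int

	mu     sync.Mutex
	counts map[string]int
}

func (l *ipLimitListener) Accept() (net.Conn, error) {
	for {
		conn, err := l.Listener.Accept()
		if err != nil {
			return nil, err
		}

		ip, _, _ := net.SplitHostPort(conn.RemoteAddr().String())

		l.mu.Lock()
		if l.counts == nil {
			l.counts = make(map[string]int)
		}
		if l.counts[ip] >= l.maxPerIP {
			l.mu.Unlock()
			// Over the limit: all we can do at this layer is close it.
			conn.Close()
			continue
		}
		l.counts[ip]++
		l.mu.Unlock()

		return &countedConn{Conn: conn, release: func() { l.release(ip) }}, nil
	}
}

func (l *ipLimitListener) release(ip string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.counts[ip] <= 1 {
		delete(l.counts, ip)
	} else {
		l.counts[ip]--
	}
}

// countedConn decrements the per-IP count exactly once when closed.
type countedConn struct {
	net.Conn
	once    sync.Once
	release func()
}

func (c *countedConn) Close() error {
	c.once.Do(c.release)
	return c.Conn.Close()
}
```

The trade-off is exactly the one described above: the connection is already accepted by the time we can read its remote address, so rejecting it means closing it without any HTTP response.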
The benefit of it being an HTTP handler is it can respond with an HTTP 429.
Good point
Would the alternative be letting the request time out?
I'm new to Caddy internals and looking to protect an upstream WebSocket server from being exploited by excessive concurrent connections.
While Apache (QS_SrvMaxConnPerIP) and Nginx (limit_conn) provide a way to set the maximum number of concurrent connections allowed from a single IP address, Caddy, which we currently use in production, doesn't natively have a built-in directive for directly limiting concurrent connections per IP, so we're out of luck there.
I reviewed the documentation on extending Caddy and found a section here that primarily focuses on writing an HTTP handler module. While creating an HTTP handler module appears to be straightforward, I believe it may not be feasible to track concurrent WebSocket connections with this approach. The issue is that although an HTTP handler can recognize upgrade requests (since they are HTTP requests), it cannot detect when these connections close, because that happens over the WebSocket protocol (via a close frame); the HTTP handler never sees the messages exchanged over the WebSocket connection.
I want to clarify that I’m not trying to hijack the conversation, but since you're discussing tracking concurrent connections, I thought it was relevant to raise these difficulties.
Can you confirm whether tracking concurrent connections can't be achieved using an HTTP handler as I explained? Additionally, can Caddy provide finer control over the underlying TCP connections that I can use in a module? I can't find much information about the listener_wrapper mentioned here.
Can you confirm whether tracking concurrent connections can't be achieved using an HTTP handler as I explained?
Depends. The handler doing the upgrade certainly can achieve it. But if another handler is doing the upgrade, the two handlers would have to have some way of sharing state or communicating about it.
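For the case where one handler sits in front of the handler doing the upgrade, here's a rough sketch of that shared-state idea as plain net/http middleware. It assumes the downstream handler (e.g. a reverse proxy) blocks until the WebSocket tunnel is torn down, which is how Go's httputil.ReverseProxy behaves for upgraded responses, so the deferred decrement also fires when the connection closes; maxPerIP is a placeholder and the Caddy module wiring (caddyhttp.MiddlewareHandler, config) is left out:

```go
package wslimit

import (
	"net"
	"net/http"
	"sync"
)

// perIPLimiter is a hypothetical middleware that tracks concurrent requests
// per client IP. If the downstream handler proxies a WebSocket upgrade and
// blocks until the tunnel is closed, the deferred decrement doubles as a
// "connection closed" signal.
type perIPLimiter struct {
	next     http.Handler
	maxPerIP int

	mu     sync.Mutex
	counts map[string]int
}

func (l *perIPLimiter) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	ip, _, err := net.SplitHostPort(r.RemoteAddr)
	if err != nil {
		ip = r.RemoteAddr
	}

	l.mu.Lock()
	if l.counts == nil {
		l.counts = make(map[string]int)
	}
	if l.counts[ip] >= l.maxPerIP {
		l.mu.Unlock()
		http.Error(w, "too many concurrent requests", http.StatusTooManyRequests)
		return
	}
	l.counts[ip]++
	l.mu.Unlock()

	defer func() {
		l.mu.Lock()
		if l.counts[ip] <= 1 {
			delete(l.counts, ip)
		} else {
			l.counts[ip]--
		}
		l.mu.Unlock()
	}()

	l.next.ServeHTTP(w, r)
}
```

If the downstream handler doesn't block for the connection's lifetime, you'd instead have to wrap the hijacked net.Conn so that its Close decrements the counter.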
Additionally, can Caddy provide finer control over the underlying TCP connections that I can use in a module? I can't find much information about the listener_wrapper mentioned here.
It's less a Caddy thing and more a Go standard library thing. In HTTP handlers, you get access to the underlying net.Conn, but as I was saying above, you'd have to go even below that to completely avoid accepting connections altogether, if that was the intended behavior.
Upon rereading this, if the goal is to limit concurrency of HTTP requests, not connections, that is probably feasible... basically an atomic counter instead of a ring buffer.
Happy to review a proposal for this if someone would like to try implementing it.
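In case it helps anyone drafting a proposal, here's a minimal sketch of that atomic-counter idea as plain net/http middleware. In a real Caddy module this would be a caddyhttp.MiddlewareHandler with maxConcurrent coming from the JSON/Caddyfile config; the names here are placeholders:

```go
package maxrequests

import (
	"net/http"
	"sync/atomic"
)

// concurrencyLimiter caps the number of in-flight requests; anything over
// the cap is rejected immediately with 429 Too Many Requests.
type concurrencyLimiter struct {
	next          http.Handler
	maxConcurrent int64
	inFlight      atomic.Int64
}

func (l *concurrencyLimiter) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	// Reserve a slot; give it back right away if we went over the cap.
	if l.inFlight.Add(1) > l.maxConcurrent {
		l.inFlight.Add(-1)
		w.Header().Set("Retry-After", "1")
		http.Error(w, "too many concurrent requests", http.StatusTooManyRequests)
		return
	}
	defer l.inFlight.Add(-1)

	l.next.ServeHTTP(w, r)
}
```

Per-client limits would replace the single counter with a map keyed by client IP, at which point it starts to look like the per-IP sketches earlier in the thread.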