caddy-ratelimit

[Feature request] Concurrency limiting

Open KaKi87 opened this issue 1 year ago • 7 comments

Hi,

It would be nice to be able to limit the number of requests that a client can make not just in a time range, but concurrently.

For example, once a client reaches the maximum number of allowed simultaneous requests, any additional request would be rejected with a 429 until at least one in-flight request finishes, after which the client can try again.
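To illustrate the intended behavior, here's a minimal sketch as plain Go net/http middleware (not Caddy module code; the limit, type names, and IP keying are made up for the example):

```go
package main

import (
	"fmt"
	"net"
	"net/http"
	"sync"
)

// limiter tracks in-flight requests per client IP.
type limiter struct {
	mu       sync.Mutex
	inflight map[string]int
	max      int
}

func newLimiter(max int) *limiter {
	return &limiter{inflight: make(map[string]int), max: max}
}

// acquire reports whether the client may start another request.
func (l *limiter) acquire(ip string) bool {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.inflight[ip] >= l.max {
		return false
	}
	l.inflight[ip]++
	return true
}

// release frees one slot when a request finishes.
func (l *limiter) release(ip string) {
	l.mu.Lock()
	defer l.mu.Unlock()
	if l.inflight[ip]--; l.inflight[ip] <= 0 {
		delete(l.inflight, ip)
	}
}

// middleware rejects over-limit requests with 429 Too Many Requests.
func (l *limiter) middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		ip, _, err := net.SplitHostPort(r.RemoteAddr)
		if err != nil {
			ip = r.RemoteAddr
		}
		if !l.acquire(ip) {
			http.Error(w, "too many concurrent requests", http.StatusTooManyRequests)
			return
		}
		defer l.release(ip)
		next.ServeHTTP(w, r)
	})
}

func main() {
	l := newLimiter(2)
	fmt.Println(l.acquire("203.0.113.7")) // true: 1 in flight
	fmt.Println(l.acquire("203.0.113.7")) // true: 2 in flight
	fmt.Println(l.acquire("203.0.113.7")) // false: limit reached
	l.release("203.0.113.7")
	fmt.Println(l.acquire("203.0.113.7")) // true: a slot freed up
}
```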

Thanks

KaKi87 avatar Nov 04 '24 22:11 KaKi87

I think this kind of feature is better implemented in a listener_wrapper rather than an HTTP handler.

mohammed90 avatar Nov 04 '24 23:11 mohammed90

The benefit of it being an HTTP handler is it can respond with an HTTP 429.

The problem is that a listener wrapper still has to accept the connection to know the IP address, and all it can do is unkindly close the connection after that. (Unless I'm missing something. Quite possible. Running on like 4 hours of sleep today.)

mholt avatar Nov 04 '24 23:11 mholt

The benefit of it being an HTTP handler is it can respond with an HTTP 429.

Good point

mohammed90 avatar Nov 04 '24 23:11 mohammed90

Would the alternative be letting the request time out?

KaKi87 avatar Nov 05 '24 14:11 KaKi87

I'm new to Caddy internals and looking to protect an upstream WebSocket server from being exploited by excessive concurrent connections.

While Apache (QS_SrvMaxConnPerIP) and Nginx (limit_conn) provide a way to cap the number of concurrent connections allowed from a single IP address, Caddy, which we currently use in production, has no built-in directive for directly limiting concurrent connections per IP.

I reviewed the documentation on how to extend Caddy and found a section here that primarily focuses on writing an HTTP handler module. While creating an HTTP handler module appears straightforward, I don't believe it can track concurrent WebSocket connections. Although an HTTP handler can recognize upgrade requests (since they arrive as HTTP requests), it cannot detect when those connections close, because that happens at the WebSocket protocol level (via a close frame), and the handler doesn't see the messages exchanged over the WebSocket protocol.

I want to clarify that I'm not trying to hijack the conversation, but since you're discussing tracking concurrent connections, I thought it was relevant to raise these difficulties. Can you confirm that, as I described, tracking concurrent connections can't be achieved using an HTTP handler? Additionally, can Caddy give a module finer control over the underlying TCP connections? I can't find much information about the listener_wrapper mentioned here.

sameh-farouk avatar Jan 06 '25 11:01 sameh-farouk

Can you confirm whether tracking concurrent connections can't be achieved using an HTTP handler as I explained?

Depends. The handler doing the upgrade certainly can achieve it. But if another handler is doing the upgrade, the two handlers would have to have some way of sharing state or communicating about it.

Additionally, can Caddy provide finer control over the underlying TCP connections that I can use in a module? I can't find much information about the listener_wrapper mentioned here.

It's less a Caddy thing and more of a Go standard library thing. In HTTP handlers, you get access to the underlying net.Conn, but as I was saying above, you'd have to go even below that to avoid accepting connections altogether, if that was the intended behavior.

mholt avatar Jan 06 '25 18:01 mholt

Upon rereading this, if the goal is to limit concurrency of HTTP requests, not connections, that is probably feasible... basically an atomic counter instead of a ring buffer.

Happy to review a proposal for this if someone would like to try implementing it.

mholt avatar Feb 28 '25 18:02 mholt