Allow simulating ongoing network connection issues
HTTP Toolkit can already simulate fatal network issues, like timeouts & unexpected connection closes or resets, but not ongoing instability.
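For reference, those existing fatal failures are exposed via Mockttp (the open-source library behind HTTP Toolkit). A minimal sketch, assuming Mockttp's documented handler methods:

```typescript
import * as mockttp from 'mockttp';

const server = mockttp.getLocal();

async function setupFatalFailures() {
    await server.start();

    // The fatal network issues that can already be simulated:
    await server.forGet('/timeout').thenTimeout();        // accept the request, never respond
    await server.forGet('/close').thenCloseConnection();  // close the connection abruptly
    await server.forGet('/reset').thenResetConnection();  // send a TCP RST
}
```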
It would be useful to be able to simulate connection issues, such as:
- Additional latency (with optional variability - e.g. a min/max latency range)
- Limited bandwidth (upstream/downstream/both)
- Others?
This should fit into the ongoing 'rule steps' work, so that these could be added as extra steps that modify the connection before any other rule behaviour runs (e.g. add latency & limit bandwidth, then send a fixed response that's slowly delivered).
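To make that concrete, a rule combining these steps might look something like the hypothetical configuration below. All the names and shapes here are purely illustrative, not HTTP Toolkit's actual API:

```typescript
// Hypothetical step shapes - illustrative only, not HTTP Toolkit's real API:
interface AddLatencyStep {
    type: 'add-latency';
    minMs: number;  // optional variability via a min/max range
    maxMs: number;
}

interface ThrottleStep {
    type: 'throttle-bandwidth';
    upstreamBytesPerSec?: number;    // limit client -> server
    downstreamBytesPerSec?: number;  // limit server -> client
}

interface FixedResponseStep {
    type: 'fixed-response';
    statusCode: number;
    body: string;
}

// e.g. add latency & limit bandwidth, then send a slowly-delivered fixed response:
const ruleSteps: Array<AddLatencyStep | ThrottleStep | FixedResponseStep> = [
    { type: 'add-latency', minMs: 100, maxMs: 500 },
    { type: 'throttle-bandwidth', downstreamBytesPerSec: 16 * 1024 },
    { type: 'fixed-response', statusCode: 200, body: 'A slowly delivered response...' }
];

// Each matching request's latency would be sampled from the configured range:
function sampleLatency(step: AddLatencyStep): number {
    return step.minMs + Math.random() * (step.maxMs - step.minMs);
}
```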
TCP slow close/slow open from Toxiproxy might also be useful. Tricky in our case though, since we only match requests against rules once we've received them, i.e. after the TCP connection is already open... We can delay request processing at the HTTP level though, which might cover some use cases? Effectively extra latency at the start of the request only.
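A rough sketch of that HTTP-level delay, using Mockttp's callback handler (which I believe accepts async handlers) to hold the response back, giving extra latency at the start of the request only:

```typescript
import * as mockttp from 'mockttp';

const server = mockttp.getLocal();

async function setupDelayedEndpoint() {
    await server.start();

    // Hold the request for 2s at the HTTP level before building a response -
    // the TCP connection is already open, but processing is delayed:
    await server.forGet('/slow-start').thenCallback(async (request) => {
        await new Promise((resolve) => setTimeout(resolve, 2000));
        return { statusCode: 200, body: 'Delayed response' };
    });
}
```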
@pimterry, my thoughts following your comment there: https://github.com/httptoolkit/httptoolkit/issues/55#issuecomment-3389964780.
My use case would be a per-rule configuration (to use your words). Typically, for a SPA, I want to match a particular request with a URL pattern and throttle it down, to test the behaviour of my front-end code when the network between the server and the client is slow. As I mentioned in the issue above, throttling (rather than delaying) would also let us test behaviour when streaming is involved: how does the client handle it, and is the server handling it well (back-pressure, etc.)?
As for your question:
> Should we throttle server-side or client-side
I'm not familiar enough with the implementation of HTTP Toolkit, but my gut feeling would be on the client side, before delivering to the "user space" code. Does that make sense?
Ok, so I think that's workable. That means you would:
- Create a rule, and add matchers to filter to the traffic you're interested in
- Add a "Throttle traffic" step on the right, set the upstream & downstream bandwidth as you'd like.
- This would probably throttle per request initially, but we could have a total/per-request toggle later, to share a single pool of bandwidth between all requests matching this specific rule.
- Then you'd add subsequent steps to this rule (pass through to the real server, return a fixed response, etc) and save it.
- Each request that matches would have the bandwidth for its request body & response limited accordingly.
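For what it's worth, here's a minimal sketch of how a per-request throttle step could pace body data in Node, relying on stream back-pressure. This is illustrative, not the actual implementation:

```typescript
import { Transform, TransformCallback } from 'stream';

// Pace chunks so throughput averages out at bytesPerSecond. Because the
// callback isn't invoked until the delay elapses, Node's stream back-pressure
// also slows the upstream source - which is what makes server-side behaviour
// (back-pressure handling etc.) testable too.
class ThrottleStream extends Transform {
    constructor(private bytesPerSecond: number) {
        super();
    }

    _transform(chunk: Buffer, _encoding: BufferEncoding, callback: TransformCallback) {
        const delayMs = (chunk.length / this.bytesPerSecond) * 1000;
        setTimeout(() => callback(null, chunk), delayMs);
    }
}

// Usage: limit a proxied response body to ~100KB/s downstream:
// upstreamResponse.pipe(new ThrottleStream(100 * 1024)).pipe(clientSocket);
```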
This wouldn't affect request header bandwidth, since we have to receive the headers before matching the request to rules, but those are generally well under 10KB anyway, so unless you're setting an ultra-low upstream bandwidth it won't make a notable difference.
It's much simpler to do this per-request, so I'd start with that - meaning that if you set 1MB/s and make 3 requests, you'll get a total download of 3MB/s between them. Looking at other tools, I think this is still useful - there are different approaches here, but quite a few tools throttle per connection (e.g. Toxiproxy) while others throttle globally (e.g. browser dev tools), so I think both options are useful in different cases.
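For the later total/per-rule option, a shared token bucket is the usual approach - each request draws from one pool, so 3 concurrent requests sharing a 1MB/s pool split that 1MB/s between them rather than getting 3MB/s total. A rough illustration, not a committed design:

```typescript
// Illustrative shared token bucket: all requests matching a rule would draw
// bytes from this one pool, splitting the configured bandwidth between them.
class TokenBucket {
    private tokens: number;
    private lastRefill = Date.now();

    constructor(
        private ratePerSecond: number,            // bytes added per second
        private capacity: number = ratePerSecond  // max burst size
    ) {
        this.tokens = capacity;
    }

    // Resolve once `bytes` tokens are available, consuming them:
    async take(bytes: number): Promise<void> {
        while (true) {
            const now = Date.now();
            this.tokens = Math.min(
                this.capacity,
                this.tokens + ((now - this.lastRefill) / 1000) * this.ratePerSecond
            );
            this.lastRefill = now;

            if (this.tokens >= bytes) {
                this.tokens -= bytes;
                return;
            }
            await new Promise((resolve) => setTimeout(resolve, 50));
        }
    }
}
```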
Would that be a useful setup for you? I don't think this is top of my list yet, but this feels like an achievable outline so I'd definitely like to look at it in the not too distant future if we can.
@pimterry that sounds great! Very excited about that. I've been using Murus to configure the PF firewall and Dummynet to do this, and it's a pain. Also, I wanted to point out that another ticket, which is closely related to this and that I've been following for a (very) long time, finally got some traction on the Chromium side: https://issues.chromium.org/u/1/issues/40434685.