Add Mint.HTTP.recv_response/3
I'd like to propose adding a Mint.HTTP1.recv_response/3 that improves the ergonomics of making one-off requests:
```elixir
iex> {:ok, conn} = Mint.HTTP.connect(:https, "httpbin.org", 443, mode: :passive)
iex> {:ok, conn, ref} = Mint.HTTP.request(conn, "GET", "/user-agent", [], nil)
iex> {:ok, conn, response} = Mint.HTTP.recv_response(conn, ref, 5000)
iex> Mint.HTTP1.close(conn)
iex> response
%{
  status: 200,
  headers: [
    {"date", ...},
    ...
  ],
  body: "{\n \"user-agent\": \"mint/1.6.2\"\n}\n"
}
```
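For context, the sketch below shows roughly the hand-rolled loop such a helper would replace, built only on the existing passive-mode Mint.HTTP.recv/3 API. It is illustrative, not Mint's actual implementation; the module name, helper names, and accumulator shape are assumptions:

```elixir
# Illustrative sketch of what a recv_response/3 could do internally,
# using only the public passive-mode Mint.HTTP.recv/3 API.
defmodule MyApp.SimpleHTTP do
  def recv_response(conn, ref, timeout) do
    do_recv(conn, ref, timeout, %{status: nil, headers: [], body: ""})
  end

  defp do_recv(conn, ref, timeout, acc) do
    # In passive mode, recv/3 blocks until data arrives or the timeout fires.
    with {:ok, conn, responses} <- Mint.HTTP.recv(conn, 0, timeout) do
      case reduce_responses(responses, ref, acc) do
        {:done, acc} -> {:ok, conn, acc}
        {:cont, acc} -> do_recv(conn, ref, timeout, acc)
      end
    end
  end

  defp reduce_responses(responses, ref, acc) do
    Enum.reduce(responses, {:cont, acc}, fn
      {:status, ^ref, status}, {:cont, acc} -> {:cont, %{acc | status: status}}
      {:headers, ^ref, headers}, {:cont, acc} -> {:cont, %{acc | headers: acc.headers ++ headers}}
      {:data, ^ref, data}, {:cont, acc} -> {:cont, %{acc | body: acc.body <> data}}
      {:done, ^ref}, {:cont, acc} -> {:done, acc}
      _other, state -> state
    end)
  end
end
```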
There are a bunch of libraries that simply make one-off requests, to name a few: tzdata, goth/aws_credentials (they have their own pool and make a one-off request every ~hour), and esbuild/tailwind.
To play the devil's advocate, esbuild/tailwind simply use httpc, copy-pasting the secure SSL defaults from the EEF Security WG.
Pull Request Test Coverage Report for Build 55ab096f5279d6b3e29ddcd2654eda50680b0c15-PR-447
Details
- 18 of 21 (85.71%) changed or added relevant lines in 2 files are covered.
- No unchanged relevant lines lost coverage.
- Overall coverage increased (+0.3%) to 87.719%
| Changes Missing Coverage | Covered Lines | Changed/Added Lines | % |
|---|---|---|---|
| lib/mint/core/util.ex | 0 | 1 | 0.0% |
| lib/mint/http.ex | 18 | 20 | 90.0% |
| Total: | 18 | 21 | 85.71% |

| Totals | |
|---|---|
| Change from base Build 50b11d668b6a240b0d9b20c67fbb41a10a7410b1: | 0.3% |
| Covered Lines: | 1300 |
| Relevant Lines: | 1482 |
💛 - Coveralls
Another idea is to instead have:
```elixir
{:ok, conn} = Mint.HTTP.connect(:https, "httpbin.org", 443)

{:ok, conn, _acc = nil} =
  Mint.HTTP.request(
    conn,
    "GET",
    "/user-agent",
    _headers = [],
    _body = nil,
    _timeout = 5000,
    _acc = nil,
    fn conn, entry, acc ->
      IO.inspect(entry)
      {:cont, conn, acc}
    end
  )
```
which does request + recv_response in one call. Yet another idea is to have a single function that does connect + request + recv_response. I think request/8 above is a nice middle ground, though.
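As a rough illustration (not Mint's implementation), request/8 could be layered on request/5 plus a passive-mode recv loop that threads every entry through the user callback; the {:cont, conn, acc} contract is assumed from the example above:

```elixir
# Sketch only: fold incoming entries through the user-supplied fun,
# threading conn and acc, until the request ref is done.
defp recv_fold(conn, ref, timeout, acc, fun) do
  case Mint.HTTP.recv(conn, 0, timeout) do
    {:ok, conn, entries} ->
      {conn, acc, done?} =
        Enum.reduce(entries, {conn, acc, false}, fn entry, {conn, acc, done?} ->
          {:cont, conn, acc} = fun.(conn, entry, acc)
          {conn, acc, done? or match?({:done, ^ref}, entry)}
        end)

      if done?, do: {:ok, conn, acc}, else: recv_fold(conn, ref, timeout, acc, fun)

    {:error, conn, reason, _responses} ->
      {:error, conn, reason}
  end
end
```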
@wojtekmach why would this crash?
```elixir
{:ok, conn} = Mint.HTTP1.connect(:https, "httpbin.org", 443, mode: :passive)
{:ok, conn, ref1} = Mint.HTTP1.request(conn, "GET", "/user-agent", [], nil)
{:ok, conn, ref2} = Mint.HTTP1.request(conn, "GET", "/user-agent", [], nil)
Mint.HTTP1.recv_response(conn, ref1, 5000)
Mint.HTTP1.recv_response(conn, ref2, 5000)
```
recv_response could just discard messages that are not for the given ref, no?
request/8 is a no-go for me, too complex of an API. recv_response/3 is nice because it keeps the APIs separate: you can still use request/5, and once you decide you're done with the async stuff and want to receive a response, you have the option to do so in a blocking fashion with recv_response/3.
By the way, I’m totally in favor of this. 👍
Yes, I can totally disregard unrelated messages and document it as such. The only problem is that they cannot be recovered: my understanding is that in active mode we do a selective receive on the conn, not the request, and in passive mode we’d have already read the bytes from the socket. Again, I think it’s fine (see the filtering sketch below).
Should we support active mode, passive mode, or both?
Should we support HTTP/2, i.e. should this become a function on Mint.HTTP?
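To make the discarding concrete, here is a minimal sketch of the filtering step. The response tuples are Mint's public shapes; the helper itself is hypothetical:

```elixir
# Hypothetical filtering inside recv_response: keep entries that belong to
# the awaited ref, drop everything else. Dropped entries for other in-flight
# requests cannot be recovered later.
defp split_by_ref(responses, ref) do
  Enum.split_with(responses, fn
    {:status, ^ref, _status} -> true
    {:headers, ^ref, _headers} -> true
    {:data, ^ref, _data} -> true
    {:done, ^ref} -> true
    _other -> false
  end)
end
```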
Fair enough about not supporting /8.
> Should we support HTTP/2, i.e. should this become a function on Mint.HTTP?
Yes IMO
Yes, HTTP/2 for sure. I’m actually thinking we might want to restrict this to passive conns, because if you think about it there's not much advantage to doing a blocking request in active mode (yes, you can receive bytes while parsing, but that's probably negligible).
Got it, thanks.
Would you prefer to start with just recv_response/3, or recv_response/5 (which additionally takes acc and fun), or both? In the PR we have both at the moment.
@wojtekmach can recv_response/3 return a stream of {atom(), term()} instead? Like a stream of {:status, ...} and so on. That way, you can do Map.new() if you expect a single piece of body or just Enum.reduce for custom stuff.
I renamed this to Mint.HTTP.recv_response/3, added basic docs and tests, and updated PR description accordingly.
@whatyouhide interesting! By a stream of {atom, term} tuples you mean a list of these, right? Or are we talking about lazy streams? I assume it's a list (so this function returns the updated conn), so a follow-up question: what would be the benefit over a map? I don't know if this would happen (or is even possible) in practice, but maybe a silly server could return {:data, "foo"} followed by {:data, ""} and that would break passing it to Map.new(). A response with trailers would break Map.new too, but I concede that's rather rare. If people are going to end up using Enum.reduce over this list a lot of the time, then I'd push for a version with acc + fun, which would be very similar to Enum.reduce. In other words, in that case I'd have recv_response/3 which returns a map and recv_response/5 (or recv_response_while/5). WDYT?
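To illustrate the Map.new/1 concern with made-up values: a repeated :data entry overwrites the previous one, while a reduce can merge them instead:

```elixir
entries = [status: 200, headers: [{"date", "..."}], data: "foo", data: ""]

# Map.new/1 keeps only the last value per key, so "foo" is lost:
Map.new(entries)
#=> %{data: "", headers: [{"date", "..."}], status: 200}

# A reduce can concatenate instead of overwriting:
Enum.reduce(entries, %{status: nil, headers: [], body: ""}, fn
  {:status, status}, acc -> %{acc | status: status}
  {:headers, headers}, acc -> %{acc | headers: acc.headers ++ headers}
  {:data, data}, acc -> %{acc | body: acc.body <> data}
end)
#=> %{body: "foo", headers: [{"date", "..."}], status: 200}
```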
RE CI failure, sigh, https://github.com/actions/runner-images/issues/9692.
CI is fixed in #448.
Damn I always forget that streams cannot carry an accumulator, and we really want to accumulate over the conn 🙄
We could have a single function that does connect/request/recv_response/close and returns {:ok, enumerable} with no conn; it would be something like request/9 or, you know, request/1. :)
Since the use case here is to make a request and receive a response in a single operation without having to deal with the intricacies of Mint, then I would go with:
```elixir
Mint.HTTP.request_and_response(
  conn,
  method,
  path,
  headers,
  body,
  options :: [{:timeout, timeout()}]
)
```
which returns a response map (usual %{status: ..., ...}). This is a handy compromise for one-shot requests. If you want to have control over accumulation, go with receiving the messages (or passive mode). Thoughts?
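A minimal sketch of how that could compose with the proposed recv_response/3. The names and exact signature are whatever this thread settles on; this is not an actual Mint API:

```elixir
# Sketch: one-shot helper combining request/5 with the proposed blocking
# recv_response/3. Assumes a passive-mode conn and a :timeout option.
def request_and_response(conn, method, path, headers, body, options \\ []) do
  timeout = Keyword.get(options, :timeout, :infinity)

  with {:ok, conn, ref} <- Mint.HTTP.request(conn, method, path, headers, body),
       {:ok, conn, response} <- Mint.HTTP.recv_response(conn, ref, timeout) do
    {:ok, conn, response}
  end
end
```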
@whatyouhide yeah, I think this is a good compromise. The only thing that comes to mind is that with a distinct recv_response/3 we can still support request-body streaming before calling it (since it blocks), but with request_and_response/6 we can't. I think that's probably OK though, since you can always drop down to the lower-level functions. WDYT?
I think I would keep two separate functions. You get to support request streaming, passive mode, and pipelining by keeping them apart. Sure they are not “deal breakers” but we are not gaining much by unifying anyway, so why do it?
That being said, I think what we have right now is pretty decent (if I may say so):
```elixir
iex> {:ok, conn} = Mint.HTTP.connect(:https, "httpbin.org", 443, mode: :passive)
iex> {:ok, conn, request_ref} = Mint.HTTP.request(conn, "GET", "/user-agent", [], nil)
iex> {:ok, _conn, response} = Mint.HTTP.recv_response(conn, request_ref, 5000)
iex> response
%{
  status: 200,
  headers: [{"date", ...}, ...],
  body: "{\n \"user-agent\": \"1.6.2\"\n}\n"
}
```
edit: right, ditto
Yeah I like that but I do not like the fact that it can trip you up if there are multiple in-flight reqs. Can we raise an error if there are?
Why would it trip you up? You can for HTTP/1 if you consume the responses out of order, but not for HTTP/2, right?
You can because, as Wojtek said, you will still consume the bytes (or messages) for other requests. Say you have ref1 and ref2 in flight and you get these bytes:
```
ref1(HEADERS)
ref2(HEADERS)
ref1(BODY)
ref1(END)
```
in a single packet. If you call recv_response(ref1), then you will consume the headers for ref2 and won't be able to return them in the response.
Yeah, see this comment: https://github.com/elixir-mint/mint/pull/447#issuecomment-2258261087. I’m fine with discarding (which is what the branch does right now) or crashing. Lmk!
We can crash if there are multiple requests in flight; the conn knows that.
Sorry, to be clear, there is a valid use case for this feature:
- HTTP/1, as long as you consume the responses in order
- HTTP/2

So I am not sure why we should raise and forbid all cases. Couldn't we instead just document that HTTP/1 itself requires ordering?
@josevalim okay, yes, in HTTP/1 requests are in order so this is not an issue. In HTTP/2 it is an issue, as per https://github.com/elixir-mint/mint/pull/447#issuecomment-2268750627. If frames come out of order, we can absolutely match them back to their corresponding request, but generally Mint emits "responses" as frames. So for the sequence of frames in the comment above, a normal Mint flow would return

```elixir
[{:headers, ref1, ...}, {:headers, ref2, ...}, {:data, ref1, ...}, {:done, ref1}]
```

In this flow you didn't lose the headers for ref2. If recv_response instead returns %{status, headers, body} for ref1, we lose the headers for ref2 unless we buffer them in the conn (which I would rather not do).
Considering that this is a nice utility for one-shot requests, and you can still implement blocking recv_response through the normal message-based API anyway, I'd say we don't need HTTP/2 multiplexing?
@whatyouhide If we receive messages for ref2 while receiving ref1, we should store them on the conn and replay them only when you receive ref2, no?
@josevalim yeah that's the complexity I want to avoid, as I don't think it's justified for the recv_response utility. We don't store anything now because if you receive a complete frame for ref2 while ref1 is also in flight, we just emit it (as a {:status, ...}, {:headers, ...}, etc response).
Can you expand on the complexity? Because I was thinking it would mostly be a map with frames per ref?
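For illustration, the kind of per-ref buffering being described here, as a hypothetical helper (Mint does not do this today):

```elixir
# Hypothetical: stash responses that belong to other refs in a map keyed by
# ref, so a later recv_response (or stream) call could replay them.
defp split_and_buffer(responses, ref, buffers) do
  {mine, buffers} =
    Enum.reduce(responses, {[], buffers}, fn response, {mine, buffers} ->
      case response_ref(response) do
        ^ref -> {[response | mine], buffers}
        other -> {mine, Map.update(buffers, other, [response], &[response | &1])}
      end
    end)

  # Entries accumulate in reverse; reverse the awaited ref's list before use.
  {Enum.reverse(mine), buffers}
end

defp response_ref({:done, ref}), do: ref
defp response_ref({_kind, ref, _value}), do: ref
```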
Complexity as we don't need it 🙃
Also, can you describe a use case where I have ref1 and ref2 in flight, and then I call recv_response(ref1). I never call recv_response(ref2). What happens to the frames we receive for ref2? Do we surface them only when the user finally calls stream/2?