Handle CONNECT requests
This PR adds support for CONNECT requests.
Gift for @gquintard
Marked as draft because I still have some tests to add.
Would it make sense to explicitly set SETTINGS_ENABLE_CONNECT_PROTOCOL (0x8) = 0 for h2, to make sure that clients won't even try to use CONNECT over h2?
First: why?! What is the use case for CONNECT in a server-side proxy, as opposed to a client-side proxy?
Second: This SHALL be disabled by default so a Varnish instance is not an open relay by default.
@cartoush did you look at what it would take to make this possible with vcl_pipe {} as we have it?
@phk: I can answer the first question: "server-side" gets a bit murky when Varnish is used internally and it's "just" an HTTP gateway. But semantics aside, we have users who want to use Varnish as a forward proxy and are currently using a patched Varnish to meet that need.
No problem with deactivating this by default I believe.
@nigoroll: CONNECT trashes most of bereq, opens the connection and generates synthetic headers, so pipe can't work as-is.
@gquintard It's clear that vcl_pipe {} can't work as is, but the core functionality of shuffling bytes is very close, so we should at least investigate if/how it could be made to work and, if we decide it can't, for what good reason. After all, this patch even hacks the pipe backend method... This is why I asked "did you look at what it would take", and as there was no answer, here's my take:
- We need to establish a backend connection without sending any headers
- We need to return a response to the client.
IIUC the patch herein uses a fixed static response, but I would think this might not fit all cases. The patch also generates the response from vbe_dir_http1pipe(), which really is far from proper function layering.
So what would we need?
- A VDI interface to "shuffle bytes". I think we should not just expose vbe_dir_getfd(), because "shuffling bytes" might involve more than read/write on a file descriptor
- A good VCL design with a wider scope in mind
On the latter, there is good groundwork in VIP30, which is being discussed in #3471.
IMHO, work should continue there.
If anyone wants CONNECT today, I think it can be done in a VMOD.
My bad, I misunderstood your question: we did look at a bunch of different ways to handle it, and this looked like the cleanest one that didn't require massive amounts of foundational work.
Essentially, we saw 4 different approaches:
- vmod: I initially tried creating a director wrapping a backend to leverage vbe_dir_getfd() in the CONNECT case and pass everything else through, but it involved accessing a bunch of private functions and I ended up rewriting the whole backend
- there's also the option of just tweaking vbe_dir_http1pipe() to change behavior based on a new boolean field in req, which can be set by a tiny vmod
- VIP30 and a new VDI interface would be the proper way, but we know how reluctant the project is to add new subroutines to VCL, especially when they are this impactful. I'm not keen on embarking on such a big project without assurance that there's a way through
- finally, the current PR offers a middle ground which doesn't layer the code as well as the previous solution but is also much less invasive in terms of VCL changes
At the moment, I don't see a way to achieve this feature without core modifications
@gquintard I see nothing wrong with a vmod pulling in a lot of core code, at least not for a "time to market" implementation.
> but we know how reluctant the project is with adding new subroutines to VCL
It's interesting that you bring up this point against a generic vcl_tunnel {} to replace vcl_pipe {} as an argument for a specific, additional vcl_connect {}.
I really think we should focus work on the long-term solution.
Being dependent on internal code wasn't so much the issue as having to duplicate a lot of code into the vmod, but yes, I see your point.
I'm actually very much in favor of the vcl_tunnel/vcl_connect addition, but my argument is that #3471 was opened 4 years ago, so I thought aiming for a smaller change would encounter less friction.
So, @nigoroll, you're saying we close this and focus on VIP30 and #3471?
> So, @nigoroll, you're saying we close this and focus on VIP30 and #3471?
This is my opinion, yes.
Regarding the other point:
> I'm actually very favorable to the vcl_tunnel/vcl_connect addition, but I think my argument is that #3471 was opened 4 years ago, so I thought aiming for a smaller change would encounter less friction
Yes, we have too many plans open for too long, but that should not mislead us into suddenly reaching for short-sighted solutions. This project has a long track record of resisting the easy way, and I can tell you that I was frustrated about it so often, and man, frustrated I was. But it was and is the right way to do things properly.
So regarding our backlog of ideas, I really think we need to follow those which we agreed upon, and work incrementally in the right direction. We do this in a lot of other places (#4035 is an example which comes to mind immediately, but I am sure there are others) and I think it works. It needs patience, but it works.