Add support for fd passing

Open cpuguy83 opened this issue 3 years ago • 9 comments

This adds a new message type for passing file descriptors. How this works is:

  1. The client sends a message with a messageTypeFileDescriptor header along with the list of descriptors to be sent.
  2. The client sends a second message to actually pass along the descriptors (needed for unix sockets; see the sketch after this list).
  3. The server sees the message type and waits to receive the fd's.
  4. Once the fd's are received, the server responds with the actual fd numbers in use on its side, which an application can reference in future calls.
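
Step 2 is the part that is specific to unix sockets: the descriptors travel as SCM_RIGHTS ancillary (OOB) data alongside a regular in-band message. A minimal sketch of what the send might look like, assuming a *net.UnixConn and the golang.org/x/sys/unix helpers (this is illustrative, not the PR's actual code):

```go
package main

import (
	"net"

	"golang.org/x/sys/unix"
)

// sendFds transmits descriptors as SCM_RIGHTS ancillary data alongside a
// single in-band byte. The receiver must pick up both with one ReadMsgUnix
// call, which is why exact message boundaries matter.
func sendFds(conn *net.UnixConn, fds []int) error {
	oob := unix.UnixRights(fds...) // encode fds as an SCM_RIGHTS control message
	_, _, err := conn.WriteMsgUnix([]byte{0}, oob, nil)
	return err
}
```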

To accomplish this reliably (on unix sockets) I had to drop the bufio.Reader, because we need to ensure exact message boundaries.

Within ttrpc this only supports unix sockets and net.Conn implementations that implement SendFds/ReceiveFds (this interface is totally invented here).
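
For concreteness, the invented interface might look something like this (the method signatures are my guess, not necessarily what the PR defines):

```go
package main

import "net"

// fdConn is a hypothetical extension of net.Conn for transports that can
// carry file descriptors out of band.
type fdConn interface {
	net.Conn

	// SendFds transmits the given descriptors to the peer.
	SendFds(fds []int) error

	// ReceiveFds blocks until up to n descriptors arrive and returns them.
	ReceiveFds(n int) ([]int, error)
}
```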

Something to consider: I have not attempted to do fd passing on Windows, which will need entirely different mechanisms (and the conns provided by winio are not sufficient for fd passing). I'm not sure whether this new messaging will actually work in a Windows implementation.

Perhaps the message type should be specific to unix sockets? I'm not sure how this could be enforced at the moment, except by checking whether the net.Conn is a *net.UnixConn.

Closes #74

cpuguy83 · Jan 27 '21

I'm interested in comments on the viability of this. Maybe the unbuffered reader should be opt-in... as in, support for FD passing would be opt-in rather than implicit, for performance reasons.

cpuguy83 · Jan 28 '21

Updated the test to use an actual out-of-process ttrpc server.

cpuguy83 · May 07 '21

Codecov Report

Merging #75 (a19e82d) into master (8ecd516) will decrease coverage by 4.68%. The diff coverage is 43.84%.

@@            Coverage Diff             @@
##           master      #75      +/-   ##
==========================================
- Coverage   69.77%   65.09%   -4.69%     
==========================================
  Files          11       11              
  Lines         612      719     +107     
==========================================
+ Hits          427      468      +41     
- Misses        149      212      +63     
- Partials       36       39       +3     
Impacted Files   Coverage Δ
types.go         15.78% <16.66%> (+0.40%) ↑
channel.go       51.21% <17.50%> (-30.60%) ↓
server.go        69.11% <40.90%> (-4.55%) ↓
client.go        78.01% <77.50%> (-0.98%) ↓


codecov-commenter · May 07 '21

Example of SendFds (client):

https://github.com/cpuguy83/docker/blob/6068b721a089b396d557338b088113fe50360882/container/stream/streamv2/stdio/client.go#L92-L129

Example from service (server):

https://github.com/cpuguy83/docker/blob/6068b721a089b396d557338b088113fe50360882/container/stream/streamv2/stdio/server.go#L48-L98

cpuguy83 · May 27 '21

What would this allow you to do? What do you have in mind?

crosbymichael · Oct 28 '21

@crosbymichael Apologies, those links I posted started 404ing when I force pushed over them. I have an example here: https://github.com/moby/moby/pull/42019

I've been working on getting dockerd out of the I/O path of containers. In the above-mentioned PR, everything related to container stdio is moved out of process: log drivers, exec I/O, even attaches. As an example, this allows dockerd to restart while keeping HTTP clients attached to a container.

I would also like to be able to do similar things for containerd, where a client could pass the actual stdio pipes it is using directly to the executed container, rather than going through the multiple levels of buffering in place today.

I'm also looking at being able to have a shim that can drain its ttrpc connections and pass them to a cooperating process.

All this can, of course, be done without ttrpc, but then we'd need to come up with a separate protocol/service for managing this.

We should be able to fix the buffered read mentioned in the original comment by wrapping the net.Conn to replace Read with ReadMsg and setting a fixed max number of fd's that can be passed in a single RPC.
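
A minimal sketch of that wrapping idea, assuming a *net.UnixConn underneath and an invented maxFds cap (none of these names come from the PR):

```go
package main

import (
	"net"

	"golang.org/x/sys/unix"
)

const maxFds = 16 // fixed cap on descriptors per message (illustrative value)

// fdConn wraps a unix socket so every read goes through ReadMsgUnix, keeping
// any SCM_RIGHTS ancillary data aligned with the message it arrived with.
type fdConn struct {
	*net.UnixConn
	fds []int // descriptors collected from reads, pending pickup
}

func (c *fdConn) Read(p []byte) (int, error) {
	oob := make([]byte, unix.CmsgSpace(maxFds*4)) // 4 bytes per fd
	n, oobn, _, _, err := c.ReadMsgUnix(p, oob)
	if err != nil {
		return n, err
	}
	if oobn > 0 {
		msgs, perr := unix.ParseSocketControlMessage(oob[:oobn])
		if perr != nil {
			return n, perr
		}
		for i := range msgs {
			// collect any fds carried as SCM_RIGHTS; skip other messages
			if fds, ferr := unix.ParseUnixRights(&msgs[i]); ferr == nil {
				c.fds = append(c.fds, fds...)
			}
		}
	}
	return n, nil
}
```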

cpuguy83 · Oct 30 '21

And another thing we may be able to do is support another message type like FileStream that the client can use when the transport does not support FD passing.

cpuguy83 · Nov 15 '21

Needs rebase

AkihiroSuda · Feb 19 '22

I think this could just be implemented at the service level. You could implement a service specifically for passing fds and only register it if supported by the underlying connection.
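
One way to read that suggestion (my own sketch, with invented names; ttrpc defines none of this): a tiny proto-generated service whose only job is the fd handshake, registered only when the transport is a unix socket.

```go
package main

import "context"

// FdExchangeRequest announces how many descriptors the client will send.
type FdExchangeRequest struct {
	Count uint32
}

// FdExchangeResponse returns the server-side descriptor numbers, usable as
// handles in later RPCs.
type FdExchangeResponse struct {
	Fds []uint32
}

// FdExchangeService is the side-channel service; the descriptors themselves
// would still travel as SCM_RIGHTS on the underlying unix socket.
type FdExchangeService interface {
	Exchange(ctx context.Context, req *FdExchangeRequest) (*FdExchangeResponse, error)
}
```

The registration gate could then be as simple as a type assertion to *net.UnixConn before wiring the service up.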

dmcgowan · Feb 21 '22

Going to close this one for now, would love to see an example of this done as a service though

dmcgowan · Nov 30 '22

@dmcgowan I'm not sure I understand what you are saying.

What I think you are saying is this would be better implemented as a service that explicitly handles FD passing via a side-channel.

Given that, I've definitely given this some thought and was initially like "ok yeah we could do this and it would be fine". However, with more time to think on this, I think it would be very beneficial to have it directly in the protocol so we don't need to have a side-channel.

I believe we should be able to work out the buffering issues here by always reading the OOB data into some fixed-size buffer, so the caller would only ever be able to pass some number N of fd's at once; perhaps the size of that buffer could be configurable when creating the server.
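
Such a knob might look like a functional option at server construction time (WithMaxFds and the types below are purely illustrative, not ttrpc API):

```go
package main

// serverConfig and ServerOpt follow a common Go options pattern; they are
// not ttrpc's actual types.
type serverConfig struct {
	maxFds int // cap on descriptors per call; sizes the fixed OOB read buffer
}

type ServerOpt func(*serverConfig)

// WithMaxFds (hypothetical) bounds how many fd's a caller may pass per RPC.
func WithMaxFds(n int) ServerOpt {
	return func(c *serverConfig) { c.maxFds = n }
}
```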

Of course I may have totally misunderstood what you were trying to say here.

cpuguy83 · Jan 25 '23

You have probably spent more time thinking about it than me. What are some of the downsides of having it as a side-channel service?

I might be looking at this differently: less about the most efficient way to do it and more about compatibility. I see the ttrpc client interface as mainly intended for generated proto services, so that services remain compatible between ttrpc and grpc. The tagline for this project is "GRPC for low-memory environments.", so I think it's fair to look at new features from the perspective of whether they are ttrpc-only and what diverging from grpc (or, more generally, from usage via proto service definitions) would mean for how ttrpc is used.

dmcgowan · Jan 25 '23

I think the main thing is you still need to come up with some kind of protocol for exchanging that information.

cpuguy83 · Jan 26 '23