gapic-generator-ruby
Document how to access trailing_metadata from ActiveCall::Operation on streaming responses
Currently the underlying `GRPC::ActiveCall::Operation` is exposed to users directly for every method call, and it is the only way to access some gRPC properties, like the `trailing_metadata`.
This could be confusing because it is low-level, and access to these properties (e.g. metadata) depends on the timing of the call. For example, `trailing_metadata` is not available immediately for a streaming call, but it is for unary calls (see #239).
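To make the timing issue concrete, here is a toy stand-in in plain Ruby (the class and its methods are invented for this sketch; it is not the real `GRPC::ActiveCall::Operation`): trailing metadata is `nil` while the streaming call is in flight, and a blocking `wait` is what makes it safe to read.

```ruby
# Toy stand-in illustrating the timing: trailing metadata only becomes
# readable once the call has finished, and #wait blocks until then.
class StubOperation
  def initialize
    @mutex = Mutex.new
    @done = ConditionVariable.new
    @trailing_metadata = nil
  end

  # Simulates the server finishing the streaming call some time later.
  def finish_later(metadata, delay)
    Thread.new do
      sleep delay
      @mutex.synchronize do
        @trailing_metadata = metadata
        @done.broadcast
      end
    end
  end

  # Blocks until the call completes, like Operation#wait.
  def wait
    @mutex.synchronize { @done.wait(@mutex) until @trailing_metadata }
  end

  attr_reader :trailing_metadata
end

op = StubOperation.new
op.finish_later({ "showcase-trailer" => ["so", "much", "chat"] }, 0.1)

puts op.trailing_metadata.nil? # true: the streaming call has not finished yet
op.wait
puts op.trailing_metadata["showcase-trailer"].join(" ") # so much chat
```

Reading `trailing_metadata` before the call completes returns `nil`; reading it after `wait` returns is always safe.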
Do we need a higher level API for this?
In your PR you have the following test:
```ruby
def test_chat_with_metadata
  options = Gapic::CallOptions.new metadata: {
    'showcase-trailer': ["so", "much", "chat"],
    quiet: ["please"]
  }
  stream_input = Gapic::StreamInput.new
  @client.chat stream_input, options do |response_enum, operation|
    chatty_thread = Thread.new do
      sleep rand
      ["a", "b", "cee"].each { |x| stream_input.push content: x }
      stream_input.close
      assert_equal ["a", "b", "cee"], response_enum.to_a.map(&:content)
    end
    # Joining drains the stream, so trailing metadata is available afterward.
    chatty_thread.join
    assert_equal(
      { 'showcase-trailer' => ["so", "much", "chat"] },
      operation.trailing_metadata
    )
  end
end
```
Another way of writing that is to use two threads: the first for sending the inputs, and the second for calling `GRPC::ActiveCall::Operation#wait` (which blocks):
```ruby
def test_chat_with_metadata
  options = Gapic::CallOptions.new metadata: {
    'showcase-trailer': ["so", "much", "chat"],
    quiet: ["please"]
  }
  stream_input = Gapic::StreamInput.new
  @client.chat stream_input, options do |response_enum, operation|
    input_thread = Thread.new do
      sleep rand
      ["a", "b", "cee"].each { |x| stream_input.push content: x }
      stream_input.close
      assert_equal ["a", "b", "cee"], response_enum.to_a.map(&:content)
    end
    metadata_thread = Thread.new do
      operation.wait # blocks until the call completes
      assert_equal(
        { 'showcase-trailer' => ["so", "much", "chat"] },
        operation.trailing_metadata
      )
    end
    # Join both threads so the assertions run before the test returns.
    [input_thread, metadata_thread].each(&:join)
  end
end
```
I'm not sure if I understand that comment.
The issue here is whether we want to decorate, wrap, or somehow change the `operation` in either of the examples above.
What's the significance of one example vs. the other?
The significance is that you can swap the `Thread#join` call on the thread handling the request/response actions for another thread that calls `GRPC::ActiveCall::Operation#wait`. If we lead with the guidance that users should call `GRPC::ActiveCall::Operation#wait` before `GRPC::ActiveCall::Operation#trailing_metadata`, then we avoid the cases where `trailing_metadata` does not yet have values.
Ok, sure, documenting this as the pattern is one solution. Having a different API, such as a metadata callback, is another solution though and that's what I wanted to open this issue for.
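For concreteness, a metadata-callback API could look something like the following sketch. Everything here is hypothetical: `OperationWithCallback`, `on_trailing_metadata`, and the stub operation are invented names for illustration, not an existing gapic API.

```ruby
# Hypothetical metadata-callback wrapper (invented for this sketch).
# It hides the wait-before-read rule: the user's block runs only after
# the call has finished, so trailing metadata is guaranteed to be set.
class OperationWithCallback
  def initialize(operation)
    @operation = operation
  end

  # Runs +block+ with the trailing metadata once it is available.
  # Returns the thread so callers can join it if they need to.
  def on_trailing_metadata(&block)
    Thread.new do
      @operation.wait
      block.call @operation.trailing_metadata
    end
  end
end

# Minimal stub standing in for GRPC::ActiveCall::Operation in this sketch.
class StubOperation
  def initialize(metadata)
    @metadata = metadata
  end

  def wait
    sleep 0.05 # pretend the streaming call takes a moment to finish
  end

  attr_reader :trailing_metadata

  def trailing_metadata
    @metadata
  end
end

op = OperationWithCallback.new StubOperation.new("showcase-trailer" => ["so", "much", "chat"])
op.on_trailing_metadata { |md| puts md["showcase-trailer"].join(" ") }.join
# prints: so much chat
```

With an API like this, users never have to know about the timing of `trailing_metadata` at all; the tradeoff is that the library now owns a thread on the user's behalf.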
I don't have a strong preference for what we use in the tests now given the current state so if you do just put a comment in the PR and I'll update it, add a second test, etc.
I've explored ways to provide a thread for iterating the streaming response, but there are always unintended consequences for doing so. I believe the best solution is to pass the streaming response (Enumerable) to the user and let the user manage the concurrency for pulling responses out of it.
I also think `Operation#wait` is the optimal API for blocking until trailing metadata is available. I think most users consuming streaming responses will be doing so in some sort of concurrent manner, and calling `Operation#wait` makes a lot of sense.
We can document this pattern in the generated docs so the users are aware of how to perform operations like this.