grpc-swift
Create equivalents of `ClientTestExample` and `ServerTestExample` for SwiftGRPC-NIO
See https://github.com/grpc/grpc-swift/blob/master/Tests/SwiftGRPCTests/ClientTestExample.swift and https://github.com/grpc/grpc-swift/blob/master/Tests/SwiftGRPCTests/ServerTestExample.swift. The point of these two files is to illustrate how a library user could use the test stubs optionally generated by our protoc
plugin to test their own code that makes use of SwiftGRPC.
This consists of four parts:
1. Add
func fakeSucceeded() -> StreamingResponseCallContextTestStub {
    self.statusPromise.succeed(result: GRPCStatus(code: .unknown, message: "fake status used only to prevent leaked promise errors in tests"))
    return self
}
to StreamingResponseCallContextTestStub. Motivation: usually, the call handler automatically fulfills its call context's status promise when the call has finished. In tests, however, no call handler is created, so that status promise is never fulfilled. To avoid "leaked" promise errors, users should be encouraged to simply call fakeSucceeded right after creating their test stub, unless their server code explicitly fulfills that status promise itself (e.g. in case of a failed request); a usage sketch follows after this list. The fact that we need this might indicate a flaw in our design; however, I am not aware of a better design that would alleviate this issue without putting more burden on the user. Also note that the service can still be tested end-to-end without using any test stubs, simply by spinning up a "real" gRPC server and sending requests to it rather than directly calling the Provider implementations with test stubs as arguments.
2. Add
func fakeResponse() -> UnaryResponseCallContextTestStub {
    self.responsePromise.succeed(result: .init())
    return self
}
to UnaryResponseCallContextTestStub. See above for the motivation behind this.
3. We'd need to implement client test stubs; these are already present for the non-NIO client and server and the NIO server, but not yet for the NIO client.
4. Actually port https://github.com/grpc/grpc-swift/blob/master/Tests/SwiftGRPCTests/ClientTestExample.swift and https://github.com/grpc/grpc-swift/blob/master/Tests/SwiftGRPCTests/ServerTestExample.swift to use the NIO test stubs.
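To make the intent concrete, here is a rough sketch of what a ported ServerTestExample test could look like once the stubs and helpers above exist. Everything in it is hypothetical: the EchoProvider/Echo_EchoRequest names, the test-stub initializer, and fakeSucceeded() are assumptions based on this proposal, not an existing API.

import NIO
import XCTest

// Hypothetical sketch: call a server-streaming provider method directly, with no real server.
// `EchoProvider`, `Echo_EchoRequest/Response` and the stub initializer are assumed names.
func testExpandCompletesWithOK() throws {
    let group = MultiThreadedEventLoopGroup(numberOfThreads: 1)
    defer { try? group.syncShutdownGracefully() }

    let provider = EchoProvider()

    // The proposed fakeSucceeded() keeps the otherwise-unused status promise from "leaking".
    let context = StreamingResponseCallContextTestStub<Echo_EchoResponse>(eventLoop: group.next())
        .fakeSucceeded()

    let request = Echo_EchoRequest.with { $0.text = "hello" }
    let status = try provider.expand(request: request, context: context).wait()
    XCTAssertEqual(status.code, .ok)
}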
Are there any updates on this?
I'm afraid not. Is this something you'd be interested in taking up, @WilliamIzzo83?
That said, I do have some concerns around the point @MrMage highlighted:
To avoid "leaked" promise errors, users should be encouraged to simply call
fakeSucceeded
right after creating their test stub, unless their server code explicitly fulfills that status promise itself (e.g. in case of a failed request). The fact that we need this might indicate a flaw in our design; however, I am not aware of a better design that would alleviate this issue without putting more burden on the user.
I've been wondering whether we should abstract the server-side RPC handling a little more. This is, admittedly, not very fleshed out, but I imagine something like the following, which is closer to grpc-java:
func unary<Request, Response>(request: Request, responseStream: StreamObserver<Response>) {
    // Do something with the request to get a response
    let response = // ...
    // Send it back
    responseStream.send(response)
    // Close the stream with a GRPCStatus; alternatively this could take no arguments
    // and errors could be propagated via e.g. `responseStream.error(_:)`
    responseStream.close(.ok)
}
For a streaming call:
func bidi<Request, Response>(responseStream: StreamObserver<Response>) -> (StreamEvent<Request>) -> Void {
    // Return a handler for request stream events
    return { event in
        switch event {
        case .message(let request):
            // We got a message, make and send the response.
            let response = // ...
            responseStream.send(response)
        case .end:
            // Client closed the request stream, close the response stream.
            let status: GRPCStatus = .ok
            responseStream.close(status)
        }
    }
}
// Takes a request; uses the responseStream to send a single response and end the stream.
func unary<Request, Response>(request: Request, responseStream: StreamObserver<Response>)
// Takes a request; uses the responseStream to send multiple responses and end the stream.
func serverStreaming<Request, Response>(request: Request, responseStream: StreamObserver<Response>)
// Returns a stream event handler which uses the responseStream to send a single response and end the stream.
func clientStreaming<Request, Response>(responseStream: StreamObserver<Response>) -> (StreamEvent<Request>) -> Void
// Returns a stream event handler which uses the responseStream to send multiple responses and end the stream.
func bidiStreaming<Request, Response>(responseStream: StreamObserver<Response>) -> (StreamEvent<Request>) -> Void
This is actually pretty similar to what we have now but doesn't directly use futures/promises.
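For comparison, the current futures-based shape looks roughly like this (paraphrased from the generated NIO provider protocols; the exact generated signatures may differ):

// Unary: return the response as a future; the call handler fulfils the promises.
func unary(request: Request, context: StatusOnlyCallContext) -> EventLoopFuture<Response>
// Server streaming: send responses via the context, return the final status as a future.
func serverStreaming(request: Request, context: StreamingResponseCallContext<Response>) -> EventLoopFuture<GRPCStatus>
// Client streaming: return a request-event handler; respond via the context's response promise.
func clientStreaming(context: UnaryResponseCallContext<Response>) -> EventLoopFuture<(StreamEvent<Request>) -> Void>
// Bidirectional streaming: return a request-event handler; respond via the context and fulfil the status promise.
func bidiStreaming(context: StreamingResponseCallContext<Response>) -> EventLoopFuture<(StreamEvent<Request>) -> Void>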
What do you think @MrMage?
At the moment, in order to test, I set up the grpc-swift tool so that it generates the server code, and then I mock up each situation I need to test. This isn't as bad as I expected, because it allows me to test request parameters and implement test logic in the mocked implementation without changing the logic that uses the gRPC client.
A real-world test example:
class FrontendMockService: FrontendProvider {
    // Implement each method so that it fails with an `unimplemented` gRPC status.
}

func testParamsOk() {
    class RegisterUserParamsSuccessMock: FrontendMockService {
        override func registerUser(request: RegisterUserRequest, context: StatusOnlyCallContext) -> EventLoopFuture<RegisterUserResponse> {
            XCTAssertEqual(request.email, "[email protected]")
            XCTAssertEqual(request.password, "expectedpassword")
            return context.eventLoop.makeSucceededFuture(RegisterUserResponse())
        }
    }

    let server = startServer(with: RegisterUserParamsSuccessMock(), in: AuthorizationUserTests.evtLoop)
    let mgr = // client service manager which encapsulates the gRPC client
    let exp = self.expectation(description: "signup response wait")

    // This is a specific implementation where the manager calls the gRPC client,
    // waits for the response and calls a completion block. This is a specific use case, of course.
    mgr.signUpUser(email: "[email protected]", password: "expectedpassword") {
        switch $0 {
        case .success:
            exp.fulfill()
        default:
            break
        }
    }

    self.wait(for: [exp], timeout: 1)
}
As I said, this isn't as bad as it seems, but there is some boilerplate to set up:
- Mock server implementation. This is easy: simply implement the provider protocol and make each method fail with an `unimplemented` gRPC status.
- Mock server specialization for the specific test. This usually involves overriding the specific method where the user asserts whatever they need.
- Starting the mock server. This too is simple: just create an event loop, point the server at localhost with a port, and it runs. However, there are some caveats: for instance, in the test suite sources (specifically those that set up the echo provider) the target port is set to 0. This won't work on the iPhone simulator (or an actual device); the port should be set to something else (like 8000).
If we want to make something effective, we should do something to mitigate the boilerplate.
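As a starting point, the startServer(with:in:) helper used in the example above could be a small shared function. The sketch below is only an illustration, written against the current Server.insecure builder API (which may differ from the API available at the time of this thread); the helper name and the fixed port are assumptions, not an existing grpc-swift utility.

import GRPC
import NIO

// Sketch of a reusable test helper: binds a provider to a fixed localhost port so
// it also works on an iPhone simulator or a device (port 0 does not).
func startServer(with provider: CallHandlerProvider, in group: EventLoopGroup) throws -> Server {
    return try Server.insecure(group: group)
        .withServiceProviders([provider])
        .bind(host: "localhost", port: 8000)
        .wait()
}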
@glbrntt I like the current approach because it makes it impossible to accidentally misuse the library, at least for unary-response cases, because you have to return a response future. It also gives us a neat way of cancelling any further request processing once any send future fails. I am open to discussing this, but I'm not sure how it relates to the testing discussion.
For the client, we could probably go ahead and implement test stubs that simply return a set of provided responses and a status, similar to the stubs we already provide for the non-NIO variant, independent of any server refactoring. WDYT?
@glbrntt I like the current approach because it makes it impossible to accidentally misuse the library, at least for unary-response cases, because you have to return a response future. It also gives us a neat way of cancelling any further request processing once any send future fails. I am open to discussing this, but I'm not sure how it relates to the testing discussion.
Sorry, I conflated two ideas here. What I was trying to get across is that the concern about having to fulfil the promise suggests that we might not want our API to expose the underlying use of NIO at the highest level. (I mentioned the stream observer/writer because it's a fairly simple pattern used across a number of gRPC implementations, it's not necessarily the right way forward here.)
The other concern I have with the current approach is that it's very easy for a user to block the event loop in their code (not necessarily a wait(), but simply by spending too much time on the event loop). At the moment the user would have to manage running their code on a separate thread/pool and then hop back onto the event loop.
For the client, we could probably go ahead and implement test stubs that simply return a set of provided responses and a status, similar to the stubs we already provide for the non-NIO variant, independent of any server refactoring. WDYT?
That sounds good. I think we can probably ignore metadata for now.
The other concern I have with the current approach is that it's very easy for a user to block the event loop in their code (not necessarily a wait(), but simply by spending too much time on the event loop). At the moment the user would have to manage running their code on a separate thread/pool and then hop back onto the event loop.
Vapor for example also expects a response future to be returned. Also, unless we dispatch all request handlers to a "blocking IO thread pool", the event loop would still be blocked by the user. And dispatching handlers to a background thread feels like the wrong approach, anyway. I.e. if the user is using blocking code inside their gRPC request handler, they are very likely holding NIO wrong, and at that point, we can't work around that as a framework.
To summarize: if the user is using anything NIO-based (including gRPC), they need to know that their code should not block the thread. We cannot lift that weight off their shoulders.
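For concreteness, this is the kind of manual offloading the user has to write themselves today; a sketch reusing the hypothetical registerUser signature from the earlier example, with NIO 2 promise names assumed:

import Dispatch
import NIO

// Sketch: blocking or expensive work is moved off the event loop by the user, and the
// result is hopped back via a promise. Not a built-in grpc-swift helper.
func registerUser(request: RegisterUserRequest, context: StatusOnlyCallContext) -> EventLoopFuture<RegisterUserResponse> {
    let promise = context.eventLoop.makePromise(of: RegisterUserResponse.self)
    DispatchQueue.global().async {
        // ... expensive/blocking work happens here, off the event loop ...
        let response = RegisterUserResponse()
        promise.succeed(response)
    }
    return promise.futureResult
}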
Returning to the status promise fulfilment code smell:
The fact that we need this might indicate a flaw in our design; however, I am not aware of a better design that would alleviate this issue without putting more burden on the user.
At the moment the user has to succeed/fail the promise, usually by mapping an .ok status at the end of a send queue and then cascading to the promise, something like:
sendQueue.map {
    GRPCStatus.ok
}.cascade(to: context.statusPromise)
Why don't we provide a function for the streaming contexts instead, e.g. sendEnd?
sendQueue.map {
    GRPCStatus.ok
}.whenComplete {
    context.sendEnd($0)
}
All this would do is fulfil the promise in the call handler. The test stub then wouldn't require the fakeSucceeded().
Re: offloading from the event loop:
Vapor for example also expects a response future to be returned.
I'm not proposing that we remove this ability.
To summarize: if the user is using anything NIO-based (including gRPC), they need to know that their code should not block the thread. We cannot lift that weight off their shoulders.
Sure, and I agree with that.
But we can provide a way (not the only way) for users to run their code off of the event loop. This could extend to the more computationally expensive parts of gRPC (message de/serialization, compression). The goal here is to provide a way to minimize the load on the event loop used for the connection so that we can increase throughput and reduce variation in latency (de/serializing messages is most likely the culprit here).
All this would do is fulfil the promise in the call handler. The test stub then wouldn't require the fakeSucceeded().
But wouldn't whenComplete only run in the case of a successfully-fulfilled future, resulting in a dangling response promise when the handler runs into an error somewhere?
Maybe we could somehow wire up the fake context's promise like a real one instead?
But we can provide a way (not the only way) for users to run their code off of the event loop. This could extend to the more computationally expensive parts of gRPC (message de/serialization, compression). The goal here is to provide a way to minimize the load on the event loop used for the connection so that we can increase throughput and reduce variation in latency (de/serializing messages is most likely the culprit here).
That's reasonable, but I would rather cross that bridge when we get there. I would expect 99.9% of users to never run into latency issues caused by e.g. message (de-)serialization blocking the event loop — this stuff is usually way too fast to cause any issues.
But wouldn't whenComplete only run in the case of a successfully-fulfilled future, resulting in a dangling response promise when the handler runs into an error somewhere?
whenComplete operates on the Result type, i.e. Result<Value, Error>.
The exact shape of a possible sendEnd would of course be up for discussion if we take this route (result type vs. overloads for status or error, or even different functions on the context).
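For illustration only, one Result-based shape would simply forward into the promise that the call handler already owns today (sendEnd is still just the proposal above; NIO 2 promise names are assumed):

// Hypothetical sendEnd on the streaming call context: routes the Result into the
// status promise, so neither user code nor test stubs touch the promise directly.
extension StreamingResponseCallContext {
    func sendEnd(_ result: Result<GRPCStatus, Error>) {
        switch result {
        case .success(let status):
            self.statusPromise.succeed(status)
        case .failure(let error):
            self.statusPromise.fail(error)
        }
    }
}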
Maybe we could somehow wire up the fake context's promise like a real one instead?
Not sure what you mean here; can you elaborate?
Hi, I came across this thread when looking for some best practices on how to test client implementations with mocks. Is there currently any way to do this without creating a dummy server as @WilliamIzzo83 suggested?
I attempted to inject the top-level Service protocols, but those return UnaryCalls, which do not appear to be easily mocked. Any suggestions? Or is there currently no good way of testing services without actually hitting a server?
We had dedicated mock structures (called TestStub) for this purpose in the non-NIO variant of gRPC Swift, but these have not been implemented for the NIO variant yet. You could try applying a similar concept to the NIO variant, however.
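In the meantime, one workaround that needs no library support is to hide the generated client behind a small protocol and fake that protocol in tests. A sketch, with names (FrontendService, RegisterUser*) that mirror the earlier example rather than any real generated code:

import NIO

// The app depends on this protocol instead of the generated client, so tests can
// substitute a fake without touching UnaryCall at all.
protocol FrontendService {
    func registerUser(_ request: RegisterUserRequest) -> EventLoopFuture<RegisterUserResponse>
}

// The production conformance would wrap the generated NIO client and return the call's
// response future; this fake returns canned responses with no networking involved.
final class FakeFrontendService: FrontendService {
    let eventLoop: EventLoop
    var cannedResponse = RegisterUserResponse()

    init(eventLoop: EventLoop) {
        self.eventLoop = eventLoop
    }

    func registerUser(_ request: RegisterUserRequest) -> EventLoopFuture<RegisterUserResponse> {
        return eventLoop.makeSucceededFuture(cannedResponse)
    }
}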