grpc-swift
unavailable (14): Transport became inactive
Describe the bug
- I am using the following: iOS (Swift), CocoaPods
- gRPC-Swift (1.7.3):
  - CGRPCZlib (= 1.7.3)
  - Logging (< 2.0.0, >= 1.4.0)
  - SwiftNIO (< 3.0.0, >= 2.32.0)
  - SwiftNIOExtras (< 2.0.0, >= 1.4.0)
  - SwiftNIOHTTP2 (< 2.0.0, >= 1.19.2)
  - SwiftNIOSSL (< 3.0.0, >= 2.14.0)
  - SwiftNIOTransportServices (< 2.0.0, >= 1.11.1)
  - SwiftProtobuf (< 2.0.0, >= 1.9.0)
- SwiftNIO (2.38.0):
  - SwiftNIOCore (= 2.38.0)
  - SwiftNIOEmbedded (= 2.38.0)
  - SwiftNIOPosix (= 2.38.0)
- SwiftNIOConcurrencyHelpers (2.38.0):
  - CNIOAtomics (= 2.38.0)
- SwiftNIOCore (2.38.0):
  - CNIOLinux (= 2.38.0)
  - SwiftNIOConcurrencyHelpers (= 2.38.0)
- SwiftNIOEmbedded (2.38.0):
  - _NIODataStructures (= 2.38.0)
  - SwiftNIOCore (= 2.38.0)
- SwiftNIOExtras (1.10.2):
  - SwiftNIO (< 3, >= 2.32.0)
- SwiftNIOFoundationCompat (2.38.0):
  - SwiftNIO (= 2.38.0)
  - SwiftNIOCore (= 2.38.0)
- SwiftNIOHPACK (1.20.1):
  - SwiftNIO (< 3, >= 2.35.0)
  - SwiftNIOConcurrencyHelpers (< 3, >= 2.35.0)
  - SwiftNIOCore (< 3, >= 2.35.0)
  - SwiftNIOHTTP1 (< 3, >= 2.35.0)
- SwiftNIOHTTP1 (2.38.0):
  - CNIOHTTPParser (= 2.38.0)
  - SwiftNIO (= 2.38.0)
  - SwiftNIOConcurrencyHelpers (= 2.38.0)
  - SwiftNIOCore (= 2.38.0)
- SwiftNIOHTTP2 (1.20.1):
  - SwiftNIO (< 3, >= 2.35.0)
  - SwiftNIOConcurrencyHelpers (< 3, >= 2.35.0)
  - SwiftNIOCore (< 3, >= 2.35.0)
  - SwiftNIOHPACK (= 1.20.1)
  - SwiftNIOHTTP1 (< 3, >= 2.35.0)
  - SwiftNIOTLS (< 3, >= 2.35.0)
- SwiftProtobuf (1.19.0)
- Below is how I initialize my GRPCChannel and Client:
let bytes: [UInt8] = Array(GlobalConstant.rootCert.data(using: .utf8)!)
let trustRoots = NIOSSLTrustRoots.certificates(try NIOSSLCertificate.fromPEMBytes(bytes))
let tlsConfig = GRPCTLSConfiguration.makeClientConfigurationBackedByNIOSSL(
    certificateChain: [],
    privateKey: nil,
    trustRoots: trustRoots,
    certificateVerification: .noHostnameVerification,
    hostnameOverride: GlobalConstant.certHost,
    customVerificationCallback: nil
)
let keepalive = ClientConnectionKeepalive(interval: .seconds(15), timeout: .seconds(10))
let channelLogger = Logging.Logger(label: "XIMSDK.Channel \(config.clusterID)")
let channel = try GRPCChannelPool.with(
    target: .hostAndPort(serverInfo.host, Int(serverInfo.port)),
    transportSecurity: .tls(tlsConfig),
    eventLoopGroup: group
) {
    // Configure keepalive.
    $0.keepalive = keepalive
    $0.backgroundActivityLogger = channelLogger
    $0.connectionBackoff = ConnectionBackoff(
        initialBackoff: 1.0,
        maximumBackoff: 10.0,
        multiplier: 1.6,
        jitter: 0.2,
        minimumConnectionTimeout: 20.0,
        retries: .unlimited
    )
    $0.idleTimeout = .zero
}
let clientLogger = Logging.Logger(label: "XIMSDK.Client \(config.clusterID)")
let callOptions = CallOptions(timeLimit: .timeout(.seconds(20)), logger: clientLogger)
let client = PB.Client(channel: channel, defaultCallOptions: callOptions, interceptors: XimClientInterceptorFactory())
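For context, here is a sketch of how this client is used for one of the RPCs that appears in the logs below; the method name getSignalKey and the message types are inferred from the '/ximclient.Client/GetSignalKey' log entries and are otherwise assumptions about the generated code.
// Hypothetical usage sketch: the generated method and message names are
// inferred from the '/ximclient.Client/GetSignalKey' lines in the logs below.
let request = Ximclient_GetSignalKeyReq()
let call = client.getSignalKey(request)

call.response.whenComplete { result in
    switch result {
    case .success(let response):
        print("getSignalKey succeeded: \(response)")
    case .failure(let error):
        // This is where "unavailable (14): Transport became inactive" surfaces.
        print("getSignalKey failed: \(error)")
    }
}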
- While my app is running, I sometimes get the error
unavailable (14): Transport became inactive
but I don't know why. Here is the log line captured when the error occurred:
2022-05-23T14:21:05+0800 [XIMSDK] error Cryptor.swift:286 getSignalKey(peerAddresses:) get signal key failed, unavailable (14): Transport became inactive
Additional information
2022-05-23T14:21:05+0800 [BGIM.Client] debug ClientTransport.swift:333 _transportActivated(channel:) grpc.conn.addr_local = 10.0.2.112 grpc.conn.addr_remote = 43.255.214.50 grpc_request_id = 28C3590B-EF3C-403D-A1D9-0440F76A129C activated stream channel
2022-05-23T14:21:05+0800 [BGIM.Channel] debug GRPCIdleHandlerStateMachine.swift:676 streamCreated(_:logger:) grpc.conn.addr_local = 10.0.2.112 grpc.conn.addr_remote = 43.255.214.50 grpc_connection_id = 0980E81F-EE82-4259-B0E6-B751E824A661/0 h2_active_streams = 1 h2_stream_id = HTTP2StreamID(11) poolmanager.id = ObjectIdentifier(0x0000000280b16e00) HTTP2 stream created
2022-05-23T14:21:05+0800 [BGIM.Channel] debug GRPCIdleHandlerStateMachine.swift:698 streamClosed(_:logger:) grpc.conn.addr_local = 10.0.2.112 grpc.conn.addr_remote = 43.255.214.50 grpc_connection_id = 0980E81F-EE82-4259-B0E6-B751E824A661/0 h2_active_streams = 0 h2_stream_id = HTTP2StreamID(11) poolmanager.id = ObjectIdentifier(0x0000000280b16e00) HTTP2 stream closed
2022-05-23T14:21:05+0800 [App] info AvcServiceType.swift:55 receive(_:context:) REQUEST Received headers: /avc.AVCApi/GetGroupCall - [([], ":status", "200"), ([], "content-type", "application/grpc"), ([], "trailer", "Grpc-Status"), ([], "trailer", "Grpc-Message"), ([], "trailer", "Grpc-Status-Details-Bin")]
2022-05-23T14:21:05+0800 [App] info AvcServiceType.swift:60 receive(_:context:) REQUEST SUCCESS: 'GLH5LL7R' - '/avc.AVCApi/GetGroupCall'
2022-05-23T14:21:05+0800 [XIMSDK.Client GLH5LL7R] debug ConnectionPool.swift:370 _enqueueWaiter(deadline:promise:logger:initializer:) grpc_request_id = 0485F3C3-DDAF-406A-84D9-A5856CDE661A pool.waiter.id = ObjectIdentifier(0x0000000283d9c780) pool.waiters.count = 0 waiting for a connection to become available
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug ConnectionPool.swift:426 _shouldBringUpAnotherConnection() pool.reservations.capacity = 0 pool.reservations.count = 1 pool.reservations.load = inf pool.reservations.loadThreshold = 0.9 poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) stream reservation load factor greater than or equal to threshold, bringing up additional connection if available
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:40 send(_:promise:context:) > Starting '/ximclient.Client/GetGroupMember' RPC, headers: []
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:45 send(_:promise:context:) > Sending request with 'XIMSDK.Ximclient_GetGroupMemberReq:
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:49 send(_:promise:context:) > Closing request stream
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug PlatformSupport.swift:199 makeClientBootstrap(group:logger:) grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/12 poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) making client bootstrap with event loop group of type NIOTSEventLoop
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug PlatformSupport.swift:203 makeClientBootstrap(group:logger:) grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/12 poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) Network.framework is available and the EventLoopGroup is compatible with NIOTS, creating a NIOTSConnectionBootstrap
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug ConnectionManager.swift:613 channelActive(channel:multiplexer:) connectivity_state = connecting grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/12 poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) activating connection
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug TLSVerificationHandler.swift:55 userInboundEventTriggered(context:event:) grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/12 poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) TLS handshake completed, negotiated protocol: h2
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug GRPCIdleHandlerStateMachine.swift:529 receiveSettings(_:) grpc.conn.addr_local = 10.0.2.112 grpc.conn.addr_remote = 43.255.214.50 grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/12 h2_settings_initial_window_size = 1048576 h2_settings_max_concurrent_streams = 250 h2_settings_max_frame_size = 1048576 h2_settings_max_header_list_size = 1048896 poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) HTTP2 settings update
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug ConnectionManager.swift:735 ready() connectivity_state = active grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/12 poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) connection ready
2022-05-23T14:21:05+0800 [XIMSDK.Client GLH5LL7R] debug ClientTransport.swift:333 _transportActivated(channel:) grpc.conn.addr_local = 10.0.2.112 grpc.conn.addr_remote = 43.255.214.50 grpc_request_id = 0485F3C3-DDAF-406A-84D9-A5856CDE661A activated stream channel
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug GRPCIdleHandlerStateMachine.swift:676 streamCreated(_:logger:) grpc.conn.addr_local = 10.0.2.112 grpc.conn.addr_remote = 43.255.214.50 grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/12 h2_active_streams = 1 h2_stream_id = HTTP2StreamID(1) poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) HTTP2 stream created
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug GRPCIdleHandlerStateMachine.swift:698 streamClosed(_:logger:) grpc.conn.addr_local = 10.0.2.112 grpc.conn.addr_remote = 43.255.214.50 grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/12 h2_active_streams = 0 h2_stream_id = HTTP2StreamID(1) poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) HTTP2 stream closed
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:71 receive(_:context:) < Received headers: [':status': '200', 'content-type': 'application/grpc', 'trailer': 'Grpc-Status', 'trailer': 'Grpc-Message', 'trailer': 'Grpc-Status-Details-Bin']
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:77 receive(_:context:) < Received response with 'XIMSDK.Ximclient_GetGroupMemberRsp:
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:84 receive(_:context:) < Response stream closed /ximclient.Client/GetGroupMember with status: 'ok (0)' and trailers: ['grpc-status': '0']
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:40 send(_:promise:context:) > Starting '/ximclient.Client/GetSignalKey' RPC, headers: []
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:45 send(_:promise:context:) > Sending request with 'XIMSDK.Ximclient_GetSignalKeyReq:
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:49 send(_:promise:context:) > Closing request stream
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug ConnectionManager.swift:768 idle() connectivity_state = ready grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/12 poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) idling connection
2022-05-23T14:21:05+0800 [XIMSDK] error Cryptor.swift:286 getSignalKey(peerAddresses:) get signal key failed, unavailable (14): Transport became inactive
2022-05-23T14:21:05+0800 [App] info MessagesController.swift:103 pushSendFailure(message:) push send failure notifacation begin
2022-05-23T14:21:05+0800 [XIMSDK.Client GLH5LL7R] debug ConnectionPool.swift:370 _enqueueWaiter(deadline:promise:logger:initializer:) grpc_request_id = 4C8EE20B-E6F3-4B3F-ABC5-3526AF3F900C pool.waiter.id = ObjectIdentifier(0x0000000283d02cb0) pool.waiters.count = 0 waiting for a connection to become available
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug ConnectionPool.swift:426 _shouldBringUpAnotherConnection() pool.reservations.capacity = 0 pool.reservations.count = 1 pool.reservations.load = inf pool.reservations.loadThreshold = 0.9 poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) stream reservation load factor greater than or equal to threshold, bringing up additional connection if available
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:40 send(_:promise:context:) > Starting '/ximclient.Client/PushUpdateBadge' RPC, headers: []
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:45 send(_:promise:context:) > Sending request with 'XIMSDK.Ximclient_PushUpdateBadgeReq:
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:49 send(_:promise:context:) > Closing request stream
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug PlatformSupport.swift:199 makeClientBootstrap(group:logger:) grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/13 poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) making client bootstrap with event loop group of type NIOTSEventLoop
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug PlatformSupport.swift:203 makeClientBootstrap(group:logger:) grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/13 poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) Network.framework is available and the EventLoopGroup is compatible with NIOTS, creating a NIOTSConnectionBootstrap
2022-05-23T14:21:05+0800 [XIMSDK.Client GLH5LL7R] debug ConnectionPool.swift:370 _enqueueWaiter(deadline:promise:logger:initializer:) grpc_request_id = 84DA27EA-91BC-4231-A773-F89BD95A1AA1 pool.waiter.id = ObjectIdentifier(0x0000000283d57070) pool.waiters.count = 1 waiting for a connection to become available
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:40 send(_:promise:context:) > Starting '/ximclient.Client/PushUpdateBadge' RPC, headers: []
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:45 send(_:promise:context:) > Sending request with 'XIMSDK.Ximclient_PushUpdateBadgeReq:
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:49 send(_:promise:context:) > Closing request stream
2022-05-23T14:21:05+0800 [App] info MessagesController.swift:108 pushSendFailure(message:) push send failure notifacation succeed
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug ConnectionManager.swift:613 channelActive(channel:multiplexer:) connectivity_state = connecting grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/13 poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) activating connection
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug TLSVerificationHandler.swift:55 userInboundEventTriggered(context:event:) grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/13 poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) TLS handshake completed, negotiated protocol: h2
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug GRPCIdleHandlerStateMachine.swift:529 receiveSettings(_:) grpc.conn.addr_local = 10.0.2.112 grpc.conn.addr_remote = 43.255.214.50 grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/13 h2_settings_initial_window_size = 1048576 h2_settings_max_concurrent_streams = 250 h2_settings_max_frame_size = 1048576 h2_settings_max_header_list_size = 1048896 poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) HTTP2 settings update
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug ConnectionManager.swift:735 ready() connectivity_state = active grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/13 poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) connection ready
2022-05-23T14:21:05+0800 [XIMSDK.Client GLH5LL7R] debug ClientTransport.swift:333 _transportActivated(channel:) grpc.conn.addr_local = 10.0.2.112 grpc.conn.addr_remote = 43.255.214.50 grpc_request_id = 4C8EE20B-E6F3-4B3F-ABC5-3526AF3F900C activated stream channel
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug GRPCIdleHandlerStateMachine.swift:676 streamCreated(_:logger:) grpc.conn.addr_local = 10.0.2.112 grpc.conn.addr_remote = 43.255.214.50 grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/13 h2_active_streams = 1 h2_stream_id = HTTP2StreamID(1) poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) HTTP2 stream created
2022-05-23T14:21:05+0800 [XIMSDK.Client GLH5LL7R] debug ClientTransport.swift:333 _transportActivated(channel:) grpc.conn.addr_local = 10.0.2.112 grpc.conn.addr_remote = 43.255.214.50 grpc_request_id = 84DA27EA-91BC-4231-A773-F89BD95A1AA1 activated stream channel
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug GRPCIdleHandlerStateMachine.swift:676 streamCreated(_:logger:) grpc.conn.addr_local = 10.0.2.112 grpc.conn.addr_remote = 43.255.214.50 grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/13 h2_active_streams = 2 h2_stream_id = HTTP2StreamID(3) poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) HTTP2 stream created
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug GRPCIdleHandlerStateMachine.swift:698 streamClosed(_:logger:) grpc.conn.addr_local = 10.0.2.112 grpc.conn.addr_remote = 43.255.214.50 grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/13 h2_active_streams = 1 h2_stream_id = HTTP2StreamID(1) poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) HTTP2 stream closed
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:71 receive(_:context:) < Received headers: [':status': '200', 'content-type': 'application/grpc', 'trailer': 'Grpc-Status', 'trailer': 'Grpc-Message', 'trailer': 'Grpc-Status-Details-Bin']
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:77 receive(_:context:) < Received response with 'XIMSDK.Ximclient_PushUpdateBadgeRsp:
2022-05-23T14:21:05+0800 [App] debug XIMSDKWrapper(BGIM).swift:312 setupIfNeeded() updateBadge succeed
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:84 receive(_:context:) < Response stream closed /ximclient.Client/PushUpdateBadge with status: 'ok (0)' and trailers: ['grpc-status': '0']
2022-05-23T14:21:05+0800 [XIMSDK.Channel GLH5LL7R] debug GRPCIdleHandlerStateMachine.swift:698 streamClosed(_:logger:) grpc.conn.addr_local = 10.0.2.112 grpc.conn.addr_remote = 43.255.214.50 grpc_connection_id = 4E238305-2913-4740-878F-5338E417787A/13 h2_active_streams = 0 h2_stream_id = HTTP2StreamID(3) poolmanager.id = ObjectIdentifier(0x0000000280bb4dc0) HTTP2 stream closed
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:71 receive(_:context:) < Received headers: [':status': '200', 'content-type': 'application/grpc', 'trailer': 'Grpc-Status', 'trailer': 'Grpc-Message', 'trailer': 'Grpc-Status-Details-Bin']
2022-05-23T14:21:05+0800 [XIMSDK] info Interceptors.swift:77 receive(_:context:) < Received response with 'XIMSDK.Ximclient_PushUpdateBadgeRsp:
let channel = try GRPCChannelPool.with(
    target: .hostAndPort(serverInfo.host, Int(serverInfo.port)),
    transportSecurity: .tls(tlsConfig),
    eventLoopGroup: group
) {
    // Configure keepalive.
    $0.keepalive = keepalive
    $0.backgroundActivityLogger = channelLogger
    $0.connectionBackoff = ConnectionBackoff(
        initialBackoff: 1.0,
        maximumBackoff: 10.0,
        multiplier: 1.6,
        jitter: 0.2,
        minimumConnectionTimeout: 20.0,
        retries: .unlimited
    )
    $0.idleTimeout = .zero
}
Thanks for the bug report. Could you try setting $0.idleTimeout to a non-zero value, say 30 seconds? Setting it to zero means the connection is closed as soon as there are no longer any active RPCs, which is rarely the desired behaviour.
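A minimal sketch of that change, reusing serverInfo, tlsConfig, group and keepalive from the configuration snippet above:
let channel = try GRPCChannelPool.with(
    target: .hostAndPort(serverInfo.host, Int(serverInfo.port)),
    transportSecurity: .tls(tlsConfig),
    eventLoopGroup: group
) {
    $0.keepalive = keepalive
    // A non-zero idle timeout keeps the connection up for 30 seconds after the
    // last active RPC finishes instead of tearing it down immediately.
    $0.idleTimeout = .seconds(30)
}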
Thanks for your reply @glbrntt. My original intention is to never close the connection; how can I configure that?
Beyond setting the keepalive as you have, you can set the idleTimeout to a high value. However, doing so isn't necessarily helpful: it controls how long the client waits before closing a connection with no RPCs running on it, and the server is free to choose its own idle timeout, which is usually shorter than the client's. I would suggest just using the client's default value (30 minutes).
For what it's worth: gRPC will automatically establish a connection if one isn't open at the time you need an RPC.
This error is also thrown if the system (iOS) closes TCP sockets while your app is in the background. When the app returns to the foreground, any queued or in-flight RPC is aborted with this error. gRPC will reopen the closed channels/sockets, so some sort of retry mechanism can work around it.
A simple retry mechanism could look like this:
import GRPC
import NIOCore

extension GRPCClient {
    /// Makes a unary call and retries it up to `retries` times when it fails
    /// with an error for which `shouldRetry` returns `true`.
    func makeUnaryCallWithRetry<Request, Response>(
        _ unaryFunction: @escaping (Request, CallOptions?) -> UnaryCall<Request, Response>,
        _ request: Request,
        callOptions: CallOptions? = nil,
        shouldRetry: @escaping (Error) -> Bool = { _ in true },
        retries: Int = 1
    ) -> EventLoopFuture<Response> {
        let unaryCall = unaryFunction(request, callOptions)
        let eventLoop = unaryCall.eventLoop
        return unaryCall.response.flatMapError { error in
            // Give up once the retry budget is exhausted or the error isn't retryable.
            guard retries > 0 else {
                return eventLoop.makeFailedFuture(error)
            }
            guard shouldRetry(error) else {
                return eventLoop.makeFailedFuture(error)
            }
            // Issue a fresh call; by now gRPC may have re-established the connection.
            return self.makeUnaryCallWithRetry(
                unaryFunction,
                request,
                callOptions: callOptions,
                shouldRetry: shouldRetry,
                retries: retries - 1
            )
        }
    }
}
Then use it like this:
func getFoo(_ completion: @escaping (Result<Void, Error>) -> Void) {
fooServiceClient.makeUnaryCallWithRetry(
fooServiceClient.getFoo,
GetFooRequest(),
shouldRetry: { error in
if let grpcStatus = error as? GRPCStatus {
return grpcStatus.code == .unavailable
}
return false
},
retries: 2
).whenComplete { result in
print(result)
}
}
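For RPCs that fail while the connection is still being re-established, it can also help to wait briefly before retrying. Below is a sketch of such a variant; the helper name, the delay parameter and the use of scheduleTask are additions for illustration, not part of the suggestion above.
// Same imports as the helper above (GRPC, NIOCore).
extension GRPCClient {
    /// Hypothetical variant of the helper above that waits `delay` before each
    /// retry, giving gRPC time to re-establish the connection.
    func makeUnaryCallWithDelayedRetry<Request, Response>(
        _ unaryFunction: @escaping (Request, CallOptions?) -> UnaryCall<Request, Response>,
        _ request: Request,
        callOptions: CallOptions? = nil,
        shouldRetry: @escaping (Error) -> Bool = { _ in true },
        retries: Int = 1,
        delay: TimeAmount = .seconds(1)
    ) -> EventLoopFuture<Response> {
        let unaryCall = unaryFunction(request, callOptions)
        let eventLoop = unaryCall.eventLoop
        return unaryCall.response.flatMapError { error in
            guard retries > 0, shouldRetry(error) else {
                return eventLoop.makeFailedFuture(error)
            }
            // Wait `delay`, then issue a fresh attempt on the same event loop.
            return eventLoop.scheduleTask(in: delay) { }.futureResult.flatMap { _ in
                self.makeUnaryCallWithDelayedRetry(
                    unaryFunction,
                    request,
                    callOptions: callOptions,
                    shouldRetry: shouldRetry,
                    retries: retries - 1,
                    delay: delay
                )
            }
        }
    }
}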
Hello! Here's my little contribution to this issue.
Ever since I adopted the settings recommended in the documentation, I started to see this error really often:
let connectionBuilder = secure
    ? ClientConnection.usingPlatformAppropriateTLS(for: MultiThreadedEventLoopGroup(numberOfThreads: 1))
    : ClientConnection.insecure(group: MultiThreadedEventLoopGroup(numberOfThreads: 1))

let channel = connectionBuilder
    .withConnectivityStateDelegate(connectionManager, executingOn: queue)
    .withKeepalive(
        ClientConnectionKeepalive(
            interval: .seconds(15),
            timeout: .seconds(10)
        )
    )
    .connect(host: host, port: port)

self.channel = channel
My app pings the server every ~20 seconds (plus or minus a random interval) to keep in sync with the blockchain. Apparently new calls can recreate the channel without me needing to do anything, but calls already in flight are cancelled regardless, which seems odd.
Most of the issues cleared up once I moved away from ClientConnection to GRPCChannelPool, like this:
let transportSecurity = secure
    ? GRPCChannelPool.Configuration.TransportSecurity.tls(
        .makeClientConfigurationBackedByNIOSSL()
    )
    : GRPCChannelPool.Configuration.TransportSecurity.plaintext

self.channel = try! GRPCChannelPool.with(
    target: ConnectionTarget.hostAndPort(host, port),
    transportSecurity: transportSecurity,
    eventLoopGroup: MultiThreadedEventLoopGroup(numberOfThreads: 1)
)
I had hoped that the default ClientConnection keepalive settings would help, but they only helped in a different way: they made the problem so evident that I realized there had to be another approach. There's still an issue, though: the channel pool doesn't have an apparent way to use the connectivity state delegate to hook into the transport state.
My app pings the server every ~20 seconds (plus or minus a random interval) to keep in sync with the blockchain. Apparently new calls can recreate the channel without me needing to do anything, but calls already in flight are cancelled regardless, which seems odd.
Do your keepalive settings align with the server's keepalive settings? If not, the server may tell the client to go away. If in-flight calls are being cancelled, I suspect that's what is happening.
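For illustration, a sketch of what aligned client keepalive might look like, reusing host, port and transportSecurity from the pool snippet above, and assuming (hypothetically) that the server permits a ping at most every 30 seconds and accepts pings while no calls are active; the actual values must come from the server's own keepalive/ping policy.
// Hypothetical values: replace them with whatever the server's keepalive policy permits.
let alignedKeepalive = ClientConnectionKeepalive(
    interval: .seconds(30),    // no more frequent than the server's minimum ping interval
    timeout: .seconds(10),     // how long to wait for the ping acknowledgement
    permitWithoutCalls: true   // only if the server accepts pings when no RPCs are active
)

self.channel = try! GRPCChannelPool.with(
    target: .hostAndPort(host, port),
    transportSecurity: transportSecurity,
    eventLoopGroup: MultiThreadedEventLoopGroup(numberOfThreads: 1)
) {
    $0.keepalive = alignedKeepalive
}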
This is great feedback. I'll check that out. I don't know what the settings are. Thank you
Any success? I'm getting the same error. On my logs, looks like the stream is created and then deactivated right away.
I can't call it a success, but you need to play around with your streaming and single-call deadline/timeout settings to keep the channel alive as long as you want.
@pacu Are you able to share what worked for you? I've been trying to play around with those settings but nothing worked. It fails right away, doesn't wait for any time at all... 😭
Happy International Workers' Day!
Here's our lightwalletd gRPC service implementation:
https://github.com/zcash/ZcashLightClientKit/blob/main/Sources/ZcashLightClientKit/Modules/Service/GRPC/LightWalletGRPCService.swift
That was useful, thanks! Turns out I was missing some TLS configuration.