
unavailable (14): Transport became inactive

Open YunXu6139 opened this issue 4 years ago • 9 comments

Describe the bug

Below is how I set up my GRPCChannel:

    let group =  PlatformSupport.makeEventLoopGroup(loopCount: 4)
    let keepalive = ClientConnectionKeepalive(
      interval: .seconds(15),
      timeout: .seconds(10)
    )
    let channel = ClientConnection.insecure(group: group)
            .withKeepalive(keepalive)
            .connect(host: host, port: 2080)

On our app's home page, several API requests start at almost the same time. When one of them fails with "Invalid HTTP response status: 503", every other in-flight request that hasn't received a response yet fails with "unavailable (14): Transport became inactive". When I comment out the request that returns the 503, the other requests succeed.

To reproduce

Expected behaviour

These requests should succeed, except for the one that received the 503 response.

YunXu6139 avatar Sep 23 '21 11:09 YunXu6139

Is the 503 from your application being promoted into a connection teardown anywhere? Can you use a tool like Wireshark to confirm that the connection is still up?

Lukasa avatar Sep 23 '21 13:09 Lukasa
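(A minimal diagnostic sketch, not from the thread: short of capturing packets with Wireshark, grpc-swift's connectivity-state delegate can show whether the underlying connection is being torn down when the 503 arrives. This assumes the v1 ConnectivityStateDelegate API and the corresponding builder option; the ConnectionLogger type and the host/port values are illustrative.)

    import GRPC
    import NIO

    // Illustrative delegate that logs every connection state transition
    // (idle, connecting, ready, transientFailure, shutdown).
    final class ConnectionLogger: ConnectivityStateDelegate {
      func connectivityStateDidChange(from oldState: ConnectivityState, to newState: ConnectivityState) {
        print("connectivity: \(oldState) -> \(newState)")
      }
    }

    let group = PlatformSupport.makeEventLoopGroup(loopCount: 1)
    let monitor = ConnectionLogger()

    // Same insecure builder as in the report above, with the delegate attached.
    let channel = ClientConnection.insecure(group: group)
      .withConnectivityStateDelegate(monitor)
      .connect(host: "example.com", port: 2080)

A jump straight from ready to transientFailure or shutdown while the 503 RPC is in flight would indicate that the whole connection, not just that one stream, is going away.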

Is the 503 from your application being promoted into a connection teardown anywhere? Can you use a tool like Wireshark to confirm that the connection is still up?

Hi, in my situation only one specific request responded with 503, and after our colleagues on the server side fixed that 503 everything worked fine. But we ran into this problem again recently. I found and tried this reply https://github.com/grpc/grpc-swift/issues/1421#issuecomment-1143349301 and it works. I also compared request durations between GRPCChannel and GRPCChannelPool and there is no difference. But I still don't understand why, when one request fails, the channel is closed regardless of the other ongoing requests when using GRPCChannel. That seems unreasonable if GRPCChannel is your recommended implementation. Would you change this behaviour of GRPCChannel, or is GRPCChannelPool our only choice?

This is how I used GRPCChannel previously:


public class XYGRPCClient: NSObject, GRPCClient {
    
    public var changedHeader: [String: String] = [:]
        
    public var callOptions: CallOptions {
        return CallOptions(customMetadata: HPACKHeaders(), timeLimit: TimeLimit.timeout(TimeAmount.seconds(60)))
    }
    
    public var channel: GRPCChannel
    
    public var defaultCallOptions: CallOptions {
        get {
            return self.callOptions
        }
        set {}
    }
        
    public override init() {

        let group = MultiThreadedEventLoopGroup(numberOfThreads: 4)
        
        self.channel = ClientConnection.insecure(group: group).connect(host: "", port: 1082)
        super.init()
    }
    
    public func updateGRPCChannel(host: String, port: Int, needCert: Bool, certs: [NIOSSLCertificate]? = nil) {
        
        let group = MultiThreadedEventLoopGroup(numberOfThreads: 4)

        if !needCert {
            self.channel = ClientConnection.insecure(group: group).connect(host: host, port: port)
        } else if let pem = certs {
            self.channel = ClientConnection.usingPlatformAppropriateTLS(for: group).withTLS(trustRoots: .certificates(pem)).connect(host: host, port: port)
        } else {
            self.channel = ClientConnection.insecure(group: group).connect(host: host, port: port)
        }
    }
}

This is how I use GRPCChannelPool currently:


public class XYGRPCClient: NSObject, GRPCClient {
    
    public var changedHeader: [String: String] = [:]
        
    public var callOptions: CallOptions {
        return CallOptions(customMetadata: HPACKHeaders(), timeLimit: TimeLimit.timeout(TimeAmount.seconds(60)))
    }
    
    public var channel: GRPCChannel
    
    public var defaultCallOptions: CallOptions {
        get {
            return self.callOptions
        }
        set {}
    }
    
    public override init() {

        let group = MultiThreadedEventLoopGroup(numberOfThreads: 4)
        
        self.channel = ClientConnection.insecure(group: group).connect(host: "", port: 1082)
        super.init()
    }
    
    public func updateGRPCChannel(host: String, port: Int, needCert: Bool, certs: [NIOSSLCertificate]? = nil) {
        
        let group = MultiThreadedEventLoopGroup(numberOfThreads: 4)
        
        var transportSecurity: GRPCChannelPool.Configuration.TransportSecurity
        if !needCert {
            transportSecurity = GRPCChannelPool.Configuration.TransportSecurity.plaintext
        } else if let pem = certs, !pem.isEmpty {
            let tlsConfig = GRPCTLSConfiguration.makeClientConfigurationBackedByNIOSSL(trustRoots: .certificates(pem))
            transportSecurity = GRPCChannelPool.Configuration.TransportSecurity.tls(tlsConfig)
        } else {
            transportSecurity = GRPCChannelPool.Configuration.TransportSecurity.plaintext
        }
        let config = GRPCChannelPool.Configuration.with(target: ConnectionTarget.host(host, port: port), transportSecurity: transportSecurity, eventLoopGroup: group)
        do {
            self.channel = try GRPCChannelPool.with(configuration: config)
        } catch let error {
            debugPrint(error.localizedDescription)
        }
    }
}

YunXu6139 avatar Sep 25 '23 02:09 YunXu6139

But I still don't understand why, when one request fails, the channel is closed regardless of the other ongoing requests when using GRPCChannel. That seems unreasonable if GRPCChannel is your recommended implementation.

Many RPCs can run concurrently on a single connection, so if the server decides to close the connection abruptly then all RPCs on that connection will fail. It sounds like you are running into that behaviour.

The current design makes it difficult to improve on this; however, we're aiming to make RPCs more resilient to dropped connections in v2.

glbrntt avatar Sep 25 '23 08:09 glbrntt
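(Editorial sketch, not grpc-swift API: until the v2 resiliency work lands, one application-level workaround is to retry a unary call when it fails with status code 14. The helper name retryingUnaryCall and the "retry only on .unavailable, at most N attempts" policy below are illustrative assumptions.)

    import GRPC
    import NIO

    // Retries the unary call produced by `makeCall` while the failure is
    // `unavailable (14)`, up to `attempts` total tries.
    func retryingUnaryCall<Request, Response>(
      attempts: Int = 2,
      makeCall: @escaping () -> UnaryCall<Request, Response>
    ) -> EventLoopFuture<Response> {
      let call = makeCall()
      return call.response.flatMapError { error in
        if attempts > 1, let status = error as? GRPCStatus, status.code == .unavailable {
          // The transport dropped this RPC; start a fresh call.
          return retryingUnaryCall(attempts: attempts - 1, makeCall: makeCall)
        }
        return call.response.eventLoop.makeFailedFuture(error)
      }
    }

With the wrapper shown later in this thread, the closure passed to makeCall would simply be the existing client.makeUnaryCall(...) expression. Note that blindly retrying is only safe for idempotent RPCs.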

But I still don't understand why, when one request fails, the channel is closed regardless of the other ongoing requests when using GRPCChannel. That seems unreasonable if GRPCChannel is your recommended implementation.

Many RPCs can run concurrently on a single connection, so if the server decides to close the connection abruptly then all RPCs on that connection will fail. It sounds like you are running into that behaviour.

The current design makes it difficult to improve on this; however, we're aiming to make RPCs more resilient to dropped connections in v2.

Thanks for the explanation.

YunXu6139 avatar Sep 28 '23 06:09 YunXu6139

@glbrntt Sorry, some other factors affected my judgement; GRPCChannelPool doesn't work at all. We still encounter this problem. My first question: is the way I use GRPCChannelPool wrong?
This is how we use GRPCChannelPool:
1. We have a global instance of the XYGRPCClient class above, which conforms to the GRPCClient protocol.
2. We call XYGRPCClient's updateGRPCChannel function in the app's didFinishLaunchingWithOptions method to initialize the GRPCChannelPool.
3. We call the makeUnaryCall(path:request:callOptions:interceptors:responseType:) method on the global XYGRPCClient instance to send requests, like this:


public final class Account_AccountClient {

  /// Asynchronous unary call to register.
  ///
  /// - Parameters:
  ///   - request: Request to send to register.
  ///   - callback: A callback receiving `Result<Account_RegisterFakeResp, Error>`.
  /// - Returns: `true` if the call was started, `false` if no client is available.
  @discardableResult
  public static func register(request: Account_RegisterReq, callback: @escaping (Result<Account_RegisterFakeResp, Error>) -> Void) -> Bool {
    guard let client = grpcCient  else { return false }
    let api = "/account.Account/register"

    let call = client.makeUnaryCall(path: api,
                              request: request,
                              callOptions: client.defaultCallOptions,
                              interceptors: [GRPCInterceptor()],
                              responseType: Account_RegisterFakeResp.self)

    call.response.whenCompleteBlocking(onto: .main, callback)

    return true
  }

  /// Asynchronous unary call to login.
  ///
  /// - Parameters:
  ///   - request: Request to send to login.
  ///   - callback: A callback receiving `Result<Account_LoginFakeResp, Error>`.
  /// - Returns: `true` if the call was started, `false` if no client is available.
  @discardableResult
  public static func login(request: Account_LoginReq, callback: @escaping (Result<Account_LoginFakeResp, Error>) -> Void) -> Bool {
    guard let client = grpcCient  else { return false }
    let api = "/account.Account/login"

    let call = client.makeUnaryCall(path: api,
                              request: request,
                              callOptions: client.defaultCallOptions,
                              interceptors: [GRPCInterceptor()],
                              responseType: Account_LoginFakeResp.self)

    call.response.whenCompleteBlocking(onto: .main, callback)

    return true
  }

}

My second question: if the way we send requests above is right, do you have any other suggestions for avoiding the problem?

YunXu6139 avatar Oct 20 '23 10:10 YunXu6139

@glbrntt Sorry, some other factors affected my judgement; GRPCChannelPool doesn't work at all. We still encounter this problem.

When you say "doesn't work at all", do you mean every request fails? Could you provide a bit more information on what the failures look like?

glbrntt avatar Oct 20 '23 12:10 glbrntt

I send a request A, then send a request B. B gets a response first, with a 404 or 502 error, and my request A then immediately fails with an "unavailable (14): Transport became inactive" error. This happens whether I use GRPCChannelPool or ClientConnection, as posted above. @glbrntt

YunXu6139 avatar Oct 23 '23 02:10 YunXu6139

gRPC doesn't respond with 404 or 502. If the server responds with either of these then gRPC will close the connection, so this doesn't seem like a grpc-swift issue to me. It sounds more like you have a proxy between your client and server which is causing issues, so I'd suggest looking there.

glbrntt avatar Oct 23 '23 07:10 glbrntt
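(Editorial sketch, not advice from the maintainers: if it's unclear which hop is returning the 404/502, grpc-swift's client error delegate can log the underlying error, e.g. the "Invalid HTTP response status" seen earlier, when the connection is torn down. This assumes the v1 ClientErrorDelegate API and the corresponding builder option; the ErrorLogger type and the host/port values are illustrative.)

    import GRPC
    import Logging
    import NIO

    // Illustrative delegate that logs errors the client catches on the connection.
    final class ErrorLogger: ClientErrorDelegate {
      func didCatchError(_ error: Error, logger: Logger, file: StaticString, line: Int) {
        logger.error("client caught error: \(error)")
      }
    }

    let group = PlatformSupport.makeEventLoopGroup(loopCount: 1)
    let errorLogger = ErrorLogger()

    let channel = ClientConnection.insecure(group: group)
      .withErrorDelegate(errorLogger)
      .connect(host: "example.com", port: 2080)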