Request goes through envoy and into the grpc service but the response doesn't come back
I have a working app that uses grpc-web via app -> envoy -> grpc service.

So far so good, but if I deploy the app to GKE (Google Kubernetes Engine) and turn on TLS, I start getting `2 UNKNOWN: No status received` replies.

I can see the request reaching the grpc service just fine, and the service replies, but the response never reaches the app. I'm extremely puzzled, as I cannot see what could be going wrong; since everything works without TLS, could the TLS layer be messing something up?
Technically, the deployed service looks like app -> gke ingress + ssl -> envoy + tls -> service. Note the addition of the Google ingress controller, which terminates SSL and then passes the request to envoy over TLS, which in turn forwards it to the service. If the request makes it through all of that just fine, what could be causing the response to go missing?
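For reference, the knob that controls how the GKE ingress talks to its backend is the `cloud.google.com/app-protocols` annotation on the backend Service; if the named port is not marked `HTTP2`, the load balancer speaks HTTP/1.1 to envoy. A minimal sketch of that annotation (the Service and port names here are hypothetical, not my actual config):

```yaml
# Sketch: tell the GKE ingress to use HTTP/2 toward the envoy backend.
# gRPC needs HTTP/2 on every hop, or trailers get dropped.
apiVersion: v1
kind: Service
metadata:
  name: envoy                # hypothetical name
  annotations:
    cloud.google.com/app-protocols: '{"envoy-tls": "HTTP2"}'
spec:
  type: NodePort             # GKE ingress backends are NodePort or NEG-backed
  selector:
    app: envoy
  ports:
    - name: envoy-tls        # must match the key in the annotation above
      port: 8443
      targetPort: 8443
```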
If my grpc service returns an error, it does make it back. With BloomRPC I can see:
```json
{
  "error": "3 INVALID_ARGUMENT: invalid credentials"
}
```
Any otherwise-successful response, however, comes back as:
```json
{
  "error": "2 UNKNOWN: No status received"
}
```
If I test with grpcurl I can see the trailers are "missing":
```sh
grpcurl -d @ -import-path protos -proto accounts.proto -vv mydomain:443 my.Service/Login <<EOM
{
  "username": "username",
  "password": "w00t"
}
EOM
```
```
Resolved method descriptor:
rpc Login ( .my.LoginRequest ) returns ( .my.AuthenticationResponse );

Request metadata to send:
(empty)

Response headers received:
alt-svc: clear
content-type: application/grpc
date: Fri, 26 Feb 2021 04:12:08 GMT
server: envoy
via: 1.1 google
x-envoy-upstream-service-time: 3412

Response trailers received:
(empty)

Sent 1 request and received 0 responses
ERROR:
  Code: Internal
  Message: server closed the stream without sending trailers
```
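The empty trailers block is the tell: the `grpc-status` trailer never arrives. One way to see which protocol is actually reaching envoy is to have its access log print `%PROTOCOL%`. A minimal sketch of such a log entry for the HTTP connection manager in the existing listener config (envoy v3 API; the format string is just an example):

```yaml
access_log:
  - name: envoy.access_loggers.stdout
    typed_config:
      "@type": type.googleapis.com/envoy.extensions.access_loggers.stream.v3.StdoutAccessLog
      log_format:
        text_format_source:
          # %PROTOCOL% shows HTTP/1.1 vs HTTP/2 for the downstream connection
          inline_string: "%START_TIME% %PROTOCOL% %REQ(:PATH)% %RESPONSE_CODE%\n"
```

If requests show up here as HTTP/1.1, the hop in front of envoy is doing the downgrade.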
We had a very similar issue happen recently, and we tracked it back to an istio-proxy sidecar that was being loaded on the ingress pod. The ingress pod itself appeared to be working correctly, but the sidecar was somehow interfering with the gRPC traffic. Disabling the sidecar worked for us.

Most of the comments I was able to find referred to this happening when HTTP/2 traffic gets downgraded to HTTP/1.1 (gRPC requires HTTP/2 end to end). That would also explain the pattern above: gRPC carries its status in HTTP trailers, which an HTTP/1.1 hop drops, while errors like `3 INVALID_ARGUMENT` are typically sent as trailers-only responses whose status travels in the headers and therefore survives the downgrade. I was not able to confirm whether this was happening in the sidecar we disabled, or if it was something else.

Check any proxies or load balancers in the path and verify they are not interfering with the gRPC traffic. Try to rule them out one by one by running your grpcurl command against each proxy point, as sketched below. If you do have sidecars on your pods, verify they are not the culprit by disabling them.
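A rough sketch of that hop-by-hop test, assuming the kind of deployment described above (the deployment and port names are hypothetical):

```sh
# 1. Straight to the gRPC service, bypassing every proxy:
kubectl port-forward deploy/my-grpc-service 50051:50051 &
grpcurl -plaintext -d '{"username": "username", "password": "w00t"}' \
  -import-path protos -proto accounts.proto -vv \
  localhost:50051 my.Service/Login

# 2. Through envoy only, bypassing the GKE ingress
#    (-insecure skips cert verification on the port-forwarded TLS listener):
kubectl port-forward deploy/envoy 8443:8443 &
grpcurl -insecure -d '{"username": "username", "password": "w00t"}' \
  -import-path protos -proto accounts.proto -vv \
  localhost:8443 my.Service/Login

# 3. The full path, ingress included:
grpcurl -d '{"username": "username", "password": "w00t"}' \
  -import-path protos -proto accounts.proto -vv \
  mydomain:443 my.Service/Login
```

The first hop where `Response trailers received` comes back empty is the component dropping them.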
Hi @dvaldivia,

I have a similar setup and I'm experiencing this exact behavior. Did you manage to solve this?

If I hit my grpc service from inside the cluster (with grpcurl -plaintext ...) I do get successful responses back, so I know it's an istio and/or Google misconfiguration somewhere. (I do not use istio mesh sidecars; I only use istio as a gateway.)
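For anyone debugging the same setup: one thing worth ruling out with istio in the path is protocol selection on the backend Service. istio decides how to treat a port from its name prefix or `appProtocol` field, and a port it cannot classify may be handled as plain HTTP/1.1. A minimal sketch, with hypothetical names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-grpc-service      # hypothetical name
spec:
  selector:
    app: my-grpc-service
  ports:
    - name: grpc-api         # the "grpc-" prefix is istio's naming convention
      appProtocol: grpc      # the newer, explicit form of the same hint
      port: 50051
      targetPort: 50051
```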