grpc-web
Is there any way to circumvent the browser's limitations on concurrent connections (HTTP/2)?
Hi there,
After hours of reading, and getting stuck in a cycle of the same web search results without ever finding a clear statement or solution, I decided to open a request for clarification here.
I found these:
https://github.com/grpc/grpc-web/issues/432 --> no proxy used
and
https://github.com/grpc/grpc-web/issues/81 --> nginx-related; what to do with Envoy?
which seem closest to my question, but... I don't get it.
So, from scratch: we're using gRPC-Web for our browser client, Envoy as a proxy, and some gRPC backend services.
Our application has many one-to-many subscriptions (stock prices, for example). Chrome handles 6 server-streaming requests, and that's it. From my understanding, gRPC-Web can use HTTP/2 transport with some differences from the gRPC protocol. Chrome should be using HTTP/2 transport because it supports it (?), but it doesn't. This way I'm limited to 6 open server streams, which is practically nothing.
What am I missing? And what's the point of gRPC-Web being GA when it's not really usable in web clients?
Hope someone can bring some light into the dark...
Greetings, Maik
@MderM Sounds like you're saying Chrome limits you to only 6 streaming connections, therefore it's not usable for your app? I haven't had to have that many connections open in any of my use-cases. Maybe a workaround would be to do application-level multiplexing. Make a single streaming request to a back-end service, and then subscribe to all of the other sources on the server-side and merge the responses?
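A rough sketch of the client side of that workaround in TypeScript: a single server stream carries tagged messages, and a small demultiplexer fans them out to per-topic subscribers. All names here are illustrative, not part of any grpc-web API.

```typescript
// Envelope is the assumed wire shape: each message on the single stream
// carries a topic tag plus the actual payload.
type Envelope = { topic: string; payload: unknown };

// Demultiplexer fans incoming envelopes out to per-topic handlers, so one
// physical stream can serve many logical subscriptions.
class Demultiplexer {
  private handlers = new Map<string, Array<(payload: unknown) => void>>();

  subscribe(topic: string, handler: (payload: unknown) => void): void {
    const list = this.handlers.get(topic) ?? [];
    list.push(handler);
    this.handlers.set(topic, list);
  }

  // Called once for every message arriving on the single server stream.
  dispatch(msg: Envelope): void {
    for (const handler of this.handlers.get(msg.topic) ?? []) {
      handler(msg.payload);
    }
  }
}
```

The server-side counterpart would merge the individual gRPC streams and tag each message with its topic before writing it to the one shared stream.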
@jonahbron We have long-running requests, such as Login/Stream(LoginsState) and stock pricing Subscribe(Stock)/Stream(StockPrice), that shall stay alive as long as the web client is open. I think it is a perfectly fitting use case for a web app and the gRPC framework, which itself encourages users to make many simultaneous requests. In my understanding, GA means that such use cases can be handled, and it is kind of a bummer for me that one cannot make more than 6 concurrent server-streaming requests. That made me think I had just overlooked some detail. In Firefox one can configure up to 255 connections, but I thought the gRPC-Web implementation was able to make use of HTTP/2, which brings multiplexing mechanics, albeit with a slightly changed protocol.
We could do the merging/application-level multiplexing, but in that case we'd have to restructure our whole API around one multiplexed subscription. In the end, such a restructuring would negate the benefit of the decent service descriptions that protobuf brings.
@MderM It is odd that Chrome has such a low cap. gRPC-Web does not yet deal with HTTP/2 directly, since browsers do not expose access to it today. Chrome might choose to use HTTP/2 to relay the messages to the proxy if it's supported, but the application client has no control over that.
@MderM FYI, the Improbable gRPC-Web implementation supports using the WebSockets API for connections. This official blog post mentions that if that option is required, devs should consider using that alternate implementation:
https://grpc.io/blog/state-of-grpc-web#conclusion
I can vouch for that implementation working great. I was using it in my application (and made several contributions to it) before the official implementation was published and stable.
This SO post talks about WebSockets and the connection cap: https://stackoverflow.com/questions/32697909
@jonahbron Thanks for that. That blog post was one of the steps in my search cycle that I came across again and again. I think I will give Improbable's implementation a try even though it's marked as alpha. But I really cannot understand how this project can be the official implementation; the use case I described is really common in my field (fintech). In addition, the documentation should really point out such issues.
@MderM, have you had any success in using multiple stream connections by switching to the Improbable GRPC-Web implementation? We have run into the same issue as you have...
@angelomelonas Nope, never got it running. We switched to a secure context (TLS) and configured the Envoy proxy to accept only HTTP/2 via ALPN. In the secure/HTTP/2 context you can use up to 100 concurrent requests, which was the next hardcoded limit in Chrome...
Reopening this to improve the documentation.
Note that grpc-web has no control over the actual HTTP version the user agent negotiates with the network; e.g., any forward proxy or firewall could still limit the HTTP version to HTTP/1.1.
I.e., if you are deploying long-lived streams or have to support many tabs initiating concurrent requests, having a way to guarantee an HTTP/2 deployment is important.
For HTTP/1.1, the 6-connection limit is mostly a Chrome thing... For HTTP/2, the 100-stream limit is enforced by Envoy (which also matches the limit for google.com).
@MderM thanks! I will look into it. Would you mind helping out if I get stuck along the way?
@wenbozhu I'm pretty sure I saw a constant value of 100 in the Chromium code. Also, in Firefox we were able to increase this limit, which allowed us to hold several thousand open streams. So Envoy shouldn't be the problem here. @angelomelonas Feel free to ask! If I can help, I will.
@MderM Thanks for the information. Do you have any pointers to the Chrome or Firefox config? The limit enforced by proxies is mostly for security reasons, as I understand it... so we should probably document the limit (100) as part of the spec.
In FF, about:config:
- network.http.spdy.default-concurrent is the option for concurrent HTTP/2 streams.
- network.http.max-persistent-connections-per-server is the option for HTTP/1.x, but this is limited to a max of 255.

In Chrome, both values are hardcoded afaik.
Envoy doesn't limit HTTP/2 streams by default. Have a look here: https://www.envoyproxy.io/docs/envoy/latest/api-v2/api/v2/core/protocol.proto#envoy-api-msg-core-http2protocoloptions. Also, in my project we never experienced such a limit proxy-wise.
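For reference, the per-connection stream cap Envoy enforces lives in those Http2ProtocolOptions. A sketch of raising it explicitly on the downstream side, using the field name from the core.Http2ProtocolOptions message linked above (the chosen value is only an example):

```yaml
# Goes inside the http_connection_manager filter config:
http2_protocol_options:
  max_concurrent_streams: 1000
```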
@MderM I have configured Envoy as mentioned in this SO post. However, I cannot figure out how to resolve the CORS issue caused by the OPTIONS preflight request sent from the gRPC-Web client to the Envoy proxy. How did you get around this issue?
@angelomelonas You will want to use the CORS settings from the official gRPC-Web documentation: https://github.com/grpc/grpc-web/tree/master/net/grpc/gateway/examples/helloworld
```yaml
cors:
  allow_origin:
  - "*"
  allow_methods: GET, PUT, DELETE, POST, OPTIONS
  allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
  max_age: "1728000"
  expose_headers: custom-header-1,grpc-status,grpc-message
  enabled: true
```
will do the trick
Thanks, @MderM, I updated my Envoy configuration as you suggested, but unfortunately, the issue still persists. I do now also get a net::ERR_CERT_AUTHORITY_INVALID error...
So with the new error, I think the CORS issue might be gone. You will now need valid certificates. But that gets a bit out of scope and is off-topic for this issue. You'll have to dig around the Envoy documentation for which certificate types are valid.
@MderM, this might seem like a stupid question, but do I need to configure the gRPC server to use TLS as well? And with the gRPC-Web client, do you pass in the certificates when you connect to the server? E.g.,

```typescript
connectClient(host: { hostname: string; port: number }) {
  this.chatServiceClient = new ChatServiceClient(
    "https://" + host.hostname + ":" + host.port, null, null);
}
```

where one of the nulls could be the certificates.
> @MderM, this might seem like a stupid question, but do I need to configure the gRPC server to use TLS as well?
We didn't do that. We just used TLS from the browser to Envoy to ensure we get an HTTP/2 connection.
> And with the gRPC-Web client, do you pass in the certificates when you connect to the server? E.g.,
>
> ```typescript
> connectClient(host: { hostname: string; port: number }) {
>   this.chatServiceClient = new ChatServiceClient(
>     "https://" + host.hostname + ":" + host.port, null, null);
> }
> ```
>
> where one of the nulls could be the certificates.
Nope. As far as I remember, the whole switch to HTTP/2 was done via the Envoy config.
I am starting to think that it's perhaps a domain issue (because of the TLS setup in the envoy.yaml file). Do you perhaps have an example envoy.yaml file that I can use as a reference? I have been looking at this example, but I have yet to figure it out (nor could I get their example up and running).
Have a look at this:
```yaml
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address: { address: 0.0.0.0, port_value: 8080 }
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stream_idle_timeout: 0s
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: local_service
              domains: ["*"]
              routes:
              - match: { prefix: "/s1/" }
                route:
                  cluster: s1
                  prefix_rewrite: "/"
                  max_grpc_timeout: 0s
              - match: { prefix: "/s2/" }
                route:
                  cluster: s2
                  prefix_rewrite: "/"
                  max_grpc_timeout: 0s
              cors:
                allow_origin:
                - "*"
                allow_methods: GET, PUT, DELETE, POST, OPTIONS
                allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                max_age: "1728000"
                expose_headers: custom-header-1,grpc-status,grpc-message
                enabled: true
          http_filters:
          - name: envoy.grpc_web
          - name: envoy.cors
          - name: envoy.router
      tls_context:
        common_tls_context:
          alpn_protocols: "h2"
          tls_certificates:
          - certificate_chain:
              filename: "etc/cert/cert.pem"
            private_key:
              filename: "etc/cert/cert.key"
  clusters:
  - name: s1
    connect_timeout: 15s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    hosts: [{ socket_address: { address: host.docker.internal, port_value: 7000 }}]
  - name: s2
    connect_timeout: 15s
    type: logical_dns
    http2_protocol_options: {}
    lb_policy: round_robin
    hosts: [{ socket_address: { address: host.docker.internal, port_value: 8000 }}]
```
Thank you so much @MderM! I really appreciate your effort.
That helped me realise that my issue is most likely the way I generate the certificates... I use the following (as provided by the Envoy documentation):

```shell
openssl req -nodes -x509 -newkey rsa:4096 -keyout example-com.key -out example-com.crt -days 365
```

That gives me a .crt and a .key file. I see you are using a .pem file... How do you generate the keys?
> That gives me a .crt and a .key file. I see you are using a .pem file... How do you generate the keys?
I didn't do that myself. But I think Google knows a tool that produces such certs if it isn't possible with openssl.
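For what it's worth, `openssl req -x509` already emits PEM-encoded output by default; .crt vs .pem is just a naming convention, so renaming the files is usually enough. A sketch (the filenames follow the command above; the -subj value is only an example to make the command non-interactive):

```shell
# Generate a self-signed cert non-interactively, as in the Envoy docs.
openssl req -nodes -x509 -newkey rsa:2048 \
  -keyout example-com.key -out example-com.crt \
  -days 365 -subj "/CN=example.com"

# PEM files are plain text with BEGIN/END markers:
grep "BEGIN CERTIFICATE" example-com.crt

# So for the Envoy config above, renaming is enough:
cp example-com.crt cert.pem
cp example-com.key cert.key
```

Note this produces a self-signed cert, which is why Chrome reports net::ERR_CERT_AUTHORITY_INVALID unless the cert (or its CA) is added to the machine's trust store.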
Hi @MderM (and anyone else who might run into this problem in the future),
I have finally managed to get everything working (gRPC-Web + Envoy + HTTPS)! Using TLS with Envoy, I can now open many more connections (rather than the default limit of 6 imposed by Chrome).
Please see this code review on how to go from a normal HTTP project to an HTTPS project.
The READMEs and so on are still incomplete, so hopefully, by the time anyone reads this, I will have merged and updated the project. Please note that after running the create-cert.sh script you have to add the ca.crt to your machine's Trusted Root Certificates.
It has taken me a few days to figure out. If anyone has any questions, feel free to ask me!
Once, again, thanks to @MderM for pointing me in the right direction :)
Hey @MderM, great that using TLS allows us to use more streams, but did you ever manage to get around the hardcoded limit of 100 in Chrome?
Unfortunately not. Our project using gRPC lies dormant, but we planned to go with a container object for all kinds of subscriptions, so that there would only be one open connection for all streams.
Ah, that's too bad. Thanks for the info anyway.
Hi, you can use @protobuf-ts/grpcweb-transport to call RPCs directly instead of going through the Envoy proxy. You also need @protobuf-ts/protoc and @protobuf-ts/plugin to compile the proto files to TypeScript.