Firebase 9.4.2 Performance
Hi, I had my own implementation on 9.3.0 with a custom HTTP/2 client using:

```java
public static HttpAsyncClientBuilder defaultHttpAsyncClientBuilder() {
    PoolingAsyncClientConnectionManager connectionManager = new PoolingAsyncClientConnectionManager();
    connectionManager.setMaxTotal(200);
    connectionManager.setDefaultMaxPerRoute(100);
    connectionManager.setDefaultConnectionConfig(
        ConnectionConfig.custom().setTimeToLive(-1L, TimeUnit.MILLISECONDS).build());
    connectionManager.setDefaultTlsConfig(
        TlsConfig.custom().setVersionPolicy(HttpVersionPolicy.NEGOTIATE).build());
    return HttpAsyncClientBuilder.create()
        .setH2Config(H2Config.custom().setMaxConcurrentStreams(100).build())
        .setHttp1Config(Http1Config.DEFAULT)
        .setConnectionManager(connectionManager)
        .setRoutePlanner(new SystemDefaultRoutePlanner(ProxySelector.getDefault()))
        .disableRedirectHandling()
        .disableAutomaticRetries();
}
```
It reached up to ~1.7k qps, close to the upper limit of what was possible, though there were rare server crashes whose cause I never figured out.
Now I'm using the latest version, and no matter what I do I get about 180 qps max.
The default H2Config uses 250 concurrent streams (shouldn't it be 100?), and the IO reactor uses one IO thread per CPU core, so about 10 in my case.
I'm now using this configuration; before, I used the default from the wiki page:
```groovy
def h2Config = H2Config.custom()
    .setMaxConcurrentStreams(100)
    .setInitialWindowSize(1048576 * 2)
    // .setPushEnabled(false)
    .build()

def ioReactorConfig = IOReactorConfig.custom()
    .setIoThreadCount(Runtime.getRuntime().availableProcessors() * 2)
    .setSoTimeout(Timeout.ofMilliseconds(60000))
    .build()

def client = H2AsyncClientBuilder.create()
    .setH2Config(h2Config)
    .setIOReactorConfig(ioReactorConfig)
    .disableRedirectHandling()
    .disableAutomaticRetries()
    .build()

options = FirebaseOptions.builder()
    .setCredentials(GoogleCredentials.fromStream(resource))
    .setHttpTransport(new ApacheHttp2Transport(client))
    .build()
```
So the latest Firebase version sends at 1/10 of the original speed?
Hi @SmikeSix2, are you using a custom ThreadPoolExecutor? In the 9.4.0 release we made a change to limit the thread count in our default ThreadPoolExecutor to address memory usage problems. If you were passing your own custom transport and client in 9.3.x with no issues, that limit could be causing the bottleneck rather than the transport itself. Can you try setting a custom ThreadPoolExecutor that fits your environment's resources to see if you see any improvement?

If that isn't the solution, could you provide a bit more context on how you set up your transport before and after 9.4.0 so we can investigate further?
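For reference, a minimal sketch of the kind of executor sizing being suggested. The `buildExecutor` helper and its sizing numbers are my own assumptions, not the SDK's defaults; pick a thread count that matches your environment's memory and throughput needs.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class FcmExecutorSketch {
    // Hypothetical sizing: FCM sends are IO-bound, so allow more threads than cores.
    static ThreadPoolExecutor buildExecutor(int cores) {
        int threads = Math.max(cores * 4, 32);
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                threads, threads,              // fixed-size pool
                60L, TimeUnit.SECONDS,         // keep-alive for idle threads
                new LinkedBlockingQueue<>());  // unbounded work queue
        executor.allowCoreThreadTimeOut(true); // let idle threads exit to save memory
        return executor;
    }

    public static void main(String[] args) {
        ThreadPoolExecutor executor = buildExecutor(Runtime.getRuntime().availableProcessors());
        System.out.println("max pool size: " + executor.getMaximumPoolSize());
        executor.shutdown();
    }
}
```

Wiring an executor like this into the Admin SDK would go through `FirebaseOptions.builder().setThreadManager(...)` with a custom `ThreadManager` subclass that returns it from `getExecutor`.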