Dan Mayer
@SyntaxRules-avant do you have an estimate of the number of files? The N+1 is more per file than per LOC. * Have you seen code files you do not expect...
OK, so 4491 files isn't that many... but I could see things timing out in 30s, especially on a slower Redis... I did verify that this is currently an N+1...
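A minimal, self-contained sketch of the N+1 pattern described above: one Redis call per tracked file when building a report. `FakeRedis` is a hypothetical hash-backed stand-in so the example runs without a live Redis server; the key names and data shapes are illustrative assumptions, not coverband's actual schema.

```ruby
# FakeRedis: hypothetical in-memory stand-in for a Redis client,
# counting calls to make the N+1 visible.
class FakeRedis
  attr_reader :calls

  def initialize
    @data = {}
    @calls = 0
  end

  def hset(key, field, value)
    @calls += 1
    (@data[key] ||= {})[field] = value
  end

  def hgetall(key)
    @calls += 1
    @data[key] || {}
  end
end

redis = FakeRedis.new
files = (1..4491).map { |i| "app/models/file_#{i}.rb" }
files.each { |f| redis.hset("coverage.#{f}", "lines", "1,0,5") }

# Building a full report issues one read per file: 4491 extra round trips,
# which is where a slower Redis plus a 30s timeout starts to bite.
report = files.to_h { |f| [f, redis.hgetall("coverage.#{f}")] }
redis.calls # => 8982 (4491 writes + 4491 reads)
```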
note that two improvements landed in 6.0.2 that should help folks that were seeing slowness on reporting thanks to @makicamel folks can give it a try and let us know...
OK, folks I am finally making some progress on this... I have a small reproducible benchmark that shows the slowdown as additional files are added to a project. I will...
OK, found some interesting things... while the HashRedis store is faster in some cases and removes a rare race condition present in the traditional Redis store... it is far less performant...
OK, folks, while there are likely a few options to improve the performance, since the HashRedis algo stores and pulls data via a key per file, even with pipelining it...
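A hedged sketch of why pipelining helps but cannot fully erase a key-per-file layout's cost: the commands are batched into one network round trip, yet the server still executes one lookup per file key. `CountingStore` is an illustrative fake, not the real store.

```ruby
# CountingStore: hypothetical store that counts network round trips
# to contrast per-key reads with a pipelined batch.
class CountingStore
  attr_reader :round_trips

  def initialize(data)
    @data = data
    @round_trips = 0
  end

  def hgetall(key)
    @round_trips += 1          # one network round trip per call
    @data[key] || {}
  end

  # Pipelined variant: all keys fetched in a single round trip,
  # but the server still does one lookup per key.
  def pipelined_hgetall(keys)
    @round_trips += 1
    keys.map { |k| @data[k] || {} }
  end
end

data = (1..100).to_h { |i| ["coverage.file_#{i}.rb", { "1" => i }] }
store = CountingStore.new(data)

store.pipelined_hgetall(data.keys)      # 1 round trip for all 100 files
data.keys.each { |k| store.hgetall(k) } # 100 more round trips, one each
store.round_trips # => 101
```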
Also, if someone can give me some latency stats for their Redis calls, that would be helpful. I am able to add arbitrary latency to my Redis benchmarks... but basing...
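A sketch of how arbitrary latency can be layered onto a benchmark: wrap the store and sleep per call to mimic a slower or remote Redis. `LatencyWrapper` and the 1 ms figure are illustrative assumptions, not the actual benchmark harness.

```ruby
require "benchmark"

# LatencyWrapper: hypothetical decorator that adds a fixed delay per call
# to simulate network round-trip time to Redis.
class LatencyWrapper
  def initialize(store, latency_s)
    @store = store
    @latency_s = latency_s
  end

  def get(key)
    sleep(@latency_s)   # simulated network round trip
    @store[key]
  end
end

store = LatencyWrapper.new({ "app/models/user.rb" => [1, 0, 5] }, 0.001)
elapsed = Benchmark.realtime { 100.times { store.get("app/models/user.rb") } }
# 100 sequential 1 ms calls cost roughly 0.1 s before any real work happens,
# which is why real-world latency numbers matter as much as command counts.
```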
OK, yeah, it requires some extra work, but I could make the search work across the paging. I don't think I can do the normal table sorting, as that requires...
Yeah, at the moment loading a single file still loads the entire coverage report and then grabs the one file... so we could definitely improve the way we load...
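A sketch of the improvement hinted at here: with a key per file, a single file's coverage can be fetched by its own key instead of loading and scanning the full report. The key format and data shape below are illustrative assumptions.

```ruby
# Hypothetical key-per-file layout: each file's coverage lives under its
# own key, so a single-file view needs only one keyed lookup.
STORE = {
  "coverage.app/models/user.rb"  => { "1" => 5, "2" => 0 },
  "coverage.app/models/order.rb" => { "1" => 9 },
}.freeze

def file_coverage(store, path)
  store.fetch("coverage.#{path}", {})   # one keyed lookup, not a full-report load
end

file_coverage(STORE, "app/models/user.rb") # => {"1"=>5, "2"=>0}
```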
OK @frsantos, I have a big win on loading single files with the HashRedis store. While the N+1 is problematic when building a full report... it made it easy...