
perf: improve getMetricAsString for long strings + less allocations

Open lassic opened this issue 1 year ago • 13 comments

Do all replacements for escaping in a single pass (fewer large string copies than with chained replace calls).

  • Note that [email protected] (which was a really good improvement) is sometimes on-par or a bit better than this, but not by much, and I think in terms of memory this is a bit more efficient (for long strings) so perhaps there's a place for a different strategy for different inputs, but I didn't want to get into that complexity here.

@shappir your improvement was really good. I was also trying to address allocations here (from the chained replace calls). I'd actually love your take as well, since it's possible the best solution is some combination.
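To make the two strategies under discussion concrete, here is a rough sketch (illustrative function names; not the actual prom-client internals):

```javascript
// Chained approach: each .replace() call allocates a new intermediate string.
function escapeChained(str) {
  return str
    .replace(/\\/g, '\\\\')
    .replace(/\n/g, '\\n')
    .replace(/"/g, '\\"');
}

// Single-pass approach: one scan with a character-class regex and a
// replacement callback, producing a single result string.
const ESCAPE_MAP = { '\\': '\\\\', '\n': '\\n', '"': '\\"' };
function escapeSinglePass(str) {
  return str.replace(/[\\\n"]/g, (ch) => ESCAPE_MAP[ch]);
}

console.log(escapeSinglePass('a"b\nc\\d')); // a\"b\nc\\d
```

Both produce identical output; the difference is in how many intermediate strings get allocated along the way.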


lassic avatar Sep 11 '23 19:09 lassic

exciting! mind adding a changelog entry as well?

Sure, I'll have a look at the changelog.

lassic avatar Sep 12 '23 06:09 lassic

Here's an example of a benchmark vs. [email protected] which shows that in some cases, in terms of ops/sec [email protected] wins out. That's why I would love @shappir to have a look as well since our PRs are not compatible.


lassic avatar Sep 12 '23 07:09 lassic

Cool, let's hold off a tiny bit and see if @shappir is able to provide some feedback here 🙂

SimenB avatar Sep 12 '23 07:09 SimenB

Thank you - reviewing

shappir avatar Sep 12 '23 07:09 shappir

> Thank you - reviewing

Thank you. I'll note that I got to this because we saw getMetricsAsString appearing in many "blocked event loop" stacks, not only due to CPU, but also (it seems) because it causes the young space to fill up more often and trigger GC. I was trying to reduce the (large) string allocations from the replace chains, and was hoping a single pass would also be more efficient. The single-pass theory seems only partly true; it's quite possible that v8 optimizes the native replace path better than one with a JS replacement function (which makes sense).

lassic avatar Sep 12 '23 08:09 lassic

Have you had time to review this yet, @shappir? 🙂

SimenB avatar Sep 26 '23 08:09 SimenB

Sorry for the delay. Reviewing right now.

shappir avatar Sep 26 '23 20:09 shappir

I measured just the string escape implementation outside the context of prom-client (looped 100,000 times over the existing and new escape implementations). When replacing '\' and '\n' the existing implementation consistently came out 2.5× as fast as the new implementation. When also replacing '"' it came out 1.5× as fast. This is because the new implementation runs at the same speed in both cases, while the existing implementation slows down as more characters are replaced.

My conclusion is that a single pass will be faster when replacing 4 or more characters, but not fewer. As a result, it does not seem appropriate for this use case.
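A loop along those lines might look like this (an illustrative sketch, not the actual harness used; the functions and input are stand-ins):

```javascript
// Existing (chained) vs single-pass escape, the two-character case.
const existing = (s) => s.replace(/\\/g, '\\\\').replace(/\n/g, '\\n');
const singlePass = (s) =>
  s.replace(/[\\\n]/g, (c) => (c === '\\' ? '\\\\' : '\\n'));

// Time `iterations` calls of `fn` on `input` and report elapsed wall time.
function bench(label, fn, input, iterations = 100000) {
  const start = Date.now();
  for (let i = 0; i < iterations; i++) fn(input);
  console.log(`${label}: ${Date.now() - start} ms`);
}

// Roughly 1,000-character input mixing backslashes and newlines.
const input = 'help text with\\backslashes and\nnewlines '.repeat(25);
bench('existing (chained)', existing, input);
bench('single pass', singlePass, input);
```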

shappir avatar Sep 26 '23 20:09 shappir

> I measured just the string escape implementation outside the context of prom-client (looped 100,000 times over the existing and new escape implementations). When replacing '\' and '\n' the existing implementation consistently came out 2.5× as fast as the new implementation. When also replacing '"' it came out 1.5× as fast. This is because the new implementation runs at the same speed in both cases, while the existing implementation slows down as more characters are replaced.
>
> My conclusion is that a single pass will be faster when replacing 4 or more characters, but not fewer. As a result, it does not seem appropriate for this use case.

Thanks @shappir, I agree. As noted in my original comments and benchmarks, it's quite clear the new implementation is better for many replacements (big strings) and slower for very short ones. I also mentioned the memory-allocation benefit of fewer chained replace calls (each of which creates a new string); we've seen this affect GC and the event loop. So I'm coming back to my earlier comment: "perhaps there's a place for a different strategy for different inputs, but I didn't want to get into that complexity here." Perhaps the gains for large strings are significant enough to justify choosing a different replacement strategy according to string length?
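Such a length-based dispatch could be sketched as follows (the threshold and names are purely hypothetical and would need tuning with benchmarks):

```javascript
// Hypothetical hybrid escape: choose a strategy based on input length.
// The 1024 threshold is illustrative only, not a measured crossover point.
const LENGTH_THRESHOLD = 1024;

const escapeChained = (s) =>
  s.replace(/\\/g, '\\\\').replace(/\n/g, '\\n').replace(/"/g, '\\"');

const MAP = { '\\': '\\\\', '\n': '\\n', '"': '\\"' };
const escapeSinglePass = (s) => s.replace(/[\\\n"]/g, (c) => MAP[c]);

function escapeHybrid(s) {
  // Short strings: chained native replaces win per the measurements above.
  // Long strings: a single pass avoids large intermediate allocations.
  return s.length < LENGTH_THRESHOLD ? escapeChained(s) : escapeSinglePass(s);
}
```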

lassic avatar Sep 26 '23 20:09 lassic

> perhaps the gains for large strings are significant enough to justify this? choosing a different replacement strategy according to string length?

That seems reasonable to me at least 🙂

SimenB avatar Sep 27 '23 07:09 SimenB

Perhaps I wasn't clear enough or I'm not understanding you:

Based on my tests (using your implementation but external to prom-client), the performance ratios I see are consistent across strings of different lengths. I tested strings of length ~1,000, ~10,000 and ~100,000 and got the same ratios. In all these cases the existing escape function was 1.5 to 2.5 times faster for me.

The new escape function is faster only when the length of the replace chain gets longer. But since currently the length of the chain is capped at 3, this has no impact.

Based on this, my current recommendation is not to merge this PR.

Note: I did not measure GC impact specifically. The existing method may indeed create more garbage which could impact execution speed, even externally to prom-client itself.

shappir avatar Sep 27 '23 07:09 shappir

@shappir I understand, and yes, I misunderstood your initial comment; thanks for the clarification. However, I'm not sure I understand why the project benchmarks differ from the ones you are running. Is it because you are running more dedicated benchmarks on this area with more iterations, while the original benchmarks have a larger deviation? Still, the results are pretty consistent for me, so I wonder what the difference is. Anyway, I'll keep looking at this, but unless someone thinks otherwise, I don't think the case is overwhelming enough for me to push this right now.

@SimenB Let me know what you want to do. Feel free to close the PR or whatever you think.

lassic avatar Sep 27 '23 07:09 lassic

Final note: interestingly, this person conducted a similar comparison, and the results match my findings: https://pablotron.org/2022/03/09/fastest-js-html-escape/#the-winner-h9 (I also checked using replaceAll instead of replace and it made no noticeable difference; apparently it's mostly beneficial when the RegExp is not known in advance.)
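For reference, the replaceAll variant mentioned looks like this (a minimal illustration, not code from the PR):

```javascript
// String.prototype.replaceAll (ES2021) accepts a plain string pattern;
// unlike replace with a string, it substitutes every occurrence, with no
// RegExp involved at all.
const escaped = 'a\\b\\c'.replaceAll('\\', '\\\\');
console.log(escaped); // a\\b\\c
```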

shappir avatar Sep 27 '23 09:09 shappir