YJIT: Enhance the `String#<<` method substitution to handle integer codepoint values.
This PR extends YJIT's method substitution for `String#<<` to handle integer codepoints as well. If the string is ASCII-8BIT and the codepoint is a byte value, YJIT will dispatch to `rb_str_buf_cat_byte` as a fast path for working with binary strings. Otherwise, it dispatches to the general `rb_str_concat`, just as `vm_opt_ltlt` would.
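In plain Ruby terms, the two paths the substitution distinguishes look like this (ordinary `String#<<` semantics; nothing YJIT-specific is involved):

```ruby
bin = "".b        # ASCII-8BIT (binary) string
bin << 0x41       # an integer in 0..255 is appended as a raw byte
                  # (the rb_str_buf_cat_byte fast path)

utf8 = +"こ"      # UTF-8 string
utf8 << 0x3093    # the codepoint is encoded per the string's encoding
                  # (the general rb_str_concat path)
```

After this, `bin` is `"A".b` and `utf8` is `"こん"`.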
`rb_str_buf_cat_byte` currently works with both ASCII-8BIT and US-ASCII, but this YJIT change only optimizes for ASCII-8BIT. It could be extended easily enough with an additional comparison. The encoding indices for a handful of encodings, including both ASCII-8BIT and US-ASCII, are fixed and sequential, so we could also do a range check. For the time being, I've omitted the handling of US-ASCII. I'd like to get feedback on the simplified PR and extend it with US-ASCII handling if needed (which I'm also not convinced we need).
Please advise if the mechanism I'm using to handle polymorphic dispatch is incorrect. We already have `jit_rb_str_concat` as a method substitution for `String#<<`, but it deliberately handles only string arguments. I could have merged the two, but that struck me as complicated. However, I also don't know whether it's fine to call between methods like this, and it ends up duplicating some of the type checks to ensure we dispatch to the correct type depending on what we see at compile time. I was unsure how to handle the runtime guards, so please pay extra attention to that.
:white_check_mark: All tests passed — 33929 passed, 0 failed (1 flake).
Hi Kevin.
Commenting to give you as much feedback as possible.
> This PR extends YJIT's method substitution for `String#<<` to handle integer codepoints as well. If the string is ASCII-8BIT and the codepoint is a byte value, YJIT will dispatch to `rb_str_buf_cat_byte` as a fast path for working with binary strings.
I think that the idea of this PR is a good one. It seems like a sensible specialization to implement.
I wrote some comments on the code itself.
Two things I would like to see in a PR like this:
- Benchmark results to know how it affects performance on protoboeuf and our headline benchmarks (if any impact)
- Maybe a dump of machine code generated when concatenating a few bytes in a loop just to see if we're doing anything obviously inefficient
> Please advise if the mechanism I'm using to handle polymorphic dispatch is incorrect. We already have `jit_rb_str_concat` as a method substitution for `String#<<`, but it deliberately only handled string arguments. I could have merged the two, but that struck me as being complicated. However, I also don't know if it's fine to call between methods like this. And, it ends up duplicating some of the type checks to ensure we dispatch to the correct type depending on what we see at compile time. I was unsure on how to handle the runtime guards, so please pay extra attention to that.
Seems sensible. Duplicating checks that occur at run-time would not be good but the checks you duplicated occur at compilation time. It makes sense to write the logic for this in a separate function to avoid ending up with a megafunction that is hard to follow.
Before this commit Protoboeuf+YJIT is about 5.28x slower than Google's protobuf implementation, but after this commit it is 4.33x slower.
Before:

```
ruby 3.4.0dev (2024-07-09T17:22:29Z master 6f6aff56b1) +YJIT [arm64-darwin23]
Warming up --------------------------------------
     encode upstream    14.000 i/100ms
   encode protoboeuf     2.000 i/100ms
Calculating -------------------------------------
     encode upstream    140.854 (± 0.7%) i/s -  714.000 in 5.069352s
   encode protoboeuf     26.672 (± 3.7%) i/s -  134.000 in 5.028086s

Comparison:
     encode upstream:    140.9 i/s
   encode protoboeuf:     26.7 i/s - 5.28x slower
```
After:

```
ruby 3.4.0dev (2024-07-09T19:35:29Z yjit-optimize-stri.. 00cc8e4429) +YJIT [arm64-darwin23]
Warming up --------------------------------------
     encode upstream    13.000 i/100ms
   encode protoboeuf     3.000 i/100ms
Calculating -------------------------------------
     encode upstream    137.078 (± 0.7%) i/s -  689.000 in 5.026818s
   encode protoboeuf     31.678 (± 3.2%) i/s -  159.000 in 5.023590s

Comparison:
     encode upstream:    137.1 i/s
   encode protoboeuf:     31.7 i/s - 4.33x slower
```
I like Alan's idea of embedding some of the checks in a C function to save on code size and maybe make the code a bit more readable.
Also thanks Kevin for persisting. You picked a hard problem for your first big PR! 😉
An alternative to what you have is to make a new Rust/C function that essentially does the logic that you currently inline into every site (`if codepoint.is_fixnum() { rb_str_buf_cat_byte } else { rb_str_concat }`). That way you only generate one ccall at each site.
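A plain-Ruby sketch of that single-helper shape, purely to illustrate the control flow; `ltlt_helper` is a hypothetical name, and the real helper would live in Rust/C inside YJIT:

```ruby
# One function owns the branch, so the JIT would emit a single call per
# site instead of inlining the whole check at every call site.
def ltlt_helper(str, arg)
  if arg.is_a?(Integer) && str.encoding == Encoding::ASCII_8BIT && arg.between?(0, 255)
    str << arg.chr.b   # byte fast path (stand-in for rb_str_buf_cat_byte)
  else
    str.concat(arg)    # general path (stand-in for rb_str_concat)
  end
end

buf = "".b
ltlt_helper(buf, 0x41)   # takes the byte path
ltlt_helper(buf, "BC")   # takes the general path
```

After both calls, `buf` holds the bytes of `"ABC"` with ASCII-8BIT encoding.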
Okay. I think you were simplifying, but for completeness the check is really (in pseudo-code) `if codepoint.is_fixnum() && receiver.is_binary_string() && codepoint.is_byte()`. `rb_str_buf_cat_byte` does not have that logic. The caller is expected to call it only when those conditions hold (actually, `rb_str_buf_cat_byte` also works on US-ASCII strings, but I omitted that in YJIT), and there are assertions at the start of `rb_str_buf_cat_byte` that check those conditions hold.
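The `codepoint.is_byte()` part of that precondition is visible in plain Ruby semantics too: for an ASCII-8BIT receiver, `String#<<` only accepts integers that fit in a single byte.

```ruby
bin = "".b
bin << 0xFF           # 0..255: appended as a single raw byte

overflow = begin
  "".b << 0x100       # 256 does not fit in one byte for ASCII-8BIT
  nil
rescue RangeError => e
  e                   # "256 out of char range"
end
```

So a fast path guarded on fixnum + binary receiver + byte range never has to handle the error case; anything else falls through to the general concat.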
If you write it in C, it could go into string.c and you can additionally avoid removing `static` from `rb_str_buf_cat_byte()`.
I'll have to introduce a new function. I don't think updating `rb_str_buf_cat_byte` to do those checks is the right way to go. It would mean duplicating checks in the interpreter to derive information we already have.
Yes. You should make clear that the new function you add is a YJIT helper, and by not putting it in headers, express that it has many preconditions that are hard to meet and is not for general usage.
I see that you modified the regular string concat to fold it into the C call as well. The C code looks well structured, but the downside of that is that we don't necessarily benefit from type information that YJIT might have.
I benchmarked this PR locally on my macbook:
```
master: ruby 3.4.0dev (2024-07-29T15:07:53Z master 86a762ce56) [arm64-darwin23]
cat: ruby 3.4.0dev (2024-07-26T19:34:15Z yjit-optimize-stri.. 3c3aba4119) [arm64-darwin23]

----------  -----------  ----------  --------  ----------  -----------  ----------
bench       master (ms)  stddev (%)  cat (ms)  stddev (%)  cat 1st itr  master/cat
protoboeuf  78.7         0.9         81.8      1.1         0.97         0.96
str_concat  39.7         6.4         42.7      4.0         0.92         0.93
----------  -----------  ----------  --------  ----------  -----------  ----------
```
At the moment it seems slightly slower, so not a win yet. Initially Aaron had reported a big speedup on protoboeuf with your original patch.
I'm sorry about all the back and forth. I don't fully understand why the original patch had such a big speedup in the first place compared to this. It might be worth digging more into where the overhead is coming from and doing more experiments. Maybe we could organize a pairing session with @XrXr ?
> Initially Aaron had reported a big speedup on protoboeuf with your original patch. ... I don't fully understand why the original patch had such a big speedup in the first place compared to this.

It looks like the speed-up was on the encoding benchmark and you ran the decoding benchmark. https://github.com/ruby/ruby/pull/11032#issuecomment-2218808415
Mildly embarrassed 😅 Nevertheless, it's not ideal that it would slow down decode.
It does seem slower on encode too, but we should probably run these benchmarks on an AWS machine as well:
```
master: ruby 3.4.0dev (2024-07-29T15:07:53Z master 86a762ce56) [arm64-darwin23]
cat: ruby 3.4.0dev (2024-07-26T19:34:15Z yjit-optimize-stri.. 3c3aba4119) [arm64-darwin23]

-----------------  -----------  ----------  --------  ----------  -----------  ----------
bench              master (ms)  stddev (%)  cat (ms)  stddev (%)  cat 1st itr  master/cat
protoboeuf         76.2         2.9         78.6      0.8         1.00         0.97
protoboeuf-encode  89.8         1.2         92.6      1.4         0.96         0.97
str_concat         43.8         4.8         44.0      4.0         0.96         0.99
-----------------  -----------  ----------  --------  ----------  -----------  ----------
```
Also, @nirvdrum, have you rebased this PR on master recently? Just to make sure it's a fair comparison.
I put the code that moves the older jit_rb_str_concat into a separate commit. It should be easy to compare performance with and without that change.
I've been rebasing the branch each time I edit it (although it's possible I've forgotten to pull first). It's currently based on 86a762ce56cd44db69ccff2e02587ed748a3ad04, which matches the version of master you were using. But I pushed that about 7 minutes after your comment (sorry, I didn't see it earlier, or I would have mentioned it). It looks like the version you used was based off 83b0cedffe5a9bd6c5288fa0d9614d6ca39e63e8.