[compiler] Add JSX inlining optimization
This adds an InlineJsxTransform optimization pass, toggled by the enableInlineJsxTransform flag. When enabled, JSX is transformed directly into React Element object literals, avoiding the runtime call overhead during element creation.
TODO:
- [ ] Add conditionals to make the transform PROD-only
- [ ] Make the React element symbol configurable so this works with runtimes that support `react.element` or `react.transitional.element`
- [ ] Look into an additional optimization to pass a props spread through directly if none of the properties are mutated
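As a sketch of what the transform produces (using the `react.transitional.element` symbol and a hypothetical `Foo` component for illustration; the exact output shape is the compiler's to define):

```javascript
// Hypothetical component used for illustration.
function Foo(props) {
  return props.text;
}

// Before (automatic runtime): const element = jsx(Foo, { text: "Hello" }, "key1");
// After (inlined): an equivalent React Element object literal, no runtime call.
const element = {
  $$typeof: Symbol.for("react.transitional.element"),
  type: Foo,
  key: "key1",
  ref: null,
  props: { text: "Hello" },
};
```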
This should not be shipped in code that's compiled for npm since it is not compatible with multiple React versions.
It's also important that it doesn't ship in an RSC layer, which needs a runtime to do the warm-up pass.
(In addition, previous testing showed that this is not actually better than a minimal runtime in most environments due to the additional compilation cost.)
To clarify the last statement: most testing of inline objects has compared them against the React runtime, which has overhead mainly due to legacy features like refs and backwards compat with defaultProps. A proper A/B test should compare the performance of inlining against an optimized runtime function that does the same thing the inlining would do. Hopefully in React 20+ that can just be the plain React.jsx function, assuming the deprecations land.
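For illustration, the baseline for such an A/B test might look something like this minimal helper (a sketch, not React's actual implementation; it drops defaultProps resolution and legacy ref extraction entirely):

```javascript
const REACT_ELEMENT_TYPE = Symbol.for("react.transitional.element");

// Minimal jsx() sketch: a single shared construction site with no
// defaultProps resolution and no legacy ref handling.
function jsx(type, props, key) {
  return {
    $$typeof: REACT_ELEMENT_TYPE,
    type,
    key: key === undefined ? null : key,
    ref: null,
    props,
  };
}

const el = jsx("div", { className: "box" }, "k");
```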
In that comparison it was previously discovered that inlining hurts compilation time in VMs like V8. V8 can JIT a callsite once it determines which hidden class to create, but figuring that out for each inlined callsite means the engine has to walk the hidden class discovery chain (object with $$typeof -> object with type -> object with key -> ...) at every site. That significantly slowed down initialization compared to just making a call and having that one shared callsite get JIT:ed. Runtime was slightly faster, but startup was significantly slower.
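The trade-off above can be sketched like this (hypothetical helper names; the two forms build structurally identical elements, the difference is where the engine pays the shape-discovery cost):

```javascript
const TYPE = Symbol.for("react.transitional.element");

// Shared helper: one construction site, so the engine discovers the
// object shape (hidden class) once and can JIT that single callsite.
function createElement(type, key, props) {
  return { $$typeof: TYPE, type, key, ref: null, props };
}

// Inlined form: every JSX callsite becomes its own object literal, so
// the engine repeats the hidden class transition chain
// ({} -> +$$typeof -> +type -> +key -> +ref -> +props) at each site.
const inlined = {
  $$typeof: TYPE,
  type: "span",
  key: null,
  ref: null,
  props: { id: "a" },
};
const shared = createElement("span", null, { id: "a" });
// Both yield the same element; the cost difference is at startup, not
// in the resulting object.
```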
This effect wasn't as big in JSC, possibly due to a faster compiler, but also because JSC's other call optimizations aren't as good, so the relative cost isn't necessarily there. That's why you might see inlining being slightly faster in JSC. (Most comparisons, like Bun's inlining, 1) aren't comparing against an optimized function, so they're not a proper A/B test, and 2) don't care about startup time. So that data is not applicable here.)
For Hermes, since compilation happens entirely offline, the cost of inlining isn't as big, so avoiding the runtime overhead of the extra call might make it worth it there. But it comes at the cost of larger bytecode, which may or may not matter.
Basically I'm skeptical that inlining is actually good in most circumstances; it would be better to just have a single shared runtime. The output is not multi-version compatible (i.e. the strategy of running React Compiler before publishing no longer works), it's actually worse on the largest platform (Chrome)*, and it's not compatible with JSX prewarming in RSC.
*) This could use re-measurement though since it's based on stale data.
@sebmarkbage For now this optimization is meant to be turned on for Hermes only. I benchmarked and profiled the inlining on some test apps and saw some regressions in V8 as you described. In some apps interactions were faster, but memory regressed significantly. Running with --jitless was more stable, which lines up with your observations around JIT issues.
With Hermes, inlining showed wins in interaction performance and memory, even compared to a React 20 style jsx function. With this implementation we can test in production against real apps and understand the full impact of the change and whether it's worth maintaining the optimization for Hermes only.
Nice! Let's ship and iterate