perf(remix-server-runtime): Performance improvements for large apps
Closes: #4733
- [ ] Docs
- [x] Tests
Testing Strategy: All existing tests are still passing. Since this is only performance-related, I'm not sure of any new tests to add. I'm also running these patches in production with no issues so far.
In apps with many routes, there were some performance issues causing some remix overhead for every request. In my production app (700+ routes), 200ms on average was being added to every request, before my loaders/actions even ran. I narrowed this down to route parsing.
- The `requestHandler` in `createRequestHandler` was calling `createStaticHandlerDataRoutes` on every request, which is an expensive recursive function that creates a nested set of data route objects. In production, `build.routes` shouldn't be changing between requests, so the result is now cached and only recalculated if `loadContext` changes. In development, nothing will be cached since `createRequestHandler` is called on every request.
- Both `createStaticHandlerDataRoutes` and `createRoutes` were recursive functions which repeatedly filtered the entire list of routes. I extracted a `groupRoutesByParentId` function to create an object indexed by `parentId`. The two functions now call this once and pass the result down recursively, avoiding the repeated filtering. In my app, this dropped each of these functions from 50-100ms down to around 1ms each.
- After these fixes, the Remix overhead before running loaders/actions dropped from 200ms to around 1ms.
- With those two functions much quicker now, I tried removing the caching in `requestHandler`. That increased the overhead back up to 10-20ms when the server is under load, so I'm inclined to leave the caching in place as a cheap way to shave those few ms off the response time.
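The indexing optimization above can be sketched as follows. This is a minimal illustration, not the actual Remix source: the function name `groupRoutesByParentId` comes from the PR, but the route shape and the tree-building helper are simplified placeholders.

```typescript
// Simplified route shape for illustration; the real Remix ServerRoute
// carries module, path, and other fields.
interface Route {
  id: string;
  parentId?: string;
}

type RouteTreeNode = Route & { children: RouteTreeNode[] };

// Build a parentId -> children index once, instead of filtering the
// entire route list at every level of the recursion.
function groupRoutesByParentId(routes: Route[]): Record<string, Route[]> {
  const groups: Record<string, Route[]> = {};
  for (const route of routes) {
    const parentId = route.parentId ?? "";
    (groups[parentId] ||= []).push(route);
  }
  return groups;
}

// The recursive tree construction now does an O(1) lookup per level
// rather than an O(n) filter over all routes.
function createRouteTree(
  groups: Record<string, Route[]>,
  parentId = ""
): RouteTreeNode[] {
  return (groups[parentId] ?? []).map((route) => ({
    ...route,
    children: createRouteTree(groups, route.id),
  }));
}
```

With 700+ routes, filtering the whole list at each level of recursion is roughly quadratic in the number of routes; indexing once up front makes the tree build linear, which matches the 50-100ms-to-1ms drop reported above.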
🦋 Changeset detected
Latest commit: 72a3c7079426e5676272e5a433f7382914ba56cd
The changes in this PR will be included in the next version bump.
This PR includes changesets to release 18 packages
| Name | Type |
|---|---|
| @remix-run/react | Patch |
| @remix-run/server-runtime | Patch |
| @remix-run/testing | Patch |
| @remix-run/cloudflare | Patch |
| @remix-run/deno | Patch |
| @remix-run/dev | Patch |
| @remix-run/node | Patch |
| @remix-run/cloudflare-pages | Patch |
| @remix-run/cloudflare-workers | Patch |
| create-remix | Patch |
| @remix-run/css-bundle | Patch |
| @remix-run/architect | Patch |
| @remix-run/express | Patch |
| @remix-run/netlify | Patch |
| @remix-run/vercel | Patch |
| @remix-run/serve | Patch |
| remix | Patch |
| @remix-run/eslint-config | Patch |
Hi @dmarkow,
Welcome, and thank you for contributing to Remix!
Before we consider your pull request, we ask that you sign our Contributor License Agreement (CLA). We require this only once.
You may review the CLA and sign it by adding your name to contributors.yml.
Once the CLA is signed, the CLA Signed label will be added to the pull request.
If you have already signed the CLA and received this response in error, or if you have any questions, please contact us at [email protected].
Thanks!
- The Remix team
Thank you for signing the Contributor License Agreement. Let's get this merged! 🥳
Just updated to remove the caching logic in `server.ts`, as @brophdawg11 mentioned an API change will be coming that will not require this in the future.
Thanks @dmarkow! I copied your approach over to where we do the same route-tree generation client side as well and switched to use default param values instead of `||=`.
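For illustration only (these function names are hypothetical, not the actual Remix source), the switch mentioned above looks like this: both forms produce the same result, but the default parameter value avoids reassigning the argument inside the function body.

```typescript
// Before: reassign the parameter with `||=` on every call.
function childrenWithAssign(
  routes: { parentId?: string }[],
  parentId?: string
) {
  parentId ||= "";
  return routes.filter((r) => (r.parentId ?? "") === parentId);
}

// After: let a default parameter value handle the missing argument.
function childrenWithDefault(
  routes: { parentId?: string }[],
  parentId = ""
) {
  return routes.filter((r) => (r.parentId ?? "") === parentId);
}
```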
From a perf testing standpoint, I created a net-new app with `npx create-remix` and did the following to generate 1000 routes:

```sh
cd app/routes
for N in $(seq 0 1 1000); do cp index.tsx route-${N}.tsx; done
```
Then I booted the app in `npm run dev` mode and ran `ab -n 1000 http://localhost:3000/route-50` to check the perf change. Looks like it cuts the base request time in half at 1000 routes 👍
### dev branch
```
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       1
Processing:    90  111  60.6    105     995
Waiting:       90  110  60.5    105     992
Total:         90  111  60.6    105     995

Percentage of the requests served within a certain time (ms)
  50%    105
  66%    107
  75%    108
  80%    109
  90%    110
  95%    113
  98%    142
  99%    392
 100%    995 (longest request)
```
### With these changes
```
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.1      0       3
Processing:    44   53  33.0     48     408
Waiting:       44   52  32.8     47     407
Total:         44   53  33.0     48     408

Percentage of the requests served within a certain time (ms)
  50%     48
  66%     49
  75%     50
  80%     50
  90%     52
  95%     60
  98%    148
  99%    265
 100%    408 (longest request)
```
As stated, this isn't an issue in prod since we only pay the cost once on app startup, but the client-side change should squeak out a small perf improvement on hydration.
🤖 Hello there,
We just published version v0.0.0-nightly-79859c9-20230203 which includes this pull request. If you'd like to take it for a test run please try it out and let us know what you think!
Thanks!