Increase first-load performance by disabling Netlify post-processing
TLDR: I was poking around and noticed we were serving a suspiciously large amount of JavaScript to the client, and eventually found that the culprit was Netlify's post-process bundling.
Note: If you're going to be testing performance, compare the live site with this branch deploy I set up, since it seems like Deploy Previews on Netlify have worse performance (perhaps they aren't served from the edge?).
Overview
Welcome to my mini blog post!
After #3754 was merged, I decided to take a quick look at our client-side bundle size, and noticed that we were loading an extremely large single JS file from a CloudFront domain.
This file was huge (>500 KB), larger even than our main app bundle. Additionally, over 55% of the file was unused on the initial page load (according to Lighthouse), which meant we were downloading most of this file for no reason!
Poking around some more, I noticed that when I built Gatsby locally, there was no such file, so eventually I determined it had to be something special Netlify was doing.
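If you want to reproduce the unused-JS measurement programmatically rather than through DevTools, here's a minimal sketch using Lighthouse's Node API. It assumes the `lighthouse` and `chrome-launcher` npm packages, and the exact shape of the audit details may vary between Lighthouse versions:

```ts
// Minimal sketch: measuring unused JavaScript with the Lighthouse Node API.
// Assumes the `lighthouse` and `chrome-launcher` npm packages; the fields on
// `details.items` below match the "unused-javascript" audit, but may differ
// between Lighthouse versions.
import lighthouse from 'lighthouse';
import * as chromeLauncher from 'chrome-launcher';

async function auditUnusedJs(url: string): Promise<void> {
  const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
  try {
    const result = await lighthouse(url, {
      port: chrome.port,
      onlyAudits: ['unused-javascript'],
    });
    const audit = result?.lhr.audits['unused-javascript'];
    const items = (audit?.details as any)?.items ?? [];
    for (const item of items) {
      // Each item reports total vs. wasted bytes for one script on first load.
      console.log(item.url, item.totalBytes, item.wastedBytes);
    }
  } finally {
    await chrome.kill();
  }
}

auditUnusedJs('https://posthog.com/').catch(console.error);
```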
Culprit
Eventually, I found out that the last step of Netlify's build process is to run 'post-processing' on artifacts before deploying them. This includes 'bundling JS', which takes the nicely code-split bundles that Gatsby generates and mashes them together into one big file. Gatsby uses webpack to split bundles so each page loads only the JS and CSS it needs (see the sketch below). This is why there was so much dead code: we were loading code that the homepage (or most other pages) didn't actually need, and that may have been used by only one specific page!
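To make that concrete, here's a rough sketch of what code splitting buys us (the component names here are hypothetical, not from our actual codebase): anything behind a dynamic `import()` becomes its own webpack chunk, so only pages that actually render it download it. Netlify's bundling step undoes exactly this.

```tsx
// Hypothetical example of webpack code splitting via dynamic import().
// `PricingCalculator` is a made-up heavy component; because it's loaded
// with React.lazy + import(), webpack emits it as a separate chunk that
// only /pricing visitors download. Concatenating every chunk into one
// file forces all pages to ship it.
import React, { Suspense, lazy } from 'react';

const PricingCalculator = lazy(() => import('./PricingCalculator'));

const PricingPage: React.FC = () => (
  <Suspense fallback={<div>Loading…</div>}>
    <PricingCalculator />
  </Suspense>
);

export default PricingPage;
```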
Fix
All that was needed to fix this was adding the following config to netlify.toml to disable bundling and minification for JS and CSS files:
```toml
[build.processing]
# Keep post-processing enabled overall...
skip_processing = false
[build.processing.css]
# ...but don't concatenate or minify CSS
bundle = false
minify = false
[build.processing.js]
# ...or JS
bundle = false
minify = false
[build.processing.html]
pretty_urls = true
[build.processing.images]
compress = true
```
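Note that `skip_processing = false` leaves post-processing itself switched on: only JS/CSS bundling and minification are disabled, while pretty URLs and image compression keep working as before.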
Benefits
I've included some numbers below, but overall I noticed around a 25% decrease in the amount of data that had to be downloaded initially. This also correlated with a much higher Lighthouse score, most noticeably on mobile (desktop did improve too, but the effects were less dramatic).
| Page | Perf Score (Desktop) | Perf Score (Mobile) | TTI | Initial load |
|---|---|---|---|---|
| / | 100 (+1%) | 84 (+4%) | 1.7s (-23%) | 1.5 MB (-28%) |
| /docs | 100 (+1%) | 86 (+9%) | 1.3s (-32%) | 900 KB (-35%) |
| /pricing | 99 (+1%) | 85 (+50%) | 1.3s (-38%) | 900 KB (-44%) |
| /questions | 100 (+5%) | 88 (+151%) | 1.1s (-50%) | 1 MB (-38%) |
These numbers are far from scientific (they're just the result of me running local Lighthouse tests), but even so, I think they show a very clear improvement in initial page speed and size.
Effects
Overall, it will probably be hard to consciously notice the improvements to page load if you're on a desktop with a very fast connection, but if you're on mobile or somewhere without blazing-fast internet, things will feel drastically different. Since we're such a globally-minded company, I think it makes sense for us to have a website that people from anywhere can access.
Conclusion
It's probably still worth doing a bit more testing, but so far I've seen better performance on every device I've tried.
This isn't a super high-priority change; it's more that I decided to explore something and got sucked down a debugging rabbit hole.
Other notes
Hasura seems to use Gatsby for their site, and they load JS in the same way as this PR, directly serving the bundle that Gatsby generates with no post-processing.
this could definitely get converted into a mini blog post depending on what @andyvan-ph thinks!
Yes, this is great. Would definitely be up for a blog post on this.
From an SEO pov, this is really important. For the performance reasons you outlined, Google really prefers server-side rendering of JavaScript, so this is a big performance win.
We should see an increase in our (already good) web vitals scores, and hopefully a drop in the average response time in crawl stats. Win win.
Sure @andyvan-ph! I'm gonna do some more testing to make sure the performance improvements are replicable, but once this gets merged I'm happy to go through and convert this to a little blog post? Think it could rank quite well (people searching for Gatsby or how to improve load times), and there are some other things I didn't mention that I could go into.
Worth noting that disabling Netlify's asset optimization disables the use of CloudFront. These features work alongside each other, AFAIK. Interested in seeing how much that affects site speeds/lighthouse scores in different areas (CloudFront vs. Netlify). Could just roll our own if it’s a significant difference.
Also probably worth investigating subsequent and cached page loads/requests! These are the things I’d be thinking about if I came across a blog post like this.
@smallbrownbike Totally! Yeah I still need to do some more investigation this weekend before I'm sure that this actually makes an improvement across the board. On that note, does Netlify track client-side metrics anywhere (FCP, TTI, etc)?
> Worth noting that disabling Netlify's asset optimization disables the use of CloudFront. These features work alongside each other, AFAIK.
Just to unpack this for my less technical brain, does this mean we'd be running the website without a CDN / caching if we made this change?
So taking a closer look, it looks as though this only disables serving JS directly from CloudFront. Images are still served through CloudFront, and Netlify would now serve JS using its own CDN / cache, so performance should remain consistent. This is the same way it serves all HTML files, and I wouldn't even be surprised if it's backed by CloudFront behind the scenes.
No movement on this since July, so I'm going to close this for now (but it does sound interesting and can always be reopened).