311-data
Audit site performance using Lighthouse
Overview
We need to audit the site's performance (particularly the map and the reports) using Lighthouse so that we can quantify our performance improvements to the site.
Action Items
- [x] Add documentation on how to do this in the client readme
- [x] Run an initial audit on dev.311-data.org
- [x] Think about how we want to use this in the long run: require a Lighthouse audit for each PR? Run it as part of CI? Do it on an ad hoc basis?
Resources/Instructions
https://developer.chrome.com/docs/lighthouse/overview/
Took a first stab at the Lighthouse documentation. Since Lighthouse is a dev tool that surfaces issues and improvement opportunities in front-end performance and design, it may be a good practice for any developer working on the front end to run it on a discretionary basis whenever new changes are made, to highlight and log potential issues. On a first run against the 311-data site, the audit does present several opportunities where we can improve. Let me know if you have any questions or if you want me to update the documentation in any way.
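For anyone following along, an audit can also be scripted instead of run from DevTools. The sketch below assumes the Lighthouse CLI is installed (npm i -g lighthouse); the URL, output path, and the idea of wrapping it in Python are just illustrations, not part of the project today.

```python
# Sketch: assembling a Lighthouse CLI invocation from Python.
# Assumes the `lighthouse` npm package is installed and on PATH.
import subprocess

def build_lighthouse_cmd(url, output_path="report.json", preset=None):
    """Assemble a Lighthouse CLI command for auditing `url`."""
    cmd = [
        "lighthouse", url,
        "--output=json",                 # machine-readable report
        f"--output-path={output_path}",
        "--chrome-flags=--headless",     # no visible browser window
    ]
    if preset:  # e.g. "desktop"; mobile emulation is Lighthouse's default
        cmd.append(f"--preset={preset}")
    return cmd

if __name__ == "__main__":
    cmd = build_lighthouse_cmd("https://dev.311-data.org/map", preset="desktop")
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually run the audit
```

Printing the command first (rather than running it blindly) makes it easy to paste into a terminal or a CI step.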
Thanks @edwinjue. Could you share the results of the Lighthouse report here?
Sure, not a problem.
I just ran the audit again on the map page and noticed a significant improvement from when I ran it yesterday. Google explains why scores may fluctuate.
Domain: https://dev.311-data.org/map
Results
Per Google, the metric scores and the performance score are color-coded according to these ranges:
- 0 to 49 (red): Poor
- 50 to 89 (orange): Needs Improvement
- 90 to 100 (green): Good
To provide a good user experience, sites should strive to have a good score (90-100). A "perfect" score of 100 is extremely challenging to achieve and not expected. For example, taking a score from 99 to 100 requires about the same amount of metric improvement as taking a score from 90 to 94. (source)
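The banding above can be written as a tiny helper. This is only an illustration of Google's published ranges; the function name and return strings are my own, not part of Lighthouse.

```python
def score_band(score):
    """Map a 0-100 Lighthouse score to its color band, per Google's ranges."""
    if not 0 <= score <= 100:
        raise ValueError("Lighthouse scores range from 0 to 100")
    if score <= 49:
        return "Poor (red)"
    if score <= 89:
        return "Needs Improvement (orange)"
    return "Good (green)"

# Today's desktop performance score of 50 just crosses out of the red band:
print(score_band(50))  # Needs Improvement (orange)
```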
Desktop browser results:
Yesterday: https://drive.google.com/file/d/1GyNymtH17tMLC32bPiNYFbbE8vOj4LFr/view?usp=sharing (pdf)
- Performance: 24
- Accessibility: 98
- Best Practices: 92
- SEO: 86
- PWA: n/a
Today: https://drive.google.com/file/d/1ivIoRjVksXQXTBH3pCjn8rhGKaMWb58Z/view?usp=sharing (pdf)
- Performance: 50 (increase)
- Accessibility: 98
- Best Practices: 92
- SEO: 83 (decrease)
- PWA: n/a
Mobile browser results:
Today: https://drive.google.com/file/d/1tQ0Q70wjkVF-xrPGtM7ZNpdGsVgaUbWv/view?usp=sharing (pdf)
- Performance: 17 (< desktop)
- Accessibility: 98
- Best Practices: 92
- SEO: 86 (> desktop)
- PWA: n/a
Conclusion:
Using today's performance scores, perceived load time is much quicker in a desktop browser (50) than in a mobile browser (17), because the first contentful paint, time to interactive, and speed index metrics are approximately 5 times faster when loading in a desktop browser than in a mobile browser. Additionally, the mobile browser experienced a total blocking time (the sum of all blocking periods between first contentful paint and time to interactive) of 1,920 ms, versus only 90 ms in a desktop browser. In terms of performance, the most significant improvement opportunities are, in short:
- enabling text compression on the server side (to minimize total network bytes) (solution)
- serving static assets with an efficient cache policy (solution)
- avoiding enormous network payloads (solution)
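To give a feel for the first opportunity, gzip shrinks repetitive text assets (JS, JSON, HTML) dramatically. A stdlib-only sketch with a made-up payload standing in for a server response:

```python
import gzip

# Made-up, repetitive text payload standing in for a JSON API response.
payload = b'{"request_type": "Bulky Items", "status": "open"}\n' * 1000

compressed = gzip.compress(payload)
ratio = len(compressed) / len(payload)
print(f"raw: {len(payload)} bytes, gzipped: {len(compressed)} bytes "
      f"({ratio:.1%} of original)")
```

Real savings depend on how repetitive the asset is, but text-heavy responses routinely compress to a small fraction of their raw size, which is exactly the "total network bytes" Lighthouse is measuring.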
There are some less significant considerations and improvement opportunities not mentioned here; you can learn more about them by following the "Learn more" links within the body of the Lighthouse audit (see PDFs). Addressing the most significant performance issues will have a much more pronounced effect for mobile users than for desktop users.
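One clarification on the total blocking time figures above: Lighthouse actually sums only the blocking portion of each main-thread task, i.e. the time beyond a 50 ms threshold, for tasks between first contentful paint and time to interactive. A sketch with hypothetical task durations (not the actual 311-data traces):

```python
BLOCKING_THRESHOLD_MS = 50  # tasks longer than this are "long tasks"

def total_blocking_time(task_durations_ms):
    """Sum the blocking portion (time beyond 50 ms) of each long task.

    `task_durations_ms` are main-thread task lengths observed between
    first contentful paint and time to interactive.
    """
    return sum(max(0, d - BLOCKING_THRESHOLD_MS) for d in task_durations_ms)

# Hypothetical trace: a 30 ms task contributes nothing; 120 ms and 400 ms
# tasks contribute their excess over 50 ms.
print(total_blocking_time([30, 120, 400]))  # (120-50) + (400-50) = 420
```

This is why a page can have many short tasks yet a TBT of zero: only the excess over 50 ms per task counts.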
How scores are weighted: https://web.dev/performance-scoring/#weightings
Thanks Edwin, this is super interesting!
It's interesting that Lighthouse recommends enabling text compression. I see that almost everything we retrieve from the server is already compressed using gzip: we use GZipMiddleware in the server, and when I look at the Network tab in Chrome DevTools, I see content-encoding: gzip in the Response Headers. Any ideas why Lighthouse could be flagging this?
bundle.js should be compressed too, though. We're getting it from a CDN rather than the server, which explains why it's not currently compressed.
Note: we haven't gotten any requests from Socrata in the past week, so running a Lighthouse report right now would not be reflective of typical expected behavior.
Hi Nich, apologies for the delayed response. If you are seeing content-encoding: gzip in the response headers, the content is being compressed. However, here are some step-by-step instructions from an engineer at Google on how to enable compression, in case something was overlooked.
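One more thing worth double-checking (an assumption on my part, since I haven't audited the server config): Starlette's GZipMiddleware only compresses a response when the client sends Accept-Encoding: gzip and the body is at least minimum_size bytes (500 by default), so small responses can legitimately appear uncompressed. A stdlib-only sketch of that decision logic:

```python
DEFAULT_MINIMUM_SIZE = 500  # Starlette GZipMiddleware default, in bytes

def will_gzip(accept_encoding, body_size, minimum_size=DEFAULT_MINIMUM_SIZE):
    """Mimic the conditions under which GZipMiddleware compresses a response.

    `accept_encoding` is the client's Accept-Encoding request header;
    `body_size` is the uncompressed response size in bytes.
    """
    client_accepts = "gzip" in accept_encoding.lower()
    return client_accepts and body_size >= minimum_size

print(will_gzip("gzip, deflate, br", 48_000))  # large JSON response -> True
print(will_gzip("gzip, deflate, br", 120))     # tiny response -> False
```

If Lighthouse is flagging specific URLs, comparing their sizes against the middleware's threshold could explain the report without anything actually being broken.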
Hi @edwinjue and @nichhk - has this issue been completed? It looks like all of the actions have been checked. If so, please close this issue. Thanks!