Are CSP audits too strict?
Feature request summary
The csp-xss audits added in Lighthouse v8 are quite strict and quite opinionated. While they do expose best practices, some of the options are advanced CSP configurations and I think a lot of sites will struggle to pass these audits.
I have always found Lighthouse strikes a good balance between pushing best practices and setting realistic, achievable targets for the average website (with a little effort), but I fear the balance here is off. The worry is that many people will simply dismiss this audit as "unachievable" and/or turn it off in automated Lighthouse runs, rather than using it to improve their security - which I presume is the intention of this audit. Or, worse still, they will rush into a complex topic like CSP without fully understanding it (more on this later). As I say, to me Lighthouse audits should be tough but achievable.
Having strict settings on a specialized tool like the CSP Evaluator (which is being used as the basis for this audit) is fine and to be expected. But applying the same strictness in a more general tool like Lighthouse may need a different thought process to make it palatable to a wider audience who may not be security experts.
In my opinion we should not push too hard for advanced configurations to be adopted by those that fail to understand what it means. We know that not enough sites use a CSP at all, never mind some of these advanced options, and I think initially we should aim to push, and reward, getting basic CSPs in place rather than set the bar too high and insisting on an advanced CSP.
I appreciate that the CSP audit is not yet included in the Best Practices score, but I think some of these settings should be reconsidered before that happens.
In particular, I think the following could perhaps be considered to be Warnings rather than High for now and not fail the audit for missing these:
- `strictDynamic` - Using hashes and nonces is an advanced technique that requires server-side setup to send them, so simple blogs and many simple hosting providers, CMSs, and tools cannot support them (a nonce-based example is shown after these items). Based on last year's Web Almanac data, only 0.225% of mobile websites used `strict-dynamic`. While hashes and nonces do offer the best protection, and I agree this should be flagged to users, I personally think failing the audit for not using them is a step too far given current adoption; it should instead be considered for the future, with just a warning for now. Personally I think that would drive adoption better than jumping straight to a full fail now.
- `missingObjectSrc` - The name of this audit is misleading, as it also flags when `object-src` is provided but not set to `'none'`. Perhaps there should be two audits: 1) if missing (High, so fail) and 2) if present but set to anything but `'none'` (Medium, so warning but not fail, as some sites explicitly choose to allow `object-src`, e.g. to have interactive SVGs).
I also wonder if this setting is too much to push now:
- `reportingDestinationMissing` - Personally I find CSP reports far too noisy to really be useful (mostly due to lots and lots of bad extensions), but that's a different issue. Again, this is an advanced setting IMHO. However, since it's currently a Medium rather than a High and (as I understand it) won't fail the audit, I can live with this one. Personally I'd make it an Info, though.
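For context, a nonce-based strict CSP (the kind these audits reward) looks something like the following; the nonce value is an illustrative placeholder, must be unguessable and regenerated for every response, and every `<script>` tag must carry a matching `nonce` attribute:

```
Content-Security-Policy: script-src 'nonce-abc123' 'strict-dynamic'; object-src 'none'; base-uri 'none'
```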
I'm one of the HTTP Archive maintainers, and I'm happy to get some more stats here if you want. The July HTTP Archive crawl is due to finish in the next week and should have been run with v8.0.0, but I suspect very few sites will be passing, which to me makes this audit too strict, at least until adoption is driven up.
What is the motivation or use case for changing this?
Keep audits realistic and achievable, but with warnings and suggestions for improvements to drive forward best practices, rather than mandating it too early.
This will also reduce the chance of breaking sites with inexperienced developers jumping into complex topics like CSP and going straight to advanced settings without necessarily understanding them (see past history with HSTS preload or HPKP).
How is this beneficial to Lighthouse?
Stops users giving up on audits as unachievable and losing confidence in Lighthouse as a useful tool.
> Using hashes and nonces is an advanced technique that requires server-side setup to send them
This is not entirely true. CSP hashes can be used to implement a strict CSP for a static site (e.g. https://web.dev).
> While hashes and nonces do offer the best protection, and I agree this should be flagged to users, I personally think failing the audit for not using them is a step too far given current adoption; it should instead be considered for the future, with just a warning for now.
Unfortunately, CSPs which use an allowlist instead of hashes/nonces aren't just less secure, they offer no protection against XSS at all in most cases. ~95% of unique CSPs were found to be trivially bypassable in a 2016 study. Removing nonces/hashes as a requirement to pass would give a false sense of security.
> `missingObjectSrc` - The name of this audit is misleading, as it also flags when `object-src` is provided but not set to `'none'`. Perhaps there should be two audits: 1) if missing (High, so fail) and 2) if present but set to anything but `'none'` (Medium, so warning but not fail, as some sites explicitly choose to allow `object-src`, e.g. to have interactive SVGs).
This is a change I would be willing to adopt. CSP Evaluator surfaces this as a Medium warning if `object-src` is set but not `'none'`.
> However, since it's currently a Medium rather than a High and (as I understand it) won't fail the audit, I can live with this one. Personally I'd make it an Info, though.
We can discuss making this a "Low" severity or "Info".
> Keep audits realistic and achievable, but with warnings and suggestions for improvements to drive forward best practices, rather than mandating it too early.
We are not mandating it. We agree that a strict CSP is not easy to adopt, which is why the audit is just an unscored diagnostic. This is how we hope to make the audit palatable, while still being honest about potential bypasses in a weak CSP.
I think that mandating or even encouraging a `report-uri` is a bit much; this is a tracking/fingerprinting vector, especially if combined with the inclusion of `script-sample` or access logs. Privacy tools like content-blocking addons often block reporting for this reason.
For instance, I know that some browser addons (such as Tridactyl and Canvas Fingerprint Defender) trigger CSP violations; by identifying these violations (e.g. with `script-sample`), I can determine which addons a user has installed to better identify them. Reporting is not an opt-in feature in any of the major browsers AFAIK, so this could constitute tracking without prior consent.
Collecting violations can be useful during early stages of CSP development, but logging should be done with care. I personally think that violation reporting should only be used to test out a CSP and should be removed afterwards.
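For example, a policy can be trialled without being enforced by using the report-only header (the endpoint URL here is a placeholder):

```
Content-Security-Policy-Report-Only: script-src 'nonce-abc123'; report-uri https://example.com/csp-reports
```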
Lighthouse is an influential tool, so the advice it gives can have a significant impact on the web. If maintainers still believe that `report-uri`/`report-to` should be the norm (I do hope otherwise), perhaps a middle ground could be to also include recommendations for reducing the sensitivity of logged data and warning against the use of `script-sample` in production.
Some measures that the linked web.dev docs could recommend include sanitizing IPs from access logs, only logging a percentage of requests, and fuzzing timestamps. I am by no means an expert on the subject of sanitizing logs; these are just some uninformed ideas.
Thanks for weighing in with important feedback, folks!
We've decided to make a few changes:
- We will adopt the `missingObjectSrc` / `object-src` is not `'none'` changes.
- We will relax the reporting requirement.
- We will still enforce `strictDynamic`, but the audit will not be scored (turn red) until there is plentiful ecosystem support and adoption has reached at least 2%.
Thanks all. Good improvements IMHO.
@adamraine:
> CSP hashes can be used to implement a strict CSP for a static site (e.g. https://web.dev).
This would appear to be difficult-to-impossible to implement on a Jamstack-style static site, even using a Cloudflare Worker or other similar tooling, as opposed to a conventionally hosted site.
> This would appear to be difficult-to-impossible to implement on a Jamstack-style static site, even using a Cloudflare Worker or other similar tooling, as opposed to a conventionally hosted site.
I know it's possible on web.dev, which uses Eleventy, but that's just one static-site generator. I'm interested to know what makes it "difficult-to-impossible" in your opinion?
> I know it's possible on web.dev, which uses Eleventy, but that's just one static-site generator. I'm interested to know what makes it "difficult-to-impossible" in your opinion?
That's the one my site uses, as well, so I'm in the same boat as web.dev and quite a few others that use Eleventy. For that matter, I don't understand how any SSG-created static site could implement it. If we were talking about a conventional static site hosted on, say, Apache, it would be a relative cinch; but this is a different set of circumstances, of course.
Are you aware there are both a nonce and a hash solution? For an SSG you would use a hash, since you can't generate nonces due to the lack of a backend.
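To illustrate the hash approach: a CSP hash source is just the base64-encoded SHA-256 of the inline script's exact text, so an SSG can compute it at build time. A minimal Node sketch (the script text is a placeholder):

```js
// Compute a CSP 'sha256-...' source for an inline script.
// The hashed text must match the <script> element's contents byte-for-byte.
const crypto = require('crypto');

function cspHash(scriptText) {
  const digest = crypto
    .createHash('sha256')
    .update(scriptText, 'utf8')
    .digest('base64');
  return `'sha256-${digest}'`;
}

// e.g. for <script>console.log('hi');</script>
console.log(cspHash("console.log('hi');"));
// Place the printed value in the script-src directive.
```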
> Are you aware there are both a nonce and a hash solution? For an SSG you would use a hash, since you can't generate nonces due to the lack of a backend.
Have seen that, but am uncertain how to get the hosting platform and the SSG in sync on that. Would welcome any such suggestions; haven't had any luck finding it.
Web.dev uses Eleventy and Firebase. This PR started implementing CSP: https://github.com/GoogleChrome/web.dev/pull/5008/
There may be a way to extract it into a plugin for other Eleventy users.
I currently use Cloudflare Pages; am thinking this might be some way it can be done with a Cloudflare Worker, but it'll have to be spelled out by better brains than mine. :-) I'll study the PR and see if something jumps out at me. I've also used Vercel and am not aware that it can be done (readily) there.
What you need to find is a way to affect the hosting configuration from the Eleventy build pipeline. You need to be able to set headers (the hash parts of the CSP) based on the JS build artifacts (if any) of Eleventy, or the referenced external scripts. Adam might know more to help here.
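As a sketch of that idea for Cloudflare Pages specifically, which reads a `_headers` file from the build output (the file paths and the naive regex here are assumptions, not a tested implementation), a post-build step could hash the inline scripts in the generated HTML and write the CSP header:

```js
// Hypothetical post-build step for an Eleventy + Cloudflare Pages site:
// hash every bare inline <script> in the built HTML and emit a _headers file.
const fs = require('fs');
const crypto = require('crypto');

const html = fs.readFileSync('_site/index.html', 'utf8'); // assumed output path
const hashes = [...html.matchAll(/<script>([\s\S]*?)<\/script>/g)].map(
  ([, body]) =>
    `'sha256-${crypto.createHash('sha256').update(body, 'utf8').digest('base64')}'`
);

const csp = `script-src ${hashes.join(' ')}; object-src 'none'; base-uri 'none'`;
// Cloudflare Pages applies headers listed under a path pattern in _headers.
fs.writeFileSync('_site/_headers', `/*\n  Content-Security-Policy: ${csp}\n`);
```

A real build would need to cover every page and handle `<script>` tags with attributes; this only shows the shape of the approach.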
FYI that PR is the majority of the work, but IIRC some follow-up PRs were necessary to tweak a thing or two; not sure if they're relevant to understanding how to add a CSP to an Eleventy site.
Yes, figuring out how to get the same nonce or hash in both Eleventy and CFP at build time is the issue. Easy enough to generate them on the Eleventy site, but the Eleventy-to-CFP (and vice versa) communication of them is what remains to be determined.
It's important to note that https://github.com/GoogleChrome/web.dev/pull/5008 set headers using the Firebase config. Unfortunately, I don't know how to set the CSP for other backends.
If you cannot set an HTTP header, you can deliver the CSP in a `<meta>` tag, although the HTTP header is preferable. It is still possible to avoid all high-severity Lighthouse warnings if the CSP is defined in a `<meta>` tag.
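For reference, the `<meta>` form looks like this (the policy shown is illustrative); note that reporting directives are ignored when the policy is delivered this way:

```
<meta http-equiv="Content-Security-Policy" content="script-src 'sha256-...'; object-src 'none'; base-uri 'none'">
```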
I haven't explored it too much, but there is this Eleventy template which generates a hash-based CSP in a `<meta>` tag.
I can do the headers, yes; currently doing that in a Cloudflare Worker pointed to my domain, which in fact is how I implement the CSP. What I can't figure out is how to tell both Eleventy and the CFW to use the same nonce or hash. But will check that template's repo for clues. Thanks.
One more question: if using `strict-dynamic`, am I correct that this precludes the use of third-party scripts (such as for YouTube embeds), since one obviously can't control them but can only identify their sources as trusted?
> One more question: if using `strict-dynamic`, am I correct that this precludes the use of third-party scripts (such as for YouTube embeds), since one obviously can't control them but can only identify their sources as trusted?
`strict-dynamic` allows you to use third-party scripts if they are permitted by a CSP nonce or hash. With CSP hashes, you will need to use an inline script loader.
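A minimal sketch of such a loader (the third-party URL is a placeholder): the loader itself is an inline script covered by a hash in the CSP, and under `strict-dynamic` any script element it creates programmatically is trusted transitively:

```js
// Inline loader: include this script's hash in script-src.
// With 'strict-dynamic', scripts it appends inherit trust.
const thirdPartyScripts = ['https://example.com/widget.js']; // placeholder
for (const src of thirdPartyScripts) {
  const s = document.createElement('script');
  s.src = src;
  document.head.appendChild(s);
}
```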
The scripts in question would be in an iframe over which I have no control, so that's a no-go. Besides, it would appear this is a non-starter for an SSG-based site, because the documentation says the nonce must be:
> a random nonce that is regenerated on every server response.
. . . obviously not the way that works with such a site. C'est la guerre.
By the way: it seems odd that a site with no CSP can score "100" on Best Practices, yet a site with a CSP that otherwise is very tight but fails to use `strict-dynamic` will get downrated.
Also of interest re Cloudflare (not just Cloudflare Pages) in particular: https://support.cloudflare.com/hc/en-us/articles/216537517-Using-Content-Security-Policy-CSP-with-Cloudflare
> will get downrated.
Backing up, let me address the underlying feedback from the original comment about this audit being "too strict". This audit is specifically not scored. It is not a component of the Best Practices score.
CSP is certainly a best practice for security on the web, and we'd be leaving a large gap if we ignored it in Lighthouse. We are hesitant to make it part of the category score for all the reasons that have been mentioned here (mostly surrounding implementation complexity).
If the audit continues to point out a way to further improve a CSP, but as a developer you understand that, for example, lack of `strict-dynamic` is not posing a risk to your users, then the correct path is for you to ignore the advice. However, in general Lighthouse cannot make that assessment for you.
RE: iframes, the resources loaded in an iframe are not controlled by the parent frame's CSP. There's no need to figure out what scripts the iframe loads; it's already protected by the web security model because it's an iframe. The only part of the parent frame's CSP that is relevant is `frame-src` (or `child-src`), which controls what origins iframes can use.
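So for a YouTube embed, something along these lines in the embedding page's CSP should be all that's needed (origin shown as an example):

```
frame-src https://www.youtube.com
```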
Another one of interest that I may try (Cloudflare-specific): https://github.com/moveyourdigital/cloudflare-worker-csp-nonce
The scored / not-scored confusion happens occasionally. Would you help us out a bit and describe what made you believe this audit impacted the score? Presently, we display the audit with a grey circle to indicate this (whereas scored audits get green/red), but IIRC that's it... we don't have a legend or anything... so we could certainly do better here. But is there something else to it?
@connorjclark I appreciate your indulging me. :-) If you're asking me that last question: I know that, ever since I put a CSP on my site, the best I can get in Best Practices is 93, with the CSP — and specifically, the script-src part — always tagged as the reason. Without the CSP in place at all, I always scored 100. Also, I see many other sites touting their "100" scores in Lighthouse; when I look at their headers and heads, I see no CSP in evidence. I grant you, it's anecdotal, but I believe it to be the case.
All that said, that last Cloudflare-related link I provided seems to have some promise.
It'd be pretty nice if we gave a score breakdown somehow in the report, even if only printed to the console. Might solve some of the confusion here. We have various levels of weights for audits, and not making this information visible somewhat undermines the usefulness of our scoring system.
In the meantime, you could cross reference what audits are failing with https://github.com/GoogleChrome/lighthouse/blob/0b0fbc4/lighthouse-core/config/default-config.js#L553 to see what exactly is the cause of your score drop. If you need help, you can share a report or the URL to your site and I'll take a look.
@connorjclark https://lighthouse-dot-webdotdevsite.appspot.com//lh/html?url=https%3A%2F%2Fwww.brycewray.com%2F
. . . presumably due to:
{id: 'csp-xss', weight: 0, group: 'best-practices-trust-safety'},
Incidentally, the note about `report-uri` in that link I gave doesn't quite square with the advice in:
https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/report-uri

It's the `inspector-issues` audit, which uses a new protocol where Chrome will tell us about certain issues... and we mistakenly accept any error there as worthy of failing the `inspector-issues` audit. Certainly not our intention :)
Team, we could:
1. make `inspector-issues` 0 weight
2. ignore "CSP" for calculating failure
I'm leaning towards 1) but I don't have a strong argument right now.
> Incidentally, the note about `report-uri` in that link I gave doesn't quite square with the advice in: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/report-uri
@adamraine do you recall why we aren't suggesting `report-to` here? Probably should be doing both.
EDIT: actually, might it be working as intended, because the CSP is blocking resources and Chrome is pointing that out? Maybe we just need to show the actual error message in the report.
I'm pretty sure this bug has been fixed in a later version than 8.0.0 https://github.com/GoogleChrome/lighthouse/issues/11862.
EDIT: looks like it was 8.1.0, "Best practices" gets 100 on the latest version of Lighthouse :)
> @adamraine do you recall why we aren't suggesting `report-to` here? Probably should be doing both.
`report-uri` is being deprecated, but its replacement `report-to` has even less browser support. FWIW, we do recommend using both in the documentation.
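For completeness, sending both might look like this (the endpoint name and URL are placeholders); browsers that understand the `Report-To` header use the named group, while older ones fall back to `report-uri`:

```
Report-To: {"group":"csp-endpoint","max_age":10886400,"endpoints":[{"url":"https://example.com/csp-reports"}]}
Content-Security-Policy: script-src 'nonce-abc123'; report-uri https://example.com/csp-reports; report-to csp-endpoint
```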
On Tue, Jul 27, 2021 at 01:17:08PM -0700, Patrick Hulce wrote:
> - We will still enforce `strictDynamic`, but the audit will not be scored (turn red) until there is plentiful ecosystem support and adoption has reached at least 2%.
I think `strict-dynamic` should only be enforced on pages that load multiple scripts. For pages that load one or zero external scripts, I don't see enough benefit over just using a plain nonce/hash.
-- /Seirdy
> I'm pretty sure this bug has been fixed in a later version than 8.0.0 #11862.
> EDIT: looks like it was 8.1.0, "Best practices" gets 100 on the latest version of Lighthouse :)
That would explain the web.dev downrating, then, because it's still on 8.0.0. Chrome is still on 7.5.0. Guess I'll have to wait until those catch up. In the meantime, have just turned off CSP for now and am back to 100 on Best Practices. 😄
In addition, it appears I can't use the `nonce-worker.js` method of injecting nonces, because my non-monetized site's free plan can't have more than one Cloudflare Worker pointed to the same domain(s), and I've already got one handling my caching and, until now, CSP (now converted to a CSP-Report-Only). I guess the only option at this point would be to combine the two into one Worker, but combining two CFWs, especially these two, looks to be not for the faint-hearted (or, in my case, the faintly skilled).
> I think `strict-dynamic` should only be enforced on pages that load multiple scripts. For pages that load one or zero external scripts, I don't see enough benefit over just using a plain nonce/hash.
That will surely blow most of the web out of the water regarding CSPs, but OTOH perhaps that's the intent. Not mine to say.
> I think `strict-dynamic` should only be enforced on pages that load multiple scripts. For pages that load one or zero external scripts, I don't see enough benefit over just using a plain nonce/hash.
You don't need `strict-dynamic` to avoid all high-severity warnings on the Lighthouse audit. You only need to avoid using an allowlist. For example, the following CSP should avoid all high-severity checks:
```
script-src 'nonce-random123' 'unsafe-inline'; object-src 'none'; base-uri 'none'
```
The warning mentioning `strict-dynamic` isn't super clear about this, so we can make a note to update it along with the other changes:
> Host allowlists can frequently be bypassed. Consider using 'strict-dynamic' in combination with CSP nonces or hashes.