fetch
Block subresource requests whose URLs include credentials.
Hard-coding credentials into subresource requests (e.g. https://user:pass@host/) is problematic from a security perspective: it has allowed folks to brute-force credentials in the past, enables session fixation attacks against sites using basic auth, and can give attackers access to well-known, poorly-coded devices (such as users' routers). Moreover, the ability to hard-code credentials leads to inadvertent leakage via XSS on the one hand, and poor development practice on the other. Sifting through HTTPArchive, for example, yields a number of credentials for test servers and other internal architecture.
Usage of the http://user:pass@host/ pattern has declined significantly in the last few years; given that low usage, closing this small security hole seems quite reasonable.
Kicking off another discussion, @annevk. Same issue as https://github.com/whatwg/fetch/pull/464 regarding top-level vs non. I've called that out with an XXX span; we'll remove that if/when we decide that this kind of thing is worth doing.
If we do this we can simplify https://fetch.spec.whatwg.org/#concept-http-redirect-fetch as well (the "includes credentials" pieces) since CORS requests are always subresource requests.
I haven't verified this in Edge, but I'm told that MS hasn't supported this for years: https://support.microsoft.com/en-us/help/834489/internet-explorer-does-not-support-user-names-and-passwords-in-web-site-addresses-http-or-https-urls
Note also that the way the standard does HTTP authentication at the moment is by using this syntax. So if we can no longer use this syntax, that would also have to change somehow. Search for 401 in https://fetch.spec.whatwg.org/#concept-http-network-or-cache-fetch.
It's a bit complicated, so I might be misreading, but I don't think the suggested change breaks that bit of the algorithm. By putting the check at the top of main fetch, we're dealing with the URL that comes from a page (or redirect). The implementation detail of HTTP-network-or-cache fetch doesn't call back into main fetch, but calls into itself (in 24.4).
Fair. What about XMLHttpRequest? https://xhr.spec.whatwg.org/#the-open()-method also uses this and does call into fetch. Or would you stop honoring those parameters?
I'd suggest blocking those as well; they're included in the Chrome metric noted above, and they have similar properties from a security perspective. Basically, I think basic/digest auth is ~fine as a browser-mediated mechanism of allowing users to sign into sites and maintain state in some way. I think it's significantly less fine when the credentials are controlled by the page.
@youennf any thoughts from WebKit on this?
@travisleithead any thoughts from Edge?
@mcmanus @valenting any thoughts from Gecko/Necko?
FYI: I'm pretty sure Edge (and IE) already implement this. See https://groups.google.com/a/chromium.org/d/msg/blink-dev/lx-U_JR2BF0/dxXzIYjwBwAJ and https://support.microsoft.com/en-us/help/834489/internet-explorer-does-not-support-user-names-and-passwords-in-web-site-addresses-http-or-https-urls. @travisleithead can, I'm sure, confirm that.
IE/Edge restricts it to HTTP(S) URLs, which is a little different, though I suspect in practice that only affects FTP.
Indeed. Also, let's kill FTP (in #464). :)
Firefox changes tracked in bug 1340200
@mikewest same questions as for the ftp URL PR. Since Mozilla seems tentatively on board and we've waited over a week, I'm happy with doing this. Will you file bugs against Safari and Edge?
This one I can certainly write tests for, so let's let me do that before we land the spec patch. I plan to land this change in Chrome after the next branch point, and I'm not sure there's much point upstreaming tests before any implementation passes. Perhaps we can put this on hold for a week or three?
I'll file bugs against Safari and Edge to ensure they're aware. Edge shouldn't have much work to do.
I'm happy to wait three weeks. We'll also need to rebase as this probably conflicts with the FTP change.
What about the case of receiving a 401 challenge (WWW-Authenticate: Basic realm=...) for a URL for which the browser already has the credentials cached? Maybe I'm wrong, but I think Chrome uses the same erroneous approach when it has cached credentials: after the 401 is received, instead of putting an Authorization header with the cached credentials (base64-encoded) on the retried GET, it composes a new URL with the credentials exposed in it.
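For reference, the browser-mediated retry described here would carry the cached credentials in a header rather than in the URL. A minimal sketch (not Chrome's actual code) of how a Basic Authorization header value is composed from a cached user/password pair:

```python
import base64

def basic_authorization(user: str, password: str) -> str:
    """Compose the value of an Authorization header for HTTP Basic auth."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# On a retried GET after a 401 challenge, the header would look like:
print(basic_authorization("user", "pass"))  # Basic dXNlcjpwYXNz
```

The point being that this keeps the credentials out of the request URL entirely, which is the browser-mediated flow the proposal wants to preserve.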
I found this via the message
[Deprecation] Subresource requests whose URLs contain embedded credentials (e.g. https://user:pass@host/) are blocked. See https://www.chromestatus.com/feature/5669008342777856 for more details.
that Chrome 60.0.3095.5 gave me when visiting a site with Basic authentication. When loading the resources it contains (which do not have authentication information in their URLs in the markup), Chrome will not load them, giving the explanation quoted above.
Like @ykraynov, I wonder whether this is what you have in mind here or whether Chrome may be overdoing it.
I'm developing Selenium tests with Cucumber. We have a test environment with Basic Authentication.
Unfortunately, Chrome completes relative URLs on the website with the full authentication URL,
e.g. /page becomes https://user:[email protected]/page and is blocked by Chrome.
So many links, e.g. for CSS, are broken.
I'd expect that, after Chrome has been authenticated via the URL, it would resolve relative links against the URL without the authentication information (e.g. https://domain.com/page) rather than block them, just as it does with the address bar (which hides the auth information).
In my opinion it doesn't make sense for Chrome to complete URLs with information that it itself disallows.
If this case is not considered in the specs, other browsers may implement it the same way in the future. I think we need to specify that, after auth, relative links are resolved without authentication information.
Same issue here. I have a local bookmark https://user:[email protected] and it doesn't work since all subresource requests (all using relative URLs) are blocked. Seems like a bug to me.
FYI: Relating to the Chrome Bug, I have created an issue: https://bugs.chromium.org/p/chromium/issues/detail?id=731618
When can we expect a fix on this? It has messed up all my test cases.
Until the fix is released, you can use the switch --disable-blink-features=BlockCredentialedSubresources to run your tests.
I have a couple of log viewer apps behind a VPN that I am now unable to access because of this. I think the premise of
Usage of the http://user:pass@host/ pattern has declined significantly in the last few years;
is wrong. Just because you're not seeing it in your analytics doesn't mean it's not happening. I personally don't allow you to collect those analytics, and I'd wager there are many more like me.
How about, instead of a breaking change that pushes your opinions on others, you issue a warning when it happens and allow the warning to be dismissed?
This is a feature people use, and you just took it away without just cause, IMHO.
I think the most frustrating part is that your page https://www.chromestatus.com/feature/5669008342777856
states:
Consensus & Standardization
Firefox: No public signals
Edge: No public signals
Safari: No public signals
Web Developers: No signals
I'm a web developer, part of a company of 100+ others, and we want this feature back without a launch flag. Hope that's a clear enough signal for ya.
This is the wrong place to complain about Chrome's behavior. You want https://bugs.chromium.org/p/chromium/issues/detail?id=731618 most likely.
Thanks @annevk! I'll go complain there as well.
Is this the right place to complain about a change to the browser standard? Because I think getting rid of this feature is a mistake.
So to be perfectly clear, if I auth once, all requests to that domain should use that same auth until my browser session/window closes. This is how browsers have worked for ages, please keep it that way.
Constructive feedback on the proposed change is welcome, yes. And although HTTP authentication and credentials in a URL are related in some cases (e.g., with XMLHttpRequest), they're also different.
I guess I'm a little confused as to what the road block is then. So I'm going to write out what I think is going on, maybe you can steer me in the right direction.
Let's say I have a page requested at http://user:[email protected]/ with the following HTML:
<html>
<head>
<link rel="stylesheet" href="/a.css">
<link rel="stylesheet" href="http://user:[email protected]/b.css">
<link rel="stylesheet" href="http://user2:[email protected]/c.css">
<script>
// JS code that adds a link to a style sheet
addStyleLink('/d.css');
addStyleLink('http://user:[email protected]/e.css');
addStyleLink('http://user2:[email protected]/f.css');
</script>
</head>
....
Under the proposed rule "Block subresource requests whose URLs include credentials", I think the following would happen:
- a.css would be requested with user:pass creds - this is currently broken in Chrome?
- b.css would not use creds to make the request - this is now blocked?
- c.css should be requested with user2:pass2 as the creds - or is this blocked?
- d.css will be requested with user:pass creds
- e.css request is blocked
- f.css request is made with user2:pass2 creds
Not sure how this should all play out in what should/should not be blocked from happening.
I'd prefer, and I think you'd step on a lot fewer people's toes, if you just notified people that they're doing something wrong, or that it's a bad practice. At the end of the day, I think you'd get the result you want (steering people away from this practice) without blocking people's legitimate (in their eyes) use of this feature of the URL spec.
The bad practice is including credentials in URLs. I'm surprised you have code like that, since as far as I know Internet Explorer (and now Edge) has never supported it (at least not from IE6 onward).
I have code like /a.css, to the best of my knowledge. I hand out URLs like http://admin:[email protected] that allow need-to-know access to a site without too much fuss. Often this is behind a VPN, or on an obscure domain name.
I'm just trying to flesh out what exactly is being suggested here, because it's not clear (to me) what you are blocking.
If your code doesn't embed credentials in URLs it shouldn't be affected.