
Consider removing wildcard option of the `Timing-Allow-Origin` header to prevent browser history leakage

kdzwinel opened this issue Feb 05 '20 · 42 comments

We crawled 50,000 websites and found that 95% of the 1.1M third-party requests using the `Timing-Allow-Origin` header were using a wildcard. Such widespread use of the wildcard, combined with the amount of detailed information this API exposes about third-party requests, creates multiple opportunities for leaking a user's browsing history.

  1. Extracting DNS cache information

The domainLookupStart/domainLookupEnd properties (introduced in Level 1) allow any website to extract information from the browser's DNS cache. In particular, this can be used to detect a new private session, by checking whether domainLookupStart !== domainLookupEnd for popular services like google-analytics.com: in a fresh private session the DNS cache is empty, so a real lookup occurs even for hosts that almost every regular profile has resolved before.
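A minimal sketch of this probe (the URL is just an example of a widely-cached host; the timing fields are only exposed cross-origin because of the TAO wildcard):

```js
// Sketch: probe whether a hostname was already in the browser's DNS cache.
async function dnsLookupHappened(url) {
  await fetch(url, { mode: 'no-cors', cache: 'no-store' }); // bypass HTTP cache, not DNS cache
  const entry = performance.getEntriesByName(url).pop();
  // A non-zero lookup duration means the hostname was NOT in the DNS cache;
  // for near-ubiquitous hosts that hints at a fresh (e.g. private) session.
  return entry && entry.domainLookupStart !== entry.domainLookupEnd;
}

dnsLookupHappened('https://www.google-analytics.com/analytics.js')
  .then(lookedUp => console.log(lookedUp ? 'likely new/private session' : 'DNS-cached'));
```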

  2. Extracting HSTS information

redirectEnd - redirectStart !== 0 (Level 1) may leak information about the user having visited a given website in the past, through the browser's enforcement of HSTS (HSTS redirects are near-instant compared to regular 30X redirects, which require a network round trip).
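Sketched below; the host is hypothetical, the 5 ms threshold is illustrative, and the redirect timings are only visible with a TAO opt-in (run from a non-secure page so the http:// request isn't blocked as mixed content):

```js
// Sketch: distinguish an internal HSTS upgrade from a network 30X redirect.
async function looksLikeHstsUpgrade(url) {
  await fetch(url, { mode: 'no-cors' });
  const entry = performance.getEntriesByType('resource').pop(); // last entry; fine for a sketch
  const redirectTime = entry.redirectEnd - entry.redirectStart;
  // A redirect that registered but took ~0 ms suggests the browser upgraded
  // http -> https itself (HSTS), i.e. the user has visited this host before.
  return redirectTime !== 0 && redirectTime < 5;
}

looksLikeHstsUpgrade('http://hsts-site.example/favicon.ico').then(console.log);
```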

  3. Extracting reused connections

secureConnectionStart === 0 (Level 1) can reveal that a connection is being reused, suggesting that the user recently visited the given website.
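For example (hypothetical URL, TAO opt-in assumed):

```js
// Sketch: check whether the browser reused an existing TLS connection.
async function connectionWasReused(url) {
  await fetch(url, { mode: 'no-cors' });
  const entry = performance.getEntriesByName(url).pop();
  // Under the Level 1 behaviour described above, an https resource reporting
  // secureConnectionStart === 0 was fetched over an already-open connection.
  return entry && entry.secureConnectionStart === 0;
}

connectionWasReused('https://popular-widget.example/sdk.js').then(console.log);
```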

  4. Extracting information about cookies being set

Many applications are set up in a way that new users get a 'Set-Cookie' header on the response, while users that already have cookies set do not. By observing the size of the headers (transferSize - encodedBodySize), a website can learn whether cookies were sent with a given third-party request.
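Sketched below (hypothetical URL; assumes a TAO opt-in, third-party cookies enabled, and that the attacker has a baseline header size for a known cookie-less visit):

```js
// Sketch: estimate response header size; a Set-Cookie header inflates it.
async function responseHeaderBytes(url) {
  await fetch(url, { mode: 'no-cors', credentials: 'include' });
  const entry = performance.getEntriesByName(url).pop();
  // transferSize counts headers + body on the wire; encodedBodySize is the
  // body alone, so the difference approximates the response headers.
  return entry.transferSize - entry.encodedBodySize;
}

// A value smaller than the fresh-visitor baseline suggests the server skipped
// Set-Cookie, i.e. the user already had this third party's cookies.
responseHeaderBytes('https://tracker.example/pixel').then(console.log);
```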

It's worth noting that issues 1 and 3 can be mitigated by the user agent double-keying its caches. However, since this technique is not a W3C standard, it doesn't address our concerns. Similarly, issue 4 can be mitigated by blocking third-party cookies, but that is not standard behavior.

To mitigate the above risks we suggest dropping wildcard functionality from the Timing-Allow-Origin header. This will force developers to list the actual domains they want to share this information with, and will greatly reduce the number of domains that can be scanned using the above techniques. Where a wildcard is genuinely required, developers will still be able to simulate it by setting the Timing-Allow-Origin value based on the value of the request's Referer header. The other possible mitigation is to introduce randomness into the values returned by the API. As we understand it, these values are meant to be processed in bulk by website owners to uncover performance trends, so there seems to be no need for them to always be accurate, or as precise as they are now.
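For illustration, a minimal sketch of such origin mirroring, assuming a Node/Express server (any server or edge config that can read the Referer header works the same way):

```js
const express = require('express');
const app = express();

app.use((req, res, next) => {
  const referer = req.get('Referer');
  if (referer) {
    // Echo back only the requesting page's origin instead of a wildcard.
    res.set('Timing-Allow-Origin', new URL(referer).origin);
    res.set('Vary', 'Referer'); // keep shared caches from mixing values up
  }
  next();
});

app.listen(8080);
```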

kdzwinel (Feb 05 '20)

It's worth noting that issues 1 and 3 can be mitigated by the user agent double-keying its caches. However, since this technique is not a W3C standard, it doesn't address our concerns.

https://github.com/whatwg/fetch/issues/904

  2. Extracting HSTS information

Doesn't ACAO: * expose the same?

  4. Extracting information about cookies being set

That indeed seems like a leak, similar to cache state, until double-keying is applied everywhere. At the same time, I don't see how increasing friction to TAO opt-in would somehow solve this.

As you stated, the solution seems to be the removal of third-party cookies. While all major browsers seem committed to taking that route, it'll take us a couple of years before we get there. If this is a major issue in the wild, we can discuss mitigations in the meantime.

yoavweiss (Feb 10 '20)

At the same time, I don't see how increasing friction to TAO opt-in would somehow solve this.

The concern as it was discussed in PING is that the entire spec is, to the user, largely cost (the privacy risks mentioned above, among others) w/ no corresponding benefit (to a first approximation). Removing * (and possibly limiting the number of origins that could appear) would remove most of the cost to users, but still enable the functionality where it's needed and useful to most sites (folks debugging their own infrastructure, running automated tests on their own applications, etc.).

In other words, "let everyone access the feature" seems like the wrong place on the cost/benefit curve, for the user.

pes10k (Feb 10 '20)

The concern as it was discussed in PING

Can you point me to the relevant discussion?

yoavweiss (Feb 11 '20)

Sure thing: https://www.w3.org/Privacy/IG/summaries/PING-minutes-20200116#resource-timing-level-2---httpswwww3orgtrresource-timing-2-led-by-duckduckgo-team

pes10k (Feb 11 '20)

Thanks!

I'm still utterly unconvinced that adding friction is anything but privacy theatre. As @michaelkleber pointed out in the minutes, 3P providers will be motivated to enable timing and will not necessarily be deterred by the need to implement the extra 3 lines of code that server-side origin mirroring requires. The main "feature" of it would be that it would break the great majority of current users. That doesn't seem like a worthy goal.

Seems like the problem here can be rephrased as "Exposing detailed timing information for credentialed cross-origin resource fetches increases risk of cross-origin state leak".

Mitigations should tackle this problem head on. A non-comprehensive list may include:

  • Apply extra restrictions when "actually credentialed" resources are involved - e.g. when the request contained a cookie or when the response has set one.
  • Eliminate the cookie header sizes from the reported values.
  • Fuzz the reported values.
  • A mix of the above.

yoavweiss (Feb 11 '20)

Also worth noting that any such mitigations would be a stop gap measure until all browsers can eliminate 3P cookies, which is the eventual solution to this issue.

yoavweiss (Feb 11 '20)

  1. There are many cases where origin mirroring isn't possible. The web is increasingly reducing referrer information. Firefox includes the option to disable referrers, Brave doesn't reveal referrer information in almost all cases, referrer policy allows sites to say "don't send", Safari ITP restricts to eTLD+1 for some domains, etc. It seems likely that further restrictions on referrers are coming. In all these cases the ability of the site to mirror the origin is reduced or removed.
  2. Some of this leak is distinct from what 3p cookies reveal (https://github.com/w3c/resource-timing/issues/221, history leak from #3 above, etc.)
  3. Removing 3p cookies would help with some of this, but since the majority browser isn't planning on doing so for at least two years, "stop-gap" measures would be very useful
  4. I think you underestimate how much friction a few lines of header code add, and how much removing the wildcard can help avoid accidental information exposure. Caching, CDNs, intermediate proxies, etc. all make origin-mirroring non-trivial for common-case websites.

would break the great majority of current users

I don't know how many sites this would break, and would be interested in numbers. But if the numbers show this to be a significant issue, it seems that you could just only allow the v2 information on non-wildcard use. Then nothing would break; sites targeting the current standard would get what they already get, and sites that want the new information could get it w/ a more narrow TAO.

Additionally, it seems like bad practice to automatically opt sites sending TAO * to share v1 features into sharing v2 features, which they may not want (for privacy, security or other reasons). Removing '*' would at least reduce the scope of this problem.

Apply extra restrictions when "actually credentialed" resources are involved

This sounds like a promising avenue. Can you say more about how this might be done, or how you see this possibly working out?

Eliminate the cookie header sizes from the reported values.

Seems like we agree that this is a good idea, so let's consider this a TODO and set it aside from the rest of the discussion.

pes10k (Feb 11 '20)

  1. There are many cases where origin mirroring isn't possible. The web is increasingly reducing referrer information. Firefox includes the option to disable referrers, Brave doesn't reveal referrer information in almost all cases, referrer policy allows sites to say "don't send", Safari ITP restricts to eTLD+1 for some domains, etc. It seems likely that further restrictions on referrers are coming. In all these cases the ability of the site to mirror the origin is reduced or removed.

Maybe. But I don't think you want the stop-gap mitigation to rely on future restrictions which may or may not happen.

2. Some of this leak is distinct from what 3p cookies reveal (#221

#221 seems like a completely distinct issue. If there is a problem there, it's completely unrelated to TAO.

2. history leak from #3 above

That would be solved by double-key caching.

3. Removing 3p cookies would help with some of this, but since the majority browser isn't planning on doing so for at least two years, "stop-gap" measures would be very useful

I agree that stop-gap mitigations are in order.

4. I think you underestimate how much friction a few lines of header code add, and how much removing the wildcard can help avoid accidental information exposure. Caching, CDNs, intermediate proxies, etc. all make origin-mirroring non-trivial for common-case websites.

Sure, it adds friction for non-motivated sites. But how is adding friction a helpful goal here?

I don't know how many sites this would break, and would be interested in numbers

Not necessarily immediate user-visible breakage, but the numbers stated on this issue indicate that 95% of currently timed resources will no longer be timed.

But if the numbers show this to be a significant issue, it seems that you could just only allow the v2 information on non-wildcard use. Then nothing would break; sites targeting the current standard would get what they already get, and sites that want the new information could get it w/ a more narrow TAO.

L2 has been shipping for the last ~5 years in Chrome and Firefox.

Additionally, it seems like bad practice to automatically opt sites sending TAO * to share v1 features into sharing v2 features, which they may not want (for privacy, security or other reasons). Removing '*' would at least reduce the scope of this problem.

I don't think that adding more types of opt-ins is the way to go. If there are material risk differences with the addition of the L2 attributes, we should address those.

This sounds like a promising avenue. Can you say more about how this might be done, or how you see this possibly working out?

When a response is received, the browser should be aware of whether the request was sent with cookies, and whether the response has Set-Cookie in its headers. The browser can then apply different restrictions to those responses and how they are reported to Resource Timing.

In terms of Fetch integration, HTTP-network-or-cache fetch (step 17) seems like a good integration point to add a flag which indicates whether cookies were sent.

Seems like we agree that this is a good idea, so let's consider this a TODO and set it aside from the rest of the discussion.

Whether this is a good idea or not seems tightly coupled with the rest of the discussion.

yoavweiss (Feb 11 '20)

Maybe. But I don't think you want the stop-gap mitigation to rely on future restrictions which may or may not happen.

I don't think I understand you here; I listed things shipping now that would make origin mirroring difficult to impossible, and suggested more are coming. In those cases, removing * is not just friction, it's a concrete privacy improvement.

#221 seems like a completely distinct issue. If there is a problem there, it's completely unrelated to TAO.

I'm not following. That issue describes how resource timing v2 introduces a new privacy harm of sites being able to detect proxy use (extra troubling because proxies are often used to increase privacy). Having * in TAO increases the scope of that harm.

That would be solved by double-key caching.

If the feature is only safe / privacy-preserving to ship on platforms that double-key caches, that's important information to include in the spec.

Sure, it adds friction for non-motivated sites. But how is adding friction a helpful goal here?

The goal isn't to add friction; the goal is to prevent specs from including capabilities that allow broad disclosure of privacy-relevant behavior. Requiring the information leak to be narrowly tailored removes the "copy-paste" foot-gun, and seems like a very reasonable compromise (especially since this feature doesn't benefit users at first hop).

Not necessarily immediate user-visible breakage, but the numbers stated on this issue indicate that 95% of currently timed resources will no longer be timed.

One way to read this is as a 95% privacy improvement. If the claim is that this will break things, it would be good to have numbers showing that.

L2 has been shipping for the last ~5 years in Chrome and Firefox.

"The non-standardized version is already implemented, so it can't be changed" is not compatible with how horizontal review works. PING is trying to work with you to get the functionality you're saying is important working on the sites where you think it's needed most.

The browser can then apply different restrictions to those responses and how they are reported to Resource Timing

Again, I think this could be a useful direction, but it hinges on what those restrictions are, and getting them into the mandatory parts of the spec.

Whether this is a good idea or not seems tightly coupled with the rest of the discussion.

I'm sorry but I'm not following. Are you saying the spec should, or shouldn't, eliminate the cookie header sizes from the reported values?

pes10k (Feb 11 '20)

I don't think I understand you here; I listed things shipping now that would make origin mirroring difficult to impossible, and suggested more are coming. In those cases, removing * is not just friction, it's a concrete privacy improvement.

Unless we can ensure those further restrictions on Referer ship in all browsers very soon, the minority cases where it's already shipped are not super interesting.

I'm not following. That issue describes how resource timing v2 introduces a new privacy harm of sites being able to detect proxy use (extra troubling bc proxies are often used to increase privacy).

If I'm a website that wants to discriminate against people using a proxy by detecting protocol changes, extra TAO restrictions won't prevent me from doing that.

Having * in TAO increases the scope of that harm.

No, it does not. The nefarious website can set up its own servers that use different protocols, set their TAO headers as it wishes, and use that for protocol inspection.

I'm sorry but I'm not following. Are you saying the spec should, or shouldn't, eliminate the cookie header sizes from the reported values?

I'm saying that I think the WG should seriously consider that as a potential mitigation, that seems directly related to the risk.

yoavweiss (Feb 11 '20)

Unless we can ensure those further restrictions on Referer ship in all browsers very soon, the minority cases where it's already shipped are not super interesting.

I'm really surprised by this response. Is it the WG's opinion that "minority cases" where browsers and sites are trying to improve privacy by reducing referrer information are not relevant / "super interesting"?

Plus, referrer policy is not a "minority case": it's built into the platform and supported on all sites. It would not be possible to origin mirror for any site with a referrer policy.

If I'm a website that wants to discriminate against people using a proxy by detecting protocol changes, extra TAO restrictions won't prevent me from doing that.

No, it does not. The nefarious website can set up its own servers that use different protocols, set their TAO headers as it wishes, and use that for protocol inspection.

I'm still not following here. The claim in the issue is that resource timing v2 introduces new privacy harms, including detecting proxy usage. The ideal thing would be to remove these capabilities from the spec. However, the WG has identified these use cases as important in some cases and so worth supporting. So, the proposal in the issue is to at least reduce the scope of the harm, from being the common case to a far smaller set of narrow cases. We're trying to help you, with a proposal that allows the functionality to be applied in narrow cases, where it's most useful.

I'm sincerely trying to understand the WG's position:

  1. there is no harm here (that, for example, detecting proxy use is not considered a privacy harm)?
  2. that there is a harm, but moving it from a common-case risk (i.e. *) to a dramatically reduced risk (i.e. a small number of specified domains) is not a reasonable improvement?
  3. that risk reduction is a useful strategy, but removing * doesn't actually reduce the number of parties the information will flow to?

The position in the issue, as I understand it, and as it was discussed in PING, is that there is risk here (of which proxy detection is one example), and that removing * will both have the practical effect of reducing the number of parties who get this information, and reduce the possibility of it flowing to unintended parties (i.e. * is a foot-gun).

pes10k (Feb 11 '20)

I'm really surprised by this response. Is it the WG's opinion that "minority cases" where browsers and sites are trying to improve privacy by reducing referrer information are not relevant / "super interesting"?

First, let me clarify that I speak as editor and WG chair, but not as "the WG". Being a co-chair of the group doesn't mean that I can speak for its members.

The WG hasn't been consulted yet. I intend to raise this issue to the group on our upcoming call.

Second, reducing referrer information can be interesting on its own, but I don't deem it relevant as a mitigation for the issue raised.

Specifically - the issue hinges on 3P cookies and single-keying of caches. Eliminating 3P cookies and double-keying the various browser caches is the ultimate solution here. But we need a stop-gap, as shipping the above would take us up to 2 years. If our stop-gap relies on future RP (Referrer Policy) reductions that aren't yet in place (in the browsers that haven't yet limited 3P cookies and/or double-keyed their caches), it won't be a very good stop-gap.

I'm still not following here. The claim in the issue is that resource timing v2 introduces new privacy harms, including detecting proxy usage. The ideal thing would be to remove these capabilities from the spec. However, the WG has identified these use cases as important in some cases and so worth supporting. So, the proposal in the issue is to at least reduce the scope of the harm, from being the common case to a far smaller set of narrow cases. We're trying to help you, with a proposal that allows the functionality to be applied in narrow cases, where it's most useful.

I'm sincerely trying to understand the WG's position:

  1. there is no harm here (that, for example, detecting proxy use is not considered a privacy harm)?
  2. that there is a harm, but moving it from a common-case risk (i.e. *) to a dramatically reduced risk (i.e. a small number of specified domains) is not a reasonable improvement?
  3. that risk reduction is a useful strategy, but removing * doesn't actually reduce the number of parties the information will flow to?

The position in the issue, as I understand it, and as it was discussed in PING, is that there is risk here (of which proxy detection is one example), and that removing * will both have the practical effect of reducing the number of parties who get this information, and reduce the possibility of it flowing to unintended parties (i.e. * is a foot-gun).

My position is that we should discuss this in #221, as the issue is completely unrelated to TAO restrictions. It is also my position that if you're trying to defend yourself from a specific attack, the mitigation should do something to prevent the attacker from doing its thing, and that unrelated restrictions won't help you much.

yoavweiss (Feb 11 '20)

Thank you @yoavweiss for taking a look and @snyderp for jumping in!

whatwg/fetch#904

Thanks, that's useful, but as far as I understand, this only discusses the HTTP cache, which addresses neither issue 1 (which requires a partitioned DNS cache) nor issue 3 (which requires partitioned sessions/sockets). A broader approach (similar to Chrome's Storage Isolation Project) was recently proposed here: https://github.com/privacycg/proposals/issues/4

Doesn't ACAO: * expose the same?

I don't know, but assuming it does, isn't TAO: * still increasing that leakage?

the numbers stated on this issue indicate that 95% of currently timed resources will no longer be timed.

Please note that this number doesn't represent how often resource timing data is actually being collected.

Mitigations should tackle this problem head on.

Those mitigations look really good to me 👍 Fuzzing seems to be a good idea for many of the resource timing properties (as a stop-gap measure for all browsers that don't implement cache partitioning). Thinking a bit ahead here: isn't it possible to determine the non-fuzzed value by collecting enough fuzzed samples? Are browsers doing fuzzing in any other API (I'd love to read up on the prior art)?
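Illustrating that concern: if fresh random noise is added on each measurement, averaging repeated samples recovers the true value, so any fuzz would likely need to be deterministic per value (or per origin/session) to resist this. A quick sketch:

```js
// With fresh noise per sample, the mean of n samples converges to the true
// value (the noise shrinks roughly as 1/sqrt(n)).
const trueValue = 123.4; // e.g. a timing or size, arbitrary units
const fuzzed = () => trueValue + (Math.random() - 0.5) * 10; // ±5 uniform noise

const n = 100000;
let sum = 0;
for (let i = 0; i < n; i++) sum += fuzzed();
console.log((sum / n).toFixed(2)); // ≈ 123.40; the fuzz averages out
```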

When a response is received, the browser should be aware of whether the request was sent with cookies, and whether the response has Set-Cookie in its headers. The browser can then apply different restrictions to those responses and how they are reported to Resource Timing.

If restrictions will only be applied to responses that match specific criteria, wouldn't that make them easier to detect? E.g. a fuzzed transferSize would be trivial to detect if an attacker has knowledge of the expected transferSize values.

My position is that we should discuss this [nextHopProtocol] in #221

I agree that we should discuss this separately to keep this issue focused.

kdzwinel (Feb 12 '20)

I don't know, but assuming it does, isn't TAO: * still increasing that leakage?

Maybe, if there are many properties that turn on TAO: * without having ACAO: * on the same resources. Since the recommendation is for any resource available on the internet to have ACAO: * enabled, and since TAO is more sensitive than ACAO, I doubt that's the case.

yoavweiss (Feb 13 '20)

If there are many properties that turn on TAO: * without having ACAO: * on the same

We (DuckDuckGo) will have another 50k website crawl soon, I'll try to get that data.

kdzwinel (Feb 14 '20)

I would be opposed to the proposed change because:

  1. It would break too much Timing-Allow-Origin usage and there's no clear reasoning for the benefit of this breaking change other than 'people may not be aware of what they're opting into'. I do think that the fact that we've added additional power to TAO over time can be a problem for developers that are not aware of these changes. This is a hard problem, as having a new header for every new measurement does not seem desirable either.
  2. It goes against our current goal of aligning TAO as much as possible with CORS, and * is already supported. This means that if/when we make CORS imply TAO (see https://github.com/w3c/resource-timing/issues/178), we'll implicitly have TAO * support because there's already ACAO * support. Thus, there's no point in removing the standalone TAO *. That said, integration with CORS will likely also mean there will be some restrictions for credentialed resources, which I think is desirable. Perhaps we could instead work on implementing those restrictions now, instead of completely getting rid of TAO *?

npm1 (Feb 14 '20)

I looked into the use of TAO: * and ACAO: * in 50k sites (5.4M requests), and the overlap between the two is only 10%.

  • third-party requests seen: 5,422,241
  • third-party requests with ACAO: *: 1,442,542 (26.6% of all)
  • third-party requests with TAO: *: 1,190,192 (22% of all)
  • third-party requests with both ACAO: * + TAO: *: 542,026 (10% of all)
  • popularity of TAO: * among all TAO headers: 96%

jdorweiler (Feb 18 '20)

Since you're digging into that, from the ~12% that have TAO * but not ACAO *, how often does this happen because they have ACAO with some other value, and how often is it that they're missing the ACAO header altogether?

npm1 (Feb 18 '20)

  • 1.2% TAO: * and any non-wildcard ACAO
  • 10.7% TAO: * and no ACAO header

jdorweiler (Feb 18 '20)

@kdzwinel

secureConnectionStart === 0 (Level 1) can reveal that a connection is being reused, suggesting that the user recently visited the given website.

I believe this is a bug in the implementation of RT by several browsers

https://bugs.chromium.org/p/chromium/issues/detail?id=1039080 https://bugs.webkit.org/show_bug.cgi?id=205768

andydavies (Feb 21 '20)

@kdzwinel

Is this study published anywhere / is there any way we can get a look at the data in more detail?

Of the 1.1M 3rd-party requests mentioned above, what services are in them? E.g. sites users log into (social networks) vs. sites content is served from (image CDNs), etc.

This will force developers to list the actual domains they want to share this information with, and will greatly reduce the number of domains that can be scanned using the above techniques. Where a wildcard is genuinely required, developers will still be able to simulate it by setting the Timing-Allow-Origin value based on the value of the request's Referer header.

Is this a realistic option for many third-party tag providers?

Consider a provider who hosts their common code on cloud storage and serves it via a dumb CDN: how would they set TAO to the relevant origin?

More intelligent CDNs can enable this via config at the edge, but what happens when there's no referrer to reflect back in TAO, or to vary on?

andydavies (Feb 21 '20)

Is this a realistic option for many third-party tag providers?

More intelligent CDNs can enable this via config at the edge, but what happens when there's no referrer to reflect back in TAO, or to vary on?

You've identified exactly the point of the issue: that the feature should be limited to sites looking to debug their own resources, and that allowing everyone to time everything from everywhere is the wrong point on the privacy/functionality curve. There is both utility and risk in this functionality, so taking steps to restrict (but not remove) its availability to a smaller set of parties is PING's suggestion for how to keep the functionality but reduce its privacy risk.

pes10k (Feb 22 '20)

Pete, that was not at all the PING suggestion (minutes here). Suggestions at that meeting included keeping wildcard but not making it the default; renaming it from * to unsafe-all; and cache partitioning. Dropping wildcard entirely was only presented as something that developers could overcome if they exerted more effort.

Nobody at PING voiced any support for the position you just voiced, which involved creating cases in which it is impossible to measure timing. Please do not abuse your role by claiming your own novel opinion is some group consensus.

michaelkleber (Feb 22 '20)

@michaelkleber

  1. The minutes you linked to explicitly say "Our suggestion would be to drop the ability to set wildcard".
  2. I didn't open this issue, so I would kindly appreciate you not suggesting I'm putting words in other people's mouths.
  3. This issue (titled "Consider removing wildcard option") has been open as a PING issue for weeks now, linked to from the PING tracking issues repo.

pes10k (Feb 22 '20)

I've opened a distinct PING process issue to capture @michaelkleber's concern / complaint and make sure it's addressed publicly, without clouding the discussion of the substance of this issue. https://github.com/w3cping/administrivia/issues/28

pes10k (Feb 22 '20)

That the feature should be limited to sites looking to debug their own resources, and that allowing everyone to time everything from everywhere is the wrong point on the privacy/functionality curve.

@pes10k So you're suggesting sites shouldn't be able to measure in detail the performance of 3rd-party resources they're including in their pages?

andydavies (Mar 01 '20)

Sites are welcome to measure whatever they'd like, using their own machinery and browsers. What's at issue here is what sites can use visitors' browsers to measure, and how broadly.

Since @kdzwinel / PING has identified privacy risk involved in the measurements, it seems prudent to reduce the number and frequency of those measurements.

Removing wildcards is one way of doing so. Something like an analytics / debugging mode would be another (e.g. users who want to help the site by providing more information to it could signal that they'd like to do so; crawlers and other automation / measurement infrastructure might just always have such a mode on).

pes10k (Mar 02 '20)

I believe this is a bug in the implementation of RT by several browsers

Thanks for taking a look @andydavies! Those issues are definitely related, but I'm not quite sure that fixing them addresses our concern here. As far as I understand, after the bug is fixed, secureConnectionStart === fetchStart would still reveal a reused connection.

Is this study published anywhere / is there any way we can get a look at the data in more detail?

Unfortunately not, although we are contemplating releasing raw crawl data. In the meantime we welcome everyone to double-check our findings using other sources. Perhaps HTTP Archive has similar data?

Nobody at PING voiced any support for the position you just voiced

That's our recommendation (I did this review together with @dharb and @jdorweiler). We are happy to discuss alternative mitigations (like the one proposed by @pes10k above).

kdzwinel (Mar 03 '20)

What about valid cases of wildcard usage? Mainly by common/shared CDNs, such as Google Hosted Libraries, where webfonts, CSS, and JS libraries are loaded by many websites. Removing '*' would not allow those websites to measure latency on those resources.

marcelduran (May 05 '20)

@marcelduran this feature isn't about whether sites can measure latency (there are already many, many ways they can do so, like automation / bots); this issue is about whether sites should be able to use visitors' browsers to measure third-party latency, and subject those users to the corresponding privacy and security risks, without the users' knowledge or consent. (re: https://github.com/w3c/resource-timing/issues/222#issuecomment-593669920)

pes10k (May 05 '20)