Additional Debugging Asks for the Attribution Reporting API
The Attribution Reporting API has a set of mechanical and privacy-oriented restrictions, which introduce discrepancies between existing third-party cookie-based conversion measurement and API-based conversion measurement. Some restrictions, such as LDP noise, are very challenging to reproduce or simulate and can lead to unknown gaps when developers compare cookie-based measurement with API-based measurement data. Debugging reports should allow an ad tech to accurately compare its existing third-party cookie-based conversions with the conversions reported by the API, and to understand which cookie-based conversions will not be reported, and why. The integration and debugging experience can be improved by introducing the following debugging reports. All of the asks below apply to both the event-level and aggregate APIs; asks specific to one of them are called out separately.
- Source Debug Reports: These enable comparing all candidate source events against the source events that were actually registered, and examining the reason for rejected or deleted source events. They also help identify source events lost to unknown or network losses. Examples of the specific behavior are listed below, with an illustrative payload sketch after the list:
- Source registration success reports when a source is registered successfully with Chrome.
- Source registration failure reports when the source registration is rejected by Chrome, for example when a source hits API rate limits. Failure reports should also be sent with an "unknown" reason if Chrome doesn't want to reveal the actual one.
- Source noise reports when noise is applied to the source.
- Source deletion reports when a source is deleted, for example after hitting MPC limits (event API) or exhausting budget (aggregate API).
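To make the ask concrete, here is a minimal sketch of what such a report could carry. The type strings and field names are illustrative only, not part of any spec:

```ts
// Hypothetical shapes for the requested source debug reports. All names
// ("source-noised", SourceDebugReport, etc.) are illustrative, not spec.
type SourceDebugType =
  | "source-success"   // registered successfully with Chrome
  | "source-failure"   // rejected, e.g. API rate limits hit
  | "source-noised"    // noise was applied to the source
  | "source-deleted";  // deleted, e.g. MPC limit / budget exhaustion

interface SourceDebugReport {
  type: SourceDebugType;
  source_event_id: string;         // echoes the registered source
  attribution_destination: string;
  // "unknown" when Chrome does not want to reveal the actual reason.
  reason?: string;
  report_time: number;             // ms since epoch
}
```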
- Trigger Debug Reports: These enable comparing all candidate trigger events against the trigger events that were actually registered, and examining the reason for rejected or deleted trigger events. They also help identify trigger events lost to unknown or network losses. Examples of the specific behavior are listed below:
- Trigger registration success reports when a trigger is registered successfully with Chrome.
- Trigger registration failure reports when a trigger is rejected by Chrome, for example because of cross-domain iframe or HTTPS restrictions. Failure reports should also be sent with an "unknown" reason if Chrome doesn't want to reveal the actual one.
- Trigger deletion reports when a trigger is deleted; for example, a new trigger will be deleted if a trigger with the same deduplication_key already exists (sketched below).
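For the deduplication case, the flow could look like the following sketch. The registration header and its event_trigger_data / deduplication_key fields are from the explainer; the deletion report shape is purely illustrative:

```ts
// Real registration header (per the explainer): a later trigger that reuses a
// deduplication_key that has already attributed is silently dropped today.
const registerTriggerHeader = JSON.stringify({
  event_trigger_data: [{ trigger_data: "2", deduplication_key: "2024" }],
});

// Hypothetical debug report we are asking Chrome to emit on that drop;
// the type string and fields are illustrative, not spec.
const triggerDeletionReport = {
  type: "trigger-deleted",
  reason: "duplicate deduplication_key",
  deduplication_key: "2024",
};
```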
- Attribution Debug Reports: These enable comparing all candidate attribution events against the attribution events that were actually generated, and examining the reason for rejected or deleted attribution events. They also help identify attribution events lost to unknown or network losses. Examples of the specific behavior are listed below, with an illustrative report shape after the list:
- Attribution failure reports, for example when reporting limits are hit or the set is empty after attribution filter matching.
- All attribution reports, all the way up to the declared source expiry time, even if the source is beyond MPC limits (3 for navigation, 1 for event) for event-level reports or has exhausted its budget for aggregatable reports. Attribution debug reports are expected even after the source deletion report is sent.
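To illustrate, a report shape along these lines would work for us; the field names, including the within_limits boolean, are illustrative and not part of any spec:

```ts
// Illustrative only; field names are not spec. within_limits distinguishes
// reports inside the normal MPC/budget limits from the extra reports
// requested above (sent up to source expiry, even past those limits).
interface AttributionDebugReportSketch {
  type: "attribution-success" | "attribution-failure";
  source_event_id: string;
  reason?: string;        // e.g. "rate-limit", "empty-filter-match"
  within_limits: boolean; // false for reports beyond MPC limits / budget
}
```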
Hey @chandan-giri, I have two quick questions about this ask:
- If we send all the failure reports mentioned here, is there still a significant need for success reports? Those will consume more bandwidth and I want to make sure that they are needed.
- "Source deletion reports when a source is deleted, for example after hitting MPC limits (event API) or exhausting budget (aggregate API)." Isn't this redundant with the last bullet, which captures debug reports for all attributions ignoring MPC limits / budget? Do you need both?
I also wanted to point out that for source / trigger failures, our current design cannot report some potential failures, because the opt-in is only discovered after parsing the response JSON. Obviously, we will also not be able to send debug reports if there's no response header indicating the request is using the Attribution Reporting API at all.
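Concretely, the constraint looks roughly like this (a sketch, not actual Chromium code; the error_reporting bit name is taken from later in this thread, and the handler is hypothetical):

```ts
// Sketch of the constraint, not actual Chromium code. The debug opt-in is a
// field inside the registration header's JSON, so it is only known once
// parsing has succeeded.
function handleRegistrationResponse(headers: Map<string, string>) {
  const raw = headers.get("Attribution-Reporting-Register-Source");
  if (raw === undefined) {
    return; // Not an API response at all: no debug report is possible.
  }
  let config: { error_reporting?: boolean };
  try {
    config = JSON.parse(raw);
  } catch {
    return; // Opt-in never discovered: no debug report today.
  }
  if (config.error_reporting) {
    // ...registration continues; failures past this point can be reported.
  }
}
```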
Let me also cc @linnan-github who is looking into this.
Hi @csharrison,
Please see the responses to your comments below.
"If we send all the failure reports mentioned here, is there still a significant need for success reports? Those will consume more bandwidth and I want to make sure that they are needed." Success and failure report both are important for debugging purposes. For debugging, it is important that there is a clean slice where impression and conversion registrations are successful. In cases where Chrome's failure report cannot reach Ad tech due to network failures, or Chrome is not able to send a failure report in some failure scenarios due to design limitation, Ad-tech will assume it as a successful registration which can cause under reporting. Also for cases where users have consented, it will be helpful to get success reports which would give a clean slice of traffic to analyze for EEA. For bandwidth concerns, we can limit the reports by setting a bit(e.g success_reporting) for a sample of users.
"Source deletion reports when a source is deleted, for example, hitting mpc limits(Event API), budget exhaustion(Aggregate API).” Isn’t this redundant with the last bullet, which captures debug reports for all attributions ignoring MPC limits / budget? Do you need both?" In terms of the source deletion reports V.S. sending all the debug reports by ignoring mpc/budget, we want both. The rationale is as follows: Source deletion reports would be treated as a done marker for a given source given we would not expect any real reports after the deletion report. This is critical to delay related debugging where we can stop expecting reports for a given source after the deletion report is sent. All the debug reports for all attributions ignoring MPC limits / budget would be used to evaluate the API for different configs using the reports outside the existing privacy budgets to give feedback to Chrome. Also, we do expect a boolean field in the debug report to represent which it is within mpc limit / budget.
"I also wanted to point out that for source / trigger failures, our current design limits some potential failures due to a requirement that the opt-in is only discovered after parsing the response JSON. Obviously we will also not be able to send debug reports if there's no response header indicating the request is using the Attribution Reporting API either." We want to get a failure report if the JSON response parsing fails since the response can get truncated or malformed. Can Chrome give out failure reports if the JSON parsing fails in all scenarios without relying on the error_reporting bit in the response?