Why is the attenuation weight for Medium range higher than for Close range?
Hi,
I have a question regarding the attenuation and the associated weights.
My understanding is that the exposure time is weighted based on attenuation. Isn't the assumption that a smaller attenuation poses a higher risk than a larger one, since a smaller attenuation means the other device was closer?
According to the risk calculation parameters and Figure 16 of the solution architecture, the weight for the "Close" range is 0.8 and the weight for the "Medium" range is 1.0. In a previous commit dated March 1, 2021, the "Close" range had a higher weight than the "Medium" range. This made sense.
Shouldn't the weight for the "Close" range be higher than that for the "Medium" range, as it was before?
Is there a reason behind this change?
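To make sure I understand the mechanism correctly, here is a minimal sketch of how I read the weighting. The bucket weights are the values from Figure 16; the dB thresholds and all names in the code are placeholders I made up, not the official CWA values:

```kotlin
// Hypothetical sketch of attenuation-weighted exposure time.
// Weights follow the values quoted above (Close = 0.8, Medium = 1.0);
// the dB thresholds below are placeholders, NOT the official ones.
enum class AttenuationBucket(val weight: Double) {
    CLOSE(0.8),   // small attenuation -> strong signal -> short distance
    MEDIUM(1.0),  // larger attenuation -> weaker signal -> medium distance
}

data class ScanInstance(val attenuationDb: Int, val seconds: Int)

fun bucketFor(attenuationDb: Int): AttenuationBucket? = when {
    attenuationDb < 63 -> AttenuationBucket.CLOSE   // placeholder threshold
    attenuationDb < 73 -> AttenuationBucket.MEDIUM  // placeholder threshold
    else -> null                                    // too far away; ignored
}

// Each scan instance contributes its duration multiplied by the weight
// of its attenuation bucket.
fun weightedExposureSeconds(instances: List<ScanInstance>): Double =
    instances.sumOf { (bucketFor(it.attenuationDb)?.weight ?: 0.0) * it.seconds }

fun main() {
    val scans = listOf(ScanInstance(55, 300), ScanInstance(68, 300))
    println(weightedExposureSeconds(scans)) // 300 * 0.8 + 300 * 1.0 = 540.0
}
```

With these weights, five minutes in the "Medium" bucket counts for more than five minutes in the "Close" bucket, which is exactly what seems counterintuitive to me.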
@mthiop this was the result of the analysis/optimization that the Fraunhofer Institute did, see:
- CWA Blog: Project Team further improves Corona-Warn-App's risk calculation in response to current coronavirus situation
- Press release from Fraunhofer Institute: Fraunhofer unterstützt Entwicklung der Corona-Warn-App (from 2020 but gives you a good idea of the kind of analysis they did)
@mthiop Are you satisfied with @mlenkeit's answer, so that this issue can be closed?
I would actually like somebody from Fraunhofer to confirm or publish the analysis data.
@OlympianRevolution
> I would actually like somebody from Fraunhofer to confirm or publish the analysis data.
The Science blog article About the Effectiveness and Benefits of the Corona-Warn-App, in the section What is the purpose of evaluating the CWA and what aspects play a role in the evaluation?, includes a video showing how the calibration was carried out. It also says: "We also intend to provide more information about the results of these tests." However, I don't remember seeing any further data published.
For such a patently illogical conclusion, it would be good to publish the data, or at least a paper, to make sure this is not a typo.
I couldn't find any explanation for this particular case in a quick internet search. However, there are statistical considerations that could explain why the attenuation weights were adjusted in this way: when an analysis of the former attenuation weight model reveals a high sensitivity (correct detection of high-risk contacts) but a poor specificity (i.e., incorrect classification of no-/low-risk contacts, resulting in too many false-positive high-risk contacts), you would refine the model (e.g., via a receiver operating characteristic curve) to better balance sensitivity and specificity. That could mean reducing the weights for all attenuation buckets to improve specificity, which then leaves sensitivity too low. By increasing the weight of the medium-range attenuation bucket again, the result could be better sensitivity with only a marginal loss in specificity, and hence a better-balanced model.

As an analogy, imagine a radio with an equalizer connected to loudspeakers that have only a large bass membrane and a powerful tweeter. If you turn all equalizer frequencies to maximum, the sound may be distorted in the low and high frequencies. If you then pull down the low and high frequencies on the equalizer but leave the middle frequencies untouched, the sound becomes balanced. That is what may have happened here.
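To make that trade-off concrete, here is a toy sketch. The contacts, weights, and threshold are entirely invented; it only illustrates how raising the medium-range weight again can recover sensitivity at little or no cost in specificity:

```kotlin
// Toy illustration of the sensitivity/specificity trade-off described
// above. All data, weights, and the threshold are made up.
data class Contact(val closeMin: Int, val mediumMin: Int, val isHighRisk: Boolean)

fun score(c: Contact, wClose: Double, wMedium: Double): Double =
    wClose * c.closeMin + wMedium * c.mediumMin

fun evaluate(contacts: List<Contact>, wClose: Double, wMedium: Double, threshold: Double) {
    var tp = 0; var fn = 0; var fp = 0; var tn = 0
    for (c in contacts) {
        val flagged = score(c, wClose, wMedium) >= threshold
        when {
            flagged && c.isHighRisk -> tp++
            !flagged && c.isHighRisk -> fn++
            flagged && !c.isHighRisk -> fp++
            else -> tn++
        }
    }
    val sensitivity = tp.toDouble() / (tp + fn)
    val specificity = tn.toDouble() / (tn + fp)
    println("wClose=$wClose wMedium=$wMedium -> sensitivity=$sensitivity specificity=$specificity")
}

fun main() {
    val contacts = listOf(
        Contact(closeMin = 12, mediumMin = 0, isHighRisk = true),
        Contact(closeMin = 0, mediumMin = 15, isHighRisk = true),
        Contact(closeMin = 3, mediumMin = 2, isHighRisk = false),
        Contact(closeMin = 0, mediumMin = 6, isHighRisk = false),
    )
    evaluate(contacts, wClose = 1.0, wMedium = 0.5, threshold = 9.0) // sensitivity 0.5, specificity 1.0
    evaluate(contacts, wClose = 0.8, wMedium = 1.0, threshold = 9.0) // sensitivity 1.0, specificity 1.0
}
```

In this invented data set, the second weighting catches the medium-range high-risk contact that the first weighting misses, without flagging either low-risk contact.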
But I agree with you, it would be good to have an official publication about the study.
@OlympianRevolution @vaubaehn thank you for your interest in the science behind this! It looks like there is indeed no scientific publication about this available yet. I'll let you know once something is published.
I suggest keeping the issue open until then.
@OlympianRevolution @vaubaehn in the meantime, you might find this one here interesting (if you have access): https://ieeexplore.ieee.org/document/9662591
Let's still keep the issue open. There might be something else I can refer you to, soon 😉
Here is an open-access link: https://www.researchgate.net/publication/357270880_Contact_Tracing_with_the_Exposure_Notification_Framework_in_the_German_Corona-Warn-App
Sadly, the optimized parameters section does not go into detail about how the optimization was performed or how increasing the close-range weight would decrease the F2 score. It also does not discuss the counterintuitive short-range weight. I suspect there may have been relatively little data at short range, so the short-range weight may be due to chance. But without the data or the evaluation code, it is impossible to know.
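For context, the F2 score mentioned above is the standard F-beta measure with beta = 2, which weights recall (sensitivity) more heavily than precision. A minimal sketch of the formula; the example numbers are arbitrary:

```kotlin
// F-beta score: with beta = 2, recall dominates the harmonic weighting,
// so an optimization that maximizes F2 favors sensitivity over precision.
fun fBeta(precision: Double, recall: Double, beta: Double = 2.0): Double {
    val b2 = beta * beta
    return (1 + b2) * precision * recall / (b2 * precision + recall)
}

fun main() {
    // Trading some precision for more recall can raise the F2 score:
    println(fBeta(precision = 0.80, recall = 0.50)) // ≈ 0.54
    println(fBeta(precision = 0.70, recall = 0.70)) // = 0.70
}
```

This is why an F2-driven optimization could plausibly land on weights that look counterintuitive from a pure distance-risk perspective, though that remains speculation without the evaluation data.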
@mlenkeit
Are you able to share an update with us here?
@Ein-Tim no, the document that I was referring to at the time is not yet finalized. I'll check again with the author...
@mlenkeit
Are you able to share an update with us here now?
@AnonymousUserUse yes and no. I have checked with the author and the document is still being finalized. I don't have an ETA though.