Cwa-documentation: Determination of the exposure configuration parameters

Created on 16 Jun 2020 · 11 Comments · Source: corona-warn-app/cwa-documentation

question

All 11 comments

@stevesap Unfortunately the documents don't help. I always see the same non-helpful document provided to the same important question. I raised this issue here: https://github.com/corona-warn-app/cwa-documentation/issues/257

@christian-kirschnick We have found the main drivers, but we're curious about the details. It seems odd that such important parameter choices are not documented elsewhere.

@BenzlerJ has been helping on the issue I opened but I don't really know everything I'd like to know so that I can write it up as documentation for anyone interested.

@stevesap The transmission_risk.pdf works well for explaining the parameters, with lots of references. This is very much not the case for the explanation of the risk assessment, which gives very little information about which parameters were chosen, and even less about why.

In particular, as written, I would consider this to be an extremely questionable choice: "All exposures for a diagnosis key that lasted less than 10 minutes in total (regardless of how close the smartphones came during that time) or during which the smartphones were more than 8 meters (73 dBm) apart on average (regardless of how long the exposure lasted) are discarded as harmless." (Emphasis mine)

Unless my understanding of the definition of an exposure is wrong, this would lead to an exposure being thrown away where Person A interacts for 10 minutes at close distance with Person B, but afterwards both stay in the same larger area for another hour (say a club, or a protest rally). The average distance in that scenario would be greater than 8 meters, and thus the exposure would be discarded, although A and B interacted with each other for 10 minutes at a close distance.

What seems intended instead is summing the minutes during which the distance was smaller than 8 meters, and discarding those exposures that do not reach 10 minutes.
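The difference between the two rules can be made concrete with a small sketch (hypothetical code, not taken from CWA; the attenuation samples and the `(attenuation, minutes)` representation are illustrative assumptions, while the 73 dB and 10 minute thresholds are the documented ones):

```python
# Illustrative sketch of the two filtering rules discussed above.
# This is NOT CWA code; the sample representation is an assumption.

ATTENUATION_THRESHOLD_DB = 73  # ~8 meters per cwa-risk-assessment.md
MIN_DURATION_MINUTES = 10

def average_attenuation(samples):
    """Duration-weighted average attenuation over the whole exposure.
    samples: list of (attenuation_db, minutes) pairs."""
    total_minutes = sum(m for _, m in samples)
    return sum(a * m for a, m in samples) / total_minutes

def kept_by_average_rule(samples):
    """Rule as documented: keep only if the exposure lasted >= 10 minutes
    AND the average attenuation stayed at or below the threshold."""
    total = sum(m for _, m in samples)
    return (total >= MIN_DURATION_MINUTES
            and average_attenuation(samples) <= ATTENUATION_THRESHOLD_DB)

def kept_by_time_below_threshold_rule(samples):
    """Alternative rule suggested above: sum only the minutes spent
    below the attenuation threshold and require 10 of them."""
    close_minutes = sum(m for a, m in samples
                        if a <= ATTENUATION_THRESHOLD_DB)
    return close_minutes >= MIN_DURATION_MINUTES

# Scenario from the comment: 10 minutes at close range (say 50 dB),
# then an hour far apart (say 85 dB) in the same venue.
scenario = [(50, 10), (85, 60)]

print(kept_by_average_rule(scenario))               # False: avg is 80 dB
print(kept_by_time_below_threshold_rule(scenario))  # True: 10 close minutes
```

With these illustrative numbers the weighted average is 80 dB, so the documented rule discards the exposure even though a 10-minute close contact took place, while the time-below-threshold rule keeps it.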

Just for clarification: the cwa-server team just receives these values from the RKI and sets them accordingly in the configuration. Therefore, we'll move this discussion about why the values are the way they are to the documentation repo.

@CosmicGans This is indeed a concern, see #322. I think combined with what's described in #228 it can lead to false negative attacks.

@CosmicGans The Apple doc says

The attenuation (transmission power - RSSI) can vary during an exposure event. Attenuation values >0 are weighted by the duration at each risk level and averaged for the overall duration. The framework measures and calculates this value.

which seems to be the source of that "on average" statement from cwa-risk-assessment.md:

All exposures for a diagnosis key that lasted less than 10 minutes in total (regardless of how close the smartphones came during that time) or during which the smartphones were more than 8 meters (73 dB attenuation) apart on average (regardless of how long the exposure lasted) are discarded as harmless.

Still, the CWA doc says nothing of the fact that it is a _weighted_ average, and the Apple doc is ambiguous about the exact details of how this is computed.
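Under one plausible reading of the Apple description ("Attenuation values >0 are weighted by the duration at each risk level and averaged for the overall duration"), the computation might look like the following sketch. This is an assumption, not the framework's actual implementation, and the handling of non-positive values and of the divisor is exactly where the Apple doc is ambiguous:

```python
# Hypothetical reading of Apple's weighted-average attenuation.
# NOT the Exposure Notification framework's actual code.

def weighted_average_attenuation(intervals):
    """intervals: list of (attenuation_db, duration_seconds) pairs.

    Per the quoted doc, values > 0 are weighted by their duration;
    here we also divide by the total duration of those positive-valued
    intervals, which is one of at least two possible interpretations
    of "averaged for the overall duration".
    """
    positive = [(a, d) for a, d in intervals if a > 0]
    total_duration = sum(d for _, d in positive)
    if total_duration == 0:
        return 0.0
    return sum(a * d for a, d in positive) / total_duration

# Example: 5 minutes at 60 dB followed by 5 minutes at 80 dB.
print(weighted_average_attenuation([(60, 300), (80, 300)]))  # 70.0
```

Whether the divisor is the duration of the positive-valued intervals or the overall exposure duration changes the result whenever zero or negative attenuation values occur, which is precisely the ambiguity noted above.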

Sorry that we haven't gotten back to you earlier. We also neglected to inform you about the addition by Fraunhofer IIS that we received a few weeks ago: https://github.com/corona-warn-app/cwa-documentation/blob/master/2020_06_24_Corona_API_measurements.pdf

That should explain the background of the determination of the exposure configuration parameters.

Mit freundlichen Grüßen/Best regards,
SW
Corona Warn-App Open Source Team

Thank you for this document. I see that the "real-life train scenario" was done in a warehouse with people sitting on wooden chairs (except for a handful, which were some kind of exhibition chairs). This looks very unrealistic compared to some experiments conducted by an Irish team in a real train carriage. That experiment concluded there would be significant obstacles to using the app productively in such a scenario, due for instance to the metal in the real train seats and the confined "cigar" shape of a train carriage.

Has a more realistic experiment been performed since? Where?

Thanks for providing the reference to the Irish project. I just reached out again to our contacts at Fraunhofer IIS. Will get back to you once we have additional information.

Mit freundlichen Grüßen/Best regards,
SW
Corona Warn-App Open Source Team

I see that they have a more recent paper, where they actually tested the CoronaWarnApp configuration settings in a real-life light rail train.

Conclusion:

We find that the Swiss and German detection rules trigger no exposure notifications on our data, while the Italian detection rule generates a true positive rate of 50% and a false positive rate of 50%. Our analysis indicates that the performance of such detection rules is similar to that of triggering notifications by randomly selecting from the participants in our experiments, regardless of proximity
