Cwa-documentation: [Security] [Privacy] Is Corona infection information also shared with users who met only one infected person in the last 14 days?

Created on 20 Jun 2020 · 5 comments · Source: corona-warn-app/cwa-documentation

Use Case:

  • On day 1, Carsten meets Anna and 10 other people
  • On day 2, Carsten meets Nils alone

    • Nils has only met Carsten within the last 14 days, as he works from home and lives alone

  • On day 3, Carsten meets another 20 people
  • Carsten tests positive for Corona and shares this information via the app, assuming it is anonymous since he met more than 30 people

  • Anna gets a notification that one of the people she met has Corona

    • Anna was in a group with more than 10 people, so she cannot identify the person
  • Nils gets a notification that one of the people he met in the last 14 days has Corona

    • Nils only met Carsten, so Nils now knows directly that Carsten has Corona

Question:

  • Is Nils also informed when Carsten shares his Corona infection via the App?

Possible Solution:

  • Only notify users who have more than X (say, 20) IDs in their contact log, so that they cannot easily identify who the infected person was; this preserves privacy

All 5 comments

Carsten should probably notify Nils directly (as well as all the other people he met) about the fact that he has Corona. I think the idea behind this app is more to get notifications from people who would normally never notify you (e.g. the person sitting next to you on the train).

This is one of the risks which is inherent to the decentralized protocol.
Let me try to explain in more detail: if you're able to isolate and attribute one phone (i.e. you know that your phone only had contact with one other phone, and you know who that other phone belongs to), you will always be able to attribute the infection if this person later tests positive and decides to upload their result. The solution you're suggesting unfortunately will not work. An attacker can always download the keys of those who test positive (and decide to share this) and manually compare them with keys they observed outside of the app (e.g. by running a separate Bluetooth device with software to record the RPIs).
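The linkage attack described above can be sketched in a few lines. This is a hypothetical illustration only: the real GAEN scheme derives RPIs from a TEK via HKDF and AES-128, while the HMAC-based derivation below is a simplified stand-in that preserves the structure (one daily key, one rotating identifier per interval). The point is that matching needs no app at all, just the published keys and a sniffer log.

```python
# Sketch of the linkage attack: an attacker who records RPIs with their own
# Bluetooth sniffer can later match them against the publicly downloadable
# TEKs of confirmed-positive users.
# NOTE: hypothetical HMAC-based derivation, standing in for the real
# HKDF + AES-128 derivation used by GAEN.
import hmac, hashlib, os

INTERVALS_PER_DAY = 144  # 10-minute intervals in one day

def derive_rpi(tek: bytes, interval: int) -> bytes:
    """Derive the rolling proximity identifier for one 10-minute interval."""
    return hmac.new(tek, interval.to_bytes(4, "little"), hashlib.sha256).digest()[:16]

# Carsten's phone holds a daily TEK and broadcasts a fresh RPI every interval.
carsten_tek = os.urandom(16)
broadcast = {derive_rpi(carsten_tek, i) for i in range(INTERVALS_PER_DAY)}

# The attacker's sniffer recorded some of those RPIs, plus unrelated noise.
sniffed = set(list(broadcast)[:10]) | {os.urandom(16) for _ in range(5)}

# Later Carsten tests positive and uploads his TEK. Anyone can now re-derive
# his RPIs and intersect them with what they sniffed; no app is needed.
published_teks = [carsten_tek]
for tek in published_teks:
    rederived = {derive_rpi(tek, i) for i in range(INTERVALS_PER_DAY)}
    matches = sniffed & rederived
    if matches:
        print(f"TEK matches {len(matches)} sniffed RPIs -> contact confirmed")
```

This is why a client-side threshold (only notify users with more than X recorded IDs) cannot help: the attacker simply bypasses the app's logic and does the matching offline.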

Also: the phone doesn't know how many people it has been in contact with. It only sees and records RPIs, which change every 10–15 minutes. Only once a contact confirms they're infected and chooses to upload their TEKs (i.e. their daily keys) can the phone find out that some of the RPIs it recorded came from one phone. But unless all the RPIs it recorded are from people who reported positive, it doesn't know how many non-infected people it was in contact with (which is one of the properties that make the protocol privacy-preserving).
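The unlinkability property above can be made concrete with a small sketch. Again the HMAC derivation is a hypothetical stand-in for GAEN's HKDF/AES scheme; the point is that without a TEK, recorded RPIs are just independent-looking random strings, so the phone cannot count distinct contacts.

```python
# Sketch of why the phone cannot count its contacts: it only stores opaque
# 16-byte RPIs that rotate every 10-15 minutes. Without a daily TEK there is
# no way to tell which RPIs belong to the same person.
# NOTE: hypothetical HMAC-based derivation, standing in for HKDF + AES-128.
import hmac, hashlib, os

def derive_rpi(tek: bytes, interval: int) -> bytes:
    return hmac.new(tek, interval.to_bytes(4, "little"), hashlib.sha256).digest()[:16]

tek_a, tek_b = os.urandom(16), os.urandom(16)  # two contacts' daily keys

# The phone recorded 6 RPIs over an hour: 3 from each contact, interleaved.
recorded = [derive_rpi(t, i) for i in range(3) for t in (tek_a, tek_b)]

# Without any TEK, all 6 identifiers look like unrelated random strings:
print(len(set(recorded)), "distinct RPIs recorded, owner count unknown")

# Only if contact A tests positive and publishes tek_a can the phone group
# A's RPIs together; B's RPIs remain unattributable noise.
rpis_a = {derive_rpi(tek_a, i) for i in range(3)}
matched = [r for r in recorded if r in rpis_a]
unmatched = [r for r in recorded if r not in rpis_a]
print(f"{len(matched)} RPIs now linkable to the infected key, "
      f"{len(unmatched)} still anonymous")
```

After the upload, the phone learns that 3 of its 6 recorded RPIs came from one infected person, but it still cannot tell whether the other 3 came from one person or three.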
Additionally: even if the phone could find out how many other people it was in contact with (which it can’t), the responsible course of action would probably still be to notify everyone as soon as possible, even if they haven’t had further contacts yet. After all, they might be infected with a dangerous disease, and in addition to the risk to their own life they might soon go to a large gathering and become the cause of the next superspreading event. That alone should be enough to justify the impact on privacy in this case. It is also what manual contact tracers will do: they will contact Nils as soon as possible, and Nils will immediately know that Carsten must have tested positive even if the contact tracers don’t tell him the name.

Btw: the case you’re describing is very similar to the comic on the frontpage of https://tracing-risks.com/. It’s one among multiple possible “attack vectors” which have been discussed. Cf. my comment here: https://github.com/corona-warn-app/cwa-documentation/issues/273#issuecomment-645322925

Kudos and a big thanks to @daimpi for explaining everything in such great detail!

@vonwenckstern Unfortunately, I need to close this issue as a consequence. Thank you very much for your contribution.

Mit freundlichen Grüßen/Best regards,
SW
Corona Warn-App Open Source Team

@daimpi Thank you very much for this information, including the drawbacks of this approach.

I appreciate that you are so open :).

For everyone who cannot read English well, I translated your information into German via Google Translate:
tracing-risks.en.de.pdf

After reading the PDF, I think that the general approach should be different ( @SebastianWolf-SAP ):

  1. the government randomly tests 500,000 people every week
  2. if a person tests positive, government employees (and only they) can use the app to get that person's contacts from the last 14 days
  3. if the government now also tests a person based on step 2, that person does not know whether they were chosen randomly (step 1) or based on a contact (step 2)

With the large number of random tests, and because people are not told directly why they are being tested, the attack scenario with the job interview would no longer work.


Back to the use case of this ticket: if Carsten tests positive, the government would also go to Nils and test him. However, since the government does not tell Nils why he is being tested (randomly or because of Carsten), Nils can no longer draw his 100% conclusion.

@EvgenyKusmenko: FYI

Glad you found this helpful :).

Disclaimer: I’m also not an infosec expert and I’m not associated with this project in any form, I’m just an interested user.

Regarding your suggestion: I see where you’re coming from. But if only the government were allowed to see the TEKs of people who tested positive, how would it figure out whom they were in contact with? With the current system, probably the best that could be done would be to upload all the RPIs a positive user has seen. But those RPIs don’t identify users. To do this, the government would further have to query all the TEKs (or just manage them centrally in the first place) and keep a mapping of TEKs to real names/addresses (after all, it has to know where to go to make the tests).
At this point we’ve arrived at a fully centralized system which poses a multitude of new problems:

  1. The server becomes a central point of failure for privacy if it gets compromised, as the mapping between TEKs and real identities fully de-anonymizes the users.
  2. This could of course also be put to more nefarious use, e.g. by intelligence agencies who get access to the mapping: by installing listening devices around points of interest, they would be able to see who was in the vicinity and when. The existence of such scenarios alone (even if never executed) would probably be enough to seriously lower trust and depress uptake, which counteracts efforts to gain the widespread adoption that is so important for this kind of app. Afaik this was one of the reasons why the centralized approach was dropped by the German government in favor of a decentralized one.
  3. Afaik such a centralized approach is anyway not allowed by Google/Apple if you want to use their Exposure Notification API (GAEN). Dropping the use of their API opens up a whole other can of worms: from problematic interoperability between apps from different countries over battery usage to even getting basic BLE functionality while the app is running in the background on Apple devices to name just a few.
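The surveillance scenario in point 2 can be sketched concretely. Everything here is hypothetical: the names, the listener location, and the HMAC-based RPI derivation (a stand-in for GAEN's real HKDF + AES-128 scheme). The sketch shows that a leaked TEK-to-identity mapping plus passive BLE listeners is enough to reconstruct who was where and when.

```python
# Hypothetical sketch: a leaked central mapping of TEKs to real identities,
# combined with passive BLE listeners at points of interest, fully
# de-anonymizes sighted users. Names/places are illustrative; HMAC stands in
# for the real HKDF + AES-128 derivation.
import hmac, hashlib, os

def derive_rpi(tek: bytes, interval: int) -> bytes:
    return hmac.new(tek, interval.to_bytes(4, "little"), hashlib.sha256).digest()[:16]

# Central server state an attacker might obtain:
tek_to_identity = {os.urandom(16): name for name in ("Anna", "Carsten", "Nils")}

# Passive listeners record (location, interval, RPI) triples:
sniffer_log = [("clinic entrance", 12, derive_rpi(tek, 12))
               for tek in tek_to_identity]

# Join: re-derive each known TEK's RPI per interval and match against the log.
sightings = []
for place, interval, rpi in sniffer_log:
    for tek, name in tek_to_identity.items():
        if derive_rpi(tek, interval) == rpi:
            sightings.append((name, place, interval))
            print(f"{name} was near '{place}' at interval {interval}")
```

In the decentralized design this join is impossible for bystanders because no party holds the TEK-to-identity mapping in the first place.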

(Probably there are even more problems with this approach, those were just the first few that came to my mind.)
Maybe there are also less problematic ways to realize a centralized system, but point 3 alone is probably reason enough to abandon this idea (afaik France is using a centralized system without GAEN, but I don’t really know much about their implementation).
