It would be nice if the developer team could provide a small tool (a Perl script or similar should do) to transparently download the published diagnosis keys from the CDN. This would allow the public (including journalists) to make an independent assessment of the effectiveness of the Corona-Warn-App by comparing the number of published diagnosis keys (as actually retrieved from the CDN, directly corresponding to the number of people reporting their positive test result through the app) to the total number of newly infected cases in Germany reported e.g. by the RKI or other reliable sources (possibly accounting for a certain reporting delay).
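To make the idea concrete, here is a minimal sketch of the comparison step. All numbers are made up, and the reporting-delay handling is an assumption for illustration; the actual CDN download is left out because the endpoint has not been published yet.

```python
from datetime import date, timedelta

def detection_ratio(keys_published, reported_cases, reporting_delay_days=0):
    """Share of reported cases that also shared keys via the app.

    keys_published and reported_cases map ISO dates to counts.
    A positive reporting_delay_days shifts the case series back to
    account for the lag between case reporting and key upload.
    """
    total_keys = 0
    total_cases = 0
    for day, n_keys in keys_published.items():
        y, m, d = map(int, day.split("-"))
        shifted = date(y, m, d) - timedelta(days=reporting_delay_days)
        total_keys += n_keys
        total_cases += reported_cases.get(shifted.isoformat(), 0)
    return total_keys / total_cases if total_cases else None

# Example with placeholder figures:
keys = {"2020-06-20": 30, "2020-06-21": 45}
cases = {"2020-06-19": 300, "2020-06-20": 450}
print(detection_ratio(keys, cases, reporting_delay_days=1))  # 0.1
```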
In my opinion such a ratio would be much more relevant than any App Store download figures which will likely be published once the App has been launched.
Unfortunately, your documentation still seems to be a bit fuzzy about the details of the download process/protocol for the diagnosis keys from the CDN.
Thank you for opening this issue. The project team will review and comment back soon. Corona-Warn-App team. JB
This should go a long way, once it is figured out where and how the key bundles are actually accessible.
Google also explains how to work with this data: https://github.com/google/exposure-notifications-server/blob/master/examples/export/README.md
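Following the format described in that README, an export is a zip archive containing export.bin (a 16-byte header reading "EK Export v1", space-padded, followed by a serialized TemporaryExposureKeyExport protobuf) and export.sig. Decoding the protobuf itself would require the .proto definitions from the exposure-notifications-server repo; this sketch only validates the header and reports the payload size:

```python
import io
import zipfile

HEADER = b"EK Export v1"  # space-padded to 16 bytes in the file

def inspect_export(zip_bytes):
    """Return the size of the protobuf payload inside export.bin."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        data = zf.read("export.bin")
    if data[:16].rstrip(b" ") != HEADER:
        raise ValueError("not an Exposure Key export file")
    return len(data) - 16

# Build a minimal fake export in memory to demonstrate:
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("export.bin", HEADER.ljust(16) + b"\x00" * 42)
    zf.writestr("export.sig", b"")
print(inspect_export(buf.getvalue()))  # 42
```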
We currently don't have anything planned in that direction. But it should be rather easy for the community to build something like that tailored to the respective use case - the source code is open and the architecture is well documented.
Mit freundlichen Grüßen/Best regards,
SW
Corona-Warn-App Open Source Team
Something that is missing in the other systems currently running is a way to know early on which of the injected TEKs are test TEKs.
Something to consider when you roll out the system.
Here's a Python script that reads Diagnosis Keys .zip files ("Exposure Key export files" as specified here):
https://github.com/mh-/diagnosis-keys
Thanks. That is what I have been looking for. Has the official download URL for the data been published yet? I understand that the newly uploaded diagnosis keys will be published once a day in a single ZIP file; is that correct?
Will the ZIP file also contain fake/dummy/test entries? That would of course dilute the statistics.
When a person publishes his or her data on day X, 14 diagnosis keys will initially be uploaded (and ultimately published?) on that day (corresponding to days X-14 through X-1), plus a fifteenth one corresponding to day X will be uploaded on day X+1, correct?
Is the ZIP file for day X created in the backend and then provided for download at some time in the early hours of day X+1? (all dates and times referring to UTC)
Did I get this right? Is there any documentation explaining the procedure?
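While waiting for official documentation, the timing questions above can at least be checked against the keys themselves: the Exposure Notification spec expresses time as 10-minute intervals since the Unix epoch, and a daily key's rollingStartIntervalNumber is a multiple of 144 (144 × 10 min = 24 h). Whether the server batches keys exactly as asked above remains an open question; this sketch only shows the interval-to-date conversion:

```python
from datetime import datetime, timezone

INTERVALS_PER_DAY = 144  # 10-minute intervals in 24 hours

def interval_to_date(rolling_start_interval_number):
    """UTC date on which a key with this start interval became valid."""
    ts = rolling_start_interval_number * 600  # seconds since epoch
    return datetime.fromtimestamp(ts, tz=timezone.utc).date().isoformat()

def date_to_interval(iso_date):
    """First 10-minute interval number of the given UTC date."""
    dt = datetime.fromisoformat(iso_date).replace(tzinfo=timezone.utc)
    return int(dt.timestamp()) // 600

start = date_to_interval("2020-06-20")
print(start % INTERVALS_PER_DAY)   # 0 (midnight-aligned)
print(interval_to_date(start))     # 2020-06-20
```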
Dear maintainers, given that there are valid questions raised by the community, it would be helpful if you could reopen the issue and answer them quickly.
Yes, the source code is open, but the documentation is not complete (understandably, so soon after release). That's why we need insider help.
Please respond in particular to the "test key" question raised by @pdehaye
Also, do we know whether the official server is up yet, and what its URL is?
Also, I don't like how @SebastianWolf-SAP says things are "rather easy" when there are valid unanswered questions.
I opened issue #258 not knowing about this issue - this discussion here is much more valuable. But since this one got closed, I will keep #258 open for now to show that there are questions to be answered.