Steps to reproduce the issue:
apt-get update
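For context, this assumes the Docker APT repository is already configured for Stretch; a hypothetical docker.list matching the suite in the errors below:

# /etc/apt/sources.list.d/docker.list (assumed contents)
deb https://apt.dockerproject.org/repo debian-stretch main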
Describe the results you received:
Running apt-get update on Debian Stretch just now results in
Err:2 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages
Hash Sum mismatch
as well as
E: Failed to fetch https://apt.dockerproject.org/repo/dists/debian-stretch/main/binary-amd64/Packages.bz2 Hash Sum mismatch
I have cleaned the apt caches and tried again with the same result. Also, I'm not using a proxy.
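For completeness, this is roughly how I cleaned things (a sketch; adjust sudo to your setup):

sudo apt-get clean                     # drop downloaded package files
sudo rm -rf /var/lib/apt/lists/*       # drop the cached index lists
sudo apt-get update                    # re-fetch the indexes (still fails)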
Describe the results you expected:
No error.
Appears to be related to #23202.
Same problem on Ubuntu Trusty:
W: Failed to fetch https://apt.dockerproject.org/repo/dists/ubuntu-trusty/main/binary-amd64/Packages Hash Sum mismatch
E: Some index files failed to download. They have been ignored, or old ones used instead.
Similar problem on Travis CI with the Ubuntu package. It was working an hour ago.
https://travis-ci.org/goalgorilla/drupal_social/builds/134719276
W: There is no public key available for the following key IDs:
1397BC53640DB551
W: Failed to fetch https://apt.dockerproject.org/repo/dists/ubuntu-trusty/main/binary-amd64/Packages Hash Sum mismatch
E: Some index files failed to download. They have been ignored, or old ones used instead.
Same on Debian Jessie:
W: Failed to fetch https://apt.dockerproject.org/repo/dists/debian-jessie/main/binary-amd64/Packages Hash Sum mismatch
Can easily be reproduced in a container also:
FROM debian:8.4
RUN \
apt-get update && \
apt-get install -yq apt-transport-https ca-certificates && \
apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D && \
echo "deb https://apt.dockerproject.org/repo debian-jessie main" > /etc/apt/sources.list.d/docker.list && \
apt-get update
What about apt-get clean? Does it help?
@Vanuan No, already tried.
_USER POLL_
The best way to get notified of updates is to use the _Subscribe_ button on this page.
Please don't use "+1" or "I have this too" comments on issues. We automatically collect those comments to keep the thread short.
The people listed below have upvoted this issue by leaving a +1 comment:
@ViGo5190
Same here!
Failed to fetch https://apt.dockerproject.org/repo/dists/ubuntu-trusty/main/binary-amd64/Packages Hash Sum mismatch
Things we've tried so far:
Re-add the GPG key
curl -fsSL https://get.docker.com/gpg | sudo apt-key add -
Blow the lists cache
sudo rm -rf /var/lib/apt/lists/*
Apt clean
sudo apt-get clean
None of them have resolved the issue
Tried to install via apt. Checksum mismatch with the following file:
https://apt.dockerproject.org/repo/dists/ubuntu-trusty/main/binary-amd64/Packages
Tried the following procedures which did not help:
sudo rm -rf /var/lib/apt/lists/*
Maybe running apt-get -o Debug::pkgAcquire::Auth=true update would help figure out the issue.
Release contains:
MD5Sum:
49df2d605bb5914873fd826f7e7e8c6f 4917 Packages.bz2
InRelease contains:
b013253c327e2bc4be87825f02936344 4915 main/binary-amd64/Packages.bz2
The latter has been updated today (Date: Thu, 02 Jun 2016 11:06:54 UTC), while Release is from yesterday.
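A quick way to reproduce the comparison yourself (a rough sketch; shown here for ubuntu-trusty, but any affected suite works the same way):

BASE=https://apt.dockerproject.org/repo/dists/ubuntu-trusty
# checksum of the Packages.bz2 actually being served (md5sum here for brevity)
curl -fsSL $BASE/main/binary-amd64/Packages.bz2 | md5sum
# checksums apt expects, taken from the signed InRelease index
curl -fsSL $BASE/InRelease | grep 'main/binary-amd64/Packages.bz2'
# if these disagree, apt reports a Hash Sum mismatch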
Running apt-get -o Debug::pkgAcquire::Auth=true update on Ubuntu 14.04 yields
[Waiting for headers]
201 URI Done: bzip2:/var/lib/apt/lists/partial/apt.dockerproject.org_repo_dists_ubuntu-trusty_main_binary-amd64_Packages
RecivedHash: SHA512:d6ca1f74e876031161d1abd6cf9ad0b45f60b19876468cfcf9cacd4956dfd13be43147227a8daa5536f1455bb75b353b178942bc1843d11f0188d00117483912
ExpectedHash: SHA512:d07a3f2c42a9b213e3f03f2f11c08154512baa9fbbaed19f3601865634b82cfdde0e65151a24e523017f29ecfd08a1dfc0af2c2117b025c46d683160892b0de6
https://apt.dockerproject.org/repo/dists/ubuntu-trusty/main/binary-amd64/Packages:
Computed Hash: SHA512:d6ca1f74e876031161d1abd6cf9ad0b45f60b19876468cfcf9cacd4956dfd13be43147227a8daa5536f1455bb75b353b178942bc1843d11f0188d00117483912
Expected Hash: SHA512:d07a3f2c42a9b213e3f03f2f11c08154512baa9fbbaed19f3601865634b82cfdde0e65151a24e523017f29ecfd08a1dfc0af2c2117b025c46d683160892b0de6
Relevant output of apt-get -o Debug::pkgAcquire::Auth=true update:
Got Codename: debian-stretch
Expecting Dist:
Transformed Dist:
Signature verification succeeded: /var/lib/apt/lists/partial/apt.dockerproject.org_repo_dists_debian-stretch_InRelease
Get:2 https://apt.dockerproject.org/repo debian-stretch/main amd64 Packages [4,941 B]
0% [Connecting to ftp.de.debian.org] [Connecting to security.debian.org] [Connecting to mirror.netcologne.de] [Connecting to packages.dotdeb.org] [Connecting to www.deb-multimedia.org] [Connecting to ftp-stud.hs-esslingen.de]
201 URI Done: https://apt.dockerproject.org/repo/dists/debian-stretch/main/binary-amd64/Packages.bz2
ReceivedHash:
- SHA512:14844ddc767052951fb68eabc19a1935fb930c798d64fd86ace0dcce3aad2af887fc091ad90897a52f341f65dadac5f0dc31a35f9c70b5bcc582314187a336cf
- SHA256:0cee3ef5330e133cc6dfbf3d34f118806ce685a1ded4210c5c4f7ef7b43e9867
- SHA1:bcf84731c3d9fe4355ce73b3cd756decbf9b67cb
- MD5Sum:c99614887831f4d020e682c8222fe49b
- Checksum-FileSize:4933
ExpectedHash:
- Checksum-FileSize:4941
- SHA512:5de62937921a32be2e9cf14f65e6adda3499fd648f37ab5ccc9547a03d211be66c3a5cd15f272e5a3f0abc53fec3903f646410337917e4201bf2a7ed5ac8581d
- SHA256:ebc0ec8921482f40bdcf1fa9a7f39b7bd198d81a769643723201c109b3b617ea
- SHA1:a61818ebafdccbccdfdeee5e550b9241b8c32722
- MD5Sum:9cd9390adc1849ba5923a70d92af1927
https://travis-ci.org/goalgorilla/drupal_social/builds/134730044
Get:11 https://apt.dockerproject.org ubuntu-trusty/main amd64 Packages
201 URI Done: https://apt.dockerproject.org/repo/dists/ubuntu-trusty/main/binary-amd64/Packages.bz2
RecivedHash: SHA512:36e068ae0288732c51bd971ee74b6d27c8707f4d11840afcca617884de82e8c533c5259d8d97bb297966424bc58ac219879f4f5d12c4abe073799bb658f4bd87
ExpectedHash: SHA512:d07a3f2c42a9b213e3f03f2f11c08154512baa9fbbaed19f3601865634b82cfdde0e65151a24e523017f29ecfd08a1dfc0af2c2117b025c46d683160892b0de6
On Ubuntu Wily 15.10 I get
E: Unable to locate package docker-engine
I got the same before on Ubuntu Xenial 16.04. Is docker even added to the Xenial repo yet?
Relevant output of apt-get -o Debug::pkgAcquire::Auth=true update:
Got Codename: ubuntu-xenial
Expecting Dist:
Transformed Dist:
Signature verification succeeded: /var/lib/apt/lists/partial/apt.dockerproject.org_repo_dists_ubuntu-xenial_InRelease
Get:12 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages [1.712 B]
Ign:12 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages
Get:12 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages [1.430 B]
Ign:12 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages
Get:12 https://apt.dockerproject.org/repo ubuntu-xenial/main amd64 Packages [4.815 B]
100% [12 Packages 4.815 B/4.815 B 100%]
201 URI Done: https://apt.dockerproject.org/repo/dists/ubuntu-xenial/main/binary-amd64/Packages
ReceivedHash:
- SHA512:c7883bb7a1d0b5162431576408644a85003be4601724b6f2db275cd4b603a61f8dcd924e80158c40413942519c8a528f7940ffbe5370daa4b0a0d867afe3163d
- SHA256:de12840d76e571cb6f42e63ac570c59d5332d772fb295b6919d12214052bfa6b
- SHA1:9f9c05d3b7d8ca13e9e03c4f0f12757816f02301
- MD5Sum:65e1f5c451c230a091118b468c31bae7
- Checksum-FileSize:4815
ExpectedHash:
- Checksum-FileSize:4815
- SHA512:2becf6c2b9aae5b6823ea6d9f12988e22905a87a9a03fed844a761698eee614899d7b039e081e0b330539e716918b75e87a96c287a5efbe9fc3e847d44657798
- SHA256:f4ae20e2259740699fba3a79dd7fb557c472d172b578798071274f7ba4c400f3
- SHA1:8f34563e8170c5698dc7ba04dd3cf4c8a93100cf
- MD5Sum:31d143b7a15a8a38bc92a7559c995078
Can we agree that the hash sums are incorrect and that the repo needs administrative action?
I worked around this by downloading the new package manually and installing it with dpkg:
curl -OL https://apt.dockerproject.org/repo/pool/main/d/docker-engine/docker-engine_1.11.2-0~trusty_amd64.deb
dpkg -i docker-engine*.deb
Unfortunately dpkg installation doesn't seem to play well on Travis.
Also, this is what happens to me when manually installing on Debian Stretch, using https://apt.dockerproject.org/repo/pool/main/d/docker-engine/docker-engine_1.11.2-0~stretch_amd64.deb:
$ sudo systemctl status docker.service
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2016-06-02 14:46:59 CEST; 58s ago
Docs: https://docs.docker.com
Main PID: 31269 (code=exited, status=1/FAILURE)
Jun 02 14:46:58 penny systemd[1]: Starting Docker Application Container Engine...
Jun 02 14:46:58 penny docker[31269]: time="2016-06-02T14:46:58.553905409+02:00" level=info msg="New containerd process, pid: 31293\n"
Jun 02 14:46:59 penny docker[31269]: time="2016-06-02T14:46:59.659258835+02:00" level=error msg="[graphdriver] prior storage driver \"aufs\" failed: driver not supported"
Jun 02 14:46:59 penny docker[31269]: time="2016-06-02T14:46:59.659395935+02:00" level=fatal msg="Error starting daemon: error initializing graphdriver: driver not supported"
Jun 02 14:46:59 penny systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Jun 02 14:46:59 penny docker[31269]: time="2016-06-02T14:46:59+02:00" level=info msg="stopping containerd after receiving terminated"
Jun 02 14:46:59 penny systemd[1]: Failed to start Docker Application Container Engine.
Jun 02 14:46:59 penny systemd[1]: docker.service: Unit entered failed state.
Jun 02 14:46:59 penny systemd[1]: docker.service: Failed with result 'exit-code'.
Update: As I somehow expected, this was an unrelated problem. I fixed it by running rm -rf /var/lib/docker/aufs after finding this. So the manual install works for me for the time being.
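For anyone hitting the same graphdriver error, this is roughly what I did (a sketch; note that it throws away any existing aufs image and container data):

sudo systemctl stop docker
sudo rm -rf /var/lib/docker/aufs    # remove the stale aufs state the daemon trips over
sudo systemctl start docker
sudo systemctl status docker        # should now report active (running)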
ping @mlaventure @tiborvass PTAL!
ETA?
Yes, we need an ETA too; it's pretty urgent - our complete Travis build chain is dead now -.-
Here are the relevant files for xenial:
https://apt.dockerproject.org/repo/dists/ubuntu-xenial/main/binary-amd64/
InRelease 02-Jun-2016 11:06 2.6K
Packages 02-Jun-2016 2:38 4.8K
Packages.bz2 02-Jun-2016 2:38 1.7K
Packages.gz 02-Jun-2016 2:38 1.4K
Release 02-Jun-2016 3:43 1.7K
Release.gpg 02-Jun-2016 3:43 801
We can see that these files were regenerated earlier today.
The checksums (hashes) of these files should match what is listed in the signed InRelease file.
The InRelease file (https://apt.dockerproject.org/repo/dists/ubuntu-xenial/main/binary-amd64/InRelease) says it was generated on Date: Thu, 02 Jun 2016 03:43:32 UTC. However, the timestamp shown by the web server is 02-Jun-2016 11:06.
Among the several possible causes of a Hash Sum mismatch, this one is about a weird update of InRelease with wrong checksums. In addition, InRelease lists the Release file as being 0 bytes long.
@simos So this should work on Xenial now? I thought docker still did not work on Xenial and we had to go back to Wily. (Then again, I have only been an Ubuntu user since today, so what do I know.)
@bmoorthamers You can manually check which repositories have mismatched hashes. See my post above. At least trusty, wily and xenial are currently (probably since earlier this morning) affected.
While waiting for the main package fix, I'm using the experimental package, which is working. Does anybody know if there are any big differences I need to be aware of, or is there a document describing them somewhere?
@theluk the experimental build is currently built from master.
To give an update; I raised this issue internally, but the people needed to fix this are in the San Francisco timezone, so they're not present yet.
As a temporary workaround, you can install docker 1.11.2-rc1 from the "test" repository; 1.11.2-rc1 is almost the same as the current release, apart from these three changes:
https://github.com/docker/docker/pull/23164, https://github.com/docker/docker/pull/23169, and https://github.com/docker/docker/pull/23176
Those changes should not make a functional difference (and the last change only affects some corner cases).
You can install the RC either by changing "main" to "test" in the APT repository configuration, or by using the install script:
curl -fsSL https://test.docker.com | sh
Hoping to get this fixed ASAP
To figure out whether this issue is fixed, you can visit, for example, the page at https://apt.dockerproject.org/repo/dists/ubuntu-xenial/main/binary-amd64/ and check the timestamp for the InRelease file.
Currently it still says 11:06 (UTC), which is the version of the file that has the wrong checksums. If it says a later time, then it has probably been fixed.
Now the time is 13:25 (UTC) and we are still waiting.
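If you prefer the command line, a rough way to poll for a newer file (assuming the server keeps sending a Last-Modified header):

# server-side timestamp of the signed index for xenial
curl -sI https://apt.dockerproject.org/repo/dists/ubuntu-xenial/main/binary-amd64/InRelease | grep -i last-modified
# anything later than 11:06 UTC suggests the repository has been regenerated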
Thanks guys!
thanks @thaJeztah installation of test worked fine!
Same problem with Ubuntu Trusty on Travis CI:
W: Failed to fetch https://apt.dockerproject.org/repo/dists/ubuntu-trusty/main/binary-amd64/Packages Hash Sum mismatch
To give an update; I raised this issue internally, but the people needed to fix this are in the San Francisco timezone, so they're not present yet.
Does this mean that Docker -- a major infrastructure company -- does not have any on-call engineers available to fix this?
@mlafeldt guess you didn't pay for 24/7 support.
@mlafeldt commercial support does; open source is separate infrastructure
I am also facing the same issue on Wily and am not able to install docker:
root@vikram-VirtualBox:/etc/apt/sources.list.d# cat docker.list
deb https://apt.dockerproject.org/repo ubuntu-wily main
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=15.10
DISTRIB_CODENAME=wily
DISTRIB_DESCRIPTION="Ubuntu 15.10"
NAME="Ubuntu"
VERSION="15.10 (Wily Werewolf)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 15.10"
VERSION_ID="15.10"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
Hit http://in.archive.ubuntu.com wily-backports/main Translation-en
Hit http://in.archive.ubuntu.com wily-backports/universe Translation-en
Fetched 4,789 B in 33s (145 B/s)
W: Failed to fetch https://apt.dockerproject.org/repo/dists/ubuntu-wily/main/binary-amd64/Packages Hash Sum mismatch
E: Some index files failed to download. They have been ignored, or old ones used instead.
http://in.archive.ubuntu.com/ubuntu/dists/wily-backports/universe/i18n/Translation-en: Computed Hash: SHA256:c03ff8f13394e66ce3b2d4645e779e658df189f96326c6eaa8f137a08eb0df30 Expected Hash: SHA256:c03ff8f13394e66ce3b2d4645e779e658df189f96326c6eaa8f137a08eb0df30
Fetched 737 kB in 28s (26.0 kB/s)
W: Failed to fetch https://apt.dockerproject.org/repo/dists/ubuntu-wily/main/binary-amd64/Packages Hash Sum mismatch
https://apt.dockerproject.org/repo/dists/ubuntu-wily/main/binary-amd64/
../
InRelease 02-Jun-2016 11:06 2.6K
Packages 02-Jun-2016 2:37 28K
Packages.bz2 02-Jun-2016 2:37 4.7K
Packages.gz 02-Jun-2016 2:37 4.5K
Release 02-Jun-2016 3:43 1.7K
Release.gpg 02-Jun-2016 3:43 801
We can see that these files were regenerated earlier today.
The checksums (hashes) of these files should match what is listed in the signed InRelease file.
The InRelease file (https://apt.dockerproject.org/repo/dists/ubuntu-wily/main/binary-amd64/InRelease) says it was generated on Date: Thu, 02 Jun 2016 03:43:32 UTC. However, the timestamp shown by the web server is 02-Jun-2016 11:06.
I'm gobsmacked that this process isn't automated, with the checksums calculated independently by separate Docker containers, and the upload held for human intervention if the calculations disagree.
@thaJeztah so there is a different repo for commercial users that's not broken?
Here is a script for Ubuntu to get notified by a chime (it plays an audio file) when the repository checksums get updated:
https://gist.github.com/simos/7ee8258ec17101e44bbfa93606694ede
I think there is not much to say other than get an official response from Docker on this.
@krak3n yes, there are separate releases for the commercially supported version.
For people using Travis, I could fix it by doing the following:
before_install:
- sudo apt-get install libsystemd-journal0
- pushd /tmp
- curl -OL https://apt.dockerproject.org/repo/pool/main/d/docker-engine/docker-engine_1.10.2-0~trusty_amd64.deb
- sudo dpkg --force-all -i docker-engine*.deb
- docker -v
- popd
@thaJeztah neither changing "main" to "test" for the APT repository nor using the install script (curl -fsSL https://test.docker.com | sh) works:
W: Failed to fetch https://apt.dockerproject.org/repo/dists/ubuntu-trusty/InRelease Unable to find expected entry 'test/binary-amd64/Packages' in Release file (Wrong sources.list entry or malformed file)
E: Some index files failed to download. They have been ignored, or old ones used instead.
@xuedong09 instead of "test" use "testing"
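For reference, a minimal sketch of that change (assuming the default /etc/apt/sources.list.d/docker.list location and the ubuntu-trusty suite):

# switch the APT component from "main" to "testing" in the Docker source entry
sudo sed -i 's/ main$/ testing/' /etc/apt/sources.list.d/docker.list
sudo apt-get update
sudo apt-get install docker-engine
# remember to switch back to "main" once the main repository is fixed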
I've tweeted @docker @dockerstatus (multiple times)... this is a major issue... surprised they've been so silent!
We're working on it, folks.
Thanks @crunis - that Travis fix works a treat.
Thanks for working on fixing this. It would be great if you'd publish the results of the post-mortem once fixed.
Thanks @hertzg and @thaJeztah, changing "main" to "testing" in the APT repository worked for me.
@xuedong09 Just keep in mind that's where we publish pre-release packages.
Such an interesting single point of failure for the Docker ecosystem.
@babakgh I was thinking that too. Hopefully the post-mortem can suggest a good future prevention.
This is also affecting me.
I still get: https://apt.dockerproject.org/repo/dists/debian-jessie/main/binary-amd64/Packages Hash Sum mismatch
Reminds me of what happened with npm and NodeJS:
http://www.thejournal.ie/programmer-break-internet-code-2679793-Mar2016/
And another, me too
W: Failed to fetch https://apt.dockerproject.org/repo/dists/debian-jessie/main/binary-amd64/Packages Hash Sum mismatch
Docker repo maintainers. You need:
I hope this never happens again. Docker was causing production test and deployment issues here (on Travis CI) with this, even though I'm not using a single Docker container in production. 😑
To all complainers and ragers:
There is a commercial, paid and well-supported version of Docker.
FYI, this is the community version, supported on a best-effort basis and NO MORE.
@vadviktor Is that the official position of Docker, because I'd like to quote that?
@therealmarv This problem should not affect your production or any deployment pipelines anyway since no one should rely on an Internet connection or an external repository to build and deploy software.
@vadviktor Best effort does not mean bringing everyone down. It means that small bugs and defects are looked at eventually. You still need to keep everything running under normal circumstances.
For ubuntu trusty (14.04), switching from the "main" to "testing" APT repository worked great for me.
+1
I never knew Docker was a two-tier organisation where the user base is split between the haves and the have-nots. Surely installing docker is a global concern for everyone using the software, and therefore the support that "commercial" people get should also apply to the community. A paid tier is a good way for an organisation to make money, but it should go beyond the basics, like being able to install your software.
Accidents happen; it's how we deal with them and the lessons we take forward that matter. Most of this thread seems to be rampant with speculation. Thanks in advance to all Docker team members working on fixing this problem.
@vadviktor Do you work at Docker?
@vadviktor Where can I find this commercial apt repository? What product should I buy to get access to it?
@vadviktor does not work at Docker, nor maintain the project.
It appears to be working for Ubuntu Xenial now.
FYI, this issue is on HN https://news.ycombinator.com/item?id=11822562
for everybody raging over this downtime: here's a cute deer picture to calm down and pass the time in the meanwhile:
Trusty appears to be back up
Hi everyone. I work at Docker.
First, my apologies for the outage. I consider our package infrastructure to be critical infrastructure, both for the free and commercial versions of Docker. It's true that we offer better support for the commercial version (it's one of its features), but that should not apply to fundamental things like being able to download your packages.
The team is working on the issue and will continue to give updates here. We are taking this seriously.
Some of you pointed out that the response time and use of communication channels seem inadequate; for example, the @dockerstatus bot did not mention the issue when it was detected. I share that opinion, but I don't know the full story yet; the post-mortem will tell us for sure what went wrong. At the moment the team is focusing on fixing the issue and I don't want to distract them from that.
Once the post-mortem identifies what went wrong, we will take appropriate corrective action. I suspect part of it will be better coordination between core engineers and infrastructure engineers (2 distinct groups within Docker).
Thanks and sorry again for the inconvenience.
Heh - got the catalog but the package is missing - guess I'll have another coffee :-)
@shykes Thanks for the update - lousy way to start your morning ...
Hope the day gets better from here
I'm sad that my deer picture got less +1s than the official answer.
I have already installed docker with https://get.docker.com | sh without any error.
Looks like the guys from Docker fixed the issue:
We localized the cause of the issue, and it should be resolved now; please try again.
You may need to clear the apt cache:
apt-get clean && apt-get update
Thanks for the fix @thaJeztah
Well, that was quick for an unexpected issue, thanks.
@snario you're welcome; can't take credit for the fix, but happy to see it's been sorted out 😅
👍
Sadly, with this at the top of Hacker News, there are going to be billions of comments. Big thanks for the quick fix, @thaJeztah.
I wonder if we should lock this thread before they show up.
Up until now there have been workarounds (either grab the .deb and install with dpkg, temporarily switch to the testing repository, etc.). These are not permanent solutions.
A fix means that the source of this problem is resolved and we can mark this issue as solved.
As posted earlier, you can use a script to get an audio notification as soon as the main docker repositories are fixed:
https://gist.github.com/simos/7ee8258ec17101e44bbfa93606694ede
Other than that, there is not much to do.
@simos see my earlier comment (https://github.com/docker/docker/issues/23203#issuecomment-223328829); the issue should be resolved.
@thaJeztah I verified that the issue has been resolved. Tested on Ubuntu 15.10. Thanks to all the other Docker folks that helped resolve this issue quickly.
Thank you all for the reports: we're very sorry for this. We're looking into the details and the timeline of events that led to this, and we'll make sure it doesn't happen again.
I'm closing the issue, but of course feel free to let me know if you see any remaining quirks.
Ubuntu 14.04 here, issue solved!
Probably shouldn't be surprised, but it is shocking how many people risk their infrastructure with hard dependencies on external repos. I don't even do that with my home systems.
And then complain about Docker having a single point of failure?
@jalawrence Docker is the tip of the iceberg...
Did you hear about the recent problems with node.js and one dev pulling out one single package?
I am pretty sure that most PHP developers using Composer - the de facto package manager for that platform - also do not store complete copies of all their site's dependencies, and the fact that there have been no mishaps so far is more luck than anything.
The problem is that everybody and their dog now depends on $world, and caching all the dependencies locally is a Sisyphean task. Shall I cache all of Debian, all of Packagist, all of CPAN, all of RubyGems, all of npm within a reverse proxy at my own expense?
And then: if github, bitbucket or travis are down, what will my developers be able to do anyway? Do I want back to the day when I had to host all of that?