In my log files I see a lot of errors like the following, roughly once a minute:
Oct 20 11:45:08 ip-10-128-32-121 dockerd[12654]: time="2016-10-20T11:45:08.958795388Z" level=error msg="Handler for GET /containers/012467134e116e692a6218b7534b78b826f0f6ef3dc2d99c99956e43f66277ac/json returned error: No such container: 012467134e116e692a6218b7534b78b826f0f6ef3dc2d99c99956e43f66277ac"
Oct 20 11:45:08 ip-10-128-32-121 dockerd[12654]: time="2016-10-20T11:45:08.959105963Z" level=error msg="Handler for GET /containers/dbf71d333d65340ea108f9dc8778a9e06c70173ba799c6fb38da5885d853bf89/json returned error: No such container: dbf71d333d65340ea108f9dc8778a9e06c70173ba799c6fb38da5885d853bf89"
It turns out that these IDs belong to systemd .mount units:
CGroup: /
└─system.slice
  ├─var-lib-docker-overlay2-012467134e116e692a6218b7534b78b826f0f6ef3dc2d99c99956e43f66277ac-merged.mount
  ├─var-lib-docker-overlay2-dbf71d333d65340ea108f9dc8778a9e06c70173ba799c6fb38da5885d853bf89-merged.mount
These queries turn out to originate from the cAdvisor plugin in the kubelet.
I'd like to monitor the error rate of API calls to the Docker daemon, but this noise makes that impossible. cAdvisor should be smart enough to recognize .mount units and monitor them appropriately.
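For anyone who wants to verify this on their own node, here is a rough sketch (using the first ID from my logs above; substitute your own). The ID resolves to a systemd mount unit, while dockerd has never heard of it:
# The ID shows up inside a systemd .mount unit name...
systemctl list-units --type=mount | grep 012467134e116e692a6218b7534b78b826f0f6ef3dc2d99c99956e43f66277ac
# ...but docker knows no such container:
docker inspect 012467134e116e692a6218b7534b78b826f0f6ef3dc2d99c99956e43f66277ac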
Same issue here:
Nov 1 20:55:44 prom1 dockerd[25267]: time="2016-11-01T20:55:44.822064076Z" level=error msg="Handler for GET /containers/bdd3ca44b528d9fd5704fe5781f5f375d63e5103e0c7c81daa4377bfbb1d68d1/json returned error: No such container: bdd3ca44b528d9fd5704fe5781f5f375d63e5103e0c7c81daa4377bfbb1d68d1"
Nov 1 20:55:44 prom1 dockerd[25267]: time="2016-11-01T20:55:44.823293659Z" level=error msg="Handler for GET /containers/03458bf30794c6cff51fa38946e0067655191b3087cf1012facfeec71e02e44e/json returned error: No such container: 03458bf30794c6cff51fa38946e0067655191b3087cf1012facfeec71e02e44e"
Nov 1 20:55:44 prom1 dockerd[25267]: time="2016-11-01T20:55:44.823722987Z" level=error msg="Handler for GET /containers/6379b7b68f5b7261e70de3f676923d9eca0bca1ef3123f255e90b8e9c702daa2/json returned error: No such container: 6379b7b68f5b7261e70de3f676923d9eca0bca1ef3123f255e90b8e9c702daa2"
Nov 1 20:55:44 prom1 dockerd[25267]: time="2016-11-01T20:55:44.824549578Z" level=error msg="Handler for GET /containers/30d21f7cf69be1760be6f1f65762e863c88f87e6142854dfe6bdfe5ffef51e98/json returned error: No such container: 30d21f7cf69be1760be6f1f65762e863c88f87e6142854dfe6bdfe5ffef51e98"
Nov 1 20:55:44 prom1 dockerd[25267]: time="2016-11-01T20:55:44.824986987Z" level=error msg="Handler for GET /containers/d8625f2e50ea94ed315718980ffb0a9581d27dca1cccf236c90b01b4b483b21b/json returned error: No such container: d8625f2e50ea94ed315718980ffb0a9581d27dca1cccf236c90b01b4b483b21b"
root@prom1:~# df | grep mnt
/dev/dm-3 10474496 91188 10383308 1% /var/lib/docker/devicemapper/mnt/bdd3ca44b528d9fd5704fe5781f5f375d63e5103e0c7c81daa4377bfbb1d68d1
/dev/dm-2 10474496 81224 10393272 1% /var/lib/docker/devicemapper/mnt/03458bf30794c6cff51fa38946e0067655191b3087cf1012facfeec71e02e44e
/dev/dm-6 10474496 49392 10425104 1% /var/lib/docker/devicemapper/mnt/6379b7b68f5b7261e70de3f676923d9eca0bca1ef3123f255e90b8e9c702daa2
/dev/dm-5 10474496 53236 10421260 1% /var/lib/docker/devicemapper/mnt/30d21f7cf69be1760be6f1f65762e863c88f87e6142854dfe6bdfe5ffef51e98
/dev/dm-4 10474496 317108 10157388 4% /var/lib/docker/devicemapper/mnt/d8625f2e50ea94ed315718980ffb0a9581d27dca1cccf236c90b01b4b483b21b
In my case these are devicemapper mounts.
This generates about 18,000 messages per hour on a cluster with 125 pods.
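To put a number on the spam on your own nodes, a rough one-liner (assuming dockerd logs to the journal under the docker unit, as in the snippets above):
# Count the "No such container" errors from the last hour:
journalctl -u docker --since "1 hour ago" | grep -c "No such container"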
same here
same problem
Same here; I'm using overlay2 as the Docker storage driver.
Can confirm, happens for me on a DigitalOcean Droplet with Ubuntu 16.04. I'm using docker-compose:
Dec 21 09:24:55 xxx dockerd[20414]: time="2016-12-21T09:24:55.980184307-05:00" level=error msg="Handler for GET /containers/50fbd7cddccf663af19cf6704df5937817107643124c517f9c07974f8a5de852/json returned error: No such container: 50fbd7cddccf663af19cf6704df5937817107643124c517f9c07974f8a5de852"
Dec 21 09:24:55 xxx dockerd[20414]: time="2016-12-21T09:24:55.981949345-05:00" level=error msg="Handler for GET /containers/e86431e3968ad371e0ef2bad007cbfa84e529d67dcb72d0b0c7f2bd0c6ec52c1/json returned error: No such container: e86431e3968ad371e0ef2bad007cbfa84e529d67dcb72d0b0c7f2bd0c6ec52c1"
Dec 21 09:24:55 xxx dockerd[20414]: time="2016-12-21T09:24:55.982386727-05:00" level=error msg="Handler for GET /containers/62784a73ffe9295953ddc22dd695c1de2d84cae4b464350408bd97c7daf88446/json returned error: No such container: 62784a73ffe9295953ddc22dd695c1de2d84cae4b464350408bd97c7daf88446"
Dec 21 09:24:55 xxx dockerd[20414]: time="2016-12-21T09:24:55.982768326-05:00" level=error msg="Handler for GET /containers/063db19abeed6c3a4f10d99ab5718405ac6a976ac854805528ec6258971e2b40/json returned error: No such container: 063db19abeed6c3a4f10d99ab5718405ac6a976ac854805528ec6258971e2b40"
Same problem. Any updates on how to prevent this?
I think this should fix https://github.com/google/cadvisor/issues/1573
@jaylinski does it work for you?
@fahimeh2010 I can't tell yet. I'm waiting for the fix to be released in a stable image (https://hub.docker.com/r/google/cadvisor/tags/).
Hi folks,
I found this thread and I had the same issue:
level=error msg="Handler for GET /containers/7586102c883e87f63588bf45a761edaf39f3338e63c1646fbc4e39a8534bccaa/json returned error: No such container: 7586102c883e87f63588bf45a761edaf39f3338e63c1646fbc4e39a8534bccaa"
Over 24 hours I had 28,800 error messages, of course not all with the same container ID :P. I run cAdvisor as a container (cadvisor:latest); I switched to cadvisor:canary (March 3, 2017) and it fixed my problem.
No more error logs :)
I'm very happy, so THANKS a lot for your work!
I know the issue is closed, but for other people like me who find this thread: this fix works!
Do you have any idea when this fix will be included in the stable release?
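In case it helps anyone else, this is roughly how I did the swap. The run flags below follow the lines of cAdvisor's quick-start; adjust them to your own setup:
# Pull the canary build that contains the fix:
docker pull google/cadvisor:canary
# Replace the running cAdvisor container:
docker rm -f cadvisor
docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  google/cadvisor:canary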
I'm running google/cadvisor:v0.25.0 and got the same issue:
I0425 08:24:04.857137 1 factory.go:115] Factory "docker" was unable to handle container "/system.slice/run-docker-netns-a05ba53671e3.mount"
I0425 08:24:04.857152 1 factory.go:108] Factory "systemd" can handle container "/system.slice/run-docker-netns-a05ba53671e3.mount", but ignoring.
I0425 08:24:04.857169 1 manager.go:867] ignoring container "/system.slice/run-docker-netns-a05ba53671e3.mount"
I0425 08:24:04.857415 1 factory.go:115] Factory "docker" was unable to handle container "/system.slice/var-lib-docker-containers-3cb6d9ece7247f29df946bba4735753092be13089a3f3c30552f54e0bdfa3171-shm.mount"
docker version:
Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   d5236f0
 Built:        Fri Mar 31 02:09:07 2017
 OS/Arch:      linux/amd64
Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   d5236f0
 Built:        Fri Mar 31 02:09:07 2017
 OS/Arch:      linux/amd64
docker info:
Containers: 38
Running: 38
Paused: 0
Stopped: 0
Images: 36
Server Version: 1.12.6
Storage Driver: overlay2
Backing Filesystem: extfs
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge null overlay host
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options: seccomp selinux
Kernel Version: 4.9.16-coreos-r1
Operating System: Container Linux by CoreOS 1298.7.0 (Ladybug)
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 4.722 GiB
Name: coreos_k8s2_33
ID: NVAQ:W7FV:VJFC:XEI7:GCWR:PCKX:LDCU:TWJ3:Y65U:CSPX:N4IL:343R
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Insecure Registries:
 127.0.0.0/8
@ntquyen Your logs don't have anything suspicious. People were being spammed with:
level=error msg="Handler for GET /containers/CONTAINER_UID/json returned error: No such container: CONTAINER_UID"
It is expected that .mount cgroups are ignored.
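In other words, the fix teaches the docker handler to skip cgroups that are really systemd mount units instead of asking dockerd about them. A minimal sketch of that distinction (illustrative only, not cAdvisor's actual code; the paths are taken from the logs in this thread):
for cg in /system.slice/run-docker-netns-a05ba53671e3.mount \
          /docker/012467134e116e692a6218b7534b78b826f0f6ef3dc2d99c99956e43f66277ac; do
  case "$cg" in
    *.mount) echo "ignore $cg (systemd mount unit, not a container)" ;;
    *)       echo "query dockerd for $cg" ;;
  esac
done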