Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
I hit this bug after a power outage.
podman run --name nextcloud fedora
error creating container storage: the container name "nextcloud" is already in use by "31d772490bd4f63019908d896a8d5d62ce4f7db78162db4c8faab60725dbd4e1". You have to remove that container to be able to reuse that name.: that name is already in use
podman ps --all | grep nextcloud has no output
Steps to reproduce the issue:
I don't know how to reproduce it; it appeared after a power outage and the resulting abrupt shutdown.
Output of podman version:
host:
  BuildahVersion: 1.6-dev
  Conmon:
    package: podman-1.0.0-1.git82e8011.fc29.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.12.0-dev, commit: 49780a1cf10d572edc4e1ea3b8a8429ce391d47d'
  Distribution:
    distribution: fedora
    version: "29"
  MemFree: 374931456
  MemTotal: 8241008640
  OCIRuntime:
    package: runc-1.0.0-67.dev.git12f6a99.fc29.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc6+dev
      commit: d164d9b08bf7fc96a931403507dd16bced11b865
      spec: 1.0.1-dev
  SwapFree: 8262250496
  SwapTotal: 8380215296
  arch: amd64
  cpus: 4
  hostname: asheville.intranet.zokormazo.info
  kernel: 4.20.6-200.fc29.x86_64
  os: linux
  rootless: false
  uptime: 12h 27m 2.91s (Approximately 0.50 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 6
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
  ImageStore:
    number: 8
  RunRoot: /var/run/containers/storage
Output of podman info --debug:
debug:
  compiler: gc
  git commit: '"49780a1cf10d572edc4e1ea3b8a8429ce391d47d"'
  go version: go1.11.4
  podman version: 1.0.0
host:
  BuildahVersion: 1.6-dev
  Conmon:
    package: podman-1.0.0-1.git82e8011.fc29.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.12.0-dev, commit: 49780a1cf10d572edc4e1ea3b8a8429ce391d47d'
  Distribution:
    distribution: fedora
    version: "29"
  MemFree: 374919168
  MemTotal: 8241008640
  OCIRuntime:
    package: runc-1.0.0-67.dev.git12f6a99.fc29.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc6+dev
      commit: d164d9b08bf7fc96a931403507dd16bced11b865
      spec: 1.0.1-dev
  SwapFree: 8262250496
  SwapTotal: 8380215296
  arch: amd64
  cpus: 4
  hostname: asheville.intranet.zokormazo.info
  kernel: 4.20.6-200.fc29.x86_64
  os: linux
  rootless: false
  uptime: 12h 27m 32.11s (Approximately 0.50 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 6
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
  ImageStore:
    number: 8
  RunRoot: /var/run/containers/storage
Additional environment details (AWS, VirtualBox, physical, etc.):
Bare metal f29
Some more info:
My containers.json in /var/lib/containers/storage/overlay-containers has a reference to this container:
{
  "id": "31d772490bd4f63019908d896a8d5d62ce4f7db78162db4c8faab60725dbd4e1",
  "names": [
    "nextcloud"
  ],
  "image": "dbcf87f7f2897ca0763ece1276172605bd18d00565f0b8a86ecfc2341e62a3f4",
  "layer": "5078a913609383e102745769c42090cb62c878780002adf133dfadf3ca9b0e55",
  "metadata": "{\"image-name\":\"docker.io/library/nextcloud:14.0.3\",\"image-id\":\"dbcf87f7f2897ca0763ece1276172605bd18d00565f0b8a86ecfc2341e62a3f4\",\"name\":\"nextcloud\",\"created-at\":1544648833,\"mountlabel\":\"system_u:object_r:container_file_t:s0:c151,c959\"}",
  "created": "2018-12-12T21:07:13.804209323Z"
}
But podman doesn't know about it, and podman prune doesn't help either.
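A quick way to see which names c/storage still considers taken is to read that containers.json directly and compare against `podman ps --all`. A minimal, self-contained sketch (the helper name and sample file are mine; it scrapes the compact JSON layout with plain grep/sed, and in real use you would point it at /var/lib/containers/storage/overlay-containers/containers.json):

```shell
# List the "id" and "names" fields recorded in a c/storage
# containers.json, so they can be compared with `podman ps --all`.
list_storage_entries() {
  # crude JSON scraping; sufficient for these flat string fields
  grep -o '"id":"[0-9a-f]*"' "$1" | sed 's/"id":"\(.*\)"/\1/'
  grep -o '"names":\["[^"]*"' "$1" | sed 's/"names":\["\(.*\)"/\1/'
}

# Demo against an inline sample shaped like the file quoted above:
cat > /tmp/containers.sample.json <<'EOF'
[{"id":"31d772490bd4f63019908d896a8d5d62ce4f7db78162db4c8faab60725dbd4e1","names":["nextcloud"]}]
EOF
list_storage_entries /tmp/containers.sample.json
# prints the id, then: nextcloud
```

Any name printed here that does not show up in `podman ps --all` is a candidate orphan.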
@Zokormazo I'm no podman dev, but maybe try adding sudo to your command: sudo podman ps --all
I had to run sudo podman run -p 5432:5432 ... because podman 1.0 needed elevated permissions for port bindings (fixed in v1.1). I got confused afterwards by podman ps --all output being empty, but running sudo podman ps --all did the trick.
That container is probably a relic from a partially failed container delete, or was made by Buildah or CRI-O. You should be able to force its removal, even if we don't see it, with podman rm -f
@Zokormazo I'm no podman dev, but maybe try adding sudo to your command: sudo podman ps --all
All my commands were run as root.
That container is probably a relic from a partially failed container delete, or was made by Buildah or CRI-O. You should be able to force its removal, even if we don't see it, with podman rm -f
[root@asheville ~]# podman rm -f nextcloud
unable to find container nextcloud: no container with name or ID nextcloud found: no such container
[root@asheville ~]# podman rm -f 31d772490bd4f63019908d896a8d5d62ce4f7db78162db4c8faab60725dbd4e1
unable to find container 31d772490bd4f63019908d896a8d5d62ce4f7db78162db4c8faab60725dbd4e1: no container with name or ID 31d772490bd4f63019908d896a8d5d62ce4f7db78162db4c8faab60725dbd4e1 found: no such container
Can't remove it with rm -f
Oh, you're on 1.0 - damn. We added that to rm -f in 1.1
If you have Buildah installed, it should be able to remove the container in the meantime - it operates at a lower level than us, and as such can see these containers.
https://paste.fedoraproject.org/paste/qIQ9gu0DF6ZtN8fEwG5pYg
Cleaned with podman 1.1.0
Having the same issue on CentOS 7.6 with podman.x86_64 1.2-2.git3bd528e.el7:
podman run --rm --name container-registry registry does not remove the overlay filesystem on stop. Trying podman rm -f container-registry gives the error "/var/lib/containers/storage/overlay/.../merged device or resource busy"
[root@... lib]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
[root@... lib]# uname -a
Linux ... 3.10.0-957.12.2.el7.x86_64 #1 SMP Tue May 14 21:24:32 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
systemd service:
ExecStartPre=-/usr/bin/podman rm -f container-registry
ExecStart=/usr/bin/podman run --conmon-pidfile=/run/container-registry.pid --rm -p 5000:5000 --name container-registry -v /etc/letsencrypt:/certs -v /opt/docker:/var/lib/registry registry:2.6.2
podman fails at "podman run":
/usr/bin/podman run --conmon-pidfile=/run/container-registry.pid --rm -p 5000:5000 --name container-registry -v /etc/letsencrypt:/certs -v /opt/docker:/var/lib/registry registry:2.6.2
Error: error creating container storage: the container name "container-registry" is already in use by "8dc7a522698ced3d2a0c63a3023ed4ee879d68db4a913829c098d4a260f86cf7". You have to remove that container to be able to reuse that name.: that name is already in use
# podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
# podman rm -f "container-registry"
ERRO[0000] Failed to remove container "container-registry" from storage: remove /var/lib/containers/storage/overlay/fb732b3ba3f35ae237eb334bee09676408a068554a5fed33eff03d46c9ac7655/merged: device or resource busy
[root@... lib]# lsof |grep overlay
# empty
[root@... lib]# mount |grep overlay
# dmesg |grep -i "overlay\|podman\|docker"
[ 18.250705] TECH PREVIEW: Overlay filesystem may not be fully supported.
[root@... lib]# cat /var/lib/containers/storage/overlay-containers/containers.json
[{"id":"8dc7a522698ced3d2a0c63a3023ed4ee879d68db4a913829c098d4a260f86cf7","names":["container-registry"],"image":"d5ef411ad932291d7733fe7188a1515b1db7bd6e69222a13929cdc5315d21fb0","layer":"fb732b3ba3f35ae237eb334bee09676408a068554a5fed33eff03d46c9ac7655","metadata":"{\"image-name\":\"docker.io/library/registry:2.6.2\",\"image-id\":\"d5ef411ad932291d7733fe7188a1515b1db7bd6e69222a13929cdc5315d21fb0\",\"name\":\"container-registry\",\"created-at\":1560743703}","created":"2019-06-17T03:55:03.766579368Z","flags":{"MountLabel":"","ProcessLabel":""}}]
May be related to https://github.com/moby/moby/issues/34198
It seems this was fixed in moby around 2018. For podman, it can be worked around by using "slave" mount propagation on the volumes:
-v /etc/letsencrypt:/certs -v /opt/docker:/var/lib/registry -> -v /etc/letsencrypt:/certs:slave -v /opt/docker:/var/lib/registry:slave
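Applied to the unit above, the ExecStart would look something like this (an untested sketch on my part: the same flags as before, with only the :slave suffixes added):

```ini
ExecStartPre=-/usr/bin/podman rm -f container-registry
ExecStart=/usr/bin/podman run --conmon-pidfile=/run/container-registry.pid --rm \
  -p 5000:5000 --name container-registry \
  -v /etc/letsencrypt:/certs:slave \
  -v /opt/docker:/var/lib/registry:slave \
  registry:2.6.2
```

With slave propagation, mounts made on the host side propagate into the container but not back out, which avoids the container pinning the overlay "merged" directory.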
I have the same issue on Fedora 31 with podman-1.4.4-1.fc30.x86_64. There are no references of this container in containers.json so not sure how to clean it up manually.
I saw this also yesterday; podman-1.4.4-3.fc30 as nonroot; but cannot reproduce it. Virt is still up, with one "container name already in use" stuck. Can provide login access on request.
Try a 'podman rm --storage'.
That did it. Since this seems to be a common problem, should the podman-run message perhaps be amended to include this hint?
Error: error creating container storage: the container name "foo" is already in use by "00fbb9ad28dd0cb32811e87fe789cbed612206a97395420365e3238e9afd2e1e". You have to remove that container to be able to reuse that name.: that name is already in use (hint: if "podman rm foo" doesn't clear things up, try "podman rm --storage foo")
The only issue with recommending it unconditionally is that it will quite happily destroy containers from Buildah/CRI-O as well.
The overall recommendation works something like this: check CRI-O and Buildah to see if the container exists there. If it does, we recommend deleting it through crictl or buildah. If it's not there, it's probably an orphaned container - hit it with podman rm --storage.
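That triage can be sketched as a small shell function. Everything here is hypothetical naming on my part: in real use the two list arguments would come from `buildah containers` and `crictl ps -a` output, which I stub with plain text so the decision logic is self-contained:

```shell
# Decide how to remove a name that is "already in use" but invisible
# to `podman ps --all`, following the triage described above.
triage() {
  name=$1; buildah_list=$2; crictl_list=$3
  if printf '%s\n' "$crictl_list" | grep -qw -- "$name"; then
    echo "in CRI-O: remove with crictl rm $name"
  elif printf '%s\n' "$buildah_list" | grep -qw -- "$name"; then
    echo "in Buildah: remove with buildah rm $name"
  else
    echo "orphan: remove with podman rm --storage $name"
  fi
}

triage nextcloud "" ""        # -> orphan: remove with podman rm --storage nextcloud
triage mybuild "mybuild" ""   # -> in Buildah: remove with buildah rm mybuild
```

The point of checking first is exactly the caveat above: podman rm --storage will happily destroy Buildah/CRI-O containers too.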
podman rm --storage <id> doesn't seem to work for me with the zfs driver though:
# podman ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6b265ecd8ed3 docker.io/library/alpine:latest sh 21 minutes ago Exited (0) 21 minutes ago suspicious_banzai
45de4c6bf843 docker.io/library/alpine:latest sh 27 minutes ago Exited (130) 24 minutes ago optimistic_cerf
96aaa668db27 docker.io/library/alpine:latest sh 39 minutes ago Exited (0) 37 minutes ago magical_hopper
c95e5272d83f docker.io/library/alpine:latest sh 41 minutes ago Exited (0) 41 minutes ago vigorous_khorana
9645695533c7 docker.io/library/alpine:latest sh 42 minutes ago Exited (130) 41 minutes ago crazy_mccarthy
15684becc00a docker.io/library/alpine:latest bash 42 minutes ago Created dreamy_kalam
# podman run --rm --name=prometheus --net=bridge --network container-net -v "/var/container-data/prometheus/data:/prometheus" -v "/var/container-data/prometheus/conf/prometheus.yml:/etc/prometheus/prometheus.yml" -p "10.10.0.1:9090:9090" prom/prometheus
Error: error creating container storage: the container name "prometheus" is already in use by "dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5". You have to remove that container to be able to reuse that name.: that name is already in use
# podman rm -f prometheus
Error: no container with name or ID prometheus found: no such container
# podman rm -f dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5
Error: no container with name or ID dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5 found: no such container
# podman rm --storage dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5
dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5
Error: error removing storage for container "dfb946cb8242c9aace3f2c2833f4c3875d1c2d93315bc7fb5438811f587a9ea5": exit status 1: "/usr/sbin/zfs zfs destroy -r tank/containers/60024e34b354c0274536c32b941f7826742c0579d541de3b5ab30323f2e4c0af" => cannot open 'tank/containers/60024e34b354c0274536c32b941f7826742c0579d541de3b5ab30323f2e4c0af': dataset does not exist
Managed to reproduce the issue accidentally by trying to Ctrl-C a container twice.
^CERRO[0014] Error removing container f9512f7b0b731324f5651e92af7e02910bf35b16d3f373d63fb6ebee27c22d32: error removing container f9512f7b0b731324f5651e92af7e02910bf35b16d3f373d63fb6ebee27c22d32 root filesystem: signal: interrupt: "/usr/sbin/zfs zfs destroy -r tank/containers/4834b4aa97d1a48a27f44c718241c2d786349eee9ab66c3d515339402e2ed1c9" =>
ERRO[0014] Error forwarding signal 2 to container f9512f7b0b731324f5651e92af7e02910bf35b16d3f373d63fb6ebee27c22d32: container has already been removed
And I fixed it with an ugly hack:
# zfs create tank/containers/4834b4aa97d1a48a27f44c718241c2d786349eee9ab66c3d515339402e2ed1c9
# podman rm --storage nginx
Sounds like a bug with the ZFS driver.
You might want to make a new issue for this
It might also be better to have this discussion in containers/storage, since podman just uses that library for management of its container images and graph drivers.
Note: related to https://github.com/containers/libpod/issues/3906