I've got an unknown 706M of image/container data.
➜  podman git:(master) ✗ podman images -a
➜  podman git:(master) ✗ podman ps -a
➜  podman git:(master) ✗ ls -l ~/.local/share/containers/storage/vfs/dir
total 4
drwxr-xr-x. 21 anatoli anatoli 4096 Dec  2 10:50 97ba8f52abc877f267dce6b6767a63a5fe549ab6b7b49cfac219a94631ca357a
➜  podman git:(master) ✗ du -hs ~/.local/share/containers/storage/vfs/dir
706M    /home/anatoli/.local/share/containers/storage/vfs/dir
What is that and how to clean it?
Output of podman version:
Version: 0.10.1.3
Go Version: go1.10.4
OS/Arch: linux/amd64
Output of podman info:
host:
  BuildahVersion: 1.5-dev
  Conmon:
    package: podman-0.10.1.3-1.gitdb08685.fc28.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.12.0-dev, commit: 4a03e555ee4105308fa4d814d4ce02a059af0b7f-dirty'
  Distribution:
    distribution: fedora
    version: "28"
  MemFree: 1200979968
  MemTotal: 8053104640
  OCIRuntime:
    package: runc-1.0.0-57.dev.git9e5aa74.fc28.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 8109944832
  SwapTotal: 8196714496
  arch: amd64
  cpus: 4
  hostname: localhost
  kernel: 4.19.4-200.fc28.x86_64
  os: linux
  uptime: 27h 59m 18.37s (Approximately 1.12 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ContainerStore:
    number: 0
  GraphDriverName: vfs
  GraphOptions: []
  GraphRoot: /home/anatoli/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 0
  RunRoot: /run/user/1000/run
You might want to try switching to the fuse-overlayfs graph driver. You'll need to compile the fuse-overlayfs binary and install it somewhere in your $PATH, but it has massive advantages over the default vfs driver in terms of storage used.
This definitely sounds like it could be a bug, though. Will take a look tomorrow.
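For reference, the switch amounts to a small storage.conf change. A minimal sketch, assuming a default rootless setup and that the fuse-overlayfs binary lives at /usr/bin/fuse-overlayfs (note that overwriting an existing storage.conf would discard any local settings):

```shell
# Point rootless storage at the overlay driver backed by fuse-overlayfs.
# The paths below are the rootless defaults, not read from this system.
mkdir -p "$HOME/.config/containers"
cat > "$HOME/.config/containers/storage.conf" <<'EOF'
[storage]
driver = "overlay"

[storage.options]
mount_program = "/usr/bin/fuse-overlayfs"
EOF
grep -q 'driver = "overlay"' "$HOME/.config/containers/storage.conf" && echo configured
```

This mirrors the storage.conf that newer packages generate on first run, as shown later in this thread.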
It is standard dnf install podman on Fedora 28. Maybe I should create an issue to make it the default?
On Fedora, it should actually be installed by default, so you can swap without compiling it.
We should be swapping it to the default driver next release, but this will not take effect for existing Podman users unless you deliberately delete your Podman storage directory - it's not possible to upgrade without deleting all your containers/images locally and starting from scratch, and we don't want to do that to people without warning.
I can remove images no problem. What should I remove to clean up from the previous storage type, and what to change in config for fuse-overlayfs?
fuse-overlayfs was not installed with podman.
➜  sudo dnf info fuse-overlayfs
Last metadata expiration check: 1:55:01 ago on Mon 03 Dec 2018 06:04:34 AM +03.
Available Packages
Name : fuse-overlayfs
Version : 0.1
@abitrolly we made fuse-overlayfs the new default if it is found the first time podman runs.
What operations have you done to get that dangling dir? I've pulled a bunch of images, created some containers but once I rmi all the images the ~/.local/share/containers/storage/ directory size goes down to almost 0.
I don't know what caused it. I noticed that podman doesn't detect containers created by buildah, so maybe some other tool created it. How to check?
Should I just remove podman, kill ~/.local/share/containers and see if it will pick up fuse-overlayfs? But then I would have to install it manually.
Yes, could you try that?
I cleaned up everything with buildah and podman and there are again dangling dirs in vfs. Most of them are layers, but not all.
➜  containers tree -L 4 --du -h -F
.
└── [ 2.4M]  storage/
    ├── [ 540K]  libpod/
    │   └── [ 536K]  bolt_state.db
    ├── [ 4.0K]  mounts/
    ├── [   64]  storage.lock
    ├── [ 4.0K]  tmp/
    ├── [  48K]  vfs/
    │   └── [  44K]  dir/
    │       ├── [ 4.0K]  41c002c8a6fd36397892dc6dc36813aaa1be3298be4de93e4fe1f40b9c358d99/
    │       ├── [ 4.0K]  48537522172bb666b539c0bb112b3343855afce42963b31dfaa0b3b53750201d/
    │       ├── [ 4.0K]  6f819b502fc5ae3d6b5a4a9e375498bc3260eb9495d7809435690eb75b2ab0df/
    │       ├── [ 4.0K]  7bf218aaed23be7efa2e7596c9dcd907e303f3ad34bb920bad9766d50e475dc4/
    │       ├── [ 4.0K]  97ba8f52abc877f267dce6b6767a63a5fe549ab6b7b49cfac219a94631ca357a/
    │       ├── [ 4.0K]  a28f17d44f1af5b31b1a2a1f916e09b8d54a6ec2e1d86998090040d5cbc52f6c/
    │       ├── [ 4.0K]  ed6e6988d1b88f0d9e2227b9302bd635e1ae157baa8a528343dd7c1c3b9ae89e/
    │       └── [ 4.0K]  ee3e1cf5907451bbbd77349279e7794aa1ca29859a06bfbf37d1dcca4a25b34d/
    ├── [ 4.1K]  vfs-containers/
    │   ├── [    2]  containers.json
    │   └── [   64]  containers.lock
    ├── [ 4.1K]  vfs-images/
    │   ├── [    2]  images.json
    │   └── [   64]  images.lock
    └── [ 1.8M]  vfs-layers/
        ├── [ 368K]  41c002c8a6fd36397892dc6dc36813aaa1be3298be4de93e4fe1f40b9c358d99.tar-split.gz
        ├── [  363]  48537522172bb666b539c0bb112b3343855afce42963b31dfaa0b3b53750201d.tar-split.gz
        ├── [  348]  7bf218aaed23be7efa2e7596c9dcd907e303f3ad34bb920bad9766d50e475dc4.tar-split.gz
        ├── [ 1.4M]  a28f17d44f1af5b31b1a2a1f916e09b8d54a6ec2e1d86998090040d5cbc52f6c.tar-split.gz
        ├── [ 1.6K]  ed6e6988d1b88f0d9e2227b9302bd635e1ae157baa8a528343dd7c1c3b9ae89e.tar-split.gz
        ├── [ 1.6K]  ee3e1cf5907451bbbd77349279e7794aa1ca29859a06bfbf37d1dcca4a25b34d.tar-split.gz
        ├── [ 2.5K]  layers.json
        └── [   64]  layers.lock

 2.4M used in 17 directories, 14 files
I don't know the command to clean up those layers, so for now I remove them manually. That was podman 0.10.1.3.
Interesting, I cannot remove them.
...
rm: cannot remove 'storage/vfs/dir/ed6e6988d1b88f0d9e2227b9302bd635e1ae157baa8a528343dd7c1c3b9ae89e/run/systemd/netif/links': Permission denied
rm: cannot remove 'storage/vfs/dir/ee3e1cf5907451bbbd77349279e7794aa1ca29859a06bfbf37d1dcca4a25b34d/run/systemd/netif/leases': Permission denied
rm: cannot remove 'storage/vfs/dir/ee3e1cf5907451bbbd77349279e7794aa1ca29859a06bfbf37d1dcca4a25b34d/run/systemd/netif/links': Permission denied
$ ls -la storage/vfs/dir/ee3e1cf5907451bbbd77349279e7794aa1ca29859a06bfbf37d1dcca4a25b34d/run/systemd/netif/links
total 8
drwxr-xr-x. 2 100100 100102 4096 Nov 13 17:04 .
drwxr-xr-x. 4 100100 100102 4096 Nov 13 17:04 ..
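For context, those odd numeric owners (100100/100102) are subordinate IDs from the rootless user-namespace mapping, which is why a plain rm as the unprivileged user fails. A small sketch of the arithmetic, assuming a typical /etc/subuid base of 100000 (the actual base on any given system may differ):

```shell
# A file created by UID 100 inside a rootless container is stored on
# the host as subuid_base + 100. The base here is an assumed value,
# not read from /etc/subuid.
subuid_base=100000
container_uid=100
host_uid=$((subuid_base + container_uid))
echo "$host_uid"  # prints 100100, matching the owner in the ls output

# Later podman releases can re-enter the same user namespace to delete
# such files without sudo (not available in 0.10.x):
#   podman unshare rm -rf ~/.local/share/containers/storage/vfs/dir
```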
I removed those dirs with sudo and attached the storage dir in case you need to inspect it.
I am just going to close this, since the recommended way is now to use fuse-overlayfs.
@rhatdan I reinstalled podman with a clean config and it still uses vfs.
$ dnf remove podman
$ rm -rf $HOME/.local/share/containers
$ rm -rf $HOME/.config/containers
$ dnf install podman
...
podman x86_64 1:1.1.2-1.git0ad9b6b.fc29 updates 9.6 M
Installing dependencies:
containernetworking-plugins x86_64 0.7.4-2.fc29 updates 13 M
runc x86_64 2:1.0.0-68.dev.git6635b4f.fc29 updates 2.3 M
Installing weak dependencies:
fuse-overlayfs x86_64 0.3-4.dev.gitea72572.fc29 updates 47 k
slirp4netns x86_64 0.3-0.alpha.2.git30883b5.fc29 updates 71 k
...
$ podman info
store:
  ConfigFile: /home/anatoli/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: vfs
  GraphOptions:
  - overlay.mount_program=/usr/bin/fuse-overlayfs
  GraphRoot: /home/anatoli/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 0
  RunRoot: /run/user/1000
  VolumePath: /home/anatoli/.local/share/containers/storage/volumes
@giuseppe Ideas?
not really, there should not be anything under /run that could cause the wrong driver.
@abitrolly what does /home/anatoli/.config/containers/storage.conf look like once it is created?
[storage]
driver = "overlay"
runroot = "/run/user/1000"
graphroot = "/home/anatoli/.local/share/containers/storage"
[storage.options]
mount_program = "/usr/bin/fuse-overlayfs"
But..
$ podman run --rm -d -p 10000:10000 envoyproxy/envoy:latest
ERRO[0000] User-selected graph driver "overlay" overwritten by graph driver "vfs" from database - delete libpod local files to resolve
ERRO[0000] User-selected graph driver "overlay" overwritten by graph driver "vfs" from database - delete libpod local files to resolve
Error: error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]
there are two different issues here, one with slirp4netns that cannot bind the specified port on the host. Is that port already used?
Also, from your comments it seems like $HOME/.local/share/containers was deleted just before running info, although vfs is still stored in the database.
Can you also share your libpod.conf and try to rm -rf the static_dir specified there?
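On the port question, one generic way to check whether something on the host already listens on TCP 10000 (a sketch assuming iproute2's `ss` is available; `lsof -i :10000` would work too):

```shell
# Report whether any listener is already bound to TCP port 10000.
if ss -tln 2>/dev/null | grep -q ':10000'; then
  echo "port 10000 is already in use"
else
  echo "port 10000 appears free"
fi
```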
Just to comment, or confirm: rm'ing or mv'ing the bolt_state.db in the static_dir solves the issue, exactly as the error message suggests:
ERRO[0000] User-selected graph driver "overlay" overwritten by graph driver "vfs" from database - delete libpod local files to resolve
_delete libpod local files to resolve_
which podman files should be deleted?
Generally speaking, the entirety of containers/storage's root directory (GraphRoot in podman info, defaults to $HOME/.local/share/containers/storage)
It would be cleverer to provide a command to handle this forced cleanup process, which could be mentioned in the error message, since this would avoid ambiguity and guesswork on the user's end.
Especially as the command would also know the exact location that needs to be cleaned, instead of a 'generally speaking' or assumed location. Right?
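For what it's worth, later Podman releases did grow such a command, `podman system reset`, which wipes containers, images, volumes and the libpod database in one step (not available in the versions discussed here). A rough manual equivalent for the rootless layout, with the default path as an assumption to be checked against GraphRoot in `podman info`:

```shell
# Default rootless GraphRoot; verify against `podman info` before use.
graphroot="${XDG_DATA_HOME:-$HOME/.local/share}/containers/storage"
echo "would remove: $graphroot"
# rm -rf "$graphroot"   # the actual destructive step, left commented out
```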
podman now uses fuse-overlayfs, and the original issue is fixed. I don't see any dangling dirs anymore.
$ podman info
...
GraphDriverName: overlay