Podman: dangling dir in vfs

Created on 2 Dec 2018 · 20 comments · Source: containers/podman

I've got 706M of unknown image/container data.

➜  podman git:(master) ✗ podman images -a
➜  podman git:(master) ✗ podman ps -a
➜  podman git:(master) ✗ ls -l ~/.local/share/containers/storage/vfs/dir
total 4
drwxr-xr-x. 21 anatoli anatoli 4096 Dec  2 10:50 97ba8f52abc877f267dce6b6767a63a5fe549ab6b7b49cfac219a94631ca357a
➜  podman git:(master) ✗ du -hs ~/.local/share/containers/storage/vfs/dir
706M    /home/anatoli/.local/share/containers/storage/vfs/dir

What is that, and how do I clean it up?

Output of podman version:

Version:       0.10.1.3
Go Version:    go1.10.4
OS/Arch:       linux/amd64

Output of podman info:

host:
  BuildahVersion: 1.5-dev
  Conmon:
    package: podman-0.10.1.3-1.gitdb08685.fc28.x86_64
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.12.0-dev, commit: 4a03e555ee4105308fa4d814d4ce02a059af0b7f-dirty'
  Distribution:
    distribution: fedora
    version: "28"
  MemFree: 1200979968
  MemTotal: 8053104640
  OCIRuntime:
    package: runc-1.0.0-57.dev.git9e5aa74.fc28.x86_64
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 8109944832
  SwapTotal: 8196714496
  arch: amd64
  cpus: 4
  hostname: localhost
  kernel: 4.19.4-200.fc28.x86_64
  os: linux
  uptime: 27h 59m 18.37s (Approximately 1.12 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ContainerStore:
    number: 0
  GraphDriverName: vfs
  GraphOptions: []
  GraphRoot: /home/anatoli/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 0
  RunRoot: /run/user/1000/run


All 20 comments

You might want to try switching to the fuse-overlayfs graph driver. You'll need to compile the fuse-overlayfs binary and install it somewhere in your $PATH, but it has massive advantages over the default vfs driver in terms of storage used.

This definitely sounds like it could be a bug, though. Will take a look tomorrow.
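A minimal sketch of that switch for a rootless user; the build steps assume the autotools-based upstream build of that era, and the install prefix is an assumption, not something Podman mandates:

# Build and install fuse-overlayfs somewhere in $PATH (assumed build steps).
git clone https://github.com/containers/fuse-overlayfs
cd fuse-overlayfs
./autogen.sh && ./configure && make
sudo make install   # installs to /usr/local/bin by default

# Point rootless storage at it; this mirrors the storage.conf shown later
# in this thread (use /usr/bin/fuse-overlayfs if you installed the Fedora
# package instead of building from source).
cat > ~/.config/containers/storage.conf <<'EOF'
[storage]
driver = "overlay"
[storage.options]
mount_program = "/usr/local/bin/fuse-overlayfs"
EOF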


It is a standard dnf install podman on Fedora 28. Maybe I should create an issue to make fuse-overlayfs the default?

On Fedora, it should actually be installed by default, so you can swap without compiling it.

We should be swapping to it as the default driver next release, but this will not take effect for existing Podman users unless you deliberately delete your Podman storage directory. It's not possible to upgrade the driver without deleting all your local containers/images and starting from scratch, and we don't want to do that to people without warning.

I can remove images, no problem. What should I remove to clean up the previous storage type, and what should I change in the config for fuse-overlayfs?

fuse-overlayfs was not installed with podman.

✗ sudo dnf info fuse-overlayfs
Last metadata expiration check: 1:55:01 ago on Mon 03 Dec 2018 06:04:34 AM +03.
Available Packages
Name         : fuse-overlayfs
Version      : 0.1

@abitrolly we made fuse-overlayfs the new default if it is found the first time podman runs.

What operations did you do to get that dangling dir? I've pulled a bunch of images and created some containers, but once I rmi all the images, the ~/.local/share/containers/storage/ directory size goes down to almost 0.
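For reference, a hedged version of that round trip (alpine is just an example image):

podman pull alpine                          # pull an example image
podman run --rm alpine true                 # create and auto-remove a container
podman rmi --all                            # remove every local image
du -hs ~/.local/share/containers/storage    # should drop back to nearly zero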

I don't know what caused it. I noticed that podman doesn't detect containers created by buildah, so maybe some other tool created it. How can I check?

Should I just remove podman, delete ~/.local/share/containers, and see whether it picks up fuse-overlayfs? But then I would have to install that manually.

Yes, could you try that?

I cleaned up everything with buildah and podman, and there are again dangling dirs in vfs. Most of them are layers, but not all.

➜  containers tree -L 4 --du -h -F
.
└── [ 2.4M]  storage/
    ├── [ 540K]  libpod/
    │   └── [ 536K]  bolt_state.db
    ├── [ 4.0K]  mounts/
    ├── [   64]  storage.lock
    ├── [ 4.0K]  tmp/
    ├── [  48K]  vfs/
    │   └── [  44K]  dir/
    │       ├── [ 4.0K]  41c002c8a6fd36397892dc6dc36813aaa1be3298be4de93e4fe1f40b9c358d99/
    │       ├── [ 4.0K]  48537522172bb666b539c0bb112b3343855afce42963b31dfaa0b3b53750201d/
    │       ├── [ 4.0K]  6f819b502fc5ae3d6b5a4a9e375498bc3260eb9495d7809435690eb75b2ab0df/
    │       ├── [ 4.0K]  7bf218aaed23be7efa2e7596c9dcd907e303f3ad34bb920bad9766d50e475dc4/
    │       ├── [ 4.0K]  97ba8f52abc877f267dce6b6767a63a5fe549ab6b7b49cfac219a94631ca357a/
    │       ├── [ 4.0K]  a28f17d44f1af5b31b1a2a1f916e09b8d54a6ec2e1d86998090040d5cbc52f6c/
    │       ├── [ 4.0K]  ed6e6988d1b88f0d9e2227b9302bd635e1ae157baa8a528343dd7c1c3b9ae89e/
    │       └── [ 4.0K]  ee3e1cf5907451bbbd77349279e7794aa1ca29859a06bfbf37d1dcca4a25b34d/
    ├── [ 4.1K]  vfs-containers/
    │   ├── [    2]  containers.json
    │   └── [   64]  containers.lock
    ├── [ 4.1K]  vfs-images/
    │   ├── [    2]  images.json
    │   └── [   64]  images.lock
    └── [ 1.8M]  vfs-layers/
        ├── [ 368K]  41c002c8a6fd36397892dc6dc36813aaa1be3298be4de93e4fe1f40b9c358d99.tar-split.gz
        ├── [  363]  48537522172bb666b539c0bb112b3343855afce42963b31dfaa0b3b53750201d.tar-split.gz
        ├── [  348]  7bf218aaed23be7efa2e7596c9dcd907e303f3ad34bb920bad9766d50e475dc4.tar-split.gz
        ├── [ 1.4M]  a28f17d44f1af5b31b1a2a1f916e09b8d54a6ec2e1d86998090040d5cbc52f6c.tar-split.gz
        ├── [ 1.6K]  ed6e6988d1b88f0d9e2227b9302bd635e1ae157baa8a528343dd7c1c3b9ae89e.tar-split.gz
        ├── [ 1.6K]  ee3e1cf5907451bbbd77349279e7794aa1ca29859a06bfbf37d1dcca4a25b34d.tar-split.gz
        ├── [ 2.5K]  layers.json
        └── [   64]  layers.lock

  2.4M used in 17 directories, 14 files

I don't know the command to clean up those layers, so for now I remove them manually. That was podman 0.10.1.3.
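As an aside for later readers: Podman 0.10.x had no purpose-built command for this, but later releases added ones, so treat the following as a forward-looking sketch rather than something available in the version discussed here:

podman system prune --all   # remove all unused images and stopped containers
podman system reset         # wipe local storage entirely and start fresh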

Interestingly, I cannot remove them.

...
rm: cannot remove 'storage/vfs/dir/ed6e6988d1b88f0d9e2227b9302bd635e1ae157baa8a528343dd7c1c3b9ae89e/run/systemd/netif/links': Permission denied
rm: cannot remove 'storage/vfs/dir/ee3e1cf5907451bbbd77349279e7794aa1ca29859a06bfbf37d1dcca4a25b34d/run/systemd/netif/leases': Permission denied
rm: cannot remove 'storage/vfs/dir/ee3e1cf5907451bbbd77349279e7794aa1ca29859a06bfbf37d1dcca4a25b34d/run/systemd/netif/links': Permission denied

$ ls -la storage/vfs/dir/ee3e1cf5907451bbbd77349279e7794aa1ca29859a06bfbf37d1dcca4a25b34d/run/systemd/netif/links 
total 8
drwxr-xr-x. 2 100100 100102 4096 Nov 13 17:04 .
drwxr-xr-x. 4 100100 100102 4096 Nov 13 17:04 ..
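Those Permission denied errors are expected with rootless storage: the files are owned by subordinate UIDs (100100 above) that are mapped into the user namespace. A hedged alternative to sudo is podman unshare, which was added in later Podman releases (so not available in 0.10.x) and re-enters that namespace, where the subordinate UIDs map back to your own user:

# Remove the dirs from inside the rootless user namespace, where
# UID 100100 maps back to the unprivileged user that owns the storage.
podman unshare rm -rf ~/.local/share/containers/storage/vfs/dir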

I removed those dirs with sudo, and attached the storage dir in case you need to inspect it.

storage.zip

I am just going to close this, since the recommended way is now to use fuse-overlayfs.

@rhatdan I reinstalled podman with a clean config and it still uses vfs.

$ dnf remove podman
$ rm -rf $HOME/.local/share/containers
$ rm -rf $HOME/.config/containers
$ dnf install podman
...
 podman                                            x86_64                       1:1.1.2-1.git0ad9b6b.fc29                             updates                       9.6 M
Installing dependencies:
 containernetworking-plugins                       x86_64                       0.7.4-2.fc29                                          updates                        13 M
 runc                                              x86_64                       2:1.0.0-68.dev.git6635b4f.fc29                        updates                       2.3 M
Installing weak dependencies:
 fuse-overlayfs                                    x86_64                       0.3-4.dev.gitea72572.fc29                             updates                        47 k
 slirp4netns                                       x86_64                       0.3-0.alpha.2.git30883b5.fc29                         updates                        71 k
...
$ podman info
store:
  ConfigFile: /home/anatoli/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: vfs
  GraphOptions:
  - overlay.mount_program=/usr/bin/fuse-overlayfs
  GraphRoot: /home/anatoli/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 0
  RunRoot: /run/user/1000
  VolumePath: /home/anatoli/.local/share/containers/storage/volumes

@giuseppe Ideas?

Not really; there should not be anything under /run that could cause the wrong driver.

@abitrolly what does /home/anatoli/.config/containers/storage.conf look like once it is created?

[storage]
  driver = "overlay"
  runroot = "/run/user/1000"
  graphroot = "/home/anatoli/.local/share/containers/storage"
  [storage.options]
    mount_program = "/usr/bin/fuse-overlayfs"

But...

$ podman run --rm -d -p 10000:10000 envoyproxy/envoy:latest
ERRO[0000] User-selected graph driver "overlay" overwritten by graph driver "vfs" from database - delete libpod local files to resolve 
ERRO[0000] User-selected graph driver "overlay" overwritten by graph driver "vfs" from database - delete libpod local files to resolve 
Error: error from slirp4netns while setting up port redirection: map[desc:bad request: add_hostfwd: slirp_add_hostfwd failed]

There are two different issues here. One is with slirp4netns, which cannot bind the specified port on the host. Is that port already in use?
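For example, a quick check with ss from iproute2 (the port number comes from the failing command above):

ss -tlnp | grep 10000   # list TCP listeners and look for port 10000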

Also, from your comments it seems like $HOME/.local/share/containers was deleted just before running info, yet vfs is still stored in the database.

Can you also share your libpod.conf and try to rm -rf the static_dir specified there?

Just to comment or confirm: rm'ing or mv'ing the bolt_state.db in the static_dir solves the issue, as cleverly reported by:

ERRO[0000] User-selected graph driver "overlay" overwritten by graph driver "vfs" from database - delete libpod local files to resolve

_delete libpod local files to resolve_
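A minimal sketch of that workaround, assuming the default rootless static_dir of GraphRoot/libpod (matching the tree output earlier in this thread):

# Move (rather than delete) libpod's state database so podman
# regenerates it with the configured graph driver on the next run.
mv ~/.local/share/containers/storage/libpod/bolt_state.db \
   ~/.local/share/containers/storage/libpod/bolt_state.db.bak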

delete libpod local files to resolve

which podman files should be deleted?

Generally speaking, the entirety of containers/storage's root directory (GraphRoot in podman info, which defaults to $HOME/.local/share/containers/storage).
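As a concrete, hedged sketch for a rootless user on the default paths (this deletes every local container and image, per the warning earlier in the thread):

# Wipe the containers/storage root; podman recreates it on the next run.
# libpod's state database (libpod/bolt_state.db) lives inside this tree,
# so it is removed as well.
rm -rf ~/.local/share/containers/storage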

It would be cleverer to provide a command to handle this forced cleanup process, which could be mentioned in the error message, since this would avoid ambiguity and guesswork on the user's end.

Especially as the command would also know the exact location that needs to be cleaned, instead of a 'generally speaking' or assumed location. Right?

podman now uses fuse-overlayfs, and the original issue is fixed. I don't see any dangling dirs anymore.

$ podman info
...
  GraphDriverName: overlay