Podman: [rootless] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied

Created on 27 Sep 2018  ·  48 Comments  ·  Source: containers/podman

Description

A rootless container run sees the following error in the debug output:

WARN[0030] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied
DEBU[0030] Cleaning up container c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86
DEBU[0030] Network is already cleaned up, skipping...
DEBU[0030] unmounted container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86"
ERRO[0030] error reading container (probably exited) json message: EOF

Full output below:

[vagrant@vanilla-rawhide-atomic srv]$ alias cass='podman --log-level debug run --rm -ti -v ${PWD}:/srv/ ${COREOS_ASSEMBLER_CONFIG_GIT:+-v  $COREOS_ASSEMBLER_CONFIG_GIT:/srv/src/config/:ro} ${COREOS_ASSEMBLER_GIT:+-v $COREOS_ASSEMBLER_GIT/src/:/usr/lib/coreos-assembler/:ro} --workdir /srv --device /dev/kvm ca'                    
[vagrant@vanilla-rawhide-atomic srv]$
[vagrant@vanilla-rawhide-atomic srv]$ cass init
INFO[0000] running as rootless
DEBU[0000] Not configuring container store
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
DEBU[0000] Initializing boltdb state at /var/home/vagrant/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Set libpod namespace to ""
WARN[0000] AppArmor security is not available in rootless mode
DEBU[0000] User mount /srv:/srv/ options []
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/fedora-coreos-config/:/srv/src/config/ options [ro]
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/coreos-assembler//src/:/usr/lib/coreos-assembler/ options [ro]
DEBU[0000] Using bridge netmode
DEBU[0000] User mount /srv:/srv/ options []
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/fedora-coreos-config/:/srv/src/config/ options [ro]
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/coreos-assembler//src/:/usr/lib/coreos-assembler/ options [ro]
DEBU[0000] Adding mount /proc
DEBU[0000] Adding mount /dev
DEBU[0000] Adding mount /dev/shm
DEBU[0000] Adding mount /dev/mqueue
DEBU[0000] Adding mount /sys
DEBU[0000] Adding mount /dev/pts
DEBU[0000] Adding mount /sys/fs/cgroup
DEBU[0000] Adding mount /run
DEBU[0000] Adding mount /run/lock
DEBU[0000] Adding mount /sys/fs/cgroup/systemd
DEBU[0000] Adding mount /tmp
DEBU[0000] Adding mount /var/log/journal
INFO[0000] running as rootless
DEBU[0000] [graphdriver] trying provided driver "vfs"
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
DEBU[0000] Initializing boltdb state at /var/home/vagrant/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Set libpod namespace to ""
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]docker.io/library/ca:latest"
DEBU[0000] reference "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]docker.io/library/ca:latest" does not resolve to an image ID
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]docker.io/library/ca:latest"
DEBU[0000] reference "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]docker.io/library/ca:latest" does not resolve to an image ID
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]localhost/ca:latest"
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] exporting opaque data as blob "sha256:30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] exporting opaque data as blob "sha256:30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
WARN[0000] AppArmor security is not available in rootless mode
DEBU[0000] Using bridge netmode
DEBU[0000] User mount /srv:/srv/ options []
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/fedora-coreos-config/:/srv/src/config/ options [ro]
DEBU[0000] User mount /var/sharedfolder/code/github.com/coreos/coreos-assembler//src/:/usr/lib/coreos-assembler/ options [ro]
DEBU[0000] Adding mount /proc
DEBU[0000] Adding mount /dev
DEBU[0000] Adding mount /dev/shm
DEBU[0000] Adding mount /dev/mqueue
DEBU[0000] Adding mount /sys
DEBU[0000] Adding mount /dev/pts
DEBU[0000] Adding mount /sys/fs/cgroup
DEBU[0000] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] exporting opaque data as blob "sha256:30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0000] Creating dest directory: /var/home/vagrant/.local/share/containers/storage/vfs/dir/f674ef8be388694a1eb6e793db64ec42f82074b5116d498333f7e5caac20f29c
DEBU[0000] Calling TarUntar(/var/home/vagrant/.local/share/containers/storage/vfs/dir/d35c76dfa49441e23821e2e91c12c629997fa11ce714b110dad956f7cabed6dc, /var/home/vagrant/.local/share/containers/storage/vfs/dir/f674ef8be388694a1eb6e793db64ec42f82074b5116d498333f7e5caac20f29c)                                                       
DEBU[0000] TarUntar(/var/home/vagrant/.local/share/containers/storage/vfs/dir/d35c76dfa49441e23821e2e91c12c629997fa11ce714b110dad956f7cabed6dc /var/home/vagrant/.local/share/containers/storage/vfs/dir/f674ef8be388694a1eb6e793db64ec42f82074b5116d498333f7e5caac20f29c)                                                                
DEBU[0030] created container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86"
DEBU[0030] container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86" has work directory "/var/home/vagrant/.local/share/containers/storage/vfs-containers/c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86/userdata"                                                                                   
DEBU[0030] container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86" has run directory "/run/user/1000/run/vfs-containers/c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86/userdata"                                                                                                                   
DEBU[0030] New container created "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86"
DEBU[0030] container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86" has CgroupParent "/libpod_parent/libpod-c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86"                                                                                                                                         
DEBU[0030] Handling terminal attach
DEBU[0030] mounted container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86" at "/var/home/vagrant/.local/share/containers/storage/vfs/dir/f674ef8be388694a1eb6e793db64ec42f82074b5116d498333f7e5caac20f29c"                                                                                                           
DEBU[0030] Created root filesystem for container c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86 at /var/home/vagrant/.local/share/containers/storage/vfs/dir/f674ef8be388694a1eb6e793db64ec42f82074b5116d498333f7e5caac20f29c                                                                                           
WARN[0030] error mounting secrets, skipping: getting host secret data failed: failed to read secrets from "/usr/share/rhel/secrets": open /usr/share/rhel/secrets: permission denied
DEBU[0030] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0030] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] exporting opaque data as blob "sha256:30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] exporting opaque data as blob "sha256:30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] parsed reference into "[vfs@/var/home/vagrant/.local/share/containers/storage+/run/user/1000/run]@30eacefcd9ead895103afd67a7de48ca0fd72e518d3374797ec2dcdf396e7717"
DEBU[0030] Created OCI spec for container c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86 at /var/home/vagrant/.local/share/containers/storage/vfs-containers/c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86/userdata/config.json                                                                      
DEBU[0030] /usr/libexec/crio/conmon messages will be logged to syslog
DEBU[0030] running conmon: /usr/libexec/crio/conmon      args=[-c c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86 -u c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86 -r /usr/bin/runc -b /var/home/vagrant/.local/share/containers/storage/vfs-containers/c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86/userdata -p /run/user/1000/ru]
WARN[0030] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied
DEBU[0030] Cleaning up container c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86
DEBU[0030] Network is already cleaned up, skipping...
DEBU[0030] unmounted container "c7a9bd763e9ec54d8d4c8c6e2fffda1a403f95dae204535594cbf5fbdd239b86"
ERRO[0030] error reading container (probably exited) json message: EOF

Steps to reproduce the issue:

  1. boot rawhide VM

  2. rootless podman build -t ca with the Dockerfile/context from: https://github.com/dustymabe/coreos-assembler/tree/7cd95023aa0d7f6ccee2e57f6006e8e9978313f8 (this takes a lot of disk space; see the related buildah issue)

  3. try to run a container using the built image (a consolidated sketch follows this list)
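For convenience, the steps roughly condense to the following sketch (the commit hash is the one linked above, and the image tag ca matches the alias shown earlier; the volume mounts from the alias are omitted here):

# 1. inside the rawhide VM, fetch the Dockerfile/context at the pinned commit
git clone https://github.com/dustymabe/coreos-assembler
cd coreos-assembler
git checkout 7cd95023aa0d7f6ccee2e57f6006e8e9978313f8

# 2. rootless build (this takes a lot of disk space)
podman build -t ca .

# 3. try to run a container from the built image
podman --log-level debug run --rm -ti --workdir /srv --device /dev/kvm ca init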

Describe the results you received:

The container fails to start with the error shown above.

Describe the results you expected:
The container starts without error.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

[vagrant@vanilla-rawhide-atomic srv]$ rpm -q podman
podman-0.9.4-1.dev.gitaf791f3.fc30.x86_64
[vagrant@vanilla-rawhide-atomic srv]$ podman version
Version:       0.9.4-dev
Go Version:    go1.11
OS/Arch:       linux/amd64

Output of podman info:

[vagrant@vanilla-rawhide-atomic srv]$ podman info
host:
  Conmon:
    package: conmon-1.12.0-12.dev.gitc4f232a.fc29.x86_64
    path: /usr/libexec/crio/conmon
    version: 'conmon version 1.12.0-dev, commit: ed74efc8af284f786e041e8a98a910db4b2c0ec7'
  MemFree: 143093760
  MemTotal: 4133531648
  OCIRuntime:
    package: runc-1.0.0-54.dev.git00dc700.fc30.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc5+dev
      commit: b96b63adc3dd5b354bb2a39bb8cc4659f979c0a4
      spec: 1.0.0
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 4
  hostname: vanilla-rawhide-atomic
  kernel: 4.19.0-0.rc5.git0.1.fc30.x86_64
  os: linux
  uptime: 4h 4m 57.82s (Approximately 0.17 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ContainerStore:
    number: 2
  GraphDriverName: vfs
  GraphOptions: []
  GraphRoot: /var/home/vagrant/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 12
  RunRoot: /run/user/1000/run

Additional environment details (AWS, VirtualBox, physical, etc.):

vagrant libvirt rawhide atomic host VM

rootless

Most helpful comment

I have podman embedded in a CI setup and use different log levels tuned by a global setting, but I chose warning as the default, since most warnings point at lingering issues. This one taints the setup, and I had to fall back to the error level.

IMHO, warnings should be emitted for potential errors or misconfigurations, not for functionality that is still under development.

All 48 comments

This might be useful too:

[vagrant@vanilla-rawhide-atomic srv]$ rpm -q systemd kernel
systemd-239-3.fc29.x86_64
kernel-4.19.0-0.rc5.git0.1.fc30.x86_64

The warning is nonfatal and probably unrelated - this looks like conmon exploding as it tries to launch the container.

@giuseppe PTAL

I think this could be fixed by 7ee6bf15738d582e8ef20dc470a824b7ed0e3429 as I am not seeing it with the latest upstream version.

I've tried:

podman run --rm --net=host -ti --privileged --userns=host -v $(pwd):/srv --workdir /srv quay.io/cgwalters/coreos-assembler

The warning is nonfatal and probably unrelated - this looks like conmon exploding as it tries to launch the container.

Please feel free to rename the issue to more accurately reflect the problem.

I think this could be fixed by 7ee6bf1 as I am not seeing it with the latest upstream version.

I've tried:

podman run --rm --net=host -ti --privileged --userns=host -v $(pwd):/srv --workdir /srv quay.io/cgwalters/coreos-assembler

My command didn't have --net=host, --privileged, or --userns=host.

podman --log-level debug run --rm -ti -v ${PWD}:/srv/ ${COREOS_ASSEMBLER_CONFIG_GIT:+-v  $COREOS_ASSEMBLER_CONFIG_GIT:/srv/src/config/:ro} ${COREOS_ASSEMBLER_GIT:+-v $COREOS_ASSEMBLER_GIT/src/:/usr/lib/coreos-assembler/:ro} --workdir /srv --device /dev/kvm ca

I think this could be fixed by 7ee6bf1

Also note that I'm using podman-0.9.4-1.dev.gitaf791f3.fc30.x86_64 from rawhide. Since that commit went into 0.9.3.1, it should be included in what I'm running.

I've copied that from the README. I am not at the computer now to try it out, but I am quite sure it depends on the SELinux change and should be fixed upstream.

Since that commit went into 0.9.3.1 I'm thinking it should be included in what I'm running.

So I'm seeing this issue even with 7ee6bf1.

@mheon asked for any output from the journal during an invocation. Here is what I see:

conmon[24225]: conmon fa8c93cb5c322a4776f8 <ninfo>: addr{sun_family=AF_UNIX, sun_path=/tmp/conmon-term.JW1SPZ}
conmon[24225]: conmon fa8c93cb5c322a4776f8 <ninfo>: about to accept from console_socket_fd: 16
conmon[24225]: conmon fa8c93cb5c322a4776f8 <ninfo>: about to recvfd from connfd: 19
kernel: SELinux:  duplicate or incompatible mount options
conmon[24225]: conmon fa8c93cb5c322a4776f8 <ninfo>: console = {.name = '(null)'; .fd = 0}
conmon[24225]: conmon fa8c93cb5c322a4776f8 <error>: Failed to get console terminal settings Inappropriate ioctl for device

More info: I was using runc-1.0.0-54.dev.git00dc700.fc30.x86_64. I updated to -55 (the very latest in rawhide) and still see the same issue.

Verified locally with a VM. More info: this is not rootless-specific; I managed to reproduce it when running as root.

I checked a few other Podman commands, and it looks like this is not specific to the image in question. The current build of Podman for rawhide is very broken: I get a null-console error when -t is specified, or a tmpfs mount EINVAL if -d is specified.

There's a new build of Podman with a newer bundled conmon in Koji; I will retest once that comes out and see if the issue is resolved. It seems runc could also be involved.

Update: It's SELinux. Probably the same issue as #1564

Sorry, #1560 not #1564

I just started enabling rootless Podman on openSUSE and am hitting the same issue.

Edit: not anymore. It was an issue with runc masking it.

I think I am hitting the same, or at least a similar, issue. In short, I am simply trying to run:

$ podman --log-level debug run --rm -it busybox

Without debug logging it fails with:

ERRO[0000] Error removing container b568d76af699fc15c1c9065763ebc7993d600e843d9c9aff93b9dae73236cfd0 from runtime after creation failed 
container create failed: mkdir /run/runc: permission denied
: internal libpod error


More verbose:

$ podman --log-level debug run --rm -it busybox
INFO[0000] running as rootless                          
DEBU[0000] Not configuring container store              
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist 
DEBU[0000] Initializing boltdb state at /home/ansemjo/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Set libpod namespace to ""                   
WARN[0000] AppArmor security is not available in rootless mode 
DEBU[0000] Using bridge netmode                         
INFO[0000] running as rootless                          
DEBU[0000] [graphdriver] trying provided driver "vfs"   
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist 
DEBU[0000] Initializing boltdb state at /home/ansemjo/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]docker.io/library/busybox:latest" 
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@59788edf1f3e78cd0ebe6ce1446e9d10788225db3dedcfd1a59f764bad2b2690" 
DEBU[0000] exporting opaque data as blob "sha256:59788edf1f3e78cd0ebe6ce1446e9d10788225db3dedcfd1a59f764bad2b2690" 
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@59788edf1f3e78cd0ebe6ce1446e9d10788225db3dedcfd1a59f764bad2b2690" 
DEBU[0000] exporting opaque data as blob "sha256:59788edf1f3e78cd0ebe6ce1446e9d10788225db3dedcfd1a59f764bad2b2690" 
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@59788edf1f3e78cd0ebe6ce1446e9d10788225db3dedcfd1a59f764bad2b2690" 
WARN[0000] AppArmor security is not available in rootless mode 
DEBU[0000] Using bridge netmode                         
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@59788edf1f3e78cd0ebe6ce1446e9d10788225db3dedcfd1a59f764bad2b2690" 
DEBU[0000] exporting opaque data as blob "sha256:59788edf1f3e78cd0ebe6ce1446e9d10788225db3dedcfd1a59f764bad2b2690" 
DEBU[0000] Creating dest directory: /home/ansemjo/.local/share/containers/storage/vfs/dir/c60b05656e85d2414170f93ac622a1f9ec20991e54d2247fe697ea72defd4027 
DEBU[0000] Calling TarUntar(/home/ansemjo/.local/share/containers/storage/vfs/dir/8a788232037eaf17794408ff3df6b922a1aedf9ef8de36afdae3ed0b0381907b, /home/ansemjo/.local/share/containers/storage/vfs/dir/c60b05656e85d2414170f93ac622a1f9ec20991e54d2247fe697ea72defd4027) 
DEBU[0000] TarUntar(/home/ansemjo/.local/share/containers/storage/vfs/dir/8a788232037eaf17794408ff3df6b922a1aedf9ef8de36afdae3ed0b0381907b /home/ansemjo/.local/share/containers/storage/vfs/dir/c60b05656e85d2414170f93ac622a1f9ec20991e54d2247fe697ea72defd4027) 
DEBU[0000] created container "507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d" 
DEBU[0000] container "507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d" has work directory "/home/ansemjo/.local/share/containers/storage/vfs-containers/507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d/userdata" 
DEBU[0000] container "507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d" has run directory "/run/user/1000/run/vfs-containers/507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d/userdata" 
DEBU[0000] New container created "507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d" 
DEBU[0000] container "507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d" has CgroupParent "/libpod_parent/libpod-507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d" 
DEBU[0000] Handling terminal attach                     
DEBU[0000] mounted container "507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d" at "/home/ansemjo/.local/share/containers/storage/vfs/dir/c60b05656e85d2414170f93ac622a1f9ec20991e54d2247fe697ea72defd4027" 
DEBU[0000] Created root filesystem for container 507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d at /home/ansemjo/.local/share/containers/storage/vfs/dir/c60b05656e85d2414170f93ac622a1f9ec20991e54d2247fe697ea72defd4027 
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@59788edf1f3e78cd0ebe6ce1446e9d10788225db3dedcfd1a59f764bad2b2690" 
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@59788edf1f3e78cd0ebe6ce1446e9d10788225db3dedcfd1a59f764bad2b2690" 
DEBU[0000] exporting opaque data as blob "sha256:59788edf1f3e78cd0ebe6ce1446e9d10788225db3dedcfd1a59f764bad2b2690" 
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@59788edf1f3e78cd0ebe6ce1446e9d10788225db3dedcfd1a59f764bad2b2690" 
DEBU[0000] exporting opaque data as blob "sha256:59788edf1f3e78cd0ebe6ce1446e9d10788225db3dedcfd1a59f764bad2b2690" 
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@59788edf1f3e78cd0ebe6ce1446e9d10788225db3dedcfd1a59f764bad2b2690" 
DEBU[0000] Created OCI spec for container 507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d at /home/ansemjo/.local/share/containers/storage/vfs-containers/507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d/userdata/config.json 
DEBU[0000] /usr/libexec/crio/conmon messages will be logged to syslog 
DEBU[0000] running conmon: /usr/libexec/crio/conmon      args=[-c 507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d -u 507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d -r /usr/bin/runc -b /home/ansemjo/.local/share/containers/storage/vfs-containers/507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d/userdata -p /run/user/1000/run/vfs-containers/507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d/userdata/pidfile -l /home/ansemjo/.local/share/containers/storage/vfs-containers/507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket -t --log-level debug --syslog]
WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied 
DEBU[0000] Received container pid: -1                   
ERRO[0000] Error removing container 507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d from runtime after creation failed 
DEBU[0000] Cleaning up container 507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d 
DEBU[0000] Network is already cleaned up, skipping...   
DEBU[0000] unmounted container "507119f01470388c24b2f592dd7ed8f24249af9c095419243af40bacaef2478d" 
ERRO[0000] container create failed: mkdir /run/runc: permission denied
: internal libpod error 

System information:

$ podman version
Version:       0.10.1.3
Go Version:    go1.11.1
OS/Arch:       linux/amd64

$ podman info

host:
  BuildahVersion: 1.5-dev
  Conmon:
    package: Unknown
    path: /usr/libexec/crio/conmon
    version: 'conmon version 1.11.6, commit: 2d0f8c787abdfc18644e921983482f47f1a2f814'
  Distribution:
    distribution: arch
    version: unknown
  MemFree: 1744924672
  MemTotal: 8260292608
  OCIRuntime:
    package: Unknown
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc5+dev
      commit: 69663f0bd4b60df09991c08812a60108003fa340
      spec: 1.0.0
  SwapFree: 4294963200
  SwapTotal: 4294963200
  arch: amd64
  cpus: 4
  hostname: thinkmett
  kernel: 4.18.16-arch1-1-ARCH
  os: linux
  uptime: 34m 32.75s
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ContainerStore:
    number: 26
  GraphDriverName: vfs
  GraphOptions: []
  GraphRoot: /home/ansemjo/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 3
  RunRoot: /run/user/1000/run

$ uname -a
Linux thinkmett 4.18.16-arch1-1-ARCH #1 SMP PREEMPT Sat Oct 20 22:06:45 UTC 2018 x86_64 GNU/Linux

(this is Arch Linux and podman was installed via aur/libpod)

$ pacman -Q podman cri-o runc
libpod 0.10.1.3-3
cri-o 1.11.6-1
runc 1.0.0rc5+19+g69663f0b-1

I tried setting selinux = false in /etc/crio/crio.conf but it did not change anything.

I'm not sure why podman even tries to do anything in /run/runc. Running the command with sudo works fine, so it _generally_ works:

$ sudo podman run --rm -it busybox
[sudo] password for ansemjo: 
/ #
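For what it's worth, a quick way to see which state directory is in play (a sketch; /run/runc is runc's root-owned default state directory, and a rootless runc is expected to fall back to $XDG_RUNTIME_DIR/runc, which is an assumption based on newer runc behavior):

$ ls -ld /run/runc 2>/dev/null || echo "/run/runc does not exist"
$ echo "$XDG_RUNTIME_DIR"              # typically /run/user/1000
$ ls -ld "$XDG_RUNTIME_DIR/runc" 2>/dev/null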

Hm. It's Arch, so I don't believe you have SELinux kernel support (it won't be in enforcing mode, regardless) - so that's out.

It's failing to create /run/runc, which sounds to me like a runc problem. Your runc commit is fairly old (March), and a lot of rootless work has gone in since then. Can you try building runc from source (a sketch follows) and see if that fixes things?
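A minimal from-source build sketch, assuming Go and the libseccomp headers are installed:

$ git clone https://github.com/opencontainers/runc
$ cd runc
$ make
$ sudo make install          # installs to /usr/local/sbin/runc by default
$ runc --version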

Sure. I was wondering about that as well. But it seems to be the latest "release", so I figured it should be fine.

Right now I rebuilt podman from source and inserted some good old fmt.Printf() debugging to see whether it uses GetRootlessRuntimeDir() in pkg/util/utils.go at all, and sure enough it does:

...
GETTING ROOTLESS RUNTIME DIR
RUNTIME DIR: /run/user/1000
ERRO[0000] Error removing container ...

I'll try runc next.

Using runc-git from the AUR and symlinking it with ln -s /usr/local/sbin/runc /usr/sbin/runc, because the package installs to /usr/local ~~for some reason~~ (which is the upstream default but not in my podman's PATH), I get a different error, but I do get a shell inside the container:

$ podman run --rm -it busybox
ERRO[0000] could not find slirp4netns, the network namespace won't be configured: exec: "slirp4netns": executable file not found in $PATH 
/ #

So it was runc in my case too. Maybe it's time for an rc6?

Edit: I previously tried to run a rootless container per the instructions and that worked with the older version as well, in case that helps.

Slirp4netns is needed to provide networking inside the container - if it's not available, rootless containers won't have network access. You can install it from https://github.com/rootless-containers/slirp4netns (I don't think it has Arch packages yet, but maybe in the AUR?). If you don't need network access, you can ignore the error.
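A from-source install and sanity check might look like this (a sketch; slirp4netns builds with autotools, and the --version flag is assumed here):

$ git clone https://github.com/rootless-containers/slirp4netns
$ cd slirp4netns
$ ./autogen.sh && ./configure && make
$ sudo make install
$ slirp4netns --version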

~~I don't think it has an AUR package either: search for "slirp OR netns". Either way, installation from source looks simple enough.~~ I have added a package.

Thank you!


Edit, in case anyone else stumbles over this: after installing slirp4netns I kept checking for network connectivity with ping 1.1.1.1, which consistently had 100% packet loss. That is, until I read somewhere that pings won't work from inside the namespace, but simply curl-ing something will. Unfortunately I did not save the link, but I believe you needed to add some sysctl config on the host to allow ICMP packets.

You can do podman run --net host ... and it will use the host network.

So can I close this issue, @dustymabe?

rootless ping requires sudo sh -c "echo 0 2147483647 > /proc/sys/net/ipv4/ping_group_range"
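To make that setting survive reboots, the usual sysctl drop-in works (a sketch; the file name is arbitrary):

$ echo 'net.ipv4.ping_group_range = 0 2147483647' | sudo tee /etc/sysctl.d/99-rootless-ping.conf
$ sudo sysctl --system       # reload all sysctl configuration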

@AkihiroSuda Aha! Yes, that was the setting I meant. Thanks!

@rhatdan as far as I'm concerned this issue might be closed. And if I read @vrothberg correctly, he also fixed his issue by updating runc.

Oh, now I am hitting a similar (or the same?) issue again, I believe. I am trying to run linuxserver/airsonic in a rootless container and I'm seeing similar symptoms.



podman info

$ podman info
host:
  BuildahVersion: 1.5-dev
  Conmon:
    package: Unknown
    path: /usr/libexec/crio/conmon
    version: 'conmon version 1.11.6, commit: 2d0f8c787abdfc18644e921983482f47f1a2f814'
  Distribution:
    distribution: arch
    version: unknown
  MemFree: 1871491072
  MemTotal: 8260214784
  OCIRuntime:
    package: Unknown
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc5+dev
      commit: 15b24b70df05b18bb93762b607507751fe9b4104
      spec: 1.0.1-dev
  SwapFree: 4291817472
  SwapTotal: 4294963200
  arch: amd64
  cpus: 4
  hostname: thinkmett
  kernel: 4.18.16-arch1-1-ARCH
  os: linux
  uptime: 2h 5m 30.35s (Approximately 0.08 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ContainerStore:
    number: 4
  GraphDriverName: vfs
  GraphOptions: []
  GraphRoot: /home/ansemjo/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 13
  RunRoot: /run/user/1000/run

Note that I just updated runc again, so I am not running an old version this time. The output also shows conmon version 1.11.6; as I was writing this, I updated to 1.11.8, but it did not fix the issue either.

Running with -it, -v ..., and --net host:

$ podman run --rm -it --name airsonic -v /home/ansemjo/Music/:/music --net host linuxserver/airsonic
error reading container (probably exited) json message: EOF
$ echo $?
127

Running with only -it and debug logging:

$ podman --log-level debug run --rm -it linuxserver/airsonic
INFO[0000] running as rootless                          
DEBU[0000] Not configuring container store              
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist 
DEBU[0000] Initializing boltdb state at /home/ansemjo/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Set libpod namespace to ""                   
WARN[0000] AppArmor security is not available in rootless mode 
DEBU[0000] Using bridge netmode                         
INFO[0000] running as rootless                          
DEBU[0000] [graphdriver] trying provided driver "vfs"   
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist 
DEBU[0000] Initializing boltdb state at /home/ansemjo/.local/share/containers/storage/libpod/bolt_state.db 
...
DEBU[0003] Created OCI spec for container 3e03422ad112d5c3036731a0a30261bccd6cf1f07c5ef687198127310696d057 at /home/ansemjo/.local/share/containers/storage/vfs-containers/3e03422ad112d5c3036731a0a30261bccd6cf1f07c5ef687198127310696d057/userdata/config.json 
DEBU[0003] /usr/libexec/crio/conmon messages will be logged to syslog 
DEBU[0003] running conmon: /usr/libexec/crio/conmon      args=[-c 3e03422ad112d5c3036731a0a30261bccd6cf1f07c5ef687198127310696d057 -u 3e03422ad112d5c3036731a0a30261bccd6cf1f07c5ef687198127310696d057 -r /usr/bin/runc -b /home/ansemjo/.local/share/containers/storage/vfs-containers/3e03422ad112d5c3036731a0a30261bccd6cf1f07c5ef687198127310696d057/userdata -p /run/user/1000/run/vfs-containers/3e03422ad112d5c3036731a0a30261bccd6cf1f07c5ef687198127310696d057/userdata/pidfile -l /home/ansemjo/.local/share/containers/storage/vfs-containers/3e03422ad112d5c3036731a0a30261bccd6cf1f07c5ef687198127310696d057/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket -t --log-level debug --syslog]
WARN[0003] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied 
DEBU[0003] Cleaning up container 3e03422ad112d5c3036731a0a30261bccd6cf1f07c5ef687198127310696d057 
DEBU[0003] Network is already cleaned up, skipping...   
DEBU[0003] unmounted container "3e03422ad112d5c3036731a0a30261bccd6cf1f07c5ef687198127310696d057" 
ERRO[0003] error reading container (probably exited) json message: EOF 

Output from journalctl -f during the above command:

Nov 05 13:40:30 thinkmett conmon[22167]: conmon 3e03422ad112d5c30367 <ninfo>: addr{sun_family=AF_UNIX, sun_path=/tmp/conmon-term.7UH1RZ}
Nov 05 13:40:30 thinkmett conmon[22167]: conmon 3e03422ad112d5c30367 <ninfo>: about to accept from console_socket_fd: 16
Nov 05 13:40:30 thinkmett conmon[22167]: conmon 3e03422ad112d5c30367 <ninfo>: about to recvfd from connfd: 19
Nov 05 13:40:30 thinkmett conmon[22167]: conmon 3e03422ad112d5c30367 <ninfo>: console = {.name = '(null)'; .fd = 0}
Nov 05 13:40:30 thinkmett conmon[22167]: conmon 3e03422ad112d5c30367 <error>: Failed to get console terminal settings Inappropriate ioctl for device

Running without the -it:

$ podman --log-level debug run --rm linuxserver/airsonic
...
DEBU[0002] running conmon: /usr/libexec/crio/conmon      args=[-c a34e0b2e1ab1b1d994f935b6e0b45645db213cd7d0f805c300b3605c5cd10e75 -u a34e0b2e1ab1b1d994f935b6e0b45645db213cd7d0f805c300b3605c5cd10e75 -r /usr/bin/runc -b /home/ansemjo/.local/share/containers/storage/vfs-containers/a34e0b2e1ab1b1d994f935b6e0b45645db213cd7d0f805c300b3605c5cd10e75/userdata -p /run/user/1000/run/vfs-containers/a34e0b2e1ab1b1d994f935b6e0b45645db213cd7d0f805c300b3605c5cd10e75/userdata/pidfile -l /home/ansemjo/.local/share/containers/storage/vfs-containers/a34e0b2e1ab1b1d994f935b6e0b45645db213cd7d0f805c300b3605c5cd10e75/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog]
WARN[0002] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied 
DEBU[0002] Received container pid: -1                   
DEBU[0002] Cleaning up container a34e0b2e1ab1b1d994f935b6e0b45645db213cd7d0f805c300b3605c5cd10e75 
DEBU[0002] Network is already cleaned up, skipping...   
DEBU[0002] unmounted container "a34e0b2e1ab1b1d994f935b6e0b45645db213cd7d0f805c300b3605c5cd10e75" 
ERRO[0002] container create failed: container_linux.go:337: starting container process caused "process_linux.go:403: container init caused \"rootfs_linux.go:58: mounting \\\"/sys/fs/cgroup/systemd/libpod_parent/libpod-a34e0b2e1ab1b1d994f935b6e0b45645db213cd7d0f805c300b3605c5cd10e75\\\" to rootfs \\\"/home/ansemjo/.local/share/containers/storage/vfs/dir/7417ae767b02870c735dfe1fd3cac9576072d0a230ff51643137399543db121b\\\" at \\\"/sys/fs/cgroup/systemd\\\" caused \\\"stat /sys/fs/cgroup/systemd/libpod_parent/libpod-a34e0b2e1ab1b1d994f935b6e0b45645db213cd7d0f805c300b3605c5cd10e75: no such file or directory\\\"\""
: internal libpod error 

Output from journalctl -f during the above command:

Nov 05 13:56:00 thinkmett conmon[29845]: conmon 26795b034d46218c8ba4 <error>: Failed to create container: exit status 1

Since it tries to mount something from /sys/fs/cgroup/systemd/libpod_parent/, here is a directory listing:

$ ll /sys/fs/cgroup/systemd/
total 0
drwxr-xr-x  2 root root 0 Nov  5 13:50 init.scope/
drwxr-xr-x  2 root root 0 Nov  5 13:50 machine.slice/
drwxr-xr-x 55 root root 0 Nov  5 13:50 system.slice/
drwxr-xr-x  3 root root 0 Nov  5 13:47 user.slice/
-rw-r--r--  1 root root 0 Nov  5 13:31 cgroup.clone_children
-rw-r--r--  1 root root 0 Nov  5 13:31 cgroup.procs
-r--r--r--  1 root root 0 Nov  5 13:31 cgroup.sane_behavior
-rw-r--r--  1 root root 0 Nov  5 13:31 notify_on_release
-rw-r--r--  1 root root 0 Nov  5 13:31 release_agent
-rw-r--r--  1 root root 0 Nov  5 13:31 tasks
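That listing explains the warning: every entry under /sys/fs/cgroup/systemd/ is owned by root, so an unprivileged mkdir there fails exactly the way conmon's attempt does:

$ mkdir /sys/fs/cgroup/systemd/libpod_parent
mkdir: cannot create directory '/sys/fs/cgroup/systemd/libpod_parent': Permission denied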

All this time, simple alpine containers work flawlessly:

$ podman run --rm -it alpine date
Mon Nov  5 12:53:23 UTC 2018

Can you run with --log-level=debug and check syslog for any messages from conmon?


I included output from journalctl -f right below the command execution blocks. Or do you mean a specific file in /var/log?

Also, you might want to run conmon yourself using the binary and args listed by debug logging. Make sure the container is mounted first with podman mount, then run the given conmon binary with the long set of args. That might produce a better error message.

This seems to be runc erroring on our OCI config, and knowing exactly where will help us fix it.


It does not seem that running conmon directly gives any further output:

podman create

$ podman --log-level debug create -it --net host linuxserver/airsonic
INFO[0000] running as rootless                          
DEBU[0000] Not configuring container store              
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist 
DEBU[0000] Initializing boltdb state at /home/ansemjo/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Set libpod namespace to ""                   
WARN[0000] AppArmor security is not available in rootless mode 
DEBU[0000] Using host netmode                           
INFO[0000] running as rootless                          
DEBU[0000] [graphdriver] trying provided driver "vfs"   
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist 
DEBU[0000] Initializing boltdb state at /home/ansemjo/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]docker.io/linuxserver/airsonic:latest" 
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0000] exporting opaque data as blob "sha256:c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0000] exporting opaque data as blob "sha256:c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
WARN[0000] AppArmor security is not available in rootless mode 
DEBU[0000] Using host netmode                           
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0000] exporting opaque data as blob "sha256:c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0000] Creating dest directory: /home/ansemjo/.local/share/containers/storage/vfs/dir/6da4220c8d4973346d0d35e9f16d637acd467c92d4f6766968e740b000d8a7fb 
DEBU[0000] Calling TarUntar(/home/ansemjo/.local/share/containers/storage/vfs/dir/2d61c52ae09fa1cf64e31f7c19a4f9a41b460ae55d801b0812c954cd0feaba84, /home/ansemjo/.local/share/containers/storage/vfs/dir/6da4220c8d4973346d0d35e9f16d637acd467c92d4f6766968e740b000d8a7fb) 
DEBU[0000] TarUntar(/home/ansemjo/.local/share/containers/storage/vfs/dir/2d61c52ae09fa1cf64e31f7c19a4f9a41b460ae55d801b0812c954cd0feaba84 /home/ansemjo/.local/share/containers/storage/vfs/dir/6da4220c8d4973346d0d35e9f16d637acd467c92d4f6766968e740b000d8a7fb) 
DEBU[0003] created container "22762d04c7b281abf8cce345bc296be32b3852ad2a08d5d2a0ad0349bd782498" 
DEBU[0003] container "22762d04c7b281abf8cce345bc296be32b3852ad2a08d5d2a0ad0349bd782498" has work directory "/home/ansemjo/.local/share/containers/storage/vfs-containers/22762d04c7b281abf8cce345bc296be32b3852ad2a08d5d2a0ad0349bd782498/userdata" 
DEBU[0003] container "22762d04c7b281abf8cce345bc296be32b3852ad2a08d5d2a0ad0349bd782498" has run directory "/run/user/1000/run/vfs-containers/22762d04c7b281abf8cce345bc296be32b3852ad2a08d5d2a0ad0349bd782498/userdata" 
DEBU[0003] New container created "22762d04c7b281abf8cce345bc296be32b3852ad2a08d5d2a0ad0349bd782498" 
22762d04c7b281abf8cce345bc296be32b3852ad2a08d5d2a0ad0349bd782498

podman mount

$ podman --log-level debug mount 22762d04c7b281abf8cce345bc296be32b3852ad2a08d5d2a0ad0349bd782498
INFO[0000] running as rootless                          
DEBU[0000] [graphdriver] trying provided driver "vfs"   
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist 
DEBU[0000] Initializing boltdb state at /home/ansemjo/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] mounted container "22762d04c7b281abf8cce345bc296be32b3852ad2a08d5d2a0ad0349bd782498" at "/home/ansemjo/.local/share/containers/storage/vfs/dir/6da4220c8d4973346d0d35e9f16d637acd467c92d4f6766968e740b000d8a7fb" 
/home/ansemjo/.local/share/containers/storage/vfs/dir/6da4220c8d4973346d0d35e9f16d637acd467c92d4f6766968e740b000d8a7fb

And finally the conmon command:

$ /usr/libexec/crio/conmon \
  -c 22762d04c7b281abf8cce345bc296be32b3852ad2a08d5d2a0ad0349bd782498 \
  -u 22762d04c7b281abf8cce345bc296be32b3852ad2a08d5d2a0ad0349bd782498 \
  -r /usr/bin/runc -b /home/ansemjo/.local/share/containers/storage/vfs-containers/22762d04c7b281abf8cce345bc296be32b3852ad2a08d5d2a0ad0349bd782498/userdata \
  -p /run/user/1000/run/vfs-containers/22762d04c7b281abf8cce345bc296be32b3852ad2a08d5d2a0ad0349bd782498/userdata/pidfile \
  -l /home/ansemjo/.local/share/containers/storage/vfs-containers/22762d04c7b281abf8cce345bc296be32b3852ad2a08d5d2a0ad0349bd782498/userdata/ctr.log \
  --exit-dir /run/user/1000/libpod/tmp/exits \
  --socket-dir-path /run/user/1000/libpod/tmp/socket \
  --log-level debug --syslog

This was the only output in journalctl -f during that command:

Nov 05 15:06:43 thinkmett conmon[2344]: conmon 22762d04c7b281abf8cc <error>: Failed to create container: exit status 1

conmon exited with status 0 and printed nothing on the terminal.


Shouldn't there be a config.json in the bundle directory, or am I misunderstanding something?

$ ll ~/.local/share/containers/storage/vfs-containers/$ID/userdata/
total 0
drwxr-xr-x 1 ansemjo users 26 Nov  5 15:16 artifacts/
drwx------ 1 ansemjo users  0 Nov  5 15:16 shm/
-rw------- 1 ansemjo users  0 Nov  5 15:22 ctr.log

I must be missing something. Starting a simple alpine container manually does not seem to work either:

$ ID=$(podman create alpine date)
$ podman start -a $ID
Mon Nov  5 14:50:41 UTC 2018
$ podman mount $ID
/home/ansemjo/.local/share/containers/storage/vfs/dir/656d895766bcc4adefed57e07cc2686bae1f826315c61a62db8b5e6ae22d3101
$ /usr/libexec/crio/conmon -c $ID -u $ID -r /usr/bin/runc -b /home/ansemjo/.local/share/containers/storage/vfs-containers/$ID/userdata -p /run/user/1000/run/vfs-containers/$ID/userdata/pidfile -l /home/ansemjo/.local/share/containers/storage/vfs-containers/$ID/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog

Again, conmon exits without any errors and the journal only contains:

Nov 05 15:50:45 thinkmett conmon[5768]: conmon 1cb5b5dc6ca2b5bdaee8 <error>: Failed to create container: exit status 1

Oops - I don't think I was clear about what I was asking, sorry.

Proper procedure:

  1. podman --log-level=debug run a container and note the conmon command line, as you did above. It should fail with this issue.
  2. podman mount the container
  3. Run the conmon command from step 1. Ideally it should output something to stdout/stderr from runc.
  4. podman unmount the container

To be clear, step 1 should be a podman run, so your conmon command line comes from the container you created/started/mounted. The four steps are consolidated in the sketch below.
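
Putting those steps together as a sketch (testctr is just a placeholder name here; the conmon command line must be copied verbatim from your own debug output):

$ podman --log-level=debug run --name testctr alpine date
$ podman mount testctr
$ # paste the "running conmon: ..." command line from step 1 here, verbatim;
$ # it should now print runc's actual error to stdout/stderr
$ podman unmount testctr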

Still, no luck:

⦁ ansemjo @thinkmett ~ $ podman --log-level debug run linuxserver/airsonic
INFO[0000] running as rootless                          
DEBU[0000] Not configuring container store              
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist 
DEBU[0000] Initializing boltdb state at /home/ansemjo/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Set libpod namespace to ""                   
WARN[0000] AppArmor security is not available in rootless mode 
DEBU[0000] Using bridge netmode                         
INFO[0000] running as rootless                          
DEBU[0000] [graphdriver] trying provided driver "vfs"   
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist 
DEBU[0000] Initializing boltdb state at /home/ansemjo/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]docker.io/linuxserver/airsonic:latest" 
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0000] exporting opaque data as blob "sha256:c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0000] exporting opaque data as blob "sha256:c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
WARN[0000] AppArmor security is not available in rootless mode 
DEBU[0000] Using bridge netmode                         
DEBU[0000] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0000] exporting opaque data as blob "sha256:c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0000] Creating dest directory: /home/ansemjo/.local/share/containers/storage/vfs/dir/86a02bae89bfcfa3aed623ff337c696da41525129aed7151723f4f0ce92e2761 
DEBU[0000] Calling TarUntar(/home/ansemjo/.local/share/containers/storage/vfs/dir/2d61c52ae09fa1cf64e31f7c19a4f9a41b460ae55d801b0812c954cd0feaba84, /home/ansemjo/.local/share/containers/storage/vfs/dir/86a02bae89bfcfa3aed623ff337c696da41525129aed7151723f4f0ce92e2761) 
DEBU[0000] TarUntar(/home/ansemjo/.local/share/containers/storage/vfs/dir/2d61c52ae09fa1cf64e31f7c19a4f9a41b460ae55d801b0812c954cd0feaba84 /home/ansemjo/.local/share/containers/storage/vfs/dir/86a02bae89bfcfa3aed623ff337c696da41525129aed7151723f4f0ce92e2761) 
DEBU[0003] created container "90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a" 
DEBU[0004] container "90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a" has work directory "/home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata" 
DEBU[0004] container "90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a" has run directory "/run/user/1000/run/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata" 
DEBU[0004] New container created "90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a" 
DEBU[0004] container "90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a" has CgroupParent "/libpod_parent/libpod-90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a" 
DEBU[0004] Not attaching to stdin                       
DEBU[0004] mounted container "90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a" at "/home/ansemjo/.local/share/containers/storage/vfs/dir/86a02bae89bfcfa3aed623ff337c696da41525129aed7151723f4f0ce92e2761" 
DEBU[0004] Created root filesystem for container 90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a at /home/ansemjo/.local/share/containers/storage/vfs/dir/86a02bae89bfcfa3aed623ff337c696da41525129aed7151723f4f0ce92e2761 
DEBU[0004] /etc/system-fips does not exist on host, not mounting FIPS mode secret 
DEBU[0004] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0004] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0004] exporting opaque data as blob "sha256:c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0004] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0004] exporting opaque data as blob "sha256:c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0004] parsed reference into "[vfs@/home/ansemjo/.local/share/containers/storage+/run/user/1000/run]@c7085c58bbff31cf3644073d61cc2e1ea1187ab79b2e281d7b8ce7d93febafcb" 
DEBU[0004] Creating dest directory: /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/volumes/config 
DEBU[0004] Calling TarUntar(/home/ansemjo/.local/share/containers/storage/vfs/dir/86a02bae89bfcfa3aed623ff337c696da41525129aed7151723f4f0ce92e2761/config, /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/volumes/config) 
DEBU[0004] TarUntar(/home/ansemjo/.local/share/containers/storage/vfs/dir/86a02bae89bfcfa3aed623ff337c696da41525129aed7151723f4f0ce92e2761/config /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/volumes/config) 
DEBU[0004] Creating dest directory: /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/volumes/media 
DEBU[0004] Calling TarUntar(/home/ansemjo/.local/share/containers/storage/vfs/dir/86a02bae89bfcfa3aed623ff337c696da41525129aed7151723f4f0ce92e2761/media, /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/volumes/media) 
DEBU[0004] TarUntar(/home/ansemjo/.local/share/containers/storage/vfs/dir/86a02bae89bfcfa3aed623ff337c696da41525129aed7151723f4f0ce92e2761/media /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/volumes/media) 
DEBU[0004] Creating dest directory: /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/volumes/music 
DEBU[0004] Calling TarUntar(/home/ansemjo/.local/share/containers/storage/vfs/dir/86a02bae89bfcfa3aed623ff337c696da41525129aed7151723f4f0ce92e2761/music, /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/volumes/music) 
DEBU[0004] TarUntar(/home/ansemjo/.local/share/containers/storage/vfs/dir/86a02bae89bfcfa3aed623ff337c696da41525129aed7151723f4f0ce92e2761/music /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/volumes/music) 
DEBU[0004] Creating dest directory: /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/volumes/playlists 
DEBU[0004] Calling TarUntar(/home/ansemjo/.local/share/containers/storage/vfs/dir/86a02bae89bfcfa3aed623ff337c696da41525129aed7151723f4f0ce92e2761/playlists, /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/volumes/playlists) 
DEBU[0004] TarUntar(/home/ansemjo/.local/share/containers/storage/vfs/dir/86a02bae89bfcfa3aed623ff337c696da41525129aed7151723f4f0ce92e2761/playlists /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/volumes/playlists) 
DEBU[0004] Creating dest directory: /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/volumes/podcasts 
DEBU[0004] Calling TarUntar(/home/ansemjo/.local/share/containers/storage/vfs/dir/86a02bae89bfcfa3aed623ff337c696da41525129aed7151723f4f0ce92e2761/podcasts, /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/volumes/podcasts) 
DEBU[0004] TarUntar(/home/ansemjo/.local/share/containers/storage/vfs/dir/86a02bae89bfcfa3aed623ff337c696da41525129aed7151723f4f0ce92e2761/podcasts /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/volumes/podcasts) 
DEBU[0004] Created OCI spec for container 90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a at /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/config.json 
DEBU[0004] /usr/libexec/crio/conmon messages will be logged to syslog 
DEBU[0004] running conmon: /usr/libexec/crio/conmon      args=[-c 90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a -u 90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a -r /usr/bin/runc -b /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata -p /run/user/1000/run/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/pidfile -l /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog]
WARN[0004] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied 
DEBU[0004] Received container pid: -1                   
DEBU[0004] Cleaning up container 90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a 
DEBU[0004] Network is already cleaned up, skipping...   
DEBU[0004] unmounted container "90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a" 
ERRO[0004] container create failed: container_linux.go:337: starting container process caused "process_linux.go:403: container init caused \"rootfs_linux.go:58: mounting \\\"/sys/fs/cgroup/systemd/libpod_parent/libpod-90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a\\\" to rootfs \\\"/home/ansemjo/.local/share/containers/storage/vfs/dir/86a02bae89bfcfa3aed623ff337c696da41525129aed7151723f4f0ce92e2761\\\" at \\\"/sys/fs/cgroup/systemd\\\" caused \\\"stat /sys/fs/cgroup/systemd/libpod_parent/libpod-90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a: no such file or directory\\\"\""
: internal libpod error 
127 ansemjo @thinkmett ~ $ podman ps -a
CONTAINER ID   IMAGE                                   COMMAND   CREATED          STATUS    PORTS   NAMES               IS INFRA
90cf2badd9ae   docker.io/linuxserver/airsonic:latest   /init     25 seconds ago   Created           friendly_brattain   false
⦁ ansemjo @thinkmett ~ $ podman mount 90cf2badd9ae
/home/ansemjo/.local/share/containers/storage/vfs/dir/86a02bae89bfcfa3aed623ff337c696da41525129aed7151723f4f0ce92e2761
⦁ ansemjo @thinkmett ~ $ /usr/libexec/crio/conmon -c 90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a -u 90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a -r /usr/bin/runc -b /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata -p /run/user/1000/run/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/pidfile -l /home/ansemjo/.local/share/containers/storage/vfs-containers/90cf2badd9aeab1a70dc4821aee4039ab70eaac08121d8aeafcf8e5935ca810a/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog
⦁ ansemjo @thinkmett ~ $ echo $?
0

And the journal output:

⦁ ansemjo @thinkmett ~ $ journalctl -f | grep conmon
Nov 05 16:49:35 thinkmett conmon[8928]: conmon 90cf2badd9aeab1a70dc <error>: Failed to create container: exit status 1
Nov 05 16:50:30 thinkmett conmon[9022]: conmon 90cf2badd9aeab1a70dc <error>: Failed to create container: exit status 1
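
Incidentally, the cgroup half of the failure is easy to reproduce by hand: as an unprivileged user, the mkdir from the warning above is simply not permitted (a quick check, using the exact path from the warning):

$ mkdir /sys/fs/cgroup/systemd/libpod_parent
mkdir: cannot create directory '/sys/fs/cgroup/systemd/libpod_parent': Permission denied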

Could this be something specific to rootless + systemd?

I tried running the same container through sudo and it fires up fine. The only difference in the conmon commands that I can see is the added -s flag:

DEBU[0040] running conmon: /usr/libexec/crio/conmon      args=[-s -c 034a5bb57e99dedccea324ca66886d97ab78d54634833b712eb17a8e48107e6e -u 034a5bb57e99dedccea324ca66886d97ab78d54634833b712eb17a8e48107e6e -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/034a5bb57e99dedccea324ca66886d97ab78d54634833b712eb17a8e48107e6e/userdata -p /var/run/containers/storage/overlay-containers/034a5bb57e99dedccea324ca66886d97ab78d54634833b712eb17a8e48107e6e/userdata/pidfile -l /var/lib/containers/storage/overlay-containers/034a5bb57e99dedccea324ca66886d97ab78d54634833b712eb17a8e48107e6e/userdata/ctr.log --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket --log-level debug --syslog]

Furthermore, I tried running the centos image and firing up a bash shell and systemd init. The shell works fine, but when I run (rootless) $ podman run centos /sbin/init I get the exact same symptoms.

Yeah, it has to be rootless-specific. It could be specific to the way runc handles running without root...

Actually, do we expect systemd to start in rootless cases, given it will not have the ability to manage cgroups? @rhatdan @giuseppe

Looking at the Dockerfiles for linuxserver/airsonic, it uses their Ubuntu base image, which uses the s6 overlay and /init as an entrypoint. So while this is not systemd, it calls something named init, which might trigger some sort of systemd-specific code?

Testing this theory, I put together a very simple Dockerfile, which has the same shell script copied to different names:

$ cat test
#!/bin/sh
echo "RUNNING AS $0"
date
exec bash
$ cat Dockerfile 
FROM ubuntu:xenial

COPY test /init
COPY test /init2
COPY test /noinit
COPY test /test
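
This was built into an image named testinit (presumably with the usual invocation):

$ podman build -t testinit .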

And indeed, it fails as soon as I use the /init entrypoint:

$ podman run --rm -it testinit /test
RUNNING AS /test
Mon Nov  5 16:34:40 UTC 2018
root@c8f9e6d1aa7d:/# exit
$ podman run --rm -it testinit /noinit
RUNNING AS /noinit
Mon Nov  5 16:34:50 UTC 2018
root@ee7e45bd8c68:/# exit
$ podman run --rm -it testinit /init2
RUNNING AS /init2
Mon Nov  5 16:34:55 UTC 2018
root@de73dade3c56:/# exit
$ podman run --rm -it testinit /init
error reading container (probably exited) json message: EOF

Yup. Starting the container with bash as an entrypoint like this works fine:

$ podman run --rm -it --entrypoint bash linuxserver/airsonic /init
/init: line 12: export: `/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin': not a valid identifier
[s6-init] making user provided files available at /var/run/s6/etc...exited 0.
[s6-init] ensuring user provided files have correct perms...exited 0.
[fix-attrs.d] applying ownership & permissions fixes...
[fix-attrs.d] done.
[...]
[services.d] starting services
[services.d] done.
           _                       _          
     /\   (_)                     (_)         
    /  \   _ _ __  ___  ___  _ __  _  ___     
   / /\ \ | | '__|/ __|/ _ \| '_ \| |/ __|    
  / ____ \| | |   \__ \ (_) | | | | | (__     
 /_/    \_\_|_|   |___/\___/|_| |_|_|\___|    

                        10.1.2-RELEASE

[...]

edit: or simply using podman run --systemd=false ... would have worked as well.
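
For example, on the same image (--systemd=false turns the systemd-specific handling off for a single run):

$ podman run --rm -it --systemd=false linuxserver/airsonic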

Alright, it's our systemd-specific mount handling that's triggering this.

We can fence this off so it only runs if we have root.

(If you actually want rootless systemd containers, they will probably not work, but at least we can give a better error message...)

@mheon, thanks for tracking this down.

Indeed it seems to be caused by the systemd code. @ansemjo, you could try to disable it with --systemd=false. This should be the default for rootless containers; I'll prepare a patch.

@mheon @ansemjo patch here: https://github.com/containers/libpod/pull/1761

From a quick test, systemd boots in the rootless container:

$ bin/podman run --rm -it docker.io/fedora /sbin/init
systemd 238 running in system mode. (+PAM +AUDIT +SELINUX +IMA -APPARMOR +SMACK +SYSVINIT +UTMP +LIBCRYPTSETUP +GCRYPT +GNUTLS +ACL +XZ +LZ4 +SECCOMP +BLKID +ELFUTILS +KMOD +IDN2 -IDN +PCRE2 default-hierarchy=hybrid)
Detected virtualization container-other.
Detected architecture x86-64.

Welcome to Fedora 28 (Twenty Eight)!

Set hostname to <helium>.
Initializing machine ID from random generator.
Couldn't move remaining userspace processes, ignoring: Input/output error
File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  OK  ] Listening on Journal Socket (/dev/log).
[  OK  ] Started Forward Password Requests to Wall Directory Watch.
[  OK  ] Listening on Process Core Dump Socket.
[  OK  ] Listening on /dev/initctl Compatibility Named Pipe.
[  OK  ] Started Dispatch Password Requests to Console Directory Watch.
[  OK  ] Reached target Slices.
[  OK  ] Reached target Swap.
[  OK  ] Reached target Remote File Systems.
[  OK  ] Listening on Journal Socket.
         Starting Journal Service...
         Starting Create System Users...
[  OK  ] Reached target Paths.
[  OK  ] Reached target Local File Systems.
         Starting Rebuild Dynamic Linker Cache...
         Starting Rebuild Journal Catalog...
[  OK  ] Started Journal Service.
[  OK  ] Started Create System Users.
         Starting Flush Journal to Persistent Storage...
[  OK  ] Started Rebuild Journal Catalog.
[  OK  ] Started Flush Journal to Persistent Storage.
         Starting Create Volatile Files and Directories...
[  OK  ] Started Rebuild Dynamic Linker Cache.
         Starting Update is Completed...
[  OK  ] Started Update is Completed.
[  OK  ] Started Create Volatile Files and Directories.
         Starting Update UTMP about System Boot/Shutdown...
[  OK  ] Started Update UTMP about System Boot/Shutdown.
[  OK  ] Reached target System Initialization.
[  OK  ] Listening on D-Bus System Message Bus Socket.
[  OK  ] Reached target Sockets.
[  OK  ] Started dnf makecache timer.
[  OK  ] Reached target Basic System.
         Starting Permit User Sessions...
[  OK  ] Started D-Bus System Message Bus.
[  OK  ] Started Daily Cleanup of Temporary Directories.
[  OK  ] Reached target Timers.
[  OK  ] Started Permit User Sessions.
[  OK  ] Reached target Multi-User System.
         Starting Update UTMP about System Runlevel Changes...
[  OK  ] Started Update UTMP about System Runlevel Changes.
         Unmounting /var/log/journal...
[  OK  ] Stopped target Multi-User System.
         Stopping D-Bus System Message Bus...
[  OK  ] Stopped target Timers.
         Stopping Permit User Sessions...
[  OK  ] Stopped Daily Cleanup of Temporary Directories.
[  OK  ] Stopped D-Bus System Message Bus.

I believe this is fixed now.

Sorry, this appears again on podman version 1.2.0-dev when running rootless: podman --log-level debug run -it --rm --read-only busybox sh

WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied

Output of podman info --debug:

debug:
  compiler: gc
  git commit: 91373151335a84c5b78dbe46362cb678807d7509
  go version: go1.11.5
  podman version: 1.2.0-dev
host:
  BuildahVersion: 1.7.1
  Conmon:
    package: Unknown
    path: /usr/libexec/podman/conmon
    version: 'conmon version 1.14.0-dev, commit: 8de2ffc9e0809e98a72b93e35b74b8e5d4b0c29c'
  Distribution:
    distribution: debian
    version: "9"
  MemFree: 32108892160
  MemTotal: 38205562880
  OCIRuntime:
    package: Unknown
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc6+dev
      commit: f79e211b1d5763d25fb8debda70a764ca86a0f23
      spec: 1.0.1-dev
  SwapFree: 8586784768
  SwapTotal: 8586784768
  arch: amd64
  cpus: 8
  hostname: pgsql
  kernel: 4.19.0-0.bpo.2-amd64
  os: linux
  rootless: false
  uptime: 26h 48m 2.64s (Approximately 1.08 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
store:
  ConfigFile: /etc/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: overlay
  GraphOptions: null
  GraphRoot: /var/lib/containers/storage
  GraphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  ImageStore:
    number: 0
  RunRoot: /var/run/containers/storage
  VolumePath: /var/lib/containers/storage/volumes

> Sorry, this appears again on podman version 1.2.0-dev when running rootless: podman --log-level debug run -it --rm --read-only busybox sh

It is only a warning, and I think it is fine to keep it. With cgroup v2 we will be able to delegate subtrees to unprivileged users, so having a way to distinguish whether or not we were able to join the cgroup will be helpful.
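
As a quick check, the cgroup version of a host can be read off the filesystem type of /sys/fs/cgroup (cgroup2fs means pure v2; tmpfs means v1 or the hybrid layout):

$ stat -fc %T /sys/fs/cgroup/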

Hi Giuseppe,
this (cosmetic) warning does cause confusion for customers, who take it to be the cause of later failures (which have different causes).
I find myself needing to lower the log level on some podman operations to ease adoption.
Could we use a different priority or wording, or smarter detection that skips the warning when cgroup v2 is not available?

Are you seeing WARN level logs by default? That shouldn't be the case;
Podman has always defaulted to ERROR level logging.

I have podman embedded in a CI setup and use different log levels tuned by a global setting, but I opted for warning as the default, since most warnings point to lingering issues. This one is tainting the setup, and I had to go back to the error level.
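
A minimal sketch of that kind of wrapper (PODMAN_LOG_LEVEL is a hypothetical name for the global setting):

$ cat podman-ci.sh
#!/bin/sh
# Run podman with a globally configured log level, defaulting to error.
exec podman --log-level "${PODMAN_LOG_LEVEL:-error}" "$@"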

Warnings should be reserved for potential errors or misconfigurations IMHO, not for future development.
