Podman: Rootless podman has new warning message on 1.9.0

Created on 20 Apr 2020 · 15 comments · Source: containers/podman

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Steps to reproduce the issue:

  1. Create a new user account that won't have a systemd login session open for it.

  2. su to the new account and run podman run --rm hello-world

Describe the results you received:
On 1.9.0:

clint@www4:~$ sudo useradd -m testuser
clint@www4:~$ sudo su - testuser
$ podman run --rm hello-world
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1005` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1005` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
Trying to pull docker.io/library/hello-world...
Getting image source signatures
Copying blob 0e03bdcc26d7 done
Copying config bf756fb1ae done
Writing manifest to image destination
Storing signatures

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

Specifically, these new warnings suddenly started appearing after I upgraded from 1.8.2 to 1.9.0, with no other configuration changes.

Describe the results you expected:
On 1.8.2:

clint@www4:~$ sudo useradd -m testuser
clint@www4:~$ sudo su - testuser
$ podman run --rm hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

As you can see, 1.8.2 does not show the warning when running rootless podman as a user without a systemd loginctl session.

Additional information you deem important (e.g. issue happens only occasionally):
This issue only happens on 1.9.0, and only when no systemd loginctl session is present for the user. That is a problem for me because my workflow runs podman from a systemd unit file, and I can't guarantee that a loginctl session will exist for the user in question. I don't know why this warning has suddenly appeared in 1.9.0, but after talking to @mheon, the suspicion is that a default changed in the big configuration file rework merged in that version.
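For context, here is a rough sketch of the kind of unit file I mean; the unit name, account, and image are made up for illustration, not taken from this issue:

[Unit]
Description=Run a rootless podman container for an account with no login session (hypothetical example)
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
# The service account may never log in interactively, so no systemd user
# session (and no loginctl session) is guaranteed to exist for it.
User=testuser
ExecStart=/usr/bin/podman run --rm docker.io/library/hello-world

[Install]
WantedBy=multi-user.target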

Output of podman version:

Version:            1.9.0
RemoteAPI Version:  1
Go Version:         go1.11.6
OS/Arch:            linux/amd64

Output of podman info --debug:

debug:
  compiler: gc
  gitCommit: ""
  goVersion: go1.11.6
  podmanVersion: 1.9.0
host:
  arch: amd64
  buildahVersion: 1.14.8
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.14, commit: '
  cpus: 64
  distribution:
    distribution: debian
    version: "10"
  eventLogger: journald
  hostname: www4
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1003
      size: 1
    - container_id: 1
      host_id: 296608
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1003
      size: 1
    - container_id: 1
      host_id: 296608
      size: 65536
  kernel: 4.19.0-6-amd64
  memFree: 60525170688
  memTotal: 67185127424
  ociRuntime:
    name: runc
    package: 'containerd.io: /usr/bin/runc'
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc10
      commit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
      spec: 1.0.1-dev
  os: linux
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 0.4.3
      commit: unknown
  swapFree: 16000217088
  swapTotal: 16000217088
  uptime: 327h 35m 12.85s (Approximately 13.62 days)
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /home/clint/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: 'fuse-overlayfs: /usr/bin/fuse-overlayfs'
      Version: |-
        fusermount3 version: 3.4.1
        fuse-overlayfs: version 0.7.6
        FUSE library version 3.4.1
        using FUSE kernel interface version 7.27
  graphRoot: /home/clint/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 1
  runRoot: /run/user/1003/containers
  volumePath: /home/clint/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

Listing... Done
podman/unknown,now 1.9.0~2 amd64 [installed]
podman/unknown 1.9.0~2 arm64
podman/unknown 1.9.0~2 armhf
podman/unknown 1.9.0~2 ppc64el

(I installed podman 1.9.0 from the Kubic Debian 10 repository)

Additional environment details (AWS, VirtualBox, physical, etc.):

This is a physical box running Debian 10.

Labels: kind/bug, stale-issue

All 15 comments

On cgroup v1 we should default to cgroupfs.

Do you see the same error if you use podman --cgroup-manager cgroupfs run --rm hello-world?

If that solves the problem, I suggest overriding cgroup_manager to cgroupfs in ~/.config/containers/libpod.conf.
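If it helps, a minimal sketch of that override, assuming the legacy libpod.conf format where cgroup_manager is a top-level key:

# ~/.config/containers/libpod.conf (legacy per-user config, sketch)
# Use the cgroupfs manager instead of systemd for rootless containers.
cgroup_manager = "cgroupfs"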

I have exactly the same issue as described here.
Platform configuration: VM with Debian 10, Linux kernel 4.19.98-1. I had no problems with the previous version, 1.8.2, either. Is there any chance to install 1.8.2 from the same repo (Kubic Debian 10)?

@giuseppe Unfortunately your suggestion of podman --cgroup-manager cgroupfs run --rm hello-world does not fix the issue.

I'm seeing the same on RHEL 8. Bizarrely, --cgroup-manager=cgroupfs seems to be a no-op:

$ podman --cgroup-manager cgroupfs run --rm hello-world
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1000` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1000` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
Trying to pull registry.access.redhat.com/hello-world...
...

I've tried all my usual tricks: loginctl enable-linger, rm -rf /tmp/run-1000 /run/user/1000 (as root), and rm -rf /home/testuser/.{config,local}.
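Spelled out, those attempts look roughly like this (UID 1000 and the testuser home directory come from the output above; the rm commands run as root and are destructive):

# keep a user manager around for the account even without a login session
loginctl enable-linger testuser

# as root: clear stale per-user runtime directories and user-level podman state
rm -rf /tmp/run-1000 /run/user/1000
rm -rf /home/testuser/.{config,local}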

Can you provide your libpod.conf? I suspect this is an issue of containers.conf combined with a legacy, fully-specified libpod.conf and rootless Podman.

So, um, that's the rathole I was just diving into. I can't find a containers.conf anywhere: not in /etc, not in /usr/share/containers. libpod.conf is the default one in /usr/share/containers; there isn't one under ~testuser.

Could this be a dependency thing? What package provides containers.conf? If it's skopeo (containers-common), it hasn't been updated on this RHEL 8 branch, and I wonder if that could be related to the OP's issue.

Creating the following /etc/containers.conf makes the problem go away:

[engine]
cgroup_manager = "cgroupfs"

We should be able to use a legacy libpod.conf if it is present; in fact, it should override containers.conf if both are present.

Growing theory: events_logger was previously ignored (it always behaved as if set to file). We generated a lot of libpod.conf files with it set to journald while the file backend was actually in use. As of 1.9 we respect the journald setting, so those configs now take effect.

@edsantiago "/etc/containers/containers.conf" is the right location! The issue is gone!

I have opened this PR to ignore the libpod.conf events_logger setting:
https://github.com/containers/common/pull/120
If you are hitting this issue, does removing all libpod.conf files make the problem go away, without adding a containers.conf?
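For anyone checking, the usual libpod.conf locations can be inspected and moved aside like this (paths are the standard ones; moving is safer than deleting, and this is only a sketch):

# look for legacy libpod.conf files that may carry a stale events_logger setting
ls -l /usr/share/containers/libpod.conf /etc/containers/libpod.conf ~/.config/containers/libpod.conf 2>/dev/null

# move the per-user one aside and retry
mv ~/.config/containers/libpod.conf ~/.config/containers/libpod.conf.bak
podman run --rm hello-world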

@mheon the priority is

builtin containers.conf defaults (no file)
libpod.conf (if present)
/usr/share/containers/containers.conf (if present)
/etc/containers/containers.conf (if present)
~/.config/containers/containers.conf (if present)

In Fedora 31 we do not ship any containers.conf (we ship one that is commented out in Fedora 32).
We will no longer ship libpod.conf going forward.
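As an illustration of the last (per-user) entry in that list, the same settings suggested below for /etc/containers/containers.conf can also live in the user's own file, e.g. (sketch):

# ~/.config/containers/containers.conf (per-user override, sketch)
[engine]
cgroup_manager = "cgroupfs"
events_logger = "file"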

@coandco can you create the following /etc/containers/containers.conf file and see if it helps?

[engine]
cgroup_manager = "cgroupfs"
events_logger = "file"

@edsantiago Just tried it, and I can confirm that it makes the warning message go away.

A friendly reminder that this issue had no activity for 30 days.

We fixed this in containers.conf in 1.9.1 - closing as such.

If anyone can still reproduce on 1.9.1+, we can reopen.
