Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
Steps to reproduce the issue:
podman run -it --rm fedora:32
Describe the results you received:
Error: invalid configuration, cannot specify resource limits without cgroups v2 and --cgroup-manager=systemd
Describe the results you expected:
A root shell prompt inside the container (#).
Additional information you deem important (e.g. issue happens only occasionally):
Happens all the time
Output of podman version:
Version: 1.9.1
RemoteAPI Version: 1
Go Version: go1.14.2
OS/Arch: linux/amd64
Output of podman info --debug:
debug:
compiler: gc
gitCommit: ""
goVersion: go1.14.2
podmanVersion: 1.9.1
host:
arch: amd64
buildahVersion: 1.14.8
cgroupVersion: v2
conmon:
package: conmon-2.0.15-1.fc32.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.0.15, commit: 33da5ef83bf2abc7965fc37980a49d02fdb71826'
cpus: 8
distribution:
distribution: fedora
version: "32"
eventLogger: file
hostname: tmp.scylladb.com
idMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
kernel: 5.6.7-300.fc32.x86_64
memFree: 5275238400
memTotal: 33541488640
ociRuntime:
name: crun
package: crun-0.13-2.fc32.x86_64
path: /usr/bin/crun
version: |-
crun version 0.13
commit: e79e4de4ac16da0ce48777afb72c6241de870525
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
os: linux
rootless: true
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.0.0-1.fc32.x86_64
version: |-
slirp4netns version 1.0.0
commit: a3be729152a33e692cd28b52f664defbf2e7810a
libslirp: 4.2.0
swapFree: 16869486592
swapTotal: 16869486592
uptime: 93h 3m 6.55s (Approximately 3.88 days)
registries:
search:
- registry.fedoraproject.org
- registry.access.redhat.com
- registry.centos.org
- docker.io
store:
configFile: /home/avi/.config/containers/storage.conf
containerStore:
number: 0
paused: 0
running: 0
stopped: 0
graphDriverName: overlay
graphOptions:
overlay.mount_program:
Executable: /usr/bin/fuse-overlayfs
Package: fuse-overlayfs-1.0.0-1.fc32.x86_64
Version: |-
fusermount3 version: 3.9.1
fuse-overlayfs: version 1.0.0
FUSE library version 3.9.1
using FUSE kernel interface version 7.31
graphRoot: /home/avi/.local/share/containers/storage
graphStatus:
Backing Filesystem: extfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "false"
imageStore:
number: 1
runRoot: /run/user/1000/containers
volumePath: /home/avi/.local/share/containers/storage/volumes
Package info (e.g. output of rpm -q podman or apt list podman):
podman-1.9.1-1.fc32.x86_64
Additional environment details (AWS, VirtualBox, physical, etc.):
Fully updated Fedora 32.
Update: this started to happen on my second Fedora 32 machine. On the other hand, I rebooted the first one (due to an unrelated problem :( ) and the problem went away.
I'm running Linux 5.6.7 on the still-broken machine, so it's unlikely to be kernel version related.
Can you provide ~/.config/containers/libpod.conf from a system that reproduces?
That one has no ~/.config/containers/libpod.conf.
$ cat ~/.config/containers/libpod.conf
cat: /home/avi/.config/containers/libpod.conf: No such file or directory
$ podman run -it --rm fedora:32
Error: invalid configuration, cannot specify resource limits without cgroups v2 and --cgroup-manager=systemd
Linux avi 5.6.7-300.fc32.x86_64 #1 SMP Thu Apr 23 14:13:50 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
It's definitely a v2 system from podman info, so the issue must be the cgroup manager. Can you try a container with --cgroup-manager=systemd and see if that works?
@rhatdan Does this look like a potential containers.conf issue to you? Maybe a default resource limit specified in the config?
I also bumped into this. First time running podman on Fedora 32, trying to launch a couple of containers.
Output:
[hiisukun@serv:~]$ podman run -it --rm fedora:32
Trying to pull registry.fedoraproject.org/fedora:32...
Getting image source signatures
Copying blob 3088721d7dbf done
Copying config d81c91deec done
Writing manifest to image destination
Storing signatures
Error: invalid configuration, cannot specify resource limits without cgroups v2 and --cgroup-manager=systemd
podman version:
Version: 1.9.1
RemoteAPI Version: 1
Go Version: go1.14.2
OS/Arch: linux/amd64
podman info --debug:
debug:
compiler: gc
gitCommit: ""
goVersion: go1.14.2
podmanVersion: 1.9.1
host:
arch: amd64
buildahVersion: 1.14.8
cgroupVersion: v2
conmon:
package: conmon-2.0.15-1.fc32.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.0.15, commit: 33da5ef83bf2abc7965fc37980a49d02fdb71826'
cpus: 4
distribution:
distribution: fedora
version: "32"
eventLogger: file
hostname: serv
idMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
kernel: 5.6.8-300.fc32.x86_64
memFree: 3510607872
memTotal: 14608658432
ociRuntime:
name: crun
package: crun-0.13-2.fc32.x86_64
path: /usr/bin/crun
version: |-
crun version 0.13
commit: e79e4de4ac16da0ce48777afb72c6241de870525
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
os: linux
rootless: true
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.0.0-1.fc32.x86_64
version: |-
slirp4netns version 1.0.0
commit: a3be729152a33e692cd28b52f664defbf2e7810a
libslirp: 4.2.0
swapFree: 7417098240
swapTotal: 7423913984
uptime: 50h 46m 55.42s (Approximately 2.08 days)
registries:
search:
- registry.fedoraproject.org
- registry.access.redhat.com
- registry.centos.org
- docker.io
store:
configFile: /home/hiisukun/.config/containers/storage.conf
containerStore:
number: 0
paused: 0
running: 0
stopped: 0
graphDriverName: overlay
graphOptions:
overlay.mount_program:
Executable: /usr/bin/fuse-overlayfs
Package: fuse-overlayfs-1.0.0-1.fc32.x86_64
Version: |-
fusermount3 version: 3.9.1
fuse-overlayfs: version 1.0.0
FUSE library version 3.9.1
using FUSE kernel interface version 7.31
graphRoot: /home/hiisukun/.local/share/containers/storage
graphStatus:
Backing Filesystem: extfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "false"
imageStore:
number: 3
runRoot: /run/user/1000/containers
volumePath: /home/hiisukun/.local/share/containers/storage/volumes
rpm -q podman:
podman-1.9.1-1.fc32.x86_64
uname -a:
Linux omni 5.6.8-300.fc32.x86_64 #1 SMP Wed Apr 29 19:01:34 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
libpod.conf:
[hiisukun@serv:~]$ cat ~/.config/containers/libpod.conf
cat: /home/hiisukun/.config/containers/libpod.conf: No such file or directory
Other:
/usr/share/containers/libpod.conf is whatever came with my package (unmodified by me)
--cgroup-manager=systemd on the command line didn't help
Everything in /usr/share/containers/containers.conf that is not a section heading is commented out
I also tried docker.io/linuxserver/unifi-controller, and it gives the same error as fedora:32.
Well, after shutting down the various things and rebooting, it has indeed gone away as per @avikivity's mention. Hopefully the source of the problem can be fixed, since rebooting isn't always convenient.
It's definitely a v2 system from podman info, so the issue must be the cgroup manager. Can you try a container with --cgroup-manager=systemd and see if that works?
Tried it, no change.
Since I needed the machine to work I rebooted it (unfortunately, couldn't check if logout is sufficient since Fedora 32 loses the keyboard/mouse after logout). The problem went away on that machine too. Unfortunately this means I can't help with debugging it until it recurs again (and then I will be limited by needing podman to work).
For me it was an outdated ~/.config/containers/libpod.conf config file that still had a cgroup_manager = "cgroupfs" line.
I don't remember having created that config file, so I suspect it was automatically created by podman (or maybe by me following some tutorial).
I removed that file and now running a container works without the --cgroup-manager=systemd override.
This doesn't sound exactly like OPs problem, but seems related enough to post here.
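For anyone else hitting the same stale-file case, a minimal shell sketch along these lines (assuming the path and setting reported above) can check for and sideline the leftover config:
# Look for a leftover rootless libpod.conf that still pins cgroupfs
if grep -qs 'cgroup_manager' ~/.config/containers/libpod.conf; then
    # Keep a backup instead of deleting outright
    mv ~/.config/containers/libpod.conf ~/.config/containers/libpod.conf.bak
    echo "Moved stale libpod.conf aside; re-run your podman command"
fi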
The reboot part of this seems bizarre...
If anyone else encounters this problem: can you try running podman system reset and see if things go back to normal after that?
I am facing the same issue after upgrading Fedora 31 to 32.
The only config I could find is located at /usr/share/containers/libpod.conf, and it states cgroup_manager = "systemd".
Running with debug output yields:
podman run --log-level=debug --rm -it debian bash
WARN[0000] The cgroupv2 manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1000` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
DEBU[0000] Ignoring lipod.conf EventsLogger setting "journald". Use containers.conf if you want to change this setting and remove libpod.conf files.
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf"
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{{[] [] container-default [] private [CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL ...
...
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument
...
Running podman system reset does not change the result.
Previously I removed the directory .local/share/containers/, but the error persists.
Can you provide full output for the command with debugging enabled?
Can you try manually setting --log-driver=k8s-file and seeing if that resolves the issue?
Finally, does systemctl --user as the user running Podman work?
The command systemctl --user reports: Failed to connect to bus.
At this point it is also impossible to start a new gnome-terminal.
Alright, I don't think this is Podman.
It sounds like the systemd user session is just not running, but starts after a reboot? Tempted to say it's a systemd bug?
I'm also seeing this on Fedora 31.
@mheon please change the error message to "could not connect to dbus" then. The current error message looks like it's trying to help, but it's actively misleading.
Digging deeper, that part looks like a bug. We should be discarding resource limits and not erroring unless manually set, but it looks like containers.conf is forcing resources to be set on every Podman invocation.
@rhatdan From a brief parse, I'm pretty sure containers.conf is unconditionally setting the PID limit, which is causing us to blow up on cgroups v2 systems that are not using the systemd cgroup manager.
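If that's right, a quick way to confirm on an affected machine might look like this (the per-user override is an assumption based on containers.conf precedence; 0 is documented as "no limit"):
# Does the distro-wide config force a PID limit on every invocation?
grep -n 'pids_limit' /usr/share/containers/containers.conf
# If so, a per-user containers.conf takes precedence and can zero it out
mkdir -p ~/.config/containers
printf '[containers]\npids_limit = 0\n' >> ~/.config/containers/containers.conf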
I see the original error as well when using cgroup_manager="cgroupfs" (which I need to use due to a different error which I’ll likely open an issue for soon). Downgrading podman from 1.9.1 to 1.8.2 appears to resolve it.
I experienced this too on my Fedora 31 laptop (it's been through the 28 > 29 > 30 > 31 upgrades). An inelegant deletion of ~/.config/containers fixed it, but I'm not sure if there are consequences to this.
Most likely you are fine. Podman will just use the defaults if that directory was removed. If you made any customizations, then there could be issues.
I am facing the same issue when using the REST API:
curl --header "Content-Type: application/json" --request POST --data '{
> "image": "nginx",
> "privileged": true,
> "publish_image_ports": true,
> "name": "rem-nginx"
> }' http://localhost:4041/containers/create
Error:
{"cause":"invalid configuration, cannot specify resource limits without cgroups v2 and --cgroup-manager=systemd","message":"CreateContainerFromCreateConfig(): invalid configuration, cannot specify resource limits without cgroups v2 and --cgroup-manager=systemd","response":500}
Any hints?
Did you remove the libpod.conf, and could you try out podman-1.9.2 in updates-testing?
Interestingly, it works after the restart.
Seeing the same issue on two fresh F32 instances; rolling back to podman-1.8.2 (what's currently "new" in F31) was our solution for now. We'll be watching this ticket with intent!
I would love for people with this issue to check whether it is fixed by podman 1.9.2.
I think this one is resolved by 1.9.2, but I'm going to hold off on closing until someone can confirm
I think this one is resolved by 1.9.2, but I'm going to hold off on closing until someone can confirm
Do you know when 1.9.2 will be promoted out of testing on F32?
Did you give it good karma?
I was hitting this and 1.9.2 fixes it for me, but I can't find the update in bodhi to provide karma.
Thanks. :)
Works for me. Karma voted. Thanks. :sake:
Thanks for checking, everyone! Glad to see this is resolved.
I was finally able to consume 1.9.2 on F32. After a podman system reset I'm still seeing this error. I'm launching the container with a unit file:
[Unit]
Description=Podman running deluge
Wants=network.target
After=network-online.target
[Service]
WorkingDirectory=/app/acquisition/deluge
User=acquisition
Group=acquisition
Restart=no
ExecStartPre=/usr/bin/rm -f %T/%N.pid %T/%N.cid
ExecStartPre=/usr/bin/podman rm --ignore -f deluge
ExecStart=/usr/bin/podman run --conmon-pidfile %T/%N.pid --cidfile %T/%N.cid --cgroups=no-conmon \
--name=deluge \
--publish 127.0.0.1:8112:8112 \
--security-opt label=disable \
--volume /app/acquisition/deluge/config:/config \
docker.io/linuxserver/deluge:latest
ExecStop=/usr/bin/podman stop --ignore deluge -t 10
ExecStopPost=/usr/bin/podman rm --ignore -f deluge
ExecStopPost=/usr/bin/rm -f %T/%N.pid %T/%N.cid
KillMode=control-group
Type=simple
[Install]
WantedBy=multi-user.target default.target
A downgrade to 1.8.2 and I can run again without issues. This is a fresh install of F32 as of a couple days ago. No changing of any configuration related to containers.
Do you have a ~/.config/containers/libpod.conf, and if so, can you provide the contents of it?
The error is still present.
podman-1.9.2-1.fc32.x86_64
~/.config/containers/libpod.conf does not exist
/usr/share/containers/libpod.conf says it's systemd (default and unchanged)
$ systemctl --user -> Failed to connect to bus: No such file or directory
$ podman system reset does not solve anything either.
Not restarting the whole space station (semi-production server) to get this to work.
@mheon should this issue be reopened or opened as a new issue? (The history here is pretty good, I'd rather keep it.) Even if it's a systemd bug, to serve as a tracker and to point people in the right direction.
I think we should probably re-open until a patch can be merged to improve the error message.
The error is still present.
* `podman-1.9.2-1.fc32.x86_64`
* `~/.config/containers/libpod.conf` does not exist
* `/usr/share/containers/libpod.conf` says it's systemd (default and unchanged)
* `$ systemctl --user` -> `Failed to connect to bus: No such file or directory`
* `$ podman system reset` does not solve anything either.
* Not restarting the whole space station (semi-production server) to get this to work.
Correct, my system errors with the same state that @RheaAyase enumerates above.
/usr/share/containers/libpod.conf should also not be there. We don't want to be shipping libpod.conf at all anymore.
rpm -qf /usr/share/containers/libpod.conf
podman-1.9.2-1.fc32.x86_64
That should be removed.
Removed: rm /usr/share/containers/libpod.conf
Verified: ls -l /usr/share/containers/libpod.conf
ls: cannot access '/usr/share/containers/libpod.conf': No such file or directory
Rebooted the system (test system).
podman run -it --rm fedora:32
Error: invalid configuration, cannot specify resource limits without cgroups v2 and --cgroup-manager=systemd
rpm -q podman
podman-1.9.2-1.fc32.x86_64
@rhatdan From my reading of the issue, we've probably fixed the libpod.conf issues in containers/common, but now we're giving this error in cases where the user does not have a systemd user session enabled as well, and it's not very helpful there.
@sharmay Can you check if systemctl --user works?
I was switching user with sudo su - <user>, and for this new user systemctl --user is not working. This is where I was running podman.
For now I enabled ssh, and podman is working.
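That matches the session theory: a session started with su typically does not get its own systemd user manager. A hedged sketch of logins that do create one (machinectl requires the systemd-container package):
# Either of these creates a real systemd user session for the target user:
ssh someuser@localhost
sudo machinectl shell someuser@.host
# Inside either, the user manager should respond:
systemctl --user is-system-running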
I started experimenting with podman. I'm getting some inconsistencies when running as a non-root user.
On the first run, I get a permission denied error:
[user@localhost ~]$ podman run -it hello-world
Error: sd-bus call: Permission denied: OCI runtime permission denied error
Running podman as root works fine
[user@localhost ~]$ sudo su
[sudo] password for user:
[root@localhost user]# podman run -it hello-world
Hello from Docker!
.....
After exiting root, running a second time as the user:
[user@localhost ~]$ podman run -it hello-world
Error: invalid configuration, cannot specify resource limits without cgroups v2 and --cgroup-manager=systemd
System information
[user@localhost ~]$ uname -a
Linux localhost 5.6.13-300.fc32.x86_64 #1 SMP Thu May 14 22:51:37 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
Podman Info
[user@localhost ~]$ podman info --debug
debug:
compiler: gc
gitCommit: ""
goVersion: go1.14.2
podmanVersion: 1.9.2
host:
arch: amd64
buildahVersion: 1.14.8
cgroupVersion: v2
conmon:
package: conmon-2.0.16-2.fc32.x86_64
path: /usr/bin/conmon
version: 'conmon version 2.0.16, commit: 1044176f7dd177c100779d1c63931d6022e419bd'
cpus: 4
distribution:
distribution: fedora
version: "32"
eventLogger: file
hostname: localhost
idMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
kernel: 5.6.13-300.fc32.x86_64
memFree: 24427372544
memTotal: 25190215680
ociRuntime:
name: crun
package: crun-0.13-2.fc32.x86_64
path: /usr/bin/crun
version: |-
crun version 0.13
commit: e79e4de4ac16da0ce48777afb72c6241de870525
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
os: linux
rootless: true
slirp4netns:
executable: /usr/bin/slirp4netns
package: slirp4netns-1.0.0-1.fc32.x86_64
version: |-
slirp4netns version 1.0.0
commit: a3be729152a33e692cd28b52f664defbf2e7810a
libslirp: 4.2.0
swapFree: 0
swapTotal: 0
uptime: 3m 17.12s
registries:
search:
- registry.fedoraproject.org
- registry.access.redhat.com
- registry.centos.org
- docker.io
store:
configFile: /home/user/.config/containers/storage.conf
containerStore:
number: 6
paused: 0
running: 0
stopped: 6
graphDriverName: overlay
graphOptions:
overlay.mount_program:
Executable: /usr/bin/fuse-overlayfs
Package: fuse-overlayfs-1.0.0-1.fc32.x86_64
Version: |-
fusermount3 version: 3.9.1
fuse-overlayfs: version 1.0.0
FUSE library version 3.9.1
using FUSE kernel interface version 7.31
graphRoot: /home/user/.local/share/containers/storage
graphStatus:
Backing Filesystem: extfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "false"
imageStore:
number: 1
runRoot: /tmp/run-1000/containers
volumePath: /home/user/.local/share/containers/storage/volumes
Libpod config locations
[user@localhost ~]$ locate libpod.conf
/usr/share/containers/libpod.conf
/usr/share/man/man5/libpod.conf.5.gz
Podman installation
[user@localhost ~]$ sudo yum distro-sync --enablerepo=updates-testing podman
[user@localhost ~]$ sudo yum -y install podman
[user@localhost ~]$ rpm -q podman
podman-1.9.2-1.fc32.x86_64
Do you have a systemd user session available? IE, does systemctl --user work?
Do you have a systemd user session available? IE, does systemctl --user work?
$ systemctl --user
Failed to connect to bus: No such file or directory
That would be your problem then. Your distro may not enabled the systemd user session by default, or you may be logging in by a method that does not create one (su or sudo). You can also make a ~/.config/containers/libpod.conf with the line cgroup_manager = "cgroupfs" in it - that will disable the ability to set resource limits on rootless containers, but should remove the issue.
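Spelled out, that workaround (straight from the suggestion above) would be:
mkdir -p ~/.config/containers
cat > ~/.config/containers/libpod.conf <<'EOF'
# Workaround for hosts without a systemd user session:
# rootless resource limits become unavailable, but containers start.
cgroup_manager = "cgroupfs"
EOF
podman run -it --rm fedora:32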
Do you have a systemd user session available? IE, does systemctl --user work?
$ systemctl --user
Failed to connect to bus: No such file or directory
Lol, that was a good hint. I searched for this
Failed to connect to bus: No such file or directory
and landed on https://bbs.archlinux.org/viewtopic.php?id=234813
It says I need to ensure UsePAM yes in /etc/ssh/sshd_config, which I had actually set to UsePAM no as part of securing sshd. In the Fedora 32 /etc/ssh/sshd_config there is a clear warning though:
# WARNING: 'UsePAM no' is not supported in Fedora and may cause several
# problems.
I re-enabled 'UsePAM yes' and restarted the system.
$ podman run -it hello-world
Hello from Docker!
Everything works fine !!!! 🏆
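For anyone retracing this, a small sketch of the check and fix (sshd -T needs root to read the host keys):
# Show the effective setting; Fedora expects 'usepam yes'
sudo sshd -T | grep -i usepam
# After changing /etc/ssh/sshd_config:
sudo systemctl restart sshd
# Note: per the comments below, a reload alone may not be enough; a
# reboot was what actually cleared the error in at least one report.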
I for instance have UsePAM set to yes, and I'm trying to run the containers from a systemd service as a non root user. Still the same problem.
I for instance have UsePAM set to yes, and I'm trying to run the containers from a systemd service as a non root user. Still the same problem.
It didn't work after I reloaded sshd following the edit. I rebooted the machine and it started working afterwards. Maybe it's a different problem in your case.
No, I've had UsePAM enabled for years. It was already confirmed that a reboot resolves the issue; that's independent of the sshd configuration.
@mheon, like @RheaAyase I'm trying to launch containers via systemd system-level units (not user units), but I'm specifying User in the unit. Is this entirely unsupported going forward with podman? It works consistently in 1.8.2.
We're working with the systemd team on making that one easier. Their general recommendation was to enable lingering mode manually on every user you want to do that with.
They also didn't seem to really understand the request (they were strongly in favor of making the units in the user's systemd session). I think we explained why it was desirable to our users, but more details on why folks are using it would be appreciated.
Might be an education issue? I personally don't know how I'd start a unit at system boot when it's a --user unit. I'd chosen to set up each "container application" user with /sbin/nologin and specify them in a root-level systemd unit.
I'm not planning to boot the server, then ssh in and su into each user to get those applications started (which... maybe I'm crazy but... is that what the assumption is now?)
I __love__ that podman is allowing me to run rootless containers without a monolithic daemon. However, I still need a daemon-ish system to manage the start/stop and relationships between containers. I'm now using things like Requires= and PartOf= so that groups of containers are started together, sharing network namespaces, etc. systemd+podman lets me define ephemeral containerized applications with dependencies between them, so they can function properly.
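As an illustration of that pattern (unit and image names here are hypothetical, not from this thread): a helper container whose lifecycle is tied to a main unit via Requires=/PartOf=:
# /etc/systemd/system/myapp-db.service (illustrative sketch)
[Unit]
Description=Database container for myapp
Requires=myapp.service
After=myapp.service
# PartOf means stopping/restarting myapp.service does the same here
PartOf=myapp.service

[Service]
ExecStartPre=/usr/bin/podman rm --ignore -f myapp-db
ExecStart=/usr/bin/podman run --name myapp-db docker.io/library/postgres:12
ExecStop=/usr/bin/podman stop --ignore myapp-db -t 10

[Install]
WantedBy=multi-user.target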
Their general recommendation was to enable lingering mode manually on every user you want to do that with.
Can you explain this approach a little bit more? I'm not sure I understand. Does it allow me to have a user that isn't "logged in", but has the user session active?
loginctl enable-linger is the command, I believe - and yes, that is the intent. Systemd will stop auto-pruning the user's processes on logout, and the systemd user session will stay alive.
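Putting that together for the service-account setup described earlier (the username is taken from the deluge unit above):
# Let the user's systemd instance start at boot and outlive logins
sudo loginctl enable-linger acquisition
# Verify; expect Linger=yes
loginctl show-user acquisition --property=Linger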
If I enable-linger, then I get $ podman ps -a -> cannot chdir: Permission denied under the user in question. ps -a should show something even if I have a broken container in the list, shouldn't it?
I had the same issue when trying to run rootless podman containers as a non-root account that was treated essentially as a service account.
sudo loginctl enable-linger <username> did allow my "service accounts" to execute the podman containers.
I have the same issue with 1.9.3: if I run podman run -it localhost/redis:6.0.4, I get Error: invalid configuration, cannot specify resource limits without cgroups v2 and --cgroup-manager=systemd. Beforehand I ran podman system reset and also enabled lingering for the rootless user. I rebooted, but nothing changed. So it's a clean setup, without any containers.conf in the user's home directory. I also tried the --cgroup-manager=systemd option on the run command.
Here are some details:
$ podman info --debug
debug:
compiler: gc
gitCommit: ""
goVersion: go1.11.6
podmanVersion: 1.9.3
host:
arch: amd64
buildahVersion: 1.14.9
cgroupVersion: v2
conmon:
package: 'conmon: /usr/libexec/podman/conmon'
path: /usr/libexec/podman/conmon
version: 'conmon version 2.0.16, commit: '
cpus: 24
distribution:
distribution: debian
version: "10"
eventLogger: file
hostname: hostsystem
idMappings:
gidmap:
- container_id: 0
host_id: 2646
size: 1
- container_id: 1
host_id: 689824
size: 65536
uidmap:
- container_id: 0
host_id: 2646
size: 1
- container_id: 1
host_id: 689824
size: 65536
kernel: 4.19.0-9-amd64
memFree: 66232385536
memTotal: 66994249728
ociRuntime:
name: crun
package: 'crun: /usr/bin/crun'
path: /usr/bin/crun
version: |-
crun version UNKNOWN
commit: 11b7a1f2baa6bbb762c470ff0457a0714274c141
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
os: linux
rootless: true
slirp4netns:
executable: /usr/bin/slirp4netns
package: 'slirp4netns: /usr/bin/slirp4netns'
version: |-
slirp4netns version 1.0.0
commit: unknown
libslirp: 4.2.0
swapFree: 7999582208
swapTotal: 7999582208
uptime: 3m 6.27s
registries:
search:
- docker.io
- quay.io
store:
configFile: /home/rootlessuser/.config/containers/storage.conf
containerStore:
number: 0
paused: 0
running: 0
stopped: 0
graphDriverName: overlay
graphOptions:
overlay.mount_program:
Executable: /usr/bin/fuse-overlayfs
Package: 'fuse-overlayfs: /usr/bin/fuse-overlayfs'
Version: |-
fusermount3 version: 3.4.1
fuse-overlayfs: version 0.7.6
FUSE library version 3.4.1
using FUSE kernel interface version 7.27
graphRoot: /home/rootlessuser/.local/share/containers/storage
graphStatus:
Backing Filesystem: extfs
Native Overlay Diff: "false"
Supports d_type: "true"
Using metacopy: "false"
imageStore:
number: 1
runRoot: /run/user/2646/containers
volumePath: /home/rootlessuser/.local/share/containers/storage/volumes
$ podman version
Version: 1.9.3
RemoteAPI Version: 1
Go Version: go1.11.6
OS/Arch: linux/amd64
$ apt list podman
Listing... Done
podman/unknown,now 1.9.3~1 amd64 [installed]
podman/unknown 1.9.3~1 arm64
podman/unknown 1.9.3~1 armhf
podman/unknown 1.9.3~1 ppc64el
Do you have limits set for your user account?
Any new ideas to get this running without a server restart? It's still far from planned maintenance for me.
I am trying like hell to reproduce this in podman 2.0. Could you check whether the podman 2.0 release candidate fixes the problem?
yum -y update --enablerepo updates-testing podman
Do you have limits set for your user account?
Which limits do you mean exactly?
Hard and soft limits for the root and rootless user are equal, and starting a container works as root but not rootless:
$ ulimit -H -a
core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 255429
max locked memory (kbytes, -l) 65536
max memory size (kbytes, -m) unlimited
open files (-n) 1048576
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) unlimited
cpu time (seconds, -t) unlimited
max user processes (-u) 255429
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
$ ulimit -S -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 255429
max locked memory (kbytes, -l) 65536
max memory size (kbytes, -m) unlimited
open files (-n) 1024
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 255429
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
/etc/security/limits.conf only contains some comments.
My description above concerns Podman v1.9.3. With version 1.9.1 I get a different error as root:
$ podman run -it localhost/redis:6.0.4
Error: container_linux.go:344: starting container process caused "process_linux.go:275: applying cgroup configuration for process caused \"mountpoint for devices not found\"": OCI runtime error
I ran podman system reset beforehand. As rootless, I get the same error as with 1.9.3:
$ podman run -it localhost/redis:6.0.4
Error: invalid configuration, cannot specify resource limits without cgroups v2 and --cgroup-manager=systemd
This is what I am seeing with podman 2.0
$ podman create -ti alpine sh
fc7abf0fddfa30e1c375e44f0c70f180ff63be8a19f816cbbcb46d292a76f750
$ podman inspect -l --format '{{ .HostConfig.Ulimits }}'
[]
I think if you inspect the Ulimits on your containers, you will see these being set. Could you check whether 2.0 fixes the problem?
Sorry to jump in the middle of this, but I'm running on Ubuntu 18.04, and with both podman 1.8.3 and 1.9.3 I run into this error when I try to execute this code
(base) 💊 ~ 5645 💊 podman run -it --rm -m=2g --memory-swap=-1 --cgroup-manager=systemd alpine /bin/bash
Error: invalid configuration, cannot specify resource limits without cgroups v2 and --cgroup-manager=systemd
Here's my podman info
host:
arch: amd64
buildahVersion: 1.14.9
cgroupVersion: v1
conmon:
package: 'conmon: /usr/libexec/podman/conmon'
path: /usr/libexec/podman/conmon
version: 'conmon version 2.0.16, commit: '
cpus: 4
distribution:
distribution: ubuntu
version: "18.04"
eventLogger: file
hostname: lil
idMappings:
gidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
uidmap:
- container_id: 0
host_id: 1000
size: 1
- container_id: 1
host_id: 100000
size: 65536
kernel: 4.15.0-106-generic
memFree: 1065861120
memTotal: 8262774784
ociRuntime:
name: runc
package: 'runc: /usr/sbin/runc'
path: /usr/sbin/runc
version: 'runc version spec: 1.0.1-dev'
os: linux
rootless: true
slirp4netns:
executable: /usr/bin/slirp4netns
package: 'slirp4netns: /usr/bin/slirp4netns'
version: |-
slirp4netns version 0.4.3
commit: unknown
swapFree: 2147479552
swapTotal: 2147479552
uptime: 1h 22m 54.33s (Approximately 0.04 days)
registries:
search:
- docker.io
- quay.io
store:
configFile: /home/myusername/.config/containers/storage.conf
containerStore:
number: 0
paused: 0
running: 0
stopped: 0
graphDriverName: vfs
graphOptions: {}
graphRoot: /home/myusername/.local/share/containers/storage
graphStatus: {}
imageStore:
number: 2
runRoot: /run/user/1000/containers
volumePath: /home/myusername/.local/share/containers/storage/volumes
I'm going to try podman 2.0 and see if it fixes anything. But help with this is otherwise appreciated!
If you're not running cgroups v2 (and from your podman info output, it looks like you're not), this is fully expected. Resource limits with rootless Podman require the system to use cgroups v2 (cgroups v1 has fundamental security issues that make it unsafe to expose to rootless users; these are resolved in v2, which allows rootless containers to modify cgroups and set resource limits).
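A quick way to see which hierarchy a host is actually running (just a sketch; cgroup2fs indicates the unified v2 hierarchy, tmpfs the legacy v1 layout):
stat -fc %T /sys/fs/cgroup
# or compare with the cgroupVersion field:
podman info | grep -i cgroup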
Good to know I'm not stumbling on something unexpected. Yea I struck out setting a specific cgroup using https://www.paranoids.at/cgroup-ubuntu-18-04-howto/, but I can work a little more on trying to set up cgroups v2.
Supposing I get further the next time, what changes should I make in podman's configuration to use cgroups v2?
I don't believe any are necessary; we'll detect that /sys/fs/cgroup is using v2 when Podman is invoked, and begin using it.
Hmm, now I'm stumped. If I don't need to modify the configuration, how do I make podman start using cgroups v2? Does that mean I should only have to pass the correct flag to --cgroup-manager= when I do podman run with memory limits?
The system itself needs to be switched to Cgroups v2 - it should be a kernel parameter, though I don't know what the exact parameter is for Ubuntu.
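For the record, one way to flip a GRUB-based Ubuntu install over (the parameter name is confirmed a few comments below; editing /etc/default/grub is at your own risk):
# Append the unified-hierarchy parameter to the kernel command line
sudo sed -i 's/GRUB_CMDLINE_LINUX="/&systemd.unified_cgroup_hierarchy=1 /' /etc/default/grub
sudo update-grub
sudo reboot
# Afterwards, 'podman info' should report cgroupVersion: v2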
And just to make sure, I know I have cgroups v2 because of the result of grep cgroup /proc/filesystems which shows
nodev cgroup
nodev cgroup2
but I still need to find a way to switch the system to cgroups v2? It's not enough to have it already installed? Sorry this stuff is pretty foreign to me.
We only support cgroups v2 in unified mode - that is, only cgroup2 will be mounted.
OK, I managed to follow these instructions https://askubuntu.com/questions/19486/how-do-i-add-a-kernel-boot-parameter and tucked in the parameter systemd.unified_cgroup_hierarchy=1. Now podman info actually contains a line that says cgroupVersion: v2, but when I run podman run -it --rm -m=2g --memory-swap=-1 --cgroup-manager=systemd alpine /bin/bash
I get a new error:
Error: sd-bus add match: Operation not permitted: OCI runtime permission denied error
Is this something else that folks have dealt with?
@giuseppe Any thoughts here?
what version of systemd are you using? The error is coming from crun while trying to create the cgroup.
What is the value of the DBUS_SESSION_BUS_ADDRESS env variable?
Thanks for helping out. The version of systemd is 237, and the value of the variable is DBUS_SESSION_BUS_ADDRESS=unix:path=/run/user/1000/bus
it seems like the systemd version is too old.
To verify it, try the command `unshare -r systemd-run --user echo it works` as rootless. Does it succeed?
yes that command fails
Failed to start transient service unit: Access denied
So I should try to update systemd? Or perhaps that's only possible by upgrading to a newer version of Ubuntu?
yes, you either need to upgrade systemd or use the cgroupfs backend for cgroups.
What happens is that we are not able to create a systemd scope as rootless while being in a user namespace.
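For the second option, a minimal sketch of selecting the cgroupfs backend (the [engine] table is the containers.conf equivalent of the old libpod.conf key; treat the exact table name as an assumption for your version):
mkdir -p ~/.config/containers
printf '[engine]\ncgroup_manager = "cgroupfs"\n' >> ~/.config/containers/containers.conf
# Resource-limit flags (-m, --pids-limit, ...) still won't work rootless here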
Gotcha. I think for the moment I'll switch to a CentOS 8 machine and see if the same problems arise. But just to follow this thread to completion: I tried using --cgroup-manager=cgroupfs and I still got the error in this issue's title. I've searched a bit but haven't immediately found instructions for setting up cgroupfs; are there specific steps I need to take before I can expect podman to work with cgroupfs? Let me know if we've reached the point where what I'm asking isn't specific to the issue here.
Please re-open #6798 if you find this unrelated. I closed it for now.
I ran into the same issue with a fresh install of CentOS 8, even after modifying the kernel parameters so that cgroups v2 was in use for podman. I feel like I'm still not understanding how to make use of cgroupfs in the CLI, or whether there's supposed to be a config file to update. The only one I could find was ~/.config/containers/storage.conf. Either way, I still got
Error: invalid configuration, cannot specify resource limits without cgroups v2 and --cgroup-manager=systemd
when I ran podman run -it --rm -m=2g --memory-swap=-1 --cgroup-manager=systemd alpine
I'm going to try CentOS Stream 8 to see if things are new enough to support rootless memory management for containers.
Still very much present with podman-2.0.1-1.fc32
So I updated and restarted the whole spaceship, which somehow ignored podman; then I updated podman and now I get the same error. Back to square one, waiting for the next server restart in several months, I guess.
Can you provide the error message you're seeing? Also, are you on Fedora or FCOS?
The one in the $subject, after updating from 1.9.3 to 2.0.1.
Restart solved it (came to the (obvious) realisation that the recent hardware upgrade and the removal of that supermicro bloat actually makes restarts without upgrades pretty fast.)
It did not however resolve everything, I still had to run all the systemd things with: XDG_RUNTIME_DIR=/run/user/$UID systemctl --user which is the earlier mentioned systemd issue that was discussed.
That's really weird, because the code path that leads to that error should be completely disabled in v2.0.0 and up...
Another interesting thing, perhaps coincidence, is that virsh no longer works from system services either. Had to change that to a --user as well.
A friendly reminder that this issue had no activity for 30 days.
We had some fixes for this in the podman 2.0 releases. Is this fixed now? Can we close this issue?
Personally I'm waiting for 2.1.x (stuck on 1.8.2 due to #7016). I'd not encountered this when I was doing 2.x testing though.
Original reporter here, haven't seen it in a long time. On 2.0.3 now.