/kind bug
Description
I rarely change my container setup, but when I come back after some weeks or a couple of months to change something, I can't spawn new containers due to the error below.
Steps to reproduce the issue:
**Describe the results you received:**
It always fails. I updated podman to the latest 1.5.1-3, and it still fails the same way. Note that there were containers running across the upgrade.
I can't see what it tries to open:
sudo strace -fe open podman run --name sftp --rm -p 2222:22/tcp -h sftp --memory=128M atmoz/sftp:latest --log-level=debug
strace: Process 17668 attached
strace: Process 17669 attached
strace: Process 17670 attached
strace: Process 17671 attached
strace: Process 17672 attached
strace: Process 17673 attached
strace: Process 17674 attached
strace: Process 17675 attached
strace: Process 17676 attached
[pid 17676] +++ exited with 0 +++
[pid 17671] --- SIGCHLD {si_signo=SIGCHLD, si_code=CLD_EXITED, si_pid=17676, si_uid=0, si_status=0, si_utime=0, si_stime=0} ---
Trying to pull docker.io/atmoz/sftp:latest...
strace: Process 17677 attached
strace: Process 17678 attached
strace: Process 17679 attached
strace: Process 17680 attached
Getting image source signatures
strace: Process 17701 attached
strace: Process 17704 attached
strace: Process 17705 attached
Copying blob 017cdea0acaa done
Copying blob 03cce7e7c0ee done
Copying blob b2f8f2e93ab3 done
Copying blob 54f7e8ac135a done
Copying blob 0fbd7701cad1 done
Copying config 6345f82053 done
Writing manifest to image destination
Storing signatures
Error: error allocating lock for new container: no space left on device
[pid 17704] +++ exited with 125 +++
[pid 17705] +++ exited with 125 +++
[pid 17701] +++ exited with 125 +++
[pid 17680] +++ exited with 125 +++
[pid 17679] +++ exited with 125 +++
[pid 17678] +++ exited with 125 +++
[pid 17677] +++ exited with 125 +++
[pid 17675] +++ exited with 125 +++
[pid 17674] +++ exited with 125 +++
[pid 17673] +++ exited with 125 +++
[pid 17672] +++ exited with 125 +++
[pid 17671] +++ exited with 125 +++
[pid 17670] +++ exited with 125 +++
[pid 17669] +++ exited with 125 +++
[pid 17668] +++ exited with 125 +++
+++ exited with 125 +++
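A side note on the trace above: `--log-level=debug` comes after the image name, so it is most likely passed to the container rather than to podman itself, and `strace -e open` does not catch calls that go through openat(), which many opens do. A sketch of an invocation that may be more informative (same image and options as above, not verified here):
```
sudo strace -f -e trace=open,openat \
    podman --log-level=debug run --name sftp --rm -p 2222:22/tcp -h sftp \
    --memory=128M atmoz/sftp:latest
```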
I have plenty of space, both in bytes and in inodes:
$ sudo df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.8G 0 3.8G 0% /dev
tmpfs 3.8G 168K 3.8G 1% /dev/shm
tmpfs 3.8G 17M 3.8G 1% /run
tmpfs 3.8G 0 3.8G 0% /sys/fs/cgroup
/dev/mapper/fedora-root 49G 33G 15G 70% /
tmpfs 3.8G 72K 3.8G 1% /tmp
/dev/sda1 976M 205M 705M 23% /boot
/dev/mapper/fedora-home 177G 24G 144G 14% /home
shm 63M 0 63M 0% /var/lib/containers/storage/overlay-containers/61ecc54cf1c77e8c2877b88b61838356c662496147
43e78e2e37d334dfab1123/userdata/shm
overlay 49G 33G 15G 70% /var/lib/containers/storage/overlay/00b91bda33dcee9a0fc8502b7e0eb28ee06de5ca50524fed28a37
85c75c62c9d/merged
overlay 49G 33G 15G 70% /var/lib/containers/storage/overlay/fb406050a05b6db12fceecafc59e27cde30be5b2108c7be563d5a
6efed97b67e/merged
shm 63M 0 63M 0% /var/lib/containers/storage/overlay-containers/cb450a422b3da1f655aef09144af5c426a2e11a4f0
33651b725cf5e8186c1360/userdata/shm
overlay 49G 33G 15G 70% /var/lib/containers/storage/overlay/b3f61b802a3041d283a9c315d9bfb17d517873f382d9bd87ccc4a
e817bbc98bc/merged
shm 63M 0 63M 0% /var/lib/containers/storage/overlay-containers/0aa8bd7512a04f5c03aee311585b5048988a2a9bf5
71d6b90a8d51dc75e54f33/userdata/shm
tmpfs 768M 16K 768M 1% /run/user/1017
tmpfs 768M 20K 768M 1% /run/user/42
overlay 49G 33G 15G 70% /var/lib/containers/storage/overlay/f21ff8e7c827dde1bde9f44cf4a304a8c65808cd54bc8618ad07f
12542dbb1c5/merged
shm 63M 0 63M 0% /var/lib/containers/storage/overlay-containers/2e20e07861960b51dd83534369e04d7b4a16a70f57
c43054bb67980b66662eb7/userdata/shm
shm 63M 0 63M 0% /var/lib/containers/storage/overlay-containers/079108eedaffc503ba3e9a792f4a10caedde8e776a
b01f79363e443d148ce901/userdata/shm
overlay 49G 33G 15G 70% /var/lib/containers/storage/overlay/2eaf7978d284e70422ea28d1f49933be05000fb428ad808891b33
a518680c44c/merged
shm 63M 0 63M 0% /var/lib/containers/storage/overlay-containers/860832aff0dadc395990f5a4e178284a8ad25ea049
3e08e78a00acea840460d0/userdata/shm
overlay 49G 33G 15G 70% /var/lib/containers/storage/overlay/999bcb15197d19f9eb3d2330b7afb91f09ba08af743ebffa536ed
4a85bdddea7/merged
shm 63M 0 63M 0% /var/lib/containers/storage/overlay-containers/b19e44a179757de5962d302cb0600f53dc1c787f8b
507f37ab111c40c6480744/userdata/shm
overlay 49G 33G 15G 70% /var/lib/containers/storage/overlay/07932579f56d5af5753f370ea96235129ce2c92edb00e071154d5
3c68e6dde15/merged
shm 63M 0 63M 0% /var/lib/containers/storage/overlay-containers/7cd90271f81f6ee84486c5d0fae27fade8b9e8f241
6129ae4a94323f3fb0a188/userdata/shm
overlay 49G 33G 15G 70% /var/lib/containers/storage/overlay/a8c46521b6d0cfbbf6252931cd5a36798f638b532cc84a64fa3bf
a85b115cf2a/merged
shm 63M 6.4M 57M 11% /var/lib/containers/storage/overlay-containers/4be11f7879f384bbb50a228b07b48e9460db7818cf
b370e8671e23f5c4b83576/userdata/shm
overlay 49G 33G 15G 70% /var/lib/containers/storage/overlay/2b8b2ffae7f9f4f58e262df9601c738d4298511a3c4241839ea71
482436039ae/merged
$ sudo df -hi
Filesystem Inodes IUsed IFree IUse% Mounted on
devtmpfs 957K 550 956K 1% /dev
tmpfs 960K 3 960K 1% /dev/shm
tmpfs 960K 4.9K 956K 1% /run
tmpfs 960K 17 960K 1% /sys/fs/cgroup
/dev/mapper/fedora-root 3.2M 581K 2.6M 19% /
tmpfs 960K 22 960K 1% /tmp
/dev/sda1 64K 444 64K 1% /boot
/dev/mapper/fedora-home 12M 173K 12M 2% /home
shm 960K 1 960K 1% /var/lib/containers/storage/overlay-containers/61ecc54cf1c77e8c2877b88b61838356c662496$
4743e78e2e37d334dfab1123/userdata/shm
overlay 3.2M 581K 2.6M 19% /var/lib/containers/storage/overlay/00b91bda33dcee9a0fc8502b7e0eb28ee06de5ca50524fed28$
3785c75c62c9d/merged
overlay 3.2M 581K 2.6M 19% /var/lib/containers/storage/overlay/fb406050a05b6db12fceecafc59e27cde30be5b2108c7be563$
5a6efed97b67e/merged
shm 960K 1 960K 1% /var/lib/containers/storage/overlay-containers/cb450a422b3da1f655aef09144af5c426a2e11a$
f033651b725cf5e8186c1360/userdata/shm
overlay 3.2M 581K 2.6M 19% /var/lib/containers/storage/overlay/b3f61b802a3041d283a9c315d9bfb17d517873f382d9bd87cc$
4ae817bbc98bc/merged
shm 960K 1 960K 1% /var/lib/containers/storage/overlay-containers/0aa8bd7512a04f5c03aee311585b5048988a2a9$
f571d6b90a8d51dc75e54f33/userdata/shm
tmpfs 960K 38 960K 1% /run/user/1017
tmpfs 960K 20 960K 1% /run/user/42
overlay 3.2M 581K 2.6M 19% /var/lib/containers/storage/overlay/f21ff8e7c827dde1bde9f44cf4a304a8c65808cd54bc8618ad$
7f12542dbb1c5/merged
shm 960K 1 960K 1% /var/lib/containers/storage/overlay-containers/2e20e07861960b51dd83534369e04d7b4a16a70$
57c43054bb67980b66662eb7/userdata/shm
shm 960K 1 960K 1% /var/lib/containers/storage/overlay-containers/079108eedaffc503ba3e9a792f4a10caedde8e7$
6ab01f79363e443d148ce901/userdata/shm
overlay 3.2M 581K 2.6M 19% /var/lib/containers/storage/overlay/2eaf7978d284e70422ea28d1f49933be05000fb428ad808891$
33a518680c44c/merged
shm 960K 1 960K 1% /var/lib/containers/storage/overlay-containers/860832aff0dadc395990f5a4e178284a8ad25ea$
493e08e78a00acea840460d0/userdata/shm
overlay 3.2M 581K 2.6M 19% /var/lib/containers/storage/overlay/999bcb15197d19f9eb3d2330b7afb91f09ba08af743ebffa53$
ed4a85bdddea7/merged
shm 960K 1 960K 1% /var/lib/containers/storage/overlay-containers/b19e44a179757de5962d302cb0600f53dc1c787$
8b507f37ab111c40c6480744/userdata/shm
overlay 3.2M 581K 2.6M 19% /var/lib/containers/storage/overlay/07932579f56d5af5753f370ea96235129ce2c92edb00e07115$
d53c68e6dde15/merged
shm 960K 1 960K 1% /var/lib/containers/storage/overlay-containers/7cd90271f81f6ee84486c5d0fae27fade8b9e8f$
416129ae4a94323f3fb0a188/userdata/shm
overlay 3.2M 581K 2.6M 19% /var/lib/containers/storage/overlay/a8c46521b6d0cfbbf6252931cd5a36798f638b532cc84a64fa$
bfa85b115cf2a/merged
shm 960K 10 960K 1% /var/lib/containers/storage/overlay-containers/4be11f7879f384bbb50a228b07b48e9460db781$
cfb370e8671e23f5c4b83576/userdata/shm
overlay 3.2M 581K 2.6M 19% /var/lib/containers/storage/overlay/2b8b2ffae7f9f4f58e262df9601c738d4298511a3c4241839e$
71482436039ae/merged
**Describe the results you expected:**
Container to start.
**Additional information you deem important (e.g. issue happens only occasionally):**
I have some containers mounted to an NFS share, in case that makes a difference.
**Output of `podman version`:**
$ podman version
Version: 1.5.1
RemoteAPI Version: 1
Go Version: go1.11.12
OS/Arch: linux/amd64
**Output of `podman info --debug`:**
debug:
compiler: gc
git commit: ""
go version: go1.11.12
podman version: 1.5.1
host:
BuildahVersion: 1.10.1
Conmon:
package: podman-1.5.1-3.fc29.x86_64
path: /usr/libexec/podman/conmon
version: 'conmon version 2.0.0, commit: fa55639b725e7626b28dbd43de8e9546f7411226-dirty'
Distribution:
distribution: fedora
version: "29"
MemFree: 636534784
MemTotal: 8052482048
OCIRuntime:
package: runc-1.0.0-93.dev.gitb9b6cc6.fc29.x86_64
path: /usr/bin/runc
version: |-
runc version 1.0.0-rc8+dev
commit: 82f4855a8421018c9f4d74fbcf2da7f8ad1e11fa
spec: 1.0.1-dev
SwapFree: 8054894592
SwapTotal: 8199860224
arch: amd64
cpus: 4
eventlogger: journald
hostname: ohuska.localdomain
kernel: 5.1.8-200.fc29.x86_64
os: linux
rootless: false
uptime: 1142h 55m 33.37s (Approximately 47.58 days)
registries:
blocked: null
insecure: null
search:
Additional environment details (AWS, VirtualBox, physical, etc.):
And BTW, there are containers running perfectly fine while this is happening; it only affects spawning a new one.
That's a lock exhaustion error - you've run out of locks for containers and
pods. I don't suppose you're running 2048 of those?
If not - try a 'podman system renumber' (needs to shut down all containers)
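For reference, a minimal sketch of that sequence (assuming all containers can be stopped briefly):
```
sudo podman stop --all        # renumber needs all containers to be shut down first
sudo podman system renumber   # reassign lock indices from the database
# then restart the containers / systemd units as needed
```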
Thanks @mheon, it freed up some space from somewhere. No, I don't run more than about 10 containers. However, it might have an effect that I do have systemd definitions for them, and if something doesn't come up, systemd always does some number of retries. But still, likely not that many.
The systemd unit file has:
Restart=on-failure
Anyhow, now the pod started successfully, so unfortunately it sounds like the locks are leaking.
Was poking around in here for unrelated reasons and I think I found it.
Fix incoming.
Nevermind - looks like it wasn't related.
I tried reproducing for around an hour over here, and I haven't been able to produce a lock leak.
Podman 1.1.x to 1.3.x had serious lock leaks around both pods and container restarts, and I wonder if that is what started the situation - and then the locks remained allocated after upgrading...
This issue had no activity for 30 days. In the absence of activity or the "do-not-close" label, the issue will be automatically closed within 7 days.
Since we have had no feedback, I will go with @mheon's assumption and close this. If it is still reproducible, please reopen.
I'm experiencing the same issue on RHEL 8.1 (podman-1.4.2-stable2) when running rootless containers with a systemd unit that has the parameter Restart=on-failure.
When hitting issue #3906, systemd tries to restart the container in a loop and then hits the lock allocation issue. The command podman system renumber fixes it, but the fact that I have to kill all other containers is a bit annoying.
While the removal issue has been dealt with in subsequent upstream versions, you really need to investigate how your systemd unit is written; a properly-written unit will not have this issue. Are you using KillMode = none?
Yes, I do use KillMode = none. Actually, I implemented the unit as described in this Red Hat blog post [1].
Is there a way to list these locks?
[1] https://www.redhat.com/sysadmin/podman-shareable-systemd-services
Not presently; I should look into adding them to podman inspect.
@vrothberg PTAL - systemd unit file issues seem to be resulting in lock leaks. We really need to track this one down.
Without a reproducer, I can't do much. Please share the systemd.service (if possible).
When hitting issue #3906, systemd tries to restart the container in a loop and then hits the lock allocation issue
To me this sounds like systemd is stuck in a loop trying to successfully run ExecStart, which for unknown reasons doesn't work, and thereby exceeds the container limit.
Here is my unit, which actually diverged a bit from the example; for instance, we used named containers.
I was thinking about a workaround to run before starting the container, something like:
CONTAINER_NAME=$1
CONTAINER_STORAGE_ID=$(cat ~/.local/share/containers/storage/overlay-containers/containers.json | jq -r -e --arg CONTAINERNAME "$CONTAINER_NAME" '.[] | select(.names[]==$CONTAINERNAME) | .id')
if [ $? -eq 0 ]; then
  echo "Found old container storage, deleting it"
  podman rm --storage -f "$CONTAINER_STORAGE_ID"
fi
The systemd unit file:
```
[Unit]
Description=testserver container
[Service]
Restart=on-failure
Type=forking
TimeoutStartSec=5m
ExecStartPre=/usr/bin/rm -f /%T/%n-pid /%T/%n-cid
ExecStart=/usr/bin/podman run \
    --conmon-pidfile=/%T/%n-pid \
    --cidfile=/%T/%n-cid \
    --name %N \
    --rm \
    --detach \
    alpine:latest top
ExecStop=/usr/bin/sh -c "/usr/bin/podman rm -f `cat /%T/%n-cid`"
RestartSec=30
KillMode=none
PIDFile=/%T/%n-pid
[Install]
WantedBy=default.target
```
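For clarity, a rough runtime expansion of the ExecStop line above, assuming the unit is named testserver.service and %T resolves to /tmp (both are assumptions, not taken from the unit itself):
```
# what ExecStop roughly resolves to at runtime (specifier values assumed)
/usr/bin/podman rm -f "$(cat /tmp/testserver.service-cid)"
```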
Thanks a lot for sharing!
Can you try without --rm \ in the ExecStart?
I have tested starting/stopping 100 times; it looks to work, not even hitting the orphaned storage from issue #3906.
I used this snippet to test:
#!/bin/bash
for i in {1..100};do
echo "Starting container"
systemctl --user start helloworld-container
echo "Going to sleep 2s"
sleep 2
echo "Stoping the container"
systemctl --user stop helloworld-container
done
Example of journald output
Feb 18 17:15:31 something.example.com systemd[1998]: Started Helloworld container.
Feb 18 17:15:33 something.example.com systemd[1998]: Stopping Helloworld container...
Feb 18 17:15:33 something.example.com podman[19908]: 2020-02-18 17:15:33.877068419 +0100 CET m=+0.165966008 container died 762ab88b4dece9403518859f49b2951dd3f93a3ad6fd730927b11fb2b9637d06 (image=docker.io/jwilder/whoami:latest, name=helloworld-container)
Feb 18 17:15:33 something.example.com podman[19908]: 2020-02-18 17:15:33.878456856 +0100 CET m=+0.167354435 container stop 762ab88b4dece9403518859f49b2951dd3f93a3ad6fd730927b11fb2b9637d06 (image=docker.io/jwilder/whoami:latest, name=helloworld-container)
Feb 18 17:15:33 something.example.com podman[19908]: 2020-02-18 17:15:33.920965811 +0100 CET m=+0.209863434 container remove 762ab88b4dece9403518859f49b2951dd3f93a3ad6fd730927b11fb2b9637d06 (image=docker.io/jwilder/whoami:latest, name=helloworld-container)
Feb 18 17:15:33 something.example.com sh[19908]: 762ab88b4dece9403518859f49b2951dd3f93a3ad6fd730927b11fb2b9637d06
Feb 18 17:15:33 something.example.com systemd[1998]: Stopped Helloworld container.
Feb 18 17:15:33 something.example.com systemd[1998]: helloworld-container.service: Found left-over process 19875 (conmon) in control group while starting unit. Ignoring.
Feb 18 17:15:33 something.example.com systemd[1998]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Feb 18 17:15:33 something.example.com systemd[1998]: helloworld-container.service: Found left-over process 19926 (podman) in control group while starting unit. Ignoring.
Feb 18 17:15:33 something.example.com systemd[1998]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
I'm extending the test to more start/stop cycles (target: 500).
I have tested it 2500 times and it seems to work like a charm!
Thanks for the hints!
@mheon
Wonderful, thanks for checking, @pburgisser-c2c!
@mheon, looks like we're running into a race with run --rm and rm?
If we're leaking locks, one of them has to be dying midway through removal, but I'm not sure as to how this is happening. I'll take a look if I get a chance.
I seem to have bumped into the same error and cannot figure out a way to solve it. I even tried podman system renumber, but to no avail. For the record:
Output of podman version:
Version: 2.0.4
API Version: 1
Go Version: go1.14
Built: Thu Jan 1 00:00:00 1970
Output of podman info --debug:
host:
arch: amd64
buildahVersion: 1.15.0
cgroupVersion: v1
conmon:
package: 'conmon: /usr/libexec/podman/conmon'
path: /usr/libexec/podman/conmon
version: 'conmon version 2.0.20, commit: '
cpus: 2
distribution:
distribution: debian
version: "10"
eventLogger: file
hostname: ip-10-0-1-22
idMappings:
gidmap: null
uidmap: null
kernel: 4.19.0-8-cloud-amd64
linkmode: dynamic
memFree: 131837952
memTotal: 1008451584
ociRuntime:
name: runc
package: 'containerd.io: /usr/bin/runc'
path: /usr/bin/runc
version: |-
runc version 1.0.0-rc10
commit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
spec: 1.0.1-dev
os: linux
remoteSocket:
path: /run/podman/podman.sock
rootless: false
slirp4netns:
executable: ""
package: ""
version: ""
swapFree: 0
swapTotal: 0
uptime: 35m 36.16s
registries:
search:
- docker.io
- quay.io
store:
configFile: /etc/containers/storage.conf
containerStore:
number: 0
paused: 0
running: 0
stopped: 0
graphDriverName: overlay
graphOptions: {}
graphRoot: /var/lib/containers/storage
graphStatus:
Backing Filesystem: extfs
Native Overlay Diff: "true"
Supports d_type: "true"
Using metacopy: "false"
imageStore:
number: 3
runRoot: /var/run/containers/storage
volumePath: /var/lib/containers/storage/volumes
systemd service file:
[Unit]
Description=Test Podman Container
After=network.target
[Service]
Type=simple
TimeoutStartSec=10
ExecStart=/usr/bin/podman run --name test-container \
some-repopath/image-name:"devel"
ExecStop=-/usr/bin/podman container stop -t 10 test-container
ExecStopPost=-/usr/bin/podman container rm test-container
Restart=on-failure
RestartSec=25
KillMode=none
PIDFile=/%T/%n-pid
[Install]
WantedBy=multi-user.target
For the record, df -h and df -i show that everything is fine, and I can download and start a container using docker instead of podman.
I'm assuming you do not have a large number of containers/pods present?
Does the problem go away after a reboot?
I'm assuming you do not have a large number of containers/pods present?
Nope. Only this one, which btw was working fine (as a systemd-handled service) some time ago.
Does the problem go away after a reboot?
Unfortunately no. At first I found this issue and tried the "instructions" posted here, and when that didn't fix it I thought "ok, let's solve this temporarily with a reboot", only to realize that the issue persisted after the reboot :/
@mheon I did another test trying to run a minimal hello-world and an alpine container. What's interesting is that I could not have both of them running at the same time. I had to stop one of them every time to start the other:
admin@proxy1:~$ sudo podman run hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
[etc etc]
admin@proxy1:~$ sudo podman run alpine
Error: error allocating lock for new container: no space left on device
admin@proxy1:~$ sudo podman container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
13d0823bf150 docker.io/library/hello-world:latest /hello 9 seconds ago Exited (0) 8 seconds ago modest_gates
admin@proxy1:~$ sudo podman stop 13d0823bf150
13d0823bf1508c9d3d757e616feff7652ee4b266709f081409dab6dd1db0c551
admin@proxy1:~$ sudo podman run alpine
Error: error allocating lock for new container: no space left on device
admin@proxy1:~$ sudo podman rm 13d0823bf150
13d0823bf1508c9d3d757e616feff7652ee4b266709f081409dab6dd1db0c551
admin@proxy1:~$ sudo podman run alpine
# runs ok
But this is not the case for the container I'm interested in. The only difference (don't know if it matters, just mentioning it) is that my container service has mapped volumes.
Can you provide the output of a successful podman run command with --log-level=debug added?
Sure
Here's a successful run output:
admin@proxy1:~$ sudo podman --log-level=debug run alpine
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --log-level=debug run alpine)
DEBU[0000] Ignoring libpod.conf EventsLogger setting "/etc/containers/containers.conf". Use "journald" if you want to change this setting and remove libpod.conf files.
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf"
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{{[] [] containers-default-0.14.6 [] host enabled [CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] [] [nproc=32768:32768] [] [] [] false [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] false false false private k8s-file -1 bridge false 2048 private /usr/share/containers/seccomp.json 65536k private host 65536} {false systemd [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] ctrl-p,ctrl-q true /var/run/libpod/events/events.log file [/usr/share/containers/oci/hooks.d] docker:// /pause k8s.gcr.io/pause:3.2 /usr/libexec/podman/catatonit shm false 2048 runc map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] missing false [] [crun runc] [crun] [kata kata-runtime kata-qemu kata-fc] {false false false false false false} /etc/containers/policy.json false 3 /var/lib/containers/storage/libpod 10 /var/run/libpod /var/lib/containers/storage/volumes} {[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman /etc/cni/net.d/}}
DEBU[0000] Reading configuration file "/etc/containers/containers.conf"
DEBU[0000] Merged system config "/etc/containers/containers.conf": &{{[] [] containers-default-0.14.6 [] host enabled [CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] [] [nproc=32768:32768] [] [] [] false [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] false false false private k8s-file -1 bridge false 2048 private /usr/share/containers/seccomp.json 65536k private host 65536} {false systemd [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] ctrl-p,ctrl-q true /var/run/libpod/events/events.log file [/usr/share/containers/oci/hooks.d] docker:// /pause k8s.gcr.io/pause:3.2 /usr/libexec/podman/catatonit shm false 2048 runc map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] missing false [] [crun runc] [crun] [kata kata-runtime kata-qemu kata-fc] {false false false false false false} /etc/containers/policy.json false 3 /var/lib/containers/storage/libpod 10 /var/run/libpod /var/lib/containers/storage/volumes} {[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman /etc/cni/net.d/}}
DEBU[0000] Using conmon: "/usr/libexec/podman/conmon"
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /var/run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /var/run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] cached value indicated that metacopy is not being used
DEBU[0000] cached value indicated that native-diff is usable
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
INFO[0000] [graphdriver] using prior storage driver: overlay
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/bin/crun"
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument
DEBU[0000] using runtime "/usr/bin/runc"
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
WARN[0000] Default CNI network name podman is unchangeable
INFO[0000] Setting parallel job count to 7
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]docker.io/library/alpine:latest"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]docker.io/library/alpine:latest"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@a24bb4013296f61e89ba57005a7b3e52274d8edd3ae2077d04395f806b63d83e"
DEBU[0000] exporting opaque data as blob "sha256:a24bb4013296f61e89ba57005a7b3e52274d8edd3ae2077d04395f806b63d83e"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]docker.io/library/alpine:latest"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@a24bb4013296f61e89ba57005a7b3e52274d8edd3ae2077d04395f806b63d83e"
DEBU[0000] exporting opaque data as blob "sha256:a24bb4013296f61e89ba57005a7b3e52274d8edd3ae2077d04395f806b63d83e"
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0000] Allocated lock 0 for container 834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@a24bb4013296f61e89ba57005a7b3e52274d8edd3ae2077d04395f806b63d83e"
DEBU[0000] exporting opaque data as blob "sha256:a24bb4013296f61e89ba57005a7b3e52274d8edd3ae2077d04395f806b63d83e"
DEBU[0000] created container "834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a"
DEBU[0000] container "834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a" has work directory "/var/lib/containers/storage/overlay-containers/834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a/userdata"
DEBU[0000] container "834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a" has run directory "/var/run/containers/storage/overlay-containers/834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a/userdata"
DEBU[0000] container "834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a" has CgroupParent "machine.slice/libpod-834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a.scope"
DEBU[0000] Not attaching to stdin
DEBU[0000] overlay: mount_data=lowerdir=/var/lib/containers/storage/overlay/l/S5JU44YBJKB3H6ORLTXBMEDOI7,upperdir=/var/lib/containers/storage/overlay/02224d192d6348bd65daa3793b8bb0bff4949e0f1252f2cc681637515a13876e/diff,workdir=/var/lib/containers/storage/overlay/02224d192d6348bd65daa3793b8bb0bff4949e0f1252f2cc681637515a13876e/work
DEBU[0000] mounted container "834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a" at "/var/lib/containers/storage/overlay/02224d192d6348bd65daa3793b8bb0bff4949e0f1252f2cc681637515a13876e/merged"
DEBU[0000] Created root filesystem for container 834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a at /var/lib/containers/storage/overlay/02224d192d6348bd65daa3793b8bb0bff4949e0f1252f2cc681637515a13876e/merged
DEBU[0000] Made network namespace at /var/run/netns/cni-e2b827dd-8967-562e-c941-f2e605cbe518 for container 834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a
INFO[0000] About to add CNI network lo (type=loopback)
INFO[0000] Got pod network &{Name:flamboyant_easley Namespace:flamboyant_easley ID:834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a NetNS:/var/run/netns/cni-e2b827dd-8967-562e-c941-f2e605cbe518 Networks:[] RuntimeConfig:map[podman:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}]}
INFO[0000] About to add CNI network podman (type=bridge)
DEBU[0000] [0] CNI result: &{0.4.0 [{Name:cni-podman0 Mac:ee:82:32:e8:94:6c Sandbox:} {Name:vethcd1d3d97 Mac:ae:34:49:e4:e5:1b Sandbox:} {Name:eth0 Mac:1e:7a:b3:a0:af:97 Sandbox:/var/run/netns/cni-e2b827dd-8967-562e-c941-f2e605cbe518}] [{Version:4 Interface:0xc0002942f8 Address:{IP:10.88.8.22 Mask:ffff0000} Gateway:10.88.0.1}] [{Dst:{IP:0.0.0.0 Mask:00000000} GW:<nil>}] {[] [] []}}
INFO[0000] AppAmor profile "containers-default-0.14.6" is already loaded
DEBU[0000] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0000] Setting CGroups for container 834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a to machine.slice:libpod:834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a
DEBU[0000] reading hooks from /usr/share/containers/oci/hooks.d
DEBU[0000] Created OCI spec for container 834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a at /var/lib/containers/storage/overlay-containers/834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a/userdata/config.json
DEBU[0000] /usr/libexec/podman/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/libexec/podman/conmon args="[--api-version 1 -c 834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a -u 834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a/userdata -p /var/run/containers/storage/overlay-containers/834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a/userdata/pidfile -n flamboyant_easley --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket -s -l k8s-file:/var/lib/containers/storage/overlay-containers/834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a/userdata/ctr.log --log-level debug --syslog --conmon-pidfile /var/run/containers/storage/overlay-containers/834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /var/lib/containers/storage --exit-command-arg --runroot --exit-command-arg /var/run/containers/storage --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg systemd --exit-command-arg --tmpdir --exit-command-arg /var/run/libpod --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg --syslog --exit-command-arg true --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a]"
INFO[0000] Running conmon under slice machine.slice and unitName libpod-conmon-834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a.scope
DEBU[0000] Received: 21198
INFO[0000] Got Conmon PID as 21186
DEBU[0000] Created container 834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a in OCI runtime
DEBU[0000] Attaching to container 834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a
DEBU[0000] connecting to socket /var/run/libpod/socket/834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a/attach
DEBU[0000] Starting container 834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a with command [/bin/sh]
DEBU[0000] Started container 834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a
DEBU[0000] Enabling signal proxying
DEBU[0000] Called run.PersistentPostRunE(podman --log-level=debug run alpine)
And here is the output from trying to launch a hello-world container after that:
admin@proxy1:~$ sudo podman --log-level=debug run hello-world
INFO[0000] podman filtering at log level debug
DEBU[0000] Called run.PersistentPreRunE(podman --log-level=debug run hello-world)
DEBU[0000] Ignoring libpod.conf EventsLogger setting "/etc/containers/containers.conf". Use "journald" if you want to change this setting and remove libpod.conf files.
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf"
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{{[] [] containers-default-0.14.6 [] host enabled [CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] [] [nproc=32768:32768] [] [] [] false [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] false false false private k8s-file -1 bridge false 2048 private /usr/share/containers/seccomp.json 65536k private host 65536} {false systemd [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] ctrl-p,ctrl-q true /var/run/libpod/events/events.log file [/usr/share/containers/oci/hooks.d] docker:// /pause k8s.gcr.io/pause:3.2 /usr/libexec/podman/catatonit shm false 2048 runc map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] missing false [] [crun runc] [crun] [kata kata-runtime kata-qemu kata-fc] {false false false false false false} /etc/containers/policy.json false 3 /var/lib/containers/storage/libpod 10 /var/run/libpod /var/lib/containers/storage/volumes} {[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman /etc/cni/net.d/}}
DEBU[0000] Reading configuration file "/etc/containers/containers.conf"
DEBU[0000] Merged system config "/etc/containers/containers.conf": &{{[] [] containers-default-0.14.6 [] host enabled [CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] [] [nproc=32768:32768] [] [] [] false [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] false false false private k8s-file -1 bridge false 2048 private /usr/share/containers/seccomp.json 65536k private host 65536} {false systemd [PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] ctrl-p,ctrl-q true /var/run/libpod/events/events.log file [/usr/share/containers/oci/hooks.d] docker:// /pause k8s.gcr.io/pause:3.2 /usr/libexec/podman/catatonit shm false 2048 runc map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] missing false [] [crun runc] [crun] [kata kata-runtime kata-qemu kata-fc] {false false false false false false} /etc/containers/policy.json false 3 /var/lib/containers/storage/libpod 10 /var/run/libpod /var/lib/containers/storage/volumes} {[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] podman /etc/cni/net.d/}}
DEBU[0000] Using conmon: "/usr/libexec/podman/conmon"
DEBU[0000] Initializing boltdb state at /var/lib/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver
DEBU[0000] Using graph root /var/lib/containers/storage
DEBU[0000] Using run root /var/run/containers/storage
DEBU[0000] Using static dir /var/lib/containers/storage/libpod
DEBU[0000] Using tmp dir /var/run/libpod
DEBU[0000] Using volume path /var/lib/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] cached value indicated that overlay is supported
DEBU[0000] cached value indicated that metacopy is not being used
DEBU[0000] cached value indicated that native-diff is usable
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=true, usingMetacopy=false
INFO[0000] [graphdriver] using prior storage driver: overlay
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/bin/runc"
DEBU[0000] using runtime "/usr/bin/crun"
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument
INFO[0000] Found CNI network podman (type=bridge) at /etc/cni/net.d/87-podman-bridge.conflist
WARN[0000] Default CNI network name podman is unchangeable
INFO[0000] Setting parallel job count to 7
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]docker.io/library/hello-world:latest"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]docker.io/library/hello-world:latest"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b"
DEBU[0000] exporting opaque data as blob "sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]docker.io/library/hello-world:latest"
DEBU[0000] parsed reference into "[overlay@/var/lib/containers/storage+/var/run/containers/storage]@bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b"
DEBU[0000] exporting opaque data as blob "sha256:bf756fb1ae65adf866bd8c456593cd24beb6a0a061dedf42b26a993176745f6b"
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
Error: error allocating lock for new container: no space left on device
DEBU[0000] Allocated lock 0 for container 834521ee676b5db827b72ff0dbb705832ec24bf4d9441343511be7bba70ce50a
Alright. So we are on lock index 0. Page size for locks is 32/64 (I forget which), which means we have an absolute minimum of 32 locks available on the system - so this isn't a conventional exhaustion issue.
Do you have a containers.conf or libpod.conf in /usr/share/containers/ or /etc/containers? If so, can you provide them?
Do you have a containers.conf or libpod.conf in /usr/share/containers/ or /etc/containers? If so, can you provide them?
Sure thing (/usr/share/containers/)! For the record, I didn't make any changes to that file (at least none that I'm aware of):
# The containers configuration file specifies all of the available configuration
# command-line options/flags for container engine tools like Podman & Buildah,
# but in a TOML format that can be easily modified and versioned.
# Please refer to containers.conf(5) for details of all configuration options.
# Not all container engines implement all of the options.
# All of the options have hard coded defaults and these options will override
# the built in defaults. Users can then override these options via the command
# line. Container engines will read containers.conf files in up to three
# locations in the following order:
# 1. /usr/share/containers/containers.conf
# 2. /etc/containers/containers.conf
# 3. $HOME/.config/containers/containers.conf (Rootless containers ONLY)
# Items specified in the latter containers.conf, if they exist, override the
# previous containers.conf settings, or the default settings.
[containers]
# List of devices. Specified as
# "<device-on-host>:<device-on-container>:<permissions>", for example:
# "/dev/sdc:/dev/xvdc:rwm".
# If it is empty or commented out, only the default devices will be used
#
# devices = []
# List of volumes. Specified as
# "<directory-on-host>:<directory-in-container>:<options>", for example:
# "/db:/var/lib/db:ro".
# If it is empty or commented out, no volumes will be added
#
# volumes = []
# Used to change the name of the default AppArmor profile of container engine.
#
# apparmor_profile = "container-default"
# List of annotation. Specified as
# "key=value"
# If it is empty or commented out, no annotations will be added
#
# annotations = []
# Default way to to create a cgroup namespace for the container
# Options are:
# `private` Create private Cgroup Namespace for the container.
# `host` Share host Cgroup Namespace with the container.
#
# cgroupns = "private"
# Control container cgroup configuration
# Determines whether the container will create CGroups.
# Options are:
# `enabled` Enable cgroup support within container
# `disabled` Disable cgroup support, will inherit cgroups from parent
# `no-conmon` Container engine runs run without conmon
#
# cgroups = "enabled"
# List of default capabilities for containers. If it is empty or commented out,
# the default capabilities defined in the container engine will be added.
#
# default_capabilities = [
# "AUDIT_WRITE",
# "CHOWN",
# "DAC_OVERRIDE",
# "FOWNER",
# "FSETID",
# "KILL",
# "MKNOD",
# "NET_BIND_SERVICE",
# "NET_RAW",
# "SETGID",
# "SETPCAP",
# "SETUID",
# "SYS_CHROOT",
# ]
# A list of sysctls to be set in containers by default,
# specified as "name=value",
# for example:"net.ipv4.ping_group_range = 0 1000".
#
# default_sysctls = [
# "net.ipv4.ping_group_range=0 1000",
# ]
# A list of ulimits to be set in containers by default, specified as
# "<ulimit name>=<soft limit>:<hard limit>", for example:
# "nofile=1024:2048"
# See setrlimit(2) for a list of resource names.
# Any limit not specified here will be inherited from the process launching the
# container engine.
# Ulimits has limits for non privileged container engines.
#
# default_ulimits = [
# “nofile”=”1280:2560”,
# ]
# List of default DNS options to be added to /etc/resolv.conf inside of the container.
#
# dns_options = []
# List of default DNS search domains to be added to /etc/resolv.conf inside of the container.
#
# dns_searches = []
# Set default DNS servers.
# This option can be used to override the DNS configuration passed to the
# container. The special value “none” can be specified to disable creation of
# /etc/resolv.conf in the container.
# The /etc/resolv.conf file in the image will be used without changes.
#
# dns_servers = []
# Environment variable list for the conmon process; used for passing necessary
# environment variables to conmon or the runtime.
#
# env = [
# "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
# ]
# Pass all host environment variables into the container.
#
# env_host = false
# Path to OCI hooks directories for automatically executed hooks.
#
# hooks_dir = [
# “/usr/share/containers/oci/hooks.d”,
# ]
# Default proxy environment variables passed into the container.
# The environment variables passed in include:
# http_proxy, https_proxy, ftp_proxy, no_proxy, and the upper case versions of
# these. This option is needed when host system uses a proxy but container
# should not use proxy. Proxy environment variables specified for the container
# in any other way will override the values passed from the host.
#
# http_proxy = true
# Run an init inside the container that forwards signals and reaps processes.
#
# init = false
# Container init binary, if init=true, this is the init binary to be used for containers.
#
# init_path = "/usr/libexec/podman/catatonit"
# Default way to to create an IPC namespace (POSIX SysV IPC) for the container
# Options are:
# `private` Create private IPC Namespace for the container.
# `host` Share host IPC Namespace with the container.
#
# ipcns = "private"
# Flag tells container engine to whether to use container separation using
# MAC(SELinux)labeling or not.
# Flag is ignored on label disabled systems.
#
# label = true
# Logging driver for the container. Available options: k8s-file and journald.
#
# log_driver = "k8s-file"
# Maximum size allowed for the container log file. Negative numbers indicate
# that no size limit is imposed. If positive, it must be >= 8192 to match or
# exceed conmon's read buffer. The file is truncated and re-opened so the
# limit is never exceeded.
#
# log_size_max = -1
# Default way to to create a Network namespace for the container
# Options are:
# `private` Create private Network Namespace for the container.
# `host` Share host Network Namespace with the container.
# `none` Containers do not use the network
#
# netns = "private"
# Create /etc/hosts for the container. By default, container engine manage
# /etc/hosts, automatically adding the container's own IP address.
#
# no_hosts = false
# Maximum number of processes allowed in a container.
#
# pids_limit = 2048
# Default way to to create a PID namespace for the container
# Options are:
# `private` Create private PID Namespace for the container.
# `host` Share host PID Namespace with the container.
#
# pidns = "private"
# Path to the seccomp.json profile which is used as the default seccomp profile
# for the runtime.
#
# seccomp_profile = "/usr/share/containers/seccomp.json"
# Size of /dev/shm. Specified as <number><unit>.
# Unit is optional, values:
# b (bytes), k (kilobytes), m (megabytes), or g (gigabytes).
# If the unit is omitted, the system uses bytes.
#
# shm_size = "65536k"
# Default way to to create a UTS namespace for the container
# Options are:
# `private` Create private UTS Namespace for the container.
# `host` Share host UTS Namespace with the container.
#
# utsns = "private"
# Default way to to create a User namespace for the container
# Options are:
# `auto` Create unique User Namespace for the container.
# `host` Share host User Namespace with the container.
#
# userns = "host"
# Number of UIDs to allocate for the automatic container creation.
# UIDs are allocated from the “container” UIDs listed in
# /etc/subuid & /etc/subgid
#
# userns_size=65536
# The network table contains settings pertaining to the management of
# CNI plugins.
[network]
# Path to directory where CNI plugin binaries are located.
#
# cni_plugin_dirs = ["/usr/libexec/cni"]
# Path to the directory where CNI configuration files are located.
#
# network_config_dir = "/etc/cni/net.d/"
[engine]
# Cgroup management implementation used for the runtime.
# Valid options “systemd” or “cgroupfs”
#
# cgroup_manager = "systemd"
# Environment variables to pass into conmon
#
# conmon_env_vars = [
# "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
# ]
# Paths to look for the conmon container manager binary
#
# conmon_path = [
# "/usr/libexec/podman/conmon",
# "/usr/local/libexec/podman/conmon",
# "/usr/local/lib/podman/conmon",
# "/usr/bin/conmon",
# "/usr/sbin/conmon",
# "/usr/local/bin/conmon",
# "/usr/local/sbin/conmon"
# ]
# Specify the keys sequence used to detach a container.
# Format is a single character [a-Z] or a comma separated sequence of
# `ctrl-<value>`, where `<value>` is one of:
# `a-z`, `@`, `^`, `[`, `\`, `]`, `^` or `_`
#
# detach_keys = "ctrl-p,ctrl-q"
# Determines whether engine will reserve ports on the host when they are
# forwarded to containers. When enabled, when ports are forwarded to containers,
# ports are held open by as long as the container is running, ensuring that
# they cannot be reused by other programs on the host. However, this can cause
# significant memory usage if a container has many ports forwarded to it.
# Disabling this can save memory.
#
# enable_port_reservation = true
# Selects which logging mechanism to use for container engine events.
# Valid values are `journald`, `file` and `none`.
#
# events_logger = "journald"
# Default transport method for pulling and pushing for images
#
# image_default_transport = "docker://"
# Default command to run the infra container
#
# infra_command = "/pause"
# Infra (pause) container image name for pod infra containers. When running a
# pod, we start a `pause` process in a container to hold open the namespaces
# associated with the pod. This container does nothing other then sleep,
# reserving the pods resources for the lifetime of the pod.
#
# infra_image = "k8s.gcr.io/pause:3.2"
# Specify the locking mechanism to use; valid values are "shm" and "file".
# Change the default only if you are sure of what you are doing, in general
# "file" is useful only on platforms where cgo is not available for using the
# faster "shm" lock type. You may need to run "podman system renumber" after
# you change the lock type.
#
# lock_type** = "shm"
# Default engine namespace
# If engine is joined to a namespace, it will see only containers and pods
# that were created in the same namespace, and will create new containers and
# pods in that namespace.
# The default namespace is "", which corresponds to no namespace. When no
# namespace is set, all containers and pods are visible.
#
# namespace = ""
# Whether to use chroot instead of pivot_root in the runtime
#
# no_pivot_root = false
# Number of locks available for containers and pods.
# If this is changed, a lock renumber must be performed (e.g. with the
# 'podman system renumber' command).
#
# num_locks = 2048
# Whether to pull new image before running a container
# pull_policy = "missing"
# Directory for persistent engine files (database, etc)
# By default, this will be configured relative to where the containers/storage
# stores containers
# Uncomment to change location from this default
#
# static_dir = "/var/lib/containers/storage/libpod"
# Directory for temporary files. Must be tmpfs (wiped after reboot)
#
# tmp_dir = "/var/run/libpod"
# Directory for libpod named volumes.
# By default, this will be configured relative to where containers/storage
# stores containers.
# Uncomment to change location from this default.
#
# volume_path = "/var/lib/containers/storage/volumes"
# Default OCI runtime
#
# runtime = "runc"
# List of the OCI runtimes that support --format=json. When json is supported
# engine will use it for reporting nicer errors.
#
# runtime_supports_json = ["crun", "runc", "kata"]
# List of the OCI runtimes that supports running containers without cgroups.
#
# runtime_supports_nocgroups = ["crun"]
# List of the OCI runtimes that supports running containers with KVM Separation.
#
# runtime_supports_kvm = ["kata"]
# Paths to look for a valid OCI runtime (runc, runv, kata, etc)
[engine.runtimes]
# runc = [
# "/usr/bin/runc",
# "/usr/sbin/runc",
# "/usr/local/bin/runc",
# "/usr/local/sbin/runc",
# "/sbin/runc",
# "/bin/runc",
# "/usr/lib/cri-o-runc/sbin/runc",
# ]
# crun = [
# "/usr/bin/crun",
# "/usr/sbin/crun",
# "/usr/local/bin/crun",
# "/usr/local/sbin/crun",
# "/sbin/crun",
# "/bin/crun",
# "/run/current-system/sw/bin/crun",
# ]
# kata = [
# "/usr/bin/kata-runtime",
# "/usr/sbin/kata-runtime",
# "/usr/local/bin/kata-runtime",
# "/usr/local/sbin/kata-runtime",
# "/sbin/kata-runtime",
# "/bin/kata-runtime",
# "/usr/bin/kata-qemu",
# "/usr/bin/kata-fc",
# ]
# Number of seconds to wait for container to exit before sending kill signal.
#stop_timeout = 10
# The [engine.runtimes] table MUST be the last entry in this file.
# (Unless another table is added)
# TOML does not provide a way to end a table other than a further table being
# defined, so every key hereafter will be part of [runtimes] and not the main
# config.
Can you copy that file into /etc/containers/, uncomment # num_locks = 2048 and change it to 4096, and run podman system renumber?
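Roughly, the sequence being asked for (paths as given in the file above; a sketch only):
```
sudo cp /usr/share/containers/containers.conf /etc/containers/containers.conf
# edit /etc/containers/containers.conf and, under [engine], set:
#   num_locks = 4096
sudo podman system renumber
```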
There is a file in /etc/containers/ which is basically a link to /usr/share/containers/containers.conf. I did change num_locks to 4096 and it "broke" podman, since any command I tried after that resulted in:
Error: failed to open 4096 locks in /libpod_lock: numerical result out of range
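If the existing SHM lock segment was created with the old num_locks value, it presumably has to be recreated before the larger value can take effect. A sketch, assuming the segment shows up as /dev/shm/libpod_lock (the /libpod_lock name from the error above) and that all containers can be stopped:
```
sudo podman stop --all
sudo rm /dev/shm/libpod_lock     # assumed location of the SHM lock segment
sudo podman system renumber      # recreates the locks with the configured num_locks
```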
I had this issue, and it turned out that, according to podman system df, I had 2047 local volumes. I did a podman volume prune, followed by a podman system renumber for good luck, because I'm basically cargo-culting here, and my container worked after that.
@jfmcbrayer Thanks! podman volume prune did the trick! I saw it deleted many many volumes.
Any ideas why this garbage was left? I wouldn't want to bump into it in the future and have to manually prune again.
I just did a test starting two containers and then stopping them using the systemd service file I posted above. I then did a podman volume prune and it deleted two volumes. This means that these two volumes were left as garbage when the containers stopped. (I also did the test manually starting -> stopping -> removing a container and the result was again leftover volumes)
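A quick way to check for this kind of volume leak (a sketch; the flags are assumed to be available in the podman versions discussed here):
```
podman system df             # counts and sizes for images, containers and local volumes
podman volume ls -q | wc -l  # rough count of volumes
podman volume prune -f       # remove volumes not referenced by any container
```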
Thanks for the tip. It seems it has gotten messed up a lot. There were tens of volumes I couldn't remove, as there are no locks for them. The output starts like this; perhaps there should be some script so those would get cleaned up as well:
008d0dc8c435779de3e230a76b1539d7812841d857cfab8a80b41a58651f3ee7
015a7b9f7bae9875eefc7cb51993dde5f55b98289eed8a1d62c3841cf0850e12
1929d7a847281ace39298dea5d1e89faa6c4e88bcf2db8fd0cf9e39948a39945
238bd5c217bf1e366c157c60b2d78313e62ffac03d6f6c71e9ba9dafc55f4681
344138878c21c1f719909345d90010d888996755d8639d8f93885ea07abaadb6
5769d5c9241d1e5095f8dc6eb562c77645f09892b58999d9a2559e8b96ce76c6
6aa470d7e673e7d53f84185615f603668f461ed7ef0b9200acd73b7835852e5e
9847c93afdb4f3fea6dbc690f7de9b852e40f99e4086aa74112d3886e345a6dc
98f5b701d584b1ed7efd5241fcef59da2c71c4d3fddbf1cfe467bc90aad6c484
ERRO[0006] "error freeing lock for volume 09976fb68e6480e65b762ef135a8269cb19f8b1a46e9d413a462c2214030f155: no such file or directory"
ERRO[0006] "error freeing lock for volume 0a4fbe508dec110beb63fd426c621c145722fb2dbb73e9bee810d6f2474d8077: no such file or directory"
ERRO[0006] "error freeing lock for volume 0ab9ea4a68f88085c11a16c3f4af740796d06728c18f518f3c2d39b6c9fc8647: no such file or directory"
ERRO[0006] "error freeing lock for volume 0be644d84fa20e5063cbdf2dc4f1157c1938af69726045a765a918f6ef73abf4: no such file or directory"
ERRO[0006] "error freeing lock for volume 0fcd8838d8ce63dc69416d1b5ba132e35985ae8cee4c1c7f29ed2a567d48cfa6: no such file or directory"
ERRO[0006] "error freeing lock for volume 110416a7b10c88a9d0b6b29348be0de8868ff0a7ace38494b641e7b334151a65: no such file or directory"
ERRO[0006] "error freeing lock for volume 119384af33e05b7c950b9b6cbc0ec24d5c1b4bd6f872ea708022d6e04599a326: no such file or directory"
@ikke-t Said volumes were removed, but Podman is warning you it was unable to reap their locks as well.
It's probably fine, but you may want to run podman system renumber to make sure all is well.
thanks, turns out that can't be done without shutting down the pods, which I can't do ATM:
podman system renumber
Error: Error shutting down container storage: A layer is mounted: layer is in use by a container
Perhaps next time I need to reboot anyhow, or something.
In order to prevent it from happening again, I checked all my rootless containers that I run with systemd (systemd files generated by podman 2.0.6) for VOLUME statements in their Dockerfiles. I made sure that I am passing a --volume for each of them to podman run.
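A sketch for checking which anonymous volumes an image declares and binding them explicitly (the image name, volume name, and mount path below are placeholders):
```
# show VOLUME declarations baked into the image
podman image inspect --format '{{ .Config.Volumes }}' registry.example.com/myapp:latest

# bind a named volume over each declared path so no anonymous volume is created
podman run --rm --name myapp -v myapp-data:/var/lib/myapp registry.example.com/myapp:latest
```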
I was already using -v in my systemd files for the podman service (although I skipped mentioning them in my example above) and I still had the leftover volumes issue.