I have an issue since containers-common-1.2.0-9.el7.x86_64.rpm (previously containers-common-1.2.0-3.el7.x86_64.rpm was installed); my rootless podman run now ends with:
Error: container_linux.go:349: starting container process caused \"error adding seccomp rule for syscall socket: requested action matches default action of filter\": OCI runtime error
Overwriting /usr/share/containers/containers.conf with the version from containers-common-1.2.0-3.el7.x86_64.rpm makes things usable again as a first-aid workaround.
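In case it helps anyone, the first-aid overwrite can be done without downgrading the whole package. This is a sketch that assumes you still have the old RPM file somewhere locally (yum cache, a mirror snapshot, etc.); the `RPM` path is a placeholder:

```shell
# Sketch: restore the old containers.conf without downgrading the package.
# Assumes the old RPM file is available locally; point RPM at its real path.
RPM=${RPM:-containers-common-1.2.0-3.el7.x86_64.rpm}
if [ -f "$RPM" ]; then
    # Unpack only the config file from the RPM payload (no installation):
    rpm2cpio "$RPM" | cpio -idm './usr/share/containers/containers.conf'
    # Keep a backup of the current file before overwriting it:
    cp -a /usr/share/containers/containers.conf /usr/share/containers/containers.conf.bak
    cp ./usr/share/containers/containers.conf /usr/share/containers/containers.conf
else
    echo "old RPM not found: $RPM" >&2
fi
```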
Diffing gives:
--- containers.conf-containers-common-1.2.0-3 2020-11-20 12:59:44.945103000 -0500
+++ containers.conf-containers-common-1.2.0-9 2020-11-20 13:16:06.199103000 -0500
@@ -52,36 +52,35 @@
# Options are:
# `enabled` Enable cgroup support within container
# `disabled` Disable cgroup support, will inherit cgroups from parent
-# `no-conmon` Container engine runs run without conmon
+# `no-conmon` Do not create a cgroup dedicated to conmon.
#
# cgroups = "enabled"
# List of default capabilities for containers. If it is empty or commented out,
# the default capabilities defined in the container engine will be added.
#
-# default_capabilities = [
-# "AUDIT_WRITE",
-# "CHOWN",
-# "DAC_OVERRIDE",
-# "FOWNER",
-# "FSETID",
-# "KILL",
-# "MKNOD",
-# "NET_BIND_SERVICE",
-# "NET_RAW",
-# "SETGID",
-# "SETPCAP",
-# "SETUID",
-# "SYS_CHROOT",
-# ]
+default_capabilities = [
+ "CHOWN",
+ "DAC_OVERRIDE",
+ "FOWNER",
+ "FSETID",
+ "KILL",
+ "NET_BIND_SERVICE",
+ "SETFCAP",
+ "SETGID",
+ "SETPCAP",
+ "SETUID",
+ "SYS_CHROOT"
+]
+
# A list of sysctls to be set in containers by default,
# specified as "name=value",
-# for example:"net.ipv4.ping_group_range = 0 1000".
+# for example:"net.ipv4.ping_group_range = 0 0".
#
-# default_sysctls = [
-# "net.ipv4.ping_group_range=0 1000",
-# ]
+default_sysctls = [
+ "net.ipv4.ping_group_range=0 0",
+]
# A list of ulimits to be set in containers by default, specified as
# "<ulimit name>=<soft limit>:<hard limit>", for example:
@@ -243,6 +242,9 @@
# network_config_dir = "/etc/cni/net.d/"
[engine]
+# ImageBuildFormat indicates the default image format to building
+# container images. Valid values are "oci" (default) or "docker".
+# image_build_format = "oci"
# Cgroup management implementation used for the runtime.
# Valid options "systemd" or "cgroupfs"
@@ -355,6 +357,11 @@
# Whether to pull new image before running a container
# pull_policy = "missing"
+# Indicates whether the application should be running in remote mode. This flag modifies the
+# --remote option on container engines. Setting the flag to true will default
+# `podman --remote=true` for access to the remote Podman service.
+# remote = false
+
# Directory for persistent engine files (database, etc)
# By default, this will be configured relative to where the containers/storage
# stores containers
CentOS-7.9 machine with the latest packages installed, including the freshest bits from kubic/libcontainers.
@rhatdan @mheon any idea?
We've seen this before - it's an outdated runc binary without a patch to Seccomp handling.
This should be a bugzilla, not an issue.
@rhatdan May I report this somewhere else?
Since there's no info about what's needed to fix it, I don't see any option but to roll back to a working RPM. @addes6, where can I get containers-common-1.2.0-3.el7.x86_64.rpm?
I got the same error message after a recent update. Downloading and building the latest runc as described on https://github.com/opencontainers/runc#building worked for me. If you can't install it to the default location, you can use it via podman run --runtime=/path/to/runc.
Thanks Daniel for the pointer to the runc build workaround that solved this issue for you. However, I wasn't up for building runc from source, so I found that just downloading the latest binary from the releases page and replacing /usr/bin/runc also worked (I had to rename runc.amd64 to runc).
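For reference, here's that binary swap sketched as commands. It dry-runs by default so nothing gets touched by accident; the v1.0.0-rc92 tag is an assumption, so pick whatever is current on the releases page:

```shell
# Dry-run sketch of the release-binary swap; set DO_IT=1 to actually run it.
# The release tag below is an assumption: use the current tag from
# https://github.com/opencontainers/runc/releases (asset runc.amd64 on x86_64).
RUNC_URL="https://github.com/opencontainers/runc/releases/download/v1.0.0-rc92/runc.amd64"
run() { if [ "${DO_IT:-0}" = 1 ]; then "$@"; else echo "would run: $*"; fi; }
run curl -fLo /tmp/runc "$RUNC_URL"              # fetch the static binary
run chmod +x /tmp/runc
run cp -a /usr/bin/runc /usr/bin/runc.rpm-orig   # back up the packaged runc
run mv /tmp/runc /usr/bin/runc                   # the binary must be named runc
```

If you'd rather leave the package alone, drop the last two lines and point podman at the new binary per-invocation with `podman run --runtime /tmp/runc ...` instead, as Daniel described.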
But ultimately I think this is an issue with the runc RPM installed from the devel-kubic repos, runc-1.0.0-103.dev.el7.x86_64, as the output of rpm -qi on that package shows:
Name : runc
Epoch : 2
Version : 1.0.0
Release : 103.dev.el7
Architecture: x86_64
Install Date: Tue Nov 24 15:37:55 2020
Group : Unspecified
Size : 11231255
License : ASL 2.0
Signature : RSA/SHA256, Thu Nov 12 19:00:10 2020, Key ID 4d64390375060aa4
Source RPM : runc-1.0.0-103.dev.el7.src.rpm
Build Date : Thu Nov 12 19:00:06 2020
Build Host : sheep88
Relocations : (not relocatable)
Vendor : obs://build.opensuse.org/devel:kubic
URL : https://github.com/opencontainers/runc
Summary : CLI for running Open Containers
Description :
The runc command can be used to start containers which are packaged
in accordance with the Open Container Initiative's specifications,
and to manage containers running under runc.
@aleks-mariusz The only difference between containers-common-1.2.0-3.el7.x86_64.rpm and containers-common-1.2.0-9.el7.x86_64.rpm was some changes in the file containers.conf (diff posted above). I have no copy of the original RPM file, and all mirrors carry the new version. I will retest with a more recent runc.
/cc @lsm5 re: kubic-stable runc package
I am facing the same error message when trying to run a container using podman on CentOS 7, and I think @aleks-mariusz may be correct. I also have that runc version, also obtained from devel-kubic repos. I will try to build runc separately as suggested, to see if it solves my issue for now.
After some futzing around, I was able to get this working on CentOS 7 with current podman, without upgrading the runc distributed via kubic-stable:
[root@cc-runner0 ~]$ podman --version
podman version 2.2.0
[root@cc-runner0 ~]$ runc --version
runc version 1.0.0-rc10
commit: 7e7d68b149a34e4aaa43694e4f8da8ed87c6d50a
spec: 1.0.1-dev
[root@cc-runner0 ~]$ cat /etc/containers/containers.conf
[containers]
default_sysctls = []
default_capabilities = [
"AUDIT_WRITE",
"CHOWN",
"DAC_OVERRIDE",
"FOWNER",
"FSETID",
"KILL",
"MKNOD",
"NET_BIND_SERVICE",
"NET_RAW",
"SETGID",
"SETPCAP",
"SETUID",
"SYS_CHROOT",
]
I don't know exactly what that /etc/containers/containers.conf does to solve the issue, but the one @nathwill posted solved it for me. I didn't have to update podman or runc, since I was already using the ones from kubic-stable; adding that conf file alone was enough. I guess this is in line with what the OP described about overwriting that file, except I did not have the file in the first place and had to create it.
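For anyone else missing the file entirely, creating it from @nathwill's content is just a heredoc. A sketch (writes to the current directory by default so you can inspect the result first; on the real host set CONF_DIR=/etc/containers and run as root):

```shell
# Sketch: create the containers.conf posted above from scratch.
# Writes to . by default; set CONF_DIR=/etc/containers on the real host.
CONF_DIR=${CONF_DIR:-.}
mkdir -p "$CONF_DIR"
cat > "$CONF_DIR/containers.conf" <<'EOF'
[containers]
default_sysctls = []
default_capabilities = [
  "AUDIT_WRITE",
  "CHOWN",
  "DAC_OVERRIDE",
  "FOWNER",
  "FSETID",
  "KILL",
  "MKNOD",
  "NET_BIND_SERVICE",
  "NET_RAW",
  "SETGID",
  "SETPCAP",
  "SETUID",
  "SYS_CHROOT",
]
EOF
```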