Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
I installed a fresh CentOS 7 VM and tried to install podman from the default repo, the stable openSUSE binaries, and the testing ones, but as soon as I install slirp4netns for rootless networking, the containers fail to start.
I'm ultimately trying to install podman without root access (given that user namespaces are enabled), but even a "normal" install doesn't work.
Steps to reproduce the issue:
Start from a clean CentOS 7 install
Run the following to enable user namespaces (a quick sanity check for these settings is sketched below, after the steps)
sudo -i
yum update
grubby --args="namespace.unpriv_enable=1 user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
echo "user.max_user_namespaces=15076" >> /etc/sysctl.conf
echo snapstromegon:100000:65535 > /etc/subuid
echo snapstromegon:100000:65535 > /etc/subgid
reboot
sudo yum install podman
reboot
(the install step can be changed as described in the install guide to install version 1.8.0 or 1.8.1 instead of 1.4.4 from the CentOS packages)
podman run -ir alpine
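As a sanity check after the reboot, the user-namespace settings can be verified roughly like this (a minimal sketch; the /proc paths are the standard kernel interfaces, and podman unshare is assumed to be available in the installed podman version):

# should print 15076, the value written to /etc/sysctl.conf above
cat /proc/sys/user/max_user_namespaces
# should show snapstromegon:100000:65535 in both files
grep snapstromegon /etc/subuid /etc/subgid
# should show the rootless UID mapping (0 -> 1000, then 1 -> 100000 for 65535 IDs)
podman unshare cat /proc/self/uid_map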
Describe the results you received:
The container does not start, and the following output is given:
Error: /home/snapstromegon/podman/usr/bin/slirp4netns failed: "sent tapfd=7 for tap0\nWARNING: Support for sandboxing is experimental\nreceived tapfd=7\ncannot mount tmpfs on /tmp\ncreate_sandbox failed\ndo_slirp is exiting\ndo_slirp failed\nparent failed\nWARNING: Support for sandboxing is experimental\nStarting slirp\n* MTU: 65520\n* Network: 10.0.2.0\n* Netmask: 255.255.255.0\n* Gateway: 10.0.2.2\n* DNS: 10.0.2.3\n* Recommended IP: 10.0.2.100\n"
Formatted:
Error: /home/snapstromegon/podman/usr/bin/slirp4netns failed: "
sent tapfd=7 for tap0
WARNING: Support for sandboxing is experimental
received tapfd=7
cannot mount tmpfs on /tmp
create_sandbox failed
do_slirp is exiting
do_slirp failed
parent failed
WARNING: Support for sandboxing is experimental
Starting slirp
* MTU: 65520
* Network: 10.0.2.0
* Netmask: 255.255.255.0
* Gateway: 10.0.2.2
* DNS: 10.0.2.3
* Recommended IP: 10.0.2.100
"
Describe the results you expected:
I expected the container to start normally, as it does without slirp4netns installed.
Additional information you deem important (e.g. issue happens only occasionally):
I ran the same command with --log-level=debug:
[snapstromegon@centos7-test yum.repos.d]$ podman run --log-level debug alpine
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1000` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
DEBU[0000] Reading configuration file "/home/snapstromegon/.config/containers/libpod.conf"
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1000` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs and --events-backend=file
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/snapstromegon/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/snapstromegon/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000
DEBU[0000] Using static dir /home/snapstromegon/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/snapstromegon/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] No store required. Not opening container store.
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/bin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] Failed to add podman to systemd sandbox cgroup: exec: "dbus-launch": executable file not found in $PATH
INFO[0000] running as rootless
DEBU[0000] Reading configuration file "/home/snapstromegon/.config/containers/libpod.conf"
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1000` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs and --events-backend=file
DEBU[0000] Using conmon: "/usr/bin/conmon"
DEBU[0000] Initializing boltdb state at /home/snapstromegon/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/snapstromegon/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000
DEBU[0000] Using static dir /home/snapstromegon/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/snapstromegon/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "vfs"
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/bin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] parsed reference into "[vfs@/home/snapstromegon/.local/share/containers/storage+/run/user/1000]docker.io/library/alpine:latest"
DEBU[0000] parsed reference into "[vfs@/home/snapstromegon/.local/share/containers/storage+/run/user/1000]@e7d92cdc71feacf90708cb59182d0df1b911f8ae022d29e8e95d75ca6a99776a"
DEBU[0000] exporting opaque data as blob "sha256:e7d92cdc71feacf90708cb59182d0df1b911f8ae022d29e8e95d75ca6a99776a"
DEBU[0000] Using slirp4netns netmode
DEBU[0000] No hostname set; container's hostname will default to runtime default
DEBU[0000] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0000] created OCI spec and options for new container
DEBU[0000] Allocated lock 1 for container 019f42926c8778bd03fdaafa3b4495309d97ee2b764d34141ccb36f767e803cf
DEBU[0000] parsed reference into "[vfs@/home/snapstromegon/.local/share/containers/storage+/run/user/1000]@e7d92cdc71feacf90708cb59182d0df1b911f8ae022d29e8e95d75ca6a99776a"
DEBU[0000] exporting opaque data as blob "sha256:e7d92cdc71feacf90708cb59182d0df1b911f8ae022d29e8e95d75ca6a99776a"
DEBU[0000] created container "019f42926c8778bd03fdaafa3b4495309d97ee2b764d34141ccb36f767e803cf"
DEBU[0000] container "019f42926c8778bd03fdaafa3b4495309d97ee2b764d34141ccb36f767e803cf" has work directory "/home/snapstromegon/.local/share/containers/storage/vfs-containers/019f42926c8778bd03fdaafa3b4495309d97ee2b764d34141ccb36f767e803cf/userdata"
DEBU[0000] container "019f42926c8778bd03fdaafa3b4495309d97ee2b764d34141ccb36f767e803cf" has run directory "/run/user/1000/vfs-containers/019f42926c8778bd03fdaafa3b4495309d97ee2b764d34141ccb36f767e803cf/userdata"
DEBU[0000] New container created "019f42926c8778bd03fdaafa3b4495309d97ee2b764d34141ccb36f767e803cf"
DEBU[0000] container "019f42926c8778bd03fdaafa3b4495309d97ee2b764d34141ccb36f767e803cf" has CgroupParent "/libpod_parent/libpod-019f42926c8778bd03fdaafa3b4495309d97ee2b764d34141ccb36f767e803cf"
DEBU[0000] Not attaching to stdin
DEBU[0000] mounted container "019f42926c8778bd03fdaafa3b4495309d97ee2b764d34141ccb36f767e803cf" at "/home/snapstromegon/.local/share/containers/storage/vfs/dir/e4b92244c40c92f89b41f5b43109780e8cf23ee39f60f7f21d670e9dbeba57ed"
DEBU[0000] Created root filesystem for container 019f42926c8778bd03fdaafa3b4495309d97ee2b764d34141ccb36f767e803cf at /home/snapstromegon/.local/share/containers/storage/vfs/dir/e4b92244c40c92f89b41f5b43109780e8cf23ee39f60f7f21d670e9dbeba57ed
DEBU[0000] Made network namespace at /run/user/1000/netns/cni-9952bd67-f969-8d18-495e-fb0798f03920 for container 019f42926c8778bd03fdaafa3b4495309d97ee2b764d34141ccb36f767e803cf
DEBU[0000] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sandbox -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/cni-9952bd67-f969-8d18-495e-fb0798f03920 tap0
DEBU[0001] unmounted container "019f42926c8778bd03fdaafa3b4495309d97ee2b764d34141ccb36f767e803cf"
DEBU[0001] Tearing down network namespace at /run/user/1000/netns/cni-9952bd67-f969-8d18-495e-fb0798f03920 for container 019f42926c8778bd03fdaafa3b4495309d97ee2b764d34141ccb36f767e803cf
DEBU[0001] Cleaning up container 019f42926c8778bd03fdaafa3b4495309d97ee2b764d34141ccb36f767e803cf
DEBU[0001] Network is already cleaned up, skipping...
DEBU[0001] Container 019f42926c8778bd03fdaafa3b4495309d97ee2b764d34141ccb36f767e803cf storage is already unmounted, skipping...
DEBU[0001] ExitCode msg: "/usr/bin/slirp4netns failed: \"sent tapfd=7 for tap0\\nwarning: support for sandboxing is experimental\\nreceived tapfd=7\\ncannot mount tmpfs on /tmp\\ncreate_sandbox failed\\ndo_slirp is exiting\\ndo_slirp failed\\nparent failed\\nwarning: support for sandboxing is experimental\\nstarting slirp\\n* mtu: 65520\\n* network: 10.0.2.0\\n* netmask: 255.255.255.0\\n* gateway: 10.0.2.2\\n* dns: 10.0.2.3\\n* recommended ip: 10.0.2.100\\n\""
ERRO[0001] /usr/bin/slirp4netns failed: "sent tapfd=7 for tap0\nWARNING: Support for sandboxing is experimental\nreceived tapfd=7\ncannot mount tmpfs on /tmp\ncreate_sandbox failed\ndo_slirp is exiting\ndo_slirp failed\nparent failed\nWARNING: Support for sandboxing is experimental\nStarting slirp\n* MTU: 65520\n* Network: 10.0.2.0\n* Netmask: 255.255.255.0\n* Gateway: 10.0.2.2\n* DNS: 10.0.2.3\n* Recommended IP: 10.0.2.100\n"
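For completeness, the failing piece can presumably be exercised outside of podman as well. The flags below are copied from the slirp4netns command in the debug output above; the unshare invocation follows the usual slirp4netns usage pattern and is my assumption about how to stand up a comparable user + network namespace by hand:

# keep a process alive inside a new user + network namespace
unshare --user --map-root-user --net sleep 3600 &
NSPID=$!
# attach slirp4netns with the sandbox enabled, as podman does;
# with slirp4netns 0.4.3-beta.1 this presumably hits the same "cannot mount tmpfs on /tmp" failure
/usr/bin/slirp4netns --configure --mtu=65520 --disable-host-loopback --enable-sandbox "$NSPID" tap0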
Output of podman version:
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1000` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
podman version 1.8.1-rc3
Output of podman info --debug:
[snapstromegon@centos7-test yum.repos.d]$ podman info --debug
WARN[0000] The cgroups manager is set to systemd but there is no systemd user session available
WARN[0000] For using systemd, you may need to login using an user session
WARN[0000] Alternatively, you can enable lingering with: `loginctl enable-linger 1000` (possibly as root)
WARN[0000] Falling back to --cgroup-manager=cgroupfs
debug:
  compiler: gc
  git commit: ""
  go version: go1.13.6
  podman version: 1.8.1-rc3
host:
  BuildahVersion: 1.14.2
  CgroupVersion: v1
  Conmon:
    package: conmon-2.0.11-1.1.el7.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.11, commit: 978015bfb6c6f46617f899f07d00a29593f7b2d6'
  Distribution:
    distribution: '"centos"'
    version: "7"
  IDMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65535
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65535
  MemFree: 71688192
  MemTotal: 971239424
  OCIRuntime:
    name: runc
    package: runc-1.0.0-15.1.el7.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc10
      commit: 67b92f062188d9cb6472b428855432c9f35efcf5
      spec: 1.0.1-dev
  SwapFree: 1719652352
  SwapTotal: 1719660544
  arch: amd64
  cpus: 1
  eventlogger: file
  hostname: centos7-test
  kernel: 3.10.0-1062.12.1.el7.x86_64
  os: linux
  rootless: true
  slirp4netns:
    Executable: /usr/bin/slirp4netns
    Package: slirp4netns-0.4.3-22.1.el7.x86_64
    Version: |-
      slirp4netns version 0.4.3-beta.1
      commit: b04291ba84ca35ccc60bd009372a28f9ea7ef841
  uptime: 14m 56.97s
registries:
  search:
  - docker.io
  - registry.fedoraproject.org
  - registry.access.redhat.com
  - registry.centos.org
  - quay.io
store:
  ConfigFile: /home/snapstromegon/.config/containers/storage.conf
  ContainerStore:
    number: 2
  GraphDriverName: vfs
  GraphOptions: {}
  GraphRoot: /home/snapstromegon/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 2
  RunRoot: /run/user/1000
  VolumePath: /home/snapstromegon/.local/share/containers/storage/volumes
Package info (e.g. output of rpm -q podman or apt list podman):
[snapstromegon@centos7-test yum.repos.d]$ rpm -q podman
podman-1.8.1-6.1.el7.x86_64
Additional environment details (AWS, VirtualBox, physical, etc.):
Running on Hyper-V with CentOS 7:
[snapstromegon@centos7-test yum.repos.d]$ rpm -q centos-release
centos-release-7-7.1908.0.el7.centos.x86_64
@giuseppe @AkihiroSuda PTAL - looks like a non-zero return out of slirp, but no error message?
Should be fixed in https://github.com/rootless-containers/slirp4netns/commit/e6b31feb414766a7760a3cba453838a9170c4e97 (v0.4.3)
@AkihiroSuda I had v0.4.3-beta.1 installed. I assume this is a beta version that didn't have that fix yet. I'll try it in the next few days.
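For reference, this is how to confirm which slirp4netns build is actually installed (a quick sketch; both commands are standard on CentOS 7):

rpm -q slirp4netns
slirp4netns --version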
slirp4netns v0.4.3 (non beta) solves the problem.
Now I have problems with "Failed to add conmon to cgroupfs sandbox cgroup", but this doesn't seem to be related, so I'm closing this issue.
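(For that follow-up message, the warnings earlier in the debug log suggest enabling lingering so a systemd user session exists; whether that actually resolves the conmon/cgroup message is an assumption on my part.)

# as root, allow a persistent systemd user session for the rootless user (UID 1000 here), as the warning suggests
loginctl enable-linger 1000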
For anyone else who finds this issue via a search engine and relies solely on RPMs: until the slirp4netns RPM (I use the normal CentOS 7 kubic-libcontainers-stable repo) is updated with 0.4.3 (non-beta), I was able to use the openSUSE Tumbleweed repo's RPM without issue:
sudo rpm -Uvh http://download.opensuse.org/repositories/Virtualization:/containers/openSUSE_Tumbleweed/x86_64/slirp4netns-0.4.4-25.1.x86_64.rpm
Standard disclaimer applies, use at your own risk... it's an openSUSE RPM being installed on CentOS, so it should not be done lightly. I'm only doing this temporarily until the usual repo I use for CentOS 7 RPMs is updated with the fixed slirp4netns.
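If you go this route, one way to drop back to the regular repo package later, once the kubic repo ships a fixed slirp4netns (a sketch; yum distro-sync moves the package to whatever version the enabled repos provide, which may mean a downgrade from the cross-installed Tumbleweed build):

sudo yum clean expire-cache
sudo yum distro-sync slirp4netns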
@lsm5 As per the above, I was informed on freenode that you are one of the maintainers of the slirp4netns package in the kubic repos. Any chance you can bump the version to something more modern than 0.4.3-beta (since both 0.4.4 and 1.0.0 are out now), please?
I've been resorting to cross-installing the openSUSE Tumbleweed version of slirp4netns since it's newer, but installing SUSE RPMs on CentOS is not exactly my most brilliant idea, really :-)
@aleks-mariusz building right now. I'll enable slirp4netns in my autobuilder script, sorry about the lag.
Builds can be tracked here: https://build.opensuse.org/package/show/devel:kubic:libcontainers:stable/slirp4netns
I see there are dependency issues. Looking into it...
Should be ready now.
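(For anyone following along, once the rebuild has landed the updated package should be installable straight from the usual repo and the original failure should be gone; a sketch, assuming that repo is already configured:)

# refresh metadata and pull the rebuilt slirp4netns
sudo yum clean expire-cache
sudo yum update slirp4netns
# re-test: with a fixed slirp4netns the rootless container should start again
podman run --rm alpine true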