Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)
/kind bug
Description
When using the Ubuntu PPA repo to install on a Debian 9 (stretch) machine, containers are unable to run due to a slirp4netns error.
slirp4netns version 0.2.1
commit: 1797e46728440e93f9229d5a34874befe00b4cab
Steps to reproduce the issue:
cat > /etc/apt/sources.list.d/podman.list <<EOF
deb http://ppa.launchpad.net/projectatomic/ppa/ubuntu bionic main
EOF
sudo apt-key adv --recv-key --keyserver keyserver.ubuntu.com 0x018ba5ad9df57a4448f0e6cf8becf1637ad8c79d
sudo apt update
sudo apt -y install podman
podman pull fedora
podman run --rm -ti fedora /bin/bash
Describe the results you received:
$ podman --log-level debug run --rm -ti fedora /bin/bash
INFO[0000] running as rootless
DEBU[0000] Initializing boltdb state at /home/maxamillion/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/maxamillion/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000
DEBU[0000] Using static dir /home/maxamillion/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/maxamillion/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] Not configuring container store
DEBU[0000] Using slirp4netns netmode
INFO[0000] running as rootless
DEBU[0000] Initializing boltdb state at /home/maxamillion/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/maxamillion/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000
DEBU[0000] Using static dir /home/maxamillion/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/maxamillion/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "vfs"
DEBU[0000] parsed reference into "[vfs@/home/maxamillion/.local/share/containers/storage+/run/user/1000]docker.io/library/fedora:latest"
DEBU[0000] parsed reference into "[vfs@/home/maxamillion/.local/share/containers/storage+/run/user/1000]@26ffec5b4a8ad65083424903b7aa175953329413fe5cc4c0dac6fedbe81f2fbb"
DEBU[0000] exporting opaque data as blob "sha256:26ffec5b4a8ad65083424903b7aa175953329413fe5cc4c0dac6fedbe81f2fbb"
DEBU[0000] parsed reference into "[vfs@/home/maxamillion/.local/share/containers/storage+/run/user/1000]@26ffec5b4a8ad65083424903b7aa175953329413fe5cc4c0dac6fedbe81f2fbb"
DEBU[0000] exporting opaque data as blob "sha256:26ffec5b4a8ad65083424903b7aa175953329413fe5cc4c0dac6fedbe81f2fbb"
DEBU[0000] parsed reference into "[vfs@/home/maxamillion/.local/share/containers/storage+/run/user/1000]@26ffec5b4a8ad65083424903b7aa175953329413fe5cc4c0dac6fedbe81f2fbb"
DEBU[0000] Using slirp4netns netmode
DEBU[0000] Allocated lock 5 for container d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719
DEBU[0000] parsed reference into "[vfs@/home/maxamillion/.local/share/containers/storage+/run/user/1000]@26ffec5b4a8ad65083424903b7aa175953329413fe5cc4c0dac6fedbe81f2fbb"
DEBU[0000] exporting opaque data as blob "sha256:26ffec5b4a8ad65083424903b7aa175953329413fe5cc4c0dac6fedbe81f2fbb"
DEBU[0001] created container "d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719"
DEBU[0001] container "d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719" has work directory "/home/maxamillion/.local/share/containers/storage/vfs-containers/d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719/userdata"
DEBU[0001] container "d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719" has run directory "/run/user/1000/vfs-containers/d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719/userdata"
DEBU[0001] New container created "d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719"
DEBU[0001] container "d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719" has CgroupParent "/libpod_parent/libpod-d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719"
DEBU[0001] Handling terminal attach
DEBU[0001] mounted container "d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719" at "/home/maxamillion/.local/share/containers/storage/vfs/dir/9b389ee3d0a3ca98813491155cb84572381557832e53cc1f0f9d06d7c440c9a7"
DEBU[0001] Created root filesystem for container d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719 at /home/maxamillion/.local/share/containers/storage/vfs/dir/9b389ee3d0a3ca98813491155cb84572381557832e53cc1f0f9d06d7c440c9a7
DEBU[0001] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0001] Created OCI spec for container d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719 at /home/maxamillion/.local/share/containers/storage/vfs-containers/d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719/userdata/config.json
DEBU[0001] /usr/libexec/crio/conmon messages will be logged to syslog
DEBU[0001] running conmon: /usr/libexec/crio/conmon args=[-c d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719 -u d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719 -r /usr/bin/runc -b /home/maxamillion/.local/share/containers/storage/vfs-containers/d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719/userdata -p /run/user/1000/vfs-containers/d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719/userdata/pidfile -l /home/maxamillion/.local/share/containers/storage/vfs-containers/d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --conmon-pidfile /home/maxamillion/.local/share/containers/storage/vfs-containers/d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/maxamillion/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000 --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719 --socket-dir-path /run/user/1000/libpod/tmp/socket -t --log-level debug --syslog]
WARN[0001] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied
DEBU[0001] Received container pid: 4324
DEBU[0001] Created container d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719 in OCI runtime
DEBU[0002] Cleaning up container d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719
DEBU[0002] Network is already cleaned up, skipping...
DEBU[0002] unmounted container "d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719"
DEBU[0002] Cleaning up container d7460244ff725a88760420818821a6caf4287aecc22f7e9dee406ebaecfac719
DEBU[0002] Network is already cleaned up, skipping...
DEBU[0002] Storage is already unmounted, skipping...
DEBU[0002] Storage is already unmounted, skipping...
ERRO[0002] slirp4netns failed
Describe the results you expected:
Additional information you deem important (e.g. issue happens only occasionally):
Output of podman version:
Version: 1.2.0-dev
RemoteAPI Version: 1
Go Version: go1.10.4
OS/Arch: linux/amd64
Output of podman info --debug:
debug:
  compiler: gc
  git commit: ""
  go version: go1.10.4
  podman version: 1.2.0-dev
host:
  BuildahVersion: 1.8-dev
  Conmon:
    package: 'conmon: /usr/libexec/crio/conmon'
    path: /usr/libexec/crio/conmon
    version: 'conmon version , commit: '
  Distribution:
    distribution: debian
    version: "9"
  MemFree: 2668261376
  MemTotal: 5195935744
  OCIRuntime:
    package: 'cri-o-runc: /usr/bin/runc'
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 0
  SwapTotal: 0
  arch: amd64
  cpus: 4
  hostname: penguin
  kernel: 4.19.4-02480-gd44d301822f0
  os: linux
  rootless: true
  uptime: 2h 23m 53.99s (Approximately 0.08 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
  - registry.fedoraproject.org
  - quay.io
  - registry.access.redhat.com
  - registry.centos.org
store:
  ConfigFile: /home/maxamillion/.config/containers/storage.conf
  ContainerStore:
    number: 0
  GraphDriverName: vfs
  GraphOptions: null
  GraphRoot: /home/maxamillion/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 1
  RunRoot: /tmp/1000
  VolumePath: /home/maxamillion/.local/share/containers/storage/volumes
@giuseppe PTAL
It is an issue with slirp4netns being too old. We started using some new features that make it more secure (such as no access to 127.0.0.1 on the host), and we need an updated package for that (or an older podman).
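For anyone checking their install, a rough way to tell whether the packaged slirp4netns is new enough (this assumes the newer options show up in the usage text; --disable-host-loopback is the flag behind the "no access to 127.0.0.1 on the host" behaviour):

slirp4netns --version
slirp4netns --help 2>&1 | grep -- --disable-host-loopback   # no output means the binary is too old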
@lsm5 Looks like we need a PPA update to slirp then
ERRO[0002] slirp4netns failed
Can podman print (a couple of last lines of) stderr here?
That would be helpful, but the difficulty is that we don't attach to the slirp4netns stdout/stderr. The slirp4netns process is left running while podman exits.
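As a manual debugging aid (not something podman does itself), one can at least confirm whether a slirp4netns process is still around and see the arguments it was started with:

pgrep -a slirp4netns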
I can run fedora successfully with podman run --rm -ti fedora /bin/bash
I got the same error when I tried to bind a localhost port to a container port, e.g. podman run -d -p 8080:80 nginx
DEBU[0000] Created OCI spec for container 51027008a4acdbc297cf8a0caa982124cb870dad995e92470163546390223fc1 at /home/tianzhen/.local/share/containers/storage/vfs-containers/51027008a4acdbc297cf8a0caa982124cb870dad995e92470163546390223fc1/userdata/config.json
DEBU[0000] /usr/libexec/crio/conmon messages will be logged to syslog
DEBU[0000] running conmon: /usr/libexec/crio/conmon args=[-c 51027008a4acdbc297cf8a0caa982124cb870dad995e92470163546390223fc1 -u 51027008a4acdbc297cf8a0caa982124cb870dad995e92470163546390223fc1 -r /usr/bin/runc -b /home/tianzhen/.local/share/containers/storage/vfs-containers/51027008a4acdbc297cf8a0caa982124cb870dad995e92470163546390223fc1/userdata -p /tmp/1000/vfs-containers/51027008a4acdbc297cf8a0caa982124cb870dad995e92470163546390223fc1/userdata/pidfile -l /home/tianzhen/.local/share/containers/storage/vfs-containers/51027008a4acdbc297cf8a0caa982124cb870dad995e92470163546390223fc1/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --conmon-pidfile /home/tianzhen/.local/share/containers/storage/vfs-containers/51027008a4acdbc297cf8a0caa982124cb870dad995e92470163546390223fc1/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/tianzhen/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/1000 --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 51027008a4acdbc297cf8a0caa982124cb870dad995e92470163546390223fc1 --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog]
WARN[0000] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied
DEBU[0000] Received container pid: 67328
DEBU[0000] Created container 51027008a4acdbc297cf8a0caa982124cb870dad995e92470163546390223fc1 in OCI runtime
DEBU[0001] Cleaning up container 51027008a4acdbc297cf8a0caa982124cb870dad995e92470163546390223fc1
DEBU[0001] Network is already cleaned up, skipping...
DEBU[0001] unmounted container "51027008a4acdbc297cf8a0caa982124cb870dad995e92470163546390223fc1"
ERRO[0001] slirp4netns failed
I installed the latest slirp4netns with sudo apt install slirp4netns
➜ ~ apt-cache policy slirp4netns
slirp4netns:
  Installed: 0.2.1-1~ubuntu18.04~ppa1
  Candidate: 0.2.1-1~ubuntu18.04~ppa1
  Version table:
 *** 0.2.1-1~ubuntu18.04~ppa1 500
        500 http://ppa.launchpad.net/projectatomic/ppa/ubuntu bionic/main amd64 Packages
        100 /var/lib/dpkg/status
uname -a
Linux tianzhen-virtual-machine 4.15.0-45-generic #48-Ubuntu SMP Tue Jan 29 16:28:13 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
podman info
debug:
  compiler: gc
  git commit: ""
  go version: go1.10.4
  podman version: 1.3.0-dev
host:
  BuildahVersion: 1.7.2
  Conmon:
    package: 'conmon: /usr/libexec/crio/conmon'
    path: /usr/libexec/crio/conmon
    version: 'conmon version , commit: '
  Distribution:
    distribution: ubuntu
    version: "18.04"
  MemFree: 758857728
  MemTotal: 10453848064
  OCIRuntime:
    package: 'cri-o-runc: /usr/bin/runc'
    path: /usr/bin/runc
    version: 'runc version spec: 1.0.1-dev'
  SwapFree: 1331261440
  SwapTotal: 1494839296
  arch: amd64
  cpus: 6
  hostname: tianzhen-virtual-machine
  kernel: 4.15.0-45-generic
  os: linux
  rootless: true
  uptime: 98h 54m 42.71s (Approximately 4.08 days)
insecure registries:
  registries: []
registries:
  registries:
  - docker.io
store:
  ConfigFile: /home/tianzhen/.config/containers/storage.conf
  ContainerStore:
    number: 24
  GraphDriverName: vfs
  GraphOptions: null
  GraphRoot: /home/tianzhen/.local/share/containers/storage
  GraphStatus: {}
  ImageStore:
    number: 15
  RunRoot: /tmp/1000
  VolumePath: /home/tianzhen/.local/share/containers/storage/volumes
slirp4netns 0.2.1 is too old.
@lsm5 Do we have an ETA on a new build for the PPA? This is hitting a lot of people.
I'll fix this later today once I'm done with some RHEL stuff. I have yet to enable Travis auto-builds on slirp; I'll get that done as well while I'm at it. Sorry about the lag.
Thanks, I compiled slirp4netns from source, and it works for me!
@lsm5 thanks! no worries about lag, I barely ever use the debian machine because I live primarily on Fedora and RHEL, but when I do ... I like to use podman ;)
It appears the issue still persists with the new version of slirp4netns.
$ slirp4netns --version
slirp4netns version 0.3.0+dev
commit: unknown
$ podman --version
podman version 1.3.0-dev
$ podman --log-level debug run --rm -ti fedora /bin/bash
INFO[0000] running as rootless
DEBU[0000] Initializing boltdb state at /home/maxamillion/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/maxamillion/.local/share/containers/storage
DEBU[0000] Using run root /tmp/1000
DEBU[0000] Using static dir /home/maxamillion/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/maxamillion/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "vfs"
DEBU[0000] parsed reference into "[vfs@/home/maxamillion/.local/share/containers/storage+/tmp/1000]docker.io/library/fedora:latest"
DEBU[0000] parsed reference into "[vfs@/home/maxamillion/.local/share/containers/storage+/tmp/1000]@d09302f77cfcc3e867829d80ff47f9e7738ffef69730d54ec44341a9fb1d359b"
DEBU[0000] exporting opaque data as blob "sha256:d09302f77cfcc3e867829d80ff47f9e7738ffef69730d54ec44341a9fb1d359b"
DEBU[0000] parsed reference into "[vfs@/home/maxamillion/.local/share/containers/storage+/tmp/1000]@d09302f77cfcc3e867829d80ff47f9e7738ffef69730d54ec44341a9fb1d359b"
DEBU[0000] exporting opaque data as blob "sha256:d09302f77cfcc3e867829d80ff47f9e7738ffef69730d54ec44341a9fb1d359b"
DEBU[0000] parsed reference into "[vfs@/home/maxamillion/.local/share/containers/storage+/tmp/1000]@d09302f77cfcc3e867829d80ff47f9e7738ffef69730d54ec44341a9fb1d359b"
DEBU[0000] Using slirp4netns netmode
DEBU[0000] Allocated lock 0 for container e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8
DEBU[0000] parsed reference into "[vfs@/home/maxamillion/.local/share/containers/storage+/tmp/1000]@d09302f77cfcc3e867829d80ff47f9e7738ffef69730d54ec44341a9fb1d359b"
DEBU[0000] exporting opaque data as blob "sha256:d09302f77cfcc3e867829d80ff47f9e7738ffef69730d54ec44341a9fb1d359b"
DEBU[0001] created container "e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8"
DEBU[0001] container "e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8" has work directory "/home/maxamillion/.local/share/containers/storage/vfs-containers/e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8/userdata"
DEBU[0001] container "e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8" has run directory "/tmp/1000/vfs-containers/e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8/userdata"
DEBU[0001] New container created "e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8"
DEBU[0001] container "e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8" has CgroupParent "/libpod_parent/libpod-e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8"
DEBU[0001] Handling terminal attach
DEBU[0001] mounted container "e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8" at "/home/maxamillion/.local/share/containers/storage/vfs/dir/64832cceea377c552776fb09e0d7e68df90dc6479837f530db8d1c4257e63ca3"
DEBU[0001] Created root filesystem for container e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8 at /home/maxamillion/.local/share/containers/storage/vfs/dir/64832cceea377c552776fb09e0d7e68df90dc6479837f530db8d1c4257e63ca3
DEBU[0001] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0001] Created OCI spec for container e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8 at /home/maxamillion/.local/share/containers/storage/vfs-containers/e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8/userdata/config.json
DEBU[0001] /usr/libexec/crio/conmon messages will be logged to syslog
DEBU[0001] running conmon: /usr/libexec/crio/conmon args=[-c e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8 -u e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8 -r /usr/bin/runc -b /home/maxamillion/.local/share/containers/storage/vfs-containers/e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8/userdata -p /tmp/1000/vfs-containers/e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8/userdata/pidfile -l /home/maxamillion/.local/share/containers/storage/vfs-containers/e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --conmon-pidfile /home/maxamillion/.local/share/containers/storage/vfs-containers/e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/maxamillion/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /tmp/1000 --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8 --socket-dir-path /run/user/1000/libpod/tmp/socket -t --log-level debug --syslog]
WARN[0001] Failed to add conmon to cgroupfs sandbox cgroup: mkdir /sys/fs/cgroup/systemd/libpod_parent: permission denied
DEBU[0001] Received container pid: 3301
DEBU[0001] Created container e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8 in OCI runtime
DEBU[0002] Cleaning up container e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8
DEBU[0002] Network is already cleaned up, skipping...
DEBU[0002] unmounted container "e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8"
DEBU[0002] Cleaning up container e0cf8ad78e18b41a9c9f324a26c21319253923a17209fd88c764d623f99205f8
DEBU[0002] Network is already cleaned up, skipping...
DEBU[0002] Storage is already unmounted, skipping...
DEBU[0002] Storage is already unmounted, skipping...
DEBU[0003] [graphdriver] trying provided driver "vfs"
ERRO[0003] slirp4netns failed
I'll need to try this out then. I never really test on Ubuntu or Debian...
I'm trying on a Digital Ocean droplet, running Debian 9, after following your steps to install the packages:
root@debian-s-4vcpu-8gb-fra1-01# echo 10000 > /proc/sys/kernel/unprivileged_userns_clone
root@debian-s-4vcpu-8gb-fra1-01# adduser foo
root@debian-s-4vcpu-8gb-fra1-01# su -l foo
foo@debian-s-4vcpu-8gb-fra1-01:~$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 9 (stretch)"
NAME="Debian GNU/Linux"
VERSION_ID="9"
VERSION="9 (stretch)"
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
foo@debian-s-4vcpu-8gb-fra1-01:~$ dpkg -l podman slirp4netns
Desired=Unknown/Install/Remove/Purge/Hold
| Status=Not/Inst/Conf-files/Unpacked/halF-conf/Half-inst/trig-aWait/Trig-pend
|/ Err?=(none)/Reinst-required (Status,Err: uppercase=bad)
||/ Name Version Architecture Description
+++-=========================================-=========================-=========================-========================================================================================
ii podman 1.3.0-1~dev~ubuntu18.04~p amd64 Manage pods, containers and container images.
ii slirp4netns 0.3.0-1~dev~ubuntu18.04~p amd64 User-mode networking for unprivileged network namespaces
foo@debian-s-4vcpu-8gb-fra1-01:~$ uname -a
Linux debian-s-4vcpu-8gb-fra1-01 4.9.0-8-amd64 #1 SMP Debian 4.9.144-3.1 (2019-02-19) x86_64 GNU/Linux
foo@debian-s-4vcpu-8gb-fra1-01:~$ podman run --rm fedora echo hi
hi
are you using an updated podman too?
$ podman --version
podman version 1.3.0-dev
same version I've tried :/
FWIW, if I specify a network mode it works.
$ podman run --net host --rm -ti fedora echo hello
hello
Thanks for confirming it; then it is surely something wrong with slirp4netns. Could you try these commands?
unshare -rn sleep 1000 &
slirp4netns --disable-host-loopback --mtu 65520 -c $! tap0
Hopefully we will get a better clue about why slirp4netns is failing.
$ unshare -rn sleep 1000 &
[1] 1185
$ slirp4netns --disable-host-loopback --mtu 65520 -c $! tap0
open("/dev/net/tun"): Permission denied
child failed(1)
modprobe tun
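To apply the fix and verify it (assuming root or sudo on the host):

sudo modprobe tun        # load the module
lsmod | grep tun         # should list it (empty if tun is compiled into the kernel instead)
ls -l /dev/net/tun       # the device node that the failing open() needs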
Is there a way on Ubuntu/Debian to cause this to happen automatically? On Fedora I think I can just drop a file in the /etc/modprobe.d directory when slirp4netns gets installed.
@rhatdan
There's a /etc/modules-load.d folder for dropping files that load modules, as well as a /etc/modprobe.d for declaring module options, module blacklisting, pre/post install/remove commands and "soft" dependencies.
Those have conf-file like semantics, meaning the user is allowed to modify them if a package drops a file there.
There are equivalent folders in /lib (or /usr/lib) where packages drop files the user is not supposed to modify; if the user copies such a file into the equivalent /etc folder under the same name, the (/usr)/lib namesake is skipped.
Man pages are available to explain the layout and the file formats as modules-load.d(5) and modprobe.d(5).
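For illustration, a minimal sketch of such a drop-in (the file name slirp4netns.conf is hypothetical; no package ships it today):

# /usr/lib/modules-load.d/slirp4netns.conf -- hypothetical file shipped by the package
# Load tun at boot so rootless slirp4netns can open /dev/net/tun.
tun

A user wanting to override it would copy it under the same name to /etc/modules-load.d/slirp4netns.conf and edit the copy.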
@lsm5 Can we update the slirp4netns package for Ubuntu? Or does this come from someone else? @AkihiroSuda Do you know?
It is from Debian.
cc @siretart
I can take care of updating the package in the development version of Ubuntu. I'll update this ticket when it's done so that someone else can backport it in a PPA.
@maxamillion Could you verify if this fixes this issue, and we can close it?
Just to clarify expectations, I've synced the 0.3.0-1 package from Debian/experimental to Ubuntu eoan, which has just started development. I think you should be able to just install the eoan .deb file on earlier versions of Ubuntu just fine.
However, this package does not drop a /etc/modprobe.d/slirp4netns file to force loading the tun kernel module. I'm not sure this is an appropriate thing to do; the user might have chosen to run a custom kernel that has the tun module statically linked into the kernel. Would this even work when used inside, say, an LXC/LXD container, or in a chroot?
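A quick way to tell those cases apart on a stock Debian/Ubuntu kernel (assuming the packaged /boot/config-* file is present):

grep CONFIG_TUN= /boot/config-$(uname -r)
# CONFIG_TUN=y -> tun is built into the kernel, nothing to load
# CONFIG_TUN=m -> tun is a module, so modprobe tun (or a modules-load.d drop-in) is needed

Inside an LXC/LXD container the kernel is the host's, so the module has to be loaded (or built in) on the host, and /dev/net/tun also has to be made available to the container.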
I notice that the error message currently looks like this:
$ slirp4netns --disable-host-loopback --mtu 65520 -c $! tap0
open("/dev/net/tun"): Permission denied
child failed(1)
Maybe the wording/presentation of the error could be improved by suggesting modprobe tun? In situations where that is not appropriate, the person seeing it is likely to understand what's going on.
Is this fixed now?