Podman: [v1.8] pkg/rootlessport: host port is intermittently opened inside the container

Created on 18 Feb 2020 · 23 comments · Source: containers/podman

The point is that I'm unconvinced that it is a workaround, as opposed to something in the middle of a set of intermittent failures. I have also managed to reproduce the problem with a fresh container:

√ podman run --name tbw -v /tmp/bw-data:/data -p 7080:80 bitwardenrs/server:alpine
✗ curl http://localhost:7080/
curl: (7) Failed to connect to localhost port 7080: Connection refused
√ podman ps 
CONTAINER ID  IMAGE                                      COMMAND        CREATED         STATUS                 PORTS                    NAMES
9e9e0dcbac9b  docker.io/bitwardenrs/server:alpine        /bitwarden_rs  34 seconds ago  Up 33 seconds ago      0.0.0.0:7080->80/tcp     tbw
√ podman exec -it 9e9 /bin/sh
/ # netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:7080            0.0.0.0:*               LISTEN      -
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      1/bitwarden_rs
/ # curl http://localhost:7080/
<!DOCTYPE html>
<html>

<head>
...

So in this case the port mapping has again been created inside the container, rather than being exposed on the host.

_Originally posted by @cbz in https://github.com/containers/libpod/issues/5167#issuecomment-585310213_

Labels: rootless, stale-issue

All 23 comments

@giuseppe
Can we use the nsenter binary instead of github.com/containernetworking/plugins/pkg/ns?
Is it available on all hosts?

@AkihiroSuda we are already using runtime.LockOSThread() in a few places to ensure the thread runs in the same namespace. Are we sure that is the issue? I am looking at the ns_linux.go code and I don't see how it could fail and run in a different namespace.
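For anyone following along, the pattern under discussion looks roughly like the sketch below: a minimal Go illustration of the setns(2) plus runtime.LockOSThread() idiom. This is not Podman's actual code, and BindInNetNS is a hypothetical helper name.

package nsbind

import (
	"fmt"
	"net"
	"runtime"

	"golang.org/x/sys/unix"
)

// BindInNetNS opens a TCP listener inside the network namespace at nsPath
// (for example /run/user/1000/netns/cni-...). setns(2) only affects the
// calling OS thread, so the goroutine is pinned to one thread from the
// moment it enters the namespace until the listener exists; if it were
// migrated to another thread in between, the socket could end up bound in
// the wrong namespace, which is the failure mode being discussed here.
func BindInNetNS(nsPath, addr string) (net.Listener, error) {
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()

	// Remember this thread's current netns so it can be restored afterwards.
	self := fmt.Sprintf("/proc/self/task/%d/ns/net", unix.Gettid())
	origFd, err := unix.Open(self, unix.O_RDONLY, 0)
	if err != nil {
		return nil, err
	}
	defer unix.Close(origFd)

	targetFd, err := unix.Open(nsPath, unix.O_RDONLY, 0)
	if err != nil {
		return nil, err
	}
	defer unix.Close(targetFd)

	// Switch this OS thread into the target namespace, bind, then switch back.
	if err := unix.Setns(targetFd, unix.CLONE_NEWNET); err != nil {
		return nil, err
	}
	ln, lerr := net.Listen("tcp", addr)

	if err := unix.Setns(origFd, unix.CLONE_NEWNET); err != nil {
		// This OS thread is now stuck in the wrong namespace; real code must
		// not hand it back to the scheduler for reuse.
		if ln != nil {
			ln.Close()
		}
		return nil, err
	}
	return ln, lerr
}

ns.WithNetNSPath() in github.com/containernetworking/plugins/pkg/ns wraps essentially this pattern, so the question above boils down to whether anything can move the goroutine off the pinned thread (or skip the pinning) between entering the namespace and creating the listener.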

I think I'm hitting this in production with a rootless .NET Core Kestrel application that listens on port 8088.

If I create the container with:

$ podman create -p 8088:8088 --pull=never --name=qux 2cb6474985da
$ podman start qux

The Kestrel instance inside the container throws an exception while trying to bind to 8088: "Address already in use". However, netstat on the host doesn't list anything listening on that port, and changing to an alternate port doesn't make a difference either.

If, however, I start the container interactively and run `dotnet nameofdll.dll` manually, it works fine.
The following also seems to work reliably:

$ podman run -p 8088:8088 --rm --pull=never --name=qux 2cb6474985da

Activation from Systemd is very hit and miss.
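For context, "activation from systemd" here means roughly a user unit like the sketch below; the unit content and names are placeholders built around the qux container created above, not my exact setup:

[Unit]
Description=qux kestrel container (rootless Podman)

[Service]
Type=simple
ExecStart=/usr/bin/podman start -a qux
ExecStop=/usr/bin/podman stop -t 10 qux
Restart=on-failure

[Install]
WantedBy=default.target

Enabled with systemctl --user enable, this goes through podman start, which is the path that misbehaves, whereas the interactive podman run above is reliable.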

@giuseppe Did you manage to reproduce the issue? I still cannot.

(This is the content of my post in the other issue):


I too seem to have developed this issue (strangely, it _was_ working at one point previously, but now I have fallen afoul of this same problem), and the interesting bit here is that I'm using Podman 1.8.1 (contrary to the title saying _prior-to-v1.8_).


Full --log-level=debug output of my podman run:

pi@raspberrypi:~ $ podman run -p 8080:80 --log-level=debug -d --name octoprint14 --device /dev/ttyUSB0:/dev/ttyUSB0 -v ./octoprint-data:/data docker.io/aleksmariusz/octoprint:1.4.0-20200315
DEBU[0000] Using conmon: "/usr/libexec/podman/conmon"
DEBU[0000] Initializing boltdb state at /home/pi/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/pi/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000/containers
DEBU[0000] Using static dir /home/pi/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/pi/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] No store required. Not opening container store.
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/lib/cri-o-runc/sbin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] Failed to add podman to systemd sandbox cgroup: exec: "dbus-launch": executable file not found in $PATH
INFO[0000] running as rootless
DEBU[0000] Using conmon: "/usr/libexec/podman/conmon"
DEBU[0000] Initializing boltdb state at /home/pi/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/pi/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000/containers
DEBU[0000] Using static dir /home/pi/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/pi/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "vfs"
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/lib/cri-o-runc/sbin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] parsed reference into "[vfs@/home/pi/.local/share/containers/storage+/run/user/1000/containers]docker.io/aleksmariusz/octoprint:1.4.0-20200315"
DEBU[0000] parsed reference into "[vfs@/home/pi/.local/share/containers/storage+/run/user/1000/containers]@ed54200e51ae0bada984869bf989ce15b9da688b483b2aaad76d389f68afe6e3"
DEBU[0001] exporting opaque data as blob "sha256:ed54200e51ae0bada984869bf989ce15b9da688b483b2aaad76d389f68afe6e3"
INFO[0001] Using "amd64" (architecture) on "arm" host
DEBU[0001] Adding image volume at /data
DEBU[0001] User mount ./octoprint-data:/data options []
DEBU[0001] Using slirp4netns netmode
DEBU[0001] No hostname set; container's hostname will default to runtime default
DEBU[0001] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0001] Adding mount /proc
DEBU[0001] Adding mount /dev
DEBU[0001] Adding mount /dev/pts
DEBU[0001] Adding mount /dev/mqueue
DEBU[0001] Adding mount /sys
DEBU[0001] Adding mount /sys/fs/cgroup
DEBU[0001] Adding mount /dev/ttyUSB0
DEBU[0001] setting container name octoprint14
DEBU[0001] created OCI spec and options for new container
DEBU[0001] parsed reference into "[vfs@/home/pi/.local/share/containers/storage+/run/user/1000/containers]@ed54200e51ae0bada984869bf989ce15b9da688b483b2aaad76d389f68afe6e3"
DEBU[0001] exporting opaque data as blob "sha256:ed54200e51ae0bada984869bf989ce15b9da688b483b2aaad76d389f68afe6e3"
DEBU[0269] created container "8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134"
DEBU[0269] container "8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134" has work directory "/home/pi/.local/share/containers/storage/vfs-containers/8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134/userdata"
DEBU[0269] container "8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134" has run directory "/run/user/1000/containers/vfs-containers/8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134/userdata"
DEBU[0270] New container created "8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134"
DEBU[0270] container "8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134" has CgroupParent "/libpod_parent/libpod-8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134"
DEBU[0270] mounted container "8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134" at "/home/pi/.local/share/containers/storage/vfs/dir/7c1761570617b7af18233bf1c32bc973ffc82a9ead56dd7e699e89b9183e3a1d"
DEBU[0270] Created root filesystem for container 8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134 at /home/pi/.local/share/containers/storage/vfs/dir/7c1761570617b7af18233bf1c32bc973ffc82a9ead56dd7e699e89b9183e3a1d
DEBU[0270] Made network namespace at /run/user/1000/netns/cni-f90a8d46-63ff-c746-30d9-b986047843d7 for container 8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134
DEBU[0270] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sandbox -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/cni-f90a8d46-63ff-c746-30d9-b986047843d7 tap0
DEBU[0272] rootlessport: time="2020-03-17T13:07:24Z" level=info msg="starting parent driver"
DEBU[0272] rootlessport: time="2020-03-17T13:07:24Z" level=info msg="opaque=map[builtin.socketpath:/run/user/1000/libpod/tmp/rootlessport281369407/.bp.sock builtin.readypipepath:/run/user/1000/libpod/tmp/rootlessport281369407/.bp-ready.pipe]"
DEBU[0272] rootlessport: time="2020-03-17T13:07:24Z" level=info msg="starting child driver in child netns (\"/proc/self/exe\" [containers-rootlessport-child])"
DEBU[0272] rootlessport: time="2020-03-17T13:07:24Z" level=info msg="waiting for initComplete"
DEBU[0272] rootlessport: time="2020-03-17T13:07:24Z" level=info msg="initComplete is closed; parent and child established the communication channel"
DEBU[0272] rootlessport: time="2020-03-17T13:07:24Z" level=info msg="exposing ports [{8080 80 tcp }]"
DEBU[0273] rootlessport is ready
DEBU[0273] rootlessport: time="2020-03-17T13:07:25Z" level=info msg=ready
time="2020-03-17T13:07:25Z" level=info msg="waiting for exitfd to be closed"
DEBU[0273] skipping loading default AppArmor profile (rootless mode)
DEBU[0273] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0273] set root propagation to "rslave"
DEBU[0273] Created OCI spec for container 8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134 at /home/pi/.local/share/containers/storage/vfs-containers/8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134/userdata/config.json
DEBU[0273] /usr/libexec/podman/conmon messages will be logged to syslog
DEBU[0273] running conmon: /usr/libexec/podman/conmon    args="[--api-version 1 -c 8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134 -u 8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134 -r /usr/lib/cri-o-runc/sbin/runc -b /home/pi/.local/share/containers/storage/vfs-containers/8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134/userdata -p /run/user/1000/containers/vfs-containers/8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134/userdata/pidfile -l k8s-file:/home/pi/.local/share/containers/storage/vfs-containers/8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/vfs-containers/8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/pi/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134]"
WARN[0273] Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for cpuset: mkdir /sys/fs/cgroup/cpuset: read-only file system
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0275] Received: 4104
INFO[0275] Got Conmon PID as 4093
DEBU[0275] Created container 8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134 in OCI runtime
DEBU[0275] Starting container 8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134 with command [/usr/bin/entry.sh /usr/local/bin/python /usr/local/bin/supervisord -c /etc/supervisor/supervisord.conf]
DEBU[0275] Started container 8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134
8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134

I am trying to work out what might have changed to get it into this state, but I am running out of ideas (I've tried reverting every change since it last worked). I could really use some help figuring out what might have changed, or what to check next.

pi@raspberrypi:~ $ podman ps -a
CONTAINER ID  IMAGE                                            COMMAND               CREATED         STATUS             PORTS                 NAMES
8abe143bceb5  docker.io/aleksmariusz/octoprint:1.4.0-20200315  /usr/local/bin/py...  22 minutes ago  Up 18 minutes ago  0.0.0.0:8080->80/tcp  octoprint14

I also checked to make sure the port isn't being bound _inside_ the container..

pi@raspberrypi:~ $ podman exec octoprint14 netstat -tnlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name
tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      13/python
tcp        0      0 127.0.0.1:9001          0.0.0.0:*               LISTEN      1/python
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      -

and that the port inside the container is listening properly..

pi@raspberrypi:~ $ podman exec octoprint14 nc -vz localhost 80
localhost (127.0.0.1:80) open

but outside the container, on the host, the port is not open :-(

pi@raspberrypi:~ $ nc -vz localhost 8080
nc: connect to localhost port 8080 (tcp) failed: Connection refused

Also, I made sure slirp4netns is running (I'm not sure whether there's a difference between using that and rootlesskit for port forwarding, or how to change which implementation is used):

  PID USER      CPU% MEM% S  NI TTY       VIRT   RES   SHR   OOM NLWP   DISK READ  DISK WRITE  UTIME+   STIME+  START Command                                                                                                                                    
    1 root       0.0  1.7 S   0 ?        14952  4008  2644     0    1     no perm     no perm  0:02.95  0:05.31 Jan01 /sbin/init                                                                                                                                 
 4093 pi         0.0  0.7 S   0 ?        14928  1652  1408     7    2    0.00 B/s    0.00 B/s  0:00.00  0:00.06 13:07 `- /usr/libexec/podman/conmon --api-version 1 -c 8abe143bceb537673e5d6ccb2fa0548c2b7c7770efd7715cd5ca885b4972e134 -u 8abe143bceb537673e5d6c
 4104 pi         0.0  5.6 S   0 ?        15804 12964  4500    55    1    0.00 B/s    0.00 B/s  0:12.54  0:01.39 13:07 |  `- /usr/local/bin/python /usr/local/bin/supervisord -c /etc/supervisor/supervisord.conf
 4129 pi         6.2 23.3 S   0 ?        70700 54136  5660   233   20    0.00 B/s    0.00 B/s  4:45.24  0:39.58 13:26 |     `- /usr/local/bin/python /usr/local/bin/octoprint --iknowwhatimdoing --basedir /data
 4128 100100     0.0  1.6 S   0 ?         6140  3788  2772    16    1    0.00 B/s    0.00 B/s  0:00.36  0:00.15 13:26 |     `- /usr/sbin/haproxy -db -f /etc/haproxy/haproxy.cfg
 4071 pi         0.0  1.1 S   0 pts/1     5780  2524  1596    10    1    0.00 B/s    0.00 B/s  0:00.21  0:00.89 13:26 `- /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sandbox -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/cni-f90a8d
  747 pi         0.0  0.5 S   0 ?        41232  1064   888     4    1    0.00 B/s    0.00 B/s  0:00.01  0:00.01 11:35 `- podman
  354 pi         0.0  1.3 S   0 ?        14512  3108  2204    13    1    0.00 B/s    0.00 B/s  0:00.53  0:00.19 11:34 `- /lib/systemd/systemd --user
  369 pi         0.0  1.1 S   0 ?        16420  2492  1032    10    1     no perm     no perm  0:00.00  0:00.00 11:34 |  `- (sd-pam)

I'm using podman 1.8.1

Version:            1.8.1
RemoteAPI Version:  1
Go Version:         go1.11.6
OS/Arch:            linux/arm

P.S. I am running this on a Raspberry Pi Model B running Raspbian 10 (based on Debian 10 Buster). Fortunately, I have the system booting its root filesystem off an iSCSI LUN, and on my storage system I can roll back the snapshot, so I'm trying to get it back to a working state so I can compare the difference. If there's anything specific I should be checking or looking out for, please let me know. I've been trying to reduce the complexity of the setup, so for now I've tried to eliminate the use of (user) systemd/dbus.

@aleks-mariusz The image isn't public?

Error: unable to pull docker.io/aleksmariusz/octoprint:1.4.0-20200315: unable to pull image: Error initializing source docker://aleksmariusz/octoprint:1.4.0-20200315: Error reading manifest 1.4.0-20200315 in docker.io/aleksmariusz/octoprint: manifest unknown: manifest unknown

I just haven't uploaded that one yet, as I ran into this issue and got sidetracked trying to figure out what's happening. You can use docker.io/aleksmariusz/octoprint:1.4.0-202003062053 as a substitute.

So here's what it looks like when rolled back to the point where it did work:

pi@raspberrypi:~ $ podman run -p 8080:80 -d --log-level=debug --name octoprint14 --device /dev/ttyUSB0:/dev/ttyUSB0 -v ./octoprint-data:/data docker.io/aleksmariusz/octoprint:1.4.0-202003062053
DEBU[0000] Using conmon: "/usr/libexec/podman/conmon"
DEBU[0000] Initializing boltdb state at /home/pi/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/pi/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000/containers
DEBU[0000] Using static dir /home/pi/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/pi/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] No store required. Not opening container store.
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/lib/cri-o-runc/sbin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] Failed to add podman to systemd sandbox cgroup: dial unix /run/user/1000/bus: connect: no such file or directory
INFO[0000] running as rootless
DEBU[0000] Using conmon: "/usr/libexec/podman/conmon"
DEBU[0000] Initializing boltdb state at /home/pi/.local/share/containers/storage/libpod/bolt_state.db
DEBU[0000] Using graph driver vfs
DEBU[0000] Using graph root /home/pi/.local/share/containers/storage
DEBU[0000] Using run root /run/user/1000/containers
DEBU[0000] Using static dir /home/pi/.local/share/containers/storage/libpod
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp
DEBU[0000] Using volume path /home/pi/.local/share/containers/storage/volumes
DEBU[0000] Set libpod namespace to ""
DEBU[0000] [graphdriver] trying provided driver "vfs"
DEBU[0000] Initializing event backend file
DEBU[0000] using runtime "/usr/lib/cri-o-runc/sbin/runc"
WARN[0000] Error initializing configured OCI runtime crun: no valid executable found for OCI runtime crun: invalid argument
DEBU[0000] parsed reference into "[vfs@/home/pi/.local/share/containers/storage+/run/user/1000/containers]docker.io/aleksmariusz/octoprint:1.4.0-202003062053"
DEBU[0000] parsed reference into "[vfs@/home/pi/.local/share/containers/storage+/run/user/1000/containers]@0660822def88823aa898bdf60b0b9f1efcc34fcb4b0135ee79295bca05352ff9"
DEBU[0000] exporting opaque data as blob "sha256:0660822def88823aa898bdf60b0b9f1efcc34fcb4b0135ee79295bca05352ff9"
INFO[0000] Using "amd64" (architecture) on "arm" host
DEBU[0000] Adding image volume at /data
DEBU[0000] User mount ./octoprint-data:/data options []
DEBU[0001] Using slirp4netns netmode
DEBU[0001] No hostname set; container's hostname will default to runtime default
DEBU[0001] Loading seccomp profile from "/usr/share/containers/seccomp.json"
DEBU[0001] Adding mount /proc
DEBU[0001] Adding mount /dev
DEBU[0001] Adding mount /dev/pts
DEBU[0001] Adding mount /dev/mqueue
DEBU[0001] Adding mount /sys
DEBU[0001] Adding mount /sys/fs/cgroup
DEBU[0001] Adding mount /dev/ttyUSB0
DEBU[0001] setting container name octoprint14
DEBU[0001] created OCI spec and options for new container
DEBU[0001] Allocated lock 0 for container 96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd
DEBU[0001] parsed reference into "[vfs@/home/pi/.local/share/containers/storage+/run/user/1000/containers]@0660822def88823aa898bdf60b0b9f1efcc34fcb4b0135ee79295bca05352ff9"
DEBU[0001] exporting opaque data as blob "sha256:0660822def88823aa898bdf60b0b9f1efcc34fcb4b0135ee79295bca05352ff9"
DEBU[0246] created container "96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd"
DEBU[0246] container "96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd" has work directory "/home/pi/.local/share/containers/storage/vfs-containers/96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd/userdata"
DEBU[0246] container "96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd" has run directory "/run/user/1000/containers/vfs-containers/96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd/userdata"
DEBU[0247] New container created "96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd"
DEBU[0247] container "96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd" has CgroupParent "/libpod_parent/libpod-96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd"
DEBU[0247] mounted container "96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd" at "/home/pi/.local/share/containers/storage/vfs/dir/a7cdbaff650274d3e767dc863a32897c0442a521ac63d84d53822ad0ee175736"
DEBU[0247] Created root filesystem for container 96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd at /home/pi/.local/share/containers/storage/vfs/dir/a7cdbaff650274d3e767dc863a32897c0442a521ac63d84d53822ad0ee175736
DEBU[0247] Made network namespace at /run/user/1000/netns/cni-4fd6eadc-f81f-e1c6-d4ee-0de3561eb1bd for container 96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd
DEBU[0247] slirp4netns command: /usr/bin/slirp4netns --disable-host-loopback --mtu 65520 --enable-sandbox -c -e 3 -r 4 --netns-type=path /run/user/1000/netns/cni-4fd6eadc-f81f-e1c6-d4ee-0de3561eb1bd tap0
DEBU[0248] rootlessport: time="2020-03-17T17:53:26Z" level=info msg="starting parent driver"
DEBU[0248] rootlessport: time="2020-03-17T17:53:26Z" level=info msg="opaque=map[builtin.socketpath:/run/user/1000/libpod/tmp/rootlessport279464613/.bp.sock builtin.readypipepath:/run/user/1000/libpod/tmp/rootlessport279464613/.bp-ready.pipe]"
DEBU[0248] rootlessport: time="2020-03-17T17:53:26Z" level=info msg="starting child driver in child netns (\"/proc/self/exe\" [containers-rootlessport-child])"
DEBU[0248] rootlessport: time="2020-03-17T17:53:26Z" level=info msg="waiting for initComplete"
DEBU[0249] rootlessport: time="2020-03-17T17:53:27Z" level=info msg="initComplete is closed; parent and child established the communication channel"
DEBU[0249] rootlessport: time="2020-03-17T17:53:27Z" level=info msg="exposing ports [{8080 80 tcp }]"
DEBU[0250] rootlessport is ready
DEBU[0250] rootlessport: time="2020-03-17T17:53:28Z" level=info msg=ready
time="2020-03-17T17:53:28Z" level=info msg="waiting for exitfd to be closed"
DEBU[0251] skipping loading default AppArmor profile (rootless mode)
DEBU[0252] /etc/system-fips does not exist on host, not mounting FIPS mode secret
DEBU[0252] set root propagation to "rslave"
DEBU[0252] Created OCI spec for container 96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd at /home/pi/.local/share/containers/storage/vfs-containers/96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd/userdata/config.json
DEBU[0252] /usr/libexec/podman/conmon messages will be logged to syslog
DEBU[0252] running conmon: /usr/libexec/podman/conmon    args="[--api-version 1 -c 96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd -u 96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd -r /usr/lib/cri-o-runc/sbin/runc -b /home/pi/.local/share/containers/storage/vfs-containers/96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd/userdata -p /run/user/1000/containers/vfs-containers/96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd/userdata/pidfile -l k8s-file:/home/pi/.local/share/containers/storage/vfs-containers/96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd/userdata/ctr.log --exit-dir /run/user/1000/libpod/tmp/exits --socket-dir-path /run/user/1000/libpod/tmp/socket --log-level debug --syslog --conmon-pidfile /run/user/1000/containers/vfs-containers/96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd/userdata/conmon.pid --exit-command /usr/bin/podman --exit-command-arg --root --exit-command-arg /home/pi/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1000/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1000/libpod/tmp --exit-command-arg --runtime --exit-command-arg runc --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg --events-backend --exit-command-arg file --exit-command-arg container --exit-command-arg cleanup --exit-command-arg 96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd]"
WARN[0252] Failed to add conmon to cgroupfs sandbox cgroup: error creating cgroup for blkio: mkdir /sys/fs/cgroup/blkio/libpod_parent: permission denied
[conmon:d]: failed to write to /proc/self/oom_score_adj: Permission denied

DEBU[0254] Received: 1883
INFO[0254] Got Conmon PID as 1872
DEBU[0254] Created container 96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd in OCI runtime
DEBU[0254] Starting container 96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd with command [/usr/bin/entry.sh /usr/local/bin/python /usr/local/bin/supervisord -c /etc/supervisor/supervisord.conf]
DEBU[0255] Started container 96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd
96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd

I've compared the two debug logs, and the only difference I noticed is that the following line is missing when it's _not_ working:

DEBU[0001] Allocated lock 0 for container 96fc8003e676cc1d1e7a6c4f094635448dddbf454679f09cabb2cda0080157bd

Is that just a red herring, or does it mean something?

When it's working, querying on the host shows the expected listener:

$ sudo ss -tnlp
State                      Recv-Q                     Send-Q                                          Local Address:Port                                            Peer Address:Port                                                                            
LISTEN                     0                          128                                                 127.0.0.1:8125                                                 0.0.0.0:*                         users:(("netdata",pid=246,fd=26))                     
LISTEN                     0                          128                                                   0.0.0.0:19999                                                0.0.0.0:*                         users:(("netdata",pid=246,fd=4))                      
LISTEN                     0                          128                                                   0.0.0.0:8080                                                 0.0.0.0:*                         users:(("exe",pid=1016,fd=10))                        
LISTEN                     0                          128                                                   0.0.0.0:22                                                   0.0.0.0:*                         users:(("sshd",pid=277,fd=3))

Pid 1016 is:

  PID USER      CPU% MEM% S  NI TTY       VIRT   RES   SHR   OOM NLWP   DISK READ  DISK WRITE  UTIME+   STIME+  START Command                                                                                                                                    
 1016 pi         0.0  3.7 S   0 pts/0     881M  8472  3496    36    9    0.00 B/s    0.00 B/s  0:00.70  0:00.20 17:36 `- containers-rootlessport
 1023 pi         0.0  3.6 S   0 pts/0     873M  8456  3452    36    8    0.00 B/s    0.00 B/s  0:00.59  0:00.23 17:36 |  `- containers-rootlessport-child

Any ideas on how to debug why containers-rootlessport is not even being run when the issue happens?

FWIW I was on 1.8 also.
I found that using podman run usually worked but podman start triggered the issue almost always.

Hmm, curious, as I always use podman run (and have never used podman create or podman start).

Would the issue owners here prefer that I open a new issue, since I'm not actually seeing the port being bound _inside_ the container (I may have been pointed to this issue erroneously)?

I think _my_ issue stems from the observation that containers-rootlessport is not running whenever I get into the state where the port is not being bound properly (and this is something that worked at some point, then mysteriously stopped working until I roll back my root-filesystem iSCSI LUN).

If there are different symptoms, a different bug is definitely warranted

> Hmm, curious, as I always use podman run (and have never used podman create or podman start).
>
> Would the issue owners here prefer that I open a new issue, since I'm not actually seeing the port being bound _inside_ the container (I may have been pointed to this issue erroneously)?
>
> I think _my_ issue stems from the observation that containers-rootlessport is not running whenever I get into the state where the port is not being bound properly (and this is something that worked at some point, then mysteriously stopped working until I roll back my root-filesystem iSCSI LUN).

I think mine sounds more like your problem: no port that is supposed to be opened on the outside is being opened inside. And since I use podman-compose, I always use podman run. Let's open another issue.

I'm also running into what I think is this issue. Like @aleks-mariusz, I'm also running Podman > 1.8: rootless Podman 1.8.1 on Fedora 31. My experience is that the container works fine after starting and then, after some time, suddenly starts exhibiting the behaviour described above (i.e. getting connection refused from outside the container).

The only way I found to "fix" this is to delete the container and the pod and then to recreate everything. Just deleting the container is not enough.

@siepkes I'm guessing you came here via a search engine, as I did, but for me it turned out not to be _this_ issue (it was a different one). If you want to see whether you have _this_ issue, check whether the requested published port is being bound inside the container. Otherwise, if you have the issue I actually had, where the container stops listening from the outside, it had to do with a connection being attempted against the port before the container had fully come up, rather than the port being bound inside the container.

@AkihiroSuda You're right. It turns out my issue is indeed not this one but the issue you linked to (i.e. the containers-rootlessport-child dying). Thanks!

A friendly reminder that this issue had no activity for 30 days.

@AkihiroSuda Any update on this?

Is this still reproducible?

Have seen this intermittently with:

Version: 1.9.0
RemoteAPI Version: 1
Go Version: go1.10.1
OS/Arch: linux/amd64

Could you try Podman 1.9.1 with Go 1.13?

I think this is fixed in latest upstream, so closing, reopen if I am mistaken.
