/kind bug
Description
In Podman 2.0.2 (and likely earlier), a request to /v1.0.0/libpod/events would immediately return an HTTP 200 OK response header and then block, waiting for the first event to happen.
Starting in Podman 2.0.3, requests to the same API simply hang without any server response until something happens that generates an event.
This means that code processing the event stream cannot verify that the stream is actually healthy. A possible workaround is to generate an event (e.g. by creating and deleting a pod) in parallel with setting up the event stream.
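To illustrate why the immediate 200 OK matters, here is a minimal, Podman-independent sketch of a client-side health check. The stub server and the 1-second timeout are assumptions for the example, not part of Podman: a client that reads the response headers with a short timeout can distinguish the 2.0.2 behavior (headers sent immediately, then the stream idles) from the 2.0.3+ behavior (nothing at all until the first event).

```python
# Sketch (assumed stub server, not the real Podman API): a streaming endpoint
# either sends its 200 OK header immediately (2.0.2-like) or stays silent
# until the first event (2.0.3+-like). Reading headers with a short timeout
# tells a client which case it is in.
import http.client
import socket
import threading
import time

def stub_server(sock, send_headers_immediately):
    """Accept one connection; optionally send a 200 header right away,
    then hold the connection open like an idle chunked event stream."""
    conn, _ = sock.accept()
    if send_headers_immediately:
        conn.sendall(b"HTTP/1.1 200 OK\r\n"
                     b"Content-Type: application/json\r\n"
                     b"Transfer-Encoding: chunked\r\n\r\n")
    time.sleep(3)  # no events yet; the stream just stays open
    conn.close()

def stream_is_healthy(port, timeout=1.0):
    """Return True if response headers arrive within `timeout` seconds,
    i.e. before any event has been emitted."""
    c = http.client.HTTPConnection("127.0.0.1", port, timeout=timeout)
    try:
        c.request("GET", "/v1.0.0/libpod/events")
        resp = c.getresponse()  # blocks until the status line and headers arrive
        return resp.status == 200
    except socket.timeout:
        return False
    finally:
        c.close()

def run_check(send_headers_immediately):
    """Start a one-shot stub server on a random port and probe it."""
    sock = socket.socket()
    sock.bind(("127.0.0.1", 0))
    sock.listen(1)
    port = sock.getsockname()[1]
    threading.Thread(target=stub_server,
                     args=(sock, send_headers_immediately),
                     daemon=True).start()
    return stream_is_healthy(port)

print(run_check(True))   # 2.0.2-like behavior -> True
print(run_check(False))  # 2.0.3+ behavior -> False (header read times out)
```

With the pre-2.0.3 behavior, a timed-out header read unambiguously meant "server is broken"; with the new behavior, the same timeout is indistinguishable from "no events yet".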
Steps to reproduce the issue:
Assuming Podman 2.0.4:
podman system service tcp:127.0.0.1:8410
wget -S -O- http://127.0.0.1:8410/v1.0.0/libpod/events
Describe the results you received:
Connecting to localhost (localhost)|127.0.0.1|:8410... connected.
HTTP request sent, awaiting response...
Describe the results you expected:
Connecting to localhost (localhost)|127.0.0.1|:8410... connected.
HTTP request sent, awaiting response...
HTTP/1.1 200 OK
Content-Type: application/json
Date: Sat, 08 Aug 2020 18:38:38 GMT
Transfer-Encoding: chunked
Additional information you deem important (e.g. issue happens only occasionally):
In Podman 2.0.3, the /events API was temporarily broken outright. That could be worked around by adding ?filters={}.
Since this is a regression report, here are docker commands running several Podman versions to show the change in behavior:
Podman 2.0.2 gives a 200 OK before stalling (GOOD):
# docker run --privileged --rm -it mgoltzsche/podman:2.0.2 sh -euxc "podman system service tcp:127.0.0.1:8410 & sleep 1; wget -O- -T2 -S http://127.0.0.1:8410/v1.0.0/libpod/events'?filters={}'"
+ sleep 1
+ podman system service tcp:127.0.0.1:8410
+ wget -O- -T2 -S 'http://127.0.0.1:8410/v1.0.0/libpod/events?filters={}'
Connecting to 127.0.0.1:8410 (127.0.0.1:8410)
HTTP/1.1 200 OK
Content-Type: application/json
Date: Sat, 08 Aug 2020 18:41:42 GMT
Connection: close
Transfer-Encoding: chunked
writing to stdout
wget: download timed out
Podman 2.0.3 stalls before responding at all (BAD):
# docker run --privileged --rm -it mgoltzsche/podman:2.0.3 sh -euxc "podman system service tcp:127.0.0.1:8410 & sleep 1; wget -O- -T2 -S http://127.0.0.1:8410/v1.0.0/libpod/events'?filters={}'"
+ sleep 1
+ podman system service tcp:127.0.0.1:8410
+ wget -O- -T2 -S 'http://127.0.0.1:8410/v1.0.0/libpod/events?filters={}'
Connecting to 127.0.0.1:8410 (127.0.0.1:8410)
wget: download timed out
Podman 2.0.4 stalls before responding at all (BAD):
# docker run --privileged --rm -it mgoltzsche/podman:2.0.4 sh -euxc "podman system service tcp:127.0.0.1:8410 & sleep 1; wget -O- -T2 -S http://127.0.0.1:8410/v1.0.0/libpod/events'?filters={}'"
+ + podman system service tcp:127.0.0.1:8410
sleep 1
+ wget -O- -T2 -S 'http://127.0.0.1:8410/v1.0.0/libpod/events?filters={}'
Connecting to 127.0.0.1:8410 (127.0.0.1:8410)
wget: download timed out
Output of podman version:
Version: 2.0.4
API Version: 1
Go Version: go1.14
Built: Thu Jan 1 01:00:00 1970
OS/Arch: linux/amd64
Output of podman info --debug:
host:
  arch: amd64
  buildahVersion: 1.15.0
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.18, commit: '
  cpus: 2
  distribution:
    distribution: debian
    version: "10"
  eventLogger: file
  hostname: penguin
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.4.40-04224-g891a6cce2d44
  linkmode: dynamic
  memFree: 15240634368
  memTotal: 15260385280
  ociRuntime:
    name: runc
    package: 'runc: /usr/sbin/runc'
    path: /usr/sbin/runc
    version: |-
      runc version 1.0.0~rc6+dfsg1
      commit: 1.0.0~rc6+dfsg1-3
      spec: 1.0.1
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 0
  swapTotal: 0
  uptime: 25h 11m 28s (Approximately 1.04 days)
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 0
    stopped: 1
  graphDriverName: btrfs
  graphOptions: {}
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Build Version: 'Btrfs v5.2.1 '
    Library Version: "102"
  imageStore:
    number: 3
  runRoot: /var/run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 1
  Built: 0
  BuiltTime: Thu Jan 1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.14
  OsArch: linux/amd64
  Version: 2.0.4
Package info (e.g. output of rpm -q podman or apt list podman):
podman/unknown,now 2.0.4~1 amd64 [installed]
Additional environment details (AWS, VirtualBox, physical, etc.):
@jwhonce PTAL
A friendly reminder that this issue had no activity for 30 days.
@jwhonce @baude @mheon This looks like a regression and should be fixed ASAP
@danopia Can you confirm that this has not been fixed?
Hello, I just upgraded to 2.0.6 and still see the same behavior: no HTTP headers are received until the first event occurs.
I'll take the risk and assign the issue to myself. I know that @jwhonce is working on more urgent issues at the moment.