/kind bug
When starting a container with `podman play kube` that uses `/bin/sh` as entrypoint and a shell script as command, the container behaves unexpectedly on start. Starting the same script from within the container instead of at build time (i.e., setting only the entrypoint to `/bin/sh` and launching run.sh by hand, without setting CMD in the Dockerfile) works fine.
In short: it is not possible to use a shell script via `CMD ["myshellscript"]` in a Dockerfile to build a container that then starts this script from a kube.yml file with `podman play kube`.
podman build . --tag bugimage
https://gist.github.com/PinkJohnOfUs/848bfa32186cb562bf853e303ac93649#file-dockerfile
https://gist.github.com/PinkJohnOfUs/848bfa32186cb562bf853e303ac93649#file-run-sh
Run the container with _play kube_:
podman play kube playkube.yml
https://gist.github.com/PinkJohnOfUs/848bfa32186cb562bf853e303ac93649#file-playkube-yml
Take a look into bugreport container logs:
podman logs bash-bug-pod-0-bugcontainer
Result, logs showing:
```
bin/sh: 1: Syntax error: ")" unexpected
```
I would expect the logs to show:
```
hello bug
```
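For context, a minimal sketch of what the container should be doing (the script body here is an assumption standing in for the gist's run.sh, which just echoes `hello bug`):

```shell
# ENTRYPOINT ["/bin/sh"] plus CMD ["./run.sh"] should yield the argv
# ["/bin/sh", "./run.sh"], i.e. sh interpreting the script file directly.
printf '#!/bin/sh\necho "hello bug"\n' > /tmp/run.sh
/bin/sh /tmp/run.sh   # prints: hello bug
```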
Output of `podman version`:
```
podman version 2.2.0-rc1
```
Output of podman info --debug:
```yaml
host:
  arch: amd64
  buildahVersion: 1.18.0
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: 'conmon: /usr/libexec/podman/conmon'
    path: /usr/libexec/podman/conmon
    version: 'conmon version 2.0.20, commit: '
  cpus: 12
  distribution:
    distribution: ubuntu
    version: "20.04"
  eventLogger: journald
  hostname: pink
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.8.0-7625-generic
  linkmode: dynamic
  memFree: 13758816256
  memTotal: 33070886912
  ociRuntime:
    name: runc
    package: 'containerd.io: /usr/bin/runc'
    path: /usr/bin/runc
    version: |-
      runc version 1.0.0-rc10
      commit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
      spec: 1.0.1-dev
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: 'slirp4netns: /usr/bin/slirp4netns'
    version: |-
      slirp4netns version 1.1.4
      commit: unknown
      libslirp: 4.3.1-git
      SLIRP_CONFIG_VERSION_MAX: 3
  swapFree: 4294434816
  swapTotal: 4294434816
  uptime: 1h 45m 54.95s (Approximately 0.04 days)
registries:
  search:
  - docker.io
  - quay.io
store:
  configFile: /home/kiki/.config/containers/storage.conf
  containerStore:
    number: 11
    paused: 0
    running: 2
    stopped: 9
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: 'fuse-overlayfs: /usr/bin/fuse-overlayfs'
      Version: |-
        fusermount3 version: 3.9.0
        fuse-overlayfs: version 0.7.6
        FUSE library version 3.9.0
        using FUSE kernel interface version 7.31
  graphRoot: /home/kiki/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 27
  runRoot: /run/user/1000/containers
  volumePath: /home/kiki/.local/share/containers/storage/volumes
version:
  APIVersion: 2.1.0
  Built: 1605874681
  BuiltTime: Fri Nov 20 13:18:01 2020
  GitCommit: 02843f881f9271440e7eaad8db231ddf20e33e51
  GoVersion: go1.15.2
  OsArch: linux/amd64
  Version: 2.2.0-rc1
```
Package info (e.g. output of `rpm -q podman` or `apt list podman`):
That is strange, I expected it to be 2.2.0?
```
podman/now 2.1.1~2 amd64 [installed,local]
```
Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?
Tested with release candidate 2.2.0 on a physical device.
@haircommander I seem to recall that play kube builds its container commands differently from the rest of Podman - could that be what's causing this?
if you only set the entry point, then it only uses the entry point for the command. https://kubernetes.io/docs/tasks/inject-data-application/define-command-argument-container/#notes
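The interaction described in the linked docs can be summarized in a pod-spec fragment (field names are from the Kubernetes pod spec; the container and image names are illustrative, taken from the build step above):

```yaml
# How the pod spec fields interact with the image defaults:
#   command set, args unset -> image ENTRYPOINT and CMD are both ignored
#   command unset, args set -> image ENTRYPOINT runs with the given args
#   neither set             -> image ENTRYPOINT + image CMD (the failing case)
containers:
- name: bugcontainer
  image: localhost/bugimage
  # command: ["/bin/sh"]   # would override the image ENTRYPOINT
  # args: ["./run.sh"]     # would override the image CMD
```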
hm but neither are set in the yaml, which means we're not correctly getting the command from the image
@PinkJohnOfUs Any chance you can try with the Podman 2.2.0 RC? We did a rewrite of the play kube backend that may have resolved the issue.
@mheon the bug appeared after the rewrite of play kube, I guess. I was testing with Podman 2.2.0 RC; correct me if I'm wrong (see the output of `podman version` above):
```
git show
commit 02843f881f9271440e7eaad8db231ddf20e33e51 (HEAD, tag: v2.2.0-rc1)
Author: Matthew Heon <[email protected]>
Date: Wed Nov 18 13:15:11 2020 -0500
```
I can't tell you which commit is causing this bug, but at least it isn't an issue with commit 916825b6753086d7712ba593e5381b9bd49aae96.
I can confirm that this is not working with Podman 2.2.0 RC.
@PinkJohnOfUs Can you confirm that this only appeared after 2.2.0-rc1? Your original bug report was filed against 2.1.1. Does the bug happen with both, or just 2.2.0-rc1?
@mheon I can confirm that this appeared after 2.2.0-rc1; it is not happening with 2.1.1. The output of `apt list podman` was maybe misleading, sorry for that.
Also tested it with v2.2.0 and it is still not working.
```
git show
commit db1d2ff111ee9b012779ff3a5279a982520ccda4 (HEAD, tag: v2.2.0)
Author: Matthew Heon <[email protected]>
Date: Mon Nov 30 16:29:41 2020 -0500

    Bump to v2.2.0

    Signed-off-by: Matthew Heon <[email protected]>

diff --git a/changelog.txt b/changelog.txt
index 326f52718..a54028e1c 100644
--- a/changelog.txt
+++ b/changelog.txt
@@ -1,4 +1,25 @@
```
Hi, I am observing the same problem with podman version 2.2+ (up to latest master) on Arch Linux. When I updated, I experienced continuously restarting containers in pods started from podman play kube. The continuous restarting was probably due to the new restart policy for pods created this way. The biggest concern for me was that the containers "crashed" without any error message; when I ran the images manually, I found they work fine.

I can reliably reproduce the problem with the docker.io/nginx:mainline-alpine image and the following pod.yaml:
```yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: web
  name: web_pod
spec:
  restartPolicy: Never
  containers:
  - name: nginx
    image: docker.io/nginx:mainline-alpine
```
I did a bit of digging and found that the combination of ENTRYPOINT and CMD seems to be the problem, as the following Containerfile shows:
```dockerfile
FROM docker.io/alpine:latest
ENTRYPOINT ["/bin/sh", "-c"]
CMD ["while true; do cat /etc/os-release; sleep 1; done"]
```
Note: moving the `-c` to the CMD part doesn't help either; it then tries to run `/bin/sh` as a shell script?
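A minimal sketch of why the element-wise concatenation matters, outside any container (the whitespace re-splitting shown in the second case is an assumption about the faulty behavior, not a confirmed trace of what play kube does):

```shell
script='for i in 1 2 3; do echo "tick $i"; done'

# Correct: ENTRYPOINT ["/bin/sh", "-c"] + CMD [script] concatenated
# element-wise keeps the whole loop as a single -c argument.
/bin/sh -c "$script"   # prints: tick 1, tick 2, tick 3

# Broken: if the command string is re-split on whitespace, sh receives only
# "for" as its -c script and the rest as positional parameters, producing a
# shell syntax error instead of the loop output.
/bin/sh -c $script 2>&1 || true
```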
For me this bug is a show-stopper; I saw the extent of the breakage after updating on my server. I have been using the play kube feature as a replacement for docker-compose since before podman-compose was a usable alternative for me, and I also wanted to pave the way to Kubernetes.
In light of this, the following line doesn't seem right:
https://github.com/containers/podman/blob/5c6b5ef34905f40562b518799c35be8d06694e65/pkg/specgen/generate/kube/kube.go#L119
Additionally, the code doesn't seem to take imageData.Config.Cmd into account.
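For reference, the expected merge semantics (the same rule Docker applies) are simply list concatenation with every element preserved verbatim, sketched here in plain shell:

```shell
# Final argv = ENTRYPOINT array ++ CMD array, element by element;
# nothing is joined into one string or re-split on whitespace.
set -- "/bin/sh" "-c"                                # image ENTRYPOINT
set -- "$@" "while true; do echo hi; sleep 1; done"  # append image CMD
echo "argc=$#"          # prints: argc=3
printf '<%s>\n' "$@"    # each argv element on its own line, unmodified
```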
Should be fixed by #8807