/kind feature
Description
I'd like to be able to run and test batches of containers defined with docker-compose.yml. As it is now, doing this with actual Docker inside an environment that runs through Docker gets rather risky and leaky in all kinds of bad ways.
For building containers, I'm starting to use buildah for this, but I don't quite yet have an answer for running them. The goal is to be able to build and test in a manner that is consistent with how people can do it on their local machines, and easily transition to OpenShift for production run environments.
Additional environment details (AWS, VirtualBox, physical, etc.):
GitLab CI runners with Docker container (of Fedora with buildah + podman)
I'd love to get something similar to docker compose up and running (I don't know if we'd go for exact compatibility in this case, though). The ability to define a pod with a number of containers, sharing various namespaces and starting in a specific order as needed, is already baked into the backend, though work would be needed to build a good user interface to expose all of that.
slightly related: we need to ensure that we correctly pass NOTIFY_SOCKET from systemd down to runC. We would not only get startup ordering via systemd dependencies; containers also wouldn't need to poll for another service to be ready (and if they do, the check returns immediately) if that service is configured to use NOTIFY_SOCKET.
@giuseppe I worked on that a while ago. But not sure if I got it all working. Would also like to get socket activation working properly. Both would be cool features that don't work in Docker.
@edsantiago @TomSweeneyRedHat Could you guys attempt to set up a test to make sure NOTIFY_SOCKET and SD_NOTIFY work with podman?
@rhatdan in progress ... but infuriatingly nonworking. And according to my notes, reminiscent of my frustrations in October 2016. It Just Ain't Working. The simplest reproducer I can come up with is:
# NOTIFY_SOCKET=/run/systemd/notify podman run --rm fedora date
This just hangs, with no error and no output. It also hangs in such a way that podman ps hangs too -- maybe the locking issue in #658? /bin/ps reports a long-running, presumably-also-hung runc create process:
# ps auxww --forest |grep -5 runc
...
root 20545 0.0 0.0 86084 1808 ? Ssl 20:50 0:00 /usr/libexec/crio/conmon -c ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7 -u ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7 -r /usr/bin/runc -b /var/lib/containers/storage/overlay-containers/ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7/userdata -p /var/run/containers/storage/overlay-containers/ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7/userdata/pidfile -l /var/lib/containers/storage/overlay-containers/ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7/userdata/ctr.log --exit-dir /var/run/libpod/exits --socket-dir-path /var/run/libpod/socket
root 20546 0.0 0.2 392772 10656 ? Sl 20:50 0:00 \_ /usr/bin/runc create --bundle /var/lib/containers/storage/overlay-containers/ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7/userdata --pid-file /var/run/containers/storage/overlay-containers/ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7/userdata/pidfile ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7
root 20555 0.1 0.2 316848 8836 ? Ssl 20:50 0:00 \_ /usr/bin/runc init
The init process is not killable, even with -9. The create process can be killed, but only with -9. Attempting to podman rm the container while runc is running results in:
failed to delete container ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7: cgroups: unable to remove paths /sys/fs/cgroup/systemd/libpod_parent/libpod-ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7, /sys/fs/cgroup/freezer/libpod_parent/libpod-ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7, /sys/fs/cgroup/pids/libpod_parent/libpod-ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7, /sys/fs/cgroup/net_cls/libpod_parent/libpod-ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7, /sys/fs/cgroup/net_prio/libpod_parent/libpod-ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7, /sys/fs/cgroup/perf_event/libpod_parent/libpod-ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7, /sys/fs/cgroup/cpuset/libpod_parent/libpod-ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7, /sys/fs/cgroup/cpu/libpod_parent/libpod-ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7, /sys/fs/cgroup/cpuacct/libpod_parent/libpod-ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7, /sys/fs/cgroup/memory/libpod_parent/libpod-ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7, /sys/fs/cgroup/blkio/libpod_parent/libpod-ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7, /sys/fs/cgroup/devices/libpod_parent/libpod-ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7, /sys/fs/cgroup/hugetlb/libpod_parent/libpod-ced645cf9da7b61613126a938300b796c47ec76ae275dda4d76719af101949b7
Same results when running from a systemd init file (without the NOTIFY_SOCKET= declaration, since systemd presumably sets that). Using NOTIFY_SOCKET=/nonexistentfile works perfectly fine.
Am close to giving up for today. This has taken a good chunk of time.
@edsantiago Separate locking issue from #658 - this is us holding the container lock until runc has finished executing, to try and order container operations. The root cause here appears to be the runc init hang.
This seems to be close, although I am getting Connection Refused.
#!/bin/sh
export NOTIFY_SOCKET=/run/podman_notify.sock
$(rm -f ${NOTIFY_SOCKET}; nc -U ${NOTIFY_SOCKET} -l) &
sleep 1
podman run -v /usr/bin/nc:/usr/bin/nc fedora /usr/bin/nc -U ${NOTIFY_SOCKET}<<EOF
echo ready
EOF
# sh -x notify_sock.sh
+ export NOTIFY_SOCKET=/run/podman_notify.sock
+ NOTIFY_SOCKET=/run/podman_notify.sock
+ sleep 1
++ rm -f /run/podman_notify.sock
++ nc -U /run/podman_notify.sock -l
+ podman run fedora mount
+ grep podman
tmpfs on /run/podman_notify.sock type tmpfs (rw,nosuid,nodev,seclabel,mode=755)
+ podman run -v /usr/bin/nc:/usr/bin/nc fedora /usr/bin/nc -U /run/podman_notify.sock
Ncat: Connection refused.
The passing of the socket is there and the mounting of the socket is there. I don't know why ncat is refusing the connection.
I think systemd uses a DGRAM socket, so you need --udp. In my runs this morning, this hangs consistently:
window1# rm -f /run/mysock;ncat -l -U --udp /run/mysock
window2# NOTIFY_SOCKET=/run/mysock podman run --rm fedora date (hangs as described above)
It does not hang without --udp, but my sdnotify test container fails with EPROTOTYPE (Protocol wrong type for socket).
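The EPROTOTYPE failure is consistent with a socket-type mismatch: sd_notify sends datagrams, so a SOCK_STREAM listener at the notify path can't receive them. A small sketch of the mismatch (the temp path is illustrative; the errno is kernel behavior as documented in unix(7)):

```python
import errno
import os
import socket
import tempfile

# Illustrative notify path bound with the WRONG socket type (stream).
path = os.path.join(tempfile.mkdtemp(), "notify.sock")

stream_server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
stream_server.bind(path)
stream_server.listen(1)

# sd_notify-style datagram send into a stream-bound path fails with a
# type-mismatch error on Linux.
dgram_client = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
try:
    dgram_client.sendto(b"READY=1", path)
except OSError as e:
    print(errno.errorcode[e.errno])
```

This matches the observation that ncat needs --udp (i.e. SOCK_DGRAM) to stand in for systemd's notify socket.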
Date is not doing anything with the socket, so this looks like the integration between podman/runc and the socket file is causing issues. I will see if I can repeat the failure on my machine.
Yes - my use of date was simply to try the simplest container that would not be mucking with sd_notify. ISTM that runc is the bottleneck
Seems to be working for me now
# NOTIFY_SOCKET=/run/mysock podman run --rm fedora date
Thu May 24 11:43:32 UTC 2018
With podman in master.
Never mind, it is hanging.
This is where runc is hanging.
openat(AT_FDCWD, "/proc/self/fd/4", O_WRONLY|O_CLOEXEC
I'm somewhat leaning toward it being a runc issue, not podman, but have no actual evidence to base that on.
I agree, I am now thinking this is an issue with runc. I need to set up runc with the NOTIFY_SOCKET to see if it hangs also.
It looks like --udp is the key flag causing the issue. If I remove --udp, runc finishes right away.
Yes, but as best I can tell --udp recreates the way systemd creates the /run/systemd/notify socket
# lsof /run/systemd/notify
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
systemd 1 root 30u unix 0x000000006092aaba 0t0 1412 /run/systemd/notify type=DGRAM
So this is the equivalent of doing what systemd does for socket activation.
That's what I _think_, and it's what I'm trying to do, and the behavior is consistent... but I don't really know.
I sent an email off to systemd-maint/lennart asking them what is the best way to implement this.
lsof /run/systemd/notify /run/mysock
lsof: WARNING: can't stat() fuse.gvfsd-fuse file system /run/user/3267/gvfs
Output information may be incomplete.
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
systemd 1 root 33u unix 0x000000009b008e4d 0t0 13639 /run/systemd/notify type=DGRAM
ncat 13931 root 3u unix 0x00000000cddfc00c 0t0 362330 /run/mysock type=DGRAM
But it looks like you are correct. I would figure runc will hang if done with socket activation.
@giuseppe Did you ever run systemd containers using sd_notify?
I think this happens because of the interaction of conmon and runc. We have system containers using NOTIFY_SOCKET, but system containers don't pass in additional file descriptors and this makes the difference.
In any case, the issue is for sure in runc; the same command completes fine with crun:
$ NOTIFY_SOCKET=/run/systemd/notify sudo -E podman --runtime /usr/local/bin/crun run --rm alpine date
Thu May 24 14:19:42 UTC 2018
I will be gone for 2 weeks and unable to play here during that time. Once the runc hang gets resolved, Shishir has a great, tiny, simple way to test sdnotify in a container: https://github.com/shishir-a412ed/runc-notify
Actually I think this is pure runc.
NOTIFY_SOCKET=/run/systemd/notify /usr/bin/runc create --bundle /var/lib/containers/storage/overlay-containers/f4a79a66f2ece89aae2038017596cb4b3928bebae05095e598a8695506962809/userdata --pid-file /var/run/containers/storage/overlay-containers/f4a79a66f2ece89aae2038017596cb4b3928bebae05095e598a8695506962809/userdata/pidfile f4a79a66f2ece89aae2038017596cb4b3928bebae05095e598a8695506962809
Hangs. No conmon involved.
this is probably a regression, I remember NOTIFY_SOCKET working well with runc, I'll take a look
I've opened a PR for runc: https://github.com/opencontainers/runc/pull/1807
@mrunalp suggests that we should go back to runc run, rather than using runc create/start. He believes this is the best way to run runc, and that the PR opened by @giuseppe is not likely to get merged.
@mheon @baude Why did we switch to runc create/start? Can we go back?
@mrunalp Any comments?
@giuseppe @mheon I would like to get this fixed. @mheon what happens if we go back to just run?
In order to use "run", we will need to add this functionality to conmon as well. Besides the additional step, it will be a problem to properly attach to the socket. In the create+start sequence we have time to connect to the UNIX socket before starting the container. With "run" this part will always be racy, as the container starts immediately, before podman can connect to the socket.
Can you work on this?
Using runc run instead of create / start means you'll probably only work with runc. I don't expect the runtime-spec maintainers to revert opencontainers/runtime-spec#384 after all the discussion that went into that change, and there's no longer a run operation defined in the runtime spec. Also, if we punt to runc run ... for the podman run ... case, won't we still be exposed to underlying runc issues for the podman create ... / podman start ... case?
I'm looking for a way to get sd-notify to work. It would be nice if this could also work with Kata Containers and others, but it is a cool feature that Docker cannot do.
If we can get this into create/start it would be best, but runc does not seem willing to merge it.
I thought the podman create/start case would be handled by just doing a runc run: podman create does not need to set anything up in runc; only start does, I believe.
I also would like to make sure socket activation works.
I have some serious reservations about how runc run would handle our network ns code. I want to move towards the postconfigure solution @giuseppe created for user namespaces and make that standard, but that won't work with runc run.
Actually, I'm inclined to believe that runc run will also break our user namespace code because of this...
Time to switch to crun? The point of standardizing the OCI runtime interface is that we can swap in alternative runtimes, and @giuseppe can push crun forward as fast as he likes.
Sticking with runc, I'm not clear on what opencontainers/runc#1807 is blocking on unless it's the integration test request. Maybe it's just runc's maintainer shortage?
Hey all,
Socket activation would be a nice bonus from my perspective, but having something with similar functionality to Docker Compose to spin up multiple podman containers that can talk to each other would be huge. And without something similar, getting people to use podman over Docker is going to be hard (outside of the Kubernetes case, where you have Kompose).
Any updates or advice on this? I want consumers of my multi-process web app (with a Go backend that proxies through to an API process, which in turn talks to Postgres) to be able to get it up and running in just a couple commands, and I've had enough issues with Docker that I don't trust it.
Thanks!
@elimisteve We will look into podman Compose. We have some difficulty in that we don't have a daemon, other than conmon. @mheon has some ideas on this. But getting more contributors would help make it happen; hint hint, PRs welcome.
Why is a daemon needed to create a pod with multiple containers in it that can talk to each other? (I've used rkt and am now attracted to podman as well partially due to its superior architecture of not needing a daemon.)
Hint understood :-) .
I thought that with compose you would be starting up multiple containers and having them communicate, so you'd need something to keep track of these communications and requirements.
BTW We are also trying to do some experimental stuff with handling of pods.
I think we can do some cool things with Compose and pods if we're not aiming for 100% compatibility; it would be an interesting thing to look at.
@rhatdan Seems like "Podman Compose" could do the following and not need a daemon at all:
docker-compose.yml
I assume that this last step can be accomplished by using systemd or whatever podman uses now to run the containers listed in podman ps.
Conmon is the thing that manages the lifetime of the container.
I mean, the pod itself is probably unnecessary, and just running the containers however podman does now should suffice.
Don't even need to order container starts if we add dependencies between service containers as appropriate - podman pod start will guarantee proper startup order of containers based on their dependencies.
Also means you'll be able to interact with compose-defined pods in a natural fashion using normal pod commands (start, stop, restart will all work on the compose pod without issue).
I think it would be best to wait for the pause-container work for sharing pod namespaces that @haircommander is working on to land before seriously investigating this, though. That will make getting all the services connected and talking to each other much easier.
I believe @mheon is referring to #1126, which was merged 8 hours ago! :tada:
That was fast.
@elimisteve Actually separate - I don't think we have a PR open for it yet. We're working on a way to easily share namespaces within a pod without worrying about container startup ordering. We can probably make this work without it, though, now that I think about it.
I really would rather rely on systemd unit files for creating dependencies on container startup. But I guess we need to think about this some more.
+1 for systemd. Systemd unit files will definitely work for managing the dependencies; we will need an API for the lifecycle (systemctl start/stop/status my-pod), logging (journalctl -u my-pod), and as a plus can have unit files defined as an unprivileged user. The question is: is there a programmatic API to create unit files, or would we have to create something that builds and updates the conf file?
Ping, where are we with this? Do we expect podman to work with NOTIFY_SOCKET from a systemd unit file? Because AFAICT it still doesn't. And I also think there's a broken unit test in podman; see below.
What I'm seeing is that, if NOTIFY_SOCKET is defined, runc creates a directory /run/notify in the container and redefines NOTIFY_SOCKET=/run/notify/notify.sock ... and I _think_ it intends to bind-mount that to the host socket, but it isn't. The /run/notify directory in the container is always empty:
# NOTIFY_SOCKET=/tmp/mysock podman run --privileged fedora bash -c 'printenv NOTIFY_SOCKET'
/run/notify/notify.sock
# NOTIFY_SOCKET=/tmp/mysock podman run --privileged fedora bash -c 'ls -laZ /run/notify'
total 8
drwxr-xr-x. 2 root root unconfined_u:object_r:container_var_run_t:s0 40 Nov 6 16:02 .
drwxr-xr-x. 1 root root system_u:object_r:container_file_t:s0:c54,c149 4096 Nov 6 16:02 ..
(same results with NOTIFY_SOCKET=/run/systemd/notify)
I believe that test/e2e/run_test.go:285 is incorrect. It sets NOTIFY_SOCKET=/run/notify, then tests that the container passes that through; but in reality it doesn't matter what you set the environment variable to, it will be /run/notify/notify.sock inside the container (and it will not exist). At best the test is misleading, at worst it's broken.
It is supposed to work, but it required a patched version of runc. Did we lose the patch?
@giuseppe could you look into this?
Fedora and Cent/RHEL runc were carrying the patch as of a month ago, hopefully it hasn't been dropped
The behavior I'm seeing (on f29) is consistent with 1807.patch, Giuseppe's sd-notify patch from May 25. I'm just not sure if the patch is working as intended: it does prevent a hang on start (the problem we started discussing in this thread), I just don't see the socket bind-mounted and able to be used by a process for sd_notify().
Found the problem, but not the solution (I could keep poking, but at this point I think it'll be much quicker for one of you to solve it than me).
In the runc patch, in start.go, line 35:
notifySocket, err := notifySocketStart(context, os.Getenv("NOTIFY_SOCKET"), container.ID())
...$NOTIFY_SOCKET is undefined. I don't understand this, since podman/libpod/oci.go takes pains to preserve it; that's why I'm passing it back on to those more familiar with the code. Thanks.
Addendum: the rest of the patch seems to work. Replacing the os.Getenv() above with a hardcoded string "/run/systemd/notify" results in the socket being created, and sd_notify() (in a test program inside the container) succeeding. Subsequent test using a systemd unit file with Type=notify succeeds.
@giuseppe Could conmon be eating the environment variable?
@rhatdan @edsantiago I've opened a PR here: https://github.com/containers/libpod/pull/1798
The issue was in libpod, which was setting NOTIFY_SOCKET only for runc create but not for runc start.
Also, the test was wrong, as we cannot assume what the NOTIFY_SOCKET will look like inside of the container. As @edsantiago already pointed out, it is hard-coded to a fixed path by runc, which can be different from the NOTIFY_SOCKET specified to podman/runc.
Also, it is important to specify NotifyAccess=all, as the notification is not coming from the process launched by systemd.
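Putting the pieces in this thread together, a unit of the kind being tested here would look roughly like the fragment below (the image is illustrative); NotifyAccess=all is the key line, since the READY=1 datagram arrives from a process inside the container rather than from the ExecStart process systemd forked:

```ini
[Unit]
Description=Illustrative sd_notify container

[Service]
Type=notify
; READY=1 comes from inside the container, not from the podman process
; that systemd launched, so the default NotifyAccess=main would reject it.
NotifyAccess=all
ExecStart=/usr/bin/podman run --rm fedora systemd-notify --ready
```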
@giuseppe I can't seem to get it to work. Pulled the PR, built podman, but my tests all fail. The closest I can come to a diagnostic is:
# NOTIFY_SOCKET=/run/systemd/notify podman run fed_runc /home/sd_notify
[hangs]
[try ^C] ERRO[0002] Error forwarding signal 2 to container b2d93084f6bbc8d62de74a140a42be45f4b400edaf1d0f9540ecbe1b05f0f1e2: can only kill running containers: container state improper
[fed_runc is an image containing a binary, /home/sd_notify, that runs sd_notify(0, "READY=1")]
Running podman ps while container is hung shows it Created, not Running.
I'm out of time tonight but will get back to it tomorrow (Monday) morning. Hope this helps somewhat. Thank you for looking into this.
this is the test I've done to test podman+runc connects to the notify socket:
In a terminal:
# rm /tmp/socket; python -c "import socket as s; sock = s.socket(s.AF_UNIX,s.SOCK_DGRAM); sock.bind('/tmp/socket'); print(sock.recv(1024))"
and from another:
# NOTIFY_SOCKET=/tmp/socket podman run --rm fedora systemd-notify --ready
Do you get READY=1 in the first console?
@giuseppe with a custom-built runc (master + your May 25 patch) it works. With runc-1.0.0-57.dev.git9e5aa74.fc29, I get:
# NOTIFY_SOCKET=/tmp/mysock podman run fedora systemd-notify --ready
Failed to notify init system: Permission denied
It's an AVC:
type=AVC msg=audit(1542040072.759:1474): avc: denied { sendto } for pid=14229 comm="systemd-notify" path="/run/runc/e9bda7b07b785518a48f4fbc5e88f72fc777abee7a191e50634a493717915e0a/notify/notify.sock" scontext=system_u:system_r:container_t:s0:c308,c422 tcontext=unconfined_u:system_r:container_runtime_t:s0-s0:c0.c1023 tclass=unix_dgram_socket permissive=0
Running with --privileged (runc-...-9e5a) succeeds.
ls -lZ /usr/bin/runc /usr/bin/podman
-rwxr-xr-x. 1 root root system_u:object_r:container_runtime_exec_t:s0 38251664 Nov 12 01:00 /usr/bin/podman
-rwxr-xr-x. 1 root root system_u:object_r:container_runtime_exec_t:s0 8882280 Nov 12 16:23 /usr/bin/runc
I don't understand why building my own runc gets it to work. @giuseppe are you using the rpm-packaged runc, or your own? And, are you running in enforcing mode?
Oops, my SELinux was tweaked to allow that. On a fresh installation I see the same issue. I think it is fine if SELinux blocks the example I gave above; it should only work from within systemd.
Maybe you did not build SELinux support, i.e., BUILDTAGS="seccomp selinux"?
That was it, thank you.
it should only work from within systemd
This means that any container run as a systemd service requiring Type=notify is going to have to be run with podman run --privileged. I'm not savvy enough to know whether this is fine or not; I'm leaving this comment here in hopes it helps a future someone.
Fixed in container-selinux-2.76-1.git87fae85.fc*
SELinux would block this whether run directly or run within systemd, so we need to get this new container-selinux package out.
No, please try with updated container-selinux
https://bodhi.fedoraproject.org/updates/FEDORA-2018-a353e572a9
https://bodhi.fedoraproject.org/updates/FEDORA-2018-30f2bfe441
Thanks, @rhatdan. Works as expected on f29.
Wait, what? Why did this get closed? I don't see anything relating to having functionality like this implemented in podman git master...
At some point we started discussing issues related to sdnotify. Those are fixed. The core of the issue is not.
@baude is working on some things that do touch the scope of the original issue, but I don't think they're exactly what you're looking for
@mheon If it's something like OpenShift ImageStream+BuildConfig+DeployConfig yaml with podman, that works too.
The opposite, actually - Kube (and maybe Openshift) YAML from Podman containers
@Conan-Kudo Not really sure what that means.
We want to experiment with using the podman commands people are used to in order to generate the environment. Our goal is not to force a user to edit a configuration/yaml/json... file to build an application containing multiple pods/containers working together using podman. Then use podman to extract, out of the libpod configuration, Kubernetes YAML files to be able to easily launch the same environment into OpenShift/Kubernetes.
Simplest would be to launch a container with podman and then extract out a yaml file to describe how to run the same container in kubernetes.
@rhatdan The idea would be that you'd be able to make a minimal YAML/JSON definition in the OpenShift style to spin up groups of containers with Podman that also happened to just easily import right into OpenShift, so the mechanical process of starting an application as a container would work the same way for single node (Podman) and multi-node (OpenShift).
I would really like to get that case covered - I think our current Kubernetes/Openshift JSON generation misses the original point of Compose (single-node orchestration)
Well, one case @baude is looking at is replay, which would take the generated Kubernetes YAML and recreate the containers/pods with podman. I was actually asked about that last night after mentioning it in a talk I was giving to the NYLUG.
My fear, though, is going to be trying to support all of the Kubernetes YAML config, which could become a huge time sink.
I wouldn't worry about supporting all of the kube YAML config. Most of it will be too tough to process and make assumptions about in podman. Again, if we take this kube-only approach, we will have what I would refer to as a "lite" approach to this.
Perhaps slightly unrelated, but as Podman provides a basic "Docker-compatible CLI" via the docker script (which seems to just be a script to place in /usr/bin/docker that in turn executes /usr/bin/podman), how well does Docker Compose work with this script? Does it not work at all due to reliance on specifics of Docker itself? Does it work, but require minor adjustments?
I'm pretty sure docker-compose interfaces with docker via the socket, not the CLI... so a #nobigfatdaemons design breaks this stuff.
We don't support docker-compose, so I am not sure how much of it talks to the docker socket versus executing the command. Podman is a replacement for the Docker CLI, not the Docker engine API.
We do have podman varlink for a remote API, but it does not follow the Docker API.
I also tried to find an alternative to docker-compose for podman, because creating/updating pods using bash ends up with heavy logic, and Ansible roles execute much slower.
podman play is a very promising feature, but I think it might mean podman has to adapt to k8s API changes all the time.
docker-compose is a separate tool, so maybe a separate podman-compose/podman-kube/podman-
We should probably add a notice to the README that docker-compose functionality is out of scope for podman. People sometimes think that docker-compose is part of the docker CLI and expect to find it in podman.
Thanks for the suggestion, @SergeyBear. I've opened https://github.com/containers/libpod/pull/2428 to address it.
IMHO, adding a k8s generator to podman may lead to problems like k8s had with runtime and storage drivers, when developers had to add and support lots of technologies, which eventually ended with the creation of CRI and CSI. Some will need k8s support, others swarm, and so on...
Maybe it is better to present podman play as a "compose alternative" for local deployment with k8s-API-compliant syntax? Because eventually people will want ReplicaSets, Services and so on to work on podman and will start to create issues...
@SergeyBear Sure. The goal was to make it easy to transition from a traditional container environment to a Kubernetes environment. But once we did that, we needed a way to allow users to transition back, which is why we added play. We did not want to lock ourselves to just Kubernetes, though, so I could definitely see us supporting other formats. That is why we have podman generate kube; if some other format took off, we might support that also.
I just found a great post about podman play and generate that describes the use cases in detail.
Looks like it can indeed be an alternative to docker-compose :+1:
I found this project called podman compose. Haven't tried it but maybe what you are looking for: https://github.com/muayyad-alsadi/podman-compose
We are actually working to move this under the github.com/containers umbrella.