Similar in theme to #60 and #61, we want to share data between tasks in a group. We will need to mount the task group allocation directory as a docker volume so we can share data between these processes.
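A sketch of what this might look like in a job file (group, task, and image names are all placeholders; this assumes the shared allocation directory is exposed to each task, e.g. through a `NOMAD_ALLOC_DIR`-style path, which is an assumption, not a committed design):

```hcl
# Hypothetical job fragment: one group, two tasks sharing the group's
# allocation directory, mounted into each container as a volume.
group "pipeline" {
  task "producer" {
    driver = "docker"
    config {
      image = "myorg/producer" # placeholder image
      # writes its output under the shared allocation directory
    }
  }

  task "consumer" {
    driver = "docker"
    config {
      image = "myorg/consumer" # placeholder image
      # reads the producer's output from the same shared directory
    }
  }
}
```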
+1
Fundamental!
Thanks
+1
+1
This is closed by https://github.com/hashicorp/nomad/pull/290
Is the doc updated with this? I don't see anything on https://nomadproject.io/docs/drivers/docker.html regarding volumes.
@cleung2010 So this issue was about tasks in a group sharing an ephemeral volume. If you're looking for mounting host volumes in docker containers, that would land in the future.
@diptanu do you mean we cannot mount parts of the host's filesystem into Docker containers right now?
@c4milo I think that's the case, don't see that in the docs or source. I believe @diptanu's response confirms it.
@cleung2010 I believe you can pass -v arguments to the docker command, but I haven't tested it to be really sure. That's why I'm asking, I was under the impression this was possible today.
ah nevermind, it is intended for the actual binary run inside the container. So it is not possible.
Ok so we currently create an allocation directory on the host and mount it on every container, so every container gets a host volume for writing data. But it is currently not possible to mount an arbitrary path on the host on a container.
This involves allowing operators to configure which paths on the host are allowed for mounting, which tasks can mount which paths on the host, etc. We want the design to be right, so we haven't done anything on this yet. But this is something we understand is very important, so I think it would be nice to just jot down a bunch of use cases that would need us to mount host volumes.
@diptanu, so my current use case is being able to access the host's Consul agent from containers. I was thinking of sharing the directory where unix sockets are being created so containers can use them. Unfortunately, the current allocation directory implementation does not seem to help with this specific use case, as it purges the allocation directory shared among the containers.
@c4milo Yeah we have been wondering how Tasks which have network isolation could talk to Consul, I guess that would be a good way to implement this.
I've been using cephfs to persist and share volumes across containers using host volume mounts when starting docker. For example, I bring up containers and mount users' home directories, and that's very important for my application.
Should I be mounting cephfs volumes within the containers themselves? That would (I think) require that the containers run in privileged mode, which would seem otherwise unnecessary.
I have a simple use case I'm sure lots of other people do: I'd like to run a docker task which mounts a volume as a path from the host. For example, in my case, a configuration management or logging agent would need access to the underlying host filesystem.
I have a lot of respect for HashiCorp and the projects you've collectively put forth, and I am often very grateful for the patience and care put into the abstractions, interfaces, etc. In many respects, that is what sets HashiCorp apart from the others. But this situation seems counter-intuitive.
Please confirm: is our workaround for the moment to enable the _raw_exec_ driver so we can run `docker run .... -v ... ....`? Seems silly to do that _instead_ of using the docker driver directly, and _only_ because there's no easy way to tell a task "add this filesystem path as a volume".
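For reference, the workaround I'm describing would be a task config along these lines (task name, image, and paths are placeholders; this assumes the raw_exec driver is enabled on the client, and that docker is on the host's PATH):

```hcl
# Hypothetical workaround sketch: shell out to docker via raw_exec
# just to get a -v host volume mount.
task "myapp" {
  driver = "raw_exec"
  config {
    command = "docker"
    args    = ["run", "--rm", "-v", "/host/path:/container/path", "myorg/myapp"]
  }
}
```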
Another use case: Volumes are frequently how secure details like certificates are injected into web servers. Building the cert into the docker image is terrible. And a plethora of little loader scripts as the entry point into said images is an anti-pattern that should be avoided where possible. (loader script to go retrieve a cert from
Yet one more: There are some system services that I'd like to use that would preferably be launched as containers. Some of them require access to the engine socket. The typical way to handle this is with `--volume=/var/run/docker.sock:/var/run/docker.sock`. There is currently no way to do this with nomad. Neither `exec` nor `raw_exec` is an acceptable alternative: do you run the container attached, for example?
In most cases, it's hard to argue that raw_exec wouldn't be sufficient, though with docker it means you need to create a wrapper script to do it correctly. The general flow in the wrapper is to:
a) remove the named container if one exists (the script should use `|| true` or similar to avoid exiting if the named container does not exist)
b) create a new named container
c) start that named container in daemon mode, then
d) detached, run `docker logs` with "follow" (similar to `tail -f`); the script should exit if that `docker logs` exits (which means the container failed/stopped)
This follows the recommendations from docker upstream, see https://github.com/docker/docker/issues/6791 for more details on that.
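The steps above can be sketched roughly as follows (container name, image, and extra arguments are placeholders; this merges the create/start steps into a single `docker run -d`, which is a simplification of the flow, not a verbatim implementation):

```shell
#!/usr/bin/env bash
# Hypothetical wrapper for running a named docker container under raw_exec.
set -euo pipefail

run_wrapped() {
  local name="$1"; shift
  local image="$1"; shift

  # a) remove a leftover container with the same name; `|| true` keeps
  #    the script alive when no such container exists
  docker rm -f "$name" >/dev/null 2>&1 || true

  # b + c) create and start the named container detached (daemon mode);
  #    any remaining arguments (e.g. -v host:container) go to docker run
  docker run -d --name "$name" "$@" "$image"

  # d) follow the logs (similar to tail -f); when `docker logs -f`
  #    returns, the container has stopped, so the wrapper exits too
  docker logs -f "$name"
}

# Only execute when called with arguments, so the function can be
# sourced without side effects.
if [ "$#" -ge 2 ]; then
  run_wrapped "$@"
fi
```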