Compose: Epic: "compose run" should support every "docker run" option

Created on 29 Jul 2014 · 59 comments · Source: docker/compose

fig run should support every option of docker run, just referring to a service instead of an image.

Usage: docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

Run a command in a new container

  -a, --attach=[]            Attach to stdin, stdout or stderr.
  -c, --cpu-shares=0         CPU shares (relative weight)
  --cidfile=""               Write the container ID to the file
  --cpuset=""                CPUs in which to allow execution (0-3, 0,1)
  -d, --detach=false         Detached mode: Run container in the background, print new container id
  --dns=[]                   Set custom dns servers
  --dns-search=[]            Set custom dns search domains
  -e, --env=[]               Set environment variables
  --entrypoint=""            Overwrite the default entrypoint of the image
  --env-file=[]              Read in a line delimited file of ENV variables
  --expose=[]                Expose a port from the container without publishing it to your host
  -h, --hostname=""          Container host name
  -i, --interactive=false    Keep stdin open even if not attached
  --link=[]                  Add link to another container (name:alias)
  --lxc-conf=[]              (lxc exec-driver only) Add custom lxc options --lxc-conf="lxc.cgroup.cpuset.cpus = 0,1"
  -m, --memory=""            Memory limit (format: <number><optional unit>, where unit = b, k, m or g)
  --name=""                  Assign a name to the container
  --net="bridge"             Set the Network mode for the container
                               'bridge': creates a new network stack for the container on the docker bridge
                               'none': no networking for this container
                               'container:<name|id>': reuses another container network stack
                               'host': use the host network stack inside the container.  Note: the host mode gives the container full access to local system services such as D-bus and is therefore considered insecure.
  -P, --publish-all=false    Publish all exposed ports to the host interfaces
  -p, --publish=[]           Publish a container's port to the host
                               format: ip:hostPort:containerPort | ip::containerPort | hostPort:containerPort
                               (use 'docker port' to see the actual mapping)
  --privileged=false         Give extended privileges to this container
  --rm=false                 Automatically remove the container when it exits (incompatible with -d)
  --sig-proxy=true           Proxify received signals to the process (even in non-tty mode). SIGCHLD is not proxied.
  -t, --tty=false            Allocate a pseudo-tty
  -u, --user=""              Username or UID
  -v, --volume=[]            Bind mount a volume (e.g., from the host: -v /host:/container, from docker: -v /container)
  --volumes-from=[]          Mount volumes from the specified container(s)
  -w, --workdir=""           Working directory inside the container

Todo (in rough priority order)

All 59 comments

:thumbsup:

:+1:

:+1:

Agree in principle, but since we're already using the -T short flag, we have to hope Docker never adds that. Likewise, if you give a name to --link that is not in the fig.yml, should it use it? If it is in the fig.yml, should it assume it needs to be converted to the actual service name? (i.e. it may actually be intended as an external container that happens to be called 'db' without any prefix/suffix).

A common pattern for this kind of thing is to support passing args after a -- (I'm not sure how docopt supports this, but hopefully it does).

So fig --verbose run console /bin/bash -- -c 3 -e foo=bar

Would pass -c and -e along to the docker command.

This is nice because

  1. You don't have to worry about handling anything in fig, just pass everything along
  2. You can continue to support extra arguments to fig run without ever worrying about conflicting with future docker args
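
For what it's worth, a minimal, hypothetical sketch of how docopt could handle the -- separator (illustrative only, not Compose's actual usage string):

  """Usage:
    fig run SERVICE [--] [DOCKER_ARGS...]
  """
  from docopt import docopt

  # docopt stops option parsing at `--`, so docker-style flags end up in
  # DOCKER_ARGS and could later be translated to docker-py keyword arguments
  # rather than shelled out to the docker CLI.
  if __name__ == '__main__':
      options = docopt(__doc__)
      print(options['SERVICE'], options['DOCKER_ARGS'])

Invoked as python sketch.py run web -- -e FOO=bar -c 3, this prints web ['-e', 'FOO=bar', '-c', '3'].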

Actually, since we don't shell out to the docker client but talk to the API directly, this won't hold true, unfortunately.

Fair enough, you still have to do some translation to the docker-py args, but from what I've seen they seem to be pretty similar. docker-py may not support every argument yet either.

To be more specific I think fig.yml should support all these options.

Especially since I need --cpuset and --memory

:+1: We also need this, especially to pass environment variables to the docker run command!

These options need to be specified within fig.yml and be overridable with fig run / fig up. To mitigate possible future conflicts between fig and docker options/flags, we could prefix them with a 'd' -> --d-cpu-shares, --d-memory, ...

Good idea! Or, as @dnephin suggested, use a double-dash separator for directly passed parameters, e.g. fig up -- -cpu 2 -memory 128m

I think some of the biggest ones are memory, CPU, and ENV settings. These tend to be the ones that change the most for one-off commands and tasks.

If anyone fancies implementing -e, here's how we did it for the old Orchard client: https://github.com/orchardup/python-orchard/blob/e35fa72f90558b1fc1f875f71a4dbe7eabd37b96/orchard/cli/docker_command.py#L257
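
In the same spirit, here's a minimal, self-contained sketch of what -e parsing could look like for compose run (illustrative only; not the Orchard or Compose code), with bare keys falling back to the value in the caller's environment, roughly what docker run -e KEY does:

  import os

  def parse_environment(env_options):
      """Turn ['FOO=bar', 'HOME'] into a dict suitable for docker-py's
      `environment` parameter."""
      environment = {}
      for item in env_options:
          if '=' in item:
              key, value = item.split('=', 1)
          else:
              # Bare key: inherit the value from the calling environment.
              key, value = item, os.environ.get(item, '')
          environment[key] = value
      return environment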

So, what is the status of this request? Using fig without the full set of docker run options is currently very limiting.

@flycatr which options are missing that you need?

This would be extremely useful.

--restart=""               Restart policy to apply when a container exits (no, on-failure, always)

Restart policy would be quite welcome yes.

@saidimu @thaJeztah Thanks! If you have the time to write a patch, that would be very much appreciated. ;)

@bfirsh I'm looking for --device, --cap-add and --cap-drop. I'm running a pptp client inside my container, here is an example of my raw docker command:

docker run -i -t --name mybackup --rm --device=/dev/ppp --cap-add=NET_ADMIN db_ubuntu pppd call my_setting nodetach

@bfirsh @saidimu restart policy - #478

I'm currently playing around with 'fig scale' and would find it useful if there was support for the docker '--publish-all' option.

+1

+1

:+1: Not having restart policy is blocking for me

Restart policies have been implemented and will be part of the docker-compose 1.1 release (fig has been renamed to docker-compose in the next release).

Release candidates are available for download in the releases tab, and the list of changes can be found here: https://github.com/docker/fig/blob/master/CHANGES.md
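
For anyone looking for the syntax, a minimal snippet using the new restart option could look like this (hypothetical service name; the policy names mirror docker run's: no, on-failure, always):

  web:
    image: nginx
    restart: always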

Thank you @thaJeztah I'll take a look at this.

:+1: I desperately need --cpuset and --memory

I've added to the description some pull requests for options that have already been added, in case anybody needs an example to help them add these.

:+1: I need -v, --volume=[] option

:+1:

:+1:

--net=host -e

If it just supported --tlscacert, --tlscert, --tlskey and -H, it would make it possible to use it in conjunction with the output from docker-machine config, like so:

docker-compose $(docker-machine config) up -d

@andrewwatson
Have you tried:

$ eval $(docker-machine config) 
$ docker-compose up -d

Yes, that works fine, but if you're invoking compose from something like the remote-exec block of Terraform, it's not awesome having to eval things into the environment in the middle of what's already a giant embedded shell script...

so what I'm working on would be to tack this on to TopLevelCommand...

      -H                        Daemon socket(s) to use or connect to
      --tlsverify               Use TLS and verify the remote
      --tlscacert               Trust certs signed only by this CA
      --tlscert                 Path to TLS certificate file
      --tlskey                  Path to TLS key file

straight out of the output of "docker help", then grab those CLI options in command.py (https://github.com/docker/compose/blob/master/compose/cli/command.py#L50) and pass them down to docker_client.py.
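
Roughly, the hand-off to docker-py could look like the sketch below (hypothetical helper and option names; docker-py 1.x exposes docker.Client, which later releases rename to APIClient):

  from docker import Client
  from docker.tls import TLSConfig

  def build_client(options):
      # `options` is the docopt dict from TopLevelCommand; the keys used
      # here are assumptions for illustration.
      tls_config = None
      if options.get('--tlscert') or options.get('--tlsverify'):
          tls_config = TLSConfig(
              client_cert=(options.get('--tlscert'), options.get('--tlskey')),
              ca_cert=options.get('--tlscacert'),
              verify=bool(options.get('--tlsverify')),
          )
      return Client(base_url=options.get('-H'), tls=tls_config)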

Sound good?

yup, that sounds right

+1 for supporting -H and --tls options!

@ekristen @andrewwatson I have created this to track that feature: https://github.com/docker/compose/issues/1716

@bfirsh thanks

-w #332 is actually not completed :(

docker-compose does not support the new --shm-size option

Another thing which is really important here is --rm=false, which CircleCI needs in order not to print errors all over the place. You can't remove containers on CircleCI (for some reason relating to permissions, I think), so you can't use docker-compose to build containers satisfactorily.

Will this, by way of association, work with docker-compose "name" up, so we could do docker-compose "name" up --cpuset="0-3"?

This is currently odd, since in the docker-compose.yml it is possible to set which CPU to use.

@Nokel81 docker-compose up is not the equivalent of docker run, because it starts the whole "stack" (all services). Adding --cpuset="0-3" would not make sense there, because it would then be applied to all services.

@thaJeztah Then how would I allow cpuset: "3" in the docker-compose.yml file? I get an error saying "ERROR: Requested CPUs are not available - requested 3, available: 0."

I'd love to be able to mount a --volume for a one-off command. It would make it easier to load DB data from a prod snapshot, for instance.

Work on -v is being done here: https://github.com/docker/compose/pull/4042

Hi, any update on the --init option, recently introduced in Docker v1.13?

--cpuset would be very useful!

I want to use resource constraints without a swarm. How can I do that in my version: "3" docker-compose.yml?

https://docs.docker.com/compose/compose-file/#resources

deploy:
  resources:
    limits:
      memory: "1G"
      cpus: "0.01"

WARNING: Some services ... use the 'deploy' key, which will be ignored. Compose does not support 'deploy' configuration

@jasonben did you find an answer for this?

@zimmertr use the --compatibility flag; see https://github.com/docker/compose/pull/5684
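
For example, with the deploy.resources limits above, the flag attempts to convert the v3 deploy keys to their non-swarm equivalents, so something like:

  docker-compose --compatibility up -d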

Can we set up signal proxying as a thing (e.g. --proxy-signal=True)? It would be nice if signals received by docker-compose run were proxied to the underlying command. Additionally, getting the exit code from the underlying command would be good as well.

EDIT:

I have given this more thought, and it seems like we could just install a signal handler to forward whatever signals we can capture on either platform to the container ID via the kill mechanism. Though it would really only make sense for Linux host signals going to a Linux dockerd and Windows host signals going to a Windows dockerd. Maybe we only support this flag on non-Windows platforms for simplicity?
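
A rough sketch of that idea, assuming docker-py is available and the container ID is already known (names here are illustrative, not Compose internals):

  import signal
  import docker

  client = docker.APIClient()  # docker.Client on older docker-py releases

  def proxy_signals(container_id, signals=(signal.SIGINT, signal.SIGTERM)):
      """Forward the listed host signals to the container's PID 1."""
      def forward(signum, _frame):
          client.kill(container_id, signal=signum)
      for sig in signals:
          signal.signal(sig, forward)

  # After `run` has started the container:
  #   proxy_signals(container_id)
  #   exit_status = client.wait(container_id)  # return shape varies by docker-py version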

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

This issue has been automatically closed because it has not had recent activity during the stale period.
