Hi,
Running docker 0.6.7 on Ubuntu 13.04.
I built a container whose CMD executes a uwsgi process (also tried with other executables). I run without detaching, and sigproxy is True.
Hitting Ctrl-C seems to have no effect on the running container, and the only way to leave it or kill it is with docker kill or docker stop.
...
Is this a known issue?
Starting with docker 0.6.5, you can add -t to the docker run command, which will attach a pseudo-TTY. Then you can type Control-C to detach from the container without terminating it.
If you use -t and -i then Control-C will terminate the container. When using -i with -t then you have to use Control-P Control-Q to detach without terminating.
Test 1:
$ ID=$(sudo docker run -t -d ubuntu /usr/bin/top -b)
$ sudo docker attach $ID
Control-C
$ sudo docker ps
The container is still listed.
Test 2:
$ ID=$(sudo docker run -t -i -d ubuntu /usr/bin/top -b)
$ sudo docker attach $ID
Control-C
$ sudo docker ps
The container is not there (it has been terminated). If you type Control-P Control-Q instead of Control-C in the 2nd example, the container would still be running.
A pull request to fix the docs for the Hello World daemon sample is here:
https://github.com/dotcloud/docker/pull/2845
I'm unaware of where you saw the recommendation to Control-C outside of this example. If you saw this reference somewhere else, can you please submit a new pull request to fix the docs you referenced?
You might also find this mailing list thread helpful.
@lhazlewood Well, -t -d lets you kill the container with ctrl-c in the docker attach, however the problem is:
$ ID=$(docker run -d ubuntu bash -c "while true; do echo foo; sleep 5; done")
$ docker attach $ID
In this case ctrl-C does nothing. I expected it to kill the "docker attach" process (not the container).
If the daemon was not started with -t, I think docker attach should default to -sig-proxy=false.
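For what it's worth, a minimal sketch of that workaround using the attach command's sig-proxy flag (the loop command here is just a stand-in for any non-TTY daemon):
# Start a container without a TTY, then attach without proxying signals:
ID=$(docker run -d ubuntu bash -c 'while true; do echo foo; sleep 5; done')
docker attach --sig-proxy=false "$ID"
# Ctrl-C now interrupts the docker attach client itself instead of being
# forwarded into the container, so the container keeps running:
docker ps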
From what I can gather you need both -t and -i for Ctrl-C to work as expected...
@vmalloc That depends on what you expect. I expect to be able to detach from "docker attach" with ctrl-c, not kill the daemon. That is not currently possible without manually specifying -sig-proxy=false. If you accidentally do this, the only way out is to kill the docker attach process from another terminal.
Ah, sorry, I was a bit confused above. If you run "docker run -d -t" and then docker attach, then ctrl-c _does_ detach rather than kill the daemon. The problem is that if you forgot the -t and then ever use docker attach, you end up with something you can't detach from without killing it from a different terminal.
@alexlarsson Yes, the two test cases I show above show what happens w/ and w/o -i.
Yes, I am experiencing the same issue. An easy way to reproduce is docker run busybox sleep 60 and then CTRL-C all you want. The command will not terminate or detach until the 60 seconds are up. I would expect CTRL-C to send a SIGINT to the sleep command, which should stop the docker instance.
Also see #2855
To have ctrl+c stop the container you must use -it
To detach from the container you should use ctrl+pq
Closing.
Why was this closed? Can you please explain the reason?
I asked because it is obvious that the majority of the community does not want the current default behavior.
Also note that ctrl+pq _does not work_ on Mac OS X running Ubuntu in VirtualBox.
@lhazlewood Because the issue wasn't about ctrl+pq, it was about getting out of attach with ctrl-c, which is not supported since "attach" is literally attaching you to the running process and should be expected behavior.
If you just want the stdout+stderr streams you should use docker logs, not attach.
I see this as a training issue and not a Docker issue.
If we want to discuss changing ctrl-pq (and I know I've seen other issues around it), that's fine.
I'm not sure that we can change it at this point.
We talked about introducing a configurable command for it, but it was implemented server-side, which just wasn't ideal, and implementing it client-side is difficult.
And I write all this in the friendliest of tones; I'm terse by nature, sorry about that.
I am VERY confused by all this.
The default behaviour when sitting at a shell prompt is that ^C kills the running foreground process. And if no such process exists, it just does nothing, except maybe show a new prompt. ^D, on the other hand, does exit the shell, without stopping the machine. This is what I expect, because this is what everyone else does.
So, why does Docker do it differently? And why am I reading this strange discussion? Am I missing something?
@rolkar Because there are two foreground processes in this case.
The process you're attached to in the container and the docker client.
I personally think this should behave in much the same way as SSH, though with a different escape sequence.
FYI SSH's escape sequence is <enter>~.
Still don't get it. Why Ctrl-C doesn't send SIGINT?
@Vanuan - Because the process is running in a docker daemon and you only attach to that daemon's tty. Let's say that ^C would kill the process that the daemon has started. Then you would kill the container daemon. Not unreasonable. Maybe what you and I want (in most cases). But it is not obviously the right thing to do, and some have other opinions.
Yeah, I figured it out. Docker actually sends SIGINT, but the kernel ignores it because process id is 1. And since the process doesn't have its own SIGINT handler, nothing happens.
The kernel doesn't ignore it, the process does.
@cpuguy83 So you're saying that if pid is not 1, SIGINT is sent to the parent process?
@Vanuan The signal is always sent to the process you specified, but some processes don't load signal handlers when they are pid 1, so the signal gets ignored.
The only way to kill it is with kill -9, in which case the kernel terminates the process rather than the process terminating itself.
Doesn't appear to be true:
// main.c — minimal check: does the process receive SIGINT when running as PID 1?
#include <unistd.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>

// Custom SIGINT handler: just report that the signal arrived.
void catch_interrupt(int sig) {
    printf("Interrupting...\n");
}

int main() {
    signal(SIGINT, catch_interrupt);  // register the handler (comment out for the second test)
    sleep(10);
    return 0;
}
$ docker run -it --rm -v `pwd`:/src -w /src iron/gcc:dev gcc main.c -o main
$ docker run -it --rm -v `pwd`:/src -w /src iron/gcc:dev ./main
^CInterrupting...
Same setup with the signal() call commented out:
$ docker run -it --rm -v `pwd`:/src -w /src iron/gcc:dev ./main
^C^C^C^C^C
# nothing happens
As you see, the signal handler is loaded. And even though the PID remains 1, the process responds to Ctrl-C and terminates (though we didn't ask it to terminate, we only handled the signal). But when there's no signal handler registered, it doesn't respond to Ctrl-C and nothing happens.
@Vanuan Yes, this is exactly as I said.
The signal is still sent to the program, the program just doesn't respond to it.
I referred to this part:
some processes don't load signal handlers when they are pid 1
Does it mean something else? Signal handlers are always loaded when they're registered.
But it appears that there are some default signal handlers which are not loaded if pid = 1 AND signal handler is not registered. Is that what you meant?
@Vanuan exactly.
I'm still confused about this. Example run:
docker run -i $IMAGE
ping google.com
PING google.com (216.58.217.46) 56(84) bytes of data.
64 bytes from 216.58.217.46: icmp_seq=1 ttl=61 time=31.5 ms
^C64 bytes from 216.58.217.46: icmp_seq=1 ttl=61 time=31.5 ms
^C^C^C^C^C^C^C^C64 bytes from 216.58.217.46: icmp_seq=1 ttl=61 time=31.5 ms
So ctrl+c doesn't send SIGINT to either the docker process or the ping process. Is docker stop my only option here? (ctrl+p, ctrl+q also does not work. I'm on OSX)
@johshoff It depends on how ping was started by the image.
Is it running inside a /bin/sh? If so, the signal is sent to /bin/sh, which ignores signals when run as pid 1.
Thanks, @cpuguy83, that makes it clearer. ping was indeed running under /bin/sh.
Is it running inside a /bin/sh? If so, the signal is sent to /bin/sh, which ignores signals when run as pid 1.
No, /bin/sh doesn't ignore Ctrl-C. Ping doesn't ignore it either. The issue here is the missing -t flag:
docker run -i $IMAGE
should've been
docker run -it $IMAGE
E.g.:
docker run --rm -it alpine sh -c "ping -c 4 google.com"
vs
docker run --rm -i alpine sh -c "ping -c 4 google.com"
vs
docker run --rm -ti alpine ping -c 4 google.com
vs
docker run --rm -i alpine ping -c 4 google.com
I agree with @cpuguy83 in that I don't see a problem with docker here; the annoyance comes from the fact that the signal is sent to /bin/sh. The very simple solution to this is to use
sh -c "exec ping google.com"
which replaces the shell process with the ping process, instead of
sh -c "ping google.com"
When using sh -c "exec ping google.com", the signal (Ctrl-C) is sent to ping as opposed to sh. And ping does not ignore this signal when run as pid 1.
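As a quick way to compare the two forms, a sketch along the lines of the alpine examples earlier in this thread:
# The shell stays as PID 1 and swallows the SIGINT; Ctrl-C appears to do nothing:
docker run --rm -i alpine sh -c "ping 8.8.8.8"
# exec replaces the shell, so ping becomes PID 1 and receives the Ctrl-C:
docker run --rm -i alpine sh -c "exec ping 8.8.8.8"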
@jarl-dk
And ping does not ignore this signal when run as pid 1
If it doesn't, why wouldn't you use ping directly?
You can tell node.js to handle the signals:
// For Docker
process.on('SIGINT', function() { console.log('Caught Ctrl+C...'); process.exit(); }); // Ctrl+C
process.on('SIGTERM', function() { console.log('Caught kill...'); process.exit(); }); // docker stop
I have started a container with a ping command. When I attach to the container, the ping process keeps on running even after I press ctrl+c.
How do I stop the ping in the container?
Steps used:
docker run --name centos-linux -d centos /bin/sh -c "while true; do ping 8.8.8.8; done"
docker attach centos-linux
I'm sorry but I don't understand why there is sometimes no way to detach from an attached container. I understand that some processes may ignore SIGINT if running as pid=1 but there must be a way to kill or detach an attached container in such cases from within the same terminal session.
For example, when using the eboraas/apache image, which executes apache in the foreground by using the CMD instruction in exec form within the Dockerfile:
CMD ["/usr/sbin/apache2ctl", "-D", "FOREGROUND"]
When I start a container from this image with docker run -p 80:80 -d eboraas/apache and attach to it with docker attach, or if I accidentally miss the -d flag, I have no chance to either detach from the container or stop/kill the container from the same terminal session.
To stop the container I have to use another terminal session to execute docker stop. ctrl+c, ctrl+d and ctrl+pq have no effect even if I start the container with the -i flag.
I'm using Docker for Mac under Mac OS X.
Put this in your alpine 3.4 Dockerfile:
RUN apk add tini
ENTRYPOINT ["/sbin/tini", "--"]
And you're good to go.
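A quick way to check the change, as a sketch (the image tag is just a placeholder):
docker build -t myimage .
docker run --rm -it myimage
# tini runs as PID 1 and forwards Ctrl-C (SIGINT) to whatever CMD starts,
# so the container now stops as expected.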
1 year later we still can't Ctrl+C a docker container
@sebdelvalle You can. It's just that not all docker containers respond to Ctrl+C. If you created software running in a docker container, it's your responsibility to handle Ctrl+C properly. If you don't shut down your software on Ctrl+C, nobody will do it for you.
You see, when you run your software without docker, you have a terminal and an init system which do handle Ctrl+C. When you run it in docker, you don't have any terminals. Your software becomes the system's entrypoint. It's like booting straight into your software.
I had the same issue with my node.js apps.... I fixed it with
/**
 * Does what it says :-)
 */
function endProcess(reason) {
  // eslint-disable-next-line no-console
  console.log(`Quitting... Reason: ${reason}`);
  process.exit();
}

/**
 * Adds hooks for Docker CTRL-C and Stop
 */
function dockerConfig() {
  ['SIGINT', 'SIGTERM'].forEach((signal) => {
    process.on(signal, () => {
      endProcess(signal);
    });
  });
}

module.exports = dockerConfig;
Which command stops the docker container?
ctrl + C doesn't stop it.
@romenigld docker stops when the process stops
exec works for me.
for example, running uwsgi, ctrl+c does not work:
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 1)
spawned uWSGI worker 1 (pid: 6, cores: 1)
spawned uWSGI worker 2 (pid: 7, cores: 1)
spawned uWSGI worker 3 (pid: 8, cores: 1)
spawned uWSGI worker 4 (pid: 9, cores: 1)
spawned uWSGI worker 5 (pid: 10, cores: 1)
^C^C^C^C^C^C^C^C^C
But then running exec uwsgi, ctrl+c is explicitly received:
*** uWSGI is running in multiple interpreter mode ***
spawned uWSGI master process (pid: 1)
spawned uWSGI worker 1 (pid: 6, cores: 1)
spawned uWSGI worker 2 (pid: 7, cores: 1)
spawned uWSGI worker 3 (pid: 8, cores: 1)
spawned uWSGI worker 4 (pid: 9, cores: 1)
spawned uWSGI worker 5 (pid: 10, cores: 1)
^CSIGINT/SIGQUIT received...killing workers...
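Assuming the uwsgi process above is started through a shell, the difference boils down to something like this (the image name and ini file are hypothetical placeholders):
# The shell stays as PID 1 and never forwards the SIGINT to uwsgi:
docker run --rm -it my-uwsgi-image sh -c "uwsgi --ini app.ini"
# exec hands PID 1 over to uwsgi, which installs its own SIGINT handler:
docker run --rm -it my-uwsgi-image sh -c "exec uwsgi --ini app.ini"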
@montanaflynn I was able to just open another terminal tab and run: docker stop <container_id>
Another way, if you're using node like @alvarow, is to use npm start and a rule in package.json, and npm will shut down on ctrl-c.
Like this:
{
  "name": "prom-koa-example",
  "version": "1.0.0",
  "description": "Expose prometheus metrics in koa",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "koa": "^1.2.0",
    "koa-router": "^5.4.0",
    "prom-client": "^9.1.1"
  }
}
I used a shell script because my command docker-php-entrypoint apache2-foreground was not exiting on Control+C.
# Start the real command in the background and remember its PID.
docker-php-entrypoint apache2-foreground &
apache_pid="$!"

kill_apache() {
  kill "$apache_pid"
}

# Forward Ctrl-C (SIGINT) to the background process.
trap 'kill_apache' INT

# Keep the script in the foreground until apache exits.
wait "$apache_pid"
Here, docker-php-entrypoint apache2-foreground is just an example of a command which does not respond to Control+C. If you have another command which does not respond, you can try to replace it with yours and see if the script responds to Control+C.
You're starting apache using apache2-foreground but then daemonizing it using &?
Here is my understanding of it.
When writing a Dockerfile, there are two instructions relating to starting the container process: ENTRYPOINT and CMD.
ENTRYPOINT defines the executable to be launched, and CMD defines the arguments to be passed to the executable. By default (with a shell-form CMD), the effective entrypoint is /bin/sh -c.
Therefore, when a Dockerfile doesn't have an ENTRYPOINT and only has a CMD "node src/index.js", the process that is actually running in the container is something like: /bin/sh -c "node src/index.js"
Now for some reason I do not fully understand, when sh receives a signal (SIGTERM from a docker stop or SIGINT from ctrl-c in docker run -it ...), it does not forward this signal to the process it started (node src/index.js in that case).
So the correct way of defining a Dockerfile to run a nodejs container would be:
FROM node:8
# some stuff...
ENTRYPOINT ["node"]
CMD ["src/index.js"]
Where src/index.js features a signal handler for SIGINT (and SIGTERM).
Now, docker stop <container> works perfectly.
This can easily be reproduced for other types of processes, so I hope it helps :smiley:
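One quick, hedged way to see the difference is to time docker stop against both variants (the container name is a placeholder):
# With the shell-form CMD, /bin/sh swallows the SIGTERM, so docker stop waits
# out its 10-second grace period and then sends SIGKILL. With the exec-form
# ENTRYPOINT/CMD above, the handler in src/index.js runs and it exits immediately.
time docker stop my-node-container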
Now for some reason I do not fully understand, when sh receives a signal (SIGTERM from a docker stop or SIGINT from ctrl-c in docker run -it ...), it does not forward this signal to the process it started
Correct; processes running as pid 1 often act differently than the same process running as a different pid (pid 1 is "special" as it's the computer's, in this case the container's, main process). When running as pid 1, sh won't forward signals, thus its child processes are not killed.
Specifying the process directly through ENTRYPOINT / CMD is one option; another option is to exec the process you want to run at the end of your entrypoint script.
For example, the official httpd image uses an httpd-foreground wrapper script to perform some initialisation tasks, but the last step is to exec the container's main process (httpd); https://github.com/docker-library/httpd/blob/53654452889ae3af537eef2dbb981ccac6fb907f/2.2/httpd-foreground#L1-L7
Using exec makes the shell itself quit and run the process that you specify instead (thus making it pid 1 inside the container).
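As a minimal sketch of that pattern (the script name is only a convention; anything referenced by your ENTRYPOINT works):
#!/bin/sh
# docker-entrypoint.sh: run one-time setup here, then hand PID 1 over to the
# real process so it receives signals directly.
set -e
# ... initialisation steps ...
exec "$@"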
Quoting my recommendation above:
RUN apk add tini
ENTRYPOINT ["/sbin/tini", "--"]
Now you don't care whether you use sh or not in your CMD.
If you don't need sh, don't run it; using exec saves you from one extra process running inside the container, which may be helpful.
(Also, using docker run --init will automatically inject tini in your container, without the need to add it)
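For instance, a sketch using busybox (any long-running command that would otherwise ignore signals as PID 1 will do):
# tini becomes PID 1 and forwards SIGINT/SIGTERM to the sleep child,
# so Ctrl-C now terminates the container instead of being ignored:
docker run --rm -it --init busybox sleep 60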
Cool! Is there a corresponding option for compose?
Looks like there's not: https://github.com/docker/cli/issues/51
So for swarm, and if you use docker-compose, you'd still need tini.
It's possible to enable the init option as a default (by setting a daemon option); then all containers, including those started as part of a service, get --init. There's a PR in progress: https://github.com/docker/cli/pull/479
Can't detach with CTRL+P,CTRL+Q in my Ubuntu 17.04 OS. Arrrghh
1 year later we still can't Ctrl+C a docker container
Now it's 2 years.
Still getting the error on MacOSX.
@felipekm not helpful: what problem are you running into? Have you read this thread?
Yes I have @thaJeztah, I'm just trying to kill a process initialized by docker run -ti <image_id>.
@felipekm in another terminal type docker ps and then docker stop <container_id>
If you want Ctrl+C to work, you'll need to change your docker container to respond to SIGINT.
Update: This has been answered here https://github.com/moby/moby/issues/37200
Real world use-case: running Docker container with tests inside CI with:
docker run -ti my-image my-test-script
The CI does not support TTY so I get:
the input device is not a TTY
stdin: is not a tty
ERROR: Job failed: exit status 1
How do I propagate a SIGTERM up to the container entrypoint when I don't have a TTY?
My desired process tree:
bash
docker run -ti my-image my-test-script
The CI runner will send a SIGTERM to the bash script, which will propagate it to the docker run, but since there is no TTY support it can't run with -ti.
This works in my terminal.
@aalexgabi This what I used on CI (jenkins):
# Fix docker signal proxy issue without tty
function docker() {
case "$1" in
run)
shift
if [ -t 1 ]; then # have tty
command docker run --init -it "$@"
else
id=`command docker run -d --init "$@"`
trap "command docker kill $id" INT TERM
command docker wait $id
fi
;;
*)
command docker "$@"
esac
}
# export to sub-shell
export -f docker
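With that function exported, existing pipeline steps keep their usual shape, e.g. (image and script names are the placeholders from the earlier comment):
docker run my-image my-test-script
# In a terminal this becomes docker run --init -it ...; on the CI (no TTY) it becomes
# docker run -d --init ... plus docker wait, with INT/TERM trapped to docker kill.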
@felipekm Try adding --init when starting your container. This will start Tini as PID 1, and Tini will ensure your application gets SIGINT (instead of being intercepted by /bin/sh).
More details in the thread above.
In summary:
- docker run -it won't be enough if the program you run doesn't handle the signal. Especially problematic if it's not your program... such as is the case when another program interprets your program in a limited way.
- ENTRYPOINT ["/sbin/tini", "--"] as the entry point... instead of your app.
- docker run -it --init, which will solve all your problems.
@guiambros that is so useful it's going in my aliases.
Also for swarm mode:
The corresponding compose change was recently merged, so in the next version you'd be able to do this:
version: '3.7'
services:
myservice:
image: myimage
init: true
Finally you won't have to modify images just to add tini so that your containers aren't killed (exit code 137) when not responding to SIGTERM. Especially useful for databases to prevent data loss.
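Once that lands, the usual commands pick the option up, for example (the stack name is a placeholder):
docker-compose up
docker stack deploy -c docker-compose.yml mystack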
Finally you won't have to modify images just to add tini so that your containers aren't killed (exit code 137) when not responding to SIGTERM. Especially useful for databases to prevent data loss.
Just adding this as additional information (some of which is mentioned in earlier comments);
While using --init (or adding tini in your image) can be a quick way to help with images that don't handle signals properly, be sure you're not "masking" an actual problem with the image.
Tini provides the following features:
1. it reaps zombie processes that the container's main process fails to clean up, and
2. it forwards signals (such as SIGINT and SIGTERM) to the process it runs.
If you know that the process running inside the container is a "bad actor" and doesn't handle reaping, using --init is definitely a good choice to get you going. Be aware, though, that you're effectively working around a bug; be sure to report the issue with the maintainer/publisher of the software you're running in the container: perhaps they're not aware of this situation, and can fix it.
If you're using --init because of 2., this may be because the image you're running was not well-designed; is the container's main process _actually_ the main process, or is it running in a shell?
For example; the following CMD will start a shell (/bin/sh or /bin/bash, depending on the image) in which mysqld is started;
CMD /usr/sbin/mysqld
Because of this, /bin/sh (not mysqld) has become the container's main process (PID-1), and any signal sent to the container will be handled by /bin/sh (and not forwarded to mysqld). Using --init will "resolve" that situation (/bin/sh now runs as PID-2, and mysqld runs as PID-3), but still runs an unnecessary shell process in the container.
To make the container's process run _without_ a shell, use the JSON ("exec") form. For example, below is the CMD for the official mysql image;
CMD ["mysqld"]
But what if you need to run some commands when starting the container (before running the container's main process), such as setting permissions, or doing some setup the first time the container is run?
These steps can be done in an entrypoint script. The entrypoint script can run a shell, perform initialization (see for example the init steps in the official WordPress entrypoint script), and at the end switch to the container's main process using exec.
When using exec, the current process is replaced with another process, so when using exec at the end of the entrypoint script, the container's main process will run as PID-1 and not be running inside a shell; here's the last line of the WordPress entrypoint script;
exec "$@"
The Dockerfile uses both an ENTRYPOINT and CMD, in which case CMD is used as parameter for the ENTRYPOINT;
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["apache2-foreground"]
exec "$@" will thus default to exec apache2-foreground, but will be replaced with the command that's used to start the container (for example, docker run mysql echo "hello" will run the entrypoint script, then exec echo "hello")
TL;DR; the --init option is safe to use, and may help your situation, but be sure you're not papering over the actual problem in the image :smile:
Update: This has been answered here https://github.com/moby/moby/issues/37200
@rayfoss @thaJeztah Again, --init and -ti are not a solution when you don't have a TTY, see #37200
I would like docker run myScriptImage to be exactly the same as running ./myScript, except that it runs in an isolated and controlled environment. I would like Ctrl-C to work out of the box as for "any other script". It would be ideal if, in any bash script, I could replace any system command with an equivalent docker run myEquivalentCommand and have it work the same way as if I launched the command locally (stdin, stdout, stderr, exit code, input buffering, output buffering, handling of signals etc.).
Update: This has been answered here https://github.com/moby/moby/issues/37200
@tsl0922 Thank you, but that's what I'm trying to avoid doing. I feel that I might end up writing a command called wrap-docker-run-as-if-is-a-regular-program that will handle all of that using bash traps, instead of copy-pasting that code into all the CI pipelines (there are 30 or so).
@aalexgabi
--init and -ti are not a solution when you don't have a TTY see #37200
TTY has nothing to do with being able to send a signal or catch it. Please read my messages in that issue thread you mentioned.
One question: I wonder what solution is used by the official images... say nginx. An nginx container can be terminated using SIGINT when not in detached mode.
@bidiu First of all, it uses exec form:
CMD ["nginx", "-g", "daemon off;"]
Secondly, nginx handles signals on its own.
And finally, it uses STOPSIGNAL SIGTERM, so docker stop sends SIGTERM (and Ctrl-C in interactive mode sends SIGINT), both of which nginx handles to terminate itself.
You could figure it out yourself by looking at its Dockerfile:
https://github.com/nginxinc/docker-nginx/blob/master/mainline/alpine/Dockerfile
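You can also check what an image declares without reading its Dockerfile, e.g. (assuming the image is pulled; newer nginx tags may declare SIGQUIT instead):
docker inspect --format '{{.Config.StopSignal}}' nginx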
@Vanuan Aha, I didn't know the STOPSIGNAL directive before, and I also wasn't familiar with modes other than "exec form". Thanks for your clarification!
So there's shell form and exec form. Shell form is used like this:
CMD nginx
Exec form like this:
CMD ["nginx"]
In the first case the shell handles your signal; in the second case the nginx process does.
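A quick, hedged way to see which process ends up as PID 1 in each case, using the alpine-based nginx image (its busybox provides ps):
# Shell form equivalent: the command is wrapped in sh -c, so sh is PID 1:
docker run --rm -d --name shellform nginx:alpine sh -c "nginx -g 'daemon off;'"
docker exec shellform ps
# Exec form (the image default): nginx itself is PID 1:
docker run --rm -d --name execform nginx:alpine
docker exec execform ps
docker rm -f shellform execform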
For folks coming here after not being able to stop a node process with a SIGINT, I found this other issue also very helpful in understanding this behaviour.
As of right now, with Windows 10, you need to run it like this inside of git bash:
winpty docker run -it <id>