Hi folks,
This is not an issue. I am sorry if this is the wrong place to ask this question. If this is not the right place, please point me to the right place.
I understand that we can use docker-machine to connect to different hosts using drivers like VirtualBox, cloud provider drivers, etc. If I already have a host running Docker on bare-metal Linux, how do I integrate it with docker-machine? Without docker-machine, I could do the same by running a Docker daemon on a particular port and connecting externally from a docker client to the daemon's IP and port. There is an option in docker-machine to create a host without any driver -- is that for this purpose? I couldn't find how to use it to connect.
Thanks
Sreenivas
Hi @smakam, I believe what you are looking for is the url driver: http://docs.docker.com/machine/#adding-a-host-without-a-driver
Hi @nathanleclaire
Thanks for the response.
I did look at the link you mentioned and tried the following, but no luck.
First, I tried without any TLS:
On the Ubuntu machine, I did this to start the Docker daemon:
sudo docker -d -H unix:///var/run/docker.sock -H tcp://192.168.56.101:2376 &
On Windows, where I had docker-machine installed, I did this:
$ docker-machine create --url=tcp://192.168.56.101:2376 custom6
INFO[0000] "custom6" has been created and is now the active machine.
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
custom6 * none tcp://192.168.56.101:2376
I get this error:
$ docker-machine env custom6
FATA[0000] open C:\Users\srmakam\.docker\machine\machines\custom6\ca.pem: The system cannot find the file specified.
Not sure if docker-machine enforces TLS. I also tried starting the Docker daemon with a certificate and key and connecting with the docker client, but no luck with that either.
Thanks
Sreenivas
We do essentially mandate TLS in our current form. If you set up your own CA and certs/keys, I think you could probably use them via the --tls-ca-cert, --tls-ca-key global options to Docker Machine. @ehazlett Any comment?
Hi
I tried with TLS and still couldn't get it working; not sure what I am missing:
I started docker on my Ubuntu host:
sudo /usr/bin/docker -d --tlsverify --tlscacert=
Then I tried to create a docker-machine host without driver:
docker-machine --tls-client-cert=
When I tried to see environment, I get the following error:
$ docker-machine env custom3
FATA[0000] open C:\Users\srmakam\.docker\machine\machines\custom3\ca.pem: The system cannot find the file specified.
Thanks
Sreenivas
You would most likely want to use your existing CA, client certs, etc. for all machine-related stuff, as the --tls-ca-cert etc. settings are global.
As for your docker daemon, you want to use the CA and server cert/key -- in the above you are using the CA cert, but the client key and cert. You would need something like:
docker -d --tlsverify --tlscacert ca.pem --tlscakey ca-key.pem --tlscert server.pem --tlskey server-key.pem
@ehazlett
I tried to use the global settings in the "certs" directory for docker-machine create, specifying the options on the command line, but it still complains about "ca.pem" missing in machines/
How do I get "server.pem" and "server-key.pem"? Should I generate it?
I was able to connect a docker client to a Docker daemon on a separate machine using TLS, without using docker-machine.
Thanks
Sreenivas
Can you show the command line args used? If you specify the certs machine should just use them. If not, it's a bug :)
Hi @ehazlett
First, I started the Docker daemon like this:
sudo /usr/bin/docker -d --tlsverify --tlscacert=/home/xxx/.docker/machine/certs/ca.pem --tlskey=/home/xxx/.docker/machine/certs/key.pem --tlscert=/home/xxx/.docker/machine/certs/cert.pem --host=unix:///var/run/docker.sock --host=tcp://0.0.0.0:2376
Then I started docker-machine client like this:
docker-machine --tls-ca-cert=/home/xxx/.docker/machine/certs/ca.pem --tls-client-key=/home/xxx/.docker/machine/certs/key.pem --tls-client-cert=/home/xxx/.docker/machine/certs/cert.pem create --url=tcp://0.0.0.0:2376 custom3
I got this error when setting environment:
xxx@ubuntu:~$ docker-machine env custom3
open /home/xxx/.docker/machine/machines/custom3/ca.pem: no such file or directory
Here, I am running the Docker daemon and docker-machine on the same Ubuntu machine. I get a similar error when running the Docker daemon on Ubuntu and docker-machine on Windows.
Thanks
Sreenivas
You should not be using key.pem and cert.pem for the Docker engine. The engine needs a server key/cert pair (Machine will create these).
For the environment, how did you create the custom3 machine? It looks like something went wrong during creation if that file doesn't exist.
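If you ever do need to generate a server key/cert pair by hand from the existing Machine CA, it boils down to roughly this -- a sketch adapted from the Docker TLS how-to, where the IP and file locations are placeholders, not something Machine itself requires you to run:
# create a server key and a CSR for the daemon's address (placeholder IP)
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=192.168.56.101" -sha256 -new -key server-key.pem -out server.csr
# sign it with the same CA Machine uses (ca.pem / ca-key.pem from ~/.docker/machine/certs)
echo subjectAltName = IP:192.168.56.101,IP:127.0.0.1 > extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server.pem -extfile extfile.cnf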
Hi @ehazlett
How should I start the Docker engine? As per my understanding, docker-machine mandates using TLS.
This is how I created the custom3 machine; it didn't give me any error:
docker-machine --tls-ca-cert=/home/xxx/.docker/machine/certs/ca.pem --tls-client-key=/home/xxx/.docker/machine/certs/key.pem --tls-client-cert=/home/xxx/.docker/machine/certs/cert.pem create --url=tcp://0.0.0.0:2376 custom3
Should I be creating server key/cert pair manually like what machine would do if driver was used with docker-machine?
Thanks
Sreenivas
@smakam this should be correct. It should generate a server key based on that existing CA. I will do some testing to see if there is an issue.
@smakam Any update on this issue or can we close it?
@nathanleclaire I am still not able to get it to work. I even tried docker-machine 0.3.0 with the generic driver procedure mentioned here (http://blog.docker.com/2015/06/docker-machine-0-3-0-deep-dive/).
This is what is mentioned:
docker-machine create -d generic \
--generic-ssh-user ubuntu \
--generic-ssh-key ~/Downloads/manually_created_key.pub \
--generic-ip-address 12.34.56.78 \
jungle
I assume .pub is a typo and we need to give the private key. I got two kinds of errors on two different hosts:
case 1:
Importing SSH key...
Error creating machine: exit status 1
You will want to check the provider to make sure the machine and associated resources were properly removed.
case 2:
Importing SSH key...
Error getting SSH command to check if the daemon is up: exit status 1
Error getting SSH command to check if the daemon is up: exit status 1
By the way, where do the detailed docker-machine logs get stored?
I tried with docker-machine on both Windows and Linux.
Thanks
Sreenivas
Same problem here.
Neither the --url nor the -d generic approach works.
I tried the same steps as the original poster and got a similar issue, at which point I tried the generic driver and ran into an error similar to case 1 (CentOS 7).
Same problem. Can't use --url and specify the certificate.
Might I suggest the Docker team write a short tutorial taking us through it step by step? It would be very helpful.
Same issue here with the AnyConnect VPN connected; after I restarted my laptop without AnyConnect, it was gone.
+1 for the issue.
I've tried to use docker-machine create --url= with my existing Docker setup on the remote host, but no luck. Docker Machine fails trying to load the TLS cert from $HOME/.docker/machine/machines/<name>/ca.pem.
Same problem. I tried same steps, but still no server cert/keys generated.
docker-machine version 0.4.0
Docker version 1.8.0
Same problem. I tried same steps, but still no server cert/keys generated.
docker-machine version 0.4.1
Docker version 1.8.1
@csokun @miracle-in-sunday @narqo Generally it is assumed that when using --url you "bring your own certs", although I do admit that bit of the code has been languishing, so it may be broken.
If you would like certificates and keys to be generated automatically, try the generic driver: https://docs.docker.com/machine/drivers/generic/
If that doesn't work for your use case, can I please ask that you file a separate issue detailing the exact steps you are taking, and the results you are seeing?
Thanks!
Hi guys, did you try with --virtualbox-hostonly-cidr specified? It worked for me:
BartSlaman@VLRNB176 ~
$ docker-machine create -d virtualbox --virtualbox-hostonly-cidr "192.168.99.100/24" dev4
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Starting VM...
To see how to connect Docker to this machine, run: C:\Program Files (x86)\Git\bin\docker-machine env dev4
BartSlaman@VLRNB176 ~
$ docker-machine env dev4
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.101:2376"
export DOCKER_CERT_PATH="C:\Users\BartSlaman\.docker\machine\machines\dev4"
export DOCKER_MACHINE_NAME="dev4"
Regards
Bart Slaman
Any update on this issue? I tried --url with the --tls-* options, but I got the error "open /Users/user/.docker/machine/machine/ss/ca.pem no such file or directory". I use docker-machine version 0.4.1.
Same here, but different error on creating:
Importing SSH key...
SSH cmd error!
command: sudo hostname internal && echo "internal" | sudo tee /etc/hostname
err : exit status 1
output : sudo: no tty present and no askpass program specified
And then when running eval "$(docker-machine env internal)":
open /Users/marlon/.docker/machine/machines/internal/ca.pem: no such file or directory
That means the certs are not being generated.
Funny thing, I can ssh to the machine by running "docker-machine ssh internal".
I was able to get --url to work and point to an existing Docker engine on DigitalOcean created with Docker Machine.
To use --url or the none driver, I just copied the existing ~/.docker/machine/machines/dobox/ folder from the machine that I used to create the host and deleted cert.pem, key.pem, id_rsa.pub, id_rsa and config.json (leaving ca.pem, server.pem and server-key.pem). I then generated a new client cert/key pair (https://docs.docker.com/articles/https/, starting at "For client authentication") inside the newly created directory from before and copied it over to the machine I was trying to connect from. Lastly, I added the remote host using docker-machine create --url=tcp://SOME_IP:2376 dobox and moved the certs to where Docker Machine expects them to be: ~/.docker/machine/machines/dobox/. The folder should already be there and contain config.json, so you are just adding your certs. I am not trying to change the TLS flags/auth scheme that Docker is using, which are:
--tlsverify \
--tlscacert="/home/roberto/.docker/machine/machines/dobox/ca.pem" \
--tlscert="/home/roberto/.docker/machine/machines/dobox/cert.pem" \
--tlskey="/home/roberto/.docker/machine/machines/dobox/key.pem" \
-H=tcp://SOME_IP:2376
Now I can docker $(docker-machine config dobox) images or just eval "$(docker-machine env dobox)" etc. on the second client machine.
docker-machine version 0.4.0
Docker version 1.8.2
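In case it helps anyone, the client cert/key generation step mentioned above (from the Docker HTTPS article) boils down to roughly the following -- just a sketch, run inside the machine's cert directory, and the CA file names (ca.pem / ca-key.pem) are assumptions:
# create a client key and a CSR
openssl genrsa -out key.pem 4096
openssl req -subj '/CN=client' -new -key key.pem -out client.csr
# sign it with the existing CA, marked for client authentication
echo extendedKeyUsage = clientAuth > extfile-client.cnf
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile-client.cnf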
How do I authenticate client and server in Docker? Whose username and password do I have to configure for it?
I am using https://docs.docker.com/reference/api/docker_remote_api_v1.20/ and have the docker client and server on the same host machine.
Also, please let me know what to do if they are not on the same machine.
Guys, for the sake of God, please add a tutorial on how to import existing docker machines.
I've created two different Azure cloud machines from two separate PCs, and now I can't connect from one PC to the Azure machine that was created on the other.
I've already spent two evenings trying all the suggestions, with no result.
@nathanleclaire wdyt?
@dmp42 About what exactly? --url being broken is a pretty well known issue, and we want to support using Machines from multiple different computers soon with portable configs.
I suppose it would be great to have at least a clear statement, somewhere in the official documentation or here on GitHub, that importing existing hosts from PC to PC does not work for now (or has some known issues).
@baio +1
I have a CentOS 7 host on which I have already installed the standard docker package from the standard CentOS repo -- i.e. I did not use the package from the Docker website.
From my local workstation, when I try to create the machine for this Centos 7 host using the generic driver, it fails with "exit status 1" when it tries to install a package called "docker-engine".
I'm guessing this doesn't work if Docker is already installed on the remote host?
Related: #2270
I think I have a minimal reproducible example:
VAGRANTFILE_API_VERSION = "2"
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
# Box
config.vm.box = "ubuntu/precise64"
config.vm.box_url = "http://cloud-images.ubuntu.com/vagrant/precise/current/precise-server-cloudimg-amd64-vagrant-disk1.box"
# To make this easily reproducible
config.ssh.insert_key = false
config.vm.network "private_network", ip: "192.168.50.4"
end
$ vagrant up
ssh -i ~/.vagrant.d/insecure_private_key vagrant@192.168.50.4
^ This works for me
$ docker-machine create -d generic --generic-ssh-user vagrant --generic-ssh-key ~/.vagrant.d/insecure_private_key --generic-ip-address 192.168.50.4 repro
Importing SSH key...
Error creating machine: Maximum number of retries (60) exceeded
You will want to check the provider to make sure the machine and associated resources were properly removed.
Are you guys getting the same result? Let me know if there's anything further I can try to help debug.
Same issue here. Unable to make --url (no driver) work with:
--tlsverify -H=unix:///var/run/docker.sock -H=0.0.0.0:2376 --tlscacert=/root/.docker/ca.pem --tlscert=/root/.docker/cert.pem --tlskey=/root/.docker/key.pem
~/.docker/machine/machines/mymachine/
When typing docker-machine env mymachine, it fails with:
Error running connection boilerplate: Error checking and/or regenerating the certs: There was an error validating certificates for host "": open /Users/f2i/.docker/machine/machines/anakin/server.pem: no such file or directory
You can attempt to regenerate them using 'docker-machine regenerate-certs name'.
Be advised that this will trigger a Docker daemon restart which will stop running containers.
Without docker-machine, everything seems fine:
> docker --tlsverify -H=myhost:2376 ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
Just wondering, what's the real difference between generic (--driver "generic" --generic-ip-address ...) and no driver (--driver "none" --url ...)?
My understanding is that generic connects through SSH (so it's like using docker directly on the remote server) and no driver connects to the host using TCP.
Consequently, using no driver, why do we need to have server.pem and server-key.pem on the client host? They should not be managed by machine, right?
Even when providing those last PEM files, it fails with:
Error running connection boilerplate: Error checking and/or regenerating the certs: There was an error validating certificates for host "": crypto/tls: failed to parse private key
You can attempt to regenerate them using 'docker-machine regenerate-certs name'.
Be advised that this will trigger a Docker daemon restart which will stop running containers.
The "no driver" is more a integration test driver used for development. But if you hack enough your setup can be used to register an existing machine. It is scheduled to be removed in a future release ( #2437 ). The current documentation is misleading on this none
driver.
The generic
driver should be used to register and properly install docker on any host.
Note that it will at the very least restart the docker daemon on the target host. But it's a reliable way to register any 'generic' machine into docker machine as long as you provide an ssh access.
@jeanlaurent the problem with the generic driver is that it mucks with your machine if Docker is already installed (e.g. changing the hostname, running yum update). I think what many of us are trying to say is that the none driver has a first-class use case, and if it worked properly it would be a valuable capability to many of us (in addition to the targeted drivers).
Agree with @metasim on this. The none driver is such a common use case. This is one of the most basic ways to interact with the remote Docker API. It would be unfortunate to rename it test. Perhaps the confusion comes from the fact that we don't want to create/manage a new machine (docker-machine create) but only connect to an existing machine...
I'm not saying we shouldn't be able to 'register' an existing machine into docker-machine one way or the other; we just need to do it properly. There is a lot of confusion around the none driver, it doesn't work in a lot of places, and people are misusing it. The intent here is to prepare the ground for a real 'register existing machine' feature.
But once a machine is 'hacked' into docker-machine through the none driver, there is a lot of stuff that is going to break down the chain -> restart, upgrade, and obviously ssh. Manually editing the JSON file is actually just as effective as using the none driver to register a machine.
@metasim Upgrading the generic driver to not 'muck with your machine' is one way of handling it.
@vpusher We may need to design a proper register feature.
@jeanlaurent Thanks for your response. I can see that upgrading generic to handle this use case (including existing keys) may be a good way to go. However, in the spirit of "separation of concerns", I'd recommend considering a new driver (existing? preconfig? manual? diy? :-) ) to handle this use case, particularly so that stability within this narrow context can be better established.
FYI: #2260 and #2269 (closed but not really addressed) were an attempt at capturing some of this confusion around the none driver.
PS: I think it's wholly acceptable for some commands to be explicitly unsupported in this use case (e.g. restart, upgrade, etc.).
@metasim Yup, a dedicated driver is probably the best way to handle a register feature, but it's not the only one; a dedicated command is another. But before deciding how to do it, we need to clarify/define in which cases we actually need to register a machine without docker-machine installing Docker on it.
If you take a look at PR #2442, for instance, we ping the Docker host for the Docker version, because we want to be able to provide upgrading at some point, or warn you that your Docker host is too old for your current Docker client. With a machine on which we didn't install the Docker daemon, this will prove difficult, or at the very least very flaky.
If we consider that we can update the generic driver in some way, I'm wondering for which use cases someone would want to register a machine into docker-machine without providing SSH access.
What's yours? Let's list them.
Some commentary on defining the use case for none and/or potentially deprecating it here: https://github.com/docker/machine/pull/2437#issuecomment-160768813
My feeling is that we (the Machine team) should:
- even if we deprecate the none driver, wait to completely remove it until 0.7.0;
- communicate clearly about the future of the none driver, and work towards defining the ideal future workflow (whether it's import/register or changes to the generic driver) so that interested users can follow along and have adequate time to prepare for changes coming down the pipe.
@dgageot @jeanlaurent How does that sound?
@jeanlaurent I can see that ssh would pretty much be required, which isn't so bad. Up to now I'd just assumed most everything could be done over the REST API.
I use docker-machine mostly as a tool to set up the env variables required to make docker talk to another host. I never use the upgrade/restart etc. commands. My Docker hosts are in vSphere with a custom template created through a vOrchestrator workflow -- using the vSphere driver in docker-machine is not an option.
That said, I realize that my use case is not the only use case, but I hope that it's something to consider at least.
I could probably replace docker-machine with a shell alias, for my purposes at least.
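Something along these lines, in case it's useful -- a minimal sketch assuming the certs already live in a local directory (the host name and paths are made up):
# hypothetical host and cert directory; adjust to your own setup
export DOCKER_TLS_VERIFY=1
export DOCKER_HOST=tcp://my-docker-host.example.com:2376
export DOCKER_CERT_PATH=$HOME/.docker/my-docker-host
docker ps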
Here is a sample use case in case it helps:
I would like to create my EC2 machines in Amazon using Elastic Beanstalk instead of docker-machine, because Elastic Beanstalk has a lot of goodies (like auto-scaling and machine restarts). I would like to register those with docker-machine and control them using Docker Swarm.
Upgrading those machines to Docker 1.9 is trivial, but they run some Amazon Linux (an ancient CentOS fork) and generic does not work with them. Because the Docker engine there does not have TLS, the trick I use to manage them remotely with plain docker is to expose docker.sock on localhost:2375 using socat (much like in https://github.com/sequenceiq/docker-socat) and open an SSH tunnel from my local machine with something like ssh -i id_rsa [email protected] -L 2375:localhost:2375 -N. Then I can connect over the network with docker --tls=false -H tcp://localhost:2375.
It is a lot of gymnastics. The combination of SSH access and a working Docker setup (even without SSL) should be all that docker-machine needs for many interesting cases (no upgrade or restart, as others noticed, but I never use those anyway).
Hope that is useful.
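For anyone who wants to follow the same route, the gymnastics above look roughly like this -- a sketch; the user, host and key path are placeholders:
# on the remote host: proxy the unix socket onto a local-only TCP port (assumes socat is installed)
socat TCP-LISTEN:2375,bind=127.0.0.1,reuseaddr,fork UNIX-CONNECT:/var/run/docker.sock &
# on the workstation: forward that port over SSH
ssh -i id_rsa ec2-user@my-eb-host -L 2375:localhost:2375 -N &
# then talk to the remote daemon without TLS through the tunnel
docker --tls=false -H tcp://localhost:2375 ps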
@bonitao Glad you spelled that out. I also have the "tunnel through ssh" use case in some enterprise engagements.
I'm just starting to try again to use Docker Machine with existing hosts... I agree with @metasim's statements around the none driver... It has value today for the community, since the generic driver is not yet doing what it is intended to do... I spent innumerable hours trying to get none working, only to learn that there's a proposal to rename it to test. @nathanleclaire Any plans to make the generic driver work as described above???
I spent a good amount of time on https://github.com/docker/machine/issues/2628 and this is a use case that we have... supporting existing teams across the company with Docker...
I've been googling, assuming I was just completely missing some understanding, trying to figure out how to connect, from my CI server, to an Azure docker machine that I created elsewhere. I ended up here and am surprised this actually is not possible (without hackery).
I created and got Azure docker-machine instances fully up and running from a workstation, and I just want to be able to control them and deploy to them from CI scripts, which can run from a number of CI slave instances. Is there really no official way this is supposed to be accomplished?
I've updated my Docker and now I can no longer connect to my old docker-machine instance. I get:
Error running connection boilerplate: Error checking and/or regenerating the certs: There was an error validating certificates for host "xxxxx:2376": open : no such file or directory
You can attempt to regenerate them using 'docker-machine regenerate-certs [name]'.
Be advised that this will trigger a Docker daemon restart which will stop running containers.
The regenerate-certs command gives me:
Regenerate TLS machine certs? Warning: this is irreversible. (y/n): y
Regenerating TLS certificates
Detecting the provisioner...
Installing Docker...
Copying certs to the local machine directory...
Copying ca.pem to machine dir failed: open : no such file or directory
Funny thing is that I can docker-machine ssh without any problems...
Any fixes?
Scenario: Existing and working server with Docker and TLS enabled.
Add existing server/machine to docker-machine:
docker-machine --tls-ca-cert path/to/ca.pem --tls-ca-key path/to/ca-key.pem --tls-client-cert path/to/client.pem --tls-client-key path/to/client-key.pem create --driver none --url tcp://HOST:2376 NAME
In your user directory (~/.docker/machine/machines/NAME), add the same client certificate as "cert.pem" and "server.pem", and the client certificate key as "key.pem" and "server-key.pem". Also adjust your config.json to include the relevant SSH settings...
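In shell terms, that copy step looks roughly like this -- a sketch, where NAME and the path/to/* files are the same placeholders used above:
cd ~/.docker/machine/machines/NAME
cp path/to/client.pem cert.pem
cp path/to/client.pem server.pem
cp path/to/client-key.pem key.pem
cp path/to/client-key.pem server-key.pem
# ca.pem should already be in this directory from the create step above (an assumption); then edit config.json by hand for the SSH settings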
I had raised this issue. I recently got this working, and I have put the instructions here (https://sreeninet.wordpress.com/2015/05/31/docker-machine/) in case anyone wants to refer to them.
Here's a script that's kinda working for me:
https://github.com/docker/machine/issues/3344#issuecomment-212536797
@devcrust where exactly are these:
path/to/ca.pem
path/to/ca-key.pem
path/to/client.pem
path/to/client-key.pem
???
$ ls certs/
ca-key.pem ca.pem cert.pem key.pem
$ ls machines/adhoc/
ca.pem cert.pem config.json id_rsa id_rsa.pub key.pem server-key.pem server.pem
I cannot set it up properly. I have this issue too:
Copying ca.pem to machine dir failed: open : no such file or directory
After upgrading to 0.6.0... I lost TCP access to 12 machines.
@nathanleclaire Any update on this? I'm trying to figure out how I'd connect to a Docker host running in Microsoft Azure, that I created from a different computer, using docker-machine. Right now, I have zero solutions.
Solution: _(you wish)_
So sad that so much time has passed and still no feature has been introduced to solve this problem.
For what it's worth, you can create a machine using the generic driver, but that will restart all your containers.
Adding restart: always to your containers will guarantee that they come back up. I'd second the docker-machine add option.
I'm also game for having a docker-machine add option.
Here are two projects with different approaches to machine sharing:
@jeanlaurent
"What's yours? Let's list them."
My idea was to connect my local machine to an existing physical server with a load of containers running. Doing docker-machine create --driver generic will most likely stop them, and more. I wonder why it needs to restart Docker...
Well, I can simply run commands over SSH, but from its description it looked like docker-machine could be used as well.
But then, what if you created a VM from one computer and want to manage it from another one? Or you reinstalled your local OS... Or you want to delegate control over the VM to somebody else...
P.S. I'm taking my first steps with Docker, so there might be points I'm missing...
@x-yuri To "create" a machine manually, just copy its files from .docker/machine/machines
and adjust its paths.
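For example, something like this -- a sketch assuming the machine "dobox" was created on host A and you want to use it from host B (names and paths are placeholders):
# copy the machine directory (and, if needed, the CA) from host A
scp -r hostA:~/.docker/machine/machines/dobox ~/.docker/machine/machines/
scp hostA:~/.docker/machine/certs/ca.pem hostA:~/.docker/machine/certs/ca-key.pem ~/.docker/machine/certs/
# fix the absolute paths stored in config.json so they point at host B's home directory (GNU sed)
sed -i "s|/home/userA|$HOME|g" ~/.docker/machine/machines/dobox/config.json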