Machine: Make Machine ready for production or add a "Not ready for production" message

Created on 12 Jan 2016 · 6 comments · Source: docker/machine

I found that in September 2015 the message "Machine is currently in beta, so things are likely to change. We don't recommend you use it in production yet." was removed from the main page; see https://github.com/docker/machine/commit/b2d8bcdd75558e5112b97c239c2fe29a60cf386c

It is good that it is no longer in beta, but personally it feels like you should keep "We don't recommend you use it in production yet." until the following two issues are covered better.

  1. Upgrades. Drivers break upgrades. Just search for the upgrade or production keywords in the issue tracker and you will feel the pain. docker-machine clearly should run compatibility tests which verify that drivers can, without any issues, use machines created with previous versions. Drivers which break this compatibility or do not have such tests should be marked as not for production. Personally, I believe that Docker should set up a test environment and ask each driver maintainer to add the appropriate tests (a rough sketch of such a test follows this list).
  2. Support for multiple Docker clients. I just don't see how you can manage production without easy access to old installations. See the copy-pasted examples below (from https://github.com/docker/machine/issues/2046#issuecomment-170991316).
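To make point 1 concrete, here is a very rough sketch of the kind of compatibility smoke test I have in mind; the binary names, driver and machine name are just examples, not anything docker-machine ships today:

```sh
# Create a machine with the previous Machine release...
./docker-machine-0.5.6 create --driver virtualbox compat-test

# ...then verify that the new release can still see and drive it.
./docker-machine-0.6.0 ls
./docker-machine-0.6.0 ssh compat-test docker version

# Clean up the test machine.
./docker-machine-0.6.0 rm compat-test
```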

If you are building 12-factor apps, you don't care about docker-machine. If you are not there yet, you may upgrade your machines manually.

Let's just take a look at one example (sketched as commands right after the list):

  1. You have a floating IP + a Docker droplet, which you created with docker-machine vX.
  2. New docker and docker-machine versions come out as vNext. You decide that you want to upgrade, and you want to do that with minimum downtime.
  3. Create a new droplet with docker-machine.
  4. Prepare the setup (maybe docker-compose or just scripts). Verify that this droplet handles everything as you would expect.
  5. Switch the floating IP from the old machine to the new one.
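Roughly, in commands (machine names and the token variable are placeholders; the floating-IP reassignment itself happens through the DigitalOcean control panel or API, not docker-machine):

```sh
# Step 3: create the replacement droplet.
docker-machine create --driver digitalocean \
  --digitalocean-access-token "$DO_TOKEN" web-next

# Step 4: point the client at it, deploy, and verify.
eval "$(docker-machine env web-next)"
docker-compose up -d

# Step 5: move the floating IP to web-next (via the DO console or API),
# then retire the old droplet once you are happy.
docker-machine rm web-old
```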

Sounds good, right? But what would you do if something bad happened between steps 1 and 4 and you needed to quickly fix or check something on the old machine? You cannot, because it runs the old Docker, so you need to downgrade to a previous version of docker-machine (and obviously you need to know exactly which one).
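One low-tech way to at least know "exactly which one" later is to record the versions a droplet was created with at creation time; a hedged sketch (the file name and machine name are arbitrary):

```sh
# Snapshot the tool versions next to the machine's name for later reference.
docker-machine version                    >  web-old.versions
docker-machine ssh web-old docker version >> web-old.versions
cat web-old.versions
```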

Another example

You have dev and production environments. You upgraded docker-machine to start integrating/testing new Docker features in the dev environment, but at the same moment something bad happens in production. How can you connect to it? Yes, you need to downgrade, but do you know to which version of docker-machine? Probably not.

I would not be surprised if there were more requirements from other folks, but I think these two are very important.

kind/question

All 6 comments

Thanks for the issue @outcoldman.

The original message in the documentation was just too confusing. We found that people were mistaking "not ready for production" for "not OK to use on my local computer with VirtualBox", which Machine is perfectly acceptable for. "Ready for production" means a lot of different things to a lot of different people, and I think that maintaining a version <1.0 sends at least some signal that the software is still young and some issues might come up.

> Upgrades. Drivers break upgrades. Just search for the upgrade or production keywords in the issue tracker and you will feel the pain. docker-machine clearly should run compatibility tests which verify that drivers can, without any issues, use machines created with previous versions. Drivers which break this compatibility or do not have such tests should be marked as not for production. Personally, I believe that Docker should set up a test environment and ask each driver maintainer to add the appropriate tests.

We are actively working on this and being a lot more rigorous about testing. Our test coverage (integration and unit tests) is increasing, we're being stricter about allowing changes in without tests, and we are collecting error reports that are generally helping a lot to identify the most common issues and fix them.

> Sounds good, right? But what would you do if something bad happened between steps 1 and 4 and you needed to quickly fix or check something on the old machine? You cannot, because it runs the old Docker, so you need to downgrade to a previous version of docker-machine (and obviously you need to know exactly which one).

Most of the time you should be able to update Docker without needing to first update Docker Machine (we just update the package or ISO without hardcoding any version in the Machine binary), so my suggestion would be to do your Docker upgrade first, then upgrade Docker Machine. If any specific version incompatibilities come up here, I'd ask that you please report them as bugs.
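As a hedged sketch of that order (assuming an existing machine named `default`; `docker-machine upgrade` updates the Docker daemon on that machine to the latest release):

```sh
# Upgrade the Docker daemon on the existing machine first...
docker-machine upgrade default

# ...and confirm the daemon version before touching the Machine binary itself.
docker-machine ssh default docker version
```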

> Support for multiple Docker clients. I just don't see how you can manage production without easy access to old installations. See the copy-pasted examples (from #2046 (comment))

Why not just use the Docker client for the oldest Docker daemon that you are currently running in production? For example, if you are running Docker 1.8, 1.9, and 1.10-dev in production, why not always use the 1.8 client? In general, older Docker clients are intended to be able to communicate with newer Docker daemons without issue.
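Illustrating that approach with a hedged sketch (the machine names and pinned-binary path are hypothetical; the point is one old client binary talking to daemons of different ages):

```sh
# Old production daemon (Docker 1.8): client and daemon match.
eval "$(docker-machine env prod-old)"
~/bin/docker-1.8 ps

# Newer dev daemon (Docker 1.10): the same old client still works, since
# older clients are expected to talk to newer daemons.
eval "$(docker-machine env dev-new)"
~/bin/docker-1.8 ps
```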

> Most of the time you should be able to update Docker without needing to first update Docker Machine (we just update the package or ISO without hardcoding any version in the Machine binary), so my suggestion would be to do your Docker upgrade first, then upgrade Docker Machine. If any specific version incompatibilities come up here, I'd ask that you please report them as bugs.

If you perform an upgrade of the Docker host, it means that your service will be unavailable for some time. Usually there is just no need for you to keep the old droplet (if we are talking about DO). When you want to upgrade one droplet, it is much easier to create a new droplet with the latest OS patches and the latest Docker installed, and after that replace the old container with the new one. If you have a clustered environment, you will add new droplets and remove old ones. Doing in-place upgrades on cloud instances is just a waste of time and money.

> Why not just use the Docker client for the oldest Docker daemon that you are currently running in production? For example, if you are running Docker 1.8, 1.9, and 1.10-dev in production, why not always use the 1.8 client? In general, older Docker clients are intended to be able to communicate with newer Docker daemons without issue.

What would be the point of using 1.9/1.10-dev in the dev environment if your client is 1.8 and you cannot use the new features?

I am not saying that implementing a docker version manager is the only option; maybe you can find a better way of doing it. I am just trying to point out that some important scenarios for using Docker in the cloud in production are not covered.

> If you perform an upgrade of the Docker host, it means that your service will be unavailable for some time. Usually there is just no need for you to keep the old droplet (if we are talking about DO). When you want to upgrade one droplet, it is much easier to create a new droplet with the latest OS patches and the latest Docker installed, and after that replace the old container with the new one. If you have a clustered environment, you will add new droplets and remove old ones. Doing in-place upgrades on cloud instances is just a waste of time and money.

Sure, that makes sense, but you don't need to update Machine to create servers with new versions of Docker. Consider this scenario:

(1) New versions of Docker and Docker Machine are released (say, 1.10 and 0.6.0)
(2) You want to upgrade Docker, so you create a new droplet with Machine in anticipation of throwing the old one away. Even if you're using Machine 0.5.X, the Docker version on the created instance will still be 1.10, since Machine follows the upstream installation process dictated by the script at get.docker.com.
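A quick way to check point (2) for yourself, sketched with placeholder names and a `$DO_TOKEN` variable:

```sh
# Even with an older docker-machine, the DigitalOcean driver installs Docker
# via the upstream get.docker.com script, so the droplet gets the latest release.
docker-machine create --driver digitalocean \
  --digitalocean-access-token "$DO_TOKEN" replacement-droplet

# Report the Docker version actually installed on the new droplet.
docker-machine ssh replacement-droplet docker version
```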

> I am not saying that implementing a docker version manager is the only option; maybe you can find a better way of doing it. I am just trying to point out that some important scenarios for using Docker in the cloud in production are not covered.

I'm having a hard time understanding. Either you get compatibility between versions or you get new features; how could you get both? I usually wouldn't expect to start using brand-new features in production right away either.

It might help me to understand better what your current architecture looks like. How many different versions of Docker are you running in production, and which ones?

@nathanleclaire first of all, thank you for keeping the conversation going.

As an example I will share my home projects / machines:

  1. Dev on OS X (VirtualBox) - I try to keep the latest version, so I can always try new Docker stuff.
  2. Home server with Docker (CI, log management, VPN, a few databases and a few home projects). I usually do only major upgrades, or upgrade when I see that I really need something from a new version.
  3. A few DO droplets with websites.

Let's say that right now all of the machines have Machine 0.5 and Docker 1.9.x. My regular day with these machines:
On machine (1) - development, adding/removing machines, and a lot of `docker kill` / `docker rm -v $(docker ps -aq)`, as I do not care what is installed on them.
On machine (2) - just regular upgrades of the installed containers. Sometimes adding new services / killing old ones.
On machines (3) - nothing. If it works, I do not touch it.

Now 0.6.x and 1.10.x come out.
I upgrade (1) right away to play with the new stuff.

At this point in time I do not want to upgrade (2), as I do not know how much time the upgrade will take; plus, I usually want to combine a Docker upgrade with apt-get dist-upgrade to minimize downtime / the number of reboots. But because I upgraded docker-machine/docker, I cannot manage this machine anymore. Which means that I cannot upgrade just the one required service; it actually requires me to upgrade everything (or downgrade the Docker client).
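One workaround here, sketched with hypothetical names and paths: keep a copy of the old client binary around for the un-upgraded machine, or (with newer client releases) pin the API version the client speaks via the DOCKER_API_VERSION environment variable.

```sh
# Point at the un-upgraded home server (machine name is an example).
eval "$(docker-machine env home-server)"

# Option A: a saved copy of the old client that matches the 1.9.x daemon.
~/bin/docker-1.9 ps

# Option B: pin the newer client to the older API (Docker 1.9 speaks API 1.21).
DOCKER_API_VERSION=1.21 docker ps
```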

I upgrade machines (3) only when I see that there are security patches or I need to upgrade the service / website itself (which happens very rarely). Some of the droplets have CI automation behind them and some don't, as they were installed with docker-machine. After not touching them for 4 months I usually find that I no longer have access to them: first because the driver broke something, and then (after I manually fix the driver's config file) because the API version is different.
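For what it's worth, when that happens the places to look are Machine's per-machine config and its built-in cert tooling; a hedged sketch (replace `<name>` with the machine's name):

```sh
# Connection errors show up per machine here.
docker-machine ls

# Machine keeps the IP, cert paths and driver settings in a per-machine file.
cat ~/.docker/machine/machines/<name>/config.json

# If the breakage is TLS-related, re-issuing the certs often helps.
docker-machine regenerate-certs <name>
```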

@nathanleclaire I know you probably live in a brave new world that the rest of us can only dream of knowing, but...

> I'm having a hard time understanding. Either you get compatibility between versions or you get new features; how could you get both? I usually wouldn't expect to start using brand-new features in production right away either.

Some of us old-timers would expect ABI/API compatibility between major versions, so are we to assume that Docker is using supermega.major.minor? Where's patch?

In my world, a fast-moving part which is installed on old laptops, new laptops and vacuum cleaners (i.e. a client) that doesn't understand at least a few previous versions of a slow-moving part which can't be updated by a developer's Jedi mind tricks (i.e. a server) is not normal! Or maybe I'm not normal. Sometimes I'm not sure.

What are they feeding you lot that makes you see things in reverse?

Since this issue is not actionable, I am going to close it. Thanks all.
