Nomad v0.12.0 (8f7fbc8e7b5a4ed0d0209968faf41b238e6d5817)
On Ubuntu 18.04
I upgraded from 0.11.2 to 0.12 and my job file stopped working with error:
Driver Failure: Failed to create container configuration for image "registry.example.com/app:latest" ("sha256:d28fa12ffa5229073a6d7c7c9a5675ff24d76c04e74f796ec9d47637f5759fee"): volumes are not enabled; cannot mount host paths: "/dev/log:/dev/log"
I've never had to use volume_mount, mounts, or anything similar in 0.11.
Relevant section of the job file (which works fine in 0.11):
task "director" {
...
driver = "docker"
config {
volumes = [
"/dev/log:/dev/log",
"/etc/localtime:/etc/localtime:ro",
"local/etc/hosts:/etc/hosts",
]
}
}
From the release notes:
### BACKWARDS INCOMPATIBILITIES:
* driver/docker: The Docker driver no longer allows binding host volumes by default. Operators can set the volume `enabled` plugin configuration option to restore the previous permissive behavior. GH-8261
In client configuration:
```hcl
client {
  options = {
    "docker.volumes.enabled" = "True"
  }
}
```
Hi @kneufeld! Sorry for not highlighting the change further - it's indeed an intentional change to move Nomad closer to a secure-by-default posture, though an annoying backward-incompatible one for sure. We've noted it in the 0.12 Upgrade Guide along with the suggested config option:
plugin "docker" {
config {
volumes {
enabled = true
}
}
}
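For anyone wiring this up, a minimal client agent configuration file with the plugin block in place might look like the sketch below; the file path and `data_dir` value are illustrative, not taken from this thread:

```hcl
# /etc/nomad.d/client.hcl -- illustrative path; adjust for your deployment
data_dir = "/opt/nomad/data"

client {
  enabled = true
}

# Replaces the old `options = { "docker.volumes.enabled" = "true" }` setting.
# Host volume bind mounts are disabled by default as of Nomad 0.12.
plugin "docker" {
  config {
    volumes {
      enabled = true
    }
  }
}
```

The client agents need to be restarted after this block is added so the new plugin configuration takes effect.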
Thanks for the quick response and sorry I missed that in the release notes.
Just ran into this issue myself. Since Nomad is new to me, I spent a stressful evening trying to figure out what I did wrong, only to discover the upgrade guide. Then I spent another few stressful hours chasing the same bug even after implementing the suggested change. For future reference, the only thing that helped was to stop all Nomad nodes (both clients and servers), check the config files one by one to make sure each had the fix in place, and then start them again.
.....not fun at all.
Another nomad noob here. I've spent days trying to figure this out. It would be really nice if nomad produced a more helpful error message, and/or the official docs for the docker driver included this change.
Thanks for your work on nomad, it is greatly appreciated.