Neither the volume nor the volume_mount stanza supports mounting a subdirectory from inside the volume; only the volume root can be mounted.
I would like to avoid restarting Nomad clients for host volume configuration changes, since the project I'm working on requires dynamically adding and removing volumes. I would like to create only one host volume and mount dynamically-named subdirectories from it. This is not possible (or not documented) in version 0.10.0; the only option available is to mount the whole volume.
I would appreciate it too.
I would like to have something similar: define a single host_volume in the client stanza, then use that in the group's volume stanza, specify a sub-directory, and be able to use Nomad variables in the sub-directory path.
I have a job that creates thousands of tasks in a group. Each task needs its own unique sub-directory of the volume mounted into the same location in the container, so each instance can write its stateful data there.
e.g.
// client.hcl
host_volume "my-volume" {
  path = "/var/lib/my-volume"
}
// job.hcl
job "docs" {
  group "example" {
    count = 1000

    volume "example" {
      type    = "host"
      source  = "my-volume"
      sub_dir = "docs/${NOMAD_ALLOC_INDEX}" // proposed new option
    }

    task "example" {
      volume_mount {
        volume      = "example"
        destination = "/var/lib/example/.state"
      }
    }
  }
}
Each instance of the "example" task would write the same file, example-cfg.json, to /var/lib/example/.state/example-cfg.json inside its own container.
On the host file system, you would see:
/var/lib/my-volume/docs/0/example-cfg.json
/var/lib/my-volume/docs/1/example-cfg.json
/var/lib/my-volume/docs/999/example-cfg.json
I hope I have described my use case properly. If mine belongs in a separate issue, I can open one, but this issue created by @gabriel-v seemed similar.
I am not sure if this is relevant, but please do comment with whatever you think.
I feel a volume itself should be schedulable on any client (at least a read-only volume), irrespective of the client config "having" that volume. Restarting the client really seems like a bottleneck.
That's part of what CSI is intended to help with. I have had some thoughts kicking around about "dynamic host volumes" but that's not on the near-term roadmap at the moment.
CSI suffers from the same problem, since there is currently no way to dynamically list volumes from your cloud provider and have them available, or to dynamically provision volumes if they don't happen to exist yet. I have more or less the same use case as @4BitBen: even with CSI, there is still no way for a volume_mount to specify a subdirectory of a mount (whether host or CSI) using variable interpolation.
Some more discussion and requests for this feature in https://github.com/hashicorp/nomad/issues/7110 and https://github.com/hashicorp/nomad/issues/7877
I would love to see this as well. My use case is running many WordPress sites across my Nomad cluster, with each Nomad client having one or more NFS shares mounted into, say, /opt/sites/<file_cluster_id> on the host. Each WordPress site gets its own job file that mounts a volume like /opt/sites/fs1/<site_id>, where site_id is a subdirectory of a host_volume created in the client config.
ex.
// client.hcl
client {
  host_volume "file-server-1" {
    path      = "/opt/sites/fs1"
    read_only = false
  }
}
// job-site-1001.hcl
job "site-1001" {
  group "wordpress" {
    volume "fs" {
      type      = "host"
      read_only = false
      source    = "file-server-1"
    }

    task "web" {
      volume_mount {
        volume = "fs"
        // mounting full path /opt/sites/fs1/1001 here
        subdir      = "/1001"
        destination = "/var/www/html"
      }
    }
  }
}
Without this, I would need to declare hundreds (potentially thousands) of host_volume entries on each of potentially dozens of Nomad clients. Every Nomad client would also need to be restarted every time a new site was created. This workflow is a non-starter without being able to access subdirectories.
If you're using Docker, you can actually do this with volumes in the config section of the Docker driver. (Don't forget to turn on docker.volumes.enabled in the client configuration.)
You can still control host affinity manually if needed.
Kind of hacky but it will work.
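To make the workaround above concrete, here is a minimal sketch of what the Docker-driver volumes approach looks like, assuming docker.volumes.enabled is turned on in the client config. The image name and host path are illustrative, not taken from the thread:

```hcl
// Sketch of the Docker driver workaround. The host path and image
// are examples only; Nomad does not schedule against this path, so
// you must pin the job to hosts that actually have it (e.g. via a
// constraint), as noted above.
job "site-1001" {
  group "wordpress" {
    task "web" {
      driver = "docker"

      config {
        image = "wordpress:latest"

        // Bind-mount a host subdirectory directly into the container,
        // bypassing host_volume entirely.
        volumes = [
          "/opt/sites/fs1/1001:/var/www/html",
        ]
      }
    }
  }
}
```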
What is the config syntax for docker.volumes.enabled? The documentation is not clear.
The docker.volumes.enabled syntax is the older HCL syntax that we're encouraging people to move away from. From the example in https://www.nomadproject.io/docs/drivers/docker#client-requirements it should be:
plugin "docker" {
  config {
    volumes {
      enabled = true
    }
  }
}
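For comparison, a sketch of the older client options form that the comment above refers to (deprecated in favor of the plugin block; shown here from memory, so verify against your Nomad version's docs):

```hcl
// Legacy client config syntax for enabling Docker volumes,
// superseded by the plugin "docker" block.
client {
  options {
    "docker.volumes.enabled" = "true"
  }
}
```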
I've been using the docker volumes solution but I'd like to have sub directories as well.