Nomad v0.5.0-dev ('40d0a4e074b15ee056c5d5cc73d4a338cd19db72')
OS X
The artifact stanza can't "download" local files. Example:
```hcl
artifact {
  source      = "/opt/deploy/www/frontend/_infrastructure/consul-template/php-fpm.ctmpl"
  destination = "local/php-fpm.ctmpl"
}
```
yields
```json
{
  "Type": "Failed Artifact Download",
  "Time": 1477913991678771294,
  "FailsTask": false,
  "RestartReason": "",
  "SetupError": "",
  "DriverError": "",
  "ExitCode": 0,
  "Signal": 0,
  "Message": "",
  "KillTimeout": 0,
  "KillError": "",
  "KillReason": "",
  "StartDelay": 0,
  "DownloadError": "GET error: download not supported for scheme 'file'",
  "ValidationError": "",
  "DiskLimit": 0,
  "FailedSibling": "",
  "VaultError": "",
  "TaskSignalReason": "",
  "TaskSignal": ""
}
```
Nomad prevents arbitrary host file system access except for docker/rkt volume mounting and raw_exec; all of which can be disabled to lockdown a system and ensure containers can't access the host.
I'd like to keep the surface area for accessing the host file system to a minimum. Is there some compelling reason to use artifacts instead of volume mounts here?
@schmichael Today we keep all the source/config files in each project's git repo, which we zip and move to each server. So I basically want to copy the config files from the host system into the alloc dir for Nomad to use.
Using volume mounts did not work either, since `template {}` seems to error on the missing source file before either artifact downloads or Docker volume mounts get executed (or so it seemed from my test today).
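For context, the failing combination described above looks roughly like this: a `template` block pointing at a file that the `artifact` block is expected to fetch first. This is a sketch of the attempted setup, not a working configuration, since the `file` scheme is exactly what fails:

```hcl
# Sketch of the attempted combination (paths from the original report).
# The artifact fetch fails with "download not supported for scheme 'file'",
# and the template block errors on the missing source before any mount happens.
artifact {
  source      = "/opt/deploy/www/frontend/_infrastructure/consul-template/php-fpm.ctmpl"
  destination = "local/php-fpm.ctmpl"
}

template {
  source      = "local/php-fpm.ctmpl"
  destination = "local/php-fpm.conf"
}
```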
@schmichael Though copying a file from the host system should be equally insecure as downloading any random file from the internet, so it seems like a weird omission to have :)
@jippi What you can do in the meantime is run a web server hosting those files and use the artifact block. In a later release we may add a set of directories that can be accessed. The issue right now is that if you allow arbitrary file downloads, you could access anyone's alloc data, which is bad.
As for downloading any file from the internet, we allow you to add the checksum for the file so you get what you expect.
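The interim workaround and the checksum feature mentioned here can be sketched together in one artifact block. The host, port, path, and hash below are all illustrative placeholders, assuming some web server (for example, one serving `/opt/deploy/www`) runs on the Nomad client:

```hcl
artifact {
  # Assumes a web server on the client is serving the deploy directory
  # on port 8000; URL and checksum below are placeholders, not real values.
  source      = "http://127.0.0.1:8000/frontend/_infrastructure/consul-template/php-fpm.ctmpl"
  destination = "local/php-fpm.ctmpl"

  options {
    # Pin the content so you get what you expect (replace with the real hash).
    checksum = "sha256:0000000000000000000000000000000000000000000000000000000000000000"
  }
}
```

If the checksum does not match the downloaded file, the artifact download fails rather than handing the task unexpected content.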
> copying a file from the host system should be equally insecure as downloading any random file from the internet, so it seems like a weird omission to have :)
Also worth noting: preventing copying from the host is about enforcing container isolation, not securing containers against malicious code (which is what the checksum feature is for).
@dadgar Would be nice to have direct access as an option indeed.. we run a single "monolith" environment where the entire box is trusted :)
Also, our PHP-FPM, nginx, and Redis configuration is contained in the repo we deploy through Nomad, so those files will only be available either directly on the host disk or inside the Docker container, which the current Nomad setup doesn't support out of the box.
I'm not sure about the best practice on this, but not being able to use the Docker image content or the host file system seems very limiting to me.
I completely get that this is the first release with this (awesome!) feature, and it needs time to mature and grow into a more feature-rich and secure solution - I'm just bummed that this was my silver bullet for putting everything into Nomad, and now it doesn't seem like it will work without major changes to our 4-year-old deployment strategy, causing me a bit of headache :)
Additionally, production/staging and development require different levels of strictness in Nomad.. for example, when running Nomad on OS X to mirror production, having to run a separate server to serve files from the local environment is ... not ideal either. It should be as simple as `nomad run <file>` and done.
Having it as a flag, similar to `raw_exec`, would be nice :)
@dadgar @schmichael
What I'm looking for is one of the following, to keep moving parts to a minimum:
- All our app code lives in directories on the Docker host machines, and we use a volume to map the code into our "execution container".
- Our repos contain all the nginx/php-fpm/php-cli configuration files for that particular project, so I need a way to get the configuration from a project directory into the alloc directory without having to spin up a local nginx server _just_ to work around copying being disallowed.
- I think it's a pretty tall requirement to have to serve template files from a web server when the files already exist locally, both in a Docker volume and in the host filesystem.
Any advice on how to make this a saner setup is highly welcome.
Hey,
We will get artifacts from the host working in 0.5.1. It is just a bit too much to plumb for 0.5.
@dadgar <3 ! :)
@dadgar I'm running 0.5.1 here and I still get this error. Did it land and I'm missing something? :)
I just got a `GET error: download not supported for scheme 'file'` on 0.5.2 as well.
Sadly I didn't get to it in time for 0.5.3. I'll try again for 0.5.4.
We want to implement it such that the client has a whitelist of paths artifacts can be copied from to maintain a tunable level of encapsulation between tasks.
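The path whitelist described here was never spelled out in configuration terms in this thread. Purely as a hypothetical sketch of the idea - this is NOT real Nomad configuration syntax - it could look something like a client-level safelist:

```hcl
# HYPOTHETICAL: invented option name, not implemented in Nomad.
# Idea: the client operator (a privileged role) declares which host paths
# file:// artifacts may be copied from, preserving isolation between tasks.
client {
  artifact_file_allowed_paths = [
    "/opt/deploy",
    "/etc/app-configs",
  ]
}
```

Any `file://` artifact source outside those directories would then be rejected, keeping job submitters from reading arbitrary client paths such as other allocations' data.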
did it get in yet @schmichael ? :)
I'm also looking forward to this feature. Either being able to copy files from the host to the task, or being able to provide an additional list of files to be copied into the chroot at the task level instead of in the client config.
I have a use case where one would have some templates already on the host and they just need to make it to the allocation directory.
A workaround for sure is a web server to serve these files, but that sounds like overkill for this case. You would be introducing yet another service into the moving parts of your system.
Is this ever going to be addressed?
Is using the local filesystem as the artifact source on the horizon at all? I need Nomad to work with artifacts already on the server's box, as contacting an external API isn't possible, and spinning up a web server does seem like overkill, especially when the go-getter library does support the file scheme. Or is there some specific reason for this option being excluded? I'm running 0.8.6 and still getting this error message.
Any news on this?
Hello, is there anyone there? Would really like to know if this is ever likely to be implemented? If not could someone update the issue with appropriate comments and then close it off?
Seems strange that I can't do this BUT I can map a directory using `chroot_env`.
@danapoklepovich: As @schmichael pointed out, we're hesitant to add this feature because it allows arbitrary content from the client (including other tasks) to be copied into a task. Towards @andye2004's comment, simply enabling the file schema without a safelist gives any job submitter access to potentially the entire client filesystem; contrast this with configuring `chroot_env`, which is performed during the privileged act of running the client.
I'll ask the dev team about a timetable for adding this capability.
Additionally, we are designing a number of features around volume support, including a host volume capability that should address these use cases without compromising security.
@cgbaker @dantoml Thanks guys, appreciate the follow up. Volumes sounds like a much better solution IMO. I'll keep an eye on #5377, got excited when I saw the ticket and then a bit disappointed to see it has only just been opened.....
Closing this. Host volumes are now available in Nomad 0.10, and we don't intend to support the file schema for the reasons outlined above.
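For readers landing here: the host volume setup referenced in this comment can be sketched in two parts (volume name, paths, and task names below are illustrative). First, the operator declares the volume in the client agent configuration:

```hcl
# Client agent configuration: expose a host directory as a named,
# read-only host volume (name and path are illustrative).
client {
  host_volume "deploy-configs" {
    path      = "/opt/deploy/www"
    read_only = true
  }
}
```

Then the job requests and mounts it:

```hcl
# Job file sketch: request the host volume at the group level
# and mount it into the task.
group "frontend" {
  volume "configs" {
    type      = "host"
    source    = "deploy-configs"
    read_only = true
  }

  task "php-fpm" {
    volume_mount {
      volume      = "configs"
      destination = "/local/configs"
      read_only   = true
    }
  }
}
```

Because the volume is declared by the client operator, job submitters can only reach host paths that were deliberately exposed.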
@preetapan @endocrimes Host volumes are not going to help this issue. For details, check
https://github.com/hashicorp/nomad/issues/6846
I'll reopen this as #6846 outlines a use case host volumes do not cover.
That being said: even if the security issues are solved (perhaps with an allow list), the most naive approach's user experience is not ideal:
Unlike host volumes, the scheduler has no knowledge of what files exist on a client's filesystem, so there's no ability to properly schedule a job onto a client that has the files the job expects to exist. This means a user would have to put the files on every node or remember to use node metadata and job constraints when referencing files on the host filesystem.
This isn't a blocker per se, but I'm still unsure if the use cases solved by the file scheme aren't better suited in other ways (eg "build" support for baking artifacts into images, better image management support to avoid the need for locally sourced images, etc).
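The node-metadata-plus-constraint pattern mentioned above could look roughly like this; the `has_frontend_configs` meta key is made up for illustration. On the clients that actually have the files:

```hcl
# Client agent config on nodes that have the files (illustrative meta key).
client {
  meta {
    has_frontend_configs = "true"
  }
}
```

And in the job spec, so the scheduler only places the job on those nodes:

```hcl
# Job spec: constrain placement to nodes advertising that metadata.
constraint {
  attribute = "${meta.has_frontend_configs}"
  value     = "true"
}
```

This works, but as noted, it shifts the bookkeeping onto the operator: the scheduler still has no idea whether the files actually exist, only whether the operator remembered to tag the node.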
@schmichael Thank you.
> This means a user would have to put the files on every node or remember to use node metadata and job constraints when referencing files on the host filesystem.
Those can be the approaches, or one could use GlusterFS.
> This isn't a blocker per se, but I'm still unsure if the use cases solved by the file scheme aren't better suited in other ways (eg "build" support for baking artifacts into images, better image management support to avoid the need for locally sourced images, etc).
Loading images from local storage is a valid requirement when the images are already on the cluster nodes. The "artifact" feature looks focused on loading from outside the cluster nodes.
Seems like minimally this would be useful for troubleshooting locally, even if it is insecure for production.
I think this feature would be nice to have for ease of local development. `nomad agent -dev` and local artifact sources would make it super easy to get up and running, and would be useful for experimentation as well as rapid iteration.