I made the following task:
...
task "build" {
  driver = "docker"

  config {
    image = "${CONTAINER}:latest"
  }
...
And then I run:
$ CONTAINER=builder nomad run build.nomad
In the Nomad logs:
[ERR] client: failed to start task 'front' for alloc '9b50f709-23cd-bbd5-6861-d0ad10216ef3': Failed to pull `${CONTAINER}:latest`:
API error (500): Error parsing reference: "${CONTAINER}" is not a valid repository/tag
Could you explain how to use variables in the right way?
Hey,
Nomad doesn't use the environment variables you set when launching it. To set a task's environment variables, use the env block in the task. See the docs here.
Your task would become:
task "build" {
  driver = "docker"

  config {
    image = "${CONTAINER}:latest"
  }

  env {
    CONTAINER = "builder"
  }
}
However, we do not currently support interpolation in all config fields. The fields that support it are the arguments, and, for the exec/raw_exec drivers, the command as well.
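To illustrate where interpolation does apply, here is a minimal sketch (the task name, image, and meta variable are placeholders I've made up, not from this thread); the image field is left literal while the args field is interpolated:

```hcl
task "echo-branch" {
  driver = "docker"

  config {
    # the image field is not interpolated in this Nomad version,
    # so it must be a literal value
    image = "busybox:latest"

    # args do support interpolation, so runtime variables such as
    # job meta or node attributes can be injected here
    args = ["echo", "branch is ${NOMAD_META_BRANCH}"]
  }
}
```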
@dadgar I've looked through the documentation and a lot of issues, but I'm unable to determine:
Is there a way to use environment variables in the job spec file (like how docker-compose easily allows you to access them)? I'm not trying to pass these variables to the container.
e.g.
While using the auth attribute inside config (with the docker driver), I wouldn't want to hard-code my Docker Hub credentials. Instead, I'd like to access them through environment variables available to me.
task "foo" {
  driver = "docker"

  config {
    image = "my-private-org/my-private-image"

    auth {
      username = "${DOCKERHUB_USERNAME}"
      password = "${DOCKERHUB_PASSWORD}"
    }
  }
}
I export the credentials as env vars before running the Nomad job spec file:
export DOCKERHUB_USERNAME=hello
export DOCKERHUB_PASSWORD=world
Is this currently possible?
@duaraghav8 it's probably worth a quick google about the dangers of storing secrets in environment variables. IMO you're a lot safer storing secrets in a file which you can explicitly control access to than in the environment which implicitly gets inherited by every subprocess.
thanks @hvindin, you're right about that. In fact, we're going to store the secrets in Vault. This was just an example to illustrate how I wish to access data through env vars inside nomad job spec file. I'm mostly going to use the vars for accessing some directory structure info.
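Since Vault is the plan, here is a hedged sketch of how Nomad's template stanza can render Vault secrets into a task's environment; the secret path, field names, and policy name are assumptions for illustration, and the env = true parameter loads the rendered file as environment variables:

```hcl
task "foo" {
  driver = "docker"

  config {
    image = "my-private-org/my-private-image"
  }

  # grant the task a Vault token with a policy that can read the secret
  # (policy name is a placeholder)
  vault {
    policies = ["dockerhub-read"]
  }

  # render the secret into an env file; "secret/dockerhub" and the
  # .Data field names are assumed, adjust to your Vault layout
  template {
    data = <<EOT
{{ with secret "secret/dockerhub" }}
DOCKERHUB_USERNAME={{ .Data.username }}
DOCKERHUB_PASSWORD={{ .Data.password }}
{{ end }}
EOT

    destination = "secrets/dockerhub.env"
    env         = true
  }
}
```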
@duaraghav8 No, it is not currently possible, but it will be in future versions of Nomad (it's on the roadmap).
Thanks @dadgar
cc: @shivamdixit
+1
Any movement on this?
I have a gitolite server using post-receive to kick off builds by calling makefile commands from the pushed repo. What I'm hoping for, is the ability to interpolate the branch_name into the nomad job config (called from the makefile). This would give us the ability to deploy containers, based on branch, which we can then identify/route to using consul.
ATM, I have all env vars available in the makefile for all jobs, but I don't seem to have the ability to use those env vars in the nomad job config.
Unless this feature is supported, I think I may end up relying on something like envsubst.
If there is a better solution I'm all ears, but I'd love to see something similar to Packer's {{env `BRANCH_NAME`}} ability.
@MrRacoon
I do nearly the same thing you describe, using "meta" parameters and a "parameterized" job.
My use case is the same, pass a branch name, from which code gets checked out and built.
@shantanugadgil Would you mind posting a small example of what that looks like? I'm sure many people could benefit seeing how this can be done.
@MrRacoon
apologies for the extreme delay ... contents of my CentOS (Packer) build job which I run under Nomad:
File: centos_ami.nomad
# set ft=hcl
job "centos_ami" {
  region      = "myregion"
  datacenters = ["mydatacenter"]
  type        = "batch"

  constraint {
    attribute = "${node.unique.name}"
    value     = "mybuildmachine"
  }

  parameterized {
    payload       = "forbidden"
    meta_required = ["BRANCH"]
  }

  group "centos_ami" {
    count = 1

    restart {
      attempts = 0
      interval = "30m"
      delay    = "15s"
      mode     = "fail"
    }

    task "centos_ami" {
      driver = "exec"

      env {
        AWS_POLL_DELAY_SECONDS = "100"
        AWS_MAX_ATTEMPTS       = "100"
      }

      template {
        data = <<__EOT__
{
  "aws_access_key": "ACCESS_KEY",
  "aws_secret_key": "SECRET_KEY"
}
__EOT__

        destination = "variables.json"
      }

      template {
        data = <<__EOT__
#!/bin/bash
set -u
set -e
set -x

_log ()
{
    local msg="$1"
    local dd=$(date '+%F %T')
    echo "$dd $msg"
    return 0
}

#####

_log "Start"

echo "====="
env | sort
echo "====="

sleep 10

svn_branch="{{env "NOMAD_META_BRANCH"}}"

_log "Building AMI from branch [$svn_branch]"

dest_dir=$(echo "${svn_branch}" | tr '/' '_')

svn_export_cmd='svn --non-interactive export --force --trust-server-cert --username myreadonlyusername --password mypassword'
src_url="https://mysvnserver.mydomain.com/svn/mysoftware/${svn_branch}/packer_build_centos"
dest="${dest_dir}_packer_build_centos"

_log "Exporting source [$src_url]"
${svn_export_cmd} ${src_url} ${dest}

cp -fv variables.json ${dest}/.
cd ${dest}

template="centos.json"

_log "Getting packer version ..."
v=$(packer --version)
_log "Packer Version [$v]"

packer validate -var-file=variables.json ${template} || exit 1
packer build -color=false -var-file=variables.json ${template}

_log "Done"
exit 0
__EOT__

        destination = "wrapper.bash"
        perms       = "0755"
      }

      config {
        command = "/bin/bash"
        args    = ["-x", "wrapper.bash"]
      }

      resources {
        cpu    = 500
        memory = 256

        network {
          mbits = 10
        }
      }
    }
  }
}
Ways to submit the dispatch job ...
# nomad job dispatch -meta BRANCH="trunk" centos_ami
# nomad job dispatch -meta BRANCH="branches/mydevbranch" centos_ami
HTH,
Shantanu
This is helpful but wow does this make me feel gross. ^^
Most of the job file is boilerplate stuff.
Instead of inlining the script, you could artifact it too.
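For example, the inline wrapper.bash template above could be replaced with an artifact stanza that downloads the script at task start; the URL here is a placeholder, not a real endpoint:

```hcl
task "centos_ami" {
  driver = "exec"

  # fetch the wrapper script instead of templating it inline
  # (source URL is a placeholder for wherever the script is hosted)
  artifact {
    source      = "https://mygitserver.mydomain.com/build-scripts/wrapper.bash"
    destination = "local/"
  }

  config {
    command = "/bin/bash"
    args    = ["-x", "local/wrapper.bash"]
  }
}
```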