Perhaps this is already on the roadmap, but it would be nice to be able to use values from Consul as metadata for a job, as well as environment variables. That way configuration can be updated dynamically, triggering a restart of the job.
Perhaps allow for something like this in the env section:

```hcl
env {
  MY_CONFIG_OPTION = "{{consul "myapp/config/option"}}"
}
```
This would tell Nomad to fill in that environment variable with the value at the given Consul path. It would also be nice if, after allocation, Nomad watched that value for changes and restarted the job accordingly.
Please let me know if you think this is a good idea, or whether I should compose a tool on top of Nomad (e.g. consul-template) to accomplish this.
Thanks for the suggestion. We are planning to add this. We have not yet done so because we don't yet have a way to provide security or access control around the K/V store. Since all K/V requests would appear to Consul as coming from Nomad, Nomad needs some idea of who owns which job and what permissions they should have.
Cool. I started working on this a bit using the Consul client available in the task_runner. I might look into the security and ACL stuff, since my implementation does not take that into account.
Robust security features are pretty complicated and still a ways out, but I think this is likely useful in the meantime. :+1:
I would like to help with that too. Giving tasks access to the Consul KV store would be a nice addition. I don't know whether the implementation should allow setting variables the way Terraform definitions do, though it would be nice to be able to iterate over the keys if the value is an array.
It should also support both get and set on the KV store. If there is already a design document for this feature, I would like to join and help there.
Something like the Consul Terraform provider is already battle-tested (copying from the Terraform documentation):
```hcl
job "aaa" {
  ...
  task "bbb" {
    provider "consul" {
      address    = "demo.consul.io:80"
      datacenter = "nyc1"
    }

    resource "consul_keys" "app" {
      token = "xxxxxxxxxxxxxxxxxx"

      key {
        name    = "ami"
        path    = "service/app/launch_ami"
        default = "ami-1234"
      }
    }

    # Use our variable from Consul
    env {
      ami = "${consul_keys.app.var.ami}"
    }
  }
}
```
The provider part probably shouldn't be settable, since it is already in the agent configuration.
Yeah, I like this way a little better; you get a bit more flexibility in how Consul is configured (not just whatever the Nomad agent is set to!)
By the way, if you have an immediate itch to scratch, you may also be able to use envconsul or consul-template, both HashiCorp projects which have been around a while and have a lot of features.
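For illustration, envconsul launches a child process with keys under a Consul prefix exported as environment variables. The invocation below is a sketch (the prefix and binary name are assumptions, and it requires a running Consul agent); the second command shows the conceptual effect without Consul:

```shell
# Hypothetical envconsul invocation; keys under myapp/config become
# environment variables of the ./myapp child process:
#   envconsul -prefix myapp/config ./myapp
#
# Conceptually, the child simply starts with those KV pairs in its
# environment, equivalent to:
env MY_CONFIG_OPTION="value-from-consul" sh -c 'echo "$MY_CONFIG_OPTION"'
```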
Has a design doc for this been made, anything we might be able to help with?
I work around this issue by rendering job specs as templates, which are put in place by configuration management; the CM tool consults Consul KV when rendering the template. This has worked well for my deployments.
Maybe I imagined it, but I thought I heard during one of the keynotes at Hashiconf that there was a plan to integrate consul-template's functionality directly into nomad, along with an integration with Vault so that it can get a consul ACL token to act on behalf of the application.
I remember seeing an example something like this:
```hcl
task "..." {
  # ..
  template {
    source      = "something/foo.tmpl"
    destination = "something/foo"
  }
  # ..
}
```
This was the closest ticket I could find to that. So _did_ I imagine it, or is this something that will come along with the forthcoming Vault token/policy integration?
@apparentlymart It's coming :)
@dadgar in 0.5.0 ? :o
Yep!
I've learned from @dadgar that a template stanza will be added in order to generate config files, but unfortunately there won't be a direct way to read Consul KV in the env stanza.
A wrapper script that reads the rendered config file and sets those environment variables before starting your main process would be needed if your app follows the twelve-factor pattern.
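A minimal sketch of such a wrapper, assuming the template stanza (or any other mechanism) has rendered a `KEY=value` file; the file path and variable name here are illustrative assumptions:

```shell
#!/bin/sh
# Sketch of a twelve-factor wrapper: export variables from a rendered
# config file, then hand off to the main process. The path and variable
# names are assumptions for illustration.
set -e

# Pretend this file was rendered from Consul KV by the template stanza.
ENV_FILE="$(mktemp)"
printf 'MY_CONFIG_OPTION=hello\n' > "$ENV_FILE"

# Export every KEY=value line from the rendered file.
set -a
. "$ENV_FILE"
set +a

# In a real task this would be: exec /usr/local/bin/myapp
echo "$MY_CONFIG_OPTION"
```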
Hey I am going to close this since initial support has landed in 0.5.0 via the template stanza: https://www.nomadproject.io/docs/job-specification/template.html
Retrieving via env vars is a different enhancement :)
@dadgar I can't see any other ticket regarding an enhancement for env vars or am I wrong? Is there any plan to add this functionality in the future?
The template stanza is good but similar functionality applied to the env stanza would be awesome!
@paddycr In the infrastructure we are building, we never run jobs with the nomad client. We use Terraform.
1. We retrieve environment variables from Consul with the Terraform data sources consul_keys and consul_key_prefix (not merged yet, needs some fixes: https://github.com/hashicorp/terraform/pull/10353).
2. We inject the data source outputs into a templated Nomad job file.
3. We send the rendered template to the Terraform nomad_job resource (the Nomad provider was released today in Terraform 0.8).
`nomad_job/nomad_job.tf`:

```hcl
##
# NAME        : nomad_job/nomad_job.tf
# DESCRIPTION : Templated Nomad job
##

##
# Variables
##
variable "vars_env"         { type = "map" } // variables to send into the container env stanza
variable "vars_job_config"  { type = "map" } // variables to template the Nomad job description (ex: datacenter, exposed ports...)
variable "nomad_tpl"        { }
variable "_max_number_vars" { default = 42 }

##
# Resources
##
data "template_file" "vars_env" {
  // Terraform does not allow setting "count" to a "computed" value (ex: length(keys(var.vars_env)) does not work).
  // So "count" loops 42 times over "vars_env", which also produces (42 - length(keys(var.vars_env))) duplicated keys.
  // To remove duplicates, we apply the "distinct" interpolation; length(distinct(data.template_file.vars_env.*.rendered))
  // is then equal to length(keys(var.vars_env)).
  // :puke:
  count    = "${var._max_number_vars}"
  template = "$${key} = \"$${value}\""

  vars = {
    key   = "${element(keys(var.vars_env), count.index)}"
    value = "${lookup(var.vars_env, element(keys(var.vars_env), count.index))}"
  }
}

data "template_file" "jobspec" {
  template = "${file("${var.nomad_tpl}")}"

  vars = "${merge(
    var.vars_job_config,
    map("ENVIRONMENT", join("\n", distinct(data.template_file.vars_env.*.rendered)))
  )}"
}

// warning: requires Terraform 0.8.0
resource "nomad_job" "job" {
  jobspec = "${data.template_file.jobspec.rendered}"
}

##
# Outputs
##
output "jobspec" { value = "${data.template_file.jobspec.rendered}" }
```
`redis.nomad`:

```hcl
job "redis" {
  region      = "${region}"
  datacenters = ["${datacenter}"]

  [...]

  group "redis" {
    constraint {
      attribute = "$${node.class}"
      operator  = "="
      value     = "data"
    }

    task "redis" {
      driver = "docker"

      config {
        image = "redis:3.2"
        port_map {
          db = 6379
        }
      }

      resources {
        cpu    = 2000
        memory = 8000
        disk   = 0
        network {
          mbits = 500
          port "db" {
            static = ${REDIS_PORT}
          }
        }
      }

      env {
        ${ENVIRONMENT}
      }
    }
  }
}
```
`main.tf`:

```hcl
variable "region" { }
variable "datacenter" { }

provider "consul" {
  address    = "1.2.3.4:8500"
  datacenter = "${var.datacenter}"
}

provider "nomad" {
  address = "http://1.2.3.4:4646"
  region  = "${var.region}"
}

data "consul_keys" "read_env" {
  key {
    name = "REDIS_PASSWORD"
    path = "redis/env/REDIS_PASSWORD"
  }
}

data "consul_keys" "read_job_config" {
  key {
    name = "REDIS_PORT"
    path = "redis/nomad_job/REDIS_PORT"
  }
}

module "redis_job" {
  source   = "./nomad_job"
  vars_env = "${data.consul_keys.read_env.var}"

  vars_job_config = "${merge(
    data.consul_keys.read_job_config.var,
    map(
      "region", "${var.region}",
      "datacenter", "${var.datacenter}"
    )
  )}"

  nomad_tpl = "${path.module}/redis.nomad"
}

output "redis_jobspec" { value = "${module.redis_job.jobspec}" }
```
Applying the Nomad job:

```shell
$ terraform plan
$ terraform apply
```
This is generic; the nomad_job module can be called as often as you need, with different env variables and Nomad job files.
Interesting approach. I can see it working well in Atlas, where sandboxes are well isolated.
Running this from a managed CI server over the internet could be too complicated.
Also, Terraform is quite slow in plan/apply; with hundreds of jobs I imagine it would be painful.
@paddycr There is an issue for that open: https://github.com/hashicorp/nomad/issues/1765