It would be useful to be able to add the host IP as a DNS server in the Docker driver (for querying Consul).
I know about NOMAD_IP_
:+1:
What exactly do you think the host IP would be, given that there can be many network devices, multiple IPs per NIC, etc.?
Good question. For a "standard" setup I'd like to get the private IP.
The use case I had in mind is setting dns_servers in a Docker task to the local Consul/dnsmasq client, or passing URL arguments to connect tasks to other locally running services (such as nsq), which I'm currently doing with 172.17.0.1 (Docker's bridge address).
If you are on a bridge network, wouldn't that private IP be unroutable? I don't want to add an environment variable that doesn't have clear behavior or benefit. It may be better to add an option to the client config that lets the operator set the --dns flag for Docker when using the bridge network.
@dadgar AFAIK it is routable on a bridge network
@bsphere Sorry you are right about the routing. Did my point about the DNS make sense though?
It'll definitely improve the integration with Consul and seems like a good feature, but it won't solve the issue of passing arguments or env vars for locally running services such as nsq.
Can you not just register those with consul and do a lookup of their IP using DNS?
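For example, a service registered through the job's service stanza can be resolved through Consul's DNS interface. A minimal sketch; the service name and port label here are illustrative, not from this thread:

```hcl
task "nsqd" {
  driver = "docker"

  config {
    image   = "nsqio/nsq"
    command = "/nsqd"
  }

  # Register the task in Consul under the name "nsqd".
  service {
    name = "nsqd"
    port = "tcp"
  }

  resources {
    network {
      mbits = 10
      port "tcp" {}
    }
  }
}
```

Other tasks can then resolve it against the local Consul agent, e.g. `dig @127.0.0.1 -p 8600 nsqd.service.consul`.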
We run consul-template in each container, and we want every consul-template instance to point to the Consul agent installed on the host machine so that load on the Consul servers is minimized. To get the host IP into the containers right now we are exposing a port map. It would be great if the host IP (the Nomad client_addr) were exposed inside the containers.
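For reference, once the host IP is available inside the container (for example via an environment variable), consul-template can be pointed at the host's agent rather than the servers. A minimal sketch, assuming an operator-injected variable named HOST_IP:

```sh
# Talk to the Consul agent on the host (default HTTP port 8500)
# instead of the remote servers; HOST_IP is assumed to be injected
# into the container by the operator.
consul-template \
  -consul-addr "${HOST_IP}:8500" \
  -template "/etc/app/config.ctmpl:/etc/app/config"
```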
I'm experiencing this problem as well. Currently we point all of the containers in our cluster at our master Consul nodes, but we really want them to just query the Consul agent local to the host they are running on. Is anyone aware of a workaround that could work in the meantime?
[ Edit: don't use this.
Use @dadgar's HOST_IP = "${attr.unique.network.ip-address}" instead:
https://github.com/hashicorp/nomad/issues/2429#issuecomment-289320884 ]
@LinusU Here is my workaround.
The script that starts Nomad on my machines generates the config, and I use it to set the IP address in the node_class of the Nomad config:
```sh
cat >${PREFIX}/etc/nomad/config.hcl <<EOL
(...)
client {
  enabled    = true
  node_class = "role=$ROLE,address=$PRIVATE_IPV4"
  (...)
EOL
```
I then pass the node_class to the environment (https://www.nomadproject.io/docs/job-specification/env.html#interpolation):
```hcl
env {
  NODE_CLASS = "${nomad.class}"
}
```
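The task can then recover the address from NODE_CLASS, for example (a sketch assuming the role=...,address=... format above):

```sh
# NODE_CLASS looks like "role=web,address=10.0.0.5";
# split on commas and keep the value of the address key.
HOST_IP=$(echo "$NODE_CLASS" | tr ',' '\n' | sed -n 's/^address=//p')
```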
Is this applicable to your situation?
Hehe, that's quite clever 😄 I think I could use something similar, thanks!
@hmalphettes I would not do that. The node class is used to optimize the scheduler by detecting feasibility of placements at a class level. This allows skipping nodes that we know won't work when picking a new placement. By making the node class unique you lose that, and it goes against the purpose of the node class.
Instead, use this, which already provides the IP:
```hcl
env {
  HOST_IP = "${attr.unique.network.ip-address}"
}
```
I had actually forgotten about this, but I think having the above lets us close this issue since there is a direct workaround. Thoughts?
That seems like a solid hack @dadgar :)
Yeah, you can close the issue. Can you add it to the documentation?
Thanks all!
Docs cc7cdc11311e22b3681bab72fef33937f58e3249
```hcl
dns_servers = ["${attr.unique.network.ip-address}", "8.8.8.8"]
```
This doesn't work for me with Nomad 0.5.5; it results in a strange /etc/resolv.conf.
@bsphere It worked for me.
Starting Nomad:

```
$ ifconfig
ens32     Link encap:Ethernet  HWaddr 00:0c:29:b1:7c:df
          inet addr:192.168.74.136  Bcast:192.168.74.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:feb1:7cdf/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:683723 errors:0 dropped:0 overruns:0 frame:0
          TX packets:311208 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:615142211 (615.1 MB)  TX bytes:35101803 (35.1 MB)
...
$ sudo nomad agent -dev -network-interface=ens32
```
Running this job:

```hcl
job "example" {
  type        = "batch"
  datacenters = ["dc1"]

  group "foo" {
    task "foo" {
      driver = "docker"

      config {
        image       = "redis:3.2"
        dns_servers = ["${attr.unique.network.ip-address}", "8.8.8.8"]
        command     = "sleep"
        args        = ["1000"]
      }
    }
  }
}
```
Resulted in this /etc/resolv.conf:

```
$ docker exec -it foo-92ae988f-b374-e0a1-e726-b448349cc515 cat /etc/resolv.conf
search localdomain
nameserver 192.168.74.136
nameserver 8.8.8.8
```
Hmm, maybe because I use bind_addr = "0.0.0.0"?!
The advertise addresses are a private AWS IP.
@bsphere If you run nomad node-status -verbose <node-id>, what is the value of attr.unique.network.ip-address?
This value is actually separate from the bind_addr.
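For example (assuming a Unix shell):

```sh
# Filter the attribute out of the node's verbose status output.
nomad node-status -verbose <node-id> | grep attr.unique.network.ip-address
```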
@dadgar It's the private IP address; I'll have to try it again.
I can also see there's unique.platform.aws.local-ipv4.
Is there a way to use those runtime values in a template?
Is ${attr.unique.network.ip-address} new?
@bsphere Will be for 0.5.6: https://github.com/hashicorp/nomad/pull/2488
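Once that lands, something like the following should work (a sketch, assuming task environment variables are exposed to the template's env function; HOST_IP and the file paths are illustrative names):

```hcl
task "app" {
  driver = "docker"

  config {
    image = "redis:3.2"
  }

  env {
    # Interpolated by Nomad at placement time.
    HOST_IP = "${attr.unique.network.ip-address}"
  }

  template {
    # consul-template's env function reads from the task environment.
    data        = "consul_addr = \"{{ env \"HOST_IP\" }}:8500\""
    destination = "local/app.conf"
  }
}
```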
@lnguyen Nope it has been around for many releases!