If I'm not mistaken, the service stanza currently registers the client address configured in network_interface with Consul. However, if the client is behind NAT, this address might not be reachable from other hosts in a multi-DC cluster.
Would it make sense to include an address in the advertise block of the client configuration file (under a service key, for example) that would override the address found by network fingerprinting? Or alternatively, shouldn't Nomad use the local Consul agent's WAN address instead?
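For context, here is roughly what the client's advertise block looks like today (the addresses below are placeholders); the idea would be for Consul service registration to honor an address like this, or a new key alongside it, instead of the address fingerprinted from network_interface:
# Nomad client agent configuration (HCL); addresses are placeholders
advertise {
  # Existing keys: addresses the agent advertises for its own endpoints
  http = "203.0.113.10:4646"
  rpc  = "203.0.113.10:4647"
  serf = "203.0.113.10:4648"
}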
I think I have the same problem. When I deploy a service, say hello-world, on two Vagrant VMs, the service is assigned the NAT address.
Here is an example:
The hello-world service runs on two machines with the IPs 172.17.4.201 and 172.17.4.202 respectively, but the service's registered IP address is 10.0.2.15:
curl http://172.17.4.101:8500/v1/catalog/service/hello-world
[
  {
    "Node": "w1",
    "Address": "172.17.4.201",
    "ServiceID": "_nomad-executor-e1f0e43e-59b9-e1ea-33af-05e1caae7f9b-hello-world-hello-world-urlprefix-hello-world.service/",
    "ServiceName": "hello-world",
    "ServiceAddress": "10.0.2.15"
    ...
  },
  {
    "Node": "w2",
    "Address": "172.17.4.202",
    "ServiceID": "_nomad-executor-b695c62a-bcb8-5781-c213-0a02071629e5-hello-world-hello-world-urlprefix-hello-world.service/",
    "ServiceName": "hello-world",
    "ServiceAddress": "10.0.2.15"
    ...
  }
]
On the other hand, the "nomad-client" service has the correct IP address:
[
  {
    "Node": "w1",
    "Address": "172.17.4.201",
    "ServiceID": "_nomad-client-nomad-client-http",
    "ServiceName": "nomad-client",
    "ServiceAddress": "172.17.4.201"
    ...
  },
  {
    "Node": "w2",
    "Address": "172.17.4.202",
    "ServiceID": "_nomad-client-nomad-client-http",
    "ServiceName": "nomad-client",
    "ServiceAddress": "172.17.4.202"
    ...
  }
]
I have a similar request (tell me if it needs a separate issue): I want to register a service a second time, but with a different (already allocated) port. The reason for this is that I run the https://github.com/fabiolb/fabio load balancer on each machine, which picks up the service via service tags set in the Nomad job config and serves it on port 80. So basically what I would like to do is:
service {
  name = "chronograf"
  port = "http"
  tags = ["urlprefix-chronograf.service.consul/"]

  check {
    type     = "http"
    path     = "/ping"
    interval = "10s"
    timeout  = "3s"
  }
}

service {
  name = "chronograf-http"
  port = "fabio's port!" # or 80
}
This way I can access a service through <service-name>-http.service.consul, without having to query consul for the port, which is quite a hassle, or add consul-template as a dependency.
I have another use case for this, although I agree it's a bit of a stretch.
I'm trying to build a docker-compose replica of the production stack, for local environments.
A docker-compose of consul, vault and nomad.
I got everything to work apart from the Consul health checks for the running tasks. This is because the service registers in Consul with the address 127.0.0.1, and the Consul health check cannot reach it there, since 127.0.0.1 from within the Consul container is, obviously, the container itself.
This is very similar, if not identical, to the Nomad-behind-NAT scenario; being able to override the service advertise address (so that I can specify the Docker bridge IP) would be great.
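Just to make the request concrete, here is a purely hypothetical jobspec sketch; the address parameter below does not exist in the service stanza today, it only illustrates the kind of override I mean:
service {
  name = "hello-world"
  port = "http"

  # Hypothetical parameter: register this address in Consul instead of the
  # address Nomad fingerprints from the client, e.g. the Docker bridge IP
  # or a NAT'd public IP.
  address = "172.17.0.1"
}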
Yes, I appreciate that running nomad in a docker in docker setup is a bit of a corner case...
Just an update: currently I work around this by creating a loopback interface with the desired IP address and making Nomad bind to it. Then I use 'iptables -j NETMAP' to redirect traffic from the actual interface to it.
Since it was not straightforward for me, here is how I did it (you need to replace {{ public_ip }} and {{ private_ip }}):
# /etc/network/interfaces.d/nomad.cfg
auto nomad1
iface nomad1 inet manual
    pre-up /sbin/ip link add nomad1 type dummy
    up /sbin/ip addr add {{ public_ip }} dev nomad1
    up /sbin/iptables -t nat -A PREROUTING -d {{ private_ip }}/32 -j NETMAP --to {{ public_ip }}/32
    down /sbin/iptables -t nat -D PREROUTING -d {{ private_ip }}/32 -j NETMAP --to {{ public_ip }}/32
    post-down /sbin/ip link del nomad1
And just run:
ifup nomad1
I had to use this hack as Scaleway provides only public IPs via a NAT.
As @mafonso said, this is very similar if not identical to the Nomad-behind-NAT scenario. I'd like to be able to do "services behind NAT". In my case, I am using Fabio, but it tries to route to the IP defined by the service stanza, which right now seems to be only the local IP of the host running the service, rather than the host's public IP (because NAT is being used).
Another possible approach might be to add an option in the service/check stanza to use the Nomad agent's advertise address, just like the Nomad agent's own service checks can be configured via checks_use_advertise in the consul stanza of the agent configuration.
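For reference, this is roughly how that existing agent-level option is set today (it only affects the Nomad agent's own registration, not task services):
# Nomad client agent configuration (HCL)
consul {
  address = "127.0.0.1:8500"

  # Register the Nomad agent's own HTTP/RPC health checks against the
  # agent's advertise address instead of its bind address
  checks_use_advertise = true
}
Something analogous, exposed to the service/check stanzas in job specs (or as a client-wide default), is what I am suggesting.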
I realize the challenge is to make job specs not require any knowledge of the underlying nomad host's config. However, I think it might make sense to support the ability to configure the Nomad agent to use one of the following for service registration:
This approach would work on both Linux and Windows hosts, and it would support NAT without jobspecs needing to have any knowledge of the underlying Nomad host's networking. It would mean operators would have to make it clear to job spec writers that services register with one of: