We have a STRICT_DNS cluster defined in the bootstrap config. In one of our test Pods, the membership count of this cluster dropped to zero. That is understandable, because the DNS resolution might have returned zero hosts. However, it stayed at zero for quite a long time, and after killing the container, Envoy was able to resolve the DNS successfully.
I captured debug logs while Envoy was unable to resolve. I see the following line:
source/common/network/dns_impl.cc:118] DNS request timed out 4 times
And I see these lines repeatedly:
source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds
source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds
source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds
source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds
source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds
source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds
source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds
source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 22 milliseconds
So at this point I am not sure whether this is an Envoy issue or a container DNS issue, since a container restart resolved it.
Has anyone seen similar DNS issues? And a second question: is the DNS resolution timer behaviour correct, in the sense that it keeps setting timers for 0 milliseconds?
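For context on that second question, here is a rough paraphrase of how the timer value is derived in source/common/network/dns_impl.cc (simplified names, not the verbatim source): Envoy asks c-ares how long it may sleep before the next processing pass and arms a dispatcher timer with that value, so a 0-millisecond timer just means c-ares wants to be serviced again immediately. A long run of 0-millisecond timers therefore points at c-ares repeatedly reporting pending work, rather than at the timer logic itself.

// Rough paraphrase of DnsResolverImpl::updateAresTimer(); simplified,
// not the verbatim Envoy source.
void DnsResolverImpl::updateAresTimer() {
  // Ask c-ares for the maximum time we may sleep before it needs to be
  // processed again. A zero result is where the "0 milliseconds" lines
  // above come from: c-ares has work (e.g. retries) due right now.
  timeval timeout;
  timeval* result = ares_timeout(channel_, nullptr, &timeout);
  if (result != nullptr) {
    const uint64_t ms = result->tv_sec * 1000 + result->tv_usec / 1000;
    ENVOY_LOG(debug, "Setting DNS resolution timer for {} milliseconds", ms);
    timer_->enableTimer(std::chrono::milliseconds(ms));
  } else {
    timer_->disableTimer();
  }
}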
I have just experienced a very similar situation. Our proxy was deployed in an environment experiencing a lot of DNS failures, and at some point all DNS lookups simply stopped working. We fixed the DNS issue, but the Envoy instances never recovered and we had to kill and restart them. The new instances worked just fine.
We also had logs similar to the above:
[2018-10-06 00:22:39.202][15][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds
[2018-10-06 00:22:39.202][15][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 0 milliseconds
[2018-10-06 00:22:39.202][15][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 4949 milliseconds
[2018-10-06 00:22:41.160][15][debug][main] source/server/server.cc:119] flushing stats
[2018-10-06 00:22:44.152][15][debug][upstream] source/common/network/dns_impl.cc:118] DNS request timed out 4 times
[2018-10-06 00:22:44.152][15][debug][upstream] source/common/network/dns_impl.cc:147] Setting DNS resolution timer for 3866 milliseconds
It seems like after some number of DNS failures, the async resolver gets into a bad state and is permanently unable to resolve anything.
We are also using STRICT_DNS with active health checks, using Envoy 1.7.0 from the published Docker image.
Looking a little more, the sequence of events in our situation is:
"DNS request timed out 4 times" log messages continue, and no hosts are added back to the clusters. This state persisted until the Envoy hosts were restarted. I also have full debug logs from one instance while it was in this state, if that's helpful.
@jasonmartens, sorry, are you testing on master now?
It sounds like there might be a bug here in how we are interacting with c-ares but I'm not sure. I would definitely try on current master and see if we can come up with a repro.
@dio The problem I described above happened with master, though it may have been a build a few weeks old.
@ramaraochavali got it, I'll take a look at it.
I was not testing on master, using 1.7.0 from the Envoy docker image repo.
@dio, @htuch made the DNS resolver use c-ares a long time ago, and the code really hasn't changed since then. The timeout handling in that library is complicated, so I would probably start with an audit of all the timeout code. I suspect there might be some case in which we aren't handling timeouts properly. IIRC c-ares has default timeouts in place, but I would check that also. @htuch might also have some ideas.
Possibly the timeout handling in DnsResolverImpl::PendingResolution::onAresHostCallback is not correct. It looks like it isn't posting the failure back to the dispatcher when there is a failure and the timeout count is non-zero.
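To make the suspected gap concrete, here is a hedged sketch of the callback shape being described. This is illustrative only, not the actual Envoy source; parseHostent and callback_ are hypothetical names:

// Illustrative sketch of the concern above, not the actual Envoy code.
// The c-ares host callback reports both a status and how many tries timed
// out; a terminal failure must still be reported back to the dispatcher,
// or the caller's callback never fires and the cluster stays empty.
void PendingResolution::onAresHostCallback(int status, int timeouts, hostent* host) {
  std::list<Network::Address::InstanceConstSharedPtr> addresses;
  if (status == ARES_SUCCESS) {
    addresses = parseHostent(*host); // hypothetical helper
  }
  // The suspected bug shape: an early return when status != ARES_SUCCESS
  // and timeouts > 0 would skip this line, leaving the resolution pending
  // forever instead of delivering an empty address list.
  callback_(std::move(addresses)); // must run for failures and timeouts too
}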
@mattklein123 @htuch got it. Let's see what I can do to help.
@dio just a ping. Did you find anything on this?
This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.
@dio were you able to spend time on this? Did you find anything?
I'm hitting the same problem. Any progress on this?
@ramaraochavali @gatesking sorry, I haven't got anything yet. I'll update you when I do. OTOH, if you want to help, that would be great!
I had the same problem.
If a cluster of type "strict_dns" cannot be resolved by DNS, it takes a long time ("DNS request timed out 4 times") for Envoy to reach "all clusters initialized" and become ready to accept connections. Are there any settings for the strict DNS resolution timeout, or any mechanism to accelerate Envoy's startup readiness?
Envoy static_resources config (service1 can be resolved by DNS, while service2 can't be):
static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 10000
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.config.filter.network.http_connection_manager.v2.HttpConnectionManager
          codec_type: auto
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/service/1/"
                route:
                  cluster: service1
              - match:
                  prefix: "/service/2"
                route:
                  cluster: service2
          http_filters:
          - name: envoy.router
            typed_config: {}
  clusters:
  - name: service1
    connect_timeout: 2s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: service1
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: foo-service
                port_value: 80
  - name: service2
    connect_timeout: 0.25s
    type: strict_dns
    lb_policy: round_robin
    load_assignment:
      cluster_name: service2
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: bar-service
                port_value: 80
admin:
  access_log_path: "/dev/stdout"
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 8001
Some logs (it takes almost 75s to start up):
[2019-04-22 19:07:52.810][98941][debug][config] [source/extensions/filters/network/http_connection_manager/config.cc:312] config: {}
[2019-04-22 19:07:52.810][98941][debug][config] [source/server/listener_manager_impl.cc:627] add active listener: name=101a3091-b02e-4ccc-811b-6d7907231331, hash=5487848201015756333, address=0.0.0.0:10000
[2019-04-22 19:07:52.810][98941][info][config] [source/server/configuration_impl.cc:85] loading tracing configuration
[2019-04-22 19:07:52.810][98941][info][config] [source/server/configuration_impl.cc:105] loading stats sink configuration
[2019-04-22 19:07:52.810][98941][info][main] [source/server/server.cc:481] starting main dispatch loop
[2019-04-22 19:07:52.810][98941][debug][main] [source/common/event/dispatcher_impl.cc:169] running server.dispatcher on thread 98941
[2019-04-22 19:07:52.810][98945][debug][grpc] [source/common/grpc/google_async_client_impl.cc:41] completionThread running
[2019-04-22 19:07:53.070][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-04-22 19:07:53.208][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-04-22 19:07:57.813][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:07:58.210][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 10000 milliseconds
[2019-04-22 19:08:08.212][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 20000 milliseconds
[2019-04-22 19:08:28.213][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 40000 milliseconds
[2019-04-22 19:09:08.213][98941][debug][upstream] [source/common/network/dns_impl.cc:118] DNS request timed out 4 times
[2019-04-22 19:09:08.213][98941][debug][upstream] [source/common/upstream/upstream_impl.cc:747] initializing secondary cluster service2 completed
[2019-04-22 19:09:08.213][98941][debug][init] [source/common/init/manager_impl.cc:45] init manager Cluster service2 contains no targets
[2019-04-22 19:09:08.213][98941][debug][init] [source/common/init/watcher_impl.cc:14] init manager Cluster service2 initialized, notifying ClusterImplBase
[2019-04-22 19:09:08.213][98941][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:92] cm init: init complete: cluster=service2 primary=0 secondary=0
[2019-04-22 19:09:08.213][98941][info][upstream] [source/common/upstream/cluster_manager_impl.cc:137] cm init: all clusters initialized
[2019-04-22 19:09:08.213][98941][info][main] [source/server/server.cc:465] all clusters initialized. initializing init manager
[2019-04-22 19:09:08.213][98941][debug][init] [source/common/init/manager_impl.cc:45] init manager Server contains no targets
[2019-04-22 19:09:08.213][98941][debug][init] [source/common/init/watcher_impl.cc:14] init manager Server initialized, notifying RunHelper
[2019-04-22 19:09:08.213][98941][info][config] [source/server/listener_manager_impl.cc:1005] all dependencies initialized. starting workers
[2019-04-22 19:09:08.213][98955][debug][main] [source/server/worker_impl.cc:98] worker entering dispatch loop
[2019-04-22 19:09:08.213][98955][debug][main] [source/common/event/dispatcher_impl.cc:169] running worker_0.dispatcher on thread 98955
[2019-04-22 19:09:08.213][98955][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:819] adding TLS initial cluster service1
[2019-04-22 19:09:08.213][98955][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:819] adding TLS initial cluster service2
[2019-04-22 19:09:08.213][98955][debug][upstream] [source/common/upstream/cluster_manager_impl.cc:980] membership update for TLS cluster service1 added 1 removed 0
[2019-04-22 19:09:08.213][98956][debug][grpc] [source/common/grpc/google_async_client_impl.cc:41] completionThread running
[2019-04-22 19:09:13.215][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:13.216][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-04-22 19:09:14.028][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-04-22 19:09:14.151][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-04-22 19:09:18.217][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:19.152][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 10000 milliseconds
[2019-04-22 19:09:23.219][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:28.221][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:29.154][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 20000 milliseconds
[2019-04-22 19:09:33.222][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:38.223][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:43.225][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:48.225][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:49.155][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 40000 milliseconds
[2019-04-22 19:09:53.214][98941][info][main] [source/server/drain_manager_impl.cc:63] shutting down parent after drain
[2019-04-22 19:09:53.226][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:09:58.227][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:03.228][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:08.228][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:13.316][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:18.317][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:23.318][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:28.318][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:29.157][98941][debug][upstream] [source/common/network/dns_impl.cc:118] DNS request timed out 4 times
[2019-04-22 19:10:33.319][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:34.163][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 4998 milliseconds
[2019-04-22 19:10:38.276][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-04-22 19:10:38.322][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:38.385][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 5000 milliseconds
[2019-04-22 19:10:43.324][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:43.388][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 10000 milliseconds
[2019-04-22 19:10:48.325][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:53.326][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:10:53.391][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 20000 milliseconds
[2019-04-22 19:10:58.329][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:11:03.330][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:11:08.332][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:11:13.332][98941][debug][main] [source/server/server.cc:147] flushing stats
[2019-04-22 19:11:13.393][98941][debug][upstream] [source/common/network/dns_impl.cc:158] Setting DNS resolution timer for 40000 milliseconds
[2019-04-22 19:11:18.334][98941][debug][main] [source/server/server.cc:147] flushing stats
Envoy admin endpoint /server_info:
{
  "version": "4304b18819546f585e1e51c52fa5df0f01831633/1.11.0-dev/Clean/RELEASE/BoringSSL",
  "state": "PRE_INITIALIZING",
  "command_line_options": {
    "base_id": "0",
    "concurrency": 1,
    "config_path": "/root/hpf/envoy/front-envoy-jwt.yaml",
    "config_yaml": "",
    "allow_unknown_fields": false,
    "admin_address_path": "",
    "local_address_ip_version": "v4",
    "log_level": "debug",
    "component_log_level": "",
    "log_format": "[%Y-%m-%d %T.%e][%t][%l][%n] %v",
    "log_path": "",
    "hot_restart_version": false,
    "service_cluster": "front-proxy",
    "service_node": "",
    "service_zone": "",
    "mode": "Serve",
    "max_stats": "16384",
    "max_obj_name_len": "60",
    "disable_hot_restart": false,
    "enable_mutex_tracing": false,
    "restart_epoch": 0,
    "cpuset_threads": false,
    "file_flush_interval": "10s",
    "drain_time": "30s",
    "parent_shutdown_time": "45s"
  },
  "uptime_current_epoch": "64s",
  "uptime_all_epochs": "64s"
}
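A note on the ~75-second startup above: it lines up exactly with the c-ares defaults visible in the logs, four tries with a doubling timeout of 5s + 10s + 20s + 40s = 75s before the query is abandoned. Envoy did not expose these knobs at the time (see the next comment about making them configurable); the sketch below shows the underlying c-ares options that govern the schedule. This is the library API, not an Envoy configuration surface, and the values are illustrative.

// Sketch of the c-ares options behind the retry schedule seen above.
// Assumes ares_library_init() has already run; values are illustrative.
#include <ares.h>

ares_channel make_channel_with_short_timeouts() {
  ares_options options = {};
  options.timeout = 1000; // ARES_OPT_TIMEOUTMS: first-try timeout, in ms
  options.tries = 2;      // ARES_OPT_TRIES: total attempts per query
  ares_channel channel = nullptr;
  // With these values the worst case per query is roughly 1s + 2s = 3s,
  // instead of the default 5s + 10s + 20s + 40s = 75s observed above.
  if (ares_init_options(&channel, &options,
                        ARES_OPT_TIMEOUTMS | ARES_OPT_TRIES) != ARES_SUCCESS) {
    return nullptr;
  }
  return channel;
}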
I think I see a bug here in the V6 -> V4 fallback handling WRT timeouts. @ramaraochavali I would try configuring the resolution for v4 only and see if that helps. I will also take a look at making the timeout and tries configurable.
Disregard my previous message. The timer reset is working OK. Not seeing any other issue currently from code inspection.
Is this still an issue for anyone watching? I investigated and couldn't find anything obviously wrong. It's possible this has been fixed somewhere along the way.
It's possible it was resolved along the way. We can close this and revisit if someone complains about it.
I just found this issue on an Envoy 1.10.0 instance.
From what we noticed in the past, comparing two instances with identical configuration:
"Good" instance has only:
Setting DNS resolution timer for 5000 milliseconds
"Bad" instance has:
Setting DNS resolution timer for 0 milliseconds: 20-30 times
DNS request timed out 4 times: 2-3 times.
Around this "timed out" message I see 4-5 messages like this:
Setting DNS resolution timer for 4988 milliseconds
Setting DNS resolution timer for 5010 milliseconds
Setting DNS resolution timer for 4985 milliseconds
Setting DNS resolution timer for 10005 milliseconds
Setting DNS resolution timer for 2 milliseconds
I suspect there is some race condition here, potentially within c-ares, but I'm not sure. Reopening and marking help wanted.
Envoy Mobile has the same issue on iOS.
Steps to repro:
From an Envoy Mobile clone:
bazel build --config=ios //:ios_dist
bazel run //examples/swift/hello_world:app --config=ios
Config used:
This repros with both STRICT_DNS and LOGICAL_DNS clusters, as well as with the dynamic forward proxy. The DNS refresh rate was configured to 5s.
I am going to look at this, as the setup above repros the issue 100% of the time.
Did some late night digging yesterday and arrived at an explanation:
When c-ares initializes a channel (trimming irrelevant details):
ares_init_options calls init_by_resolv_conf, which has platform-specific code.
On this platform it takes the #elif defined(CARES_USE_LIBRESOLV) branch, which uses res_getservers to get the addresses of the DNS servers.
res_getservers returns AF_UNSPEC for the server address's family.
c-ares then falls back to init_by_defaults, which uses INADDR_LOOPBACK:NAMESERVER_PORT as the server address.
Solution:
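In other words, on this platform the channel silently ends up pointed at 127.0.0.1:53, where nothing is listening, so every query times out. As one hypothetical illustration (not necessarily the fix actually adopted), the fallback can be sidestepped by pinning servers on the channel explicitly; the addresses below are placeholders:

// Hypothetical workaround sketch: override whatever init_by_resolv_conf /
// init_by_defaults picked during channel setup so the INADDR_LOOPBACK
// fallback described above never applies.
#include <ares.h>

bool pin_dns_servers(ares_channel channel) {
  // Placeholder resolver addresses; a real deployment would discover these.
  return ares_set_servers_csv(channel, "8.8.8.8,8.8.4.4") == ARES_SUCCESS;
}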
By the way, it is worth noting that this would affect any cluster that uses DnsResolverImpl, so I am going to update the title to reflect that.