Title: Graceful HTTP Connection Draining During Shutdown?
Description:
At my organization, we are preparing a large-scale Envoy deployment to serve all ingress traffic at the edge of our network. We use static configuration files on disk and the Python hot-restarter script to execute hot reloads when the configuration on disk changes.
Sometimes, we need to stop and start the Envoy process completely.
I was hoping to see clean shutdown functionality similar to how NGINX handles a process shutdown. After a shutdown signal, NGINX stops accepting new TCP connections and attempts to complete all current HTTP requests, sending Connection: close to gracefully shut down the connections before stopping the process. This means that connections terminate cleanly, without a forced TCP close, unless a timeout is exceeded.
In a nutshell, NGINX does this:
- Stops accepting new TCP connections.
- Allows $TIMEOUT seconds for HTTP connections to terminate before terminating their TCP sockets.
- Sends Connection: close back in each response to cleanly close each connection.
When I restart Envoy, either with SIGTERM or with the admin interface (/quitquitquit), I see TCP connection resets rather than Connection: close. The hot reload process closes connections properly, but shutdown does not appear to.
Repro steps:
We are using Envoy 1.11.0 on Ubuntu 16.04 in AWS.
I have open-sourced our load-testing tool, which uses the exact Python version, requests version, etc., to reliably reproduce the issue.
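For reference, below is a minimal sketch of the kind of keep-alive loop that surfaces the difference between a clean HTTP-level close and a TCP reset. This is not the actual tool, and the URL is a placeholder for a route served by the listener below:

#!/usr/bin/env python
# Sketch only: hold a keep-alive connection open against Envoy and report
# whether it ends with Connection: close or an abrupt TCP reset.
import time
import requests

URL = "http://abc.mycompany.com/"  # placeholder route served by the listener

session = requests.Session()  # reuses one TCP connection via keep-alive
while True:
    try:
        resp = session.get(URL, timeout=5)
        if resp.headers.get("Connection", "").lower() == "close":
            print("clean close: server sent Connection: close")
            session = requests.Session()  # open a fresh connection and continue
        else:
            print("%s keep-alive" % resp.status_code)
    except requests.exceptions.ConnectionError as err:
        # An abrupt TCP reset (or a refused connect) surfaces here.
        print("connection error (likely reset): %s" % err)
        session = requests.Session()
    time.sleep(0.5)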
Our systemd unit for running Envoy:
envoy.service
[Unit]
Description=Envoy Proxy
Requires=network-online.target
After=network-online.target
[Service]
Type=simple
Environment="ENVOY_CONFIG_FILE=/etc/envoy/envoy.yaml"
Environment="ENVOY_START_OPTS=--use-libevent-buffers 0 --parent-shutdown-time-s 60"
ExecStart=/usr/local/bin/envoy-restarter.py /usr/local/bin/start-envoy.sh
ExecReload=/bin/kill -HUP $MAINPID
ExecStop=/bin/kill -TERM $MAINPID
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
We are using the exact Python restarter tool that is currently in master.
The script that we have the Python restarter tool run is this:
#!/bin/bash
exec /usr/local/bin/envoy -c $ENVOY_CONFIG_FILE $ENVOY_START_OPTS --restart-epoch $RESTART_EPOCH
Config:
---
static_resources:
  listeners:
  - name: listener_http
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 80
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stat_prefix: ingress_http
          route_config:
            name: route_config
            virtual_hosts:
            - name: abc.mycompany.com
              domains: ["abc.mycompany.com", "*.abc.mycompany.com"]
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: abc
          http_filters:
          - name: envoy.router
  clusters:
  - name: abc
    connect_timeout: 1.0s
    type: LOGICAL_DNS
    dns_lookup_family: V4_ONLY
    lb_policy: ROUND_ROBIN
    hosts:
    - socket_address:
        address: abc.internal.mycompany.com
        port_value: 80
    circuit_breakers:
      thresholds:
      - priority: DEFAULT
        max_connections: 100000000
        max_pending_requests: 1000000000
        max_requests: 100000000
        max_retries: 1000000000
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address: { address: 0.0.0.0, port_value: 9901 }
Logs:
Aug 06 21:16:27 hostname systemd[1]: Stopping Envoy Proxy...
Aug 06 21:16:27 hostname envoy-restarter.py[4406]: [2019-08-06 21:16:27.955][4408][warning][main] [source/server/server.cc:463] caught SIGTERM
Aug 06 21:16:27 hostname envoy-restarter.py[4406]: [2019-08-06 21:16:27.955][4408][info][main] [source/server/server.cc:567] shutting down server instance
Aug 06 21:16:27 hostname envoy-restarter.py[4406]: [2019-08-06 21:16:27.955][4408][info][main] [source/server/server.cc:521] main dispatch loop exited
Aug 06 21:16:27 hostname envoy-restarter.py[4406]: [2019-08-06 21:16:27.958][4408][info][main] [source/server/server.cc:560] exiting
Aug 06 21:16:28 hostname envoy-restarter.py[4406]: starting hot-restarter with target: /usr/local/bin/start-envoy.sh
Aug 06 21:16:28 hostname envoy-restarter.py[4406]: forking and execing new child process at epoch 0
Aug 06 21:16:28 hostname envoy-restarter.py[4406]: forked new child process with PID=4408
Aug 06 21:16:28 hostname envoy-restarter.py[4406]: got SIGTERM
Aug 06 21:16:28 hostname envoy-restarter.py[4406]: sending TERM to PID=4408
Aug 06 21:16:28 hostname envoy-restarter.py[4406]: got SIGTERM
Aug 06 21:16:28 hostname envoy-restarter.py[4406]: all children exited cleanly
Aug 06 21:16:28 hostname systemd[1]: Stopped Envoy Proxy.
We don't have this functionality today, but you can script around it using /healthcheck/fail and then wait for connections to drain.
@mattklein123 thank you! Will stay subscribed for potential updates.
Scripting around /healthcheck/fail and sleeping is also what we do for draining. After failing the healthcheck, we watch the listeners' downstream_rq_active metric until it drops to an acceptable range before terminating.
I'm considering building a more robust wrapper for doing hot reloads and graceful shutdowns :thinking:
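For reference, a minimal sketch of that drain sequence, assuming the admin interface listens on port 9901 as in the config above; the threshold, timeout, and stat matching below are illustrative rather than the values we actually use:

#!/usr/bin/env python
# Sketch only: fail the health check, wait for in-flight requests to drain,
# then ask Envoy to exit via the admin interface.
import re
import time
import requests

ADMIN = "http://127.0.0.1:9901"
DRAIN_TIMEOUT_S = 60
ACTIVE_RQ_THRESHOLD = 0  # acceptable number of in-flight downstream requests


def active_requests():
    """Sum downstream_rq_active across the HTTP connection manager stats."""
    stats = requests.get("%s/stats" % ADMIN, timeout=5).text
    total = 0
    for line in stats.splitlines():
        if line.startswith("http.admin."):
            continue  # don't count requests against the admin listener itself
        match = re.match(r"http\..*\.downstream_rq_active: (\d+)", line)
        if match:
            total += int(match.group(1))
    return total


# 1. Fail the health check so the load balancer stops sending new traffic and
#    in-flight keep-alive connections start receiving Connection: close.
requests.post("%s/healthcheck/fail" % ADMIN, timeout=5)

# 2. Wait for in-flight requests to drain, up to a deadline.
deadline = time.time() + DRAIN_TIMEOUT_S
while time.time() < deadline:
    remaining = active_requests()
    if remaining <= ACTIVE_RQ_THRESHOLD:
        break
    print("waiting for %d active requests to drain" % remaining)
    time.sleep(1)

# 3. Ask Envoy to exit.
requests.post("%s/quitquitquit" % ADMIN, timeout=5)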
FYI @derekargueta when I run my Python load-testing tool against Envoy, then POST to /healthcheck/fail, I see that all responses have Connection: close, but new connections are still allowed, which won't properly evict Envoy from my load balancer.
Due to performance constraints, I have to use NLBs in my deployment, which forward TCP packets, rather than ALBs, which understand HTTP. As a result, it's not possible for me to use L7 HTTP health checks with my load balancer, only plain TCP connection opens to test connectivity.
Is there a way to tell Envoy to stop accepting new TCP connections and finish up and close active HTTP connections?
@naftulikay use the health checking filter and have the NLB health check the Envoy and it will stop sending new connections.
So when an AWS load balancer is configured as a network load balancer (i.e. TCP forwarding), you can't use HTTP health checks against the instance. Rather, you can only enable or disable plain TCP-based health checks (i.e. "can I open a connection to the instance on the given port?").
See the AWS documentation for more details.
I'm currently digging through the HTTP health check filters to try to understand more about how they work.
We use NLBs at Lyft with HTTP health checks.
I'll ask our TAMs, but we have attempted to set up HTTP health checks on TCP target groups and the API rejects the creation. Our target groups are configured like so:
resource "aws_lb_target_group" "http" {
name = "http"
port = 80
protocol = "TCP"
vpc_id = "${var.vpc_id}"
deregistration_delay = "${var.deregistration_delay}"
health_check {
enabled = true
interval = 10
port = "traffic-port"
protocol = "TCP"
healthy_threshold = 5
unhealthy_threshold = 5
}
tags {
Name = "http"
}
}
(seconding NLBs + HTTP health checks, which is what we use as well)
@naftulikay the protocol in health_check should be HTTP. The documentation you linked only states that the _default_ is TCP for NLBs, but you can set it to HTTP and set the path field to the endpoint to health check (see the HealthCheckProtocol section).
Looks like the issue you're experiencing may be related to this thread: https://github.com/terraform-providers/terraform-provider-aws/issues/2708#issuecomment-356026204
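For anyone hitting that provider issue, one workaround sketch is to create the target group directly through the API. The snippet below uses boto3; the name, VPC id, and health check path are placeholders (the path would point at whatever endpoint Envoy's health check filter serves):

#!/usr/bin/env python
# Sketch only: create an NLB (TCP) target group whose health check speaks HTTP.
import boto3

elbv2 = boto3.client("elbv2")

response = elbv2.create_target_group(
    Name="http",
    Protocol="TCP",                   # the NLB still forwards raw TCP
    Port=80,
    VpcId="vpc-0123456789abcdef0",    # placeholder
    HealthCheckProtocol="HTTP",       # but the health check can be L7
    HealthCheckPort="traffic-port",
    HealthCheckPath="/healthcheck",   # placeholder path served by Envoy
    HealthCheckIntervalSeconds=10,
    HealthyThresholdCount=5,
    UnhealthyThresholdCount=5,
)
print(response["TargetGroups"][0]["TargetGroupArn"])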
This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in the next 7 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.
Can someone tag this as help-wanted? This is still a desirable function to have and I think the community would benefit from it, even if it isn't implemented in the short term.
FWIW this is a duplicate issue, see also https://github.com/envoyproxy/envoy/issues/2920 (and other previous tickets which were marked solved.)