To run Kong in Kubernetes, a /liveness endpoint would be useful. If /liveness fails to return a 200 OK, the node is killed and replaced, so the endpoint would need to report whether the node is healthy and correctly connected to the cluster. This is not to be confused with whether it is ready to receive traffic; for that a separate /readiness endpoint is required - see https://github.com/Mashape/kong/issues/1678.
I currently use /status for this, but that might be too costly for such a frequent check? Also, I don't think it stops returning 200 OK if the node didn't join the cluster correctly.
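For context, a /status-based probe looks roughly like this (a sketch only; the port assumes the Admin API listens on 8001, and the timings are illustrative, not what I actually run):
livenessProbe:
  httpGet:
    path: /status
    port: 8001
    scheme: HTTP
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 3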
I use /status, too. The trade-off with a health check is always the same: if you implement it very light, it does not check much; if you implement it very heavy, it might impact performance. Maybe there could be a single health page where a query parameter selects something like light, normal, or heavy.
+1
This would definitely be very useful for integrating Kong into Kubernetes at a production-grade level.
+1 not just for Kubernetes, but other load balancing environments as well.
/status is on 8001, not 8000, which is not ideal for some deployments (e.g. when trying to lock down access to the management API -- the public load balancer should not even have access to 8001).
+1 I ran into huge issues with performance when running Kong on Cassandra using the /status endpoint.
I solved this for now by using an execProbe (roughly like the sketch below): https://www.tigraine.at/2017/01/19/configuring-kong-health-checks-in-kubernetes
Once a bit more load was applied, the /status endpoint's response time would go up to 2-3 seconds and sometimes even higher than that - leading to the Kong Pod being killed in K8s.
But having a cheap HTTP endpoint would be very beneficial especially for L7 Load Balancing.
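The exec-based probe boils down to something like this (a sketch, not the blog post's exact manifest; timings are illustrative and it assumes the kong CLI is on the container's PATH):
livenessProbe:
  exec:
    command:
    - kong
    - health
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5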
@Tigraine The execProbe is a good workaround. Thank you!
+1
/status or /health endpoint on HTTP:8000
This will be a very nice feature. Too bad that this issue is stalled :(
With the advent of Kong 0.14, this can be done DIY using its new "Nginx Directive Injection" feature.
Here's how I did it -- pardon any typos, as I'm converting from a Terraform-based deployment to pure YAML.
To your "kong-proxy" Deployment, add this to the spec's container environment variable list:
- name: KONG_NGINX_PROXY_INCLUDE
  value: '/usr/local/kong/kube/proxy_health_check.conf'
Also add the relevant probes to the spec (which will automatically get picked up by something like GKE's ingress-gce):
livenessProbe:
  failureThreshold: 3
  httpGet:
    path: /liveness
    port: 8000
    scheme: HTTP
readinessProbe:
  failureThreshold: 3
  httpGet:
    path: /readiness
    port: 8000
    scheme: HTTP
Now add that "included" file to the container via a ConfigMap; this is the actual health check implementation:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kong-proxy
  namespace: kong
data:
  proxy_health_check.conf: |
    location /liveness {
      content_by_lua_block {
        ngx.say("OK")
      }
    }
    location /readiness {
      content_by_lua_block {
        ngx.say("OK")
      }
    }
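For the file to actually appear at the path referenced by KONG_NGINX_PROXY_INCLUDE, the ConfigMap still has to be mounted into the container. A minimal sketch of that wiring (the volume name kong-proxy-config is just an assumption on my part):
# In the kong-proxy Deployment's pod spec:
containers:
- name: kong-proxy
  # ... image, env, and probes as above ...
  volumeMounts:
  - name: kong-proxy-config          # hypothetical volume name
    mountPath: /usr/local/kong/kube
volumes:
- name: kong-proxy-config
  configMap:
    name: kong-proxy                 # the ConfigMap defined above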
I'm naive as to what the best tests are, so I just used something simple. I tried to use the result of kong health but was tripped up by the permissions on /usr/local/kong/.kong_env, which the worker process's nobody user could not access, and I didn't want to loosen those permissions.
Something like this should be worked into Kong's Kubernetes examples as it is imperative for exposing Kong behind other load-balancers.