Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
NGINX Ingress controller version: 0.16.2
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.7", GitCommit:"b30876a5539f09684ff9fde266fda10b37738c9c", GitTreeState:"clean", BuildDate:"2018-01-16T21:59:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.7+coreos.0", GitCommit:"768e049ab5230010251f30475e0e785e2e999566", GitTreeState:"clean", BuildDate:"2018-01-18T00:17:18Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Environment:
Kernel (e.g. uname -a):
What happened:
Using an ExternalName service with my configuration worked until I included the --enable-dynamic-configuration flag. Now nginx isn't forwarding requests to the external service. It's not resolving the upstream host and keeps returning 502 Bad Gateway. See the error snippet below:
W0717 12:46:10.270803 7 controller.go:773] service foo/fooservice does not have any active endpoints
What you expected to happen:
I expect the upstream host to be resolved.
How to reproduce it (as minimally and precisely as possible):
Set up an ExternalName service, with no selectors or endpoints, that points to an external service.
Anything else we need to know:
I believe the issue has to do with the upstream balancer.
Reproduction (on 0.16.2+some commits, doing a local dev build):
(minikube/ingress-nginx) ~ $ k get svc google -o yaml
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2018-07-17T13:33:08Z
  labels:
    app: google
  name: google
  namespace: ingress-nginx
  resourceVersion: "1165"
  selfLink: /api/v1/namespaces/ingress-nginx/services/google
  uid: f13c1d17-89c5-11e8-a634-0800270f3029
spec:
  externalName: google.com
  selector:
    app: google
  sessionAffinity: None
  type: ExternalName
status:
  loadBalancer: {}
(minikube/ingress-nginx) ~ $ k get ingress -o yaml
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"google-ingress","namespace":"ingress-nginx"},"spec":{"rules":[{"host":"google.example.com","http":{"paths":[{"backend":{"serviceName":"google","servicePort":80},"path":"/"}]}}]}}
    creationTimestamp: 2018-07-17T13:35:25Z
    generation: 1
    name: google-ingress
    namespace: ingress-nginx
    resourceVersion: "1336"
    selfLink: /apis/extensions/v1beta1/namespaces/ingress-nginx/ingresses/google-ingress
    uid: 4319a0eb-89c6-11e8-a634-0800270f3029
  spec:
    rules:
    - host: google.example.com
      http:
        paths:
        - backend:
            serviceName: google
            servicePort: 80
          path: /
  status:
    loadBalancer: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
(For the following examples, I've port-forwarded port 8080 from my machine to the nginx pod.)
Without the dynamic configuration flag:
(minikube/ingress-nginx) ~ $ curl -H "Host: google.example.com" http://localhost:8080
<!DOCTYPE html>
<html lang=en>
<meta charset=utf-8>
<meta name=viewport content="initial-scale=1, minimum-scale=1, width=device-width">
<title>Error 404 (Not Found)!!1</title>
<style>
...more google html stuff
With the dynamic configuration flag:
(minikube/ingress-nginx) ~ $ curl -H "Host: google.example.com" http://localhost:8080
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.13.12</center>
</body>
</html>
Logs for the above minikube test:
(minikube/ingress-nginx) ~ $ k logs nginx-ingress-controller-6ccb8b9445-vzmrj
-------------------------------------------------------------------------------
NGINX Ingress controller
Release: dev
Build: git-5039e770
Repository: [email protected]:ocadotechnology/ingress-nginx.git
-------------------------------------------------------------------------------
nginx version: nginx/1.13.12
W0717 13:37:41.695473 5 client_config.go:533] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0717 13:37:41.695753 5 main.go:183] Creating API client for https://10.96.0.1:443
I0717 13:37:41.702353 5 main.go:227] Running in Kubernetes cluster version v1.10 (v1.10.0) - git (clean) commit fc32d2f3698e36b93322a3465f63a14e9f0eaead - platform linux/amd64
I0717 13:37:41.710847 5 main.go:100] Validated ingress-nginx/default-http-backend as the default backend.
I0717 13:37:41.912438 5 nginx.go:250] Starting NGINX Ingress controller
I0717 13:37:41.937294 5 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"nginx-configuration", UID:"c4e10462-89c4-11e8-a634-0800270f3029", APIVersion:"v1", ResourceVersion:"462", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/nginx-configuration
I0717 13:37:41.945268 5 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"c4e44c9d-89c4-11e8-a634-0800270f3029", APIVersion:"v1", ResourceVersion:"463", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0717 13:37:41.946322 5 event.go:218] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"c4e5da80-89c4-11e8-a634-0800270f3029", APIVersion:"v1", ResourceVersion:"464", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0717 13:37:43.015214 5 event.go:218] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"ingress-nginx", Name:"google-ingress", UID:"4319a0eb-89c6-11e8-a634-0800270f3029", APIVersion:"extensions", ResourceVersion:"1336", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress ingress-nginx/google-ingress
I0717 13:37:43.113915 5 nginx.go:271] Starting NGINX process
I0717 13:37:43.114275 5 leaderelection.go:175] attempting to acquire leader lease ingress-nginx/ingress-controller-leader-nginx...
I0717 13:37:43.117560 5 controller.go:169] Configuration changes detected, backend reload required.
I0717 13:37:43.118669 5 status.go:197] new leader elected: nginx-ingress-controller-677b75cbf-xxr44
I0717 13:37:43.194583 5 controller.go:179] Backend successfully reloaded.
I0717 13:37:43.194798 5 controller.go:189] Initial synchronization of the NGINX configuration.
I0717 13:37:44.198577 5 controller.go:196] Dynamic reconfiguration succeeded.
127.0.0.1 - [127.0.0.1] - - [17/Jul/2018:13:38:06 +0000] "GET / HTTP/1.1" 502 174 "-" "curl/7.58.0" 82 0.000 [ingress-nginx-google-80] 0.0.0.1:80 0 0.000 502 3bc444973e68e8d8d18cd9a7e862cd6a
2018/07/17 13:38:06 [error] 36#36: *54 [lua] balancer.lua:129: balance(): error while setting current upstream peer to no host allowed while connecting to upstream, client: 127.0.0.1, server: google.example.com, request: "GET / HTTP/1.1", host: "google.example.com"
2018/07/17 13:38:06 [crit] 36#36: *54 connect() to 0.0.0.1:80 failed (22: Invalid argument) while connecting to upstream, client: 127.0.0.1, server: google.example.com, request: "GET / HTTP/1.1", upstream: "http://0.0.0.1:80/", host: "google.example.com"
I0717 13:38:28.475350 5 leaderelection.go:184] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx
I0717 13:38:28.475369 5 status.go:197] new leader elected: nginx-ingress-controller-6ccb8b9445-vzmrj
2018/07/17 13:39:54 [error] 36#36: *294 [lua] balancer.lua:129: balance(): error while setting current upstream peer to no host allowed while connecting to upstream, client: 127.0.0.1, server: google.example.com, request: "GET / HTTP/1.1", host: "google.example.com"
2018/07/17 13:39:54 [crit] 36#36: *294 connect() to 0.0.0.1:80 failed (22: Invalid argument) while connecting to upstream, client: 127.0.0.1, server: google.example.com, request: "GET / HTTP/1.1", upstream: "http://0.0.0.1:80/", host: "google.example.com"
127.0.0.1 - [127.0.0.1] - - [17/Jul/2018:13:39:54 +0000] "GET / HTTP/1.1" 502 174 "-" "curl/7.58.0" 82 0.000 [ingress-nginx-google-80] 0.0.0.1:80 0 0.000 502 1ed919087fa7004c2e97df2a0240cb6b
Resulting nginx config:
(minikube/ingress-nginx) ~ $ k exec -it nginx-ingress-controller-6ccb8b9445-vzmrj cat /etc/nginx/nginx.conf
# Configuration checksum: 14015626310287645718
# setup custom paths that do not require root access
pid /tmp/nginx.pid;
daemon off;
worker_processes 2;
worker_rlimit_nofile 523264;
worker_shutdown_timeout 10s ;
events {
    multi_accept on;
    worker_connections 16384;
    use epoll;
}
http {
    lua_package_cpath "/usr/local/lib/lua/?.so;/usr/lib/lua-platform-path/lua/5.1/?.so;;";
    lua_package_path "/etc/nginx/lua/?.lua;/etc/nginx/lua/vendor/?.lua;/usr/local/lib/lua/?.lua;;";
    lua_shared_dict configuration_data 5M;
    lua_shared_dict locks 512k;
    lua_shared_dict balancer_ewma 1M;
    lua_shared_dict balancer_ewma_last_touched_at 1M;
    lua_shared_dict sticky_sessions 1M;
    init_by_lua_block {
        require("resty.core")
        collectgarbage("collect")
        local lua_resty_waf = require("resty.waf")
        lua_resty_waf.init()
        -- init modules
        local ok, res
        ok, res = pcall(require, "configuration")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            configuration = res
        end
        ok, res = pcall(require, "balancer")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            balancer = res
        end
        ok, res = pcall(require, "monitor")
        if not ok then
            error("require failed: " .. tostring(res))
        else
            monitor = res
        end
    }
    init_worker_by_lua_block {
        balancer.init_worker()
    }
    real_ip_header X-Forwarded-For;
    real_ip_recursive on;
    set_real_ip_from 0.0.0.0/0;
    geoip_country /etc/nginx/geoip/GeoIP.dat;
    geoip_city /etc/nginx/geoip/GeoLiteCity.dat;
    geoip_org /etc/nginx/geoip/GeoIPASNum.dat;
    geoip_proxy_recursive on;
    aio threads;
    aio_write on;
    tcp_nopush on;
    tcp_nodelay on;
    log_subrequest on;
    reset_timedout_connection on;
    keepalive_timeout 75s;
    keepalive_requests 100;
    client_body_temp_path /tmp/client-body;
    fastcgi_temp_path /tmp/fastcgi-temp;
    proxy_temp_path /tmp/proxy-temp;
    client_header_buffer_size 1k;
    client_header_timeout 60s;
    large_client_header_buffers 4 8k;
    client_body_buffer_size 8k;
    client_body_timeout 60s;
    http2_max_field_size 4k;
    http2_max_header_size 16k;
    types_hash_max_size 2048;
    server_names_hash_max_size 1024;
    server_names_hash_bucket_size 64;
    map_hash_bucket_size 64;
    proxy_headers_hash_max_size 512;
    proxy_headers_hash_bucket_size 64;
    variables_hash_bucket_size 128;
    variables_hash_max_size 2048;
    underscores_in_headers off;
    ignore_invalid_headers on;
    limit_req_status 503;
    include /etc/nginx/mime.types;
    default_type text/html;
    gzip on;
    gzip_comp_level 5;
    gzip_http_version 1.1;
    gzip_min_length 256;
    gzip_types application/atom+xml application/javascript application/x-javascript application/json application/rss+xml application/vnd.ms-fontobject application/x-font-ttf application/x-web-app-manifest+json application/xhtml+xml application/xml font/opentype image/svg+xml image/x-icon text/css text/plain text/x-component;
    gzip_proxied any;
    gzip_vary on;
    # Custom headers for response
    server_tokens on;
    # disable warnings
    uninitialized_variable_warn off;
    # Additional available variables:
    # $namespace
    # $ingress_name
    # $service_name
    # $service_port
    log_format upstreaminfo '$the_real_ip - [$the_real_ip] - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_length $request_time [$proxy_upstream_name] $upstream_addr $upstream_response_length $upstream_response_time $upstream_status $req_id';
    map $request_uri $loggable {
        default 1;
    }
    access_log /var/log/nginx/access.log upstreaminfo if=$loggable;
    error_log /var/log/nginx/error.log notice;
    resolver 10.96.0.10 valid=30s;
    # Retain the default nginx handling of requests without a "Connection" header
    map $http_upgrade $connection_upgrade {
        default upgrade;
        '' close;
    }
    map $http_x_forwarded_for $the_real_ip {
        default $remote_addr;
    }
    # trust http_x_forwarded_proto headers correctly indicate ssl offloading
    map $http_x_forwarded_proto $pass_access_scheme {
        default $http_x_forwarded_proto;
        '' $scheme;
    }
    # validate $pass_access_scheme and $scheme are http to force a redirect
    map "$scheme:$pass_access_scheme" $redirect_to_https {
        default 0;
        "http:http" 1;
        "https:http" 1;
    }
    map $http_x_forwarded_port $pass_server_port {
        default $http_x_forwarded_port;
        '' $server_port;
    }
    map $pass_server_port $pass_port {
        443 443;
        default $pass_server_port;
    }
    # Obtain best http host
    map $http_host $this_host {
        default $http_host;
        '' $host;
    }
    map $http_x_forwarded_host $best_http_host {
        default $http_x_forwarded_host;
        '' $this_host;
    }
    # Reverse proxies can detect if a client provides a X-Request-ID header, and pass it on to the backend server.
    # If no such header is provided, it can provide a random value.
    map $http_x_request_id $req_id {
        default $http_x_request_id;
        "" $request_id;
    }
    server_name_in_redirect off;
    port_in_redirect off;
    ssl_protocols TLSv1.2;
    # turn on session caching to drastically improve performance
    ssl_session_cache builtin:1000 shared:SSL:10m;
    ssl_session_timeout 10m;
    # allow configuring ssl session tickets
    ssl_session_tickets on;
    # slightly reduce the time-to-first-byte
    ssl_buffer_size 4k;
    # allow configuring custom ssl ciphers
    ssl_ciphers 'ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256';
    ssl_prefer_server_ciphers on;
    ssl_ecdh_curve auto;
    proxy_ssl_session_reuse on;
    upstream upstream_balancer {
        server 0.0.0.1; # placeholder
        balancer_by_lua_block {
            balancer.balance()
        }
        keepalive 32;
    }
    ## start server _
    server {
        server_name _ ;
        listen 80 default_server backlog=511;
        listen [::]:80 default_server backlog=511;
        set $proxy_upstream_name "-";
        listen 443 default_server backlog=511 ssl http2;
        listen [::]:443 default_server backlog=511 ssl http2;
        # PEM sha: 188cb13f746f4914cc2d3a411983c048c602ed6c
        ssl_certificate /etc/ingress-controller/ssl/default-fake-certificate.pem;
        ssl_certificate_key /etc/ingress-controller/ssl/default-fake-certificate.pem;
        location / {
            set $namespace "";
            set $ingress_name "";
            set $service_name "";
            set $service_port "0";
            set $location_path "/";
            rewrite_by_lua_block {
                balancer.rewrite()
            }
            log_by_lua_block {
                balancer.log()
                monitor.call()
            }
            if ($scheme = https) {
                more_set_headers "Strict-Transport-Security: max-age=15724800; includeSubDomains";
            }
            access_log off;
            port_in_redirect off;
            set $proxy_upstream_name "upstream-default-backend";
            client_max_body_size "1m";
            proxy_set_header Host $best_http_host;
            # Pass the extracted client certificate to the backend
            # Allow websocket connections
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header X-Request-ID $req_id;
            proxy_set_header X-Real-IP $the_real_ip;
            proxy_set_header X-Forwarded-For $the_real_ip;
            proxy_set_header X-Forwarded-Host $best_http_host;
            proxy_set_header X-Forwarded-Port $pass_port;
            proxy_set_header X-Forwarded-Proto $pass_access_scheme;
            proxy_set_header X-Original-URI $request_uri;
            proxy_set_header X-Scheme $pass_access_scheme;
            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy "";
            # Custom headers to proxied server
            proxy_connect_timeout 5s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;
            proxy_buffering "off";
            proxy_buffer_size "4k";
            proxy_buffers 4 "4k";
            proxy_request_buffering "on";
            proxy_http_version 1.1;
            proxy_cookie_domain off;
            proxy_cookie_path off;
            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream error timeout;
            proxy_next_upstream_tries 3;
            proxy_pass http://upstream_balancer;
            proxy_redirect off;
        }
        # health checks in cloud providers require the use of port 80
        location /healthz {
            access_log off;
            return 200;
        }
        # this is required to avoid error if nginx is being monitored
        # with an external software (like sysdig)
        location /nginx_status {
            allow 127.0.0.1;
            allow ::1;
            deny all;
            access_log off;
            stub_status on;
        }
    }
    ## end server _
    ## start server google.example.com
    server {
        server_name google.example.com ;
        listen 80;
        listen [::]:80;
        set $proxy_upstream_name "-";
        location / {
            set $namespace "ingress-nginx";
            set $ingress_name "google-ingress";
            set $service_name "google";
            set $service_port "80";
            set $location_path "/";
            rewrite_by_lua_block {
                balancer.rewrite()
            }
            log_by_lua_block {
                balancer.log()
                monitor.call()
            }
            port_in_redirect off;
            set $proxy_upstream_name "ingress-nginx-google-80";
            client_max_body_size "1m";
            proxy_set_header Host $best_http_host;
            # Pass the extracted client certificate to the backend
            # Allow websocket connections
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            proxy_set_header X-Request-ID $req_id;
            proxy_set_header X-Real-IP $the_real_ip;
            proxy_set_header X-Forwarded-For $the_real_ip;
            proxy_set_header X-Forwarded-Host $best_http_host;
            proxy_set_header X-Forwarded-Port $pass_port;
            proxy_set_header X-Forwarded-Proto $pass_access_scheme;
            proxy_set_header X-Original-URI $request_uri;
            proxy_set_header X-Scheme $pass_access_scheme;
            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;
            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy "";
            # Custom headers to proxied server
            proxy_connect_timeout 5s;
            proxy_send_timeout 60s;
            proxy_read_timeout 60s;
            proxy_buffering "off";
            proxy_buffer_size "4k";
            proxy_buffers 4 "4k";
            proxy_request_buffering "on";
            proxy_http_version 1.1;
            proxy_cookie_domain off;
            proxy_cookie_path off;
            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream error timeout;
            proxy_next_upstream_tries 3;
            proxy_pass http://upstream_balancer;
            proxy_redirect off;
        }
    }
    ## end server google.example.com
    # default server, used for NGINX healthcheck and access to nginx stats
    server {
        # Use the port 18080 (random value just to avoid known ports) as default port for nginx.
        # Changing this value requires a change in:
        # https://github.com/kubernetes/ingress-nginx/blob/master/controllers/nginx/pkg/cmd/controller/nginx.go
        listen 18080 default_server backlog=511;
        listen [::]:18080 default_server backlog=511;
        set $proxy_upstream_name "-";
        location /healthz {
            access_log off;
            return 200;
        }
        location /is-dynamic-lb-initialized {
            access_log off;
            content_by_lua_block {
                local configuration = require("configuration")
                local backend_data = configuration.get_backends_data()
                if not backend_data then
                    ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
                    return
                end
                ngx.say("OK")
                ngx.exit(ngx.HTTP_OK)
            }
        }
        location /nginx_status {
            set $proxy_upstream_name "internal";
            access_log off;
            stub_status on;
        }
        location /configuration {
            access_log off;
            allow 127.0.0.1;
            allow ::1;
            deny all;
            # this should be equals to configuration_data dict
            client_max_body_size "10m";
            proxy_buffering off;
            content_by_lua_block {
                configuration.call()
            }
        }
        location / {
            set $proxy_upstream_name "upstream-default-backend";
            proxy_pass http://upstream_balancer;
        }
    }
}
stream {
    log_format log_stream [$time_local] $protocol $status $bytes_sent $bytes_received $session_time;
    access_log /var/log/nginx/access.log log_stream;
    error_log /var/log/nginx/error.log;
    # TCP services
    # UDP services
}
(minikube/ingress-nginx) ~ $
It appears we're ending up with this placeholder?
upstream upstream_balancer {
    server 0.0.0.1; # placeholder
    balancer_by_lua_block {
        balancer.balance()
    }
    keepalive 32;
}
@mikebryant yes, in dynamic mode there is only one upstream in the configuration file. The actual balancing is done in Lua.
https://github.com/kubernetes/ingress-nginx/blob/master/rootfs/etc/nginx/lua/balancer.lua#L112
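For anyone following along, here is a minimal sketch of what that balancing step amounts to, using the lua-resty-core ngx.balancer API; the endpoint table mirrors the backends JSON further down, and everything apart from set_current_peer is illustrative, not the actual balancer.lua:

```lua
-- Minimal sketch of the dynamic balancing step (not the actual
-- balancer.lua), assuming lua-resty-core's "ngx.balancer" API.
local ngx_balancer = require("ngx.balancer")

local function balance(endpoint)
  -- set_current_peer only accepts an IP address and port; handing it an
  -- unresolved hostname such as "google.com" is rejected by
  -- lua-nginx-module, which surfaces as the "no host allowed" error
  -- seen in the logs above.
  local ok, err = ngx_balancer.set_current_peer(endpoint.address, endpoint.port)
  if not ok then
    ngx.log(ngx.ERR, "error while setting current upstream peer to ", err)
  end
end
```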
Upping the log level, this is the backends configuration sent to Lua. Note that the ingress-nginx-google-80 backend's endpoint address is the unresolved hostname google.com:
I0717 14:07:54.528487 5 nginx.go:766] Posting backends configuration: [{"name":"ingress-nginx-google-80","port":80,"secure":false,"secureCACert":{"secret":"","caFilename":"","pemSha":""},"sslPassthrough":false,"endpoints":[{"address":"google.com","port":"80","maxFails":0,"failTimeout":0}],"sessionAffinityConfig":{"name":"","cookieSessionAffinity":{"name":"","hash":""}}},{"name":"upstream-default-backend","port":0,"secure":false,"secureCACert":{"secret":"","caFilename":"","pemSha":""},"sslPassthrough":false,"endpoints":[{"address":"172.17.0.4","port":"8080","maxFails":0,"failTimeout":0}],"sessionAffinityConfig":{"name":"","cookieSessionAffinity":{"name":"","hash":""}}}]
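For reference, the Lua side receives that payload through the /configuration location shown in the config above. A hedged sketch of what such a handler does (illustrative, not the actual configuration.lua):

```lua
-- Hedged sketch of the /configuration POST handler (illustrative, not the
-- real configuration.lua): store the posted backends JSON in the
-- configuration_data shared dict declared in nginx.conf, for the balancer
-- to sync from. Note the endpoint address stored here is still the raw
-- hostname "google.com".
local cjson = require("cjson.safe")

local function handle_post_backends()
  ngx.req.read_body()
  local body = ngx.req.get_body_data()
  if not body or not cjson.decode(body) then
    ngx.status = ngx.HTTP_BAD_REQUEST -- reject unparsable payloads
    return
  end
  local ok, err = ngx.shared.configuration_data:set("backends", body)
  if not ok then
    ngx.log(ngx.ERR, "error storing backends: ", err)
    ngx.status = ngx.HTTP_INTERNAL_SERVER_ERROR
    return
  end
  ngx.status = ngx.HTTP_CREATED
end
```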
I've tracked down "no host allowed" - it's coming from https://github.com/openresty/lua-nginx-module/blob/master/src/ngx_http_lua_balancer.c
I've also found this issue: https://github.com/openresty/lua-resty-core/issues/45
It would appear that --enable-dynamic-configuration currently doesn't support ExternalName services, as the hostname isn't resolved in the Lua code or prior to being sent as a backend.
So, I'm not sure how to fix this. I see the options as:
- resolve the hostname in the Go controller, before the backend configuration is posted to Lua, or
- resolve it on the Lua side, as part of the balancer code.
Which should it be?
@mikebryant let's wait for @ElvinEfendi
Thanks for the very detailed investigation, it's rare enough to be mentioned :clap:
Thanks for the extensive debugging @mikebryant. I started working on this at https://github.com/kubernetes/ingress-nginx/pull/2804; hopefully it will ship before next week. I'm doing it in Nginx/Lua because we already have a mechanism to periodically process endpoints, and I'm piggybacking on it to resolve domain names. That means if DNS changes, Nginx will pick up the change within a second (currently, I think, you have to reload Nginx), so that's an added benefit.
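Roughly, the idea is something like this (a hedged sketch with illustrative names, not the actual PR code), using lua-resty-dns so that set_current_peer only ever sees IP addresses; 10.96.0.10 is the cluster DNS server from the resolver directive in the nginx.conf above:

```lua
-- Hedged sketch of resolving ExternalName hostnames during the periodic
-- backend sync (illustrative, not the code in PR #2804).
local resolver = require("resty.dns.resolver")

-- resolve_address is an illustrative helper name.
local function resolve_address(host)
  if host:match("^%d+%.%d+%.%d+%.%d+$") then
    return host -- already an IP, nothing to do
  end
  local r, err = resolver:new{ nameservers = { "10.96.0.10" } }
  if not r then
    ngx.log(ngx.ERR, "failed to create resolver: ", err)
    return host
  end
  local answers, query_err = r:query(host, { qtype = r.TYPE_A })
  if not answers then
    ngx.log(ngx.ERR, "failed to resolve ", host, ": ", query_err)
    return host
  end
  if answers.errcode then
    ngx.log(ngx.ERR, "DNS error resolving ", host, ": ", answers.errstr)
    return host
  end
  for _, answer in ipairs(answers) do
    if answer.address then
      return answer.address
    end
  end
  return host
end

-- Called from the existing periodic sync, so a DNS change is picked up
-- within about a second instead of requiring an Nginx reload.
local function resolve_external_names(backends)
  for _, backend in ipairs(backends) do
    for _, endpoint in ipairs(backend.endpoints) do
      endpoint.address = resolve_address(endpoint.address)
    end
  end
end
```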
/kind bug
I'm hit by this bug as well; any chance of getting a new release soon?
@gjcarneiro we plan to release 0.18.0 at the end of the week
Edit: In the meantime you can use quay.io/aledbf/nginx-ingress-controller:0.406 (contains current master)
Thanks, I'm trying that master snapshot. One follow-up question: is there extra debugging enabled in this snapshot? I ask because CPU consumption has increased significantly since the upgrade:
(screenshot: Grafana dashboard showing the CPU increase)
@gjcarneiro what version are you using now?
I upgraded from quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1 to quay.io/aledbf/nginx-ingress-controller:0.406. CPU usage, as reported by the Grafana dashboard, seems to have increased, as seen in the picture (around 16:36).
@gjcarneiro did you have dynamic configuration enabled with quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.17.1? If ~so~ not could you try the master snapshot again having dynamic configuration disabled? (--enable-dynamic-configuration=false)
Yes, I have dynamic configuration enabled. I'll try disabling it tomorrow (I can't right now because customers might be annoyed by the websockets disconnecting).
@gjcarneiro I made a typo in my comment above; sorry for the confusion, I've updated it. If you had dynamic configuration already enabled with 0.17.1, then disabling it in this new version probably won't reveal anything. Basically I wanted to make sure that the increase in CPU usage is not due to the fact that dynamic configuration is enabled in master now.
Ah, OK, yeah, I had dynamic configuration enabled before and after. Anyway, just a heads up, there may be some performance regression lurking in master...
Unless... maybe because the underlying ExternalName bug got fixed, we're suddenly processing many more requests (Sentry!), which could explain the increased CPU usage. So maybe it's a false alarm! :relieved: