Ingress-nginx: v0.26.1 does not intercept errors to default-http-backend

Created on 29 Nov 2019 · 1 comment · Source: kubernetes/ingress-nginx

Currently we are using the nginx ingress controller version 0.26.1 in a k8s cluster.
We intercept errors for the error codes "404,500" via the config setting custom-http-errors.
Since 28.11.2019 we have been getting the following errors from nginx:

I1128 14:46:47.771851 6 event.go:255] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"infrastructure", Name:"test-status-code", UID:"e214562a-4fa8-4a7d-840f-25529a6d6eb4", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"91103059", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress infrastructure/test-status-code
2019/11/28 14:47:21 [error] 42#42: *482 could not find named location "@custom_upstream-default-backend_404", client: 127.0.0.1, server: , request: "GET / HTTP/1.0", host: "test-status-code.tiki-dsp.io"
2019/11/28 14:47:21 [error] 45#45: *503 could not find named location "@custom_upstream-default-backend_404", client: 127.0.0.1, server: , request: "GET / HTTP/1.0", host: "test-status-code.tiki-dsp.io"
2019/11/28 14:47:52 [error] 43#43: *939 could not find named location "@custom_upstream-default-backend_404", client: 127.0.0.1, server: , request: "GET / HTTP/1.0", host: "test-status-code.tiki-dsp.io"

Key error:
could not find named location "@custom_upstream-default-backend_404"
But this location is present in the nginx.conf, and the default-http-backend never receives the request. The nginx ingress pod returns a 500 status code because it cannot handle this error.
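
For reference, the test Ingress from the event log above looks roughly like this (a minimal sketch; the backend service name and port are placeholders, not our exact manifest):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-status-code
  namespace: infrastructure
spec:
  rules:
    - host: test-status-code.tiki-dsp.io
      http:
        paths:
          - path: /
            backend:
              # placeholder backend; any service answering with 404/500
              # should trigger the custom-http-errors interception
              serviceName: test-status-code-service
              servicePort: 80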

We tested the following nginx-controller versions:
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.0
quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0

Any help on this topic would be great!
Here you can find the configurations we are using:

nginx-controller-config-map:

kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-config
  namespace: infrastructure
  labels:
    component: nginx
data:
  http2-max-field-size: "32k"
  http2-max-header-size: "64k"
  proxy-buffer-size: "16k"
  large-client-header-buffers: "4 32k"
  # see: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/
  # generally the default settings are already very good!
  # and used https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=nginx-1.13.12&openssl=1.0.1e&hsts=yes&profile=modern
  proxy-connect-timeout: "120"
  proxy-read-timeout: "120"
  proxy-body-size: "2048m"
  use-http2: "true"
  # Expose metrics for monitoring
  enable-vts-status: "true"
  # Custom Error Pages
  # Setting at least one code also enables proxy_intercept_errors, which is required to process error_page.
  # Note: the error codes must be passed as a single string here.
  # (A per-Ingress annotation variant is sketched after these ConfigMaps.)
  custom-http-errors: "404,500"
  # hsts
  hsts: "true" # default is "true". Enables HTTP Strict Transport Security (HSTS): the HSTS header is added to the responses from backends. See https://www.nginx.com/blog/http-strict-transport-security-hsts-and-nginx/
  hsts-max-age: "15768000" # = 6 months # default is 15724800 (6 months).
  hsts-include-subdomains: "true" # default is "true".
  # ssl
  ssl-protocols: "TLSv1.3 TLSv1.2" # default is "TLSv1.2". See http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols
  ssl-ciphers: "TLS13-AES-256-GCM-SHA384:TLS13-CHACHA20-POLY1305-SHA256:TLS13-AES-128-GCM-SHA256:ECDH+CHACHA20:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256" # See http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_ciphers
  ssl-dhparam: |
    -----BEGIN DH PARAMETERS-----
    <some-cert-ciphers>
    -----END DH PARAMETERS-----

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: infrastructure
  labels:
    component: nginx
data:
  #53: "infrastructure/dns:53"
  8020: "infrastructure/hdfs-namenode:8020"
  8220: "hdfs-testcluster/hdfs-namenode:8220"
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: infrastructure
  labels:
    component: nginx
data:
  #53: "infrastructure/dns:53"
  10088: "infrastructure/kerberos:10088"
  10464: "infrastructure/kerberos:10464"
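
As an aside, the interception can also be scoped to a single Ingress via annotations instead of the global ConfigMap key; a sketch of that variant follows (annotation availability depends on the controller version, and the error-pages service name is a placeholder):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-status-code
  namespace: infrastructure
  annotations:
    # intercept only these codes for this Ingress
    nginx.ingress.kubernetes.io/custom-http-errors: "404,500"
    # optional: route intercepted errors to a dedicated service in the
    # same namespace instead of the global default-http-backend
    nginx.ingress.kubernetes.io/default-backend: error-pages
spec:
  rules:
    - host: test-status-code.tiki-dsp.io
      http:
        paths:
          - path: /
            backend:
              serviceName: test-status-code-service # placeholder
              servicePort: 80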

The nginx-controller daemonset:

# How to trigger a rolling update:
# - change the content of the 'kubernetes.io/change-cause' annotation
# - kubectl apply -f ...
# NOTE: it is intended that every ingress DaemonSet keeps the same name
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress
  namespace: infrastructure
  labels:
    component: nginx
  annotations:
    kubernetes.io/change-cause: "updated nginx controller 0.21.0 to the currently latest version 0.26.1"
spec:
  selector:
    matchLabels:
      component: nginx
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        component: nginx
      annotations:
        # Annotate for Prometheus monitoring
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
        fluentbit.io/parser: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress
      nodeSelector:
        ingress: "true"
      terminationGracePeriodSeconds: 60
      containers:
        - image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.1
          imagePullPolicy: Always
          name: nginx-ingress
          readinessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
          livenessProbe:
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            timeoutSeconds: 1
          ports:

            #HTML Ingress ports

            - containerPort: 80
              hostPort: 80
              name: http
            - containerPort: 443
              hostPort: 443
              name: https

            #HDFS Namenode ports

            - containerPort: 8020
              protocol: TCP
              hostPort: 8020
              name: hdfs

            - containerPort: 8220
              protocol: TCP
              hostPort: 8220
              name: test-hdfs

            #DNS ports - are disabled since dns is running on scarface

            #        - containerPort: 53
            #          hostPort: 53
            #          protocol: UDP
            #          name: dns-udp
            #        - containerPort: 53
            #          hostPort: 53
            #          protocol: TCP
            #          name: dns-tcp

            #kerberos ports

            - name: kdc-udp
              containerPort: 10088
              hostPort: 10088
              protocol: UDP

            - name: kdc-pwc-udp
              containerPort: 10464
              hostPort: 10464
              protocol: UDP
              # kdc-pwc-udp => Kerberos DomainController Password Change UDP

            - name: kdc-admin-tcp
              containerPort: 10749
              hostPort: 10749
              protocol: TCP

          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          args:
            - /nginx-ingress-controller
            - --default-ssl-certificate=$(POD_NAMESPACE)/tiki-dsp-io-certificate
            - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
            - --configmap=$(POD_NAMESPACE)/nginx-config
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --annotations-prefix=nginx.ingress.kubernetes.io
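
For context, the default-http-backend referenced by --default-backend-service above is just a plain Deployment plus Service. A minimal sketch (the image and labels are assumptions, not our exact manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: default-http-backend
  namespace: infrastructure
  labels:
    component: default-http-backend
spec:
  replicas: 1
  selector:
    matchLabels:
      component: default-http-backend
  template:
    metadata:
      labels:
        component: default-http-backend
    spec:
      containers:
        - name: default-http-backend
          # stock backend: serves 200 on /healthz and 404 for everything else
          image: k8s.gcr.io/defaultbackend-amd64:1.5
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: infrastructure
spec:
  selector:
    component: default-http-backend
  ports:
    - port: 80
      targetPort: 8080

If this Service has no ready endpoints, the controller has nowhere to send intercepted errors (see the resolution below).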

Please find attached the nginx configuration (reduced to show only a test service config):
nginx.zip

All comments

We can close this issue.
The error here was an unreachable default-http-backend.
Unfortunately, the log output from the nginx error was a bit misleading.
Anyway, after a restart of the default-http-backend the custom error page was working again.
