Ingress-nginx: proxy timeout annotations have no effect on nginx

Created on 31 Jan 2018 · 37 comments · Source: kubernetes/ingress-nginx

NGINX Ingress controller version: 0.10.2 / quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.1", GitCommit:"3a1c9449a956b6026f075fa3134ff92f7d55f812", GitTreeState:"clean", BuildDate:"2018-01-04T11:52:23Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T09:42:01Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}

Environment:

  • Cloud provider or hardware configuration: Bare metal / On premise
  • OS (e.g. from /etc/os-release): Debian GNU/Linux 9 (stretch)
  • Kernel (e.g. uname -a): 4.9.0-5-amd64 #1 SMP Debian 4.9.65-3+deb9u2 (2018-01-04) x86_64 GNU/Linux
  • Install tools: kubeadm
  • Others: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.10.2

What happened:

The NGINX Ingress Controller v0.10.2 configuration does not reflect the per-Ingress proxy timeout annotations.

This Ingress definition doesn't work as expected:

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: ing-manh-telnet-client
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy‑connect‑timeout: 30
    nginx.ingress.kubernetes.io/proxy‑read‑timeout: 1800
    nginx.ingress.kubernetes.io/proxy‑send‑timeout: 1800
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  tls:
    - hosts:
      - "manh-telnet.ls.domain.io"
      secretName: "tls-certs-domainio"
  rules:
    - host: "manh-telnet.ls.domain.io"
      http:
        paths:
        - path: "/"
          backend:
            serviceName: svc-manh-telnet-client
            servicePort: http

The actual vhost :

            # Custom headers to proxied server

            proxy_connect_timeout                   30s;
            proxy_send_timeout                      180s;
            proxy_read_timeout                      180s;

What you expected to happen:

The wanted vhost :

            # Custom headers to proxied server

            proxy_connect_timeout                   30s;
            proxy_send_timeout                      1800s;
            proxy_read_timeout                      1800s;

How to reproduce it (as minimally and precisely as possible):

Anything else we need to know:

Most helpful comment

I had the same problem and discovered that the following do not work:

nginx.ingress.kubernetes.io/proxy-read-timeout: 1800
nginx.ingress.kubernetes.io/proxy-read-timeout: 1800s
nginx.ingress.kubernetes.io/proxy-read-timeout: "1800s"

What does work is:

nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"

All 37 comments

I have the same issue with quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.9.0-beta.17. Instead of the custom timeouts, nginx.conf contains the default ones (60s) in the location block.

My particular test uses this Ingress config (I only needed the timeouts, but added the others just for the test case):

- apiVersion: extensions/v1beta1
  kind: Ingress
  metadata:
    name: che-ingress
    annotations:
      ingress.kubernetes.io/rewrite-target: /
      nginx.ingress.kubernetes.io/proxy-connect-timeout: "3600"
      nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
      nginx.ingress.kubernetes.io/upstream-fail-timeout: "30"
      nginx.ingress.kubernetes.io/add-base-url: "true"
      nginx.ingress.kubernetes.io/affinity: "cookie"
  spec:
    rules:
    - host: 192.168.99.100.nip.io
      http:
        paths:
        - backend:
            serviceName: che-host
            servicePort: 8080

Which generates upstream:

    upstream che-che-host-8080 {
        # Load balance algorithm; empty for round robin, which is the default

        least_conn;

        keepalive 32;

        server 172.17.0.6:8080 max_fails=0 fail_timeout=0;

    }

And server:

    server {
        server_name 192.168.99.100.nip.io ;

        listen 80;

        listen [::]:80;

        set $proxy_upstream_name "-";

        location / {

            set $proxy_upstream_name "che-che-host-8080";

            set $namespace      "che";
            set $ingress_name   "che-ingress3";
            set $service_name   "";

            port_in_redirect off;

            client_max_body_size                    "1m";

            proxy_set_header Host                   $best_http_host;

            # Pass the extracted client certificate to the backend

            proxy_set_header ssl-client-cert        "";
            proxy_set_header ssl-client-verify      "";
            proxy_set_header ssl-client-dn          "";

            # Allow websocket connections
            proxy_set_header                        Upgrade           $http_upgrade;
            proxy_set_header                        Connection        $connection_upgrade;

            proxy_set_header X-Real-IP              $the_real_ip;

            proxy_set_header X-Forwarded-For        $the_real_ip;

            proxy_set_header X-Forwarded-Host       $best_http_host;
            proxy_set_header X-Forwarded-Port       $pass_port;
            proxy_set_header X-Forwarded-Proto      $pass_access_scheme;
            proxy_set_header X-Original-URI         $request_uri;
            proxy_set_header X-Scheme               $pass_access_scheme;

            # Pass the original X-Forwarded-For
            proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

            # mitigate HTTPoxy Vulnerability
            # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
            proxy_set_header Proxy                  "";

            # Custom headers to proxied server

            proxy_connect_timeout                   5s;
            proxy_send_timeout                      60s;
            proxy_read_timeout                      60s;

            proxy_redirect                          off;
            proxy_buffering                         off;
            proxy_buffer_size                       "4k";
            proxy_buffers                           4 "4k";
            proxy_request_buffering                 "on";

            proxy_http_version                      1.1;

            proxy_cookie_domain                     off;
            proxy_cookie_path                       off;

            # In case of errors try the next upstream server before returning an error
            proxy_next_upstream                     error timeout invalid_header http_502 http_503 http_504;

            proxy_pass http://che-che-host-8080;

        }

    }

So it looks like none of the annotations take effect.

You have an incorrect "-" character in your annotations.
My configuration uses nginx.ingress.kubernetes.io/proxy-read-timeout as written, and it works on the same version (0.10.2).


I don't get the point; the annotations were tested one by one, and - is an acceptable character in YAML. Can you elaborate on what is incorrect about the - in my annotations?

The hyphens are not the normal - character; they just look like it.

What @akaGelo is trying to say is that if you use your browser's search function and search for -, some of the hyphens will not be highlighted. Those are the incorrect ones.

Oh, now that seems very obvious! Thanks guys, that was a pretty simple mistake. I'll look into whether the official documentation can be improved to use the same type of character everywhere so copy/paste works.
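The copy/paste trap above can also be caught programmatically. Here is a minimal sketch (the function name `find_lookalike_hyphens` is illustrative, not part of any tool) that flags dash characters in an annotation key which render like ASCII `-` but are different codepoints, such as U+2011 NON-BREAKING HYPHEN:

```python
# Sketch: flag dash characters in an annotation key that render like "-"
# but are different Unicode codepoints (e.g. U+2011 NON-BREAKING HYPHEN,
# easily picked up by copy/pasting from rendered documentation tables).
import unicodedata

def find_lookalike_hyphens(key: str):
    """Return (index, unicode-name) for every non-ASCII dash in key."""
    return [
        (i, unicodedata.name(ch))
        for i, ch in enumerate(key)
        if ch != "-" and unicodedata.category(ch) == "Pd"  # Pd = dash punctuation
    ]

good = "nginx.ingress.kubernetes.io/proxy-read-timeout"
bad = "nginx.ingress.kubernetes.io/proxy\u2011read\u2011timeout"  # pasted from docs

print(find_lookalike_hyphens(good))  # []
print(find_lookalike_hyphens(bad))   # [(33, 'NON-BREAKING HYPHEN'), (38, 'NON-BREAKING HYPHEN')]
```

Running this against a pasted manifest key immediately shows which "hyphens" are fake.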

@garagatyi Maybe you have the same problem? You should also update your ingress controller version.

@gooodmorningopenstack I have an older nginx controller, not wrong characters. The thing is, I can't control the version of the controller, so I have to allow users to redefine controller annotations (it may not even be an nginx controller at all). But thanks for your suggestion!

I'm having the same issue. Unlike the author's, my ingress doesn't have any special hyphens.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-routes
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/proxy-read-timeout: 1200
    nginx.ingress.kubernetes.io/proxy-send-timeout: 1200
spec:
  tls:
  - secretName: nginxsecret
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: frontend
          servicePort: 8000
      - path: /cron/*
        backend:
          serviceName: esg
          servicePort: 8000

      - path: /task/*
        backend:
          serviceName: esg
          servicePort: 8000

      - path: /api/connections/update/*
        backend:
          serviceName: esg
          servicePort: 8000

      - path: /api/drive/scansheet/*
        backend:
          serviceName: esg
          servicePort: 8000

I ran into this as well. I'm assuming an integer is required for timeouts? I was using "5m" because Nginx docs seemed to show that I could. Changed to 300 and things worked great after that.

Closing. As @akaGelo commented, you have an issue with the - character. It's my fault: I am sure you copy/pasted from the docs (a good thing), but in order to make the table readable, a different character was used.
Please check https://github.com/kubernetes/ingress-nginx/pull/2111

I had the same problem and discovered that the following do not work:

nginx.ingress.kubernetes.io/proxy-read-timeout: 1800
nginx.ingress.kubernetes.io/proxy-read-timeout: 1800s
nginx.ingress.kubernetes.io/proxy-read-timeout: "1800s"

What does work is:

nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"

@gae123 that's not working for me

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-app
  namespace: my-app
  annotations:
    nginx.org/websocket-services: "my-app"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "14400"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "14400"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "14400"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/tls-acme: "true"
    kubernetes.io/ingress.class: "gce"
spec: ...

still getting timed out after 30s

2018/03/31 16:55:07 Client 0xc420058b80 connected
2018/03/31 16:55:37 error: websocket: close 1006 (abnormal closure): unexpected EOF
2018/03/31 16:55:37 Client 0xc420058b80 disconnected

2018/03/31 16:58:19 Client 0xc420138e80 connected
2018/03/31 16:58:49 error: websocket: close 1006 (abnormal closure): unexpected EOF
2018/03/31 16:58:49 Client 0xc420138e80 disconnected

kubernetes.io/ingress.class: "gce"

It seems you are using the GCE ingress controller. These annotations only work with the nginx controller.

For me this is not working. Does anyone see an issue?
I am using the nginx helm chart: nginx-ingress-0.8.9

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: zalenium
  namespace: zalenium
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/auth-type: basic
    ingress.kubernetes.io/auth-secret: zalenium-basic-auth
    ingress.kubernetes.io/auth-realm: "Authentication Required"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "*"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/proxy-connect-timeout: 3600
    nginx.ingress.kubernetes.io/proxy-send-timeout: 3600
    nginx.ingress.kubernetes.io/proxy-read-timeout: 3600
spec:
  rules:
  - host: "test.whatever"
    http:
      paths:
      - path: /
        backend:
          serviceName: zalenium
          servicePort: 4444

+1

nginx.ingress.kubernetes.io/proxy-connect-timeout with a number is not working for me as well.

After way too much trial and error and frustration, some tips that might work for others who end up here:

  • nginx.ingress.kubernetes.io/proxy-connect-timeout did not work for me. Nothing changed in the nginx configuration in the ingress controller. No errors were shown. Removing the initial nginx. did work. Ending up with these annotations:
    ingress.kubernetes.io/proxy-connect-timeout: "600"
    ingress.kubernetes.io/proxy-read-timeout: "600"
    ingress.kubernetes.io/proxy-send-timeout: "600"
    ingress.kubernetes.io/send-timeout: "600"
  • If you want to inspect what the end result, the nginx.conf, looks like, you can get it from the ingress controller pod. To access the ingress controller pod with kubectl you need to specify the namespace when running commands, since the controller doesn't live in the default namespace:
$ kubectl get pods --all-namespaces
...
$ kubectl -n kube-system exec nginx-ingress-controller-138430828-pqb7q cat /etc/nginx/nginx.conf | tee nginx.test-ingress-export.conf

@Tim-Schwalbe I am using the helm chart as well, although a different version. It only worked with ConfigMaps.

Here are the steps that helped me. You need the name of the pod running the controller.
Say nginx-ingress-controller-1234abcd

Make sure you're running images from quay.io:
$ kubectl describe pod nginx-ingress-controller-1234abcd | grep Image:
Image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.15.0
If it doesn't start with quay.io, the following steps may not be relevant.

Determine the name of the ConfigMap it reads all those properties from:
$ kubectl describe pod nginx-ingress-controller-1234abcd | grep configmap=
--configmap=default/nginx-ingress-controller

That means it reads from a ConfigMap named nginx-ingress-controller in the default namespace. Append such a ConfigMap to your Ingress yaml file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller
data:
  proxy-read-timeout: "234"
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: lb-ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: app-service
          servicePort: 8080

The properties you can add to the ConfigMap are compiled in the table here: https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/configmap.md

The result in /etc/nginx/nginx.conf
proxy_read_timeout 234s;

I hope that was helpful.

Hi,
is this working for gRPC and HTTP/2 with this image of the ingress controller (quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.14.0 and higher)?

That is a different nginx ingress controller, and its documentation says the setting also applies the timeout to gRPC:
https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/customization

But here I cannot find any mention of gRPC:
https://github.com/kubernetes/ingress-nginx/blob/master/docs/user-guide/nginx-configuration/annotations.md

Is this just not implemented?

Doesn't work for me.


I had to use the string version instead of the number version, any idea why this is?

This breaks:

nginx.ingress.kubernetes.io/proxy-read-timeout: 300

This works:

nginx.ingress.kubernetes.io/proxy-read-timeout: "300"

@yivo you're missing the beginning of the annotation, you need nginx in front of ingress,

so instead of

ingress.kubernetes.io/proxy-read-timeout

you should have

nginx.ingress.kubernetes.io/proxy-read-timeout

I had to use the string version instead of the number version, any idea why this is?

From the first tip in the docs https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/

Annotation keys and values can only be strings. Other types, such as boolean or numeric values must be quoted, i.e. "true", "false", "100".

@aledbf Oh, how weird is that. Thanks.
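To make the quoting rule concrete, here is a small sketch of why "1800" works while 1800 and "1800s" do not (`check_annotation` and `to_nginx_directive` are hypothetical helper names, not controller APIs): a YAML parser types an unquoted 1800 as an integer, which the API rejects because annotations are a map[string]string, and the controller appends the "s" unit itself, so a value that already carries a unit fails to parse as a number:

```python
# Sketch of the two failure modes described above. These helpers are
# illustrative, not real controller code:
#  - Kubernetes annotations are a map[string]string, so a value the YAML
#    parser types as an integer (unquoted 1800) is rejected by the API.
#  - The controller parses the string as a number and appends the "s" unit
#    itself, so "1800s" fails to parse while "1800" becomes 1800s.

def check_annotation(value) -> bool:
    """Annotation values must already be strings when they reach the API."""
    return isinstance(value, str)

def to_nginx_directive(name: str, value: str) -> str:
    """Mimic emitting e.g. `proxy_read_timeout 1800s;` from an annotation."""
    seconds = int(value)  # raises ValueError for "1800s"
    return f"{name} {seconds}s;"

assert not check_annotation(1800)   # unquoted YAML scalar -> int -> rejected
assert check_annotation("1800")     # quoted -> accepted
print(to_nginx_directive("proxy_read_timeout", "1800"))  # proxy_read_timeout 1800s;
```

This matches the observation earlier in the thread that only the quoted, unit-less form "1800" takes effect.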

Bumping this: we hit a worse version of this problem when moving from 1.12.1 to 1.12.4. Apparently now if you have these invalid values (not specified as strings) all of your annotations are discarded. Seems like 'kubectl apply' with invalid annotations shouldn't silently accept and discard these values.

I had this problem on 0.18, upgrading to latest fixed it using "normal" annotations (nginx.ingress.*)

What is the approved solution for this? For me it's the same: I'm getting CLIENT_DISCONNECTED at exactly 60 seconds. I have tried all the options mentioned in this thread, but none are working. Any solid clue to get it fixed?

To turn this around: is anyone actually able to have something communicate across a kubernetes cluster boundary with more than 60s idle time between packets? Perhaps using something other than nginx?

After adding

nginx.ingress.kubernetes.io/proxy-read-timeout: "3600"
it throws a 502:

Proxy Error
The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request

Reason: Error reading from remote server

Using a reverse proxy to connect to an API on an Apache server image hosted on a K8s cluster.

Any updates on when this might be fixed, or a version that it is patched in? I've also tried everything, running version 0.20.0 and having no luck.

@dannyburke1 solution described in https://github.com/kubernetes/ingress-nginx/issues/2007#issuecomment-374856607 is working fine in current release

Hey @kvaps thanks for your response.

When copying/pasting that (in vim) and applying it, it says it can't be applied due to the hyphens being used:

name part must consist of alphanumeric characters, '-', '_' or '.', and must start and end with an alphanumeric character (e.g. 'MyName', or 'my.name', or '123-abc', regex used for validation is '([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]')

If I replace the hyphens then I can apply it, but unfortunately the settings aren't being propagated down to the nginx.conf file.
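The error message above quotes the exact validation regex the API server applies to the name part of an annotation key, which shows why keys containing the lookalike hyphen can never be applied. A quick check (`name_part_is_valid` is an illustrative helper, not a Kubernetes API):

```python
# The regex below is copied verbatim from the API server error message above.
# name_part_is_valid is an illustrative helper, not a Kubernetes API.
import re

QUALIFIED_NAME = re.compile(r"([A-Za-z0-9][-A-Za-z0-9_.]*)?[A-Za-z0-9]")

def name_part_is_valid(key: str) -> bool:
    name = key.rsplit("/", 1)[-1]  # validate the part after the "/" prefix
    return QUALIFIED_NAME.fullmatch(name) is not None

print(name_part_is_valid("nginx.ingress.kubernetes.io/proxy-read-timeout"))  # True
print(name_part_is_valid("proxy\u2011read\u2011timeout"))  # False: U+2011 is not in [-A-Za-z0-9_.]
```

Only ASCII `-`, `_`, `.`, and alphanumerics are allowed, so a U+2011 non-breaking hyphen fails validation exactly as the error reports.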

I'm seeing nginx connections resets at the exact 60s mark with the ingress-nginx controller.

Using the following annotations on the grpc service.

    nginx.ingress.kubernetes.io/server-snippet: "keepalive_timeout 600s; grpc_read_timeout 3600s; grpc_send_timeout 3600s;"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"

ngnix-ingress ConfigMap

  keep-alive: '3600'
  upstream-keepalive-timeout: '3600'

I'm setting up a bidirectional grpc stream

Looking at the nbl metrics, it looks like nginx is doing the connection reset.

I'm not sure of the exact fix here, but redeploying the ingress and updating nginx seems to have sorted it for me.

Adding client_body_timeout was the key fix for me here. This needs to be put in the documentation somewhere, since it was hard to find:

    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
    nginx.ingress.kubernetes.io/server-snippet: "keepalive_timeout 3600s; grpc_read_timeout 3600s; grpc_send_timeout 3600s; client_body_timeout 3600s;"
    nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"

@jontro
This definitely needs to be included on the front page of the documentation.
This helped (and saved) me a lot. Thanks.

How to set the timeouts in a millisecond format?
