Ingress-nginx: nginx ingress controller does not seem to use proxy-ssl-secret when talking to upstream

Created on 13 Jan 2020 · 2 comments · Source: kubernetes/ingress-nginx

NGINX Ingress controller version:
0.26.1

Kubernetes version (use kubectl version):
client: 1.14.7
server: 1.14.10-gke.0

Environment:

  • Cloud provider or hardware configuration: GKE
  • OS (e.g. from /etc/os-release): Debian GNU/Linux 10 (buster)
  • Kernel (e.g. uname -a): 4.14.138+
  • Install tools: none (I think this is asking if I'm using Helm or something like it, which I'm not)
  • Others: none

What happened:

I have an internal ingress sitting between two backend services, C (the "client" service that sends requests) and S (the "server" service that handles them). I'm using the nginx ingress controller for this ingress to get consistent-hash routing from C to S based on an HTTP header that C provides.

                      /-> S pod 1
C --> nginx ingress -|
                      \-> S pod 2

Including the Kubernetes service objects, here's what the network looks like:

C pod --> "sticky S" svc --> nginx ingress controller svc --> nginx ingress controller pod --> S pod

I want to use TLS between C and S. Since I'm relying on the nginx ingress controller to read an HTTP header, I can't just use TLS passthrough. I need to have the nginx ingress controller terminate TLS from C, read the header, and then initiate a new TLS connection to S when it proxies the request.
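
For context, here is a minimal hand-written nginx sketch of the terminate-and-re-encrypt pattern I want the controller to produce (illustrative only: the real controller routes through a Lua balancer rather than a static upstream, and every name and path below is hypothetical):

upstream s_backend {
    # consistent hashing on the header provided by C
    hash $http_x_header_name consistent;
    server 10.0.0.1:443;
    server 10.0.0.2:443;
}

server {
    listen 443 ssl;

    # terminate TLS from C
    ssl_certificate     /etc/certs/server.pem;
    ssl_certificate_key /etc/certs/server.pem;

    location / {
        # re-encrypt toward S, presenting a client certificate
        proxy_pass https://s_backend;
        proxy_ssl_certificate         /etc/certs/client.pem;
        proxy_ssl_certificate_key    /etc/certs/client.pem;
        proxy_ssl_trusted_certificate /etc/certs/ca.pem;
        proxy_ssl_verify              on;
    }
}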

I have created a Kubernetes secret object following the instructions here. This secret object is in the default namespace and its name is ingress-nginx-tls-key-v1. I have configured the ingress object to use this secret (see below for full spec) and I have verified from the nginx conf (also below) that the secret is part of the location spec for S.
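
For reference, a secret of that shape (ca.crt, tls.crt, tls.key) can be created roughly as follows; the local file paths here are hypothetical:

$ kubectl create secret generic ingress-nginx-tls-key-v1 \
    --namespace default \
    --from-file=ca.crt=./ca.crt \
    --from-file=tls.crt=./client.crt \
    --from-file=tls.key=./client.key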

When I curl S from the nginx ingress controller pod using that secret, I get a 200 (the expected result):

$ export SECRET_FILE="/etc/ingress-controller/ssl/default-ingress-nginx-tls-key-v1.pem"
$ curl --key $SECRET_FILE --cacert $SECRET_FILE "https://svc-s.default.svc.cluster.local/_status/healthz"
HTTP/2 200

However, when I curl the "sticky S" service (which routes to the ingress controller) from C, I get a 400 error:

$ curl --key $KEY_FILE --cacert $CERT_FILE "https://svc-s-sticky.default.svc.cluster.local/_status/healthz"
HTTP/2 400
Client sent an HTTP request to an HTTPS server.

I believe that the KEY_FILE and CERT_FILE I'm using on C are correct, because when I curl S directly from C (rather than curling the ingress controller) I get a 200:

$ curl --key $KEY_FILE --cacert $CERT_FILE "https://svc-s.default.svc.cluster.local/_status/healthz"
HTTP/2 200

What you expected to happen:

I should get a 200 when curling the "sticky S" service from C.

How to reproduce it:

I'm just using the off-the-shelf nginx ingress controller service, deployment, etc.

Here's my Ingress spec:

kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: s-sticky
  annotations:
    cloud.google.com/load-balancer-type: "Internal"
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.class: "nginx"

    nginx.ingress.kubernetes.io/upstream-hash-by: "$http_x_header_name"
    nginx.ingress.kubernetes.io/upstream-hash-by-subset: "false"

    nginx.ingress.kubernetes.io/proxy-ssl-secret: "default/ingress-nginx-tls-key-v1"
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
  - hosts:
    # this is the sticky svc that forwards to the ingress controller
    - svc-s-sticky.default.svc.cluster.local
    secretName: svc-s-tls-key-v2
  rules: 
  - host: svc-s-sticky.default.svc.cluster.local 
    http:
      paths:
      - path: /
        backend:
          serviceName: svc-s # this is the (non-sticky) svc in front of the S pods
          servicePort: 443

The sticky service looks like this:

kind: Service
apiVersion: v1
metadata:
  name: svc-s-sticky
  annotations:
    cloud.google.com/app-protocols: '{"https": "HTTPS"}'
spec:
  type: ExternalName
  # this is the service in front of the ingress controller
  externalName: ingress-nginx.ingress-nginx.svc.cluster.local

My secret ("default/ingress-nginx-tls-key-v1") looks like this:

{
    "apiVersion": "v1",
    "data": {
        "ca.crt": "LS0t<REDACTED>LQ0K",
        "tls.crt": "LS0t<REDACTED>LQo=",
        "tls.key": "LS0t<REDACTED>LQo="
    },
    "kind": "Secret",
    "metadata": {
        "creationTimestamp": "2020-01-10T02:17:19Z",
        "name": "ingress-nginx-tls-key-v1",
        "namespace": "default",
        "resourceVersion": "192221780",
        "selfLink": "/api/v1/namespaces/default/secrets/ingress-nginx-tls-key-v1",
        "uid": "543778a0-334f-11ea-b143-42010a8000b0"
    },
    "type": "Opaque"
}

The generated nginx conf on the ingress controller pod is:

## start server svc-s-sticky.default.svc.cluster.local
server {
    server_name svc-s-sticky.default.svc.cluster.local ;

    listen 80  ;
    listen 443  ssl http2 ;

    set $proxy_upstream_name "-";

    ssl_certificate_by_lua_block {
        certificate.call()
    }

    # PEM sha: bd4963e8f770fe1af553f004941cb3962d90e194
    proxy_ssl_certificate                   /etc/ingress-controller/ssl/default-ingress-nginx-tls-key-v1.pem;
    proxy_ssl_certificate_key               /etc/ingress-controller/ssl/default-ingress-nginx-tls-key-v1.pem;
    proxy_ssl_trusted_certificate           /etc/ingress-controller/ssl/default-ingress-nginx-tls-key-v1.pem;
    proxy_ssl_ciphers                       DEFAULT;
    proxy_ssl_protocols                     TLSv1 TLSv1.1 TLSv1.2;
    proxy_ssl_verify                        on;
    proxy_ssl_verify_depth                  1;

    location / {

        set $namespace      "default";
        set $ingress_name   "s-sticky";
        set $service_name   "svc-s";
        set $service_port   "443";
        set $location_path  "/";

        rewrite_by_lua_block {
            lua_ingress.rewrite({
                force_ssl_redirect = true,
                ssl_redirect = true,
                force_no_ssl_redirect = false,
                use_port_in_redirects = false,
            })
            balancer.rewrite()
            plugins.run()
        }

        header_filter_by_lua_block {
            plugins.run()
        }
        body_filter_by_lua_block {
        }

        log_by_lua_block {
            balancer.log()
            monitor.call()
            plugins.run()
        }

        port_in_redirect off;

        set $balancer_ewma_score -1;
        set $proxy_upstream_name "default-svc-s-443";
        set $proxy_host          $proxy_upstream_name;
        set $pass_access_scheme  $scheme;
        set $pass_server_port    $server_port;
        set $best_http_host      $http_host;
        set $pass_port           $pass_server_port;

        set $proxy_alternative_upstream_name "";

        client_max_body_size                    1m;

        proxy_set_header Host                   $best_http_host;

        # Pass the extracted client certificate to the backend

        # Allow websocket connections
        proxy_set_header                        Upgrade           $http_upgrade;

        proxy_set_header                        Connection        $connection_upgrade;

        proxy_set_header X-Request-ID           $req_id;
        proxy_set_header X-Real-IP              $remote_addr;

        proxy_set_header X-Forwarded-For        $remote_addr;

        proxy_set_header X-Forwarded-Host       $best_http_host;
        proxy_set_header X-Forwarded-Port       $pass_port;
        proxy_set_header X-Forwarded-Proto      $pass_access_scheme;

        proxy_set_header X-Scheme               $pass_access_scheme;

        # Pass the original X-Forwarded-For
        proxy_set_header X-Original-Forwarded-For $http_x_forwarded_for;

        # mitigate HTTPoxy Vulnerability
        # https://www.nginx.com/blog/mitigating-the-httpoxy-vulnerability-with-nginx/
        proxy_set_header Proxy                  "";

        # Custom headers to proxied server

        proxy_connect_timeout                   5s;
        proxy_send_timeout                      60s;
        proxy_read_timeout                      60s;

        proxy_buffering                         off;
        proxy_buffer_size                       4k;
        proxy_buffers                           4 4k;

        proxy_max_temp_file_size                1024m;

        proxy_request_buffering                 on;
        proxy_http_version                      1.1;

        proxy_cookie_domain                     off;
        proxy_cookie_path                       off;

        # In case of errors try the next upstream server before returning an error
        proxy_next_upstream                     error timeout;
        proxy_next_upstream_timeout             0;
        proxy_next_upstream_tries               3;

        proxy_pass http://upstream_balancer;

        proxy_redirect                          off;

    }

}
## end server svc-s-sticky.default.svc.cluster.local
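
A note for readers: nginx's proxy_ssl_* directives only come into play when the connection to the upstream is itself TLS, i.e. when proxy_pass uses an https:// scheme. Reduced to the two relevant lines, the config above is effectively:

    proxy_ssl_verify on;                  # client cert and verification configured at the server level...
    proxy_pass http://upstream_balancer;  # ...but the upstream connection is plaintext, so none of it is used

which turns out to be the crux of this issue (see the comments below).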

The --v=5 nginx logs for one of the requests that returned a 400 look like this:

2020/01/10 23:26:20 [debug] 223#223: *52368 accept: 10.44.0.155:43946 fd:4
2020/01/10 23:26:20 [debug] 223#223: *52368 event timer add: 4: 60000:4818171951
2020/01/10 23:26:20 [debug] 223#223: *52368 reusable connection: 1
2020/01/10 23:26:20 [debug] 223#223: *52368 epoll add event: fd:4 op:1 ev:80002001
2020/01/10 23:26:20 [debug] 223#223: *52368 http check ssl handshake
2020/01/10 23:26:20 [debug] 223#223: *52368 http recv(): 1
2020/01/10 23:26:20 [debug] 223#223: *52368 https ssl handshake: 0x16
2020/01/10 23:26:20 [debug] 223#223: *52368 tcp_nodelay
2020/01/10 23:26:20 [debug] 223#223: *52368 ssl get session: 04C5345B:32
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL server name: "svc-s-sticky.default.svc.cluster.local"
2020/01/10 23:26:20 [debug] 223#223: *52368 ssl cert: connection reusable: 1
2020/01/10 23:26:20 [debug] 223#223: *52368 reusable connection: 0
2020/01/10 23:26:20 [debug] 223#223: *52368 lua_certificate_by_lua: handler return value: 0, cert cb exit code: 1
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL ALPN supported by client: h2
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL ALPN supported by client: http/1.1
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL ALPN selected: h2
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_do_handshake: -1
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_get_error: 2
2020/01/10 23:26:20 [debug] 223#223: *52368 reusable connection: 0
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL handshake handler: 0
2020/01/10 23:26:20 [debug] 223#223: *52368 ssl new session: EB53BB32:32:205
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_do_handshake: 1
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL: TLSv1.2, cipher: "ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD"
2020/01/10 23:26:20 [debug] 223#223: *52368 init http2 connection
2020/01/10 23:26:20 [debug] 223#223: *52368 posix_memalign: 0000559E714E06B0:512 @16
2020/01/10 23:26:20 [debug] 223#223: *52368 posix_memalign: 0000559E7148E000:4096 @16
2020/01/10 23:26:20 [debug] 223#223: *52368 add cleanup: 0000559E714E0880
2020/01/10 23:26:20 [debug] 223#223: *52368 posix_memalign: 0000559E714DE900:512 @16
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 send SETTINGS frame
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 send WINDOW_UPDATE frame sid:0, window:2147418112
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 read handler
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_read: 24
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_read: 27
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_read: 13
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_read: 80
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_read: -1
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_get_error: 2
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 preface verified
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame type:4 f:0 l:18 sid:0
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 SETTINGS frame
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 setting 3:100
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 setting 4:1073741824
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 setting 2:0
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame complete pos:0000559E716DE5F3 end:0000559E716DE650
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame type:8 f:0 l:4 sid:0
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 WINDOW_UPDATE frame sid:0 window:1073676289
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame complete pos:0000559E716DE600 end:0000559E716DE650
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame type:1 f:5 l:71 sid:1
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 HEADERS frame sid:1 depends on 0 excl:0 weight:16
2020/01/10 23:26:20 [debug] 223#223: *52368 posix_memalign: 0000559E714E6B90:1024 @16
2020/01/10 23:26:20 [debug] 223#223: *52368 posix_memalign: 0000559E71464250:4096 @16
2020/01/10 23:26:20 [debug] 223#223: *52368 posix_memalign: 0000559E71465260:4096 @16
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 get indexed header: 2
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 header: ":method: GET"
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 get indexed name: 4
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 encoded string, len:12
2020/01/10 23:26:20 [debug] 223#223: *52368 http uri: "/_status/healthz"
2020/01/10 23:26:20 [debug] 223#223: *52368 http args: ""
2020/01/10 23:26:20 [debug] 223#223: *52368 http exten: ""
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 header: ":path: /_status/healthz"
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 get indexed header: 7
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 header: ":scheme: https"
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 get indexed name: 1
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 encoded string, len:38
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 table add: ":authority: svc-s-sticky.default.svc.cluster.local"
2020/01/10 23:26:20 [debug] 223#223: *52368 malloc: 0000559E714DEB10:512
2020/01/10 23:26:20 [debug] 223#223: *52368 malloc: 0000559E71537190:4096
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 table account: 96 free:4096
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 header: ":authority: svc-s-sticky.default.svc.cluster.local"
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 get indexed name: 58
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 encoded string, len:8
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 table add: "user-agent: curl/7.64.0"
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 table account: 53 free:4000
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 header: "user-agent: curl/7.64.0"
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 get indexed name: 19
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 raw string, len:3
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 table add: "accept: */*"
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 table account: 41 free:3947
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 header: "accept: */*"
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 request line: "GET /_status/healthz HTTP/2.0"
2020/01/10 23:26:20 [debug] 223#223: *52368 generic phase: 0
2020/01/10 23:26:20 [debug] 223#223: *52368 rewrite phase: 1
2020/01/10 23:26:20 [debug] 223#223: *52368 rewrite phase: 2
2020/01/10 23:26:20 [debug] 223#223: *52368 http script value: "-"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script set $proxy_upstream_name
2020/01/10 23:26:20 [debug] 223#223: *52368 test location: "/"
2020/01/10 23:26:20 [debug] 223#223: *52368 using configuration "/"
2020/01/10 23:26:20 [debug] 223#223: *52368 http cl:-1 max:1048576
2020/01/10 23:26:20 [debug] 223#223: *52368 rewrite phase: 4
2020/01/10 23:26:20 [debug] 223#223: *52368 lua rewrite handler, uri:"/_status/healthz" c:1
2020/01/10 23:26:20 [debug] 223#223: *52368 rewrite phase: 4
2020/01/10 23:26:20 [debug] 223#223: *52368 rewrite phase: 5
2020/01/10 23:26:20 [debug] 223#223: *52368 http script value: "default"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script set $namespace
2020/01/10 23:26:20 [debug] 223#223: *52368 http script value: "s-sticky"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script set $ingress_name
2020/01/10 23:26:20 [debug] 223#223: *52368 http script value: "svc-s"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script set $service_name
2020/01/10 23:26:20 [debug] 223#223: *52368 http script value: "443"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script set $service_port
2020/01/10 23:26:20 [debug] 223#223: *52368 http script value: "/"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script set $location_path
2020/01/10 23:26:20 [debug] 223#223: *52368 http script value: "-1"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script set $balancer_ewma_score
2020/01/10 23:26:20 [debug] 223#223: *52368 http script value: "default-svc-s-443"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script set $proxy_upstream_name
2020/01/10 23:26:20 [debug] 223#223: *52368 http script complex value
2020/01/10 23:26:20 [debug] 223#223: *52368 http script var: "default-svc-s-443"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script set $proxy_host
2020/01/10 23:26:20 [debug] 223#223: *52368 http script complex value
2020/01/10 23:26:20 [debug] 223#223: *52368 http script var: "https"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script set $pass_access_scheme
2020/01/10 23:26:20 [debug] 223#223: *52368 http script complex value
2020/01/10 23:26:20 [debug] 223#223: *52368 http script var: "443"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script set $pass_server_port
2020/01/10 23:26:20 [debug] 223#223: *52368 http script complex value
2020/01/10 23:26:20 [debug] 223#223: *52368 http script var: "svc-s-sticky.default.svc.cluster.local"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script set $best_http_host
2020/01/10 23:26:20 [debug] 223#223: *52368 http script complex value
2020/01/10 23:26:20 [debug] 223#223: *52368 http script var: "443"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script set $pass_port
2020/01/10 23:26:20 [debug] 223#223: *52368 http script value: ""
2020/01/10 23:26:20 [debug] 223#223: *52368 http script set $proxy_alternative_upstream_name
2020/01/10 23:26:20 [debug] 223#223: *52368 rewrite phase: 6
2020/01/10 23:26:20 [debug] 223#223: *52368 lua rewrite handler, uri:"/_status/healthz" c:1
2020/01/10 23:26:20 [debug] 223#223: *52368 looking up Lua code cache with key '=rewrite_by_lua(nginx.conf:457)nhli_6f1afa56de667de75a61211cc434cb08'
2020/01/10 23:26:20 [debug] 223#223: *52368 lua creating new thread
2020/01/10 23:26:20 [debug] 223#223: *52368 lua reset ctx
2020/01/10 23:26:20 [debug] 223#223: *52368 http cleanup add: 0000559E71465ED0
2020/01/10 23:26:20 [debug] 223#223: *52368 lua run thread, top:0 c:1
2020/01/10 23:26:20 [debug] 223#223: *52368 add cleanup: 0000559E71465FA0
2020/01/10 23:26:20 [debug] 223#223: *52368 lua resume returned 0
2020/01/10 23:26:20 [debug] 223#223: *52368 lua light thread ended normally
2020/01/10 23:26:20 [debug] 223#223: *52368 lua deleting light thread
2020/01/10 23:26:20 [debug] 223#223: *52368 post rewrite phase: 7
2020/01/10 23:26:20 [debug] 223#223: *52368 generic phase: 8
2020/01/10 23:26:20 [debug] 223#223: *52368 generic phase: 9
2020/01/10 23:26:20 [debug] 223#223: *52368 generic phase: 10
2020/01/10 23:26:20 [debug] 223#223: *52368 access phase: 11
2020/01/10 23:26:20 [debug] 223#223: *52368 access phase: 12
2020/01/10 23:26:20 [debug] 223#223: *52368 access phase: 13
2020/01/10 23:26:20 [debug] 223#223: *52368 access phase: 14
2020/01/10 23:26:20 [debug] 223#223: *52368 post access phase: 15
2020/01/10 23:26:20 [debug] 223#223: *52368 generic phase: 16
2020/01/10 23:26:20 [debug] 223#223: *52368 generic phase: 17
2020/01/10 23:26:20 [debug] 223#223: *52368 posix_memalign: 0000559E715381A0:4096 @16
2020/01/10 23:26:20 [debug] 223#223: *52368 http init upstream, client timer: 0
2020/01/10 23:26:20 [debug] 223#223: *52368 http map started
2020/01/10 23:26:20 [debug] 223#223: *52368 http map: "" ""
2020/01/10 23:26:20 [debug] 223#223: *52368 http map started
2020/01/10 23:26:20 [debug] 223#223: *52368 http script var: "9eb519a69fcb94c2dc28211ea16b9951"
2020/01/10 23:26:20 [debug] 223#223: *52368 http map: "" "9eb519a69fcb94c2dc28211ea16b9951"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script copy: "Host"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script var: "svc-s-sticky.default.svc.cluster.local"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script copy: ""
2020/01/10 23:26:20 [debug] 223#223: *52368 http script copy: ""
2020/01/10 23:26:20 [debug] 223#223: *52368 http script copy: "X-Request-ID"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script var: "9eb519a69fcb94c2dc28211ea16b9951"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script copy: "X-Real-IP"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script var: "10.44.0.155"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script copy: "X-Forwarded-For"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script var: "10.44.0.155"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script copy: "X-Forwarded-Host"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script var: "svc-s-sticky.default.svc.cluster.local"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script copy: "X-Forwarded-Port"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script var: "443"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script copy: "X-Forwarded-Proto"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script var: "https"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script copy: "X-Scheme"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script var: "https"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script copy: ""
2020/01/10 23:26:20 [debug] 223#223: *52368 http script copy: ""
2020/01/10 23:26:20 [debug] 223#223: *52368 http script copy: ""
2020/01/10 23:26:20 [debug] 223#223: *52368 http proxy header: "user-agent: curl/7.64.0"
2020/01/10 23:26:20 [debug] 223#223: *52368 http proxy header: "accept: */*"
2020/01/10 23:26:20 [debug] 223#223: *52368 http proxy header:
2020/01/10 23:26:20 [debug] 223#223: *52368 http cleanup add: 0000559E71466240
2020/01/10 23:26:20 [debug] 223#223: *52368 init keepalive peer
2020/01/10 23:26:20 [debug] 223#223: *52368 get keepalive peer
2020/01/10 23:26:20 [debug] 223#223: *52368 lua balancer peer, tries: 1
2020/01/10 23:26:20 [debug] 223#223: *52368 lua reset ctx
2020/01/10 23:26:20 [debug] 223#223: *52368 looking up Lua code cache with key 'balancer_by_luanhli_0f29762dfd828b8baa4d895affbc4b90'
2020/01/10 23:26:20 [debug] 223#223: *52368 stream socket 18
2020/01/10 23:26:20 [debug] 223#223: *52368 epoll add connection: fd:18 ev:80002005
2020/01/10 23:26:20 [debug] 223#223: *52368 connect to 10.44.2.168:9001, fd:18 #52370
2020/01/10 23:26:20 [debug] 223#223: *52368 http upstream connect: -2
2020/01/10 23:26:20 [debug] 223#223: *52368 posix_memalign: 0000559E714E5DD0:128 @16
2020/01/10 23:26:20 [debug] 223#223: *52368 event timer add: 18: 5000:4818116956
2020/01/10 23:26:20 [debug] 223#223: *52368 http finalize request: -4, "/_status/healthz?" a:1, c:2
2020/01/10 23:26:20 [debug] 223#223: *52368 http request count:2 blk:0
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame complete pos:0000559E716DE650 end:0000559E716DE650
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame out: 0000559E7148E1B8 sid:0 bl:0 len:0
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame out: 0000559E7148E108 sid:0 bl:0 len:4
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame out: 0000559E7148E050 sid:0 bl:0 len:18
2020/01/10 23:26:20 [debug] 223#223: *52368 posix_memalign: 0000559E714DA5B0:512 @16
2020/01/10 23:26:20 [debug] 223#223: *52368 malloc: 0000559E715391B0:4096
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL buf copy: 27
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL buf copy: 13
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL buf copy: 9
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL to write: 49
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_write: 49
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame sent: 0000559E7148E050 sid:0 bl:0 len:18
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame sent: 0000559E7148E108 sid:0 bl:0 len:4
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame sent: 0000559E7148E1B8 sid:0 bl:0 len:0
2020/01/10 23:26:20 [debug] 223#223: *52368 event timer del: 4: 4818171951
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 read handler
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_read: 9
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_read: -1
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_get_error: 2
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame type:4 f:1 l:0 sid:0
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame complete pos:0000559E716DE5C9 end:0000559E716DE5C9
2020/01/10 23:26:20 [debug] 223#223: *52368 http upstream request: "/_status/healthz?"
2020/01/10 23:26:20 [debug] 223#223: *52368 http upstream send request handler
2020/01/10 23:26:20 [debug] 223#223: *52368 http upstream send request
2020/01/10 23:26:20 [debug] 223#223: *52368 http upstream send request body
2020/01/10 23:26:20 [debug] 223#223: *52368 chain writer buf fl:1 s:375
2020/01/10 23:26:20 [debug] 223#223: *52368 chain writer in: 0000559E71538990
2020/01/10 23:26:20 [debug] 223#223: *52368 writev: 375 of 375
2020/01/10 23:26:20 [debug] 223#223: *52368 chain writer out: 0000000000000000
2020/01/10 23:26:20 [debug] 223#223: *52368 event timer del: 18: 4818116956
2020/01/10 23:26:20 [debug] 223#223: *52368 event timer add: 18: 60000:4818171969
2020/01/10 23:26:20 [debug] 223#223: *52368 http upstream request: "/_status/healthz?"
2020/01/10 23:26:20 [debug] 223#223: *52368 http upstream process header
2020/01/10 23:26:20 [debug] 223#223: *52368 malloc: 0000559E714CEE80:4096
2020/01/10 23:26:20 [debug] 223#223: *52368 recv: eof:0, avail:1
2020/01/10 23:26:20 [debug] 223#223: *52368 recv: fd:18 76 of 4096
2020/01/10 23:26:20 [debug] 223#223: *52368 http proxy status 400 "400 Bad Request"
2020/01/10 23:26:20 [debug] 223#223: *52368 http proxy header done
2020/01/10 23:26:20 [debug] 223#223: *52368 lua header filter for user lua code, uri "/_status/healthz"
2020/01/10 23:26:20 [debug] 223#223: *52368 looking up Lua code cache with key 'header_filter_by_luanhli_537482850bfc85b842f10d9c3d0521aa'
2020/01/10 23:26:20 [debug] 223#223: *52368 lua capture header filter, uri "/_status/healthz"
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 header filter
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 output header: ":status: 400"
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 output header: "server: openresty/1.15.8.2"
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 output header: "date: Fri, 10 Jan 2020 23:26:20 GMT"
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 output header: "strict-transport-security: max-age=15724800; includeSubDomains"
2020/01/10 23:26:20 [debug] 223#223: *52368 http2:1 create HEADERS frame 0000559E71538C50: len:86 fin:0
2020/01/10 23:26:20 [debug] 223#223: *52368 http cleanup add: 0000559E71538D48
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame out: 0000559E71538C50 sid:1 bl:1 len:86
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL buf copy: 9
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL buf copy: 86
2020/01/10 23:26:20 [debug] 223#223: *52368 http2:1 HEADERS frame 0000559E71538C50 was sent
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame sent: 0000559E71538C50 sid:1 bl:1 len:86
2020/01/10 23:26:20 [debug] 223#223: *52368 http proxy filter init s:400 h:0 c:0 l:-1
2020/01/10 23:26:20 [debug] 223#223: *52368 http upstream process non buffered downstream
2020/01/10 23:26:20 [debug] 223#223: *52368 http output filter "/_status/healthz?"
2020/01/10 23:26:20 [debug] 223#223: *52368 http copy filter: "/_status/healthz?"
2020/01/10 23:26:20 [debug] 223#223: *52368 lua body filter for user lua code, uri "/_status/healthz"
2020/01/10 23:26:20 [debug] 223#223: *52368 looking up Lua code cache with key 'body_filter_by_luanhli_57371dd58738408f1e599ff605294c36'
2020/01/10 23:26:20 [debug] 223#223: *52368 lua capture body filter, uri "/_status/healthz"
2020/01/10 23:26:20 [debug] 223#223: *52368 http postpone filter "/_status/healthz?" 0000559E71538D60
2020/01/10 23:26:20 [debug] 223#223: *52368 http brotli filter
2020/01/10 23:26:20 [debug] 223#223: *52368 write new buf t:0 f:0 0000000000000000, pos 0000559E714CEE9C, size: 48 file: 0, size: 0
2020/01/10 23:26:20 [debug] 223#223: *52368 http write filter: l:0 f:1 s:48
2020/01/10 23:26:20 [debug] 223#223: *52368 http write filter limit 0
2020/01/10 23:26:20 [debug] 223#223: *52368 http2:1 windows: conn:1073741824 stream:1073741824
2020/01/10 23:26:20 [debug] 223#223: *52368 http2:1 create DATA frame 0000559E71538C50: len:48 flags:0
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame out: 0000559E71538C50 sid:1 bl:0 len:48
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL buf copy: 9
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL buf copy: 48
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL to write: 152
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_write: 152
2020/01/10 23:26:20 [debug] 223#223: *52368 http2:1 DATA frame 0000559E71538C50 was sent
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame sent: 0000559E71538C50 sid:1 bl:0 len:48
2020/01/10 23:26:20 [debug] 223#223: *52368 http write filter 0000000000000000
2020/01/10 23:26:20 [debug] 223#223: *52368 http copy filter: 0 "/_status/healthz?"
2020/01/10 23:26:20 [debug] 223#223: *52368 event timer: 18, old: 4818171969, new: 4818171969
2020/01/10 23:26:20 [debug] 223#223: *52368 http upstream request: "/_status/healthz?"
2020/01/10 23:26:20 [debug] 223#223: *52368 http upstream dummy handler
2020/01/10 23:26:20 [debug] 223#223: *52368 http upstream request: "/_status/healthz?"
2020/01/10 23:26:20 [debug] 223#223: *52368 http upstream process non buffered upstream
2020/01/10 23:26:20 [debug] 223#223: *52368 recv: eof:1, avail:1
2020/01/10 23:26:20 [debug] 223#223: *52368 recv: fd:18 0 of 4096
2020/01/10 23:26:20 [debug] 223#223: *52368 finalize http upstream request: 0
2020/01/10 23:26:20 [debug] 223#223: *52368 finalize http proxy request
2020/01/10 23:26:20 [debug] 223#223: *52368 free keepalive peer
2020/01/10 23:26:20 [debug] 223#223: *52368 lua balancer free peer, tries: 2
2020/01/10 23:26:20 [debug] 223#223: *52368 close http upstream connection: 18
2020/01/10 23:26:20 [debug] 223#223: *52368 free: 0000559E714E5DD0, unused: 48
2020/01/10 23:26:20 [debug] 223#223: *52368 event timer del: 18: 4818171969
2020/01/10 23:26:20 [debug] 223#223: *52368 reusable connection: 0
2020/01/10 23:26:20 [debug] 223#223: *52368 http output filter "/_status/healthz?"
2020/01/10 23:26:20 [debug] 223#223: *52368 http copy filter: "/_status/healthz?"
2020/01/10 23:26:20 [debug] 223#223: *52368 lua body filter for user lua code, uri "/_status/healthz"
2020/01/10 23:26:20 [debug] 223#223: *52368 looking up Lua code cache with key 'body_filter_by_luanhli_57371dd58738408f1e599ff605294c36'
2020/01/10 23:26:20 [debug] 223#223: *52368 lua capture body filter, uri "/_status/healthz"
2020/01/10 23:26:20 [debug] 223#223: *52368 http postpone filter "/_status/healthz?" 00007FFE8356DCF0
2020/01/10 23:26:20 [debug] 223#223: *52368 http brotli filter
2020/01/10 23:26:20 [debug] 223#223: *52368 write new buf t:0 f:0 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 0
2020/01/10 23:26:20 [debug] 223#223: *52368 http write filter: l:1 f:0 s:0
2020/01/10 23:26:20 [debug] 223#223: *52368 http write filter limit 0
2020/01/10 23:26:20 [debug] 223#223: *52368 http2:1 create DATA frame 0000559E71538C50: len:0 flags:1
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame out: 0000559E71538C50 sid:1 bl:0 len:0
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL buf copy: 9
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL to write: 9
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_write: 9
2020/01/10 23:26:20 [debug] 223#223: *52368 http2:1 DATA frame 0000559E71538C50 was sent
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 frame sent: 0000559E71538C50 sid:1 bl:0 len:0
2020/01/10 23:26:20 [debug] 223#223: *52368 http write filter 0000000000000000
2020/01/10 23:26:20 [debug] 223#223: *52368 http copy filter: 0 "/_status/healthz?"
2020/01/10 23:26:20 [debug] 223#223: *52368 http finalize request: 0, "/_status/healthz?" a:1, c:1
2020/01/10 23:26:20 [debug] 223#223: *52368 http request count:1 blk:0
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 close stream 1, queued 0, processing 1, pushing 0
2020/01/10 23:26:20 [debug] 223#223: *52368 http close request
2020/01/10 23:26:20 [debug] 223#223: *52368 lua request cleanup: forcible=0
2020/01/10 23:26:20 [debug] 223#223: *52368 lua log handler, uri:"/_status/healthz" c:0
2020/01/10 23:26:20 [debug] 223#223: *52368 looking up Lua code cache with key '=log_by_lua(nginx.conf:474)nhli_91c8d86038dbc6fb2db2b6381a82b516'
2020/01/10 23:26:20 [debug] 223#223: *52368 http log handler
2020/01/10 23:26:20 [debug] 223#223: *52368 http map started
2020/01/10 23:26:20 [debug] 223#223: *52368 http script var: "/_status/healthz"
2020/01/10 23:26:20 [debug] 223#223: *52368 http map: "/_status/healthz" "1"
2020/01/10 23:26:20 [debug] 223#223: *52368 http script var: "1"
2020/01/10 23:26:20 [debug] 223#223: *52368 run cleanup: 0000559E71465FA0
2020/01/10 23:26:20 [debug] 223#223: *52368 free: 0000559E714CEE80
2020/01/10 23:26:20 [debug] 223#223: *52368 free: 0000559E71464250, unused: 0
2020/01/10 23:26:20 [debug] 223#223: *52368 free: 0000559E71465260, unused: 2
2020/01/10 23:26:20 [debug] 223#223: *52368 free: 0000559E715381A0, unused: 90
2020/01/10 23:26:20 [debug] 223#223: *52368 free: 0000559E714E6B90, unused: 846
2020/01/10 23:26:20 [debug] 223#223: *52368 post event 0000559E7155E9D0
2020/01/10 23:26:20 [debug] 223#223: *52368 delete posted event 0000559E7155E9D0
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 handle connection handler
2020/01/10 23:26:20 [debug] 223#223: *52368 free: 0000559E7148E000, unused: 2936
2020/01/10 23:26:20 [debug] 223#223: *52368 free: 0000559E715391B0
2020/01/10 23:26:20 [debug] 223#223: *52368 reusable connection: 1
2020/01/10 23:26:20 [debug] 223#223: *52368 event timer add: 4: 180000:4818291970
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 idle handler
2020/01/10 23:26:20 [debug] 223#223: *52368 reusable connection: 0
2020/01/10 23:26:20 [debug] 223#223: *52368 posix_memalign: 0000559E7148E000:4096 @16
2020/01/10 23:26:20 [debug] 223#223: *52368 http2 read handler
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_read: 0
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_get_error: 6
2020/01/10 23:26:20 [debug] 223#223: *52368 peer shutdown SSL cleanly
2020/01/10 23:26:20 [debug] 223#223: *52368 close http connection: 4
2020/01/10 23:26:20 [debug] 223#223: *52368 SSL_shutdown: 1
2020/01/10 23:26:20 [debug] 223#223: *52368 event timer del: 4: 4818291970
2020/01/10 23:26:20 [debug] 223#223: *52368 reusable connection: 0
2020/01/10 23:26:20 [debug] 223#223: *52368 run cleanup: 0000559E714E0880
2020/01/10 23:26:20 [debug] 223#223: *52368 free: 0000559E7148E000, unused: 4016
2020/01/10 23:26:20 [debug] 223#223: *52368 free: 0000000000000000
2020/01/10 23:26:20 [debug] 223#223: *52368 free: 0000559E71537190
2020/01/10 23:26:20 [debug] 223#223: *52368 free: 0000559E714DEB10
2020/01/10 23:26:20 [debug] 223#223: *52368 free: 0000559E7143FE30, unused: 0
2020/01/10 23:26:20 [debug] 223#223: *52368 free: 0000559E714E06B0, unused: 8
2020/01/10 23:26:20 [debug] 223#223: *52368 free: 0000559E714DE900, unused: 8
2020/01/10 23:26:20 [debug] 223#223: *52368 free: 0000559E714DA5B0, unused: 384

/kind bug


All 2 comments

Update: it looks like the source of the error was this line in the nginx conf:

proxy_pass http://upstream_balancer;

Note the http://, not https://.
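
For comparison, once the controller is told the backend speaks TLS, that line should be rendered as (assuming the documented backend-protocol behavior):

proxy_pass https://upstream_balancer;

and only with an https:// upstream do the proxy_ssl_* directives actually apply.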

I fixed this by adding

nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"

to my ingress spec. However, now I get a 502 Bad Gateway. I see the following error in the nginx log:

2020/01/14 01:26:02 [error] 41#41: *199 upstream SSL certificate does not match "upstream_balancer" while SSL handshaking to upstream, client: 10.44.5.7, server: svc-s-sticky.default.svc.cluster.local, request: "GET /_status/healthz HTTP/2.0", upstream: "https://10.44.2.168:9001/_status/healthz", host: "svc-s-sticky.default.svc.cluster.local"

Obviously, my server is not serving an SSL certificate that matches upstream_balancer as a host name. What is the suggested fix here? It seems unlikely that I'm the first person to hit this issue.
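
For what it's worth, plain nginx controls which name is checked during upstream certificate verification with the proxy_ssl_name directive, so a hand-written workaround would look roughly like this (the name below is an assumption for illustration, not something confirmed in this issue):

proxy_ssl_name   svc-s.default.svc.cluster.local;
proxy_ssl_verify on;

Newer ingress-nginx releases expose this as the nginx.ingress.kubernetes.io/proxy-ssl-name annotation; whether a given controller version supports it is worth checking against its docs.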

Closing this issue since I technically resolved the original problem. Filed another issue for the 502 Bad Gateway here: https://github.com/kubernetes/ingress-nginx/issues/4928
