Ingress-nginx: auth-tls-pass-certificate-to-upstream does not work with https

Created on 4 Dec 2018 · 21 comments · Source: kubernetes/ingress-nginx

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

NGINX Ingress controller version: 0.21.0

Kubernetes version:

Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.3", GitCommit:"435f92c719f279a3a67808c80521ea17d5715c66", GitTreeState:"clean", BuildDate:"2018-11-27T01:14:37Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}

What happened:

When adding auth-tls-pass-certificate-to-upstream: true to an ingress resource, the client certificate passed to the ingress controller is not forwarded to the backend pod.

What you expected to happen:

The backend pod should receive the client certificate.

How to reproduce it (as minimally and precisely as possible):

  1. Start an HTTPS server that expects mTLS.

  2. Create an Ingress resource, such as the following, that points to the mTLS server's Service:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mtls-sample
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "https"
    nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
spec:
  rules:
    - http:
        paths:
          - path: /hello
            backend:
              serviceName: mtls-svc
              servicePort: 443
  3. Send a request to the ingress controller, such as the following:
# curl -L --cert 4_client/certs/localhost.cert.pem --key 4_client/private/localhost.key.pem https://172.17.0.11:443/hello -k

<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx/1.15.6</center>
</body>
</html>
  4. View the pod logs (assuming a Go app) to verify the missing client cert:
2018/12/04 17:42:29 http: TLS handshake error from 172.17.0.11:41922: tls: client didn't provide a certificate
  5. Send the same curl directly to the pod's Service and verify that mTLS succeeds:
# curl -L --cert 4_client/certs/localhost.cert.pem --key 4_client/private/localhost.key.pem https://10.102.202.95:443/hello -k

Hello World/ 

Anything else we need to know:

Unless I'm misunderstanding the annotation, I'd expect the client cert to be passed on to the upstream pod.


All 21 comments

Alright, after digging deeper, I'm finding the issue is more about standardizing which header the client certificate is passed in, rather than my initial theory that the nginx-ingress-controller isn't passing the client cert at all.

I've found that nginx is passing the client cert to the backend pod in the Ssl-client-certificate header.

It seems that for projects like Envoy there has been a lot of discussion about how to accomplish this; in their case they went with x-forwarded-client-cert. Some suggestions on Stack Overflow propose headers like x-ssl-cert.

I suggest we provide the ability to specify the header key in which the client cert will be forwarded. Something such as:

nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream-header: "x-forwarded-client-cert"

To ensure compatibility with upstream servers / pods.

Let me know your thoughts.

I suggest we provide the functionality to specify the header key for which the client cert will be forwarded in. Something such as:

That makes sense. We already do this for another header: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/configmap/#forwarded-for-header
That said, this should be a global value instead of a new annotation, at least as a first step.

Edit: just in case, this does not mean the header will be compatible with Envoy (http://nginx.org/en/docs/http/ngx_http_ssl_module.html != https://www.envoyproxy.io/docs/envoy/latest/configuration/http_conn_man/headers#x-forwarded-client-cert)
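For context, a "global value" here would be a key in the controller's ConfigMap rather than a per-Ingress annotation. This is how the existing forwarded-for-header option is set; a sketch (the ConfigMap name and namespace follow a typical install and may differ in yours):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name/namespace vary by install
  namespace: ingress-nginx
data:
  # Existing global knob: which header carries the client IP chain.
  # A client-cert header option would presumably sit alongside it.
  forwarded-for-header: "X-Forwarded-For"
```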

this should be a global value instead of a new annotation,

I can take a stab at this. Can you elaborate on the meaning of "global value", @aledbf?

this does not means the header will be compatible with envoy

100% understood.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Any update on this? :) I'd really like to see this implemented!

/remove-lifecycle rotten

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Still interested! :)

/remove-lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Again, still interested! :)

/remove-lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Can't wait to get this! ;)

/remove-lifecycle stale

Did anyone figure this out? We have the same situation, and had to spend a few days debugging before landing here.

Any update on this? We have the same situation.

Hello,

I ran into this thread a couple of days ago.

It seems there is a configuration-snippet annotation (https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#configuration-snippet) that lets you define the HTTP header in which the client certificate will be inserted. The following annotation worked for me:

nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;

Of course, you can replace 'X-SSL-CERT' with the name of your desired header.

@IoakeimSamarasGR Could you please share your whole annotation?

I am using the following annotations without success:

annotations:
      kubernetes.io/ingress.class: "nginx"
      nginx.ingress.kubernetes.io/affinity: "cookie"
      nginx.ingress.kubernetes.io/session-cookie-name: "MYSERVICE"
      nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
      nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
      #nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true"
      #nginx.ingress.kubernetes.io/auth-tls-verify-client: "true"
      nginx.ingress.kubernetes.io/configuration-snippet: |
        proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;

@rdoering Here are the annotations that worked for me:

annotations:
    nginx.ingress.kubernetes.io/auth-tls-verify-client: "on"
    nginx.ingress.kubernetes.io/auth-tls-verify-depth: "1"
    nginx.ingress.kubernetes.io/auth-tls-secret: default/{{ .Values.ingress.caTls.caSecretName }}
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-SSL-CERT $ssl_client_escaped_cert;
      proxy_ssl_name "mydemo.demo.com";
    kubernetes.io/ingress.allow-http: "false"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/proxy-ssl-secret: default/{{ .Values.ingress.caTls.caSecretName }}
    nginx.ingress.kubernetes.io/proxy-ssl-verify: "on"
    nginx.ingress.kubernetes.io/proxy-ssl-verify-depth: "1"
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

I gave it a try with Kubernetes v1.18.8 and k8s.gcr.io/ingress-nginx/controller:v0.41.2
(reference: https://github.com/kubernetes/ingress-nginx/issues/2922, "Unable to use 'ssl_verify_client optional_no_ca' with the current implementation of the 'auth-tls-verify-client' annotation")

This combination of two configs sets the client certificate header:

    nginx.ingress.kubernetes.io/server-snippet: |
      ssl_verify_client optional_no_ca;
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header SSL_CLIENT_CERT $ssl_client_escaped_cert;

As far as I tested, nginx.ingress.kubernetes.io/auth-tls-verify-client: "optional_no_ca" did not work, and nginx.ingress.kubernetes.io/auth-tls-pass-certificate-to-upstream: "true" was not necessary.

