Ingress-nginx: AWS NLB support is limited

Created on 11 Feb 2020 · 21 comments · Source: kubernetes/ingress-nginx

Below are some facts for ingress-nginx on NLB with TLS termination (a minimal Service sketch follows the list):

  1. Right now we have only a single global annotation per LB to configure TLS towards the backend: service.beta.kubernetes.io/aws-load-balancer-backend-protocol
  2. If the backend protocol is set to "ssl", everything works, except that we do double TLS offloading for no reason (first on the NLB, then on the ingress)
  3. If the backend protocol is set to "tcp", we get "Plain HTTP request sent to TLS port"
  4. If we map the HTTPS listener to the plain HTTP port to address №3, HTTP -> HTTPS redirects stop working
  5. There should be yet another combination that causes an infinite loop of HTTPS redirects, but I can't remember it off the top of my head...
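
For concreteness, a minimal, hypothetical controller Service showing the single annotation from fact №1 (names and the ARN are placeholders; toggling the backend-protocol value between "ssl" and "tcp" reproduces facts №2 and №3):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    # "ssl" => double TLS offload (fact №2); "tcp" => "Plain HTTP request
    # sent to TLS port" on the ingress side (fact №3)
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "ssl"
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:111111111111:certificate/placeholder"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
spec:
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: https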

These "circular" dependencies still causing pain to some people and they still report problems defined in â„–3 â„–4 â„–5 here in issue tracker.

I do understand that if there are multiple cases like TCP and TLS backends within the same scope, they should be handled with separate NLBs; this is rather a limitation of ELB support in Kubernetes today.
We could even accept backend protocol "ssl" as a relatively good (although suboptimal) workaround, but that basically means every TCP port exposed through ingress-nginx must speak TLS internally, which is usually not the case; and if you need SSL passthrough for additional security, you probably wouldn't use ingress-nginx anyway.
Also, I briefly went through the ingress controller's source code and found no evidence that the SSL port can be faked by configuring a plain HTTP listener on port 443 (no ssl option included).
I don't remember precisely whether ingress-nginx's configuration depends on the service.beta.kubernetes.io/aws-load-balancer-backend-protocol annotation (though I have a strong feeling it does), but wouldn't it be more convenient and reasonable to rely on that configuration and switch nginx's behavior for port 443 between plain and "ssl" to reflect the annotation's value, "tcp" or "ssl" respectively?

That would make both configurations described in facts №2 and №3 work correctly, give users more flexibility to decide which scenario to use, and potentially keep it in line with their TCP apps should they choose to publish them through ingress-nginx.

Sorry for the wall of text, but I wanted to summarize the problems we have across the board and trigger a wider discussion. Thank you!

kind/support

All 21 comments

Hi @dene14, this is how I set up my nginx-controller. It works fine, except that I set up the SSL certificate manually via the AWS Console on the NLB.

I hope this helps; if it's not related, I'll just delete my comment.

Process

  1. Traffic reaches the NLB over HTTPS (HTTP redirects to HTTPS)
  2. TLS is terminated on the NLB, which forwards the traffic to the nginx-controller over HTTP
  3. nginx controller --> ingress (host-based, not path-based) --> service --> pods

Note: requests to www are forwarded to HTTPS

My spec

Kubernetes version: v1.14 (running on EKS)
nginx-controller: helm chart nginx-ingress-1.30.0
Load Balancer Type: AWS Network Load Balancer

yaml files

I used kubectl get <resource> -o yaml and omitted creationTimestamp, status, etc.

nginx values.yaml

controller:
  config:
    real-ip-header: "proxy_protocol"
    use-forwarded-headers: "true"
  metrics:
    enabled: "true"
  service:
    targetPorts:
      http: http
      https: http
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:omitted-xxx"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
      service.beta.kubernetes.io/aws-load-balancer-type: nlb

ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/affinity: cookie
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
    nginx.ingress.kubernetes.io/session-cookie-expires: "172800"
    nginx.ingress.kubernetes.io/session-cookie-max-age: "172800"
    nginx.ingress.kubernetes.io/session-cookie-name: MYCOOKIE
    nginx.org/websocket-services: service1 # not sure if this is working, need to test it
  labels:
    id: ns1
  name: ing
  namespace: ns1
spec:
  rules:
  - host: host1.example.com
    http:
      paths:
      - backend:
          serviceName: service1
          servicePort: 1234
        path: /
  - host: host2.example.com
    http:
      paths:
      - backend:
          serviceName: service1
          servicePort: 1234
        path: /

service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    id: ns1
  name: service1
  namespace: ns1
spec:
  clusterIP: 1.2.3.4
  ports:
  - name: pod1
    port: 1234
    protocol: TCP
    targetPort: 4321
  - name: pod2
    port: 1234
    protocol: TCP
    targetPort: 5678
  selector:
    id: ns1
  sessionAffinity: None
  type: ClusterIP

Hi,

I just want to flag that we've hit similar limitations with ELBs.

We want to offload at the LB (for easy certificate management in ACM) and still pass HTTPS -> HTTPS through the nginx-ingress (and HTTP -> HTTP).
We almost have this working: we set the backend protocol to "ssl" and the listeners are configured in a way that works, but that makes the ELB health check use SSL (fine) against the non-SSL node port (not fine). At the moment we have to manually change the health-check port after deployment, and then everything works as expected. (I have a minimal set of helm chart values to reproduce this if needed.)

I imagine that if the issues in this ticket are resolved, the fix will extend to the ELB case as well? If not, and this comment isn't relevant, I can raise a separate issue.

@dene14 I am sorry, but I don't understand exactly what you are trying to say.

The only way to support SSL as a backend option in an ELB or NLB is to expose SSL certificate(s) in ACM for the hosts being defined. These could be self-signed certificates (because NGINX uses SNI to determine which server block should handle the traffic).

@aledbf
I'm trying to say that when you use an NLB (an L4 load balancer) without ProxyProtocol (because why would you use it) and terminate SSL on the NLB (yes, with an ACM cert), there is no signal for nginx other than the NodePort to tell where the traffic originated.
nginx's current base logic assumes port 80 is plain and port 443 is SSL. The latter obviously requires traffic between the NLB and nginx to be wrapped in TLS. If you want to avoid double SSL termination (on the NLB, then on nginx), you can remap HTTPS to port 80, but this introduces an infinite redirect, since nginx has no way to determine whether the traffic was originally TLS. So if we could disable the "ssl" flag on nginx's port 443, traffic could be terminated on the NLB only and sent to a plain port dedicated to ex-TLS traffic, with all redirect rules on that port simply disabled.

And as my initial message says, it also gives some flexibility to users who need to publish TCP ports externally (those will also be non-TLS, since the backend-protocol annotation on ingress-nginx's service will be set to "tcp" in that case).

Hope that clarifies things.

@unfor19 thanks for your example! I'm a bit wary of enabling ProxyProtocol manually, as it gets disabled during any service update, so it's not a solution for a production cluster, unfortunately: https://github.com/kubernetes/kubernetes/issues/57250. And in fact I don't see any reason to use ProxyProtocol at all, since original source IPs are left intact with an NLB (unlike an ELB).

@dene14 the redirect issue is in part due to proxy_set_header X-Forwarded-Proto being defined as either the value from the inbound connection or the value forwarded by the load balancer (if use-forwarded-headers: "true"). There is also no way to override these headers: any headers defined by proxy-set-headers are applied before the X-Forwarded-* headers.
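
For reference, a minimal sketch of the setting mentioned above, assuming the standard ingress-nginx controller ConfigMap (the ConfigMap name and namespace vary by install):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # name depends on your install
  namespace: ingress-nginx
data:
  # When "true", nginx trusts X-Forwarded-* headers sent by the LB;
  # an L4 NLB never sets them, so nginx falls back to the inbound
  # connection's values (plain HTTP on port 80 => "http").
  use-forwarded-headers: "true"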

@unfor19, in your example I think you missed use-proxy-protocol: "true". In any case, the proxy protocol here only sets the real IP, not the real port, which we need in order to determine the protocol (HTTP or HTTPS).

Unfortunately, it looks like nginx lacks support for determining the original request scheme from the PROXY protocol (ref: https://trac.nginx.org/nginx/ticket/711#comment:2), so until that is done, only a best-effort guess based on the TCP port (via $proxy_protocol_server_port), or letting the user override $scheme, is possible.
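
To illustrate the best-effort guess described above, a hedged sketch using the controller ConfigMap's http-snippet key (the map and the $guessed_scheme variable are illustrative, not an official feature):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller
  namespace: ingress-nginx
data:
  use-proxy-protocol: "true"
  # http-snippet injects raw nginx config into the http block.
  # $proxy_protocol_server_port (nginx >= 1.17.6) is the port the client
  # originally connected to on the NLB, carried in the PROXY header.
  http-snippet: |
    map $proxy_protocol_server_port $guessed_scheme {
      default http;
      443     https;
    }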

Another consideration is that using an NLB requires the service to be exposed to the public internet on the node. That is definitely a security concern if we start assuming that all traffic on a specific port is encrypted upstream (TLS bypass), because it possibly isn't, or if we trust PROXY v2 protocol info that may originate from the internet (spoofing).

In ingress-nginx, the PR https://github.com/kubernetes/ingress-nginx/pull/5042 makes it so that with use-proxy-protocol: "true" the proto https is selected when port 443 is used, which opens up options for acceptable configurations.

Hi @dene14, curious whether you got the problem resolved? I'm having a difficult time with this as well. I want to terminate TLS on the NLB and just use HTTP for all traffic behind the NLB, but also force a redirect to HTTPS should users try to access the NLB on an HTTP port.

Hi @unfor19,

Can you confirm that you are setting up the SSL certificate manually via the AWS Console despite having it defined in your values? I'm not sure I understand.

it works fine, except for setting up the SSL certificate manually via AWS Console in the NLB.

service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:omitted-xxx"

Thank you,


Edit:

🎉 I set up the NLB with a cert and it works absolutely fine! I didn't need to set it up manually via the AWS Console.

@aledbf I'm not sure why this PR was closed. I couldn't find anyone who managed to get a proper NLB setup (SSL termination enabled) with the correct X-Forwarded-Proto/Port headers (https/443).

@smoke

when use-proxy-protocol: "true" the proto of https is selected when port 443 is used

If you enable SSL termination on the NLB, with ssl enabled on port 443, you get the error "Plain HTTP request was sent to HTTPS port".

Can we reopen this PR?

@grifx I cannot speak for the PR, nor have I set up an NLB yet.
I just want to share which configuration should be achieved so that use-proxy-protocol steps in to fill the X-Forwarded-Proto headers properly: https://github.com/kubernetes/ingress-nginx/pull/5042/files#diff-d2d9d7b4e27b247474196438a3dfbdc3R127

The following will work (see the values sketch after these lists):

  • AWS NLB Port 80 -> Nginx Port 80
  • SSL -> AWS NLB Port 443 -> Nginx Port 80 (not 443!)

Keep in mind that the following will not be covered by use-proxy-protocol:

  • SSL -> AWS NLB Port 1443 -> Nginx Port 80 (not 443!)

Also, the following will not work unless you configure SSL on nginx and configure the NLB to use SSL again for that communication (no idea if that is possible):

  • SSL -> AWS NLB Port 443 -> Nginx Port 443
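
A hedged helm-values sketch of the working mapping above (chart keys as used elsewhere in this thread; the port remap is the essential part):

controller:
  config:
    use-proxy-protocol: "true"   # needed so nginx reads the PROXY v2 data
  service:
    targetPorts:
      http: http
      https: http   # NLB port 443 terminates TLS, so hand it to nginx's plain HTTP port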

I couldn't find anyone that managed to have a proper NLB (ssl termination enabled) setup with the correct X-Forwarded-Proto/Port headers (https/443).

NLB is an L4 load balancer. There is no way to signal X-Forwarded-Proto/Port headers. For that reason, when traffic arrives on port 443 (NLB) and is handled by ingress-nginx on port 80, it always thinks it is processing an HTTP request. That is why the installation steps for AWS LB TLS termination use an ELB, where it is possible to send such a header.

should be achieved so that the use-proxy-protocol steps

This doesn't work. Same issues with the HTTP headers.

@aledbf sorry, you are probably missing that it is actually PROXY protocol version 2 that is used here, and in this case the X-Forwarded headers are not added by the AWS NLB but by the Nginx Ingress, based on the data provided via PROXY protocol version 2.

So given that the NLB is an L4 load balancer that supports PROXY protocol version 2, and that nginx also supports PROXY protocol version 2, the service in the K8s cluster can receive proper X-Forwarded headers.

This happens in the following manner for the trivial setup:
_Browsers_ (TLS over TCP)
-> _Port 443 on the AWS NLB_ (non-TLS over TCP, with the binary header defined by PROXY protocol v2 carrying data including CLIENT_IP, the browser's IP, and PROXY_PORT, which is 443 in a standard setup)
-> _Port 80 on the Nginx Ingress_ (with use-proxy-protocol enabled, it reads the needed info from the PROXY protocol v2 header, converts it into X-Forwarded headers, and forces X-Forwarded-Proto = https when the PROXY_PORT in front of it is 443)
-> _Port XYZ on an internal K8s service_ (which now knows the browser's IP and other proxy details, if everything is configured correctly)

also forces proper X-Forwarded-Proto = https when the PROXY_PORT in front of it is 443

As per my previous comment, this is a security issue for any implementation. An NLB requires the service to be exposed to the public internet on the node, so unencrypted traffic may be served via direct node access. I also don't believe this should be closed until nginx support for PROXY protocol scheme determination is added. Any support for NLB should also be caveated with this security issue.

sorry, you are probably missing that it is actually PROXY protocol version 2 that is used here, and in this case the X-Forwarded headers are not added by the AWS NLB but by the Nginx Ingress, based on the data provided via PROXY protocol version 2.

I tried that. If you can provide a change to hack/generate-deploy-scripts.sh like I did in #5313 that fixes the issue, please open a PR.
To be clear, I tried to use an NLB in #5313 for the TLS termination, but in the end the headers were wrong.

As per my previous comment, this is a security issue for any implementation. An NLB requires the service to be exposed to the public internet on the node, so unencrypted traffic may be served via direct node access. I also don't believe this should be closed until nginx support for PROXY protocol scheme determination is added. Any support for NLB should also be caveated with this security issue.

Not sure what you mean. In the previous version of the documentation and now, we are clear that you need to adjust the proxy-real-ip-cidr setting to the VPC CIDR in use for the Kubernetes cluster:
https://kubernetes.github.io/ingress-nginx/deploy/#tls-termination-in-aws-load-balancer-elb
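
For illustration, a minimal values sketch of that adjustment (the CIDR is a placeholder for your cluster's VPC CIDR):

controller:
  config:
    use-proxy-protocol: "true"
    # Only trust PROXY/X-Forwarded data originating inside the VPC,
    # not from internet clients hitting the nodes directly.
    proxy-real-ip-cidr: "10.0.0.0/16"   # placeholder: your VPC CIDR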

I also don't believe this should be closed until nginx support for PROXY protocol scheme determination is added.

I am sorry, but for the TLS termination and redirect use case, the AWS documentation indicates we can use an ELB or ALB, where HTTP headers are supported.
That said, it is not possible to define a Service of type=LoadBalancer as an ALB, only an ELB or NLB. That is why the docs use an ELB for this scenario.

@aledbf I initially had the following setup:

NLB:80 -> NGINX:2443 which redirects to NLB:443
SSL_TERMINATION(NLB:443) -> NGINX:80

Would mocking the headers this way be fundamentally wrong or dangerous?

# Pseudo-config made concrete: $server_port is the port nginx accepted the
# connection on; more_set_input_headers requires the headers-more module
# and is valid in http, server, location, and location-if contexts.
if ($server_port = 80) {
  more_set_input_headers "X-Forwarded-Port: 443";
  more_set_input_headers "X-Forwarded-Proto: https";
  more_set_input_headers "X-Forwarded-For: $remote_addr";
}

If there are no security concerns with this approach, can we edit nginx.conf.tmpl to make it easier, or provide an example in the docs?

@macropin (sorry I initially pinged the wrong person)

unencrypted traffic may be served via direct node access

I'm not sure I understand the scope of this issue, since it would only be exploited by someone with bad intentions.

Thank you!

Can we edit nginx.conf.tmpl to make it easier or provide an example in the doc?

If this works for your use case, you can use the custom template approach.
That said, the template contains logic around those variables, so I am not sure it can be changed just with those more_set_headers directives.
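
For reference, a hedged sketch of that custom-template approach (key names as in the nginx-ingress helm chart of that era; verify against your chart version, and the ConfigMap name is hypothetical, holding your modified copy of nginx.tmpl):

controller:
  customTemplate:
    configMapName: nginx-custom-template   # hypothetical ConfigMap with your template
    configMapKey: nginx.tmpl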

Hi @dene14, curious whether you got the problem resolved? I'm having a difficult time with this as well. I want to terminate TLS on the NLB and just use HTTP for all traffic behind the NLB, but also force a redirect to HTTPS should users try to access the NLB on an HTTP port.

Also trying to figure this out ...

The following worked for me (terminate TLS on the NLB and force a redirect to HTTPS), using EKS v1.17:

NOTE: You need to manually enable PROXY protocol v2 on the NLB target groups, via the GUI or CLI:

aws elbv2 modify-target-group-attributes --target-group-arn arn:aws:elasticloadbalancing:us-east-1:xxx:targetgroup/xx --attributes Key=proxy_protocol_v2.enabled,Value=true

nginx-ingress-values.yaml

controller:
  publishService:
    enabled: true
  metrics:
    enabled: true
  service:
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:xxx:certificate/xxx"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    targetPorts:
      https: http
  config:
    use-proxy-protocol: "true"

Script to enable proxy protocol on target groups:

# Look up the NLB that Kubernetes created for the controller Service by its DNS name
hostname=$(kubectl get services ingress-nginx-controller --output jsonpath='{.status.loadBalancer.ingress[0].hostname}')
loadBalancerArn=$(aws elbv2 describe-load-balancers --query "LoadBalancers[?DNSName==\`$hostname\`].LoadBalancerArn" --output text)
# The NLB has one target group per listener (HTTP and HTTPS)
targetGroup1Arn=$(aws elbv2 describe-target-groups --load-balancer-arn "$loadBalancerArn" --query TargetGroups[0].TargetGroupArn --output text)
targetGroup2Arn=$(aws elbv2 describe-target-groups --load-balancer-arn "$loadBalancerArn" --query TargetGroups[1].TargetGroupArn --output text)
# Enable PROXY protocol v2 on both target groups
aws elbv2 --region us-east-1 modify-target-group-attributes --target-group-arn "$targetGroup1Arn" --attributes Key=proxy_protocol_v2.enabled,Value=true --output text
aws elbv2 --region us-east-1 modify-target-group-attributes --target-group-arn "$targetGroup2Arn" --attributes Key=proxy_protocol_v2.enabled,Value=true --output text