Ingress-nginx: Deploy on EKS Fargate

Created on 6 Jan 2020 · 14 comments · Source: kubernetes/ingress-nginx

I'm deploying ingress-nginx to an EKS cluster on Fargate (new) using the standard deployment YAML file, but there is an issue with the securityContext:

Warning FailedScheduling fargate-scheduler Pod not supported on Fargate: invalid SecurityContext fields: AllowPrivilegeEscalation

AllowPrivilegeEscalation doesn't seem to be allowed.
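For context, the relevant securityContext in the standard deployment manifest looks roughly like this (values approximate, taken from the ingress-nginx mandatory.yaml of that era; the exact runAsUser may differ by version). The allowPrivilegeEscalation: true line is what the Fargate scheduler rejects:

    securityContext:
      allowPrivilegeEscalation: true
      runAsUser: 101
      capabilities:
        drop:
        - ALL
        add:
        - NET_BIND_SERVICE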



All 14 comments

AllowPrivilegeEscalation doesn't seem to be allowed.

Did you check there is no PodSecurityPolicy forbidding this?

Hi,
I've noticed the same issue; here is my PSP:

kubectl describe psp eks.privileged
Name: eks.privileged

Settings:
Allow Privileged: true
Allow Privilege Escalation: 0xc0003fcb78
Default Add Capabilities:
Required Drop Capabilities:
Allowed Capabilities: *
Allowed Volume Types: *
Allow Host Network: true
Allow Host Ports: 0-65535
Allow Host PID: true
Allow Host IPC: true
Read Only Root Filesystem: false
SELinux Context Strategy: RunAsAny
User:
Role:
Type:
Level:
Run As User Strategy: RunAsAny
Ranges:
FSGroup Strategy: RunAsAny
Ranges:
Supplemental Groups Strategy: RunAsAny
Ranges:

My PSP spec:

uid: d7fc543e-288b-11ea-a11c-0a72aad1a7be
spec:
  allowPrivilegeEscalation: true
  allowedCapabilities:
  - '*'
  fsGroup:
    rule: RunAsAny
  hostIPC: true
  hostNetwork: true
  hostPID: true
  hostPorts:
  - max: 65535
    min: 0
  privileged: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'

---> allowPrivilegeEscalation: true, so on this point it looks fine.

It seems to be blocked at some other level; the pod execution policy, perhaps?

I think we are hitting an EKS on Fargate restriction here.

There are currently a few limitations to be aware of:

https://aws.amazon.com/blogs/aws/amazon-eks-on-aws-fargate-now-generally-available/

You cannot run Daemonsets, Privileged pods, or pods that use HostNetwork or HostPort.

Can we run the controller with the option AllowPrivilegeEscalation set to false?

Can somebody advise on this?

I have changed allowPrivilegeEscalation to false on versions 0.21.0 through 0.28.0 and I am getting a Permission denied error.

Error:

Error: exit status 1
2020/02/13 11:47:06 [notice] 73#73: ModSecurity-nginx v1.0.0
nginx: the configuration file /tmp/nginx-cfg598726495 syntax is ok
2020/02/13 11:47:06 [emerg] 73#73: bind() to 0.0.0.0:80 failed (13: Permission denied)
nginx: [emerg] bind() to 0.0.0.0:80 failed (13: Permission denied)
nginx: configuration file /tmp/nginx-cfg598726495 test failed

can anyone help me out here?

same issue....

I solved this issue by setting the following values in the Helm chart:

controller:
  extraArgs:
    http-port: 8080
    https-port: 8443

  containerPort:
    http: 8080
    https: 8443

  service:
    ports:
      http: 80
      https: 443
    targetPorts:
      http: 8080
      https: 8443

  image:
    allowPrivilegeEscalation: false
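If these values are saved to a file, applying them is the usual Helm flow (the chart name and repo here assume the stable chart layout of the time; adjust for the current ingress-nginx chart):

    helm install nginx-ingress stable/nginx-ingress -f values.yaml

With http-port/https-port moved above 1024 and allowPrivilegeEscalation: false, nginx no longer needs CAP_NET_BIND_SERVICE to bind its listeners, which is why the earlier bind() to 0.0.0.0:80 permission error goes away; the Service still exposes 80/443 externally via targetPorts.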

I ran into this today and was thinking I would have to deploy an EC2 node group in my cluster to test, instead of using Fargate.

"Warning FailedScheduling fargate-scheduler Pod not supported on Fargate: invalid SecurityContext fields: AllowPrivilegeEscalation"

Guys, the biggest problem is that the NGINX ingress controller uses a Classic Load Balancer, which is not supported on EKS Fargate.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Is the takeaway from this that ingress-nginx is not supported on EKS Fargate and should be avoided?

According to https://github.com/kubernetes/ingress-nginx/issues/4888#issuecomment-602082624 it seems to be workable. (And those would be nice changes to make the default in the chart, to remove the need to run privileged by default.)

You'd also have to use an NLB with it (or better yet, the newly supported NLB-IP mode), since a CLB isn't supported for Fargate, but nginx-ingress already works with an NLB.
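For anyone wiring this up: pointing the controller Service at an NLB instead of the default CLB is a one-line annotation with the in-tree cloud provider (NLB-IP mode, by contrast, requires the AWS Load Balancer Controller). A sketch in Helm values form:

    controller:
      service:
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-type: nlb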

To be clear, I haven't tried nginx-ingress on Fargate or with NLB-IP myself; it's part of my plan for my next AWS Kubernetes rollout. We do run nginx-ingress with an NLB today.

