Ingress-nginx: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io"

Created on 19 Apr 2020  ·  81 Comments  ·  Source: kubernetes/ingress-nginx

Hi all,

When I applied the ingress configuration file ingress-myapp.yaml with the command kubectl apply -f ingress-myapp.yaml, I got an error. The complete error is as follows:

Error from server (InternalError): error when creating "ingress-myapp.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: context deadline exceeded

This is my ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: default
  annotations: 
    kubernetes.io/ingress.class: "nginx"
spec:
  rules: 
  - host: myapp.magedu.com
    http:
      paths:
      - path: 
        backend: 
          serviceName: myapp
          servicePort: 80

Has anyone encountered this problem?

kind/support

Most helpful comment

@aduncmj I found this solution https://stackoverflow.com/questions/61365202/nginx-ingress-service-ingress-nginx-controller-admission-not-found

kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

All 81 comments

Hi,

I have.

The validatingwebhook service is not reachable in my private GKE cluster. I needed to open port 8443 from the master to the pods.
On top of that, I then received a certificate error on the endpoint "x509: certificate signed by unknown authority". To fix this, I needed to include the caBundle from the generated secret in the validatingwebhookconfiguration.

A quick fix, if you don't want to do the above and have the webhook fully operational, is to remove the validatingwebhookconfiguration or set the failurePolicy to Ignore.

I believe some fixes are needed in the deploy/static/provider/cloud/deploy.yaml as the webhooks will not always work out of the box.
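
For example, flipping the failurePolicy without deleting the webhook could look roughly like this (a sketch, assuming the configuration is named ingress-nginx-admission as in the static deploy manifests):

kubectl patch validatingwebhookconfiguration ingress-nginx-admission \
  --type=json \
  -p='[{"op": "replace", "path": "/webhooks/0/failurePolicy", "value": "Ignore"}]'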

A quick update on the above, the certificate error should be managed by the patch job that exists in the deployment so that part should be a non-issue.
Only the port 8443 needed to be opened from master to pods for me.

A quick update on the above, the certificate error should be managed by the patch job that exists in the deployment so that part should be a non-issue.
Only the port 8443 needed to be opened from master to pods for me.

Hi, I am a beginner at setting up k8s and ingress.
I am facing a similar issue, but more in a baremetal scenario. I would be very grateful if you could share more details on what you mean by "opening a port between master and pods".

Update:
Sorry, as I said, I am new to this. I checked and there is a service (ingress-nginx-controller-admission) exposed on port 443 in the ingress-nginx namespace. For some reason my ingress resource, created in the default namespace, is not able to communicate with it. Please suggest how I can resolve this.

The error is:
Error from server (InternalError): error when creating "test-nginx-ingress.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: context deadline exceeded

I'm also facing this issue, on a fresh cluster from AWS where I only did

helm install nginx-ing ingress-nginx/ingress-nginx --set rbac.create=true

And deployed a react service (which I can port-forward to and it works fine).

I then tried to apply both my own ingress and the example ingress

  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /

I'm getting this error:

Error from server (InternalError): error when creating "k8s/ingress/test.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://nginx-ing-ingress-nginx-controller-admission.default.svc:443/extensions/v1beta1/ingresses?timeout=30s: stream error: stream ID 7; INTERNAL_ERROR

I traced it down to this loc by looking at the logs in the controller:
https://github.com/kubernetes/ingress-nginx/blob/master/internal/ingress/controller/controller.go#L532

Logs:

I0427 11:52:35.894902       6 server.go:61] handling admission controller request /extensions/v1beta1/ingresses?timeout=30s
2020/04/27 11:52:35 http2: panic serving 172.31.16.27:39304: runtime error: invalid memory address or nil pointer dereference
goroutine 2514 [running]:
net/http.(*http2serverConn).runHandler.func1(0xc00000f2c0, 0xc0009a9f8e, 0xc000981980)
    /home/ubuntu/.gimme/versions/go1.14.2.linux.amd64/src/net/http/h2_bundle.go:5713 +0x16b
panic(0x1662d00, 0x27c34c0)
    /home/ubuntu/.gimme/versions/go1.14.2.linux.amd64/src/runtime/panic.go:969 +0x166
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).getBackendServers(0xc000119a40, 0xc00000f308, 0x1, 0x1, 0x187c833, 0x1b, 0x185e388, 0x0, 0x185e388, 0x0)
    /tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:532 +0x6d2
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).getConfiguration(0xc000119a40, 0xc00000f308, 0x1, 0x1, 0x1, 0xc00000f308, 0x0, 0x1, 0x0)
    /tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:402 +0x80
k8s.io/ingress-nginx/internal/ingress/controller.(*NGINXController).CheckIngress(0xc000119a40, 0xc000bfc300, 0x50a, 0x580)
    /tmp/go/src/k8s.io/ingress-nginx/internal/ingress/controller/controller.go:228 +0x2c9
k8s.io/ingress-nginx/internal/admission/controller.(*IngressAdmission).HandleAdmission(0xc0002d4fb0, 0xc000943080, 0x7f8ffce8b1b8, 0xc000942ff0)
    /tmp/go/src/k8s.io/ingress-nginx/internal/admission/controller/main.go:73 +0x924
k8s.io/ingress-nginx/internal/admission/controller.(*AdmissionControllerServer).ServeHTTP(0xc000219820, 0x1b05080, 0xc00000f2c0, 0xc000457d00)
    /tmp/go/src/k8s.io/ingress-nginx/internal/admission/controller/server.go:70 +0x229
net/http.serverHandler.ServeHTTP(0xc000119ce0, 0x1b05080, 0xc00000f2c0, 0xc000457d00)
    /home/ubuntu/.gimme/versions/go1.14.2.linux.amd64/src/net/http/server.go:2807 +0xa3
net/http.initALPNRequest.ServeHTTP(0x1b07440, 0xc00067f170, 0xc0002dc700, 0xc000119ce0, 0x1b05080, 0xc00000f2c0, 0xc000457d00)
    /home/ubuntu/.gimme/versions/go1.14.2.linux.amd64/src/net/http/server.go:3381 +0x8d
net/http.(*http2serverConn).runHandler(0xc000981980, 0xc00000f2c0, 0xc000457d00, 0xc000a81480)
    /home/ubuntu/.gimme/versions/go1.14.2.linux.amd64/src/net/http/h2_bundle.go:5720 +0x8b
created by net/http.(*http2serverConn).processHeaders
    /home/ubuntu/.gimme/versions/go1.14.2.linux.amd64/src/net/http/h2_bundle.go:5454 +0x4e1

Any ideas? Seems strange to get this on a newly setup cluster where I followed the instructions correctly.

I might have solved it...

I followed this guide for the helm installation: https://kubernetes.github.io/ingress-nginx/deploy/

But when I followed this guide instead: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-helm/

The error doesn't occur.

If you have this issue try it out by deleting your current helm installation.

Get the name:

helm list

Delete and apply stable release:

helm delete <release-name>
helm repo add nginx-stable https://helm.nginx.com/stable
helm install nginx-ing nginx-stable/nginx-ingress

@johan-lejdung not really, that is a different ingress controller.

@aledbf I use 0.31.1 and still have the same problem

bash-5.0$ /nginx-ingress-controller --version
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       0.31.1
  Build:         git-b68839118
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.17.10

-------------------------------------------------------------------------------

Error: UPGRADE FAILED: failed to create resource: Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: context deadline exceeded

@aledbf Same error. Bare-metal installation.

NGINX Ingress controller
  Release:       0.31.1
  Build:         git-b68839118
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.17.10

-------------------------------------------------------------------------------

Error from server (InternalError): error when creating "./**ommitted**.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: context deadline exceeded

I added a note about the webhook port in https://kubernetes.github.io/ingress-nginx/deploy/ and the links for the additional steps in GKE
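
For reference, on a private GKE cluster the extra step boils down to a firewall rule along these lines (a sketch; the rule name is arbitrary, and the placeholders must be replaced with your cluster's master CIDR and node network tag):

gcloud compute firewall-rules create allow-master-to-webhook \
  --direction=INGRESS \
  --action=ALLOW \
  --rules=tcp:8443 \
  --source-ranges=<MASTER_IPV4_CIDR> \
  --target-tags=<NODE_NETWORK_TAG>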

I still have the problem.

Update:

I disabled the webhook and the error went away.

Fix workaround:

helm install my-release ingress-nginx/ingress-nginx \
  --set controller.service.type=NodePort \
  --set controller.admissionWebhooks.enabled=false

Caution: this may not properly resolve the underlying issue.

Current status:

  • using helm 3:

    helm install my-release ingress-nginx/ingress-nginx \
      --set controller.service.type=NodePort

    Output of kubectl get svc,pods:

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/a-service ClusterIP 10.105.159.98 80/TCP 28h
service/b-service ClusterIP 10.106.17.65 80/TCP 28h
service/kubernetes ClusterIP 10.96.0.1 443/TCP 3d4h
service/my-release-ingress-nginx-controller NodePort 10.97.224.8 80:30684/TCP,443:32294/TCP 111m
service/my-release-ingress-nginx-controller-admission ClusterIP 10.101.44.242 443/TCP 111m

NAME READY STATUS RESTARTS AGE
pod/a-deployment-84dcd8bbcc-tgp6d 1/1 Running 0 28h
pod/b-deployment-f649cd86d-7ss9f 1/1 Running 0 28h
pod/configmap-pod 1/1 Running 0 54m
pod/configmap-pod-1 1/1 Running 0 3h33m
pod/my-release-ingress-nginx-controller-7859896977-bfrxp 1/1 Running 0 111m
pod/redis 1/1 Running 1 6h11m
pod/test 1/1 Running 1 5h9m

my ingress.yaml

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: foo
spec:
  rules:
    - host: b.abbetwang.top
      http:
        paths:
          - path: /b
            backend:
              serviceName: b-service
              servicePort: 80
          - path: /a
            backend:
              serviceName: a-service
              servicePort: 80
  tls:
    - hosts:
        - b.abbetwang.top

What I did:

When I run kubectl apply -f new-ingress.yaml,
I get "Failed calling webhook, failing closed validate.nginx.ingress.kubernetes.io".

My apiserver log is below:

I0504 06:22:13.286582 1 trace.go:116] Trace[1725513257]: "Create" url:/apis/networking.k8s.io/v1beta1/namespaces/default/ingresses,user-agent:kubectl/v1.18.2 (linux/amd64) kubernetes/52c56ce,client:192.168.0.133 (started: 2020-05-04 06:21:43.285686113 +0000 UTC m=+59612.475819043) (total time: 30.000880829s):
Trace[1725513257]: [30.000880829s] [30.000785964s] END
W0504 09:21:19.861015 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
W0504 09:31:49.897548 1 watcher.go:199] watch chan error: etcdserver: mvcc: required revision has been compacted
I0504 09:36:17.637753 1 trace.go:116] Trace[615862040]: "Call validating webhook" configuration:my-release-ingress-nginx-admission,webhook:validate.nginx.ingress.kubernetes.io,resource:networking.k8s.io/v1beta1, Resource=ingresses,subresource:,operation:CREATE,UID:41f47c75-9ce1-49c0-a898-4022dbc0d7a1 (started: 2020-05-04 09:35:47.637591858 +0000 UTC m=+71256.827724854) (total time: 30.000128816s):
Trace[615862040]: [30.000128816s] [30.000128816s] END
W0504 09:36:17.637774 1 dispatcher.go:133] Failed calling webhook, failing closed validate.nginx.ingress.kubernetes.io: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://my-release-ingress-nginx-controller-admission.default.svc:443/extensions/v1beta1/ingresses?timeout=30s: context deadline exceeded

Why was this issue closed? What is the solution?

@eltonbfw update to 0.32.0 and make sure the API server can reach the POD running the ingress controller

@eltonbfw update to 0.32.0 and make sure the API server can reach the POD running the ingress controller

I have the same problem, and I use 0.32.0.
What's the solution?
Please, thanks!

For the specific issue, my problem did turn out to be an issue with internal communication. @aledbf added notes to the documentation to verify connectivity. I had internal communication issues caused by CentOS 8's move to nftables. In my case, I needed additional "rich" allow rules in firewalld (sketched after the list) for:

  • Docker network source (172.17.0.0/16)
  • CNI CIDR source
  • Cluster CIDR source
  • Host IP source
  • Masquerading
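
For illustration, the firewalld side of that looked roughly like this (a sketch; 172.17.0.0/16 is the Docker source from the list above, and 10.244.0.0/16 is just a placeholder for your CNI/cluster CIDRs):

# allow traffic from the Docker bridge network
firewall-cmd --permanent --zone=public --add-rich-rule='rule family=ipv4 source address=172.17.0.0/16 accept'
# repeat for the CNI, cluster and host IP sources (placeholder range shown)
firewall-cmd --permanent --zone=public --add-rich-rule='rule family=ipv4 source address=10.244.0.0/16 accept'
# enable masquerading and reload
firewall-cmd --permanent --zone=public --add-masquerade
firewall-cmd --reload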

I have the same issue, baremetal install with CentOS 7 worker nodes.

I have the same issue with 0.32.0 on an HA baremetal cluster, with strange behaviour.
I have two ingresses, A and B:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: service-alpha
  namespace: staging
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - host: alpha.example.org
      http:
        paths:
          - path: /
            backend:
              serviceName: service-alpha
              servicePort: 1080
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: service-beta
  namespace: staging
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: beta.example.org
      http:
        paths:
          - path: /user/(.*)
            backend:
              serviceName: service-users
              servicePort: 1080
          - path: /data/(.*)
            backend:
              serviceName: service-data
              servicePort: 1080



# kubectl apply -f manifests/ingress-beta.yml 
Error from server (InternalError): error when creating "manifests/ingress-beta.yml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)



I0530 08:05:56.884549       1 trace.go:116] Trace[898207247]: "Call validating webhook" configuration:ingress-nginx-admission,webhook:validate.nginx.ingress.kubernetes.io,resource:networking.k8s.io/v1beta1, Resource=ingresses,subresource:,operation:CREATE,UID:fdce95ab-e2a9-40f5-9ab3-73a85b603db6 (started: 2020-05-30 08:05:26.883895783 +0000 UTC m=+5434.178340436) (total time: 30.000569226s):
Trace[898207247]: [30.000569226s] [30.000569226s] END
W0530 08:05:56.884664       1 dispatcher.go:133] Failed calling webhook, failing closed validate.nginx.ingress.kubernetes.io: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: context deadline exceeded (Client.Timeout exceeded while awaiting headers)
I0530 08:05:56.885303       1 trace.go:116] Trace[868353513]: "Create" url:/apis/networking.k8s.io/v1beta1/namespaces/staging/ingresses,user-agent:kubectl/v1.18.3 (linux/amd64) kubernetes/2e7996e,client:127.0.0.1 (started: 2020-05-30 08:05:26.882592405 +0000 UTC m=+5434.177037017) (total time: 30.002669278s):
Trace[868353513]: [30.002669278s] [30.002248351s] END

The main question is why the first ingress is created most of the time while the second always fails to create.

Upd. Also this comment on SO might be useful in investigating causes of problems.

Upd 2. When the rewrite annotation is removed, the manifest is applied without errors.

Upd 3. It fails with the combination of multiple paths and the rewrite annotation.

@aledbf Looks like a bug.

We have this issue on a baremetal k3s cluster. Our HTTP proxy logged this traffic:

gost[515]: 2020/06/09 15:15:37 http.go:151: [http] 192.168.210.21:47396 -> http://:8080 -> ingress-nginx-controller-admission.ingress-nginx.svc:443
gost[515]: 2020/06/09 15:15:37 http.go:241: [route] 192.168.210.21:47396 -> http://:8080 -> ingress-nginx-controller-admission.ingress-nginx.svc:443
gost[515]: 2020/06/09 15:15:37 http.go:262: [http] 192.168.210.21:47396 -> 192.168.210.1:8080 : dial tcp: lookup ingress-nginx-controller-admission.ingress-nginx.svc on 192.168.210.1:53: no such host

@eltonbfw update to 0.32.0 and make sure the API server can reach the POD running the ingress controller

I have the same problem, and I use 0.32.0.
What's the solution?
Please, thanks!

me too

If you are using the baremetal install from Kelsey Hightower, my suggestion is to install kubelet on your master nodes, start calico/flannel or whatever you use for CNI, and label/taint your nodes as masters so no other pods are started there (a sketch of that step is below); then your control plane will be able to communicate with your nginx deployment and the issue should be fixed. At least this is how it worked for me.
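
A rough sketch of the labeling/tainting step (<master-node> is a placeholder for your node name):

kubectl label nodes <master-node> node-role.kubernetes.io/master=""
kubectl taint nodes <master-node> node-role.kubernetes.io/master=:NoSchedule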

@aledbf This issue still occurs

@andrei-matei Kelsey's cluster works perfectly even without additional CNI plugins and kubelet SystemD services installed on master nodes. All you need is to add a route to Services' CIDR 10.32.0.0/24 using worker node IPs as "next-hop" on master nodes only.
In this way I've got ingress-nginx (deployed from the "bare-metal" manifest) and cert-manager webhooks working, but unfortunately not together :( still don't know why...
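
Concretely, the route on each master host looked roughly like this (a sketch; <worker-node-ip> is a placeholder):

ip route add 10.32.0.0/24 via <worker-node-ip>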

Updated: got both of them working

@aduncmj I found this solution https://stackoverflow.com/questions/61365202/nginx-ingress-service-ingress-nginx-controller-admission-not-found

kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

@aduncmj I did the same, thank you for sharing the findings. I'm curious if this can be handled without manual intervention.

@opensourceonly This worked for me, you can try it: you should add a pathType to your Ingress configuration. https://github.com/kubernetes/ingress-nginx/pull/5445
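
For illustration, the earlier example ingress from this thread with pathType added would look roughly like this (a sketch; exampleService is the hypothetical backend from that example):

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: example
  namespace: foo
spec:
  rules:
    - host: www.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: exampleService
              servicePort: 80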

Hi,

I have.

The validatingwebhook service is not reachable in my private GKE cluster. I needed to open port 8443 from the master to the pods.
On top of that, I then received a certificate error on the endpoint "x509: certificate signed by unknown authority". To fix this, I needed to include the caBundle from the generated secret in the validatingwebhookconfiguration.

A quick fix, if you don't want to do the above and have the webhook fully operational, is to remove the validatingwebhookconfiguration or set the failurePolicy to Ignore.

I believe some fixes are needed in the deploy/static/provider/cloud/deploy.yaml as the webhooks will not always work out of the box.


@moljor
I have the same question about:

On top of that, I then received a certificate error on the endpoint "x509: certificate signed by unknown authority". To fix this, I needed to include the caBundle from the generated secret in the validatingwebhookconfiguration.

How did you make the setup?

I used the openssl tool to make the SSL files and then created a secret, but I do not know how to make the validatingwebhookconfiguration good.

Please help me.

@liminghua999 If you check the deploy yaml, the patch job should "make the validatingwebhookconfiguration good". It exists to update it with the secret.

@liminghua999 If you check the deploy yaml, the patch job should "make the validatingwebhookconfiguration good". It exists to update it with the secret.

@moljor

Hi moljor, thanks a lot for your answer.

I got the deploy.yaml file from
https://github.com/kubernetes/ingress-nginx/mirrors/ingress-nginx/raw/master/deploy/static/provider/baremetal/deploy.yaml
The ValidatingWebhookConfiguration.webhooks.clientConfig does not set caBundle. How do I configure it myself?

kind: ValidatingWebhookConfiguration
metadata:
  labels:
    helm.sh/chart: ingress-nginx-2.11.1
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/version: 0.34.1
    app.kubernetes.io/managed-by: Helm
    app.kubernetes.io/component: admission-webhook
  name: ingress-nginx-admission
webhooks:
  - name: validate.nginx.ingress.kubernetes.io
    rules:
      - apiGroups:
          - extensions
          - networking.k8s.io
        apiVersions:
          - v1beta1
        operations:
          - CREATE
          - UPDATE
        resources:
          - ingresses
    failurePolicy: Fail
    sideEffects: None
    admissionReviewVersions:
      - v1
      - v1beta1
    clientConfig:
      service:
        namespace: ingress-nginx
        name: ingress-nginx-controller-admission
        path: /extensions/v1beta1/ingresses

@liminghua999
check https://github.com/kubernetes/ingress-nginx/blob/master/deploy/static/provider/baremetal/deploy.yaml and the last two batch jobs. They create and update everything (no need to create a secret yourself).

Otherwise reading a bit of the k8s documentation might be helpful if you want to do things yourself: https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/#prerequisites
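
A quick way to verify that those jobs did their work (a sketch, assuming the default names from that manifest):

# both jobs should show COMPLETIONS 1/1
kubectl -n ingress-nginx get jobs
# a non-empty caBundle means the patch job injected the certificate
kubectl get validatingwebhookconfiguration ingress-nginx-admission \
  -o jsonpath='{.webhooks[0].clientConfig.caBundle}' | head -c 40; echo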

So, what's the solution?

@aledbf can you please reopen this issue? A huge number of people are having the same problem, so this issue definitely isn't resolved. The instructed solution isn't clear in either the documentation or the issue comments.

I'm seeing the most common reply here is "turn off webhook validation", but turning off validation doesn't mean the error has gone away, just that it's no longer being reported.

So, what's the solution?

I had a similar problem (but with "connection refused" rather than "context deadline exceeded" as the reason).

The solution of @lbs-rodrigo, deleting the ValidatingWebhookConfiguration so that it can be recreated according to the config, with kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission, fixed my problem.
If your configuration is correct, then give it a try.

Hello, I used version 0.30 to solve this problem, hah.


As mentioned by @cnlong, I also updated mine to version v0.34.1 and did not need to remove the ValidatingWebhook, but I had to change the number of replicas in the ingress deployment so it runs on all my nodes.

I've tried to upgrade from the deprecated helm chart stable/nginx-ingress to ingress-nginx/ingress-nginx (app version 0.35.0) and my ingress deployment crashes with:

Error from server (InternalError): error when creating "STDIN": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post https://nginx-ingress-ingress-nginx-controller-admission.default.svc:443/extensions/v1beta1/ingresses?timeout=30s: dial tcp 10.100.146.146:443: connect: connection refused

I used the minimal configuration shown in the documentation, but the Ingress resource gives the same error:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend
                port:
                  number: 80

Error from server: error when creating "disabled/my-ingress-prod-v3.yaml": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: rejecting admission review because the request does not contains an Ingress resource but networking.k8s.io/v1, Resource=ingresses with name minimal-ingress in namespace my-pro

This is still an issue - using version: v0.35.0.

kubectl apply -f ingress-single.yaml --kubeconfig=/home/mansaka/softwares/k8sClusteryaml/kubectl.yaml
worked for me

Solution:
delete your ValidatingWebhookConfiguration

kubectl get -A ValidatingWebhookConfiguration
NAME
nginx-ingress-ingress-nginx-admission

kubectl delete -A ValidatingWebhookConfiguration nginx-ingress-ingress-nginx-admission

The solution from vosuyak worked for me, using kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission when currently using the namespace where I'm applying the ingress rules.

See https://stackoverflow.com/a/62044090/1549918

Update on 2020-10-07

In my scenario, the problem is caused by custom CNI plugin weave-net, which makes the API server not able to reach the overlay network. The solution is either using the EKS default CNI plugin, or adding hostNetwork: true to the ingress-nginx-controller-admission Deployment spec. But the latter has some other issues that one needs to care about.
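
If you deploy with the Helm chart, that switch can also be set through chart values; a sketch (release and namespace names are placeholders):

helm upgrade <release-name> ingress-nginx/ingress-nginx \
  --namespace <namespace> \
  --set controller.hostNetwork=true \
  --set controller.dnsPolicy=ClusterFirstWithHostNet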

----------------Original comment----------------

Removing the ValidatingWebhookConfiguration only disables the validation. Your ingress may get persisted, but once your ingress has some configuration error, your nginx ingress controller will be doomed.

I don't think the PathType fix 5445 has anything to do with this error. It says:

Post https://ingress-nginx-controller-admission.ingress-nginx.svc:443/extensions/v1beta1/ingresses?timeout=30s: context deadline exceeded

which IMHO indicates that the ingress admission service cannot be reached from the control plane (8443 is the default port exposed by the pod, and 443 is the port exposed by the service in front of the pod/deployment).

I'm encountering this error in AWS EKS, K8S version 1.17. It occurred to me this might have something to do with security group settings. I tried every possible way to make sure the control plane can reach the worker node on any port, but the problem still cannot be resolved. 😞
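
For what it's worth, a rough in-cluster check that at least rules out Service/Endpoint problems (a sketch, assuming the default ingress-nginx namespace and service names; it does not prove the API server itself can reach the webhook):

kubectl -n ingress-nginx get svc,endpoints ingress-nginx-controller-admission
# any HTTPS response at all (even an error) means the service is reachable in-cluster
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  -k https://ingress-nginx-controller-admission.ingress-nginx.svc:443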

I agree, deleting the ValidatingWebhookConfiguration is not a good solution, because it is very unsafe. But I don't know how to solve this problem.
I used the same steps and there is no problem in Kubernetes v1.17.5, but in Kubernetes v1.19.x there is an error:
Error from server (InternalError): error when creating "/root/ingress-v1.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-nginx-controller-admission.ingress-nginx.svc:443/networking/v1beta1/ingresses?timeout=10s": x509: certificate is valid for kubernetes, kubernetes.default, kubernetes.default.svc, kubernetes.default.svc.cluster, kubernetes.default.svc.cluster.local, not ingress-nginx-controller-admission.ingress-nginx.svc

Chart Version:

cat Chart.yaml

apiVersion: v1
appVersion: 0.35.0
description: Ingress controller for Kubernetes using NGINX as a reverse proxy and
  load balancer
home: https://github.com/kubernetes/ingress-nginx
icon: https://upload.wikimedia.org/wikipedia/commons/thumb/c/c5/Nginx_logo.svg/500px-Nginx_logo.svg.png
keywords:
  - ingress
  - nginx
kubeVersion: '>=1.16.0-0'
maintainers:
  - name: ChiefAlexander
name: ingress-nginx
sources:
  - https://github.com/kubernetes/ingress-nginx
version: 3.3.0

Is there any other solution besides deleting ValidatingWebhookConfiguration?

Same for me.
context deadline exceeded

I get the same error when I have two ingress controllers (nginx and aws alb) deployed in the EKS cluster.
When my helm installation tries to create an ingress with class=alb, this webhook is called and results in an error.
Is there a way to limit this webhook to just nginx ingresses?

Would you please reopen this issue @aledbf ?

This error means that the Kubernetes API Server can't connect to the admission webhook (a workload running inside the Kubernetes cluster).

Solution for GKE is actually perfectly documented: https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#console_9. Just create a firewall rule to allow API Server -> workload traffic.

For other Kubernetes deployments try to login to the API Server host and connect to the provided URL yourself. If it doesn't work, figure out routing, firewalls and name resolution.
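
For example, something along these lines from the API server host (a sketch; the .svc name usually doesn't resolve there, so this targets the controller pod IP and the 8443 webhook port directly):

POD_IP=$(kubectl -n ingress-nginx get pod \
  -l app.kubernetes.io/component=controller \
  -o jsonpath='{.items[0].status.podIP}')
# any HTTPS response at all means routing and firewalling are fine
curl -vk "https://${POD_IP}:8443/"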

@amlinux
Using GKE and adding the rule did not help, unfortunately.
I have the following firewall rules and it still does not work:

Name | Type | Targets | Filters | Protocols/ports | Action | Priority | Logs
gke-allow-http-s-vo-dev3 | Ingress | Apply to all | IP ranges: 0.0.0.0/0 | tcp:80,443 | Allow | 1000 | Off
gke-allow-master-vo-dev3 | Ingress | Apply to all | IP ranges: 172.16.10.0/28 | tcp:443,10250,80 | Allow | 1000 | Off
gke-vo-dev3-d5b3cd68-all | Ingress | gke-vo-dev3-d5b3cd68-node | IP ranges: 10.0.0.0/14 | tcp; udp; esp; ah | Allow | 1000 | Off
gke-vo-dev3-d5b3cd68-master | Ingress | gke-vo-dev3-d5b3cd68-node | IP ranges: 172.16.10.0/28 | tcp:10250,443 | Allow | 1000 | Off
gke-vo-dev3-d5b3cd68-vms | Ingress | gke-vo-dev3-d5b3cd68-node | IP ranges: 172.16.0.0/28 | tcp:1-65535; udp:1-65535; icmp | Allow | 1000 | Off
k8s-a13d627c779ffa7b-node-http-hc | Ingress | gke-vo-dev3-d5b3cd68-node | IP ranges: 130.211.0.0/22, | tcp:10256 | Allow | 1000 | Off
k8s-fw-a52358d4ebd364640b91ca2f4dd1b190 | Ingress | gke-vo-dev3-d5b3cd68-node | IP ranges: 0.0.0.0/0 | tcp:80,443 | Allow | 1000 | Off

(Hit count and Last hit are empty for all rules.)

What are your master network range and GKE node label? Which of the rules is supposed to allow master traffic to the nodes?

Master network range is: 172.16.10.0/28
By GKE node label you mean k8s labels?
Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/instance-type=n1-standard-2 beta.kubernetes.io/os=linux cloud.google.com/gke-nodepool=node-pool-dev3 cloud.google.com/gke-os-distribution=cos cloud.google.com/gke-preemptible=true failure-domain.beta.kubernetes.io/region=europe-north1 failure-domain.beta.kubernetes.io/zone=europe-north1-a kubernetes.io/arch=amd64 kubernetes.io/hostname=gke-vo-dev3-node-pool-dev3-46fcaf79-twx7 kubernetes.io/os=linux
And I think this rule allows the master traffic to the nodes: gke-allow-master-vo-dev3

Unfortunately formatting has been lost in copy-paste, and now it's very hard to say what your rules do.

gke-allow-master-vo-dev3 doesn't seem to be the right one, as it only allows ports 443 and 10250 (https and the standard kubelet port). What you need to open is traffic from the master to the port that the admission webhook is listening on.

To make it simple, open all ports from master range to all nodes, and maybe also to the secondary range of the cluster (ips allocated to pods), make sure that everything works, and then step back and tighten the rules.

When I opened all TCP ports for the rule as below, it works:

gke-allow-master-vo-dev3
  Logs: Off
  Network: vpc-network-dev3
  Priority: 1000
  Direction: Ingress
  Action on match: Allow
  Source filters: IP ranges 172.16.10.0/28
  Protocols and ports: tcp
  Enforcement: Enabled
  Insights: None

How do I tighten them? How do I check which ports it listens on? It seems to me it is 443.
k describe ValidatingWebhookConfiguration ingress-nginx-admission
Name:         ingress-nginx-admission
Namespace:
Labels:       app.kubernetes.io/component=admission-webhook
              app.kubernetes.io/instance=ingress-nginx
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=ingress-nginx
              app.kubernetes.io/version=0.40.2
              helm.sh/chart=ingress-nginx-3.7.1
Annotations:
API Version:  admissionregistration.k8s.io/v1
Kind:         ValidatingWebhookConfiguration
Metadata:
  Creation Timestamp:  2020-10-29T14:08:55Z
  Generation:          2
  Resource Version:    87618
  Self Link:           /apis/admissionregistration.k8s.io/v1/validatingwebhookconfigurations/ingress-nginx-admission
  UID:                 3b44b27e-7c8b-41a1-bf04-ff0ec8c35a36
Webhooks:
  Admission Review Versions:
    v1
    v1beta1
  Client Config:
    Ca Bundle:  XXX
    Service:
      Name:       ingress-nginx-controller-admission
      Namespace:  default
      Path:       /networking/v1beta1/ingresses
      Port:       443
  Failure Policy:  Fail
  Match Policy:    Equivalent
  Name:            validate.nginx.ingress.kubernetes.io
  Namespace Selector:
  Object Selector:
  Rules:
    API Groups:
      networking.k8s.io
    API Versions:
      v1beta1
      v1
    Operations:
      CREATE
      UPDATE
    Resources:
      ingresses
    Scope:          *
  Side Effects:     None
  Timeout Seconds:  10

However, when I go back to opening only 80, 443 and 10250 on that rule, it does not work.

OK. It turned out it needs port 8443.
Thank you.

Solution:
delete your ValidatingWebhookConfiguration

kubectl get -A ValidatingWebhookConfiguration
NAME
nginx-ingress-ingress-nginx-admission

kubectl delete -A ValidatingWebhookConfiguration nginx-ingress-ingress-nginx-admission

This also worked for me 👍🏻

Solution:
delete your ValidatingWebhookConfiguration

kubectl get -A ValidatingWebhookConfiguration
NAME
nginx-ingress-ingress-nginx-admission

kubectl delete -A ValidatingWebhookConfiguration nginx-ingress-ingress-nginx-admission

That also works for me

Solution:
delete your ValidatingWebhookConfiguration
kubectl get -A ValidatingWebhookConfiguration
NAME
nginx-ingress-ingress-nginx-admission
kubectl delete -A ValidatingWebhookConfiguration nginx-ingress-ingress-nginx-admission

That also works for me

That's not a solution, that's destroying the functionality of the software that caused the root issue :)

As mentioned previously, the solution is to allow the admission webhook port 8443 from the master to the worker nodes. On private GKE clusters the firewall rule should be gke-<cluster_name>-<id>-master with target tags gke-<cluster_name>-<id>-node, source range your master CIDR block, and TCP ports 10250 and 443 by default.
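
For example (a sketch; substitute your actual rule name, and note that --allow replaces the whole allowed list, so keep the existing ports):

gcloud compute firewall-rules update gke-<cluster_name>-<id>-master \
  --allow tcp:10250,tcp:443,tcp:8443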

Is this the officially supported Kubernetes ingress controller, a CNCF-owned IBM/RedHat/Google/Microsoft-funded project, or is this unmaintained? This issue breaks the "Hello World" ingress tutorial on the Kubernetes website, and the maintainers have closed this issue and refused to reopen it.
While I understand that in open source, nobody owes me anything when they're volunteering their time, in this case they're not _volunteering_ their time. This is a well-funded project with absent maintainers. It's quite unprofessional to have a tutorial on the website fail, and then close the issue addressing the problem.

Is this the officially supported Kubernetes ingress controller,

Yes

a CNCF-owned IBM/RedHat/Google/Microsoft-funded project,

No

or is this unmaintained?

No

This issue breaks the "Hello World" ingress tutorial on the Kubernetes website, and the maintainers have closed this issue and refused to reopen it.

This is not true. If you check the thread, the issue is not related to ingress-nginx itself, but a networking issue; the master node cannot connect to the worker node/s, as the previous comment mentions.

While I understand that in open source, nobody owes me anything when they're volunteering their time, in this case, they're not volunteering their time.

Yes, I am volunteering my time since I created ingress-nginx

This is a well-funded project with absent maintainers.

This is not true. I've been unable to find sponsors for my time on the project.

It's quite unprofessional to have a tutorial on the website fail and then close the issue addressing the problem.

Not sure exactly what you are doing. From https://kind.sigs.k8s.io/docs/user/ingress/

cat <<EOF | kind create cluster --config=-
> kind: Cluster
> apiVersion: kind.x-k8s.io/v1alpha4
> nodes:
> - role: control-plane
>   kubeadmConfigPatches:
>   - |
>     kind: InitConfiguration
>     nodeRegistration:
>       kubeletExtraArgs:
>         node-labels: "ingress-ready=true"
>   extraPortMappings:
>   - containerPort: 80
>     hostPort: 80
>     protocol: TCP
>   - containerPort: 443
>     hostPort: 443
>     protocol: TCP
> EOF
Creating cluster "kind" ...
 ✓ Ensuring node image (kindest/node:v1.19.1) 🖼
 ✓ Preparing nodes 📦  
 ✓ Writing configuration 📜 
 ✓ Starting control-plane 🕹️ 
 ✓ Installing CNI 🔌 
 ✓ Installing StorageClass 💾 
Set kubectl context to "kind-kind"
You can now use your cluster with:

kubectl cluster-info --context kind-kind

Thanks for using kind! 😊
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/kind/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
kubectl wait --namespace ingress-nginx \
>   --for=condition=ready pod \
>   --selector=app.kubernetes.io/component=controller \
>   --timeout=90s
kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/usage.yaml
pod/ingress-nginx-controller-6df69bd4f7-fv7lr condition met
kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/usage.yaml
pod/foo-app created
service/foo-service created
pod/bar-app created
service/bar-service created
ingress.networking.k8s.io/example-ingress created
# should output "foo"
curl localhost/foo
foo
# should output "bar"
curl localhost/bar
bar

@RobbieMcKinstry not sure how you arrived at all those assumptions about the project. Can you share the source for that?

@aledbf Multiple people in this thread reported having the same problem outside of the "networking issue" described above, myself included. Additionally, we've made clear that disabling the validating webhook is not a solution.

We've asked that you reopen the issue because those problems are not addressed by the proposed solution. The root of my "unprofessionalism" claim is that you've been explicitly asked to reopen this issue in August, and haven't replied in over three months or made any effort to resolve the users' problems. The first step to ameliorate this is to reopen the issue.

I empathize with the difficulty of finding a maintainer and running an OSS project. OSS work is hard to keep up with and there are few volunteers. However, there's no reason that an extremely well funded project like Kubernetes should have an official ingress controller with an uncertain maintainership status.

If the load is too much for one person to bear (and no one else is willing to step forward), perhaps the right move for the user is to downgrade this ingress controller to unofficial status. It's a really unfortunate user experience to go through the official ingress tutorial on a fresh cluster, hit a bug, and wait three months for a response with a ton of other people having the same problem. By that point, I suspect many users have abandoned this controller in favor of another anyway.

The first step to ameliorate this is to reopen the issue.

done.

However, there's no reason that an extremely well funded project like Kubernetes should have an official ingress controller with an uncertain maintainership status.

Again, not sure why you have such an assumption

If the load is too much for one person to bear (and no one else is willing to step forward), perhaps the right move for the user is to downgrade this ingress controller to unofficial status.

Maybe that is the way. How do you propose to do that?

Multiple people in this thread reported having the same problem outside of the "networking issue" described above, myself included. Additionally, we've made clear that disabling the validating webhook is not a solution.

There is not a single comment in this thread, like the one I posted, showing this is not an ingress-nginx problem, or how to reproduce this step by step (including the cluster creation).

Edit: the use of kind as the provisioner is intentional, to use documentation written by a different project. And yes, it is a single-node deployment, to show this is a firewall/networking problem.

Reading my own comments, it sounds like I have no interest in this issue.
I could edit my comments, or I can show that I cannot reproduce it:

export PROJECT_ID=XXXXXXX
export ZONE=us-west1-a
gcloud config set compute/zone $ZONE
gcloud beta container clusters create "${PROJECT_ID}" \
  --machine-type=n1-standard-1 \
  --zone=us-west1-a \
  --preemptible \
  --num-nodes=3 \
  --no-enable-basic-auth

From https://kubernetes.github.io/ingress-nginx/deploy/#gce-gke

kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole cluster-admin \
  --user $(gcloud config get-value account)

Create the firewall rules (if required) https://cloud.google.com/kubernetes-engine/docs/how-to/private-clusters#add_firewall_rules

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.41.2/deploy/static/provider/cloud/deploy.yaml
namespace/ingress-nginx created
serviceaccount/ingress-nginx created
configmap/ingress-nginx-controller created
clusterrole.rbac.authorization.k8s.io/ingress-nginx created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx created
role.rbac.authorization.k8s.io/ingress-nginx created
rolebinding.rbac.authorization.k8s.io/ingress-nginx created
service/ingress-nginx-controller-admission created
service/ingress-nginx-controller created
deployment.apps/ingress-nginx-controller created
validatingwebhookconfiguration.admissionregistration.k8s.io/ingress-nginx-admission created
serviceaccount/ingress-nginx-admission created
clusterrole.rbac.authorization.k8s.io/ingress-nginx-admission created
clusterrolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
role.rbac.authorization.k8s.io/ingress-nginx-admission created
rolebinding.rbac.authorization.k8s.io/ingress-nginx-admission created
job.batch/ingress-nginx-admission-create created
job.batch/ingress-nginx-admission-patch created
kubectl wait --namespace ingress-nginx \
>    --for=condition=ready pod \
>    --selector=app.kubernetes.io/component=controller \
>    --timeout=90s
pod/ingress-nginx-controller-67759f896-9bvv5 condition met
kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/usage.yaml
pod/foo-app created
service/foo-service created
pod/bar-app created
service/bar-service created
ingress.networking.k8s.io/example-ingress created
kubectl get pods -A
NAMESPACE       NAME                                                          READY   STATUS      RESTARTS   AGE
default         bar-app                                                       1/1     Running     0          6s
default         foo-app                                                       1/1     Running     0          7s
ingress-nginx   ingress-nginx-admission-create-vx77x                          0/1     Completed   0          42s
ingress-nginx   ingress-nginx-admission-patch-5vv9r                           0/1     Completed   0          42s
ingress-nginx   ingress-nginx-controller-67759f896-9bvv5                      1/1     Running     0          45s
kube-system     event-exporter-gke-77cccd97c6-vtlrm                           2/2     Running     0          4m59s
kube-system     fluentd-gke-d9jxm                                             2/2     Running     0          2m39s
kube-system     fluentd-gke-dhplz                                             2/2     Running     0          3m13s
kube-system     fluentd-gke-scaler-54796dcbf7-hwls9                           1/1     Running     0          4m56s
kube-system     fluentd-gke-tlj2n                                             2/2     Running     0          2m6s
kube-system     gke-metrics-agent-q94w2                                       1/1     Running     0          4m52s
kube-system     gke-metrics-agent-rjzts                                       1/1     Running     0          4m43s
kube-system     gke-metrics-agent-sts8c                                       1/1     Running     0          4m42s
kube-system     kube-dns-7bb4975665-dzlvw                                     4/4     Running     0          4m59s
kube-system     kube-dns-7bb4975665-lrjv7                                     4/4     Running     0          4m28s
kube-system     kube-dns-autoscaler-645f7d66cf-bqnfm                          1/1     Running     0          4m54s
kube-system     kube-proxy-gke-ingress-nginx-k8s-default-pool-4aea3aa5-0lbc   1/1     Running     0          4m52s
kube-system     kube-proxy-gke-ingress-nginx-k8s-default-pool-4aea3aa5-42pm   1/1     Running     0          4m42s
kube-system     kube-proxy-gke-ingress-nginx-k8s-default-pool-4aea3aa5-jcxc   1/1     Running     0          4m43s
kube-system     l7-default-backend-678889f899-dbbch                           1/1     Running     0          5m
kube-system     metrics-server-v0.3.6-64655c969-xrt9h                         2/2     Running     0          4m27s
kube-system     prometheus-to-sd-h8zxv                                        1/1     Running     0          4m42s
kube-system     prometheus-to-sd-j64s6                                        1/1     Running     0          4m43s
kube-system     prometheus-to-sd-jh495                                        1/1     Running     0          4m52s
kube-system     stackdriver-metadata-agent-cluster-level-565b88964d-sdmh4     2/2     Running     1          4m6s
sleep 60
kubectl get ing -A
NAMESPACE   NAME              HOSTS   ADDRESS         PORTS   AGE
default     example-ingress   *       34.83.147.123   80      63s
curl 34.83.147.123
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx</center>
</body>
</html>

(expected, only /foo and /bar are mapped)

curl 34.83.147.123/bar
bar
curl 34.83.147.123/foo
foo
kubectl logs -f -n ingress-nginx   ingress-nginx-controller-67759f896-9bvv5 
-------------------------------------------------------------------------------
NGINX Ingress controller
  Release:       v0.41.2
  Build:         d8a93551e6e5798fc4af3eb910cef62ecddc8938
  Repository:    https://github.com/kubernetes/ingress-nginx
  nginx version: nginx/1.19.4

-------------------------------------------------------------------------------

I1128 16:04:29.108727       6 flags.go:205] "Watching for Ingress" class="nginx"
W1128 16:04:29.111652       6 flags.go:210] Ingresses with an empty class will also be processed by this Ingress controller
W1128 16:04:29.111970       6 client_config.go:608] Neither --kubeconfig nor --master was specified.  Using the inClusterConfig.  This might not work.
I1128 16:04:29.112125       6 main.go:241] "Creating API client" host="https://10.111.240.1:443"
I1128 16:04:29.123281       6 main.go:285] "Running in Kubernetes cluster" major="1" minor="16+" git="v1.16.15-gke.4300" state="clean" commit="7ed5ddc0e67cb68296994f0b754cec45450d6a64" platform="linux/amd64"
I1128 16:04:29.366373       6 main.go:105] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
I1128 16:04:29.380812       6 ssl.go:528] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
I1128 16:04:29.416486       6 nginx.go:249] "Starting NGINX Ingress controller"
I1128 16:04:29.438180       6 event.go:282] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"75861588-4018-40d0-8363-0e207b2195e2", APIVersion:"v1", ResourceVersion:"1840", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
I1128 16:04:30.617176       6 nginx.go:291] "Starting NGINX process"
I1128 16:04:30.617468       6 leaderelection.go:243] attempting to acquire leader lease  ingress-nginx/ingress-controller-leader-nginx...
I1128 16:04:30.617795       6 nginx.go:311] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
I1128 16:04:30.618051       6 controller.go:144] "Configuration changes detected, backend reload required"
I1128 16:04:30.633898       6 leaderelection.go:253] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx
I1128 16:04:30.634382       6 status.go:84] "New leader elected" identity="ingress-nginx-controller-67759f896-9bvv5"
I1128 16:04:30.716550       6 controller.go:161] "Backend successfully reloaded"
I1128 16:04:30.716839       6 controller.go:172] "Initial sync, sleeping for 1 second"
I1128 16:04:30.717251       6 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-67759f896-9bvv5", UID:"69306a33-62bd-4619-9243-5cd19ad99eee", APIVersion:"v1", ResourceVersion:"1883", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
W1128 16:04:50.856834       6 controller.go:950] Service "default/foo-service" does not have any active Endpoint.
W1128 16:04:50.856867       6 controller.go:950] Service "default/bar-service" does not have any active Endpoint.
I1128 16:04:50.921678       6 main.go:112] "successfully validated configuration, accepting" ingress="example-ingress/default"
I1128 16:04:50.929275       6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"example-ingress", UID:"3c3c4a56-c154-41d5-8fba-020025a7bdd5", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"2110", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
W1128 16:04:50.936565       6 controller.go:950] Service "default/foo-service" does not have any active Endpoint.
W1128 16:04:50.936732       6 controller.go:950] Service "default/bar-service" does not have any active Endpoint.
I1128 16:04:50.998927       6 main.go:112] "successfully validated configuration, accepting" ingress="example-ingress/default"
I1128 16:04:51.003376       6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"example-ingress", UID:"3c3c4a56-c154-41d5-8fba-020025a7bdd5", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"2112", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
I1128 16:04:54.117314       6 controller.go:144] "Configuration changes detected, backend reload required"
I1128 16:04:54.245180       6 controller.go:161] "Backend successfully reloaded"
I1128 16:04:54.246042       6 event.go:282] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-67759f896-9bvv5", UID:"69306a33-62bd-4619-9243-5cd19ad99eee", APIVersion:"v1", ResourceVersion:"1883", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
I1128 16:05:30.639982       6 status.go:290] "updating Ingress status" namespace="default" ingress="example-ingress" currentValue=[] newValue=[{IP:34.83.147.123 Hostname:}]
I1128 16:05:30.648706       6 event.go:282] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"example-ingress", UID:"3c3c4a56-c154-41d5-8fba-020025a7bdd5", APIVersion:"networking.k8s.io/v1beta1", ResourceVersion:"2288", FieldPath:""}): type: 'Normal' reason: 'Sync' Scheduled for sync
200.83.32.243 - - [28/Nov/2020:16:06:24 +0000] "GET /foo HTTP/1.1" 200 4 "-" "curl/7.68.0" 80 0.003 [default-foo-service-5678] [] 10.108.2.7:5678 4 0.002 200 2b71f5cf7f0477962f8e8cc3f2ff086d
200.83.32.243 - - [28/Nov/2020:16:06:28 +0000] "GET /bar HTTP/1.1" 200 4 "-" "curl/7.68.0" 80 0.002 [default-bar-service-5678] [] 10.108.2.8:5678 4 0.002 200 4233f276c73a7ceaf3521ad974c907ef
200.83.32.243 - - [28/Nov/2020:16:08:51 +0000] "GET /bar HTTP/1.1" 200 4 "-" "curl/7.68.0" 80 0.002 [default-bar-service-5678] [] 10.108.2.8:5678 4 0.002 200 aaa1fa13d768101fd39c463645929320
200.83.32.243 - - [28/Nov/2020:16:08:55 +0000] "GET /foo HTTP/1.1" 200 4 "-" "curl/7.68.0" 80 0.002 [default-foo-service-5678] [] 10.108.2.7:5678 4 0.002 200 5b66251e285ec799bb0ae4ad2217c8a4

From the log:

I1128 16:04:50.921678       6 main.go:112] "successfully validated configuration, accepting" ingress="example-ingress/default"

that means the API server reached the webhook validation running in the ingress-nginx pod

Hi! Thanks again for creating and maintaining this project. Whether this is a legit issue or not, passive aggressiveness is never a solution.

I do have the same issue on a baremetal cluster bootstrapped by kubeadm and using Calico as the CNI. There is no firewall between any of the nodes, so they should be able to freely talk to each other. It might be possible that kubeadm's default settings have some firewall rules that cause this issue. UPDATE: I had a different issue altogether. Please ignore.

I found a way to recreate the problem using minikube (which I recognize is experimental with multinode setups, but this might help digging deeper into the issue):

minikube start -n=2
helm install -n default ingress ingress-nginx/ingress-nginx
kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/usage.yaml

rkevin@redshift:~$ minikube start -n=2
😄 minikube v1.15.1 on Arch rolling
✨ Automatically selected the virtualbox driver
👍 Starting control plane node minikube in cluster minikube
🔥 Creating virtualbox VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
🐳 Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
🔎 Verifying Kubernetes components...
🌟 Enabled addons: storage-provisioner, default-storageclass

❗ Multi-node clusters are currently experimental and might exhibit unintended behavior.
📘 To track progress on multi-node clusters, see https://github.com/kubernetes/minikube/issues/7538.

👍 Starting node minikube-m02 in cluster minikube
🔥 Creating virtualbox VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
🌐 Found network options:
▪ NO_PROXY=192.168.99.144
🐳 Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
▪ env NO_PROXY=192.168.99.144
🔎 Verifying Kubernetes components...
🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
rkevin@redshift:~$ helm install -n default ingress ingress-nginx/ingress-nginx
NAME: ingress
LAST DEPLOYED: Sat Nov 28 14:12:11 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w ingress-ingress-nginx-controller'

An example Ingress that makes use of the controller:

  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
rkevin@redshift:~$ kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/usage.yaml
pod/foo-app created
service/foo-service created
pod/bar-app created
service/bar-service created
Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
Error from server (InternalError): error when creating "https://kind.sigs.k8s.io/examples/ingress/usage.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-ingress-nginx-controller-admission.default.svc:443/networking/v1beta1/ingresses?timeout=10s": dial tcp 10.96.214.134:443: connect: connection refused

Interestingly enough, this is not a problem with kind. I got it to work with the following:
kind create cluster --config - <<EOF
> kind: Cluster
> apiVersion: kind.x-k8s.io/v1alpha4
> nodes:
>   - role: control-plane
>   - role: worker
> EOF
helm install -n default ingress ingress-nginx/ingress-nginx
kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/usage.yaml

I can make a vagrant + kubeadm setup and see if that recreates this problem if you want. The baremetal cluster we have is fairly vanilla, so I can't think of a reason for it to fail if firewall rules are the culprit. UPDATE: I had a different issue altogether. Please ignore.

Error from server (InternalError): error when creating "https://kind.sigs.k8s.io/examples/ingress/usage.yaml": Internal error occurred: failed calling webhook "validate.nginx.ingress.kubernetes.io": Post "https://ingress-ingress-nginx-controller-admission.default.svc:443/networking/v1beta1/ingresses?timeout=10s": dial tcp 10.96.214.134:443: connect: connection refused

Did you execute that command just after the helm install? The command

kubectl wait --namespace ingress-nginx \
    --for=condition=ready pod \
    --selector=app.kubernetes.io/component=controller \
    --timeout=90s

waits for the creation of the SSL certificate used by the validation webhook (this usually takes ~60s). Only after that secret is created can the ingress controller start.
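
If the webhook is still timing out right after an install, a quick sanity check (a sketch only; the namespace and names below match the static "ingress-nginx" manifests and may be prefixed differently for a Helm release) is to confirm that the admission secret, the create/patch jobs, and the controller pod all exist:

```bash
# Hedged sanity check -- adjust the namespace and names to your install.
kubectl -n ingress-nginx get secret ingress-nginx-admission      # cert used by the webhook
kubectl -n ingress-nginx get jobs                                # admission create/patch jobs should be Completed
kubectl -n ingress-nginx get pods -l app.kubernetes.io/component=controller
```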

@rkevin-arch I can reproduce the minikube issue, but it seems related to the default CNI minikube selects (kindnet).
Please check again using flannel:

minikube start -n=2 --driver=kvm2 --cni=flannel
😄  minikube v1.15.1 on Debian bullseye/sid
✨  Using the kvm2 driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating kvm2 VM (CPUs=2, Memory=3950MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
🔗  Configuring Flannel (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass

❗  Multi-node clusters are currently experimental and might exhibit unintended behavior.
📘  To track progress on multi-node clusters, see https://github.com/kubernetes/minikube/issues/7538.

👍  Starting node minikube-m02 in cluster minikube
🔥  Creating kvm2 VM (CPUs=2, Memory=3950MB, Disk=20000MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.39.15
🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
    ▪ env NO_PROXY=192.168.39.15
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
helm install -n default ingress ingress-nginx/ingress-nginx
NAME: ingress
LAST DEPLOYED: Sat Nov 28 20:49:11 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w ingress-ingress-nginx-controller'

An example Ingress that makes use of the controller:

  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/usage.yaml
pod/foo-app created
service/foo-service created
pod/bar-app created
service/bar-service created
ingress.networking.k8s.io/example-ingress created
kubectl get pods -A -o wide
NAMESPACE     NAME                                                READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
default       bar-app                                             1/1     Running   0          2m8s    10.244.1.4       minikube-m02   <none>           <none>
default       foo-app                                             1/1     Running   0          2m8s    10.244.1.5       minikube-m02   <none>           <none>
default       ingress-ingress-nginx-controller-8488fbdf45-czvvx   1/1     Running   0          2m51s   10.244.1.2       minikube-m02   <none>           <none>
kube-system   coredns-f9fd979d6-jrmzj                             1/1     Running   0          3m38s   10.88.0.2        minikube       <none>           <none>
kube-system   etcd-minikube                                       1/1     Running   0          3m46s   192.168.39.15    minikube       <none>           <none>
kube-system   kube-apiserver-minikube                             1/1     Running   0          3m46s   192.168.39.15    minikube       <none>           <none>
kube-system   kube-controller-manager-minikube                    1/1     Running   0          3m46s   192.168.39.15    minikube       <none>           <none>
kube-system   kube-flannel-ds-amd64-fzzrb                         1/1     Running   0          3m10s   192.168.39.237   minikube-m02   <none>           <none>
kube-system   kube-flannel-ds-amd64-jb9fg                         1/1     Running   0          3m38s   192.168.39.15    minikube       <none>           <none>
kube-system   kube-proxy-228k4                                    1/1     Running   0          3m38s   192.168.39.15    minikube       <none>           <none>
kube-system   kube-proxy-jrbz6                                    1/1     Running   0          3m10s   192.168.39.237   minikube-m02   <none>           <none>
kube-system   kube-scheduler-minikube                             1/1     Running   0          3m46s   192.168.39.15    minikube       <none>           <none>
kube-system   storage-provisioner                                 1/1     Running   1          3m52s   192.168.39.15    minikube       <none>           <none>

@rkevin-arch just in case, I ran the same test with calico:

minikube start -n=2 --driver=kvm2 --cni=calico
😄  minikube v1.15.1 on Debian bullseye/sid
✨  Using the kvm2 driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Creating kvm2 VM (CPUs=2, Memory=3950MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
🔗  Configuring Calico (Container Networking Interface) ...
🔎  Verifying Kubernetes components...
🌟  Enabled addons: storage-provisioner, default-storageclass

❗  Multi-node clusters are currently experimental and might exhibit unintended behavior.
📘  To track progress on multi-node clusters, see https://github.com/kubernetes/minikube/issues/7538.

👍  Starting node minikube-m02 in cluster minikube
🔥  Creating kvm2 VM (CPUs=2, Memory=3950MB, Disk=20000MB) ...
🌐  Found network options:
    ▪ NO_PROXY=192.168.39.236
🐳  Preparing Kubernetes v1.19.4 on Docker 19.03.13 ...
    ▪ env NO_PROXY=192.168.39.236
🔎  Verifying Kubernetes components...
🏄  Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
helm install -n default ingress ingress-nginx/ingress-nginx
NAME: ingress
LAST DEPLOYED: Sat Nov 28 20:56:33 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The ingress-nginx controller has been installed.
It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl --namespace default get services -o wide -w ingress-ingress-nginx-controller'

An example Ingress that makes use of the controller:

  apiVersion: networking.k8s.io/v1beta1
  kind: Ingress
  metadata:
    annotations:
      kubernetes.io/ingress.class: nginx
    name: example
    namespace: foo
  spec:
    rules:
      - host: www.example.com
        http:
          paths:
            - backend:
                serviceName: exampleService
                servicePort: 80
              path: /
    # This section is only required if TLS is to be enabled for the Ingress
    tls:
        - hosts:
            - www.example.com
          secretName: example-tls

If TLS is enabled for the Ingress, a Secret containing the certificate and key must also be provided:

  apiVersion: v1
  kind: Secret
  metadata:
    name: example-tls
    namespace: foo
  data:
    tls.crt: <base64 encoded cert>
    tls.key: <base64 encoded key>
  type: kubernetes.io/tls
kubectl apply -f https://kind.sigs.k8s.io/examples/ingress/usage.yaml
pod/foo-app created
service/foo-service created
pod/bar-app created
service/bar-service created
ingress.networking.k8s.io/example-ingress created

Hmm, I can confirm minikube start -n=2 --cni=calico works. I'll take a look at using vagrant + kubeadm to spawn a cluster with Calico and see if I can replicate the issue.

Whoops, I didn't realize that the issue I have has been completely unrelated to this one the entire time, even though deleting the ValidatingWebhookConfiguration does work around it. Sorry about that. Feel free to mark my comments as off-topic.

(The issue I had was Error: admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: rejecting admission review because the request does not contains an Ingress resource but networking.k8s.io/v1, Resource=ingresses with name jupyterhub in namespace staging-jhub. I'll dig elsewhere for a more permanent solution.)

Error: admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: rejecting admission review because the request does not contains an Ingress resource but networking.k8s.io/v1, Resource=ingresses with name jupyterhub in namespace staging-jhub

@rkevin-arch please make sure you are using the latest version, v0.41.2. There was a regression that denied validation of networking.k8s.io/v1 Ingress resources.
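
A hedged way to confirm which controller version is actually running, and then upgrade it, is sketched below (the deployment and release names follow the "ingress" release in the "default" namespace from the logs above, and may differ in your cluster):

```bash
# Print the controller image tag, then upgrade the chart in place.
kubectl -n default get deployment ingress-ingress-nginx-controller \
  -o jsonpath='{.spec.template.spec.containers[0].image}'
helm repo update
helm upgrade ingress ingress-nginx/ingress-nginx -n default
```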

I run ingress-nginx on my DIY cluster and started seeing this issue after upgrading to the latest version (3.12.0).
My cluster is based on typhoon but has many modifications.

I'll try running the latest ingress-nginx version with the latest typhoon (1.19.4).

typhoon has an nginx ingress addon, which can be installed with kubectl. I wonder whether the issue is reproducible with it.

I don't know what the rules.apiVersions value in the webhook YAML is for, but
prometheus-operator uses '*' for rules.apiVersions, while ingress-nginx only lists 'v1beta1'.
So is this just an issue with the YAML definition?

I installed kube-prometheus-stack (which also uses admission webhooks) in the cluster as well, and I don't have any issues with it.

https://github.com/prometheus-community/helm-charts/blob/kube-prometheus-stack-12.3.0/charts/kube-prometheus-stack/templates/prometheus-operator/admission-webhooks/validatingWebhookConfiguration.yaml#L20
https://github.com/kubernetes/ingress-nginx/blob/ingress-nginx-3.12.0/charts/ingress-nginx/templates/admission-webhooks/validating-webhook.yaml#L21
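
Rather than reading the templates, one hedged way to compare the two definitions is to inspect them in a running cluster (the object name below is the default from the static manifests; a Helm release may prefix it differently, so list the configurations first):

```bash
# List the webhook configurations, then dump the rules of the ingress-nginx one.
kubectl get validatingwebhookconfigurations
kubectl get validatingwebhookconfiguration ingress-nginx-admission \
  -o jsonpath='{.webhooks[0].rules}'
```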

https://stackoverflow.com/questions/61616203/nginx-ingress-controller-failed-calling-webhook/62713105#62713105

This answer says that the issue is caused by an older API version, but it seems the apiVersion has recently been updated to the non-beta one: "apiVersion: admissionregistration.k8s.io/v1".
I'm on k8s 1.19.4, which I believe is the newest released version.

@aduncmj I found this solution https://stackoverflow.com/questions/61365202/nginx-ingress-service-ingress-nginx-controller-admission-not-found

kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

Killing the webhook doesn't solve the problem - you need the webhook for a working cluster.

First of all, you should not delete the ValidatingWebhookConfiguration; that is very much not recommended. Secondly, you need to update to the latest version of ingress-nginx, and make sure that your ingress controller is not deployed on the k8s master. Finally, make sure that the ingress controller is in the Running state.
Then you will not see any errors.


That's not entirely true. It has been posted several times already that this is a networking issue, at least for the context deadline exceeded error (which is the original error posted in this issue). The first reply correctly addresses it, and the docs are explicit about it: https://kubernetes.github.io/ingress-nginx/deploy/:

For private clusters, you will need to either add an additional firewall rule that allows master nodes access to port 8443/tcp on worker nodes, or change the existing rule that allows access to ports 80/tcp, 443/tcp and 10254/tcp to also allow access to port 8443/tcp.
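
For a private GKE cluster, that typically translates into a firewall rule along these lines (a sketch only; the rule name, network, node tag, and master CIDR below are placeholders you must replace with your own values):

```bash
# Allow the GKE master range to reach the admission webhook port on the nodes.
gcloud compute firewall-rules create allow-master-to-ingress-webhook \
  --network my-cluster-network \
  --source-ranges 172.16.0.0/28 \
  --target-tags my-cluster-node-tag \
  --allow tcp:8443
```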

It turned out that my helm chart values were incorrect.
I had set hostNetwork: true, which effectively disables access to the admission webhook.

To use admission webhooks with hostNetwork: true, you would need to open port 8443 on the node as well, I guess, but I don't think that's a good idea.

If what you need is just to expose ports 80 and 443 (but not 8443), you can use host port mapping instead of hostNetwork. That way the admission webhook remains accessible only inside the cluster, which is better than exposing port 8443 on the node:

controller:
  hostPort:
    enabled: true
  kind: Deployment
  publishService:
    enabled: false
  replicaCount: 1
  service:
    type: ClusterIP
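
For reference, a minimal sketch of applying values like these (the release name "ingress", the namespace "default", and the values.yaml file name follow the earlier logs and are assumptions):

```bash
# Re-render the chart with hostPort enabled and hostNetwork left at its default (false).
helm upgrade --install ingress ingress-nginx/ingress-nginx \
  -n default -f values.yaml
```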

@vroad
You do not have to open port 8443, but you still need to map that port to a node port, like the other services do. Right?

ports:
  - name: http
    nodePort: 32519
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    nodePort: 31635
    port: 443
    protocol: TCP
    targetPort: https

Anyway, on GCP the solution requires opening port 8443 from the master to the nodes, so it is not opened to the external world.

@rzuku I use Calico on my DIY cluster, which is based on typhoon and runs on AWS.

I don't know anything about GCP, but I don't have to configure security groups (or a NodePort)
for port 8443 after disabling hostNetwork, because Calico handles the connection to the pod, and I'm not using things like Calico network policies for now.

I have looked at my AWS EKS cluster, and the outcome is: barring any other issues, a properly defined security group that allows port 8443 from the control plane to the nodes should help.
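
As a sketch, the corresponding rule can be added with the AWS CLI (both security group IDs below are placeholders; use your node group and cluster/control-plane security groups):

```bash
# Allow the control-plane security group to reach port 8443 on the worker nodes.
aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 8443 \
  --source-group sg-0fedcba9876543210
```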

@aduncmj I found this solution https://stackoverflow.com/questions/61365202/nginx-ingress-service-ingress-nginx-controller-admission-not-found

kubectl delete -A ValidatingWebhookConfiguration ingress-nginx-admission

This worked for me after hours of searching, thanks!

@renanrider As others have already pointed out, you should resolve the network issues rather than disable the webhook. Disabling the admission webhook is a bad idea.
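
Before deleting anything, a hedged way to confirm whether this really is a network problem is to hit the admission service from inside the cluster (the service name below matches the "ingress" release in the "default" namespace used earlier in this thread; a connection refused or timeout points at networking, while any HTTP/TLS response means the webhook is reachable):

```bash
# Throwaway pod that curls the admission endpoint and is removed afterwards.
kubectl run webhook-check --rm -it --restart=Never \
  --image=curlimages/curl --command -- \
  curl -k -m 5 https://ingress-ingress-nginx-controller-admission.default.svc:443
```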
