Hi, I am using the newest ingress YAML, but I still get the error below in the kubelet log. I have checked the readinessProbe and livenessProbe configs; both use /healthz as the path and 10254 as the port. I don't know what else could be wrong:
E0226 22:15:39.289364 2536 pod_workers.go:184] Error syncing pod dcee08ac-fc5e-11e6-87bb-080027e0776f, skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 5m0s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-2478719372-sst0p_kube-system(dcee08ac-fc5e-11e6-87bb-080027e0776f)"
Below is my YAML config output:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-system","name":"nginx-ingress-controller-2478719372","uid":"dcebdf9e-fc5e-11e6-87bb-080027e0776f","apiVersion":"extensions","resourceVersion":"1130706"}}
  creationTimestamp: 2017-02-26T20:05:04Z
  generateName: nginx-ingress-controller-2478719372-
  labels:
    k8s-app: nginx-ingress-controller
    pod-template-hash: "2478719372"
  name: nginx-ingress-controller-2478719372-sst0p
  namespace: kube-system
  ownerReferences:
Can you please remove the 'healthz' check and see if the error persists?
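The probes in question look roughly like this in the deployment manifest (a sketch based on the path, port, and delay values reported above; comment both blocks out to test):

livenessProbe:
  httpGet:
    path: /healthz
    port: 10254
  initialDelaySeconds: 10
  timeoutSeconds: 1
readinessProbe:
  httpGet:
    path: /healthz
    port: 10254
  timeoutSeconds: 1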
@rikatz Thanks, I just fixed this issue by adding --apiserver-host when starting the ingress controller. I don't know why no guide or instructions mention something this important. Maybe the documentation wasn't updated in time?
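For reference, the fix is just one extra flag in the container args of the deployment, something like this (the address is a placeholder for your own API server URL):

args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
- --apiserver-host=https://<master-ip>:443   # placeholder: point this at your API server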
@mmzhou you need to check the pod logs. If there's any issue with the connection to the api server you will see a reference to the troubleshooting doc
@mmzhou please reopen if you still have issues (also include the logs of the ingress pod)
I can reproduce this issue:
Steps to reproduce:
Added namespace: kube-system to all the yaml files, as the default-backend example is in the kube-system namespace.
$ kubectl get pods -n kube-system
nginx-ingress-controller-2275227678-n7b22 0/1 CrashLoopBackOff 6 7m
$ kubectl logs nginx-ingress-controller-2275227678-n7b22 -n kube-system
I0329 16:39:47.098483 5 launch.go:96] &{NGINX 0.9.0-beta.3 git-3dd7461 git@github.com:ixdy/kubernetes-ingress.git}
I0329 16:39:47.098693 5 launch.go:99] Watching for ingress class: nginx
I0329 16:39:47.099228 5 launch.go:245] Creating API server client for https://10.0.0.1:443
I0329 16:39:47.100962 5 nginx.go:127] starting NGINX process...
I0329 16:39:47.169809 5 launch.go:115] validated kube-system/default-http-backend as the default backend
F0329 16:39:47.171747 5 launch.go:125] service kube-system/nginx-ingress-lb does not (yet) have ingress points
$ kubectl describe pods nginx-ingress-controller-2275227678-9q00c -n kube-system
Name:           nginx-ingress-controller-2275227678-9q00c
Namespace:      kube-system
Node:           node3/10.10.10.127
Start Time:     Wed, 29 Mar 2017 17:43:58 +0100
Labels:         k8s-app=nginx-ingress-controller
                pod-template-hash=2275227678
Status:         Running
IP:             172.10.35.3
Controllers:    ReplicaSet/nginx-ingress-controller-2275227678
Containers:
  nginx-ingress-controller:
    Container ID:   docker://f06b2fcaffd3792ec78587787021ed5d7104155dc784af4d677f93c4c06d2e7c
    Image:          gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.3
    Image ID:       docker://sha256:383e5ec1f5f90a9d43b4c65392af8d918afe8bf8e6764f41facd21ed7e038d35
    Ports:          80/TCP, 443/TCP
    Args:
      /nginx-ingress-controller
      --default-backend-service=$(POD_NAMESPACE)/default-http-backend
      --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    255
      Started:      Wed, 29 Mar 2017 17:46:53 +0100
      Finished:     Wed, 29 Mar 2017 17:46:53 +0100
    Ready:          False
    Restart Count:  5
    Liveness:       http-get http://:10254/healthz delay=10s timeout=1s period=10s #success=1 #failure=3
    Readiness:      http-get http://:10254/healthz delay=0s timeout=1s period=10s #success=1 #failure=3
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-hcnbb (ro)
    Environment Variables:
      POD_NAME:       nginx-ingress-controller-2275227678-9q00c (v1:metadata.name)
      POD_NAMESPACE:  kube-system (v1:metadata.namespace)
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  default-token-hcnbb:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-hcnbb
QoS Class:      BestEffort
Tolerations:    <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
4m 4m 1 {default-scheduler } Normal Scheduled Successfully assigned nginx-ingress-controller-2275227678-9q00c to node3
4m 4m 1 {kubelet node3} spec.containers{nginx-ingress-controller} Normal Created Created container with docker id 21ac0a1295dd; Security:[seccomp=unconfined]
4m 4m 1 {kubelet node3} spec.containers{nginx-ingress-controller} Normal Started Started container with docker id 21ac0a1295dd
4m 4m 1 {kubelet node3} spec.containers{nginx-ingress-controller} Normal Created Created container with docker id 7ea74ce5aef3; Security:[seccomp=unconfined]
4m 4m 1 {kubelet node3} spec.containers{nginx-ingress-controller} Normal Started Started container with docker id 7ea74ce5aef3
4m 4m 3 {kubelet node3} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 10s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-2275227678-9q00c_kube-system(e7d0c4cd-149e-11e7-81d4-00505685528b)"
4m 4m 1 {kubelet node3} spec.containers{nginx-ingress-controller} Normal Created Created container with docker id 19e419914c9a; Security:[seccomp=unconfined]
4m 4m 1 {kubelet node3} spec.containers{nginx-ingress-controller} Normal Started Started container with docker id 19e419914c9a
4m 4m 3 {kubelet node3} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 20s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-2275227678-9q00c_kube-system(e7d0c4cd-149e-11e7-81d4-00505685528b)"
3m 3m 1 {kubelet node3} spec.containers{nginx-ingress-controller} Normal Created Created container with docker id 8a31b9e1ea87; Security:[seccomp=unconfined]
3m 3m 1 {kubelet node3} spec.containers{nginx-ingress-controller} Normal Started Started container with docker id 8a31b9e1ea87
3m 3m 4 {kubelet node3} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-2275227678-9q00c_kube-system(e7d0c4cd-149e-11e7-81d4-00505685528b)"
3m 3m 1 {kubelet node3} spec.containers{nginx-ingress-controller} Normal Started Started container with docker id 652f48600bf1
3m 3m 1 {kubelet node3} spec.containers{nginx-ingress-controller} Normal Created Created container with docker id 652f48600bf1; Security:[seccomp=unconfined]
3m 1m 7 {kubelet node3} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-2275227678-9q00c_kube-system(e7d0c4cd-149e-11e7-81d4-00505685528b)"
4m 1m 6 {kubelet node3} spec.containers{nginx-ingress-controller} Normal Pulled Container image "gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.3" already present on machine
1m 1m 1 {kubelet node3} spec.containers{nginx-ingress-controller} Normal Created Created container with docker id f06b2fcaffd3; Security:[seccomp=unconfined]
1m 1m 1 {kubelet node3} spec.containers{nginx-ingress-controller} Normal Started Started container with docker id f06b2fcaffd3
4m 9s 26 {kubelet node3} spec.containers{nginx-ingress-controller} Warning BackOff Back-off restarting failed docker container
1m 9s 9 {kubelet node3} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "nginx-ingress-controller" with CrashLoopBackOff: "Back-off 2m40s restarting failed container=nginx-ingress-controller pod=nginx-ingress-controller-2275227678-9q00c_kube-system(e7d0c4cd-149e-11e7-81d4-00505685528b)"
I am having the same issue. It happens when I set type: NodePort for the --publish-service service. That service has no load balancer associated with it, which triggers this code:
if len(svc.Status.LoadBalancer.Ingress) == 0 {
    // We could poll here, but we instead just exit and rely on k8s to restart us
    glog.Fatalf("service %s does not (yet) have ingress points", *publishSvc)
}
I was wondering if the nginx controller even supports NodePort services.
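For context, Kubernetes only populates status.loadBalancer.ingress for type: LoadBalancer services, so for a NodePort service the check above always fails. A minimal sketch of the kind of service that triggers it (the nodePort values are just illustrative):

apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-lb
  namespace: kube-system
spec:
  type: NodePort          # no cloud load balancer is provisioned for this type
  selector:
    k8s-app: nginx-ingress-controller
  ports:
  - name: http
    port: 80
    nodePort: 30080       # illustrative value
  - name: https
    port: 443
    nodePort: 30443       # illustrative value
# status.loadBalancer.ingress stays empty for this service,
# which is exactly the condition the glog.Fatalf above fires on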
The solution for me was to remove --publish-service altogether
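Concretely, that just means deleting the flag from the container args in the deployment, e.g.:

args:
- /nginx-ingress-controller
- --default-backend-service=$(POD_NAMESPACE)/default-http-backend
# - --publish-service=$(POD_NAMESPACE)/nginx-ingress-lb   # removed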
Same for me. I am creating an nginx ingress controller using methods outlined in a number of guides, but with a service of type NodePort in front of the nginx controller instead of a load balancer. I kept getting:
⇒ kubectl get pods | grep nginx
ingress-nginx-1785899290-wccg8 0/1 CrashLoopBackOff 1 8s
with the pod logs showing
⇒ kubectl logs -f ingress-nginx-1785899290-wccg8
I0621 19:08:46.331218 5 launch.go:101] &{NGINX 0.9.0-beta.8 git-245e6b0 https://github.com/kubernetes/ingress}
I0621 19:08:46.331322 5 launch.go:104] Watching for ingress class: nginx
I0621 19:08:46.331741 5 launch.go:257] Creating API server client for https://10.3.0.1:443
I0621 19:08:46.393209 5 nginx.go:185] starting NGINX process...
I0621 19:08:46.479777 5 launch.go:120] validated default/default-http-backend as the default backend
F0621 19:08:46.483025 5 launch.go:130] service default/ingress-nginx does not (yet) have ingress points
I also removed the --publish-service param and it is finally staying up. Not sure if it will fully work yet, as I haven't finished testing, but I found this issue when Googling the above error.
I had to make the following change to get it to work:
--- nginx-ingress.orig/templates/controller-deployment.yaml 1969-12-31 16:00:00.000000000 -0800
+++ nginx-ingress/templates/controller-deployment.yaml 2017-06-20 11:51:38.539392152 -0700
@@ -29,7 +29,7 @@
args:
- /nginx-ingress-controller
- --default-backend-service={{ if .Values.defaultBackend.enabled }}{{ .Release.Namespace }}/{{ template "defaultBackend.fullname" . }}{{ else }}{{ .Values.controller.defaultBackendService }}{{ end }}
- - --nginx-configmap={{ .Release.Namespace }}/{{ template "controller.fullname" . }}
+ - --configmap={{ .Release.Namespace }}/{{ template "controller.fullname" . }}
- --tcp-services-configmap={{ .Release.Namespace }}/{{ template "fullname" . }}-tcp
- --udp-services-configmap={{ .Release.Namespace }}/{{ template "fullname" . }}-udp
{{- range $key, $value := .Values.controller.extraArgs }}
@kfox1111 that flag was changed in 0.9-beta.1 https://github.com/kubernetes/ingress/releases/tag/nginx-ingress-controller-0.9-beta.1
Yup, and by default the charts point at the 0.9 series now:
https://github.com/kubernetes/charts/blob/master/stable/nginx-ingress/values.yaml#L8
So the stable charts were broken out of the box when I tried them a couple of days ago.
@andreychernih removing the --publish-service worked for me.
What is the impact of removing it?
> What is the impact of removing it?
The status you see in the ingress rules will show the IP of the node(s) where the ingress controller is running, and not the FQDN of the load balancer.
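In other words, without --publish-service the controller publishes its own node's address, so the ingress status ends up looking something like this (illustrative, reusing node3's IP from the describe output above):

status:
  loadBalancer:
    ingress:
    - ip: 10.10.10.127   # a node running the controller, not a load balancer hostname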
@vendrov I did not notice any impact in my use case.