Is this a request for help? (If yes, you should use our troubleshooting guide and community support channels, see https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/.):
Yes, I need help.
What keywords did you search in NGINX Ingress controller issues before filing this one? (If you have found any duplicates, you should instead reply there.):
Yes, I searched; I did not find a duplicate of this problem.
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
NGINX Ingress controller version: 0.24.1
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:26:52Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.5", GitCommit:"2166946f41b36dea2c4626f90a77706f426cdea2", GitTreeState:"clean", BuildDate:"2019-03-25T15:19:22Z", GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Environment:
uname -a: Linux k8s-m1 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

What happened:
I want to run multiple ingress-nginx controllers. When I deploy the second instance, I see this error:
I0628 01:41:26.521390 7 nginx.go:311] Starting NGINX process
I0628 01:41:26.521454 7 leaderelection.go:217] attempting to acquire leader lease lstest-front/ingress-controller-leader-iov-lstest-front...
W0628 01:41:26.525075 7 controller.go:724] Error obtaining Endpoints for Service "default/ingress-nginx": no object matching key "default/ingress-nginx" in local store
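Note that the failing lookup is for a Service named ingress-nginx in the default namespace, even though the Service below lives in lstest-front. One avenue worth checking (an assumption on my part, not confirmed in this thread) is to point the controller explicitly at its own Service with the --publish-service flag, which the controller does support, as an extra arg in the DaemonSet:

```yaml
args:
  - /nginx-ingress-controller
  # Explicitly name the Service whose address the controller should publish.
  # Whether this resolves the "default/ingress-nginx" lookup is an assumption.
  - --publish-service=$(POD_NAMESPACE)/ingress-nginx
```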
But I am sure I have modified everything that needed to be changed. My manifests:
apiVersion: v1
kind: Namespace
metadata:
  name: lstest-front
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: lstest-front
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: ClusterIP
  clusterIP: "None"
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
---
apiVersion: v1
kind: Namespace
metadata:
  name: lstest-front
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: lstest-front
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: lstest-front
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: lstest-front
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: lstest-front
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole-lstest-front
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: lstest-front
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-iov-lstest-front"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: lstest-front
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: lstest-front
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding-lstest-front
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole-lstest-front
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: lstest-front
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ingress-controller
  namespace: lstest-front
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        node: IOV
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.24.1
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --report-node-internal-ip-address
            - --annotations-prefix=nginx.ingress.kubernetes.io
            - --ingress-class=lstest-front
            - --election-id=ingress-controller-leader-iov
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 33
            runAsUser: 33
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
---
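For reference, an Ingress meant for this controller has to carry the matching class annotation; otherwise the controller skips it (as the later log line "ignoring add for ingress foo-bar based on annotation kubernetes.io/ingress.class" shows). A minimal hypothetical example, where the name, host, and backend Service are made up:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-app                       # hypothetical name
  namespace: lstest-front
  annotations:
    # Must match --ingress-class=lstest-front in the DaemonSet above,
    # or this controller will ignore the Ingress.
    kubernetes.io/ingress.class: lstest-front
spec:
  rules:
    - host: app.example.com               # hypothetical host
      http:
        paths:
          - path: /
            backend:
              serviceName: example-app    # hypothetical backend Service
              servicePort: 80
```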
What you expected to happen:
Requests currently return 503 errors; I expect the controller to work normally.
How to reproduce it (as minimally and precisely as possible):
Apply the above configuration directly; that reproduces the problem.
Anything else we need to know:
I tried renaming lstest-front to test-front, and then everything works fine. I don't understand what happened. Is this a bug?
I had the same error after upgrading my cluster to 1.13.7-gke.8; I downgraded it to 1.12.7-gke.25 to solve the issue.
I have the same issue with GKE version 1.13.11.
Closing. Please update to 0.26.2.
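To pick up 0.26.2, only the image tag in the DaemonSet above should need to change; a sketch of the relevant fragment (assuming the rest of the manifest stays as posted):

```yaml
containers:
  - name: nginx-ingress-controller
    # bump from 0.24.1 to the release the maintainer points at
    image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.26.2
```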
NGINX Ingress controller
Release: 0.26.2
Build: git-bde101c57
Repository: https://github.com/aledbf/ingress-nginx
nginx version: openresty/1.15.8.2
-------------------------------------------------------------------------------
I0101 23:12:49.263911 6 flags.go:198] Watching for Ingress class: lstest-front
W0101 23:12:49.263943 6 flags.go:201] Only Ingresses with class "lstest-front" will be processed by this Ingress controller
W0101 23:12:49.264270 6 flags.go:243] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
W0101 23:12:49.264309 6 client_config.go:541] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0101 23:12:49.264425 6 main.go:182] Creating API client for https://10.96.0.1:443
I0101 23:12:49.272482 6 main.go:226] Running in Kubernetes cluster version v1.16 (v1.16.2) - git (clean) commit c97fe5036ef3df2967d086711e6c0c405941e14b - platform linux/amd64
I0101 23:12:49.372159 6 main.go:101] SSL fake certificate created /etc/ingress-controller/ssl/default-fake-certificate.pem
I0101 23:12:49.395681 6 nginx.go:263] Starting NGINX Ingress controller
I0101 23:12:49.457925 6 event.go:255] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"lstest-front", Name:"nginx-configuration", UID:"38ab5bbc-fdbb-404b-96c2-29510ca55136", APIVersion:"v1", ResourceVersion:"1385", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap lstest-front/nginx-configuration
I0101 23:12:49.458799 6 event.go:255] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"lstest-front", Name:"udp-services", UID:"7e681b53-8a05-44cf-8ae2-1c3aa88fba48", APIVersion:"v1", ResourceVersion:"1387", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap lstest-front/udp-services
I0101 23:12:49.459398 6 event.go:255] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"lstest-front", Name:"tcp-services", UID:"2492fb83-457a-460b-8eed-caf3b2728210", APIVersion:"v1", ResourceVersion:"1386", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap lstest-front/tcp-services
I0101 23:13:04.546021 6 store.go:347] ignoring add for ingress foo-bar based on annotation kubernetes.io/ingress.class with value
I0101 23:13:04.597435 6 nginx.go:307] Starting NGINX process
I0101 23:13:04.597560 6 leaderelection.go:241] attempting to acquire leader lease lstest-front/ingress-controller-leader-iov-lstest-front...
I0101 23:13:04.599991 6 controller.go:134] Configuration changes detected, backend reload required.
I0101 23:13:04.605807 6 status.go:86] new leader elected: nginx-ingress-controller-zzkln
I0101 23:13:04.695839 6 controller.go:150] Backend successfully reloaded.
I0101 23:13:04.696073 6 controller.go:159] Initial sync, sleeping for 1 second.
I0101 23:13:46.504245 6 leaderelection.go:251] successfully acquired lease lstest-front/ingress-controller-leader-iov-lstest-front
I0101 23:13:46.504675 6 status.go:86] new leader elected: nginx-ingress-controller-jthfj
I am seeing this issue with the latest 0.27.1 now. When we revert to 0.26.2 it works fine.
I am getting the same error on version 0.25.1