BUG REPORT
When I enabled ingress-nginx, I got the following flood of log messages:
```
$ kubectl logs -f pod/ingress-nginx-controller-ll8ph --namespace=ingress-nginx
...
W0716 12:41:52.425622 5 queue.go:130] requeuing &ObjectMeta{Name:sync status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,}, err services "ingress-nginx" not found
W0716 12:41:52.519267 5 queue.go:130] requeuing &ObjectMeta{Name:sync status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,}, err services "ingress-nginx" not found
W0716 12:41:52.619749 5 queue.go:130] requeuing &ObjectMeta{Name:sync status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,}, err services "ingress-nginx" not found
W0716 12:41:52.724243 5 queue.go:130] requeuing &ObjectMeta{Name:sync status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,}, err services "ingress-nginx" not found
```
As far as I checked, the following configuration points the controller at an `ingress-nginx` service, but that service did not exist, which was the problem:

```
kubectl edit daemonset.apps/ingress-nginx-controller --namespace=ingress-nginx
```

```
"--publish-service=$(POD_NAMESPACE)/ingress-nginx"
```

After I changed it to the default-backend service installed by kubespray, the error was fixed:

```
"--publish-service=$(POD_NAMESPACE)/default-backend"
```
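To see which service the controller publishes its status to, and whether that service actually exists, something like the following can be used. This is a sketch assuming the DaemonSet and namespace names shown above; adjust them if yours differ:

```shell
# Show the controller's container args, one per line, and pick out --publish-service
kubectl -n ingress-nginx get daemonset ingress-nginx-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}' | tr ',' '\n' | grep publish-service

# List the services in the namespace; the service referenced above must exist here
kubectl -n ingress-nginx get svc
```

If the name after `--publish-service=` does not appear in the service list, the status updater will keep requeuing and logging the warning above.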
Environment:

```yaml
ingress_nginx_enabled: true
ingress_nginx_host_network: true
ingress_nginx_nodeselector:
  node-role.kubernetes.io/master: "true"
ingress_nginx_namespace: "ingress-nginx"
ingress_nginx_insecure_port: 80
ingress_nginx_secure_port: 443
ingress_nginx_configmap:
  map-hash-bucket-size: "128"
  ssl-protocols: "SSLv2"
# ingress_nginx_configmap_tcp_services:
#   9000: "default/example-go:8080"
ingress_nginx_configmap_udp_services:
  53: "kube-system/kube-dns:53"
```
OS: CentOS 7.5
Ansible version: 2.7.5
Kubespray version (commit) (`git rev-parse --short HEAD`): d1e170c
Network plugin used:
Same problem, same fix
I found the following issue:
https://github.com/kubernetes/ingress-nginx/issues/2599
I think running the following command is the right resolution:

```
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml
```
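Assuming that manifest creates a Service named `ingress-nginx` in the `ingress-nginx` namespace (the exact name the status updater looks up by default), the fix can be confirmed afterwards with:

```shell
# Should return the NodePort service rather than "not found"
kubectl -n ingress-nginx get svc ingress-nginx
```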
I have a related problem with the same log flood result: I have two ingress classes and controllers, each with its own service. For each, I have set the `--publish-service` flag to the correct service, neither of which is named `ingress-nginx`. Everything works, but the controller still produces about 24 of these messages every second:
```
W1030 15:22:52.118982 7 queue.go:130] requeuing &ObjectMeta{Name:sync status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,}, err services "ingress-nginx" not found
```
Since my services are actually valid, I can't use the workaround described in the OP, and I'm not sure what effect creating a random service as mentioned by https://github.com/kubernetes-incubator/kubespray/issues/3005#issuecomment-408424516 would have.
UPDATE: Ok, I'm running on Azure, so I applied https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml as per https://kubernetes.github.io/ingress-nginx/deploy/#azure, which seems to have solved the problem. Is it correct that this service should exist even though it isn't used for the actual ingress?
I think I understand why this other service needs to exist. It seems it is used by the ingress controller to provide the 404 responses, but I'm still a bit confused, like @rocketraman. In my case I had my own LoadBalancer service that was working, but I still had to create the service that @okamototk mentioned above before the log messages went away.
I found the following issue:
kubernetes/ingress-nginx#2599
I think running the following command is the right resolution:

```
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/baremetal/service-nodeport.yaml
```
@okamototk
I tried the same command but am still facing the same issue. Here is the log:
````
NGINX Ingress controller
Release: 0.23.0
Build: git-be1329b22
W0312 14:37:36.349804 6 flags.go:213] SSL certificate chain completion is disabled (--enable-ssl-chain-completion=false)
nginx version: nginx/1.15.9
W0312 14:37:36.352675 6 client_config.go:549] Neither --kubeconfig nor --master was specified. Using the inClusterConfig. This might not work.
I0312 14:37:36.352913 6 main.go:200] Creating API client for https://10.96.0.1:443
I0312 14:37:36.364031 6 main.go:244] Running in Kubernetes cluster version v1.13 (v1.13.4) - git (clean) commit c27b913fddd1a6c480c229191a087698aa92f0b1 - platform linux/amd64
I0312 14:37:36.620605 6 nginx.go:261] Starting NGINX Ingress controller
I0312 14:37:36.635939 6 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"nginx-configuration", UID:"5d4fb65d-44d4-11e9-919a-0021ccd89118", APIVersion:"v1", ResourceVersion:"811680", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/nginx-configuration
I0312 14:37:36.640159 6 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"5d52b0c4-44d4-11e9-919a-0021ccd89118", APIVersion:"v1", ResourceVersion:"811681", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
I0312 14:37:36.640340 6 event.go:221] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"5d545b32-44d4-11e9-919a-0021ccd89118", APIVersion:"v1", ResourceVersion:"811683", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
I0312 14:37:37.724190 6 event.go:221] Event(v1.ObjectReference{Kind:"Ingress", Namespace:"default", Name:"example-ingress", UID:"92cdc98b-44cc-11e9-919a-0021ccd89118", APIVersion:"extensions/v1beta1", ResourceVersion:"806471", FieldPath:""}): type: 'Normal' reason: 'CREATE' Ingress default/example-ingress
I0312 14:37:37.821287 6 nginx.go:282] Starting NGINX process
I0312 14:37:37.821554 6 leaderelection.go:205] attempting to acquire leader lease ingress-nginx/ingress-controller-leader-nginx...
I0312 14:37:37.822328 6 controller.go:172] Configuration changes detected, backend reload required.
I0312 14:37:37.831237 6 leaderelection.go:214] successfully acquired lease ingress-nginx/ingress-controller-leader-nginx
I0312 14:37:37.831648 6 status.go:148] new leader elected: nginx-ingress-controller-797b884cbc-65j8p
W0312 14:37:37.838672 6 queue.go:130] requeuing &ObjectMeta{Name:sync status,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:
````
Please add a service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
    - name: http
      port: 80
      targetPort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      protocol: TCP
  selector:
    app: ingress-nginx
```
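Assuming the manifest above is saved as `ingress-nginx-svc.yaml` (a filename chosen here for illustration), it can be applied and checked like this. Note that the selector `app: ingress-nginx` must match the labels on your controller pods, or the service will have no endpoints:

```shell
kubectl apply -f ingress-nginx-svc.yaml
kubectl -n ingress-nginx get svc ingress-nginx

# Endpoints should list the controller pod IPs if the selector matches
kubectl -n ingress-nginx get endpoints ingress-nginx
```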
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.