kops version: 1.8.1
kubernetes version: 1.8.6
kubectl version: 1.8.6
image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-01-14
cloud provider: AWS
On one of the 3 master nodes (kube-proxy on the other masters and worker nodes doesn't have this issue), kube-proxy is logging a lot of errors while starting its metrics server:
E0301 20:46:18.193727 5 healthcheck.go:317] Failed to start node healthz on 0.0.0.0:10256: listen tcp 0.0.0.0:10256: bind: address already in use
E0301 20:46:18.831790 5 server.go:480] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0301 20:46:23.832087 5 server.go:480] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0301 20:46:28.832373 5 server.go:480] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0301 20:46:33.832704 5 server.go:480] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0301 20:46:38.832955 5 server.go:480] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0301 20:46:43.833330 5 server.go:480] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0301 20:46:48.833539 5 server.go:480] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0301 20:46:53.833810 5 server.go:480] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0301 20:46:58.834118 5 server.go:480] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0301 20:47:03.834282 5 server.go:480] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0301 20:47:08.834555 5 server.go:480] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0301 20:47:13.834810 5 server.go:480] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use
E0301 20:47:18.194014 5 healthcheck.go:317] Failed to start node healthz on 0.0.0.0:10256: listen tcp 0.0.0.0:10256: bind: address already in use
E0301 20:47:18.835078 5 server.go:480] starting metrics server failed: listen tcp 127.0.0.1:10249: bind: address already in use
The ports appear to already be in use, and checking with netstat shows they are held by established connections between kube-apiserver and etcd:
netstat -vatnp | grep 10249
tcp 0 0 127.0.0.1:10249 127.0.0.1:4001 ESTABLISHED 2112/kube-apiserver
tcp6 0 0 127.0.0.1:4001 127.0.0.1:10249 ESTABLISHED 2235/etcd
netstat -vatnp | grep 10256
tcp 0 0 127.0.0.1:10256 127.0.0.1:4001 ESTABLISHED 2112/kube-apiserver
tcp6 0 0 127.0.0.1:4001 127.0.0.1:10256 ESTABLISHED 2235/etcd
netstat -tulpn | grep 4001
tcp6 0 0 :::4001 :::* LISTEN 2235/etcd
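So these are kube-apiserver's client connections to etcd that happened to pick 10249 and 10256 as ephemeral source ports, which is why kube-proxy can no longer bind them. One mitigation I'm considering (a rough, untested sketch; the port list is taken straight from the netstat output above, and kops does not set this for you) is to reserve those ports at the host level so the kernel never hands them out as source ports:
# Show the range the kernel uses for outbound (ephemeral) source ports
cat /proc/sys/net/ipv4/ip_local_port_range
# Reserve kube-proxy's metrics (10249) and healthz (10256) ports so local
# clients such as kube-apiserver cannot grab them as source ports
echo 'net.ipv4.ip_local_reserved_ports = 10249,10256' | sudo tee /etc/sysctl.d/99-kube-reserved-ports.conf
sudo sysctl --system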
Is anyone experiencing the same? Any solutions? Does kops support arguments to start the kube-proxy metrics server and healthz listeners on different ports?
Thanks.
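For reference, the kube-proxy flags behind these listeners are --metrics-bind-address (default 127.0.0.1:10249) and --healthz-bind-address (default 0.0.0.0:10256), so the addresses are configurable at the kube-proxy level (check kube-proxy --help on your version). Whether kops 1.8 exposes them in the cluster spec is exactly the open question above; the invocation below only illustrates the flags, with arbitrarily chosen example ports, and is not a kops-generated command line:
# Illustration only: moving both listeners off their default ports
# (10349/10356 are arbitrary example ports, not recommendations)
kube-proxy --metrics-bind-address=127.0.0.1:10349 --healthz-bind-address=0.0.0.0:10356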
Facing a similar error.
I also tried downgrading the cluster to 1.7 using kops 1.8, and the issue persisted.
I think it's related to the kube-proxy QoS class change in 1.8: it is Burstable now, and I believe it was BestEffort before.
Any workarounds for this?
@kumudt Not sure if it helps, but I just restarted the master nodes in a rolling manner, one by one, and the problem went away.
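In case it helps others, the same rolling restart can be done through kops rather than by hand; a minimal sketch (the instance group name master-us-east-1a and the $CLUSTER_NAME variable are assumptions, list your own first):
# List instance groups to find the master IG names
kops get instancegroups --name $CLUSTER_NAME
# Roll a single master instance group; add --force if kops reports nothing needs updating
kops rolling-update cluster --name $CLUSTER_NAME --instance-group master-us-east-1a --yes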
Seeing this with kops 1.9 after the node ran out of disk and recovered
Failed to start node healthz on 0.0.0.0:10256: listen tcp 0.0.0.0:10256: bind: address already in use
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.