Describe the bug
With version 0.5 I frequently see errors like:
E0509 10:29:42.477736 30937 watcher.go:208] watch chan error: EOF
They originate here in kvsql
I don't remember them appearing with 0.4, so it seems to be a regression.
I have not observed any downstream errors yet. However, events don't get sent, so I assume they are there. ;)
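For anyone wondering where the message comes from: it is emitted by the watch loop in the kvsql backend, which relays change events to watchers and logs when the underlying stream closes. Below is a minimal sketch of that pattern; the `event` type, channel names, and `fmt` logging are illustrative only, not the actual kvsql code (which logs via klog at watcher.go:208):

```go
package main

import (
	"errors"
	"fmt"
	"io"
)

// event is a placeholder for the change events kvsql streams to watchers.
type event struct{ key, value string }

// watch drains an event channel and logs when the backing stream fails.
// An io.EOF on the error channel is what produces "watch chan error: EOF":
// the SQL-backed stream ended unexpectedly rather than closing cleanly.
func watch(events <-chan event, errs <-chan error) {
	for {
		select {
		case ev, ok := <-events:
			if !ok {
				return // event channel closed cleanly
			}
			fmt.Printf("got event: %s=%s\n", ev.key, ev.value)
		case err := <-errs:
			if errors.Is(err, io.EOF) {
				fmt.Printf("watch chan error: %v\n", err)
			}
			return
		}
	}
}

func main() {
	events := make(chan event)
	errs := make(chan error, 1)
	errs <- io.EOF // simulate the backend stream dropping
	watch(events, errs)
}
```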
@deas It's not really a regression; klog errors/warnings just started being displayed in 0.5. I can still reproduce the error logs in 0.4 if I start k3s with debug enabled.
It happens every 1-3 seconds on a mostly empty setup.
I got this issue along with the error info below, too. Does this error mean the k3s server is failing? Or does it just print the error information to the screen?
Failed to get system container stats for "/docker/328bcee8bbc949ccccbb0d68508a5949212e6e1305e0ecaf774ecfc39da41f2a/kube-proxy": failed to get cgroup stats for "/docker/328bcee8bbc949ccccbb0d68508a5949212e6e1305e0ecaf774ecfc39da41f2a/kube-proxy": failed to get container info for "/docker/328bcee8bbc949ccccbb0d68508a5949212e6e1305e0ecaf774ecfc39da41f2a/kube-proxy": unknown container "/docker/328bcee8bbc949ccccbb0d68508a5949212e6e1305e0ecaf774ecfc39da41f2a/kube-proxy"
I am not seeing this kind of error.
OK, maybe this is an environment problem, because I am running 'k3s server' inside a self-built container. Although such errors appear, the container is initialized as a server and deployments work. For now it does not have any negative effects.
I'm seeing this too. Vanilla k3s installation with the --docker option on CentOS 7.6.1810:
k3s[7480]: E0527 09:33:06.693874 7480 watcher.go:208] watch chan error: EOF
every 3 to 4 seconds.
Happens for me too: containerd, standard single-machine installation. It's all over my logs, every few seconds, 24/7.
Version - 0.6.0-rc3
Verified fixed
Comments: the "watch chan error" is no longer showing up in the logs. Ran journalctl -f and compared v0.6.0-rc2 to v0.6.0-rc3 to confirm the error has gone away.
@tfiduccia Any chance to backport this to v0.5.x before v0.6.x is released?