che 2019-12-03 16:19:52,010[/172.27.0.1/...] [ERROR] [.w.i.k.n.KubernetesDeployments 500] -
Failed to parse last timestamp of the event.
Cause: Pod event timestamp can not be blank.
Event: PodEvent{
podName='mkdir-workspacemgb23y8056s2uxy7',
containerName='null',
reason='Scheduled',
message='Successfully assigned eclipse-example-workspace/mkdir-workspacemgb23y8056s2uxy7 to ripper1',
creationTimestamp='2019-12-03T15:48:20Z',
lastTimestamp='null'
}
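For reference, the parse fails because the Scheduled event carries only a creationTimestamp while lastTimestamp is blank. A minimal, hypothetical Java sketch of a null-safe fallback (the class and helper names are illustrative, not the actual KubernetesDeployments code):

```java
import java.time.OffsetDateTime;
import java.util.Date;

public class EventTimestampFallback {

  // Returns lastTimestamp when present, otherwise falls back to the event's
  // creationTimestamp; returns null only if both are missing.
  public static Date parseEventTimestamp(String lastTimestamp, String creationTimestamp) {
    String candidate =
        (lastTimestamp == null || lastTimestamp.isEmpty() || "null".equals(lastTimestamp))
            ? creationTimestamp
            : lastTimestamp;
    if (candidate == null || candidate.isEmpty()) {
      return null;
    }
    // Kubernetes event timestamps are RFC 3339, e.g. "2019-12-03T15:48:20Z"
    return Date.from(OffsetDateTime.parse(candidate).toInstant());
  }

  public static void main(String[] args) {
    // The Scheduled event from the log above: lastTimestamp is missing.
    System.out.println(parseEventTimestamp(null, "2019-12-03T15:48:20Z"));
  }
}
```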
@mingfang is it just about an error in the server logs, or is this error preventing you from starting the workspace? It would be nice if you could provide more information that would help us reproduce the problem.
cc @skabashnyuk
My workspaces do start and appear to be in working order; it looks like just a logging issue.
>kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.3", GitCommit:"b3cbbae08ec52a7fc73d334838e18d17e8512749", GitTreeState:"clean", BuildDate:"2019-11-13T11:13:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
@mingfang thanks for the information. Would you be able to provide a PR to fix that?
Did anyone dig in and see whether that is an intended change in Kubernetes or a bug that will be fixed in the next release?
We might need to upgrade the k8s client to fix this problem; see the compatibility matrix at https://github.com/fabric8io/kubernetes-client#compatibility-matrix. Related to https://github.com/eclipse/che/issues/11981.
Seeing hundreds of these errors littered around our logs when workspaces start up. We have just upgraded from k8s 1.16 to 1.17, and both versions were producing the same errors. It looks like the scheduling event doesn't get a timestamp.
Could the absence of the timestamp cause unnecessary 60-second delays to workspace startup?
I see two ways to solve this issue.
2020-03-03 09:44:53,285[/172.30.0.1/...] [INFO ] [.w.i.k.n.KubernetesDeployments 579] - PodEvent = PodEvent{podName='mkdir-workspace043yt0m0l6ygodu0', containerName='null', reason='Scheduled', message='Successfully assigned skabashn-che/mkdir-workspace043yt0m0l6ygodu0 to crc-w6th5-master-0', creationTimestamp='2020-03-02T13:57:40Z', lastTimestamp='null'}
2020-03-03 09:44:53,292[/172.30.0.1/...] [INFO ] [.w.i.k.n.KubernetesDeployments 579] - PodEvent = PodEvent{podName='mkdir-workspace043yt0m0l6ygodu0', containerName='mkdir-workspace043yt0m0l6ygodu0', reason='Pulled', message='Container image "registry.access.redhat.com/ubi8-minimal:8.0-213" already present on machine', creationTimestamp='2020-03-02T13:57:49Z', lastTimestamp='2020-03-02T13:57:49Z'}
2020-03-03 09:44:53,294[/172.30.0.1/...] [INFO ] [.w.i.k.n.KubernetesDeployments 579] - PodEvent = PodEvent{podName='mkdir-workspace043yt0m0l6ygodu0', containerName='mkdir-workspace043yt0m0l6ygodu0', reason='Created', message='Created container mkdir-workspace043yt0m0l6ygodu0', creationTimestamp='2020-03-02T13:57:49Z', lastTimestamp='2020-03-02T13:57:49Z'}
2020-03-03 09:44:53,296[/172.30.0.1/...] [INFO ] [.w.i.k.n.KubernetesDeployments 579] - PodEvent = PodEvent{podName='mkdir-workspace043yt0m0l6ygodu0', containerName='mkdir-workspace043yt0m0l6ygodu0', reason='Started', message='Started container mkdir-workspace043yt0m0l6ygodu0', creationTimestamp='2020-03-02T13:57:49Z', lastTimestamp='2020-03-02T13:57:49Z'}
2020-03-03 09:44:53,297[/172.30.0.1/...] [INFO ] [.w.i.k.n.KubernetesDeployments 579] - PodEvent = PodEvent{podName='mkdir-workspace043yt0m0l6ygodu0', containerName='null', reason='Scheduled', message='Successfully assigned skabashn-che/mkdir-workspace043yt0m0l6ygodu0 to crc-w6th5-master-0', creationTimestamp='2020-03-02T13:58:00Z', lastTimestamp='null'}
2020-03-03 09:44:53,300[/172.30.0.1/...] [INFO ] [.w.i.k.n.KubernetesDeployments 579] - PodEvent = PodEvent{podName='mkdir-workspace043yt0m0l6ygodu0', containerName='mkdir-workspace043yt0m0l6ygodu0', reason='Pulled', message='Container image "registry.access.redhat.com/ubi8-minimal:8.0-213" already present on machine', creationTimestamp='2020-03-02T13:58:08Z', lastTimestamp='2020-03-02T13:58:08Z'}
2020-03-03 09:44:53,307[/172.30.0.1/...] [INFO ] [.w.i.k.n.KubernetesDeployments 579] - PodEvent = PodEvent{podName='mkdir-workspace043yt0m0l6ygodu0', containerName='mkdir-workspace043yt0m0l6ygodu0', reason='Created', message='Created container mkdir-workspace043yt0m0l6ygodu0', creationTimestamp='2020-03-02T13:58:08Z', lastTimestamp='2020-03-02T13:58:08Z'}
2020-03-03 09:44:53,308[/172.30.0.1/...] [INFO ] [.w.i.k.n.KubernetesDeployments 579] - PodEvent = PodEvent{podName='mkdir-workspace043yt0m0l6ygodu0', containerName='mkdir-workspace043yt0m0l6ygodu0', reason='Started', message='Started container mkdir-workspace043yt0m0l6ygodu0', creationTimestamp='2020-03-02T13:58:08Z', lastTimestamp='2020-03-02T13:58:08Z'}
2020-03-03 09:44:53,310[/172.30.0.1/...] [INFO ] [.w.i.k.n.KubernetesDeployments 579] - PodEvent = PodEvent{podName='mkdir-workspace043yt0m0l6ygodu0', containerName='null', reason='Scheduled', message='Successfully assigned skabashn-che/mkdir-workspace043yt0m0l6ygodu0 to crc-w6th5-master-0', creationTimestamp='2020-03-02T15:30:37Z', lastTimestamp='null'}
2020-03-03 09:44:53,313[/172.30.0.1/...] [INFO ] [.w.i.k.n.KubernetesDeployments 579] - PodEvent = PodEvent{podName='mkdir-workspace043yt0m0l6ygodu0', containerName='mkdir-workspace043yt0m0l6ygodu0', reason='Pulled', message='Container image "registry.access.redhat.com/ubi8-minimal:8.0-213" already present on machine', creationTimestamp='2020-03-02T15:30:45Z', lastTimestamp='2020-03-02T15:30:45Z'}
2020-03-03 09:44:53,315[/172.30.0.1/...] [INFO ] [.w.i.k.n.KubernetesDeployments 579] - PodEvent = PodEvent{podName='mkdir-workspace043yt0m0l6ygodu0', containerName='mkdir-workspace043yt0m0l6ygodu0', reason='Created', message='Created container mkdir-workspace043yt0m0l6ygodu0', creationTimestamp='2020-03-02T15:30:45Z', lastTimestamp='2020-03-02T15:30:45Z'}
2020-03-03 09:44:53,319[/172.30.0.1/...] [INFO ] [.w.i.k.n.KubernetesDeployments 579] - PodEvent = PodEvent{podName='mkdir-workspace043yt0m0l6ygodu0', containerName='mkdir-workspace043yt0m0l6ygodu0', reason='Started', message='Started container mkdir-workspace043yt0m0l6ygodu0', creationTimestamp='2020-03-02T15:30:45Z', lastTimestamp='2020-03-02T15:30:45Z'}
2020-03-03 09:44:53,327[/172.30.0.1/...] [INFO ] [.w.i.k.n.KubernetesDeployments 579] - PodEvent = PodEvent{podName='mkdir-workspace043yt0m0l6ygodu0', containerName='null', reason='Scheduled', message='Successfully assigned skabashn-che/mkdir-workspace043yt0m0l6ygodu0 to crc-w6th5-master-0', creationTimestamp='2020-03-02T15:30:57Z', lastTimestamp='null'}
2020-03-03 09:44:53,330[/172.30.0.1/...] [INFO ] [.w.i.k.n.KubernetesDeployments 579] - PodEvent = PodEvent{podName='mkdir-workspace043yt0m0l6ygodu0', containerName='mkdir-workspace043yt0m0l6ygodu0', reason='Pulled', message='Container image "registry.access.redhat.com/ubi8-minimal:8.0-213" already present on machine', creationTimestamp='2020-03-02T15:31:05Z', lastTimestamp='2020-03-02T15:31:05Z'}
2020-03-03 09:44:53,342[/172.30.0.1/...] [INFO ] [.w.i.k.n.KubernetesDeployments 579] - PodEvent = PodEvent{podName='mkdir-workspace043yt0m0l6ygodu0', containerName='mkdir-workspace043yt0m0l6ygodu0', reason='
@l0rd @benoitf What do you think?
I'm a :+1: on upgrading the Kubernetes client. I ran into a few issues when trying to use a newer version of Quarkus for https://github.com/che-incubator/che-api-sidecar due to (I think; I didn't look into it too deeply) a transitive dependency on fabric8-kubernetes-client.
As for this comment https://github.com/eclipse/che/issues/15395#issuecomment-593882859: after reviewing https://github.com/kubernetes/kubernetes/pull/86557 and testing the k8s client at 4.8.0, I've decided to go with variant 1, because upgrading the library didn't help and only adds additional risk of delaying the fix for this issue. The upgrade to the new k8s client will be done as a separate task.
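For illustration only, a minimal JUnit 5 check of the fallback behaviour sketched above (the EventTimestampFallback helper is hypothetical and not the actual fix that landed):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertNull;

import java.time.Instant;
import java.util.Date;
import org.junit.jupiter.api.Test;

class EventTimestampFallbackTest {

  @Test
  void fallsBackToCreationTimestampWhenLastTimestampIsBlank() {
    // Mirrors the Scheduled event from the logs: lastTimestamp='null'
    Date parsed = EventTimestampFallback.parseEventTimestamp(null, "2019-12-03T15:48:20Z");
    assertEquals(Date.from(Instant.parse("2019-12-03T15:48:20Z")), parsed);
  }

  @Test
  void returnsNullWhenBothTimestampsAreMissing() {
    assertNull(EventTimestampFallback.parseEventTimestamp(null, null));
  }
}
```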