Kubespray: Error log "Could not construct reference to..." in /var/log/messages with kubernetes v1.6

Created on 9 May 2017 · 6 comments · Source: kubernetes-sigs/kubespray

I see these error messages in the system log file /var/log/messages even though the k8s cluster installed successfully and everything seems to be OK. It started after upgrading k8s to v1.6.1.
Is this a bug, or did I configure something wrong?

May 10 10:54:23 test1 kubelet: E0510 02:54:23.739857 1083 event.go:259] Could not construct reference to: '&v1.Node{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"u20170406-test-xufr3-1", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{sec:0, nsec:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string{"beta.kubernetes.io/arch":"amd64", "kubernetes.io/hostname":"u20170406-test-xufr3-1", "node-role.kubernetes.io/master":"true", "beta.kubernetes.io/os":"linux"}, Annotations:map[string]string{"volumes.kubernetes.io/controller-managed-attach-detach":"true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.NodeSpec{PodCIDR:"", ExternalID:"u20170406-test-xufr3-1", ProviderID:"", Unschedulable:true, Taints:[]v1.Taint(nil)}, Status:v1.NodeStatus{Capacity:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:2000, scale:-3}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:3975622656, scale:0}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"", Format:"BinarySI"}, "pods":resource.Quantity{i:resource.int64Amount{value:110, scale:0}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"", Format:"DecimalSI"}}, Allocatable:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:1900, scale:-3}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:3358765056, scale:0}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"", Format:"BinarySI"}, "pods":resource.Quantity{i:resource.int64Amount{value:110, scale:0}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"", Format:"DecimalSI"}}, Phase:"", Conditions:[]v1.NodeCondition{v1.NodeCondition{Type:"OutOfDisk", Status:"False", 
LastHeartbeatTime:v1.Time{Time:time.Time{sec:63629914934, nsec:456728811, loc:(
May 10 10:54:23 test1 kubelet: time.Location)(0x6f08340)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63629914934, nsec:456728811, loc:(time.Location)(0x6f08340)}}, Reason:"KubeletHasSufficientDisk", Message:"kubelet has sufficient disk space available"}, v1.NodeCondition{Type:"MemoryPressure", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63629914934, nsec:456875990, loc:(time.Location)(0x6f08340)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63629914934, nsec:456875990, loc:(time.Location)(0x6f08340)}}, Reason:"KubeletHasSufficientMemory", Message:"kubelet has sufficient memory available"}, v1.NodeCondition{Type:"DiskPressure", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63629914934, nsec:456908166, loc:(time.Location)(0x6f08340)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63629914934, nsec:456908166, loc:(time.Location)(0x6f08340)}}, Reason:"KubeletHasNoDiskPressure", Message:"kubelet has no disk pressure"}, v1.NodeCondition{Type:"Ready", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63629914934, nsec:456937668, loc:(time.Location)(0x6f08340)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63629914934, nsec:456937668, loc:(*time.Location)(0x6f08340)}}, Reason:"KubeletNotReady", Message:"container runtime is down,PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s,network state unknown"}}, Addresses:[]v1.NodeAddress{v1.NodeAddress{Type:"LegacyHostIP", Address:"10.120.120.134"}, v1.NodeAddress{Type:"InternalIP", Address:"10.120.120.134"}, v1.NodeAddress{Type:"Hostname", Address:"u20170406-test-xufr3-1"}}, DaemonEndpoints:v1.NodeDaemonEndpoints{KubeletEndpoint:v1.DaemonEndpoint{Port:10250}}, NodeInfo:v1.NodeSystemInfo{MachineID:"8e025a21a4254e11b028584d9d8b12c4", SystemUUID:"2087A011-981B-416E-8A27-4159FD0B034E", BootID:"5c887ac0-1515-41df-9ac1-57f3d2a1cea9", KernelVersion:"3.10.0-327.13.1.el7.x86_64", OSImage:"CentOS Linux 7 (Core)", 
ContainerRuntimeVersion:"docker://1.13.1", KubeletVersion:"v1.6.1+coreos.0", KubeProxyVersion:"v1.6.1+c
May 10 10:54:23 test1 kubelet: oreos.0", OperatingSystem:"linux", Architecture:"amd64"}, Images:[]v1.ContainerImage(nil), VolumesInUse:[]v1.UniqueVolumeName(nil), VolumesAttached:[]v1.AttachedVolume(nil)}}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Warning' 'FailedNodeAllocatableEnforcement' 'Failed to update Node Allocatable Limits "": failed to set supported cgroup subsystems for cgroup : Failed to set config for supported subsystems : failed to write 3975622656 to memory.limit_in_bytes: write /var/lib/docker/devicemapper/mnt/eaf16414ab688a1f3b9e29e325bf56860322d730058913d6aa8eeb825d17cf33/rootfs/sys/fs/cgroup/memory/memory.limit_in_bytes: invalid argument'
....
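These wrapped syslog lines are noisy, but the useful part is the suppressed event reason near the end. A minimal sketch for pulling that reason out with sed (the sample fragment of the line above is inlined here so the command is self-contained; against a live host you would pipe `grep 'Will not report event' /var/log/messages` into the same expression):

```shell
# Extract the suppressed kubelet event reason from a log line like the
# ones above. The quoted reason follows the 'Warning' severity marker.
line="Will not report event: 'Warning' 'FailedNodeAllocatableEnforcement' 'Failed to update Node Allocatable Limits'"
echo "$line" | sed -n "s/.*'Warning' '\([^']*\)'.*/\1/p"
```

The extracted reason, FailedNodeAllocatableEnforcement, points at the underlying problem: the kubelet failed to write the node's memory limit into a cgroup, and the "selfLink was empty" message is only a side effect of it then failing to record that event against the Node object.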

All 6 comments

I am very sorry for my previous two incorrect issues: the first one, "Handler for GET ...", was a bug in versions before v1.6, and the second one, "Error updating node status ...", was caused by my unstable network. I apologize for those.
But this one, "Could not construct reference to...", is a real issue that I have confirmed across several installs. I hope someone can help. Thanks very much!

@bradbeam Yeah, you are right, it seems to be the same issue. Thanks very much!

It seems we are also hitting the same issue with v1.6.2. Any idea in which version this is fixed?

May 26 23:54:03 ar-master-k8-02 kubelet: E0526 23:54:03.462871 14883 event.go:259] Could not construct reference to: '&v1.Node{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ar-master-k8-02", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{sec:0, nsec:0, loc:(time.Location)(nil)}}, DeletionTimestamp:(v1.Time)(nil), DeletionGracePeriodSeconds:(int64)(nil), Labels:map[string]string{"beta.kubernetes.io/os":"linux", "beta.kubernetes.io/arch":"amd64", "kubernetes.io/hostname":"ar-master-k8-02", "node-role.kubernetes.io/master":"true"}, Annotations:map[string]string{"volumes.kubernetes.io/controller-managed-attach-detach":"true"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:""}, Spec:v1.NodeSpec{PodCIDR:"", ExternalID:"ar-master-k8-02", ProviderID:"", Unschedulable:true, Taints:[]v1.Taint(nil)}, Status:v1.NodeStatus{Capacity:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:1000, scale:-3}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:3975622656, scale:0}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"", Format:"BinarySI"}, "pods":resource.Quantity{i:resource.int64Amount{value:110, scale:0}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"", Format:"DecimalSI"}}, Allocatable:v1.ResourceList{"memory":resource.Quantity{i:resource.int64Amount{value:3358765056, scale:0}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"", Format:"BinarySI"}, "pods":resource.Quantity{i:resource.int64Amount{value:110, scale:0}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"", Format:"DecimalSI"}, "cpu":resource.Quantity{i:resource.int64Amount{value:900, scale:-3}, d:resource.infDecAmount{Dec:(inf.Dec)(nil)}, s:"", Format:"DecimalSI"}}, Phase:"", Conditions:[]v1.NodeCondition{v1.NodeCondition{Type:"OutOfDisk", Status:"False", 
LastHeartbeatTime:v1.Time{Time:time.Time{sec:63631404962, nsec:506613277, loc:(time.Location)(0x6f094
May 26 23:54:03 ar-master-k8-02 kubelet: 00)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63631404962, nsec:506613277, loc:(time.Location)(0x6f09400)}}, Reason:"KubeletHasSufficientDisk", Message:"kubelet has sufficient disk space available"}, v1.NodeCondition{Type:"MemoryPressure", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63631404962, nsec:506710600, loc:(time.Location)(0x6f09400)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63631404962, nsec:506710600, loc:(time.Location)(0x6f09400)}}, Reason:"KubeletHasSufficientMemory", Message:"kubelet has sufficient memory available"}, v1.NodeCondition{Type:"DiskPressure", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63631404962, nsec:506731608, loc:(time.Location)(0x6f09400)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63631404962, nsec:506731608, loc:(time.Location)(0x6f09400)}}, Reason:"KubeletHasNoDiskPressure", Message:"kubelet has no disk pressure"}, v1.NodeCondition{Type:"Ready", Status:"False", LastHeartbeatTime:v1.Time{Time:time.Time{sec:63631404962, nsec:506740987, loc:(time.Location)(0x6f09400)}}, LastTransitionTime:v1.Time{Time:time.Time{sec:63631404962, nsec:506740987, loc:(*time.Location)(0x6f09400)}}, Reason:"KubeletNotReady", Message:"container runtime is down,PLEG is not healthy: pleg was last seen active 2562047h47m16.854775807s ago; threshold is 3m0s,network state unknown"}}, Addresses:[]v1.NodeAddress{v1.NodeAddress{Type:"LegacyHostIP", Address:"10.169.237.29"}, v1.NodeAddress{Type:"InternalIP", Address:"10.169.237.29"}, v1.NodeAddress{Type:"Hostname", Address:"ar-master-k8-02"}}, DaemonEndpoints:v1.NodeDaemonEndpoints{KubeletEndpoint:v1.DaemonEndpoint{Port:10250}}, NodeInfo:v1.NodeSystemInfo{MachineID:"8e025a21a4254e11b028584d9d8b12c4", SystemUUID:"420A7EB4-83C7-CAC2-B439-C16B08AA8892", BootID:"a566e7fa-c5c9-4b32-a5d3-6b571e03f5f4", KernelVersion:"3.10.0-514.10.2.el7.x86_64", OSImage:"CentOS Linux 7 (Core)", ContainerRuntimeVersion:"docker://1.13.1", 
KubeletVersion:"v1.6.2+coreos.0", KubeProxyVersion:"v1.6.2+coreos.0", OperatingSystem:"linu
May 26 23:54:03 ar-master-k8-02 kubelet: x", Architecture:"amd64"}, Images:[]v1.ContainerImage(nil), VolumesInUse:[]v1.UniqueVolumeName(nil), VolumesAttached:[]v1.AttachedVolume(nil)}}' due to: 'selfLink was empty, can't make reference'. Will not report event: 'Warning' 'FailedNodeAllocatableEnforcement' 'Failed to update Node Allocatable Limits "": failed to set supported cgroup subsystems for cgroup : Failed to set config for supported subsystems : failed to write 3975622656 to memory.limit_in_bytes: write /var/lib/docker/devicemapper/mnt/3a87989b91ec0b34b965eb851af0e47036bb2037b71fc6a61132d17948283141/rootfs/sys/fs/cgroup/memory/memory.limit_in_bytes: invalid argument'

Looks like it'll be included in 1.7.
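Until that fix lands, one workaround reported for v1.6.x is to relax Node Allocatable enforcement so the kubelet stops attempting the failing memory.limit_in_bytes write. A sketch as a systemd drop-in; the drop-in path and the KUBELET_EXTRA_ARGS variable name are assumptions based on a stock kubelet unit file (kubespray templates its own unit, so check yours before applying):

```
# /etc/systemd/system/kubelet.service.d/90-allocatable.conf
# Hypothetical drop-in: an empty value disables Node Allocatable
# cgroup enforcement in kubelet v1.6+.
[Service]
Environment="KUBELET_EXTRA_ARGS=--enforce-node-allocatable="
```

After `systemctl daemon-reload && systemctl restart kubelet`, the FailedNodeAllocatableEnforcement warnings should stop. Note the trade-off: this disables the allocatable-limit enforcement entirely rather than fixing the cgroup write, so treat it as a stopgap until upgrading.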

Thanks @bradbeam
