Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT (needs confirmation)
Minikube version (use minikube version): minikube version: v0.21.0
Environment:
"Boot2DockerURL": "file://C:/Users/someena/.minikube/cache/iso/minikube-v0.23.0.iso"What happened:
Got the following events when deploying a helm chart on minikube: the pod keeps hitting FailedSync, with no error message in the events.
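The events below were gathered with kubectl describe (the pod name comes from the events themselves; the default namespace is assumed):

> kubectl describe pod acquisition-solutionentry-3533965862-fl3f7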
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
26m 26m 1 default-scheduler Normal Scheduled Successfully assigned acquisition-solutionentry-3533965862-fl3f7 to minikube
26m 26m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "sindbad-shared-configs"
26m 26m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "acquisition-solutionentry-properties"
26m 26m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "sindbad-local-properties"
26m 26m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "default-token-j1nl2"
26m 26m 1 kubelet, minikube Normal SuccessfulMountVolume MountVolume.SetUp succeeded for volume "eyaml-config"
26m 2s 1049 kubelet, minikube Warning FailedSync Error syncing pod
26m 1s 1048 kubelet, minikube Normal SandboxChanged Pod sandbox changed, it will be killed and re-created.
Aug 03 13:53:20 minikube localkube[3745]: E0803 13:53:20.865807 3745 kuberuntime_sandbox.go:54] CreatePodSandbox for pod "acquisition-solutionentry-3533965862-fl3f7_default(122fbab0-7851-11e7-8a12-00155d20b60b)" failed: rpc error: code = 2 desc = failed to start sandbox container for pod "acquisition-solutionentry-3533965862-fl3f7": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:286: decoding sync type from init pipe caused \\\\\\\"read parent: connection reset by peer\\\\\\\"\\\"\\n\""}
Aug 03 13:53:20 minikube localkube[3745]: E0803 13:53:20.865830 3745 kuberuntime_manager.go:618] createPodSandbox for pod "acquisition-solutionentry-3533965862-fl3f7_default(122fbab0-7851-11e7-8a12-00155d20b60b)" failed: rpc error: code = 2 desc = failed to start sandbox container for pod "acquisition-solutionentry-3533965862-fl3f7": Error response from daemon: {"message":"invalid header field value \"oci runtime error: container_linux.go:247: starting container process caused \\\"process_linux.go:286: decoding sync type from init pipe caused \\\\\\\"read parent: connection reset by peer\\\\\\\"\\\"\\n\""}
Aug 03 13:53:20 minikube localkube[3745]: E0803 13:53:20.865877 3745 pod_workers.go:182] Error syncing pod 122fbab0-7851-11e7-8a12-00155d20b60b ("acquisition-solutionentry-3533965862-fl3f7_default(122fbab0-7851-11e7-8a12-00155d20b60b)"), skipping: failed to "CreatePodSandbox" for "acquisition-solutionentry-3533965862-fl3f7_default(122fbab0-7851-11e7-8a12-00155d20b60b)" with CreatePodSandboxError: "CreatePodSandbox for pod \"acquisition-solutionentry-3533965862-fl3f7_default(122fbab0-7851-11e7-8a12-00155d20b60b)\" failed: rpc error: code = 2 desc = failed to start sandbox container for pod \"acquisition-solutionentry-3533965862-fl3f7\": Error response from daemon: {\"message\":\"invalid header field value \\\"oci runtime error: container_linux.go:247: starting container process caused \\\\\\\"process_linux.go:286: decoding sync type from init pipe caused \\\\\\\\\\\\\\\"read parent: connection reset by peer\\\\\\\\\\\\\\\"\\\\\\\"\\\\n\\\"\"}"
Aug 03 13:53:21 minikube localkube[3745]: W0803 13:53:21.096300 3745 pod_container_deletor.go:77] Container "5d5d4ca2888d0508653c08c547c9475f963a314df4916254a0be4478552c3f64" not found in pod's containers
Aug 03 13:53:21 minikube localkube[3745]: I0803 13:53:21.398915 3745 kuberuntime_manager.go:457] Container {Name:acquisition-solutionentry Image:msedocker.azurecr.io/acquisition-solutionentry:testrelease Command:[/usr/bin/java] Args:[-DapplicationName=acquisition-solutionentry -Ds4=acquisition-solutionentry -Dfile.encoding=UTF-8 -Xmx1024m -classpath /etc/sindbad/local:/usr/local/lib/sindbad/acquisition-solutionetry/libs/*: -DLOG_DIR=/var/log/sindbad -Dloader.path=/etc/sindbad/local -Dserver.port=8080 com.microsoft.netbreeze.acquisition.solutionentry.SolutionEntryServiceApp] WorkingDir: Ports:[] EnvFrom:[] Env:[] Resources:{Limits:map[cpu:{i:{value:1 scale:0} d:{Dec:<nil>} s:1 Format:DecimalSI} memory:{i:{value:2048 scale:-3} d:{Dec:<nil>} s:2048m Format:DecimalSI}] Requests:map[memory:{i:{value:1024 scale:-3} d:{Dec:<nil>} s:1024m Format:DecimalSI} cpu:{i:{value:500 scale:-3} d:{Dec:<nil>} s:500m Format:DecimalSI}]} VolumeMounts:[{Name:sindbad-shared-configs ReadOnly:false MountPath:/etc/sindbad/local SubPath:} {Name:default-token-j1nl2 ReadOnly:true MountPath:/var/run/secrets/kubernetes.io/serviceaccount SubPath:}] LivenessProbe:nil ReadinessProbe:&Probe{Handler:Handler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health/ready,Port:8080,Host:,Scheme:HTTP,HTTPHeaders:[],},TCPSocket:nil,},InitialDelaySeconds:10,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,} Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:Always SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.
> docker version
Client:
Version: 17.06.0-ce
API version: 1.23
Go version: go1.8.3
Git commit: 02c1d87
Built: Fri Jun 23 21:30:30 2017
OS/Arch: windows/amd64
Server:
Version: 1.12.6
API version: 1.24 (minimum version )
Go version: go1.6.4
Git commit: 78d1802
Built: Wed Jan 11 00:23:16 2017
OS/Arch: linux/amd64
Experimental: false
It turned out to be an issue with the limits key in the helm chart template.
Before
resources:
  limits:
    cpu: 1000m
    memory: 2048m
  requests:
    cpu: 300m
    memory: 1024m
After
resources:
  requests:
    cpu: 300m
    memory: 1024m
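For context, Kubernetes resource quantities treat a lowercase m suffix as milli-units, so memory: 2048m requests roughly 2 bytes rather than 2048 MiB, which plausibly explains why the runtime could not start the sandbox. If memory limits are still wanted, a corrected block (assuming 2 GiB / 1 GiB were the intended sizes) would look like:

resources:
  limits:
    cpu: 1000m
    memory: 2048Mi   # Mi = mebibytes; a bare "m" means milli-units
  requests:
    cpu: 300m
    memory: 1024Mi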
Can this be closed then, @sahilsk?