Since version 2.8.0, I'm getting the following error while running helm upgrade --install release_name chart:
E0209 11:21:52.322201 93759 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:53674->127.0.0.1:53680: write tcp4 127.0.0.1:53674->127.0.0.1:53680: write: broken pipe
Does anyone have a hint as to what could be causing this?
Edit: 2.8.1 does not fix this. I'm on macOS.
Same problem with the Linux 64-bit client.
I'm facing the same issue. Any suggested workarounds? This is very annoying.
Seeing the same thing; no apparent issues found in tiller log relating to the charts I'm deploying.
Client: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
Server is Linux (Ubuntu 16.04), client is macOS (10.13.3).
This usually happens after successfully deploying the chart in question, but it's causing us all sorts of headaches since helm errors out (and we depend on a successful helm deploy before we do other things).
We have six instances of the same chart being deployed sequentially, and the first might fail, or the second, or the fifth. I can't find a pattern as to why.
@albrechtsimon @eldada @matus-vacula are you guys running on metal or in public clouds?
@oivindoh I'm running on k8s in AWS. It used to be ok up until 2.7.2.
I use GKE. Helm 2.8.1.
At reddit, we're seeing this happen every now and then and it makes for a very bad experience. We use helm along with helmfile, and it seems that when we "sync" (pretty much run a bunch of helm upgrades) we sometimes see an issue with broken pipes.
So these errors might be gRPC-related and might not mean anything other than an overly verbose logger: https://github.com/grpc/grpc-go/issues/1362
Helm should maybe update to the latest release of gRPC to fix this.
Same issue running the helm diff plugin. When I run multiple helm-diff commands in a sequence, I always get the error:
38012 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:53526->127.0.0.1:53530: write tcp4 127.0.0.1:53526->127.0.0.1:53530: write: broken pipe
Running them one by one always works.
Still seeing this on 2.9.0 against any and all clusters I have deployed on Azure via acs-engine.
E0427 10:11:49.952856 28158 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:36679->127.0.0.1:51582: write tcp4 127.0.0.1:36679->127.0.0.1:51582: write: broken pipe
/helm/2.9.0/x64/helm failed with return code: 0
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-27T00:13:02Z", GoVersion:"go1.9.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.7", GitCommit:"dd5e1a2978fd0b97d9b78e1564398aeea7e7fe92", GitTreeState:"clean", BuildDate:"2018-04-18T23:58:35Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
$ helm version
Client: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.0", GitCommit:"f6025bb9ee7daf9fee0026541c90a6f557a3e0bc", GitTreeState:"clean"}
Edit: Same on 2.9.1
Having the exact same issue with helm-diff & helm 2.9.0
E0510 16:06:06.478944 19746 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:38750->127.0.0.1:41234: write tcp4 127.0.0.1:38750->127.0.0.1:41234: write: broken pipe
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:55:54Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
$ helm version
Client: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
E0515 11:25:06.883101 170075 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 * write tcp4 127.0.0.1:38769->127.0.0.1:36224: write: broken pipe
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.7", GitCommit:"dd5e1a2978fd0b97d9b78e1564398aeea7e7fe92", GitTreeState:"clean", BuildDate:"2018-04-19T00:05:56Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.7", GitCommit:"dd5e1a2978fd0b97d9b78e1564398aeea7e7fe92", GitTreeState:"clean", BuildDate:"2018-04-18T23:58:35Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
$ helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
+1
I was just dealing with a similar issue, and running helm init --upgrade --history-max=0 seemed to fix it for me.
error:
E0531 14:54:27.094118 97288 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:53496->127.0.0.1:53498: write tcp4 127.0.0.1:53496->127.0.0.1:53498: write: broken pipe
Error: UPGRADE FAILED: "dev" has no deployed releases
kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:13:31Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
helm version
Client: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.2", GitCommit:"a80231648a1473929271764b920a8e346f6de844", GitTreeState:"clean"}
@ssalaues I think that only fixed your "Error: UPGRADE FAILED: "dev" has no deployed releases" issue (I'm guessing you only had a Failed release of "dev" before), not the portforward message, which appears to be very intermittent.
@oivindoh The strange thing is that I was getting the same portforward error regardless of whether I was doing a fresh helm install of a new deployment with a new release name or trying to helm upgrade (which is what led me to this thread, since it was stalling my work). In the particular case that I pasted in my previous message it was while testing a helm upgrade attempt (you're right, I had an issue with the previous release), but I continued to get the portforward error when trying to do many other helm-related commands on completely new releases.
Strange communication issue and hope it gets sorted out either way
I've seen this error just running helm list.
Seeing this after upgrading from 2.7.2 to 2.9.1:
14:50:28 helm upgrade --install ...
14:50:28 E0606 21:50:28.120334 271 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:44899->127.0.0.1:34138: write tcp4 127.0.0.1:44899->127.0.0.1:34138: write: broken pipe
in a 1.8.4 cluster. It didn't seem to affect the deploy, at least this time, but I'd rather not see errors in the deploy log if they're not relevant to the deploy.
Still seeing this regularly on k8s 1.10.4 + Helm 2.9.1
Found in K8S v1.10.0 and helm v2.7.3
E0619 03:23:34.043302 22164 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:45405->127.0.0.1:47392: write tcp4 127.0.0.1:45405->127.0.0.1:47392: write: broken pipe
LAST DEPLOYED: Tue Jun 19 03:23:30 2018
Yes - it's still here:
helm install helm/kube-prometheus --name kube-prometheus --namespace monitoring --tls
NAME: kube-prometheus
E0709 18:06:25.276408 4526 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:56929->127.0.0.1:56931: write tcp4 127.0.0.1:56929->127.0.0.1:56931: write: broken pipe
LAST DEPLOYED: Mon Jul 9 18:06:21 2018
NAMESPACE: monitoring
STATUS: DEPLOYED
helm version --tls
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-28T20:03:09Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-28T20:13:43Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
helm list --tls
NAME REVISION UPDATED STATUS CHART NAMESPACE
kube-prometheus 1 Mon Jul 9 18:06:21 2018 DEPLOYED kube-prometheus-0.0.78 monitoring
prometheus-operator 1 Mon Jul 9 17:54:02 2018 DEPLOYED prometheus-operator-0.0.25 monitoring
write: broken pipe
looks like a networking error, where the tunneled connection between helm, kube-proxy and tiller gets interrupted. It's a little difficult to determine where (or what) is interrupting the connection from the error, though, which is coming from kubectl's port-forward APIs.
For anyone hitting this issue, can you try the following steps to manually set up a tunnel and see if you still see the error? This is how Helm initiates the connection to tiller.
$ TILLER_POD=$(kubectl get pods -n kube-system | grep tiller | awk '{ print $1 }')
$ kubectl -n kube-system port-forward $TILLER_POD 44134:44134
$ export HELM_HOST=:44134
$ helm list # or whatever command you were using when hitting this bug
Thanks!
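(One small note: if your tiller has TLS enabled, the last step will also need the usual TLS flags. The paths below are just the common client-side defaults under ~/.helm; point them at wherever your certs actually live.)
$ helm list --tls --tls-ca-cert ~/.helm/ca.pem --tls-cert ~/.helm/cert.pem --tls-key ~/.helm/key.pem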
@bacongobbler Not working for me
Would you mind explaining a bit more in detail? What didn't work for you and why? Do you have logs? That would be most helpful.
We're experiencing this on about half our deploys, and it unfortunately makes the CD system think the deployment failed.
2018-09-20T10:54:29.4029987Z E0920 10:54:26.165265 25441 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:34899->127.0.0.1:42674: write tcp4 127.0.0.1:34899->127.0.0.1:42674: write: broken pipe
2018-09-20T10:54:29.4182951Z ##[error]E0920 10:54:26.165265 25441 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:34899->127.0.0.1:42674: write tcp4 127.0.0.1:34899->127.0.0.1:42674: write: broken pipe
Tested using Helm 2.10 and 2.8.2 on both Linux and Windows.
Using TLS and Azure's Kubernetes 1.11.2
It occurs very regularly and is considerably inconveniencing us.
Same issue as @Vhab;
using helm/tiller 2.11.0 with TLS and AKS 1.11.2.
The helm client is being run via a vsts-agent docker image running in AKS; it's part of several Azure DevOps CD pipelines. I've also received the error running the helm client locally on Debian stretch.
I've only experienced the errors with helm upgrade and/or install commands, and typically after the post-deploy report.
E1004 08:10:19.411418 14567 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:35543->127.0.0.1:41728: write tcp4 127.0.0.1:35543->127.0.0.1:41728: write: broken pipe
It would be good if there were a fix so my CD pipeline stops alerting a failure when the actual helm deployment works fine 😄
edit: I tried @bacongobbler's suggestion to manually set up the connection, and the following error intermittently occurred using kubectl port-forward for multiple commands. kubectl port-forward for uses other than helm works okay on my cluster.
E1004 10:30:03.107023 35 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:44134->127.0.0.1:38992: write tcp4 127.0.0.1:44134->127.0.0.1:38992: write: broken pipe
Same problem when running a job from VSTS that deploys to AKS on Azure:
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Kubernetes: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.2", GitCommit:"bb9ffb1654d4a729bb4cec18ff088eacc153c239", GitTreeState:"clean", BuildDate:"2018-08-07T23:08:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
E1002 11:57:35.278788 207 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:38037->127.0.0.1:40594: write tcp4 127.0.0.1:38037->127.0.0.1:40594: write: broken pipe
A bit scary that this issue has been hanging around since February...
Same here, we get this error about 50% of the time when deploying to AKS from VSTS
I believe the AKS folks are aware of this bug and have hit it themselves. It appears to be an upstream issue, not necessarily a helm problem. I'll see if I can ping one of them and see if they can provide any updates on this ticket.
We are also seeing this frequently on non-AKS clusters.
I'm having the same issue with one of our tillers (3 in total) in our kubernetes cluster after upgrading from 2.9.1 to 2.11.0. All tillers are upgraded and only one has this issue.
After deleting a problematic tiller-deploy-xxxxxxx-yyyy pod (which is recreated by the tiller-deploy deployment), all helm commands are quick again. The only thing left is the error showing up 1 out of 5 times. (This is a 5-node cluster, so this might be correlated.)
@bacongobbler any update on this? Again Azure DevOps/VSTS hosted agent and AKS. Thanks.
The issue is now resolved (for now) on my side after upgrading to kubernetes v1.11.2. During this process I've also rebooted 2 of my 3 etcd/control-plane nodes, so I'm not sure whether it was the update or the reboot.
edit: Also, during the upgrade process my docker runtime was restarted on every node in the cluster.
@marrobi no updates to share at this time, sorry.
Still seeing this occur in our deployments, and it's causing some issues with a tool we're writing for Helm.
Version information if it helps at all:
ubuntu@kubenode01:/opt/flagship$ helm version
Client: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.11.0", GitCommit:"2e55dbe1fdb5fdb96b75ff144a339489417b146b", GitTreeState:"clean"}
ubuntu@kubenode01:/opt/flagship$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T18:02:47Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3", GitCommit:"a4529464e4629c21224b3d52edfe0ea91b072862", GitTreeState:"clean", BuildDate:"2018-09-09T17:53:03Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
ubuntu@kubenode01:/opt/flagship$
Our tool output, but this is really just regurgitating information returned to Helm:
ubuntu@kubenode01:/opt/flagship$ barrelman apply --diff barrelman-testing.yaml
INFO[0000] Using config file=/home/ubuntu/.barrelman/config
NewSession Context:
INFO[0000] Connected to Tiller Host=":44160" clientServerCompatible=true tillerVersion=v2.11.0
INFO[0000] Using kube config file=/home/ubuntu/.kube/config
INFO[0000] syncronizing with remote chart repositories
Enumerating objects: 25, done.
Counting objects: 100% (25/25), done.
Compressing objects: 100% (24/24), done.
Total 30 (delta 7), reused 6 (delta 1), pack-reused 5
E1201 18:12:09.572396 16908 portforward.go:316] error copying from local connection to remote stream: read tcp4 127.0.0.1:44160->127.0.0.1:60512: read: connection reset by peer
E1201 18:12:10.122456 16908 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:44160->127.0.0.1:60532: write tcp4 127.0.0.1:44160->127.0.0.1:60532: write: broken pipe
E1201 18:12:10.379437 16908 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:44160->127.0.0.1:60546: write tcp4 127.0.0.1:44160->127.0.0.1:60546: write: broken pipe
E1201 18:12:11.184402 16908 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:44160->127.0.0.1:60556: write tcp4 127.0.0.1:44160->127.0.0.1:60556: write: broken pipe
ERRO[0003] Failed to get results from Tiller cause="rpc error: code = Unknown desc = \"kube-proxy\" has no deployed releases"
ubuntu@kubenode01:/opt/flagship$
Same issue for me, I get this error all the time (GKE k8s 1.11.5). Helm version 2.12.0.
Client: &version.Version{SemVer:"v2.12.0", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.0", GitCommit:"d325d2a9c179b33af1a024cdb5a4472b6288016a", GitTreeState:"clean"}
Same here, getting the error when running helm install for the jenkins chart on EKS:
NAME: jenkins
E1221 11:08:49.491480 92770 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:61585->127.0.0.1:61588: write tcp4 127.0.0.1:61585->127.0.0.1:61588: write: broken pipe
any update on this?
Any update on this?
Hitting the same issue here :( but it only seems to happen when running in Docker. The same helm version used outside Docker has never given me this issue.
Mine seems to have gone away after I fixed a path passed to .Files.Get (which helm did not complain about being missing).
Same issue with Helm 2.9.1, k8s 1.10.3. It happens when I run helm upgrade --install ... on a chart that has a pre-install hook which takes 4 minutes.
Same issue here, using helm install.
EKS Kubernetes version: 1.11.0
Helm Client: 2.12.1
Helm Server: 2.12.0
Just wanted to chime in that we have the same issue in one cluster. It happens more often than not and puts our builds at only a 29% success rate, meaning 71% are failures, almost all of which are false positives. This is over the past 7 days.
Version of Helm:
Client: &version.Version{SemVer:"v2.12.2", GitCommit:"7d2b0c73d734f6586ed222a567c5d103fed435be", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.12.2", GitCommit:"7d2b0c73d734f6586ed222a567c5d103fed435be", GitTreeState:"clean"}
Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0",
GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"archive", BuildDate:"2018-12-08T11:33:56Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.5",
GitCommit:"753b2dbc622f5cc417845f0ff8a77f539a4213ea", GitTreeState:"clean", BuildDate:"2018-11-26T14:31:35Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
TLS is enabled and Tiller is in a custom namespace, but other than that I do not think there is anything 'custom' going on.
I can't remember when it first appeared, but it has been going on for several versions of helm. And it seems to affect the cluster with Helm TLS enabled much, much more (it throws this in 70% of cases) than the one without TLS enabled (it happens only every now and then, but I don't have a clear estimate on that one). Could it be related to TLS being enabled? Both clusters are hosted in Azure using their AKS service.
Reposting @bacongobbler 's previous request. Please try this and let us know if this is working:
$ TILLER_POD=$(kubectl get pods -n kube-system | grep tiller | awk '{ print $1 }')
$ kubectl -n kube-system port-forward $TILLER_POD 44134:44134
$ export HELM_HOST=:44134
$ helm list # or whatever command you were using when hitting this bug
Behind the scenes, Helm is doing a kubectl port-forward. It is possible that there is some nuance between the version of the Kubernetes library that Helm is compiled with and the version supported by Kubernetes. But we don't know what that problem may be. We need more information. If it can be reproduced with kubectl, that gives us some clues as to where to look.
Reposting @bacongobbler 's previous request. Please try this and let us know if this is working:
$ TILLER_POD=$(kubectl get pods -n kube-system | grep tiller | awk '{ print $1 }')
$ kubectl -n kube-system port-forward $TILLER_POD 44134:44134
$ export HELM_HOST=:44134
$ helm list # or whatever command you were using when hitting this bug
Behind the scenes, Helm is doing a kubectl port-forward. It is possible that there is some nuance between the version of the Kubernetes library that Helm is compiled with and the version supported by Kubernetes. But we don't know what that problem may be. We need more information. If it can be reproduced with kubectl, that gives us some clues as to where to look.
Our tiller is in a custom namespace, so I ran this:
iTerm tab 1:
$ TILLER_POD=$(kubectl get pods -n $TILLER_NAMESPACE | grep tiller | awk '{ print $1 }')
$ kubectl -n $TILLER_NAMESPACE port-forward $TILLER_POD 44134:44134
iTerm tab 2:
$ export HELM_HOST=:44134
$ helm status <a deployment that was listed>
$ helm install .....
$ helm ls
$ helm status <freshly installed>
In iTerm tab 1 I get the following:
Forwarding from 127.0.0.1:44134 -> 44134
Forwarding from [::1]:44134 -> 44134
Handling connection for 44134
E0124 11:33:00.989027 4143 portforward.go:363] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:44134->127.0.0.1:50435: write tcp4 127.0.0.1:44134->127.0.0.1:50435: write: broken pipe
Handling connection for 44134
Handling connection for 44134
E0124 11:33:20.690219 4143 portforward.go:363] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:44134->127.0.0.1:50443: write tcp4 127.0.0.1:44134->127.0.0.1:50443: write: broken pipe
Handling connection for 44134
Handling connection for 44134
Handling connection for 44134
E0124 11:35:31.232146 4143 portforward.go:363] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:44134->127.0.0.1:50465: write tcp4 127.0.0.1:44134->127.0.0.1:50465: write: broken pipe
Handling connection for 44134
E0124 11:35:40.701135 4143 portforward.go:363] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:44134->127.0.0.1:50468: write tcp4 127.0.0.1:44134->127.0.0.1:50468: write: broken pipe
I can count 4 errors, one for each command.
I seem to get this same error from helm template when I don't give it a -f values file (and it fails rendering because of that).
@simpers Am I understanding this correctly: you are seeing the same error coming from kubectl that you were seeing running Helm? That's not quite what I expected. Is the hangup happening immediately? Or is there a long delay?
I suppose we cannot rule out that Tiller is hanging up... though I think that would result in some log entries on the Tiller pod. Can you also verify for me that Tiller is not crashing/restarting?
At this point, it seems that there could be two different causes:
- The proxy itself could be terminating on the Kubernetes side. This would most likely leave a trail of log messages... probably on the kube API server? (I'm not totally sure)
- Tiller could be experiencing some abnormal condition, which (I think) would either result in Tiller log messages or in pod restarts.
Thanks for the update.
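For the crash/restart check, a couple of commands along these lines should be enough (this assumes the default tiller labels and the kube-system namespace; adjust if yours differ):
$ kubectl -n kube-system get pods -l app=helm,name=tiller     # check the RESTARTS column
$ kubectl -n kube-system logs deploy/tiller-deploy --tail=100 # recent tiller log output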
@KlavsKlavsen That would be very surprising, since helm template is not supposed to connect to the cluster at all. Can you drop in the exact command and the error message?
Also, for those of you impacted... are you using TLS? I hadn't thought about whether the gRPC layer might be having TLS-related troubles.
Yup, I'm using TLS, and looks like many of the above are.
@simpers Am I understanding this correctly: you are seeing the same error coming from kubectl that you were seeing running Helm? That's not quite what I expected. Is the hangup happening immediately? Or is there a long delay?
I suppose we cannot rule out that Tiller is hanging up... though I think that would result in some log entries on the Tiller pod. Can you also verify for me that Tiller is not crashing/restarting?
At this point, it seems that there could be two different causes:
- The proxy itself could be terminating on the Kubernetes side. This would most likely leave a trail of log messages... probably on the kube API server? (I'm not totally sure)
- Tiller could be experiencing some abnormal condition, which (I think) would either result in Tiller log messages or in pod restarts.
Thanks for the update.
There is no hangup at all. I'll run it once more just to make sure we're talking about the same thing haha.
So, instead of helm setting up its own port-forwarding to the cluster to do its thang, the sh-lines you provided allow me to re-use an existing connection, correct?
➜ ~ TILLER_POD=$(kubectl get pods -n tiller | grep tiller | awk '{ print $1 }')
➜ ~ kubectl -n $TILLER_NAMESPACE port-forward $TILLER_POD 44134:44134
Forwarding from 127.0.0.1:44134 -> 44134
Forwarding from [::1]:44134 -> 44134
So as I have set that up in terminal one, I go to a second tab of my iTerm2 and do the following:
➜ ~ export HELM_HOST=:44134
➜ ~ helm ls
NAME REVISION UPDATED STATUS CHART APP VERSION NAMESPACE
<deployments>
I obviously do not want to post the whole list here, but the point is that no error shows up here. The command does not fail.
But I do get the following back in tab 1, though:
➜ ~ TILLER_POD=$(kubectl get pods -n tiller | grep tiller | awk '{ print $1 }')
➜ ~ kubectl -n $TILLER_NAMESPACE port-forward $TILLER_POD 44134:44134
Forwarding from 127.0.0.1:44134 -> 44134
Forwarding from [::1]:44134 -> 44134
Handling connection for 44134
E0124 18:22:16.865652 18795 portforward.go:363] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:44134->127.0.0.1:58549: write tcp4 127.0.0.1:44134->127.0.0.1:58549: write: broken pipe
This will happen for almost every command, with "almost" meaning over 90%. Our DevOps pipeline's HelmDeploy task is tagged as failed due to this error, when the command actually did not fail.
Also, for those of you impacted... are you using TLS? I hadn't thought about whether the gRPC layer might be having TLS-related troubles.
Yes, TLS is enabled and I have noticed that without it (we had two clusters in parallel, one of which did not have TLS enabled for Helm as it was the first cluster we created and hadn't learned the ropes yet) helm works fine and rarely does weird things.
For the cluster without TLS enabled it rarely happens (though it does sometimes), but with TLS enabled it is inverted: it is rare that it does not happen.
I opened a PR that updates us to a newer version of gRPC, which seems to have about a dozen network and TLS fixes since our last version. I'm hopeful that will fix this issue.
Yes to TLS.
And re the helm template thing: that's what it looked like in the GitLab CI output, and it went away when I added the -f in front of the YAML values file. I will try to revert that tomorrow to verify.
@KlavsKlavsen That would be very surprising, since helm template is not supposed to connect to the cluster at all. Can you drop in the exact command and the error message?
Well... I tried doing what I did last time, and it did not start spitting out the broken pipe error, so it must have been "something else" that caused it. At least I'm not currently annoyed by it :)
@technosophos is there a chance this could be released as a patch for 2.12?
I am afraid that it must be deferred until 2.13 because it required a regeneration of one of the protobuf classes (due to gRPC internal changes). Consequently, the binary protocol may be incompatible with earlier versions of 2.12. This is actually the reason why we only update gRPC when necessary.
That said, I will see if we can speed up the 2.13 release process at all. If this fixes a major issue, it's worth getting out the door.
Thanks, @technosophos, hoping gRPC upgrade fixes this issue.
@technosophos Hi, I'm also facing this problem on Mac and found this issue after some googling. I've tried using helm client 2.13.0-rc.1; the error is gone, but the client just hangs.
GKE Version: 1.11.6-gke.2
Tiller Version (installed by Gitlab): v2.12.2
Output for Helm Client v2.12.3:
$ helm version --tiller-namespace gitlab-managed-apps
Client: &version.Version{SemVer:"v2.12.3", GitCommit:"eecf22f77df5f65c823aacd2dbd30ae6c65f186e", GitTreeState:"clean"}
Error: cannot connect to Tiller
$ helm ls --tiller-namespace gitlab-managed-apps
Error: transport is closing
$ helm install --name my-release --tiller-namespace gitlab-managed-apps stable/mongodb
E0216 20:08:46.435297 69434 portforward.go:331] an error occurred forwarding 58711 -> 44134: error forwarding port 44134 to pod 3fbbaff7bdeb4606f59f2e48c46544066abc658e2d0156519f7648e975ca8118, uid : exit status 1: 2019/02/16 22:08:46 socat[68417] E write(5, 0x558c5b292c50, 8192): Broken pipe
Error: transport is closing
Output for Helm Client v2.13.0-rc.1:
$ ./helm version --tiller-namespace gitlab-managed-apps
Client: &version.Version{SemVer:"v2.13.0-rc.1", GitCommit:"e0e5197f8d9b3fa13626a273ad8b3f49a8aab67e", GitTreeState:"clean"}
(... hangs ...)
^C
$ ./helm ls --tiller-namespace gitlab-managed-apps
(... hangs ...)
^C
./helm install --name my-release --tiller-namespace gitlab-managed-apps stable/mongodb
(... hangs ...)
^C
If I try to do the kubectl port-forward trick, it's all the same... The only difference, it seems, is that 2.13.0-rc.1 keeps trying to reconnect and I see several "Handling connection for 44134" lines, instead of just one on 2.12.3.
No relevant logs on tiller pod after all this and no restarts:
$ kubectl logs tiller-deploy-5d96b489cc-rcjn8 -f -n gitlab-managed-apps
[main] 2019/02/16 20:30:07 Starting Tiller v2.12.2 (tls=true)
[main] 2019/02/16 20:30:07 GRPC listening on :44134
[main] 2019/02/16 20:30:07 Probes listening on :44135
[main] 2019/02/16 20:30:07 Storage driver is ConfigMap
[main] 2019/02/16 20:30:07 Max history per release is 0
I've also tried bumping tiller image version to 2.13.0-rc.1, but the results are the same.
I can open another issue, if you want.
Sorry for spamming multiple messages, but it seems that when installing tiller in the kube-system namespace with helm init, everything works fine.
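For reference, a rough sketch of what I mean (the service account flag is from my setup and may not apply to yours):
$ helm init --tiller-namespace kube-system --service-account tiller
$ helm ls   # no broken-pipe error with tiller running in kube-system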
'Fixed' this by removing --wait from my deploy command.
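In other words, roughly this change (release and chart names are placeholders, not my real ones):
# before – intermittently hit the broken-pipe error:
$ helm upgrade --install my-release ./my-chart --wait --timeout 600
# after – no more error for me, at the cost of not waiting for resources to become ready:
$ helm upgrade --install my-release ./my-chart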
Upgraded to 2.13.0 and this issue is still happening, @technosophos.
Kubectl v1.13.4
Helm 2.13.0
AKS 1.11.3
Using TLS
Using Azure Devops hosted agents
2019-03-01T06:39:43.4897111Z [command]C:\hostedtoolcache\windows\helm\2.13.0\x64\windows-amd64\helm.exe upgrade --tiller-namespace [tillerns] --namespace [snip] --install --values [values] --wait --tls --tls-ca-cert D:\a\_temp\ca.cert.pem --tls-cert D:\a\_temp\helm-vsts.cert.pem --tls-key D:\a\_temp\helm-vsts.key.pem --values [values] --values [values] --values [values] --values [values] [release] [chart]
2019-03-01T06:40:01.6728072Z E0301 06:39:44.751630 3996 portforward.go:363] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:1704->127.0.0.1:1706: write tcp4 127.0.0.1:1704->127.0.0.1:1706: wsasend: An established connection was aborted by the software in your host machine.
2019-03-01T06:40:01.6739267Z Release "[release]" has been upgraded. Happy Helming!
2019-03-01T06:40:01.6762749Z E0301 06:40:00.814924 3996 portforward.go:363] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:1704->127.0.0.1:1707: write tcp4 127.0.0.1:1704->127.0.0.1:1707: wsasend: An established connection was aborted by the software in your host machine.
Client and server version for completeness,
2019-03-01T07:03:22.2064935Z Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
2019-03-01T07:03:22.2065659Z Server: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
2019-03-01T07:09:04.9754415Z [command]C:\hostedtoolcache\windows\kubectl\1.13.2\x64\kubectl.exe version -o json
2019-03-01T07:09:05.8327359Z {
2019-03-01T07:09:05.8327545Z "clientVersion": {
2019-03-01T07:09:05.8327674Z "major": "1",
2019-03-01T07:09:05.8327817Z "minor": "13",
2019-03-01T07:09:05.8327886Z "gitVersion": "v1.13.2",
2019-03-01T07:09:05.8327992Z "gitCommit": "cff46ab41ff0bb44d8584413b598ad8360ec1def",
2019-03-01T07:09:05.8328073Z "gitTreeState": "clean",
2019-03-01T07:09:05.8328164Z "buildDate": "2019-01-10T23:35:51Z",
2019-03-01T07:09:05.8328241Z "goVersion": "go1.11.4",
2019-03-01T07:09:05.8328355Z "compiler": "gc",
2019-03-01T07:09:05.8328441Z "platform": "windows/amd64"
2019-03-01T07:09:05.8328539Z },
2019-03-01T07:09:05.8328646Z "serverVersion": {
2019-03-01T07:09:05.8328712Z "major": "1",
2019-03-01T07:09:05.8328796Z "minor": "11",
2019-03-01T07:09:05.8328863Z "gitVersion": "v1.11.3",
2019-03-01T07:09:05.8328972Z "gitCommit": "a4529464e4629c21224b3d52edfe0ea91b072862",
2019-03-01T07:09:05.8329052Z "gitTreeState": "clean",
2019-03-01T07:09:05.8329146Z "buildDate": "2018-09-09T17:53:03Z",
2019-03-01T07:09:05.8329265Z "goVersion": "go1.10.3",
2019-03-01T07:09:05.8329352Z "compiler": "gc",
2019-03-01T07:09:05.8329437Z "platform": "linux/amd64"
2019-03-01T07:09:05.8329507Z }
2019-03-01T07:09:05.8329588Z }
2019-03-01T08:24:47.4912692Z E0301 08:24:34.921142 4263 portforward.go:363] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:40201->127.0.0.1:35456: write tcp4 127.0.0.1:40201->127.0.0.1:35456: write: broken pipe
Can confirm that this is still an issue in 2.13
$ helm install appscode/kubedb-catalog --name kubedb-catalog --version 0.10.0 --namespace kube-system
NAME: kubedb-catalog
E0303 20:13:56.088186 21260 portforward.go:363] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:33299->127.0.0.1:44514: write tcp4 127.0.0.1:33299->127.0.0.1:44514: write: broken pipe
LAST DEPLOYED: Sun Mar 3 20:13:54 2019
NAMESPACE: kube-system
STATUS: DEPLOYED
...
I was just trying out kubedb right now and got this. I see a lot of people saying that removing --wait solved the problem for them, but what I think happens is that when you have that flag you are just a lot more likely to have this happen to you, as you keep the connection up until you're done. But it does not remove the issue completely.
Just making sure it's included:
Azure AKS running k8 version 1.12.6
export TILLER_NAMESPACE="tiller"
export HELM_TLS_ENABLE="true"
Is there anything else we can provide for debugging this?
I think you need to be running tiller 2.13 and helm 2.13 for tls to work correctly.
@willejs, @simpers mentioned he was using Helm 2.13. :P
write tcp4 127.0.0.1:1704->127.0.0.1:1707: wsasend: An established connection was aborted by the software in your host machine.
@Vhab it's interesting that the error message changes based on the OS. That first error might actually confirm some of the reports here that the connection is being terminated somewhere between the client, the API server, the kubelet and tiller, but it's not identifying where (or what) is terminating the connection unfortunately.
Perhaps figuring out who's running at 127.0.0.1:1704 sending requests to 127.0.0.1:1707 might help us identify the problem.
Keep these reports coming! We're still unsure what is causing the underlying network failure, but we'll keep trying to grok the logs we can find and see if we can identify a fix. Thank you so much for the continued reports, we really appreciate it!
@simpers based on your error report:
write tcp4 127.0.0.1:33299->127.0.0.1:44514: write: broken pipe
If you want to try figuring out who's listening on ports 33299 and 44514, that might be helpful to diagnose where the connection's getting closed off.
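Something like this, run on the machine where the helm client is executing and while the command is still in flight (these are ephemeral ports, so they disappear quickly; lsof flags may differ per OS):
$ lsof -nP -iTCP:33299 -iTCP:44514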
I think I am actually wrong and this is still a bug. Long story, mostly involving me recompiling helm-diff with 2.13 to fix it, but that doesn't fix it. I suspect this is upstream though?
Thank you, I forgot to re-open this. Yes, right now signs are pointing towards a bug upstream affecting Helm.
On the bright side, this issue should not be a problem for Helm 3 given that we've removed tiller and interact directly with the API server. :)
Yes, right now signs are pointing towards a bug upstream affecting Helm.
It definitely feels like that.
Bit too busy to poke at it and try to get some more diagnostics at the moment, but did some googling around for possibly related issues.
https://github.com/kubernetes/kubernetes/issues/74551
https://github.com/pachyderm/pachyderm/issues/2815
These have similar symptoms.
On a side note, would there be a way to turn these errors into warnings instead? As effectively the operation was still successful.
But I suppose the source of these errors is not fine-grained enough to separate out "appears to do no harm" from "this is bad".
Nope, I'm afraid it doesn't work... This only started happening after I upgraded all of my tooling and cluster a few days ago (k8s from 1.10.5 -> 1.11.8; helm [I think I had 2.7.x] -> 2.13.0). Now I constantly run into this when trying to diff upgrades, or even execute them...
~Hey guys, while reading the thread I tried a few things out and I may've (accidentally) discovered (at least one of) the reason(s) for the error - port-forwarding to another service on the same client. I had a port-forward on port 3000 and as soon as I stopped it - the error stopped showing up. I imagine a CI client doesn't port-forward to the cluster while building (or ever), so it may be something on the cluster-side (regardless of which client actually uses that)?~
~Apologies if I completely misunderstood what's happening here - I just thought of posting this memo in the slim chance that it may alleviate some pain. Cheers.~
For what it is worth to those trying to get to the bottom of this issue...
Upgraded to 2.13.1. I can lookup version from inside the k8s cluster from another pod.
user@app-pod-9qtnz:/opt/app$ /helm/linux-amd64/helm --tiller-namespace mynamespace version --tls --tls-ca-cert /opt/app/tiller-tls-secrets/ca.cert.pem --tls-cert /opt/app/tiller-tls-secrets/helm-cert --tls-key /opt/app/tiller-tls-secrets/helm-key
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
The above assumes the local cluster config from the pod.
I cannot do it from "outside" the cluster using a kubeconfig. It fails 100% of the time.
The top block is the port-forward, the bottom is the corresponding helm CLI attempt to get the version.
$ kubectl -nmynamespace --kubeconfig=/Users/me/kubeconfig port-forward tiller-deploy 44134:44134
Forwarding from 127.0.0.1:44134 -> 44134
Forwarding from [::1]:44134 -> 44134
Handling connection for 44134
Handling connection for 44134
Handling connection for 44134
E0325 14:48:23.291109 72782 portforward.go:316] error copying from local connection to remote stream: read tcp4 127.0.0.1:44134->127.0.0.1:65378: read: connection reset by peer
Handling connection for 44134
E0325 14:48:24.742180 72782 portforward.go:316] error copying from local connection to remote stream: read tcp4 127.0.0.1:44134->127.0.0.1:65379: read: connection reset by peer
Handling connection for 44134
E0325 14:48:26.876715 72782 portforward.go:316] error copying from local connection to remote stream: read tcp4 127.0.0.1:44134->127.0.0.1:65380: read: connection reset by peer
Handling connection for 44134
E0325 14:48:29.782902 72782 portforward.go:316] error copying from local connection to remote stream: read tcp4 127.0.0.1:44134->127.0.0.1:65381: read: connection reset by peer
Handling connection for 44134
E0325 14:48:34.851637 72782 portforward.go:316] error copying from local connection to remote stream: read tcp4 127.0.0.1:44134->127.0.0.1:65383: read: connection reset by peer
Handling connection for 44134
...
$ HELM_HOST=127.0.0.1:44134 helm version --tls --tls-verify --tls-ca-cert ~/ca-cert --tls-cert ~/helm-cert --tls-key ~/helm-key
Client: &version.Version{SemVer:"v2.13.0", GitCommit:"79d07943b03aea2b76c12644b4b54733bc5958d6", GitTreeState:"clean"}
...(hangs)
$ kubectl -napp-ns-403 --kubeconfig=/Users/me/kubeconfig port-forward tiller-deploy-8d8cb7f47-f8mqm 44135:44135
Forwarding from 127.0.0.1:44135 -> 44135
Forwarding from [::1]:44135 -> 44135
Handling connection for 44135
$ curl localhost:44135/liveness -v
* Trying ::1...
* TCP_NODELAY set
* Connected to localhost (::1) port 44135 (#0)
> GET /liveness HTTP/1.1
> Host: localhost:44135
> User-Agent: curl/7.54.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Mon, 25 Mar 2019 19:59:52 GMT
< Content-Length: 0
<
* Connection #0 to host localhost left intact
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.0", GitCommit:"91e7b4fd31fcd3d5f436da26c980becec37ceefe", GitTreeState:"clean", BuildDate:"2018-06-27T22:29:25Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10+", GitVersion:"v1.10.13-eks-g484b8", GitCommit:"484b857e3134d55ac6373fea2f51798fefa0533f", GitTreeState:"clean", BuildDate:"2019-03-08T05:32:36Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Here is the thing: errors in TLS validation close the connection unexpectedly, and the kubectl proxy in the background complains about it without helm printing the actual error.
In my case it was as simple as adding "localhost" to the server certificate hosts and setting export HELM_TLS_HOSTNAME=localhost.
openssl s_client -connect was key to narrowing it down, then translating that into helm flags.
This should definitely be flagged as a bug.
The real error is completely silent even with --debug.
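For anyone wanting to repeat that check, this is roughly the sequence I mean (the namespace and pod lookup mirror the kubectl steps earlier in the thread; adjust to your setup):
$ TILLER_POD=$(kubectl get pods -n kube-system | grep tiller | awk '{ print $1 }')
$ kubectl -n kube-system port-forward $TILLER_POD 44134:44134 &
$ openssl s_client -connect localhost:44134 </dev/null 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
Per the above, if the name helm expects (HELM_TLS_HOSTNAME, e.g. localhost) is not among the certificate's names, the verification fails silently and all you see is the broken-pipe noise.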
I'm deploying magento on IBM Cloud Private and I get this error:
NAME: magento
E0416 18:45:41.935993 106642 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:35733->127.0.0.1:58668: write tcp4 127.0.0.1:35733->127.0.0.1:58668: write: broken pipe
LAST DEPLOYED: Tue Apr 16 18:45:39 2019
NAMESPACE: ace
STATUS: DEPLOYED
Here is the thing: errors in TLS validation close the connection unexpectedly, and the kubectl proxy in the background complains about it without helm printing the actual error.
In my case it was as simple as adding "localhost" to the server certificate hosts and setting export HELM_TLS_HOSTNAME=localhost.
openssl s_client -connect was key to narrowing it down, then translating that into helm flags.
This should definitely be flagged as a bug. The real error is completely silent even with --debug.
Confirmed, that helps. I used:
export HELM_TLS_HOSTNAME=tiller-server
@miguelangel-nubla @feksai Unfortunately this didn't do the trick for us.
While using --tls-verify did reproduce a similar error, which we could then make go away with --tls-verify --tls-hostname [hostname], this didn't remove the semi-random occurrence of the error during a longer helm call.
helm upgrade --tiller-namespace [tillerns] --namespace [ns] --install --values [values] --wait --tls --tls-ca-cert [cert] --tls-cert [cert] --tls-key [key] --tls-verify --tls-hostname [tillerhost] --values [values] [release] [chart]
2019-04-24T10:43:30.6030274Z E0424 10:43:29.393264 3780 portforward.go:363] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:1698->127.0.0.1:1701: write tcp4 127.0.0.1:1698->127.0.0.1:1701: wsasend: An established connection was aborted by the software in your host machine.
@Vhab Yeah, my excitement was short-lived. Still having the same error with rare successful results. I noticed that if helm upgrade takes more than 15 seconds then it will fail; in successful releases it took no longer than that limit.
@technosophos despite #5210 we're still having the same issue. Do you think this can be fixed or should we wait for helm 3?
The current thinking is that this is an upstream bug. So I'm not sure there is much left that we can do. But it definitely won't impact Helm 3, which no longer uses the port forwarding/tunneling aspect of Kubernetes.
Agree with @technosophos. With Helm 3 on the horizon, is it worth fixing this issue?
I've found the error does not interfere with the actual helm install/upgrade, as that does work; for me it has always appeared at the end of the post-deployment report helm dumps out, which I ignore, and I then use subsequent steps in the pipeline to double-check the deployment is ok. Not great, I admit, but enough of a workaround until Helm 3 comes along.
I really would not invest any more time and effort into this issue.
The biggest issue we have with this is that our CI/CD views it as a failure.
Even though the deployment was successful, the reporting says failed.
It is having the adverse effect of causing people to ignore errors raised in CI/CD because "oh, that thing always errors out."
@codejnki agreed. I continue on error for the helm deploy step so a warning is raised instead, and then I validate it worked in the next step; if that fails, then it's time to take notice. Not great, but with Helm 3 coming and people educated internally on the error, it has still kept us using Helm.
This issue is in the way kubectl port forwarding is handled and not related to helm itself. There is an issue open on the kubernetes repo about this.
I was facing a similar issue in a different setting where I was uploading files into a pod, and the reason I was getting broken pipe turned out to be the memory limits set on the pod. The file I was uploading was larger than the pod memory limit, and so I was getting the following error:
portforward.go:400] an error occurred forwarding 5000 -> 5000: error forwarding port 5000 to pod 9d0e07887b021ac9a2144416bc7736ce9b22302da25483ac730c5737e2554d7c, uid : exit status 1: 2019/05/17 03:54:30 socat[13000] E write(5, 0x186ed70, 8192): Broken pipe
On increasing the pod limits I was able to upload the file successfully.
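If you hit the same thing, one way to bump the limit is something like this (a sketch only: the namespace, deployment name, and value are made up; size it to whatever you're pushing through the tunnel):
$ kubectl -n my-namespace set resources deployment/my-app --limits=memory=512Mi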
I can confirm that export HELM_TLS_HOSTNAME=<tiller-deployment-name> works; with it I didn't get the broken pipe error. :)
Do you mean the name of the pod, like "tiller-deploy-d8cbf88b8-dcvjk"? But that changes on each deployment, so how should I set this to a fixed value?
This error should not be present in Helm 3 as Helm no longer relies on the portforwarder logic in Kubernetes to acquire a connection to Tiller (because Tiller has been removed).
I'm going to close this as resolved, however if there's a fix we can apply to Helm 2 to improve the situation that'd be great to hear and we'd love a PR. Thanks!
So you're ending maintenance of Helm 2 now, when Helm 3 is not even stable yet and migrating everything will also take time?
Maybe this will still help somebody. I get the same error
E1008 11:23:47.418883 4367 portforward.go:303] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:40737->127.0.0.1:38472: write tcp4 127.0.0.1:40737->127.0.0.1:38472: write: broken pipe
when installing charts containing CRDs with
annotations:
"helm.sh/hook": crd-install
(for instance https://github.com/helm/charts/tree/master/stable/ambassador)
The error happens on upgrade of an existing release, i.e. when the CRDs are already there.
Removing the hook annotation helped me.
Helm v.2.11.0
I am getting the same error and increase memory or export HELM_TLS_HOSTNAME=
Why is the issue closed? As it stands right now it is impossible to use helm in CI/CD with TLS due to this issue.
Do we have to wait for Helm 3?
Update: I believe it should be stated on the front page that there is a major issue, so that people won't waste time with helm for the time being.
Update 2, my temporary solution: in the CI/CD script, prior to running any Helm install/upgrade commands, I disable exit on non-zero codes (set +e), run the helm command, re-enable exit on non-zero codes (set -e), and use kubectl rollout status to wait for the deployment to become available with a timeout set. If the timeout is hit, it means something has gone wrong. In my case I only care about deployments becoming available.
For example:
set +e
helm upgrade --install prometheus stable/prometheus
set -e
kubectl rollout status --timeout=30s deployment/prometheus-server
Still happens:
$ helm version
Client: &version.Version{SemVer:"v2.15.1", GitCommit:"cf1de4f8ba70eded310918a8af3a96bfe8e7683b", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.15.1", GitCommit:"cf1de4f8ba70eded310918a8af3a96bfe8e7683b", GitTreeState:"clean"}
Same issue persists with K8S cluster hosted on GCP.
Command: helm upgrade --install ******
Helm version: v2.14.2
Kubernetes version.Info: {Major:"1", Minor:"13+", GitVersion:"v1.13.11-gke.9", GoVersion:"go1.11.13b4", Compiler:"gc", Platform:"linux/amd64"}
Error:
E1109 00:08:37.767306 269 portforward.go:372] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:44603->127.0.0.1:38070: write tcp4 127.0.0.1:44603->127.0.0.1:38070: write: broken pipe
E1109 00:08:37.767306 269 portforward.go:372] error copying from remote stream to local connection: readfrom tcp4 127.0.0.1:44603->127.0.0.1:38070: write tcp4 127.0.0.1:44603->127.0.0.1:38070: write: broken pipe
log: exiting because of error: log: cannot create log: open /tmp/helm.bf54a0af24f1.root.log.ERROR.20191109-000837.269: no such file or directory\n
I notice the same issue on version:
Helm:
Server: &version.Version{SemVer:"v2.16.1", GitCommit:"bbdfe5e7803a12bbdf97e94cd847859890cf4050", GitTreeState:"clean"}
Kubernetes:
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.2", GitCommit:"f6278300bebbb750328ac16ee6dd3aa7d3549568", GitTreeState:"clean", BuildDate:"2019-08-05T09:15:22Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"
Any ideas?
Helm 3 is out. Try using that.
use safari instead of chrome
I found the problem. The issue was with storage; in the end tiller was able to report a problem with allocating storage resources. But it is interesting how the error handling was implemented ;)
I was able to track down the cause of this issue to expired Tiller and Helm certificates.
I originally secured my Tiller installation following these instructions https://v2.helm.sh/docs/using_helm/#using-ssl-between-helm-and-tiller. In the tutorial, the duration for the validity of the Helm and Tiller certificates is set to 365 days. I originally generated the certificates with this value, but over 365 days ago.
When running any Helm command (e.g. helm version or helm list), I received an error message similar to this:
an error occurred forwarding 49855 -> 44134: error forwarding port 44134 to pod 4106da54d86955cc3f88c866cf45afdaf0c6edf9f471ad669f23ba56dc77e6ab, uid : exit status 1: 2020/05/27 21:33:00 socat[15077] E write(5, 0x5642d8387150, 24): Broken pipe
After regenerating the certificates, I re-ran helm init with the new files:
helm init \
--service-account tiller \
--tiller-namespace tiller \
--tiller-tls \
--tiller-tls-cert tiller.crt \
--tiller-tls-key ~/.ssh/tiller.key \
--tiller-tls-verify \
--tls-ca-cert ~/.ssh/ca.helm.crt \
--upgrade
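If you suspect the same cause, a quick way to check when certificates generated by that guide expire (the file names follow the v2 TLS walkthrough; point these at wherever you actually keep yours):
$ openssl x509 -noout -enddate -in ca.cert.pem
$ openssl x509 -noout -enddate -in tiller.cert.pem
$ openssl x509 -noout -enddate -in helm.cert.pem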