Cloud Provider/Platform (AKS, GKE, Minikube etc.):
IBM ICP 3.1.1
First, I can access the namespace OK:
➜ charts git:(atlaspatch) kubectl get pods
No resources found.
Here are the versions. Note that ICP was quite backlevel on Kubernetes, so it's possible this is related (the client and server are not within one or two minor versions of each other).
➜ charts git:(atlaspatch) kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T18:55:03Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.3+icp-ee", GitCommit:"5f4327948913b3c0bd330cae0e5bf7cf09d1f0ae", GitTreeState:"clean", BuildDate:"2018-09-21T05:53:39Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
This was installed from Homebrew, as was the following:
➜ charts git:(atlaspatch) /usr/local/bin/helm version
Client: &version.Version{SemVer:"v2.14.0", GitCommit:"05811b84a3f93603dd6c2fcfe57944dfa7ab7fd0", GitTreeState:"clean"}
Error: pods is forbidden: User "https://front-01.icp-test.zc2.ibm.com:9443/oidc/endpoint/OP#[email protected]" cannot list pods in the namespace "kube-system"
Here's my downloaded binary of alpha.1 from GitHub:
➜ charts git:(atlaspatch) ~/src/helm3/helm version
version.BuildInfo{Version:"v3.0.0-alpha.1", GitCommit:"b9a54967f838723fe241172a6b94d18caf8bcdca", GitTreeState:"clean"}
And now I will try to deploy a chart. I was quite expecting some k8s issues due to the versions, but here's an issue similar to one I've seen before:
➜ charts git:(atlaspatch) ~/src/helm3/helm install vdc --generate-name
Error: could not get server version from Kubernetes: Get https://front-01.icp-test.zc2.ibm.com:8001/version?timeout=32s: dial tcp: lookup front-01.icp-test.zc2.ibm.com on 194.168.4.100:53: no such host
We know we can get the version -- the 'no such host' is telling. It sounds very much like an issue I raised against Docker initially:
https://github.com/docker/for-mac/issues/3281
It never got a response, but the explanation seems to be:
https://github.com/kubernetes/kubernetes/issues/23130
I don't think this is resolved in the regular macOS builds yet -- and I guess Helm is following that pattern, but it would be very helpful on Mac if this could be done.
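A quick way to see the mismatch on a Mac (the hostname and DNS server are taken from the error above; `dscacheutil` and `nslookup` are standard macOS tools -- this is just a diagnostic sketch):

```shell
# macOS keeps per-domain resolver overrides in /etc/resolver/<domain>;
# Go's pure (non-cgo) resolver ignores them (golang/go#12524).
ls /etc/resolver 2>/dev/null || echo "no per-domain resolvers configured"

# The system resolver (used by cgo builds and native tools) is VPN-aware:
dscacheutil -q host -a name front-01.icp-test.zc2.ibm.com

# ...whereas a direct query to the fallback server from the error fails:
nslookup front-01.icp-test.zc2.ibm.com 194.168.4.100
```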
In the meantime I will try building it myself to avoid the VPN DNS issue.
@planetf1 Were you able to build it and make it work? It would be great if you could post how you built it, so that if someone else has the same issue, they can use it too! 😄
That's a fair question. Apologies, I don't have the exact details currently, but I will try to add them next time I update. I've not built anything in Go before (mostly Java/C/C++ etc.).
I am building under macOS (10.14.6 beta currently) and regularly use Homebrew. I installed go, godep and gcc (though I already had MANY Homebrew packages installed).
I accessed the alpha code using:
- cd /Users/jonesn/IdeaProjects/go/src/helm.sh
- git clone https://github.com/helm/helm
- cd helm
- git checkout dev-v3
I also set the environment as follows:
export GOPATH=/Users/jonesn/IdeaProjects/go
export CGO_ENABLED=1
The former is the root of my Go-related source tree (I use IntelliJ, so everything is under there), whilst the latter was key to working around this issue.
Then I followed the instructions from https://v3.helm.sh/docs/developers/
This is somewhat from memory, so hopefully it's enough to help someone else .. at least for now.
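Pulling the steps above together, the whole build might look roughly like this (the paths are from my machine and the `make build` target is per the developer docs -- adjust to your own setup):

```shell
# Assumes Go, git and make are installed (e.g. via Homebrew).
export GOPATH="$HOME/IdeaProjects/go"  # root of the Go source tree
export CGO_ENABLED=1                   # use the native macOS resolver

mkdir -p "$GOPATH/src/helm.sh"
cd "$GOPATH/src/helm.sh"
git clone https://github.com/helm/helm
cd helm
git checkout dev-v3
make build                             # binary ends up in bin/helm
./bin/helm version
```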
With this done helm was able to resolve vpn addresses ok :-)
I sometimes have this issue too (helm v2.14.1 installed via brew); it has also happened with kubectl.
My workaround:
sudo killall -HUP mDNSResponder
It's probably an underlying issue with Go on macOS.
See:
Yeah, this is an issue with us disabling CGO in the build pipeline, which causes Helm not to use the /etc/resolver DNS resolution as mentioned in golang/go#12524. This issue affects both Helm 2 and Helm 3.
CGO is disabled by default when cross-compiling in order to generate statically linked binaries. This used to be quite important for Tiller, as it was running inside a container where the underlying infrastructure is unknown. Perhaps we can lift that requirement now that Helm 3 is client-only. We could probably use a CI system like Azure Pipelines (or the recently announced GitHub Actions CI/CD feature) so we can build the client natively for each platform.
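To illustrate the difference (GODEBUG=netdns is a standard Go runtime setting; the build commands below are a sketch, not the actual release pipeline):

```shell
# Cross-compiling with CGO disabled yields a static binary that uses
# Go's pure resolver, which skips /etc/resolver on macOS:
CGO_ENABLED=0 GOOS=darwin GOARCH=amd64 go build -o helm-static ./cmd/helm

# A native build with cgo enabled goes through the system resolver:
CGO_ENABLED=1 go build -o helm-native ./cmd/helm

# GODEBUG=netdns=2 logs which resolver each lookup actually uses:
GODEBUG=netdns=2 ./helm-native version
```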
This issue has been marked as stale because it has been open for 90 days with no activity. This thread will be automatically closed in 30 days if no further activity occurs.
This is still an issue with the latest helm3 binary