I'm behind the company proxy and the env vars are correctly set. Furthermore, docker info reports (output truncated):
HTTP Proxy: http://<MY IP:PORT>/
HTTPS Proxy: http://<MY IP:PORT>/
No Proxy: localhost,127.0.0.0/8,::1,172.17.0.0/16,10.0.0.0/8,192.0.0.0/8,.cluster.local
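As an aside, when diagnosing failures like this it is worth checking that the API server address (172.17.0.3 above) is actually matched by a No Proxy entry; whether CIDR entries like 172.17.0.0/16 are honored depends entirely on the client, and many clients do not understand CIDR notation at all. A deliberately naive, self-contained sketch of that check (illustrative only, not how any real proxy client matches):

```shell
# Naive prefix check: does any NO_PROXY entry textually cover the node IP?
# Real clients differ; some treat entries as suffixes, few parse CIDR.
no_proxy="localhost,127.0.0.0/8,::1,172.17.0.0/16,10.0.0.0/8,192.0.0.0/8,.cluster.local"
node_ip="172.17.0.3"
covered=no
for entry in $(printf '%s' "$no_proxy" | tr ',' ' '); do
  prefix=${entry%%/*}   # drop any /CIDR suffix
  prefix=${prefix%.0}   # naively trim trailing .0 octets
  prefix=${prefix%.0}
  case "$node_ip" in "$prefix"*) covered=yes ;; esac
done
echo "covered=$covered"
```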
I installed the latest KinD version using:
GO111MODULE="on" go get sigs.k8s.io/kind@master
and created the cluster with kind create cluster --loglevel debug. It finished without issues ("Cluster creation complete."), except for these lines:
I0731 11:59:36.479894 123 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 4 milliseconds
I0731 11:59:36.978819 123 round_trippers.go:438] GET https://172.17.0.3:6443/healthz?timeout=32s 500 Internal Server Error in 4 milliseconds
[apiclient] All control plane components are healthy after 50.506272 seconds
Now when I launch:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta1/aio/deploy/recommended.yaml
and check kubectl describe pod kubernetes-dashboard-5c8f9556c4-m7q4g, the image is not pulled, and to me it seems a proxy-related issue:
Warning Failed 6m48s (x4 over 9m12s) kubelet, kind-control-plane Error: ErrImagePull
Warning Failed 6m48s kubelet, kind-control-plane Failed to pull image "kubernetesui/dashboard:v2.0.0-beta1": rpc error: code = Unknown desc = failed to resolve image "docker.io/kubernetesui/dashboard:v2.0.0-beta1": no available registry endpoint: failed to do request: Head https://registry-1.docker.io/v2/kubernetesui/dashboard/manifests/v2.0.0-beta1: dial tcp 52.87.94.70:443: connect: connection refused
Could you help me please?
Thanks.
:thinking: I think that @BenTheElder fixed that issue with https://github.com/kubernetes-sigs/kind/pull/694
@murdav can you try the master version?
I tried the master version:
I installed the latest KinD version using:
GO111MODULE="on" go get sigs.k8s.io/kind@master
Thanks.
I tried the master version:
I installed the latest KinD version using:
GO111MODULE="on" go get sigs.k8s.io/kind@master
Thanks.
oops, my bad, sorry, seems I read too fast :sweat_smile:
@murdav can you create a file replacing the values with your *_proxy variables on your nodes:
mkdir -p /etc/systemd/system/containerd.service.d/
cat <<EOF >/etc/systemd/system/containerd.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=${HTTP_PROXY:-}"
Environment="HTTPS_PROXY=${HTTPS_PROXY:-}"
Environment="NO_PROXY=${NO_PROXY:-localhost}"
EOF
and restart containerd with systemctl restart containerd?
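The steps above have to be repeated inside every kind node container. A sketch of automating that from the host, assuming the docker CLI and a running kind cluster (`kind get nodes` and the `docker exec`/`docker cp` commands shown in comments are the assumed mechanism; here only the drop-in file is rendered locally):

```shell
# Render the containerd proxy drop-in locally from the host's *_proxy vars.
conf_dir=$(mktemp -d)
cat <<EOF > "$conf_dir/http-proxy.conf"
[Service]
Environment="HTTP_PROXY=${HTTP_PROXY:-}"
Environment="HTTPS_PROXY=${HTTPS_PROXY:-}"
Environment="NO_PROXY=${NO_PROXY:-localhost}"
EOF
# To apply it on every node (not executed in this sketch):
#   for node in $(kind get nodes); do
#     docker exec "$node" mkdir -p /etc/systemd/system/containerd.service.d
#     docker cp "$conf_dir/http-proxy.conf" \
#       "$node":/etc/systemd/system/containerd.service.d/http-proxy.conf
#     docker exec "$node" systemctl daemon-reload
#     docker exec "$node" systemctl restart containerd
#   done
cat "$conf_dir/http-proxy.conf"
```

Note the heredoc is unquoted, so the `${HTTP_PROXY:-}` expansions happen on the host, baking your current proxy settings into the file before it is copied in.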
@BenTheElder this is what I referred to in my comment https://github.com/kubernetes-sigs/kind/pull/694#issuecomment-509534110; I think that somehow containerd is not using those env variables with that approach
@aojea it works doing what you suggested on each node!
I think that somehow containerd is not using those env variables with that approach
I can confirm this issue.
@BenTheElder this is what I referred to in my comment #694 (comment); I think that somehow containerd is not using those env variables with that approach
I'm fairly certain we are not yet using those changes, as they are only in the base image and we need new node images with them. That will come with the next release and hopefully resolve this.
/retitle proxy issue: any pod image can't be pulled
those changes should be in the current release, but I cannot confirm if they solve this issue or not.
those changes should be in the current release, but I cannot confirm if they solve this issue or not.
I've tested kindest/node:v1.15.3. It works well.
thanks @cofyc
tentatively closing, please re-open or file a new issue if you see this again!
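For anyone landing here later: trying a specific node image as tested above is done with kind's --image flag. A minimal illustrative sketch (only building the command string, since actually running it requires docker):

```shell
# Pin the cluster to a known-good node image via the real --image flag.
image="kindest/node:v1.15.3"
cmd="kind create cluster --image $image"
echo "$cmd"   # → kind create cluster --image kindest/node:v1.15.3
```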
I am still having this issue with kind version v0.5.1. As this version came out on August 21, judging from the comments I guess the fix is still not released, right?
Any idea when can we get a kind release with the fix for this bug?
Thanks
the release is delayed on rounding out the set of mildly breaking changes we'll land, so as to consolidate them into one release that requires migration; we're trying to finish those up and avoid future ones.
I expect maybe another week or two, depending ...
in the meantime GO111MODULE=on go get sigs.k8s.io/kind@7842d72f04807a716247295130e0010660438ca5 should pin to the current commit.
HEAD works fine; we use it for Kubernetes CI. It's just that if you don't pin to a particular commit, things may change between commits...
@BenTheElder
is this released?
I am using v0.6.1 but I have the same problem.
It is released.