What happened:
I have an image that needs to be pulled from a private repository. Before deploying my app I created a (correctly formatted) secret containing a .dockerconfigjson key with my registry credentials in JSON as its value. In my deployment I point to the secret via spec.template.spec.imagePullSecrets. I have also tried adding a default imagePullSecret to my service account, without success.
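For context, a minimal sketch of the relevant part of such a deployment (all names here are placeholders I've chosen; note that imagePullSecrets sits at the pod-spec level, as a sibling of containers):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                      # placeholder
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      imagePullSecrets:             # pod-spec level, sibling of containers
      - name: my-registry-secret    # placeholder: the secret holding .dockerconfigjson
      containers:
      - name: app
        image: <MY_PRIVATE_IMAGE>
```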
I get the following events when I use `kubectl describe pod`:
```
Normal   Scheduled  56s                default-scheduler            Successfully assigned PROJECT/dev-kind-PROJECT-5bc86565-cqfwg to kind-control-plane
Normal   Pulling    16s (x3 over 55s)  kubelet, kind-control-plane  pulling image "<MY_PRIVATE_IMAGE>"
Warning  Failed     16s (x3 over 54s)  kubelet, kind-control-plane  Failed to pull image "<MY_PRIVATE_IMAGE>": rpc error: code = Unknown desc = failed to resolve image "<MY_PRIVATE_IMAGE>": no available registry endpoint: pull access denied, repository does not exist or may require authorization: server message: invalid_token: authorization failed
Warning  Failed     16s (x3 over 54s)  kubelet, kind-control-plane  Error: ErrImagePull
Normal   BackOff    4s (x3 over 54s)   kubelet, kind-control-plane  Back-off pulling image "<MY_PRIVATE_IMAGE>"
Warning  Failed     4s (x3 over 54s)   kubelet, kind-control-plane  Error: ImagePullBackOff
```
(Note that the same deployment works fine on my production Kubernetes cluster.)
What you expected to happen:
That it pulls the image from my private repository.
How to reproduce it (as minimally and precisely as possible):
Have an image in a private repository that requires authentication, and try to use that image. See this how-to for setting up the image pull secret.
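For reference, one common way to create such a secret is `kubectl create secret docker-registry` (a sketch with placeholder values):

```sh
kubectl create secret docker-registry my-registry-secret \
  --docker-server=<your-registry-server> \
  --docker-username=<your-username> \
  --docker-password=<your-password> \
  --docker-email=<your-email>
```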
Anything else we need to know?:
I tried kind v0.5.1 with the following node image versions:
- kindest/node:v1.15.3@sha256:27e388752544890482a86b90d8ac50fcfa63a2e8656a96ec5337b902ec8e5157
- kindest/node:v1.13.10@sha256:2f5f882a6d0527a2284d29042f3a6a07402e1699d792d0d5a9b9a48ef155fa2a

I also tried it with kind v0.4.0 and kindest/node:v1.13.7@sha256:f3f1cfc2318d1eb88d91253a9c5fa45f6e9121b6b1e65aea6c7ef59f1549aaaf.
All of the above with Go version 1.12.9.
Environment:
- kind version: both v0.5.1 and v0.4.0
- kubectl version: 1.15.3, 1.13.10, and 1.13.7
- docker info: 19.03.1
- /etc/os-release: macOS 10.14.6

1) What did you use to create the secret?
This is a Kubernetes feature, not specific to kind.
2) Does the node have connectivity to the registry? Can you exec to one and ping it or similar?
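A rough way to check connectivity from a kind node (a sketch, assuming a single-node cluster named kind-control-plane; whether ping/curl are available depends on the node image):

```sh
# exec into the kind node container
docker exec -it kind-control-plane bash
# inside the node, if these tools are present in the image:
ping -c 3 quay.io
curl -v https://quay.io/v2/
```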
Private registries are known to generally work with kind and we actually have our own docs with more options https://kind.sigs.k8s.io/docs/user/private-registries/
I just tested this with a private registry in quay.io.
Kind version: v0.5.1
The image quay.io/o0o/nginx:stable is private. I can verify that by exec'ing into the kind node and doing a crictl pull:

```sh
09:18 $ docker exec -ti kind-control-plane bash
root@kind-control-plane:/# crictl pull quay.io/o0o/nginx:stable
FATA[0001] pulling image failed: rpc error: code = Unknown desc = failed to resolve image "quay.io/o0o/nginx:stable": no available registry endpoint: unexpected status code https://quay.io/v2/o0o/nginx/manifests/stable: 401 Unauthorized
```
Now I create a secret associated with a robot account on quay.io that has read access to the image.
Example secret:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: o0o-test-pull-secret
data:
  .dockerconfigjson: CnsKICBhdXRoczogewogICAgcXVheS5pbzogewogICAgICBhdXRoOiBkWE5sY2pwNWIzVnljbTlpYjNSMGIydGxiZz09CiAgICAgIGVtYWlsOiAKICAgIH0KICB9Cn0=
type: kubernetes.io/dockerconfigjson
```
The content of .dockerconfigjson is base64-encoded. Decoded, it looks like this:
```json
{
  "auths": {
    "quay.io": {
      "auth": "dXNlcjp5b3Vycm9ib3R0b2tlbg==",
      "email": ""
    }
  }
}
```
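To double-check what the cluster actually stores, the secret can be decoded back (a sketch; the jsonpath escape for the dotted key is standard kubectl syntax, and `base64 -d` may be `-D` on older macOS):

```sh
kubectl get secret o0o-test-pull-secret -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d
```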
The encoded auth value is the base64 encoding of:

```
user:yourrobottoken
```
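That value can be generated (and verified) with base64; note the `-n`, which keeps a trailing newline from corrupting the credentials:

```sh
echo -n 'user:yourrobottoken' | base64
# dXNlcjp5b3Vycm9ib3R0b2tlbg==
```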
With that config I can pull my image. I applied the secret and a pod that looks like this:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: somepod
spec:
  containers:
  - name: web
    image: quay.io/o0o/nginx:stable
  imagePullSecrets:
  - name: o0o-test-pull-secret
```
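To reproduce this, roughly (the filenames are placeholders for the manifests above):

```sh
kubectl apply -f secret.yaml    # the Secret above
kubectl apply -f pod.yaml       # the Pod above
kubectl describe pod somepod    # Events should show a successful pull
```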
Events from the pod after deployment:
```
Events:
  Type    Reason     Age  From                         Message
  ----    ------     ---- ----                         -------
  Normal  Scheduled  14s  default-scheduler            Successfully assigned default/somepod to kind-control-plane
  Normal  Pulling    13s  kubelet, kind-control-plane  Pulling image "quay.io/o0o/nginx:stable"
  Normal  Pulled     4s   kubelet, kind-control-plane  Successfully pulled image "quay.io/o0o/nginx:stable"
  Normal  Created    4s   kubelet, kind-control-plane  Created container web
  Normal  Started    4s   kubelet, kind-control-plane  Started container web
```
After testing the above and successfully pulling a private image from quay.io, I can only conclude that it has something to do with my private repository.
Using the command below inside the running kind-control-plane container leads to the same error:

```sh
crictl pull --auth <my_auth_string> <image>
```
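For example, with the same base64-encoded user:token string as above (placeholder credentials):

```sh
crictl pull --auth "$(echo -n 'user:yourrobottoken' | base64)" quay.io/o0o/nginx:stable
```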
Thanks for the help and time!!!
For anyone who comes across this, I had the same issue and it drove me crazy.
Here is the problem:
In our env we use Artifactory and Bintray to serve our Docker images. I know this bug is for a different registry type, but it may still be pertinent. For us this has nothing to do with kind; it is in fact Artifactory and Bintray causing the issue. Ultimately I had to downgrade the kind node version to get an older version of containerd, and it worked. In our env we use k8s 1.14.x: 1.14.10 and 1.14.9 do not work, since they use containerd 1.3.2 and 1.3.0 respectively. I had to downgrade to 1.14.6, which uses containerd 1.2.6, and everything worked.
So:

```sh
kind create cluster --image kindest/node:v1.14.6
```
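If it helps anyone debugging the same thing, the containerd version a node image ships with can be checked directly (assuming a single-node cluster named kind-control-plane):

```sh
docker exec -it kind-control-plane containerd --version
```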
Here are the bugs:
https://github.com/containerd/containerd/issues/3761
--> https://github.com/containerd/containerd/issues/3556
--> https://www.jfrog.com/jira/browse/RTFACT-20170
In the end my issue was related to the auth layer on top of the registry I was using. It was fixed with this PR: https://github.com/cesanta/docker_auth/pull/265
this came up again, reached out to the wonderful @rimusz for follow up 🙏
I've contacted JFrog support and they answered that it should be fixed by now.
Looking at the Jira status, the issue appears to be resolved in Artifactory versions 6.17+.
https://www.jfrog.com/jira/browse/RTFACT-20170
Yes, it looks that way, as I wasn't able to reproduce it.
Hello there
I know this one has been closed, but I ran into the same issue recently. When you create your secret, pay attention to the namespace. It may seem obvious to others, but you can easily forget it, especially if you are building an app with many microservices. If your secret is not in the same namespace as your pod/deployment resource, it won't work and you will get the error described above.
```sh
kubectl create secret docker-registry mysecretname --docker-server='myserver' --docker-username='myusername' --docker-password='mypassword' --docker-email='[email protected]' --namespace='mynamespace'
```

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: "myPodName"
  namespace: "mynamespace"    # --> IMPORTANT
spec:
  containers:
  - name: myPodName
    image: my.pod.image/path:tag
  imagePullSecrets:
  - name: mysecretname        # must match the secret created above
  restartPolicy: Never
```
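A quick sanity check that the secret actually lives in the pod's namespace:

```sh
kubectl get secret mysecretname -n mynamespace
```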
Thanks @thynquest, that's a good reminder. Perhaps we should add a short note to the guide. I think it's in the linked Kubernetes docs, but there's a lot there 😅
Yes @BenTheElder, I think we should; it is a good idea. I am sure I won't be the last to fall for that issue.