Hello
I just created a two-node (master + minion) cluster with kubeadm following http://kubernetes.io/docs/getting-started-guides/kubeadm/, and I'm trying to run a pod whose image lives on an internal, private registry with authentication (following http://kubernetes.io/docs/user-guide/images/#configuring-nodes-to-authenticate-to-a-private-repository), but it doesn't work.
docker login private-registry.internal.tld works
docker pull private-registry.internal.tld/image works
but kubelet seems to just ignore those credentials. Since there's a tutorial covering exactly this, and I couldn't find any outstanding kubelet/kubernetes open issue about it, I'm guessing it might be a more specific kubeadm problem.
How can I debug this? Thanks
Are you using https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/registry/auth/registry-auth-rc.yaml?
I don't think this is a kubeadm-specific issue.
Have you copied to and configured /root/.docker/config.json on all nodes?
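For example, roughly something like this for each node (minion-1 is a placeholder hostname):
docker login private-registry.internal.tld     # writes /root/.docker/config.json on the machine you run it on
ssh root@minion-1 mkdir -p /root/.docker
scp /root/.docker/config.json root@minion-1:/root/.docker/config.json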
No, I'm using Nexus OSS as the private registry.
As I said, I was following the basic guide I linked, and yes, I've set up /root/.docker/config.json on every minion node, but not on the master (which doesn't run any non-system pods). Do I have to log in on the master too?
Anyway, I can see the ImagePullError in the minion's kubelet log.
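For reference, this is roughly how I'm looking at it (the pod name is a placeholder; the kubelet runs under systemd as set up by kubeadm):
kubectl describe pod <pod-name>                  # the ErrImagePull shows up under Events
journalctl -u kubelet --since "10 minutes ago"   # kubelet log on the minion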
This is the error log I get from the pod:
Error syncing pod, skipping: [failed to "StartContainer" for "myimage" with ErrImagePull: "image pull failed for docker-images-ro.company.com/company/myimage:latest, this may be because there are no credentials on this request.
details: (Get https://docker-images-ro.company.com/v2/company/myimage/manifests/latest: no basic auth credentials)"
No problem if you want to move the issue to the kubernetes project instead of kubeadm
More info: if I set imagePullPolicy: IfNotPresent and then manually docker pull the images on the minion host, it works. If I set imagePullPolicy: Always, it doesn't work even with pre-pulled images.
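For reference, a minimal sketch of the kind of pod spec I'm testing with (the pod name is made up; the image is the one from the error above):
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: myimage-test
spec:
  containers:
  - name: myimage
    image: docker-images-ro.company.com/company/myimage:latest
    imagePullPolicy: IfNotPresent   # works after a manual docker pull on the minion; Always still fails
EOF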
@vide Did you manage to overcome this? I too am using a Nexus OSS private container registry and am experiencing the same issue. I built my cluster with kube-aws, but it's irrelevant. I'm curious to see if this is a Nexus-specific issue.
I'm facing the same issue. I'm using Nexus 3. Any solutions for this?
I got my connection to Nexus 3 working with imagePullSecrets, adding the secret to the service account so that it is automatically attached to all deployed pods:
Configure an imagePullSecret from the docker config:
https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod
Instead of adding the imagePullSecrets to every pod, I added it to the service account that is used to deploy my pods:
https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#adding-imagepullsecrets-to-a-service-account
With this configuration everything works as I expected; a rough sketch of the commands follows below.
Pre-authenticating with docker login on every node was not successful for me.
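Roughly, the commands look like this (the secret name is a placeholder, the credentials are your own Nexus ones):
# create a docker-registry secret from the Nexus credentials
kubectl create secret docker-registry nexus-registry \
  --docker-server=private-registry.internal.tld \
  --docker-username=<user> \
  --docker-password=<password> \
  --docker-email=<email>
# attach it to the default service account, so every pod using that service account gets it automatically
kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "nexus-registry"}]}'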
In my experience, adding an ImagePullSecret to the kube-system service account works for most but not all images: CNI (Weave/Flannel/etc.) and kube-proxy images still need to be pre-pulled on worker nodes. On the master node, all images must be pre-pulled because there is no way to specify an ImagePullSecret before the master node is fully ready.
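For the kube-system service account, that looks roughly like the following (same placeholder secret name as in the sketch above); the CNI and kube-proxy images then still have to be pulled by hand with docker pull on each node, using whatever path and tag they have in your mirror:
kubectl -n kube-system patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "nexus-registry"}]}'
# on each node, for the images needed before/outside that mechanism
docker pull private-registry.internal.tld/<kube-proxy-or-CNI-image>:<tag>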
My setting in /etc/kubernetes/apiserver on the master node is this:
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
Is this fine?
I had to remove ServiceAccount & SecurityContextDeny from the list in the Kubernetes master node's apiserver config, as per the installation guide.
Please let me know if I need to add them back.
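(For context, my understanding is that the ServiceAccount admission plugin is the piece that injects a service account's imagePullSecrets into pods, so if I use the service-account approach above I suspect the line needs ServiceAccount re-added, roughly as below, but I'd like confirmation.)
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ServiceAccount,ResourceQuota"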
Raja
BTW, I'm using flannel.
I managed to successfully log in to the Nexus 3 repository using imagePullSecrets/service accounts, but it fails further down the line when starting the container:
The pods' status is set to CrashLoopBackOff.
[root@master ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
ipf-deployment-2355814806-8re95 0/1 CrashLoopBackOff 1 18s
ipf-deployment-2355814806-k0wjr 0/1 CrashLoopBackOff 1 18s
ipf-deployment-2355814806-mw7se 0/1 CrashLoopBackOff 1 18s
I've tried to understand what's going on and looked at various sites and forums, but no luck. Any idea what I'm missing?
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ ------
.....
.....
1m 20s 7 {kubelet 192.168.XXX.XX} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "ipf-reference" with CrashLoopBackOff: "Back-off 1m20s restarting failed container=ipf-reference pod=ipf-deployment-2355814806-8re95_default(aa34b7ed-25d4-11e7-acf1-000c290d68bc)"
3m 9s 7 {kubelet 192.168.XXX.XX} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
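The only other thing I know to try next is the container logs and the full pod description, roughly like this (pod name taken from the output above):
kubectl logs ipf-deployment-2355814806-8re95 --previous   # logs from the last crashed run
kubectl describe pod ipf-deployment-2355814806-8re95      # full event history and container status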
Closing as not related to kubeadm. Please reopen if you think it is and can reproduce it with v1.6.
I too am running into the same symptom with a Quay.io repo. I've seen this work before on non-kubeadm clusters. That doesn't mean that this is kubeadm-specific, though.
@gtaylor If you think you've found something that may be kubeadm-related, please open an issue with the relevant details so that we can understand what the problem might be. It might well be something that other solutions set up for you and that is out of scope for kubeadm; in that case we should document it instead.
I don't think this is kubeadm-specific, but we may not be doing what we can to resolve the issue for kubeadm setups. The cwd of my kubelet process is /, which matters because the linked image docs list {cwd of kubelet}/config.json among the paths kubelet searches for registry credentials. I don't know whether that is something kubeadm could/should be involved in fixing.
People with non-kubeadm'd clusters are seeing this as well, so it's not _caused_ by kubeadm.
Hi, I know this is not the right place to ask, but I'm lost and in need of directions. I want to use my own Docker image to deploy a small service, which means I need to set up a local Docker registry for my cluster (4 machines I have under my desk). Could anybody please tell me where I should look? I know there is an addon on GitHub, but the README is so vague I don't understand how to use those .yaml files on my cluster.
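Is it basically just a matter of running the stock registry image somewhere the nodes can reach, something like the line below, and then pointing the cluster at it (plus TLS and auth on top), or are those addon .yaml files really the intended way?
docker run -d -p 5000:5000 --restart=always --name registry registry:2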