NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-3825951078-ywerf 0/1 ImagePullBackOff 0 11m
root@slave2:/home/project/src/k8s.io/kubernetes# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
gcr.io/google_containers/kubernetes-dashboard-amd64 v1.1.1 353f4398afc5 13 days ago 55.83 MB
Dashboard version:
Kubernetes version: 1.4.0
Operating system: linux
Node.js version:
Go version: 1.6.2
Followed the tutorial WebUI (Dashboard) and executed the command:
kubectl create -f https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
root@slave2:/home/zhangjian/project/src/k8s.io/kubernetes# kubectl describe pods --namespace=kube-system
Name: kubernetes-dashboard-3825951078-ywerf
Namespace: kube-system
Node: 127.0.0.1/127.0.0.1
Start Time: Tue, 23 Aug 2016 02:54:52 +0000
Labels: app=kubernetes-dashboard
pod-template-hash=3825951078
Status: Pending
IP: 172.17.0.4
Controllers: ReplicaSet/kubernetes-dashboard-3825951078
Containers:
kubernetes-dashboard:
Container ID:
Image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1
Image ID:
Port: 9090/TCP
State: Waiting
Reason: ImagePullBackOff
Ready: False
Restart Count: 0
Liveness: http-get http://:9090/ delay=30s timeout=30s period=10s #success=1 #failure=3
Volume Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-0yoov (ro)
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-0yoov:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-0yoov
QoS Class: BestEffort
Events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
1m 1m 1 {default-scheduler } Normal Scheduled Successfully assigned kubernetes-dashboard-3825951078-ywerf to 127.0.0.1
38s 38s 1 {kubelet 127.0.0.1} spec.containers{kubernetes-dashboard} Warning Failed Failed to pull image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1": image pull failed for gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1, this may be because there are no credentials on this request. details: (Error response from daemon: unable to ping registry endpoint https://gcr.io/v0/
v2 ping attempt failed with error: Get https://gcr.io/v2/: dial tcp 74.125.130.82:443: i/o timeout
v1 ping attempt failed with error: Get https://gcr.io/v1/_ping: dial tcp 74.125.130.82:443: i/o timeout)
38s 38s 1 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with ErrImagePull: "image pull failed for gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1, this may be because there are no credentials on this request. details: (Error response from daemon: unable to ping registry endpoint https://gcr.io/v0/\nv2 ping attempt failed with error: Get https://gcr.io/v2/: dial tcp 74.125.130.82:443: i/o timeout\n v1 ping attempt failed with error: Get https://gcr.io/v1/_ping: dial tcp 74.125.130.82:443: i/o timeout)"
37s 37s 1 {kubelet 127.0.0.1} spec.containers{kubernetes-dashboard} Normal BackOff Back-off pulling image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1"
37s 37s 1 {kubelet 127.0.0.1} Warning FailedSync Error syncing pod, skipping: failed to "StartContainer" for "kubernetes-dashboard" with ImagePullBackOff: "Back-off pulling image \"gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1\""
1m 22s 2 {kubelet 127.0.0.1} spec.containers{kubernetes-dashboard} Normal Pulling pulling image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1"
Expected result: the Dashboard running successfully.
My PC can't pull this image due to network problems, so I pulled the image on another machine and loaded it onto my PC with docker load --input. But the kubelet still tries to pull the image even though it already exists locally.
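For reference, the pull-on-one-machine, load-on-another workflow looks roughly like this; the node hostname is a placeholder, and the loaded repository:tag must match what the manifest references exactly, or the kubelet will still attempt a pull:

```shell
# On a machine that CAN reach gcr.io: pull the image and save it to a tarball.
IMAGE="gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1"
# Derive a filesystem-safe tarball name from the image reference:
TARBALL="$(basename "${IMAGE%:*}")-${IMAGE##*:}.tar"
docker pull "$IMAGE"
docker save "$IMAGE" -o "$TARBALL"

# Copy the tarball to the node that cannot reach gcr.io (hostname is a placeholder):
scp "$TARBALL" root@slave2:/tmp/

# On that node: load the image and confirm the repository:tag matches the manifest.
docker load --input "/tmp/$TARBALL"
docker images | grep kubernetes-dashboard
```

Even with the image loaded, whether the kubelet uses it depends on the pod's imagePullPolicy.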
Can your PC pull other images? I guess this is a networking problem. You should be able to pull images from your Kubernetes nodes.
@bryk Yes, this is a networking problem on this node. That is why I loaded these images locally before creating the dashboard service.
A bit late response, but it is possible that your cluster has an admission controller configured with the AlwaysPullImages policy, which forces a pull instead of using the already loaded image.
http://kubernetes.io/docs/admin/admission-controllers/#alwayspullimages
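When AlwaysPullImages is enabled, the admission controller rewrites every container's imagePullPolicy to Always, so a docker-loaded image is never used. Conversely, when it is not enabled, the pod spec can ask the kubelet to prefer the local image. A minimal fragment of the dashboard Deployment's container spec (as an illustration, not the official manifest's defaults):

```yaml
containers:
- name: kubernetes-dashboard
  image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.1.1
  # IfNotPresent: use the locally loaded image and only pull when it is missing.
  # Note: the AlwaysPullImages admission controller overrides this to Always.
  imagePullPolicy: IfNotPresent
```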
Closing as stale. Please reopen if @floreks' instructions did not help.
I am having the same problem with the latest version of kubernetes-dashboard, v1.6.1. The previous version of the dashboard (1.6.0), which I downloaded earlier, works properly with the latest stable Kubernetes version (1.6.3).
I see the same issue. I installed Kubernetes via kubeadm (https://kubernetes.io/docs/getting-started-guides/kubeadm). Following the instructions at https://github.com/kubernetes/dashboard#kubernetes-dashboard, I ran kubectl create -f https://git.io/kube-dashboard and the pod ended up in ImagePullBackOff status.
kubectl describe pods shows the error: Failed to pull image "gcr.io/google_containers/kubernetes-dashboard-amd64:v1.6.1": rpc error: code = 2 desc = Tag v1.6.1 not found in repository gcr.io/google_containers/kubernetes-dashboard-amd64
Indeed, trying to pull the image on my computer shows the same error.
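When a pull fails with "Tag ... not found", the tag simply has not been pushed yet. This can be confirmed against the registry's tag list (the Docker Registry HTTP API v2 tags endpoint; anonymous reads work for public gcr.io repositories). A sketch, using grep rather than a JSON parser to keep it dependency-free:

```shell
# List the tags the registry actually serves for the dashboard image,
# then check whether the tag the manifest references is among them:
curl -s https://gcr.io/v2/google_containers/kubernetes-dashboard-amd64/tags/list \
  | tr ',' '\n' | grep -o '"v[^"]*"'
```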
@shubb30 Linking to #1961. Images should be available soon.
Now that the 1.6.1 image is released, I see the dashboard is running and working.
I upgraded to 1.10.5 successfully. While container images such as nginx come up just fine, images pulled from k8s-gcrio.azureedge.net return "unauthorized: authentication required":
pulling image "k8s-gcrio.azureedge.net/exechealthz-amd64:1.2"
Failed to pull image "k8s-gcrio.azureedge.net/exechealthz-amd64:1.2": rpc error: code = Unknown desc = unauthorized: authentication required
Restarting the node fixed the issue.
I'm having an issue pulling 1.10.0 now.
docker pull k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
Error response from daemon: manifest for k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0 not found
We are in the process of releasing it.
@floreks This proved to be a bit disruptive to our workflow (the YAML was updated before the dependent images were released). Perhaps the new YAML could be released after the images are made available?
Also, is there an ETA for when this new image will be released?