Kubernetes: kubectl not able to pull the image from private repository

Created on 16 Feb 2017 · 3 comments · Source: kubernetes/kubernetes

Is this a BUG REPORT or FEATURE REQUEST?: BUG

Kubernetes version (use kubectl version):

Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-12T04:57:25Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.3", GitCommit:"029c3a408176b55c30846f0faedf56aae5992e9b", GitTreeState:"clean", BuildDate:"2017-02-15T06:34:56Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

Environment:

Cloud provider or hardware configuration: 2GB RAM/50GB HDD VM
OS (e.g. from /etc/os-release):
NAME="Ubuntu"
VERSION="16.04 LTS (Xenial Xerus)"
ID=ubuntu
ID_LIKE=debian
PRETTY_NAME="Ubuntu 16.04 LTS"
VERSION_ID="16.04"
HOME_URL="http://www.ubuntu.com/"
SUPPORT_URL="http://help.ubuntu.com/"
BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
UBUNTU_CODENAME=xenial

Kernel (e.g. uname -a):
Linux ubuntu 4.4.0-21-generic #37-Ubuntu SMP Mon Apr 18 18:33:37 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Install tools: kubeadm, kubectl, docker
Others:NA
What happened: ImagePullBackOff while pulling from a private repository

What you expected to happen: It should pull the image from the private repository.

How to reproduce it (as minimally and precisely as possible):

  • Created a drop-in file for the private registry and restarted Docker on both nodes (master and slave)
    root@ubuntu:~# vi /etc/systemd/system/docker.service.d/private-registry.conf
        [Service]
        ExecStart=
        ExecStart=/usr/bin/dockerd --insecure-registry 123.456.789.0:9595
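    A reload and restart are needed before the new --insecure-registry flag takes effect; a minimal sketch of that step (standard systemctl commands, run on both nodes):
        systemctl daemon-reload
        systemctl restart docker
        # the registry should now be listed under "Insecure Registries" in `docker info`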
  • Then, Docker login
    docker login 123.456.789.0:9595
  • docker info
    Containers: 87
    Running: 18
    Paused: 0
    Stopped: 69
    Images: 175
    Server Version: 1.12.3
    Storage Driver: aufs
    Root Dir: /var/lib/docker/aufs
    Backing Filesystem: extfs
    Dirs: 384
    Dirperm1 Supported: true
    Logging Driver: json-file
    Cgroup Driver: cgroupfs
    Plugins:
    Volume: local
    Network: host bridge null overlay
    Swarm: inactive
    Runtimes: runc
    Default Runtime: runc
    Security Options: apparmor seccomp
    Kernel Version: 4.4.0-21-generic
    Operating System: Ubuntu 16.04 LTS
    OSType: linux
    Architecture: x86_64
    CPUs: 1
    Total Memory: 1.937 GiB
    Name: ubuntu
    ID: FXD7:JQJZ:HO3R:D2NK:RWYL:7DCY:PC2M:43PM:MA7C:QSPN:4RGS:5W6H
    Docker Root Dir: /var/lib/docker
    Debug Mode (client): false
    Debug Mode (server): false
    Registry: https://index.docker.io/v1/
    WARNING: No swap limit support
    Insecure Registries:
    123.456.789.0:9595
    127.0.0.0/8

  • docker -v
    Docker version 1.12.3, build 6b644ec

  • Initialize kubeadm on the master
    kubeadm init --token 123456.1234567890123456 --api-advertise-addresses 192.168.91.133

  • Create the Kubernetes secret
    kubectl create secret docker-registry my-secret --docker-server=123.456.789.0 --docker-username=admin --docker-password=XXXX --docker-email=[email protected]
  • Created the Weave network

    kubectl apply -f https://git.io/weave-kube

  • From the slave, join the master's network
    kubeadm join --token=123456.1234567890123456 192.168.91.133
  • Pod definition
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  labels:
    name: test
spec:
  containers:
    - image: 123.456.789.0:9595/test
      name: test
      ports:
        - containerPort: 8443
  imagePullSecrets:
    - name: my-secret
  • Then, tried to create the pod. The configured pod image is located in the Nexus Docker repository. I get the trace below when describing the pod:
Name:           test-pod
Namespace:      default
Node:           ubuntu-child/192.168.91.134
Start Time:     Thu, 16 Feb 2017 12:26:56 +0530
Labels:         name=test
Status:         Pending
IP:             10.44.0.2
Controllers:    <none>
Containers:
  test:
    Container ID:
    Image:              123.456.789.0:9595/test
    Image ID:
    Port:               8443/TCP
    State:              Waiting
      Reason:           ErrImagePull
    Ready:              False
    Restart Count:      0
    Volume Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-vkj94 (ro)
    Environment Variables:      <none>
Conditions:
  Type          Status
  Initialized   True
  Ready         False
  PodScheduled  True
Volumes:
  default-token-vkj94:
    Type:       Secret (a volume populated by a Secret)
    SecretName: default-token-vkj94
QoS Class:      BestEffort
Tolerations:    <none>
Events:
  FirstSeen     LastSeen        Count   From                    SubObjectPath                   Type            Reason          Message
  ---------     --------        -----   ----                    -------------                   --------        ------          -------
  9s            9s              1       {default-scheduler }                                    Normal          Scheduled       Successfully assigned test-pod to ubuntu-child
  7s            7s              1       {kubelet ubuntu-child}  spec.containers{test}   Normal          Pulling         pulling image "123.456.789.0:9595/test"
  7s            7s              1       {kubelet ubuntu-child}  spec.containers{test}   Warning         Failed          Failed to pull image "123.456.789.0:9595/test": Error: image test:latest not found
  7s            7s              1       {kubelet ubuntu-child}                                  Warning         FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "test" with ErrImagePull: "Error: image test:latest not found"

  7s    7s      1       {kubelet ubuntu-child}  spec.containers{test}   Normal  BackOff         Back-off pulling image "123.456.789.0:9595/test"
  7s    7s      1       {kubelet ubuntu-child}                                  Warning FailedSync      Error syncing pod, skipping: failed to "StartContainer" for "test" with ImagePullBackOff: "Back-off pulling image \"123.456.789.0:9595/test\""
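
A note on the error text: "Error: image test:latest not found" can mean either that the test:latest tag is genuinely absent from the registry, or that the pull went out without usable credentials and the registry refused to reveal the image to an anonymous request. One way to check what the registry actually holds, assuming the Nexus Docker repository exposes the standard Docker Registry v2 API on that host and port:

    curl -u admin:XXXX http://123.456.789.0:9595/v2/test/tags/list
    # expected on success: {"name":"test","tags":["latest", ...]}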

Most helpful comment

The issue existed because the port number was missing from --docker-server. After adding the port number, it started working as expected.

kubectl create secret docker-registry my-secret --docker-server=123.456.789.0:9595 --docker-username=admin --docker-password=XXXX --docker-email=[email protected]

All 3 comments

From both the slave and the master, I can pull from the private repository directly. But the problem persists when kubectl tries to pull the image from the private repository, even though I have added my secret in the pod definition.
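
For reference, "could pull" here means a manual pull on each node succeeds, roughly along these lines (the test:latest tag is an assumption, matching the image reference in the pod spec):

    docker login 123.456.789.0:9595
    docker pull 123.456.789.0:9595/test:latest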

The issue existed because the port number was missing from --docker-server. After adding the port number, it started working as expected.

kubectl create secret docker-registry my-secret --docker-server=123.456.789.0:9595 --docker-username=admin --docker-password=XXXX --docker-email=[email protected]
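
The key detail is that --docker-server must match the registry host and port exactly as they appear in the pod's image reference (123.456.789.0:9595); otherwise the kubelet does not associate the credential with that registry and attempts an unauthenticated pull. A quick sanity check with plain kubectl (assuming the secret lives in the default namespace):

    kubectl get secret my-secret -o yaml
    # the base64-decoded data should contain "123.456.789.0:9595" as the registry entry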

