Kubeadm: Use kubeadm with proxy and output is ‘exit status 1’

Created on 7 Aug 2018 · 9 comments · Source: kubernetes/kubeadm

Hello everyone!
I'm living in China, and our network can't reach gcr.io.
So I want to use a proxy when I run "kubeadm init".

My proxy server is at http://127.0.0.1:45007, so I run export http_proxy=http://127.0.0.1:45007 && kubeadm init

But the output is:

[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
    [WARNING HTTPProxy]: Connection to "https://192.168.1.3" uses proxy "http://127.0.0.1:45007". If that is not intended, adjust your proxy settings
    [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://127.0.0.1:45007". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
I0806 07:03:09.507508   25649 kernel_validator.go:81] Validating kernel version
I0806 07:03:09.507594   25649 kernel_validator.go:96] Validating kernel config
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.0-ce. Max validated version: 17.03
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] Some fatal errors occurred:
    [ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-apiserver-amd64:v1.11.1]: exit status 1
    [ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-controller-manager-amd64:v1.11.1]: exit status 1
    [ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-scheduler-amd64:v1.11.1]: exit status 1
    [ERROR ImagePull]: failed to pull image [k8s.gcr.io/kube-proxy-amd64:v1.11.1]: exit status 1
    [ERROR ImagePull]: failed to pull image [k8s.gcr.io/pause:3.1]: exit status 1
    [ERROR ImagePull]: failed to pull image [k8s.gcr.io/etcd-amd64:3.2.18]: exit status 1
    [ERROR ImagePull]: failed to pull image [k8s.gcr.io/coredns:1.1.3]: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`

Kubernetes version: 1.11.1
OS: Ubuntu 18.04
Kernel: 4.15
Install tools: kubelet, kubeadm, kubectl

I set the proxy and kubeadm detected my settings, but there are still errors. I can fetch the images with curl or docker pull, and Docker looks completely normal. How should I solve this?

All 9 comments

@mxooc
we should have better image inspect / pull handling in 1.12.
i'm assuming you can skip the errors with --ignore-preflight-errors=ImagePull?

you need to pre-pull the images before that. list them with:
kubeadm config images list --kubernetes-version=1.11.1
and then pull them, either with Docker directly (see the sketch below) or with kubeadm:
kubeadm config images pull --kubernetes-version=1.11.1
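
if you go the Docker route, something like this should work (just a sketch, assuming the docker daemon itself can reach the registry through your proxy):

# feed the list kubeadm prints straight into docker pull
for img in $(kubeadm config images list --kubernetes-version=1.11.1); do
    docker pull "$img"
done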

@bart0sh i think we have this better at HEAD right now.
does it still say exit status 1 if an image cannot be pulled or does the combined output help with that?

@neolit123

> i think we have this better at HEAD right now. does it still say exit status 1 if an image cannot be pulled or does the combined output help with that?

Yes, we have much better error reporting for this at HEAD. The output includes the combined output of 'docker pull' or 'crictl pull'. It may include 'exit status 1' as well, as this is what the exec.Command API returns as an error.

@neolit123 When I run kubeadm config images pull --kubernetes-version=1.11.1,
the output is:
failed to pull image "k8s.gcr.io/kube-apiserver-amd64:v1.11.1": exit status 1
With this command, I can't use --ignore-preflight-errors=ImagePull to skip it.
Then I ran kubeadm init --ignore-preflight-errors=ImagePull and it says:

[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
    [WARNING HTTPProxy]: Connection to "https://192.168.1.3" uses proxy "http://127.0.0.1:45007". If that is not intended, adjust your proxy settings
    [WARNING HTTPProxyCIDR]: connection to "10.96.0.0/12" uses proxy "http://127.0.0.1:45007". This may lead to malfunctional cluster setup. Make sure that Pod and Services IP ranges specified correctly as exceptions in proxy configuration
I0808 06:18:40.442917    6242 kernel_validator.go:81] Validating kernel version
I0808 06:18:40.443044    6242 kernel_validator.go:96] Validating kernel config
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.0-ce. Max validated version: 17.03
[preflight] Some fatal errors occurred:
    [ERROR Port-10250]: Port 10250 is in use
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`          

How do I designate a different port? I tried adding -p or --port, but the output is the same as before.

@mxooc
what application is using that port?

you can try something like this to check:
sudo lsof -i :10250

also, please make sure you reset this node with kubeadm reset first.
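
for example, roughly this sequence (a sketch; re-run init with whatever flags you were using before):

# clear the state from the previous attempt so nothing keeps port 10250 busy
sudo kubeadm reset

# confirm the port is free again
sudo lsof -i :10250

# then retry
sudo kubeadm init --ignore-preflight-errors=ImagePull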

@mxooc BTW, keep in mind that 127.0.0.1 in the host's scope, where kubeadm runs, is not the same 127.0.0.1 as seen inside the control plane containers. Please use the real IP address of the proxy instead of localhost.
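
For example, something like this (just a sketch; I'm assuming the proxy listens on the node's LAN address 192.168.1.3 from your log, on the same port):

# hypothetical LAN address of the proxy host; adjust to your real setup
export http_proxy=http://192.168.1.3:45007
export https_proxy=http://192.168.1.3:45007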

Also, please make sure that you have the no_proxy variable set to at least something like

export no_proxy=192.168.0.0/16,10.0.0.0/8,.local

One more thing, can you please show your output of docker info?

@neolit123

COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
kubelet 5541 root   18u  IPv6 405756      0t0  TCP *:10250 (LISTEN)

@kad the output is:

Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 6
Server Version: 18.06.0-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: d64c661f1d51c48782c9cec8fda7604785f93587
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 apparmor
 seccomp
  Profile: default
Kernel Version: 4.15.0-30-generic
Operating System: Ubuntu 18.04.1 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 3.716GiB
Name: c
ID: DZWQ:74K3:UXBP:2UVS:SGIT:C6K5:PNY7:NIA5:ILW2:TAJ5:R4NM:AHHP
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Username: chenyan05888
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

WARNING: No swap limit support

It always times out :disappointed:

        Unfortunately, an error has occurred:
            timed out waiting for the condition

        This error is likely caused by:
            - The kubelet is not running
            - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
            - No internet connection is available so the kubelet cannot pull or find the following control plane images:
                - k8s.gcr.io/kube-apiserver-amd64:v1.11.1
                - k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
                - k8s.gcr.io/kube-scheduler-amd64:v1.11.1
                - k8s.gcr.io/etcd-amd64:3.2.18
                - You can check or miligate this in beforehand with "kubeadm config images pull" to make sure the images
                  are downloaded locally and cached.

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
            - 'systemctl status kubelet'
            - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI, e.g. docker.
        Here is one example how you may list all Kubernetes containers running in docker:
            - 'docker ps -a | grep kube | grep -v pause'
            Once you have found the failing container, you can inspect its logs with:
            - 'docker logs CONTAINERID'
couldn't initialize a Kubernetes cluster

@mxooc according to what I see in your docker info output, you don't have a proxy configured for the docker daemon. Please follow the documentation at https://docs.docker.com/config/daemon/systemd/#httphttps-proxy to set it correctly, and verify that docker info shows lines similar to:

kad@kad:~> docker info  | grep -i proxy
Http Proxy: http://proxy-chain.example.com:8080
Https Proxy: http://proxy-chain.example.com:8080
No Proxy: localhost,127.0.0.1,.example.com
kad@kad:~>
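
On a systemd host that usually means a drop-in file for the docker service, roughly like this (a sketch; the proxy address and NO_PROXY ranges below are just the ones from this thread, adjust them to your environment):

# drop-in that passes the proxy settings to the docker daemon
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://192.168.1.3:45007"
Environment="HTTPS_PROXY=http://192.168.1.3:45007"
Environment="NO_PROXY=localhost,127.0.0.1,192.168.0.0/16,10.0.0.0/8,.local"
EOF

# reload systemd and restart docker so the daemon picks up the variables
sudo systemctl daemon-reload
sudo systemctl restart docker

# the Http Proxy / Https Proxy / No Proxy lines should now show up
docker info | grep -i proxy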

@neolit123 @kad @bart0sh Thanks, the problem is solved :smiley:
