Choose one: BUG REPORT
kubeadm version (use kubeadm version):
root@ip:~ # kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.0-alpha.0.2473+fea4ad2783f59d", GitCommit:"fea4ad2783f59d44cc0db907523a07499db172ec", GitTreeState:"clean", BuildDate:"2018-07-27T03:13:19Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/ppc64le"}
root@ip:~ #
Environment:
root@ip:~ # kubeadm init --kubernetes-version ci-cross/latest
[init] using Kubernetes version: v1.12.0-alpha.0.2473+fea4ad2783f59d
[preflight] running pre-flight checks
I0726 23:51:46.864964 11860 kernel_validator.go:81] Validating kernel version
I0726 23:51:46.865467 11860 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.1-ce. Max validated version: 17.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to check if image gcr.io/kubernetes-ci-images/kube-apiserver-ppc64le:v1.12.0-alpha.0.2473_fea4ad2783f59d exists: output: []
Error: No such object: gcr.io/kubernetes-ci-images/kube-apiserver-ppc64le:v1.12.0-alpha.0.2473_fea4ad2783f59d
, error: exit status 1
[ERROR ImagePull]: failed to check if image gcr.io/kubernetes-ci-images/kube-controller-manager-ppc64le:v1.12.0-alpha.0.2473_fea4ad2783f59d exists: output: []
Error: No such object: gcr.io/kubernetes-ci-images/kube-controller-manager-ppc64le:v1.12.0-alpha.0.2473_fea4ad2783f59d
, error: exit status 1
[ERROR ImagePull]: failed to check if image gcr.io/kubernetes-ci-images/kube-scheduler-ppc64le:v1.12.0-alpha.0.2473_fea4ad2783f59d exists: output: []
Error: No such object: gcr.io/kubernetes-ci-images/kube-scheduler-ppc64le:v1.12.0-alpha.0.2473_fea4ad2783f59d
, error: exit status 1
[ERROR ImagePull]: failed to check if image gcr.io/kubernetes-ci-images/kube-proxy-ppc64le:v1.12.0-alpha.0.2473_fea4ad2783f59d exists: output: []
Error: No such object: gcr.io/kubernetes-ci-images/kube-proxy-ppc64le:v1.12.0-alpha.0.2473_fea4ad2783f59d
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with --ignore-preflight-errors=...
root@ip:~ #
What you expected to happen: the cluster should have come up.
How to reproduce it: run kubeadm init --kubernetes-version ci-cross/latest with the latest version of kubeadm on ppc64le.
@luxas @dims
y, we are not yet ready for this. a few more things need to change in kubeadm
it was working before; I'm not really sure what broke all of a sudden.
i'm not exactly sure how this worked before, but google did make some changes recently to the staging/main container registry.
for this to work right now, images need to be uploaded to gcr.io on each CI build, and i don't think this will happen... gcr.io right now only contains release-related images.
if you know where the images are you can change this in the kubeadm config:
imageRepository: k8s.gcr.io
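as a sketch, an override like that would sit in a kubeadm config file passed to init. note the apiVersion/kind below are an assumption for the v1.11/v1.12-era config format (MasterConfiguration under kubeadm.k8s.io/v1alpha2); check what your kubeadm build actually accepts before using it:

```yaml
# hypothetical kubeadm.yaml sketch: point kubeadm at a registry that
# actually has images for your architecture. apiVersion/kind and the
# kubernetesVersion value are assumptions; adjust to your kubeadm version.
apiVersion: kubeadm.k8s.io/v1alpha2
kind: MasterConfiguration
kubernetesVersion: v1.11.1
imageRepository: k8s.gcr.io
```

you'd then run kubeadm init --config kubeadm.yaml instead of passing --kubernetes-version on the command line.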
@mkumatag @neolit123 let me leave a few notes here that may help.
The build job output for ci-cross is here : http://gcsweb.k8s.io/gcs/kubernetes-jenkins/logs/ci-kubernetes-cross-build/?marker=logs%2fci-kubernetes-cross-build%2f7153%2f
if you pick one of the runs say : https://storage.googleapis.com/kubernetes-jenkins/logs/ci-kubernetes-cross-build/7141/build-log.txt
you can see the following :
I0618 05:50:28.230] Pushing gcr.io/kubernetes-ci-images/hyperkube-amd64:v1.12.0-alpha.0.1067_6d3f5b75f56323:
So if you run the following you can see the image there ...
docker run -it --entrypoint /bin/sh "gcr.io/kubernetes-ci-images/hyperkube:v1.12.0-alpha.0.1067_6d3f5b75f56323"
and if i stick in the version+sha from your output, that image is there too ...
docker run -it --entrypoint /bin/sh "gcr.io/kubernetes-ci-images/hyperkube:v1.12.0-alpha.0.2473_fea4ad2783f59d"
You can also go view the images here:
https://console.cloud.google.com/gcr/images/kubernetes-ci-images/GLOBAL/kube-apiserver?gcrImageListsize=50
Does that give you enough clues to figure out why kubeadm may be tripping?
@dims
thanks for the insights.
i missed the part where kubeadm handles ci/ci-cross as gcr.io/kubernetes-ci-images, but this should all work fine regardless.
my problem with this report is that:
kubeadm init --kubernetes-version ci-cross/latest
and
kubeadm config images pull --kubernetes-version ci-cross/latest
work for me on amd64 and with the latest k8s master.
also the ppc64le:v1.12.0-alpha.0.2473_fea4ad2783f59d images for the control plane exist in the registry.
@sudeeshjohn
please re-test and report if this has started working for you.
@neolit123 I have re-checked, I'm still facing the same issue.
root@ip:~/test-infra/tests/k8s-conformance# kubeadm init --kubernetes-version ci-cross/latest
[init] using Kubernetes version: v1.12.0-alpha.0.2554+24fa5edb60e64b
[preflight] running pre-flight checks
I0730 22:10:13.106985 22229 kernel_validator.go:81] Validating kernel version
I0730 22:10:13.107169 22229 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.03.1-ce. Max validated version: 17.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[preflight] Some fatal errors occurred:
[ERROR ImagePull]: failed to check if image gcr.io/kubernetes-ci-images/kube-apiserver-ppc64le:v1.12.0-alpha.0.2554_24fa5edb60e64b exists: output: []
Error: No such object: gcr.io/kubernetes-ci-images/kube-apiserver-ppc64le:v1.12.0-alpha.0.2554_24fa5edb60e64b
, error: exit status 1
[ERROR ImagePull]: failed to check if image gcr.io/kubernetes-ci-images/kube-controller-manager-ppc64le:v1.12.0-alpha.0.2554_24fa5edb60e64b exists: output: []
Error: No such object: gcr.io/kubernetes-ci-images/kube-controller-manager-ppc64le:v1.12.0-alpha.0.2554_24fa5edb60e64b
, error: exit status 1
[ERROR ImagePull]: failed to check if image gcr.io/kubernetes-ci-images/kube-scheduler-ppc64le:v1.12.0-alpha.0.2554_24fa5edb60e64b exists: output: []
Error: No such object: gcr.io/kubernetes-ci-images/kube-scheduler-ppc64le:v1.12.0-alpha.0.2554_24fa5edb60e64b
, error: exit status 1
[ERROR ImagePull]: failed to check if image gcr.io/kubernetes-ci-images/kube-proxy-ppc64le:v1.12.0-alpha.0.2554_24fa5edb60e64b exists: output: []
Error: No such object: gcr.io/kubernetes-ci-images/kube-proxy-ppc64le:v1.12.0-alpha.0.2554_24fa5edb60e64b
, error: exit status 1
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
root@ip:~/test-infra/tests/k8s-conformance#
thank you,
i'm not exactly sure why i'm not getting these errors.
but this will obviously fail:
docker inspect gcr.io/kubernetes-ci-images/kube-apiserver-ppc64le:v1.12.0-alpha.0.2473_fea4ad2783f59d
(which is behind the "failed to check if image..." error)
refs:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/preflight/checks.go#L830
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/preflight/checks.go#L1014
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/util/runtime/runtime.go#L183
feeding the URL instead of the ID won't work.
it should be:
docker inspect `docker images <URL> --format "{{.ID}}"`
@rosti @bart0sh
/kind bug
P.S.: no idea about crictl.
I'll have a look at this. Assigning to my mentor.
/assign @kad
I am not sure if ppc64le images are built and deployed to the registry at all. There are Ubuntu packages for ppc64el, but I don't think these work...
@sudeeshjohn Do you know the last stable version that worked with ppc64le?
rosti@ubuntu:~/go/src/k8s.io/kubernetes$ docker inspect k8s.gcr.io/kube-apiserver-ppc64el:v1.11.1
[]
Error: No such object: k8s.gcr.io/kube-apiserver-ppc64el:v1.11.1
rosti@ubuntu:~/go/src/k8s.io/kubernetes$ docker inspect k8s.gcr.io/kube-apiserver-ppc64le:v1.11.1
[]
Error: No such object: k8s.gcr.io/kube-apiserver-ppc64le:v1.11.1
rosti@ubuntu:~/go/src/k8s.io/kubernetes$ docker inspect k8s.gcr.io/kube-apiserver-amd64:v1.11.1
[
{
"Id": "sha256:816332bd9d1150597e693bfa47af7b4474be6a0d7bc7b33cefca2ae14c59e75c",
...
@neolit123
this will obviously fail:
docker inspect gcr.io/kubernetes-ci-images/kube-apiserver-ppc64le:v1.12.0-alpha.0.2473_fea4ad2783f59d
It works for me just fine:
$ docker inspect gcr.io/kubernetes-ci-images/kube-apiserver-amd64:v1.12.0-alpha.0.2565_10688257e63e4d
[
{
"Id": "sha256:e306f51308937215238de64452c5904c2420d78abc5ffa3eb09633db69ae54f2",
"RepoTags": [
"gcr.io/kubernetes-ci-images/kube-apiserver-amd64:v1.12.0-alpha.0.2565_10688257e63e4d"
],
"RepoDigests": [
"gcr.io/kubernetes-ci-images/kube-apiserver-amd64@sha256:a254478ab42fa1f38486d9d91ba5a43ddf90f0588c3f15b6cf01277ffdeb97e2"
],
...
It looks like the reason for this issue is that runtime returns error when the above command fails:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/util/runtime/runtime.go#L186
And then ImagePullCheck check fails because of this: https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/preflight/checks.go#L838
I think runtime shouldn't return an error in this case. I'll prepare a PR.
@bart0sh you have changed the image suffix to amd64 which we have. The thing is that ppc64le images are missing and it will always fail.
The problem here is that we generate kubeadm packages for ppc, but we don't have Docker images for it.
@rosti
The problem here is that we generate kubeadm packages for ppc, but we don't have Docker images for it.
but this:
docker pull gcr.io/kubernetes-ci-images/kube-apiserver-ppc64le:v1.12.0-alpha.0.2554_24fa5edb60e64b
works for me
@bart0sh
It works for me just fine:
hm. my mistake, debugging early in the morning.
I think runtime shouldn't return an error in this case. I'll prepare a PR.
yes, seems like so. it attempts to pull images right after so it should not treat missing images like fatal errors.
Yep, you are right. For some reason I was trying to pull ppc64el images (probably because of the deb package suffix).
@sudeeshjohn given you are using the master branch (possibly building from source), care to verify if this PR fixes the issue for you?
https://github.com/kubernetes/kubernetes/pull/66822
cool, that patch works. Thank you!
root@ip:/go/src/github.com/kubernetes# ./kubeadm init --kubernetes-version ci-cross/latest
[init] using Kubernetes version: v1.12.0-alpha.0.2608+8e2d37ee63d0a1
[preflight] running pre-flight checks
I0801 03:27:23.300869 10998 kernel_validator.go:81] Validating kernel version
I0801 03:27:23.301114 10998 kernel_validator.go:96] Validating kernel config
[WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 18.06.0-ce. Max validated version: 17.03
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [ip9-114-192-231.pok.stglabs.ibm.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 9.114.192.231]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated sa key and public key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [ip9-114-192-231.pok.stglabs.ibm.com localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [ip9-114-192-231.pok.stglabs.ibm.com localhost] and IPs [9.114.192.231 127.0.0.1 ::1]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 23.003715 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node ip9-114-192-231.pok.stglabs.ibm.com as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node ip9-114-192-231.pok.stglabs.ibm.com as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "ip9-114-192-231.pok.stglabs.ibm.com" as an annotation
[bootstraptoken] using token: bf95ei.ei3vltp25mekfxcx
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join ip:6443 --token bf95ei.ei3vltp25mekfxcx --discovery-token-ca-cert-hash sha256:d646674f1eefb7b78867c988699884f645c8156682efa3b6f4382967008bb4eb
root@ip:/go/src/github.com/kubernetes#
@sudeeshjohn thanks for testing!
@sudeeshjohn thanks! one more request. what does your kubeadm config images list say?
@dims here is what it shows
root@ip:/go/src/github.com/kubernetes# ./kubeadm config images list
k8s.gcr.io/kube-apiserver-ppc64le:v1.11.1
k8s.gcr.io/kube-controller-manager-ppc64le:v1.11.1
k8s.gcr.io/kube-scheduler-ppc64le:v1.11.1
k8s.gcr.io/kube-proxy-ppc64le:v1.11.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd-ppc64le:3.2.18
k8s.gcr.io/coredns:1.1.3
root@ip:/go/src/github.com/kubernetes#
Thanks @sudeeshjohn