I upgraded my cluster to Kubernetes 1.9.4, and after that the Kubernetes dashboard is stuck in a CrashLoopBackOff state. I have tried deleting it and recreating it, but I get the same error.
Here is the log:
2018/03/21 11:04:16 Starting overwatch
2018/03/21 11:04:16 Using in-cluster config to connect to apiserver
2018/03/21 11:04:16 Using service account token for csrf signing
2018/03/21 11:04:16 No request provided. Skipping authorization
2018/03/21 11:04:16 Successful initial request to the apiserver, version: v1.9.4
2018/03/21 11:04:16 Generating JWE encryption key
2018/03/21 11:04:16 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2018/03/21 11:04:16 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2018/03/21 11:04:17 Initializing JWE encryption key from synchronized object
2018/03/21 11:04:17 Creating in-cluster Heapster client
2018/03/21 11:04:17 Auto-generating certificates
2018/03/21 11:04:17 [ECDSAManager] Failed to open dashboard.crt for writing: open /certs/dashboard.crt: read-only file system
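That last line is the actual crash. As I understand it, Kubernetes 1.9.4 (and the corresponding 1.8.x patch releases) started mounting Secret, ConfigMap, and downwardAPI volumes read-only as part of a security fix, and the v1.8.1 dashboard manifest mounts its certs Secret at /certs, which is exactly where the dashboard tries to write its auto-generated certificate. A minimal illustrative workaround is to back /certs with a writable emptyDir instead; the container and volume names below are assumed from the stock addon manifest, and the real fix is the dashboard v1.8.3 manifest discussed further down:

# Hypothetical excerpt of the kubernetes-dashboard Deployment's pod spec;
# volume/mount names are assumed from the stock v1.8.1 addon manifest.
spec:
  containers:
  - name: kubernetes-dashboard
    volumeMounts:
    - name: kubernetes-dashboard-certs
      mountPath: /certs        # must be writable so dashboard.crt can be created
  volumes:
  - name: kubernetes-dashboard-certs
    emptyDir: {}               # writable, unlike a Secret volume on 1.9.4+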
What kops version are you running? The command kops version will display this information.
Version 1.9.0-alpha.1 (git-f799036a3)
What Kubernetes version are you running? kubectl version will print the version if a cluster is running, or provide the Kubernetes version specified as a kops flag.
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T10:09:24Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.4", GitCommit:"bee2d1505c4fe820744d26d41ecd3fdd4a3d6546", GitTreeState:"clean", BuildDate:"2018-03-12T16:21:35Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
What cloud provider are you using?
AWS
What commands did you run? What is the simplest way to reproduce this issue?
kubectl create -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.8.1.yaml
What happened after the commands executed?
kube-system kubernetes-dashboard-6ddcb6df4c-blb6x 0/1 CrashLoopBackOff 12 40m
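(For reference, a crash log like the one at the top of this report can be pulled from the failing container, using the pod name above:)

kubectl -n kube-system logs kubernetes-dashboard-6ddcb6df4c-blb6x --previous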
What did you expect to happen?
I expected the pod to be running.
Please provide your cluster manifest. Execute
kops get --name my.example.com -oyaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.
Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.
Anything else we need to know?
Seeing the same on 1.8.10 in my throwaway testing-cluster pipeline. Booted with kops version 1.9.0-alpha.1 (git-f799036a3) and the latest template:
kops create cluster \
  --bastion \
  --dns private \
  --dns-zone=ZYDKGKSR1KJYO \
  --networking=flannel-vxlan \
  --topology='private' \
  --zones=us-west-1b \
  --node-count=2 \
  --node-size=c5.xlarge \
  --master-count=1 \
  --master-size=m4.large \
  --node-volume-size=150 \
  --kubernetes-version=1.8.10 \
  --ssh-public-key=~/.ssh/lolbyethx.pub \
  --state=s3://bla-none-of-your-business \
  ${NAME}
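A quick sanity check between cluster creation and addon installation (a sketch reusing the state store and ${NAME} from the command above; kops validate waits on the nodes registering):

# Confirm the control plane and both nodes are up before installing addons.
kops validate cluster --state=s3://bla-none-of-your-business --name ${NAME}
kubectl get nodes -o wide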
doertedev@stefan-workstation:~/Code/infra/on-demand/k8s$ kubectl version -v 10
I0321 17:03:08.565098 24568 loader.go:357] Config loaded from file /home/doertedev/.kube/config
I0321 17:03:08.567250 24568 round_trippers.go:417] curl -k -v -XGET -H "User-Agent: kubectl/v1.9.5 (linux/amd64) kubernetes/f01a2bf" -H "Authorization: Basic YWRtaW46aU04aHpsOHQzOG1MZGh1bGltZWZCSW5aRXg4b1h0bjA=" -H "Accept: application/json, */*" https://api-professional-devops-alcoholics-pnic3f-1280399744.us-west-1.elb.amazonaws.com/version
I0321 17:03:09.388941 24568 round_trippers.go:436] GET https://api-professional-devops-alcoholics-pnic3f-1280399744.us-west-1.elb.amazonaws.com/version 200 OK in 821 milliseconds
I0321 17:03:09.388988 24568 round_trippers.go:442] Response Headers:
I0321 17:03:09.389004 24568 round_trippers.go:445] Content-Type: application/json
I0321 17:03:09.389022 24568 round_trippers.go:445] Content-Length: 261
I0321 17:03:09.389033 24568 round_trippers.go:445] Date: Wed, 21 Mar 2018 16:03:09 GMT
I0321 17:03:09.412033 24568 request.go:873] Response Body: {
"major": "1",
"minor": "8",
"gitVersion": "v1.8.10",
"gitCommit": "044cd262c40234014f01b40ed7b9d09adbafe9b1",
"gitTreeState": "clean",
"buildDate": "2018-03-19T17:44:09Z",
"goVersion": "go1.8.3",
"compiler": "gc",
"platform": "linux/amd64"
}
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.5", GitCommit:"f01a2bf98249a4db383560443a59bed0c13575df", GitTreeState:"clean", BuildDate:"2018-03-19T15:59:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.10", GitCommit:"044cd262c40234014f01b40ed7b9d09adbafe9b1", GitTreeState:"clean", BuildDate:"2018-03-19T17:44:09Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
doertedev@stefan-workstation:~/Code/infra/on-demand/k8s$ kubectl get pods --namespace=kube-system -v 8
I0321 17:02:10.604479 24171 loader.go:357] Config loaded from file /home/doertedev/.kube/config
I0321 17:02:10.617702 24171 round_trippers.go:414] GET https://api-professional-devops-alcoholics-pnic3f-1280399744.us-west-1.elb.amazonaws.com/api/v1/namespaces/kube-system/pods?limit=500
I0321 17:02:10.617729 24171 round_trippers.go:421] Request Headers:
I0321 17:02:10.617738 24171 round_trippers.go:424] User-Agent: kubectl/v1.9.5 (linux/amd64) kubernetes/f01a2bf
I0321 17:02:10.617747 24171 round_trippers.go:424] Accept: application/json
I0321 17:02:10.617757 24171 round_trippers.go:424] Authorization: Basic YWRtaW46aU04aHpsOHQzOG1MZGh1bGltZWZCSW5aRXg4b1h0bjA=
I0321 17:02:11.417679 24171 round_trippers.go:439] Response Status: 200 OK in 799 milliseconds
I0321 17:02:11.417704 24171 round_trippers.go:442] Response Headers:
I0321 17:02:11.417716 24171 round_trippers.go:445] Content-Type: application/json
I0321 17:02:11.417729 24171 round_trippers.go:445] Date: Wed, 21 Mar 2018 16:02:11 GMT
I0321 17:02:11.809248 24171 request.go:873] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/kube-system/pods","resourceVersion":"2682"},"items":[{"metadata":{"name":"dns-controller-5f989c969c-zbjkh","generateName":"dns-controller-5f989c969c-","namespace":"kube-system","selfLink":"/api/v1/namespaces/kube-system/pods/dns-controller-5f989c969c-zbjkh","uid":"1d622225-2d1d-11e8-95dd-061da378a924","resourceVersion":"349","creationTimestamp":"2018-03-21T15:32:50Z","labels":{"k8s-addon":"dns-controller.addons.k8s.io","k8s-app":"dns-controller","pod-template-hash":"1954575257","version":"v1.9.0-alpha.1"},"annotations":{"kubernetes.io/created-by":"{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"kube-system\",\"name\":\"dns-controller-5f989c969c\",\"uid\":\"1cc5efe7-2d1d-11e8-95dd-061da378a924\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"221\"}}\n","scheduler.alpha.kubernetes.io/critical-pod":"","scheduler.alpha.kubernetes.io/tolerations":"[{\"key\": \"dedicated [truncated 67264 chars]
I0321 17:02:11.834121 24171 round_trippers.go:414] GET https://api-professional-devops-alcoholics-pnic3f-1280399744.us-west-1.elb.amazonaws.com/swagger-2.0.0.pb-v1
I0321 17:02:11.834213 24171 round_trippers.go:421] Request Headers:
I0321 17:02:11.834241 24171 round_trippers.go:424] Accept: application/json, */*
I0321 17:02:11.834275 24171 round_trippers.go:424] If-None-Match: "9EFC87403772BEFCA334C5F75843ECAFFE3C8C4711510879B3EEFE0A259A6625FA065F07AAB6AFEC620AD00883B1E75F8FB84422ED1A8C8E6E2E604748373C2D"
I0321 17:02:11.834311 24171 round_trippers.go:424] If-Modified-Since: Wed, 21 Mar 2018 15:32:17 GMT
I0321 17:02:11.834340 24171 round_trippers.go:424] User-Agent: kubectl/v1.9.5 (linux/amd64) kubernetes/f01a2bf
I0321 17:02:11.834369 24171 round_trippers.go:424] Authorization: Basic YWRtaW46aU04aHpsOHQzOG1MZGh1bGltZWZCSW5aRXg4b1h0bjA=
I0321 17:02:12.028460 24171 round_trippers.go:439] Response Status: 304 Not Modified in 194 milliseconds
I0321 17:02:12.028494 24171 round_trippers.go:442] Response Headers:
I0321 17:02:12.028504 24171 round_trippers.go:445] Etag: "9EFC87403772BEFCA334C5F75843ECAFFE3C8C4711510879B3EEFE0A259A6625FA065F07AAB6AFEC620AD00883B1E75F8FB84422ED1A8C8E6E2E604748373C2D"
I0321 17:02:12.028514 24171 round_trippers.go:445] Vary: Accept-Encoding
I0321 17:02:12.028525 24171 round_trippers.go:445] Date: Wed, 21 Mar 2018 16:02:11 GMT
I0321 17:02:12.195989 24171 request.go:871] Response Body:
00000000 0a 03 32 2e 30 12 15 0a 0a 4b 75 62 65 72 6e 65 |..2.0....Kuberne|
00000010 74 65 73 12 07 76 31 2e 38 2e 31 30 42 a5 88 52 |tes..v1.8.10B..R|
00000020 12 ca 02 0a 05 2f 61 70 69 2f 12 c0 02 12 bd 02 |...../api/......|
00000030 0a 04 63 6f 72 65 1a 1a 67 65 74 20 61 76 61 69 |..core..get avai|
00000040 6c 61 62 6c 65 20 41 50 49 20 76 65 72 73 69 6f |lable API versio|
00000050 6e 73 2a 12 67 65 74 43 6f 72 65 41 50 49 56 65 |ns*.getCoreAPIVe|
00000060 72 73 69 6f 6e 73 32 10 61 70 70 6c 69 63 61 74 |rsions2.applicat|
00000070 69 6f 6e 2f 6a 73 6f 6e 32 10 61 70 70 6c 69 63 |ion/json2.applic|
00000080 61 74 69 6f 6e 2f 79 61 6d 6c 32 23 61 70 70 6c |ation/yaml2#appl|
00000090 69 63 61 74 69 6f 6e 2f 76 6e 64 2e 6b 75 62 65 |ication/vnd.kube|
000000a0 72 6e 65 74 65 73 2e 70 72 6f 74 6f 62 75 66 3a |rnetes.protobuf:|
000000b0 10 61 70 70 6c 69 63 61 74 69 6f 6e 2f 6a 73 6f |.application/jso|
000000c0 6e 3a 10 61 70 70 6c 69 63 61 74 69 6f 6e 2f 79 |n:.application/ [truncated 9092031 chars]
NAME                                                                  READY   STATUS             RESTARTS   AGE
dns-controller-5f989c969c-zbjkh                                       1/1     Running            0          29m
etcd-server-events-ip-172-20-35-209.us-west-1.compute.internal        1/1     Running            0          29m
etcd-server-ip-172-20-35-209.us-west-1.compute.internal               1/1     Running            0          28m
kube-apiserver-ip-172-20-35-209.us-west-1.compute.internal            1/1     Running            0          28m
kube-controller-manager-ip-172-20-35-209.us-west-1.compute.internal   1/1     Running            0          29m
kube-dns-58b58b768-g4tmz                                              3/3     Running            0          29m
kube-dns-58b58b768-rsvg9                                              3/3     Running            0          26m
kube-dns-autoscaler-f4c47db64-zrskm                                   1/1     Running            0          29m
kube-flannel-ds-9hmr9                                                 1/1     Running            2          27m
kube-flannel-ds-d9sgh                                                 1/1     Running            0          29m
kube-flannel-ds-sttwb                                                 1/1     Running            0          27m
kube-proxy-ip-172-20-35-209.us-west-1.compute.internal                1/1     Running            0          28m
kube-proxy-ip-172-20-41-169.us-west-1.compute.internal                1/1     Running            0          26m
kube-proxy-ip-172-20-48-57.us-west-1.compute.internal                 1/1     Running            0          26m
kube-scheduler-ip-172-20-35-209.us-west-1.compute.internal            1/1     Running            0          28m
kubernetes-dashboard-7798c48646-p5kq6                                 0/1     CrashLoopBackOff   7          12m
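(A typical next step to see why that pod keeps restarting, using the pod name from the listing above:)

kubectl -n kube-system describe pod kubernetes-dashboard-7798c48646-p5kq6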
doertedev@stefan-workstation:~/Code/infra/on-demand/k8s$ kops get --name ${NAME} -oyaml -v 10
I0321 17:00:46.104750 23658 s3context.go:172] Found bucket "fuggen-s3-bucket" in region "us-west-1"
I0321 17:00:46.104963 23658 s3fs.go:198] Reading file "s3://fuggen-s3-bucket/professional-devops-alcoholics/config"
I0321 17:00:46.982639 23658 s3fs.go:235] Listing objects in S3 bucket "fuggen-s3-bucket" with prefix "professional-devops-alcoholics/instancegroup/"
I0321 17:00:47.332938 23658 s3fs.go:261] Listed files in s3://fuggen-s3-bucket/professional-devops-alcoholics/instancegroup: [s3://fuggen-s3-bucket/professional-devops-alcoholics/instancegroup/bastions s3://fuggen-s3-bucket/professional-devops-alcoholics/instancegroup/master-us-west-1b s3://fuggen-s3-bucket/professional-devops-alcoholics/instancegroup/nodes]
I0321 17:00:47.332984 23658 s3fs.go:198] Reading file "s3://fuggen-s3-bucket/professional-devops-alcoholics/instancegroup/bastions"
I0321 17:00:47.544630 23658 s3fs.go:198] Reading file "s3://fuggen-s3-bucket/professional-devops-alcoholics/instancegroup/master-us-west-1b"
I0321 17:00:47.769731 23658 s3fs.go:198] Reading file "s3://fuggen-s3-bucket/professional-devops-alcoholics/instancegroup/nodes"
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2018-03-21T15:28:18Z
  name: professional-devops-alcoholics
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://fuggen-s3-bucket/professional-devops-alcoholics
  dnsZone: ZYDKGKSR1KJYO
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-us-west-1b
      name: b
    name: main
  - etcdMembers:
    - instanceGroup: master-us-west-1b
      name: b
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.8.10
  masterPublicName: api.professional-devops-alcoholics
  networkCIDR: 172.20.0.0/16
  networking:
    flannel:
      backend: vxlan
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 172.20.32.0/19
    name: us-west-1b
    type: Private
    zone: us-west-1b
  - cidr: 172.20.0.0/22
    name: utility-us-west-1b
    type: Utility
    zone: us-west-1b
  topology:
    bastion:
      bastionPublicName: bastion.professional-devops-alcoholics
    dns:
      type: Private
    masters: private
    nodes: private
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-03-21T15:28:19Z
  labels:
    kops.k8s.io/cluster: professional-devops-alcoholics
  name: bastions
spec:
  image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-02-08
  machineType: t2.micro
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: bastions
  role: Bastion
  subnets:
  - utility-us-west-1b
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-03-21T15:28:19Z
  labels:
    kops.k8s.io/cluster: professional-devops-alcoholics
  name: master-us-west-1b
spec:
  image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-02-08
  machineType: m4.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-west-1b
  role: Master
  subnets:
  - us-west-1b
---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-03-21T15:28:19Z
  labels:
    kops.k8s.io/cluster: professional-devops-alcoholics
  name: nodes
spec:
  image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-02-08
  machineType: c5.xlarge
  maxSize: 2
  minSize: 2
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  rootVolumeSize: 150
  subnets:
  - us-west-1b
Seems like this is related to, and fixed in, kubernetes/dashboard v1.8.3:
https://github.com/kubernetes/dashboard/releases/tag/v1.8.3
https://github.com/kubernetes/dashboard/pull/2795
I can confirm: I just updated the dashboard from v1.8.1 to v1.8.3, and it fixes the issues above (the CrashLoopBackOff and the "[ECDSAManager] Failed to open dashboard.crt for writing: open /certs/dashboard.crt: read-only file system" error).
@lasse-kristensen @doertedev
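For anyone else hitting this, the swap is just replacing the addon manifest (assuming kops publishes the v1.8.3 manifest alongside the v1.8.1 one referenced above):

# Remove the crashing v1.8.1 dashboard and apply the fixed v1.8.3 manifest.
kubectl delete -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.8.1.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.8.3.yaml
# Watch the replacement pod come up (label assumed from the standard dashboard manifest).
kubectl -n kube-system get pods -l k8s-app=kubernetes-dashboard -w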
Thanks for letting us know. Will reroll the pipelines and report back.
:+1: Confirmed working here.
I have just upgraded to 1.8.3 and it's working fine now, thank you.
I have the same issue on my Kubernetes cluster. The dashboard version is v1.8.3.
kubectl version --short
Client Version: v1.11.2
Server Version: v1.11.2
docker version
Client:
 Version:           18.06.1-ce
 API version:       1.38
 Go version:        go1.10.3
 Git commit:        e68fc7a
 Built:             Tue Aug 21 17:24:56 2018
 OS/Arch:           linux/amd64
 Experimental:      false

Server:
 Engine:
  Version:          18.06.1-ce
  API version:      1.38 (minimum version 1.12)
  Go version:       go1.10.3
  Git commit:       e68fc7a
  Built:            Tue Aug 21 17:23:21 2018
  OS/Arch:          linux/amd64
  Experimental:     false
@Sinai65
You're using Kubernetes 1.11.x, which is not officially supported by kops at the moment (please refer to the compatibility chart on the https://github.com/kubernetes/kops/ page). I highly advise following the kops compatibility chart. :)
For your issue, you could try deploying the latest version of the dashboard (1.10.0), available at https://github.com/kubernetes/kops/blob/master/addons/kubernetes-dashboard/v1.10.0.yaml, but there is no guarantee it will fix it.
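(Note that the link above is the GitHub page view; to apply it directly you would use the raw URL, e.g.:)

kubectl apply -f https://raw.githubusercontent.com/kubernetes/kops/master/addons/kubernetes-dashboard/v1.10.0.yaml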
Cheers