The installation times out when scheduling/downloading images. Logs indicate a timeout waiting for the postgres volume to be bound.

Ran:
````bash
chectl server:start --platform minikube --multiuser
````
Eclipse Che is deployed and a URL is generated.
minikube version: v1.9.1
commit: d8747aec7ebf8332ddae276d5f8fb42d3152b5a1
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:58:59Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.0", GitCommit:"9e991415386e4cf155a24b1da15becaa390438d8", GitTreeState:"clean", BuildDate:"2020-03-25T14:50:46Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"linux/amd64"}
✔ Verify Kubernetes API...OK
✔ 👀 Looking for an already existing Eclipse Che instance
✔ Verify if Eclipse Che is deployed into namespace "che"...it is not
✔ ✈️ Minikube preflight checklist
✔ Verify if kubectl is installed
✔ Verify if minikube is installed
✔ Verify if minikube is running
✔ Start minikube [skipped]
→ Minikube is already running.
✔ Check Kubernetes version: Found v1.18.0.
✔ Verify if minikube ingress addon is enabled
✔ Enable minikube ingress addon
✔ Retrieving minikube IP and domain for ingress URLs...172.17.0.2.nip.io.
Eclipse Che logs will be available in '/tmp/chectl-logs/1586030573786'
✔ Start following logs
✔ Start following Operator logs...done
✔ Start following Eclipse Che logs...done
✔ Start following Postgres logs...done
✔ Start following Keycloak logs...done
✔ Start following Plugin registry logs...done
✔ Start following Devfile registry logs...done
✔ Start following events
✔ Start following namespace events...done
✔ 🏃‍ Running the Eclipse Che operator
✔ Copying operator resources...done.
✔ Create Namespace (che)...It already exists.
✔ Create ServiceAccount che-operator in namespace che...It already exists.
✔ Create Role che-operator in namespace che...It already exists.
✔ Create ClusterRole che-operator...It already exists.
✔ Create RoleBinding che-operator in namespace che...It already exists.
✔ Create ClusterRoleBinding che-operator...It already exists.
✔ Create CRD checlusters.org.eclipse.che...It already exists.
✔ Waiting 5 seconds for the new Kubernetes resources to get flushed...done.
✔ Create deployment che-operator in namespace che...It already exists.
✔ Create Eclipse Che cluster eclipse-che in namespace che...It already exists.
❯ ✅ Post installation checklist
❯ Eclipse Che pod bootstrap
✖ scheduling
✖ ERR_TIMEOUT: Timeout set to pod wait timeout 300000. podExist: false, currentPhase: undefined
downloading images
starting
Retrieving Eclipse Che server URL
Eclipse Che status check
› Error: Error: ERR_TIMEOUT: Timeout set to pod wait timeout 300000. podExist: false, currentPhase: undefined
› Installation failed, check logs in '/tmp/chectl-logs/1586030573786'
time="2020-04-04T20:02:45Z" level=info msg="Default 'info' log level is applied"
time="2020-04-04T20:02:45Z" level=info msg="Go Version: go1.12.12"
time="2020-04-04T20:02:45Z" level=info msg="Go OS/Arch: linux/amd64"
time="2020-04-04T20:02:45Z" level=info msg="operator-sdk Version: v0.5.0"
time="2020-04-04T20:02:45Z" level=info msg="Operator is running on Kubernetes"
time="2020-04-04T20:02:45Z" level=info msg="Registering Che Components Types"
time="2020-04-04T20:02:45Z" level=info msg="Starting the Cmd"
time="2020-04-04T20:02:45Z" level=info msg="Waiting for PVC postgres-data to be bound. Default timeout: 10 seconds"
time="2020-04-04T20:02:55Z" level=warning msg="Timeout waiting for a PVC postgres-data to be bound. Current phase is Pending"
time="2020-04-04T20:02:55Z" level=warning msg="Sometimes PVC can be bound only when the first consumer is created"
time="2020-04-04T20:02:56Z" level=info msg="Waiting for deployment postgres. Default timeout: 420 seconds"
LAST SEEN|TYPE|REASON|OBJECT|MESSAGE
:-----:|:-----:|:-----:|:-----:|:-----:
22m|Normal|Scheduled|pod/che-operator-7b9fd956cb-fwbt8|Successfully assigned che/che-operator-7b9fd956cb-fwbt8 to minikube
22m|Normal|Pulling|pod/che-operator-7b9fd956cb-fwbt8|Pulling image "quay.io/eclipse/che-operator:7.10.0"
21m|Normal|Pulled|pod/che-operator-7b9fd956cb-fwbt8|Successfully pulled image "quay.io/eclipse/che-operator:7.10.0"
21m|Normal|Created|pod/che-operator-7b9fd956cb-fwbt8|Created container che-operator
21m|Normal|Started|pod/che-operator-7b9fd956cb-fwbt8|Started container che-operator
18s|Normal|SandboxChanged|pod/che-operator-7b9fd956cb-fwbt8|Pod sandbox changed, it will be killed and re-created.
16s|Normal|Pulling|pod/che-operator-7b9fd956cb-fwbt8|Pulling image "quay.io/eclipse/che-operator:7.10.0"
13s|Normal|Pulled|pod/che-operator-7b9fd956cb-fwbt8|Successfully pulled image "quay.io/eclipse/che-operator:7.10.0"
13s|Normal|Created|pod/che-operator-7b9fd956cb-fwbt8|Created container che-operator
13s|Normal|Started|pod/che-operator-7b9fd956cb-fwbt8|Started container che-operator
22m|Normal|SuccessfulCreate|replicaset/che-operator-7b9fd956cb|Created pod: che-operator-7b9fd956cb-fwbt8
22m|Normal|ScalingReplicaSet|deployment/che-operator|Scaled up replica set che-operator-7b9fd956cb to 1
6m46s|Warning|FailedScheduling|pod/postgres-6448d66f7f-2hn8w|running "VolumeBinding" filter plugin for pod "postgres-6448d66f7f-2hn8w"…
9s|Warning|FailedScheduling|pod/postgres-6448d66f7f-2hn8w|running "VolumeBinding" filter plugin for pod "postgres-6448d66f7f-2hn8w"…
21m|Normal|SuccessfulCreate|replicaset/postgres-6448d66f7f|Created pod: postgres-6448d66f7f-2hn8w
6m30s|Normal|ExternalProvisioning|persistentvolumeclaim/postgres-data|waiting for a volume to be created, either by external provisioner "k8s.i…
21m|Normal|ScalingReplicaSet|deployment/postgres|Scaled up replica set postgres-6448d66f7f to 1
0s|Normal|ExternalProvisioning|persistentvolumeclaim/postgres-data|waiting for a volume to be created, either by external provisioner "k8s.i…
0s|Warning|FailedScheduling|pod/postgres-6448d66f7f-2hn8w|running "VolumeBinding" filter plugin for pod "postgres-6448d66f7f-2hn8w"…

(the ExternalProvisioning and FailedScheduling events above repeat continuously; truncated lines are marked with …)
@cbyreddy
That might be the cause: https://github.com/kubernetes/minikube/issues/7218
Please downgrade minikube to v1.8 and try again.
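For reference, a downgrade sketch for Linux x86_64, assuming the standard minikube release bucket (v1.8.2 was the last v1.8.x release; adjust the binary name for your OS/arch):

````bash
# Sketch, not an official procedure: remove the cluster created by v1.9.1,
# then install the v1.8.2 binary over the current one.
minikube delete
curl -LO https://storage.googleapis.com/minikube/releases/v1.8.2/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube version
````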
Storage provisioning error, yes. The workaround is to set the storageClassName in the CRD.
minikube creates a VM for the cluster, so /data and /data/wksp have to be created and chmod 777'd inside the VM for this to work. The same goes for whatever path you choose if you modify these values.
SIDE NOTE: this could also require disabling the default TLS option in the yaml:
tlsSupport: false
SIDE NOTE 2: the domain should also be forced in the yaml:
ingressDomain: 'minikube-lan-ip.nip.io'
````yaml
# file: /usr/local/lib/chectl/templates/che-operator/crds/org_v1_che_cr.yaml
postgresPVCStorageClassName: eclipseche
workspacePVCStorageClassName: eclipsechewksp
ingressDomain: 'minikube-lan-ip.nip.io' # CHANGE TO a real minikube LAN IP
tlsSupport: false
````
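The VM-side directories mentioned above can be created through minikube ssh; a minimal sketch, assuming the default /data and /data/wksp paths:

````bash
# Create the hostPath directories inside the minikube VM and open them up
# (chmod 777), as the workaround requires. Paths must match the
# PersistentVolume hostPath values.
minikube ssh "sudo mkdir -p /data/wksp && sudo chmod 777 /data /data/wksp"
````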
Create storage classes and volumes accordingly:
````yaml
# file: storageclass_and_volumes.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: eclipsechewksp
  labels:
    type: local
spec:
  storageClassName: eclipsechewksp
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/wksp"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: eclipseche
  labels:
    type: local
spec:
  storageClassName: eclipseche
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/data/"
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: eclipsechewksp
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Retain
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: eclipseche
  annotations:
    storageclass.kubernetes.io/is-default-class: "false"
provisioner: k8s.io/minikube-hostpath
reclaimPolicy: Retain
````
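Before starting the server it may help to confirm the objects were created; a sketch, assuming the storage class and volume names used above:

````bash
# Confirm the storage classes and volumes exist.
kubectl get storageclass eclipseche eclipsechewksp
kubectl get pv eclipseche eclipsechewksp
# Once chectl creates the claim, it should show as Bound:
# kubectl get pvc postgres-data -n che
````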
After this, use the additional argument in chectl server:start:
````bash
chectl server:start --platform minikube --multiuser --che-operator-cr-yaml=/usr/local/lib/chectl/templates/che-operator/crds/org_v1_che_cr.yaml
````
Upon repeated attempts to start chectl (using chectl server:delete and server:start again), the postgres folder (called userdata) has to be removed, and the volumes in the minikube cluster have to be removed and created again (using kubectl delete -f and apply -f with the provided yaml).
So, to recap:
To remove the unsuccessful Che start garbage files and volumes:
````bash
chectl server:delete
kubectl delete -f <storageclass_and_volumes.yaml>
rm -rf /data/userdata
````
To try again:
````bash
kubectl apply -f <storageclass_and_volumes.yaml>
chectl server:start --platform minikube --multiuser --che-operator-cr-yaml=/usr/local/lib/chectl/templates/che-operator/crds/org_v1_che_cr.yaml
````
> @cbyreddy
> That might be the cause. kubernetes/minikube#7218
> Please downgrade minikube to v1.8 and try again.
I was having issues with v1.8 too so I tried again using k3s. I got to a much later stage of the install process before it errored out again.
sudo chectl server:start --platform k8s --multiuser --domain 192.168.1.137.nip.io
✔ Verify Kubernetes API...OK
✔ 👀 Looking for an already existing Eclipse Che instance
✔ Verify if Eclipse Che is deployed into namespace "che"...it is not
✔ ✈️ Kubernetes preflight checklist
✔ Verify if kubectl is installed
✔ Verify remote kubernetes status...done.
✔ Check Kubernetes version: Found v1.17.4+k3s1.
✔ Verify domain is set...set to 192.168.1.137.nip.io.
✔ Check if cluster accessible... ok
Eclipse Che logs will be available in '/tmp/chectl-logs/1586681575409'
✔ Start following logs
✔ Start following Operator logs [skipped]
✔ Start following Eclipse Che logs...done
✔ Start following Postgres logs...done
✔ Start following Keycloak logs...done
✔ Start following Plugin registry logs...done
✔ Start following Devfile registry logs...done
✔ Start following events
✔ Start following namespace events...done
✔ 🏃‍ Running Helm to install Eclipse Che
✔ Verify if helm is installed
✔ Check Helm Version: Found v3.1.2+gd878d4d
✔ Create Namespace (che)...done.
✔ Check Eclipse Che TLS certificate...going to generate self-signed one
✔ Check Cert Manager deployment...not deployed
✔ Deploy cert-manager...done
✔ Wait for cert-manager...ready
✔ Check Cert Manager CA certificate...generating new one
✔ Set up Eclipse Che certificates issuer...done
✔ Request self-signed certificate...done
✔ Wait for self-signed certificate...ready
✔ ❗[MANUAL ACTION REQUIRED] Please add local Eclipse Che CA certificate into your browser: /home/admin/cheCA.crt
✔ Check Cluster Role Binding...does not exists.
✔ Preparing Eclipse Che Helm Chart...done.
✔ Updating Helm Chart dependencies...done.
✔ Deploying Eclipse Che Helm Chart...done.
❯ ✅ Post installation checklist
✔ PostgreSQL pod bootstrap
✔ scheduling...done.
✔ downloading images...done.
✔ starting...done.
✔ Devfile registry pod bootstrap
✔ scheduling...done.
✔ downloading images...done.
✔ starting...done.
✔ Plugin registry pod bootstrap
✔ scheduling...done.
✔ downloading images...done.
✔ starting...done.
❯ Eclipse Che pod bootstrap
✔ scheduling...done.
✔ downloading images...done.
✖ starting
✖ ERR_TIMEOUT: Timeout set to pod ready timeout 130000
Retrieving Eclipse Che server URL
Eclipse Che status check
Show important messages
› Error: Error: ERR_TIMEOUT: Timeout set to pod ready timeout 130000
› Installation failed, check logs in '/tmp/chectl-logs/1586681575409'
Any idea what could have gone wrong now? I don't think it is a storage provisioning issue, but I'm not sure.
Here is the log output for the che pod
https://pastebin.com/9NatW21j
I have had this problem with both microk8s and k3s!
@cbyreddy
chectl version?
Please provide logs for the second installation.
https://pastebin.com/9NatW21j isn't available.
> @cbyreddy
> chectl version? Please provide logs for the second installation.
> https://pastebin.com/9NatW21j isn't available.
Sorry, here you go.
https://pastebin.com/aTY9zLRn
This seems to be the issue, but I'm not sure:
````
Caused by: javax.net.ssl.SSLHandshakeException: java.security.cert.CertificateException: No name matching keycloak-che.192.168.1.137.nip.io found
````
related one
https://github.com/eclipse/che/issues/16429
Is there anything I can try to fix the error or is it a bug?
It is a bug. I will take a look at it later.
What I can suggest for now:
````bash
chectl server:start --platform minikube --installer helm --multiuser
````
@cbyreddy
Could you specify the chectl version you used?
@zarinfam
Could you specify the chectl version you used?
@cbyreddy
I am closing this one.
Feel free to open a new issue.