Zero-to-jupyterhub-k8s: Error: PersistentVolumeClaim is not bound: "hub-db-dir"

Created on 20 Sep 2017 · 11 comments · Source: jupyterhub/zero-to-jupyterhub-k8s

From @gijs in https://github.com/jupyterhub/helm-chart/issues/49

Hi,

I'm trying to run this chart on GKE but the deployment keeps failing with PersistentVolumeClaim is not bound: "hub-db-dir".

Steps I took:

  • Created a container cluster (v1.7.0)
  • $ kubectl proxy
  • $ kubectl config current-context
  • Installed Helm (v2.5.0)
  • $ helm init --upgrade
  • Added a file config.yaml:

```
hub:
  cookieSecret: "xxxx"
proxy:
  secretToken: "xxxx"
```
  • $ helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
  • $ helm install jupyterhub/jupyterhub --version=v0.4 --name=jupyterhub-test --namespace=jupyterhub-test -f config.yaml

This resulted in the following output:

NAME:   xxxxxx
LAST DEPLOYED: Fri Jul 14 15:30:48 2017
NAMESPACE: xxxxxxx
STATUS: DEPLOYED

RESOURCES:
==> v1/Service
NAME          CLUSTER-IP     EXTERNAL-IP  PORT(S)       AGE
proxy-public  10.31.247.38   <pending>    80:32453/TCP  1s
proxy-api     10.31.241.160  <none>       8001/TCP      1s
hub           10.31.247.218  <none>       8081/TCP      1s

==> v1beta1/Deployment
NAME              DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
hub-deployment    1        1        1           0          1s
proxy-deployment  1        1        1           0          1s

==> v1/Secret
NAME        TYPE    DATA  AGE
hub-secret  Opaque  2     1s

==> v1/ConfigMap
NAME          DATA  AGE
hub-config-1  13    1s

==> v1/PersistentVolumeClaim
NAME        STATUS   VOLUME    CAPACITY  ACCESSMODES  STORAGECLASS  AGE
hub-db-dir  Pending  standard  1s


NOTES:
Thank you for installing JupyterHub!

Your release is named xxxxxxx and installed into the namespace xxxxxx.

You can find if the hub and proxy is ready by doing:

 kubectl --namespace=xxxxxx get pod

and watching for both those pods to be in status 'Ready'.

You can find the public IP of the JupyterHub by doing:

 kubectl --namespace=xxxxxx get svc proxy-public

It might take a few minutes for it to appear!

Note that this is still an alpha release! If you have questions, feel free to
  1. Come chat with us at https://gitter.im/jupyterhub/jupyterhub
  2. File issues at https://github.com/jupyterhub/helm-chart/issues

Output from kubectl's get pod and get svc:

$ kubectl --namespace=xxxxx get pod

NAME                              READY     STATUS    RESTARTS   AGE
hub-deployment-820122001-d236w    0/1       Pending   0          1m
proxy-deployment-51742714-tc73p   1/1       Running   0          1m

$ kubectl --namespace=xxxxx get svc

NAME           CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
hub            10.31.247.218   <none>           8081/TCP       2m
proxy-api      10.31.241.160   <none>           8001/TCP       2m
proxy-public   10.31.247.38    104.199.48.144   80:32453/TCP   2m

(The external IP for the proxy-public service should be accessible in a minute or two.)

Nothing appears on the external IP, of course.
Any ideas how I can fix the PersistentVolumeClaim is not bound: "hub-db-dir" error?

Thanks!

Gijs

All 11 comments

Same issue here; I am running on Jetstream after using https://github.com/data-8/kubeadm-bootstrap.

It looks like there is a problem with volume provisioning.

sudo kubectl --namespace=iris-jupyterhub get pv
No resources found.
sudo kubectl --namespace=iris-jupyterhub get pvc
NAME         STATUS    VOLUME    CAPACITY   ACCESSMODES   STORAGECLASS   AGE
hub-db-dir   Pending                                                     27m

I have the same error. Has anyone found a solution?

Having the same exact issue. Volume provisioning works outside of Helm, and I also tried hardcoding my values and annotations in pvc.yaml. I even tried creating the volume first, but then the install process fails because 'hub-db-dir' already exists. Please advise.

The proxy pod comes up just fine.

@zonca @camilo-nunez @BrianVanEtten can you give me the following information:

  1. What cloud are you running on?
  2. What's the output of 'kubectl --namespace=<namespace> describe pvc'?
  3. What's the output of 'kubectl --namespace=<namespace> describe pod'?

Usually this means your cloud provider hasn't provisioned a disk for you, which on GKE often means you are out of quota. The describe commands should tell us more...
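The checks above can be sketched as follows (a hedged example; `<namespace>` is a placeholder for your release namespace, and the `name=hub-pod` label matches the hub pod labels shown in the describe output later in this thread):

```
# Is there a (default) StorageClass at all?
kubectl get storageclass

# Why is the claim still Pending? The Events section usually names the cause.
kubectl --namespace=<namespace> describe pvc hub-db-dir

# Which pod is stuck waiting on the claim?
kubectl --namespace=<namespace> describe pod -l name=hub-pod
```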

You're right, it's definitely on the volume provisioning end, but I am not using a cloud provider. I have set up a Trident provisioner (NetApp) that works with any PVC I make, as long as I pass the following annotation:

volume.beta.kubernetes.io/storage-class: my-storage-class

It's definitely possible that I don't understand Helm well enough to properly pass this annotation, so that all the volumes it attempts to create (one for each project pod) actually get created by the provisioner. Maybe beta annotations aren't supported?
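For reference, a standalone PVC carrying that beta annotation might look like the sketch below (the name, size, and storage class here are placeholders, not what the chart generates):

```
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hub-db-dir
  annotations:
    # beta-style annotation consumed by external provisioners such as Trident
    volume.beta.kubernetes.io/storage-class: my-storage-class
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```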

Yesterday I was able to get past the hub-deployment being stuck in Pending by creating a PersistentVolume (not a PVC) with the iSCSI IQN information hardcoded, then allowing the default template PVC to claim it as the 'hub-db-dir' volume. I also had to log in to that endpoint manually from my worker nodes (via iscsiadm) in order for each node to use it. I quickly found out that each project spawns a pod and also reaches out to provision a volume for it. This is where it gets caught in Pending status again.

Any help is appreciated. Cheers!

@yuvipanda My responses are:

1.- I am using Rancher to run Kubernetes.
2.-
Name: hub-db-dir
Namespace: kube-jupyterhub
StorageClass:
Status: Pending
Volume:
Labels: <none>
Annotations: <none>
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 13s (x25 over 6m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set

3.-
```
Name: hub-db-dir
Namespace: kube-jupyterhub
StorageClass:
Status: Pending
Volume:
Labels:
Annotations:
Capacity:
Access Modes:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal FailedBinding 13s (x25 over 6m) persistentvolume-controller no persistent volumes available for this claim and no storage class is set

kubectl --namespace=kube-jupyterhub describe pod
Name: hub-deployment-84944c8fc5-97wd7
Namespace: kube-jupyterhub
Node:
Labels: name=hub-pod
pod-template-hash=4050074971
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-jupyterhub","name":"hub-deployment-84944c8fc5","uid":"6640deef-d2cd-11e7-bdc...
Status: Pending
IP:
Created By: ReplicaSet/hub-deployment-84944c8fc5
Controlled By: ReplicaSet/hub-deployment-84944c8fc5
Containers:
hub-container:
Image: jupyterhub/k8s-hub:v0.4
Port: 8081/TCP
Requests:
cpu: 200m
memory: 512Mi
Environment:
SINGLEUSER_IMAGE: jupyterhub/k8s-singleuser-sample:v0.4
JPY_COOKIE_SECRET: Optional: false
POD_NAMESPACE: kube-jupyterhub (v1:metadata.namespace)
CONFIGPROXY_AUTH_TOKEN: Optional: false
Mounts:
/etc/jupyterhub/config/ from config (rw)
/srv/jupyterhub from hub-db-dir (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hnh7h (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hub-config-1
Optional: false
hub-db-dir:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: hub-db-dir
ReadOnly: false
default-token-hnh7h:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hnh7h
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 7m default-scheduler persistentvolumeclaim "hub-db-dir" not found
Warning FailedScheduling 56s (x25 over 7m) default-scheduler PersistentVolumeClaim is not bound: "hub-db-dir"

Name: proxy-deployment-65d5c87bc-p7tqn
Namespace: kube-jupyterhub
Node: chivo-lab.chivo.cl/10.6.91.207
Start Time: Sun, 26 Nov 2017 17:15:28 +0000
Labels: name=proxy-pod
pod-template-hash=218174367
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicaSet","namespace":"kube-jupyterhub","name":"proxy-deployment-65d5c87bc","uid":"6640e1fc-d2cd-11e7-bd...
Status: Running
IP: 10.42.93.230
Created By: ReplicaSet/proxy-deployment-65d5c87bc
Controlled By: ReplicaSet/proxy-deployment-65d5c87bc
Containers:
proxy-container:
Container ID: docker://fbcb96e23c4b866618d22247931c1712b94404f890871d0246633c4faae46d45
Image: jupyterhub/configurable-http-proxy:2.0.1
Image ID: docker-pullable://jupyterhub/configurable-http-proxy@sha256:8c7cbf56bff642ee9a34b1fe4fa5af1cf694909358120ee7e0b6378fdef590c8
Ports: 8000/TCP, 8001/TCP
Command:
configurable-http-proxy
--ip=0.0.0.0
--port=8000
--api-ip=0.0.0.0
--api-port=8001
--default-target=http://$(HUB_SERVICE_HOST):$(HUB_SERVICE_PORT)
--error-target=http://$(HUB_SERVICE_HOST):$(HUB_SERVICE_PORT)
--log-level=debug
State: Running
Started: Sun, 26 Nov 2017 17:16:05 +0000
Ready: True
Restart Count: 0
Requests:
cpu: 200m
memory: 512Mi
Environment:
CONFIGPROXY_AUTH_TOKEN: Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hnh7h (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-hnh7h:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hnh7h
Optional: false
QoS Class: Burstable
Node-Selectors:
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m default-scheduler Successfully assigned proxy-deployment-65d5c87bc-p7tqn to chivo-lab.chivo.cl
Normal SuccessfulMountVolume 7m kubelet, chivo-lab.chivo.cl MountVolume.SetUp succeeded for volume "default-token-hnh7h"
Normal Pulling 7m kubelet, chivo-lab.chivo.cl pulling image "jupyterhub/configurable-http-proxy:2.0.1"
Normal Pulled 6m kubelet, chivo-lab.chivo.cl Successfully pulled image "jupyterhub/configurable-http-proxy:2.0.1"
Normal Created 6m kubelet, chivo-lab.chivo.cl Created container
Normal Started 6m kubelet, chivo-lab.chivo.cl Started container
Warning DNSSearchForming 23s (x10 over 7m) kubelet, chivo-lab.chivo.cl Search Line limits were exceeded, some dns names have been omitted, the applied search line is: kube-jupyterhub.svc.cluster.local svc.cluster.local cluster.local rancher.internal eth.cluster cm.cluster

Name: pull-all-nodes-1511716438-sj-jupyterhub-1-lwmvr
Namespace: kube-jupyterhub
Node: chivo-lab.chivo.cl/10.6.91.207
Start Time: Sun, 26 Nov 2017 17:13:58 +0000
Labels: controller-uid=30c6edca-d2cd-11e7-bdce-0296e497bcfc
job-name=pull-all-nodes-1511716438-sj-jupyterhub-1
Annotations: kubernetes.io/created-by={"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"Job","namespace":"kube-jupyterhub","name":"pull-all-nodes-1511716438-sj-jupyterhub-1","uid":"30c6edca-d2cd...
Status: Succeeded
IP: 10.42.92.122
Created By: Job/pull-all-nodes-1511716438-sj-jupyterhub-1
Controlled By: Job/pull-all-nodes-1511716438-sj-jupyterhub-1
Containers:
all-nodes-puller:
Container ID: docker://477f9e04c7bf5562bc9e9313dc8d374a89e2230936daca3b5bfe11d3365f1d04
Image: yuvipanda/image-allnodes-puller:v0.8
Image ID: docker-pullable://yuvipanda/image-allnodes-puller@sha256:662189f5243ef5eba41bda71dd842f43ad7b53184b29deee0465676de0c2ad51
Port:
Args:
jupyterhub/k8s-singleuser-sample
v0.4
State: Terminated
Reason: Completed
Exit Code: 0
Started: Sun, 26 Nov 2017 17:14:14 +0000
Finished: Sun, 26 Nov 2017 17:15:16 +0000
Ready: False
Restart Count: 0
Environment:
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-hnh7h (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-hnh7h:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-hnh7h
Optional: false
QoS Class: BestEffort
Node-Selectors:
Tolerations: node.alpha.kubernetes.io/notReady:NoExecute for 300s
node.alpha.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m default-scheduler Successfully assigned pull-all-nodes-1511716438-sj-jupyterhub-1-lwmvr to chivo-lab.chivo.cl
Normal SuccessfulMountVolume 8m kubelet, chivo-lab.chivo.cl MountVolume.SetUp succeeded for volume "default-token-hnh7h"
Normal Pulling 8m kubelet, chivo-lab.chivo.cl pulling image "yuvipanda/image-allnodes-puller:v0.8"
Normal Pulled 8m kubelet, chivo-lab.chivo.cl Successfully pulled image "yuvipanda/image-allnodes-puller:v0.8"
Normal Created 8m kubelet, chivo-lab.chivo.cl Created container
Normal Started 8m kubelet, chivo-lab.chivo.cl Started container
Warning DNSSearchForming 7m (x6 over 8m) kubelet, chivo-lab.chivo.cl Search Line limits were exceeded, some dns names have been omitted, the applied search line is: kube-jupyterhub.svc.cluster.local svc.cluster.local cluster.local rancher.internal eth.cluster cm.cluster
```

Cheers,

I was running on XSEDE Jetstream on a K8s cluster configured with https://github.com/data-8/kubeadm-bootstrap, which does not set up a PV provisioner. I'll try this again after installing rook.

Hi @camilo-nunez, I hit exactly the same problem as you. Did you solve this issue? Thanks.
I ran this on my own cluster, not on any cloud service.
cc @yuvipanda

Some additional info:
I am using Kubernetes v1.9.2 on my own 2-node cluster.
I created a storage class according to https://kubernetes.io/docs/concepts/storage/storage-classes/#local and https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/84670ff9158490c6887580e05dcc1e2430fee57b/doc/source/user-storage.md#type-of-storage-provisioned
and enabled the VolumeScheduling and PersistentLocalVolumes feature gates, to use the local volume plugin.
For the storage class, I created it with this config:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-fast
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer

and ran $ kubectl create -f storage_class.yaml --validate=false because I hit this error:

error validating "storage_class.yaml": error validating data: ValidationError(StorageClass): unknown field "volumeBindingMode" in io.k8s.api.storage.v1.StorageClass; if you choose to ignore these errors, turn validation off with --validate=false
$ kubectl describe storageclass
Name:            local-fast
IsDefaultClass:  Yes
Annotations:     storageclass.kubernetes.io/is-default-class=true
Provisioner:     kubernetes.io/no-provisioner
Parameters:      <none>
ReclaimPolicy:   Delete
Events:          <none>
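(Editor's note: with a `kubernetes.io/no-provisioner` class like the one above, nothing is provisioned dynamically; matching PersistentVolumes have to be created by hand before any PVC can bind. A minimal local-volume PV sketch, where the path, capacity, and node name are all placeholders:)

```
apiVersion: v1
kind: PersistentVolume
metadata:
  name: hub-db-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-fast
  local:
    path: /mnt/disks/hub-db      # placeholder path on the node
  nodeAffinity:                  # local PVs must be pinned to a node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node-1         # placeholder node name
```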






$ kubectl --namespace=jupyterhub-class describe pvc
Name:          hub-db-dir
Namespace:     jupyterhub-class
StorageClass:  local-fast
Status:        Pending
Volume:
Labels:        <none>
Annotations:   <none>
Finalizers:    []
Capacity:
Access Modes:
Events:
  Type     Reason              Age               From                         Message
  ----     ------              ----              ----                         -------
  Warning  ProvisioningFailed  1m (x26 over 7m)  persistentvolume-controller  no volume plugin matched

Thanks if you could have a look for me.

EDIT: oops, okay, multiple issues discussed here; closing, as the initial issue seems to be resolved based on my own experience.


I'm closing this as I deem it to be out of scope for this repository to support. But, from my understanding, you need to ensure you have a default StorageClass k8s resource, and that creating a single PVC actually gets you dynamically provisioned storage. Or use non-dynamically provisioned storage somehow, but each user needs their own storage, right?
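(Editor's note: one way to satisfy the default-StorageClass requirement mentioned above is to mark an existing class as the default via the annotation that appeared earlier in this thread. A sketch, where "standard" stands in for whatever class your cluster already has:)

```
kubectl get storageclass
kubectl patch storageclass standard -p \
  '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'
```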
