My environment: Azure/AKS, K8s: 1.12.5, Z2JH chart: v0.9-38ae89e, shared storage: Azure Files.
_What is wrong with my configuration (helm upgrade fails, likely due to some issue in config.yaml)?_
List all PVs in current context:
```
ablekh@mgmt:~/sbdh_jh_v2.3$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                       STORAGECLASS   REASON   AGE
pvc-3b0215eb-3307-11e9-8565-56b4ab6d8c78   1Gi        RWO            Delete           Bound       sbdh-jh-v2-3/hub-db-dir     default                 22d
pvc-63034a1f-334c-11e9-8565-56b4ab6d8c78   10Gi       RWO            Delete           Bound       sbdh-jh-v2-3/claim-ablekh   default                 22d
sbdh-k8s-storage-share                     100Gi      RWX            Retain           Available                                                       3h
```
Display info on statically created PV/share:
```
ablekh@mgmt:~/sbdh_jh_v2.3$ kubectl describe pv sbdh-k8s-storage-share
Name:            sbdh-k8s-storage-share
Labels:          usage=sbdh-k8s-storage-share
Annotations:     kubectl.kubernetes.io/last-applied-configuration:
                   {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"usage":"sbdh-k8s-storage-share"},"name":"sbdh-k8s-sto...
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:
Status:          Available
Claim:
Reclaim Policy:  Retain
Access Modes:    RWX
Capacity:        100Gi
Node Affinity:   <none>
Message:
Source:
    Type:             AzureFile (an Azure File Service mount on the host and bind mount to the pod)
    SecretName:       azure-secret
    SecretNamespace:
    ShareName:        sbdh-k8s-storage-share
    ReadOnly:         false
Events:          <none>
```
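For reference, the `describe` output above is consistent with a statically created PV manifest roughly like the following (a sketch reconstructed from the fields shown; the actual file the share was created from is not included in this thread):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: sbdh-k8s-storage-share
  labels:
    usage: sbdh-k8s-storage-share
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  azureFile:
    secretName: azure-secret
    shareName: sbdh-k8s-storage-share
    readOnly: false
```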
Mount created share as a volume in a pod (just to test):
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sbdh-k8s-storage-pod
spec:
  containers:
    - image: nginx:1.15.5
      name: sbdh-k8s-storage-pod
      resources:
        requests:
          cpu: 100m
          memory: 128Mi
        limits:
          cpu: 250m
          memory: 256Mi
      volumeMounts:
        - name: sbdh-k8s-azure-files-vol
          mountPath: /mnt/azure
  volumes:
    - name: sbdh-k8s-azure-files-vol
      azureFile:
        secretName: azure-secret
        shareName: sbdh-k8s-storage-share
        readOnly: false
```
SUCCESS:
```
ablekh@mgmt:~/sbdh_jh_v2.3$ kubectl get pods
NAME                   READY   STATUS    RESTARTS   AGE
sbdh-k8s-storage-pod   1/1     Running   0          4h32m
ablekh@mgmt:~/sbdh_jh_v2.3$ kubectl describe pods sbdh-k8s-storage-pod
Name:               sbdh-k8s-storage-pod
Namespace:          default
Priority:           0
PriorityClassName:  <none>
Node:               aks-nodepool1-29754211-0/10.240.0.4
Start Time:         Tue, 12 Mar 2019 01:03:26 -0400
Labels:             <none>
Annotations:        kubectl.kubernetes.io/last-applied-configuration:
                      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"sbdh-k8s-storage-pod","namespace":"default"},"spec":{"containers":[{"...
Status:             Running
IP:                 10.244.0.74
Containers:
  sbdh-k8s-storage-pod:
    Container ID:   docker://1322ad2f61e012d18c6d9e4c3a414dd6e5be8f9df2b6ad738b5d4428fbcc0f94
    Image:          nginx:1.15.5
    Image ID:       docker-pullable://nginx@sha256:b73f527d86e3461fd652f62cf47e7b375196063bbbd503e853af5be16597cb2e
    Port:           <none>
    Host Port:      <none>
    State:          Running
      Started:      Tue, 12 Mar 2019 01:03:44 -0400
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     250m
      memory:  256Mi
    Requests:
      cpu:     100m
      memory:  128Mi
    Environment:
      KUBERNETES_PORT_443_TCP_ADDR:  sbdh-jh-v2-3-9925bb29.hcp.eastus.azmk8s.io
      KUBERNETES_PORT:               tcp://sbdh-jh-v2-3-9925bb29.hcp.eastus.azmk8s.io:443
      KUBERNETES_PORT_443_TCP:       tcp://sbdh-jh-v2-3-9925bb29.hcp.eastus.azmk8s.io:443
      KUBERNETES_SERVICE_HOST:       sbdh-jh-v2-3-9925bb29.hcp.eastus.azmk8s.io
    Mounts:
      /mnt/azure from sbdh-k8s-azure-files-vol (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-k7g8w (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  sbdh-k8s-azure-files-vol:
    Type:        AzureFile (an Azure File Service mount on the host and bind mount to the pod)
    SecretName:  azure-secret
    ShareName:   sbdh-k8s-storage-share
    ReadOnly:    false
  default-token-k7g8w:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-k7g8w
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
```
Update Helm chart configuration to mount created PV as a shared volume for single-user containers:
```yaml
hub:
  cookieSecret: <SECRET>
  extraConfig: |
    c.KubeSpawner.singleuser_image_pull_secrets = "sbdh-acr-secret"
    c.JupyterHub.allow_named_servers = True
proxy:
  secretToken: <SECRET>
  type: ClusterIP
  https:
    enabled: true
    type: letsencrypt
    letsencrypt:
      contactEmail: <EMAIL_ADDR>
    hosts:
      - <FQDN>
prePuller:
  continuous:
    enabled: true
imagePullSecrets:
  - name: sbdh-acr-secret-contrib
rbac:
  enabled: true
auth:
  admin:
    users:
      - ablekh
  type: github
  github:
    clientId: <SECRET>
    clientSecret: <SECRET>
    callbackUrl: "https://<FQDN>/hub/oauth_callback"
singleuser:
  storage:
    extraVolumes:
      - name: jupyterhub-shared
        persistentVolumeClaim:
          claimName: jupyterhub-shared-volume
        # Set this annotation to NOT let Kubernetes automatically create
        # a persistent volume for this volume claim.
        storageClass: ""
    extraVolumeMounts:
      - name: jupyterhub-shared
        mountPath: /home/shared
  #image:
  #  pullPolicy: Always
  profileList:
    - display_name: "Minimal Jupyter environment"
      description: "Provides access to Python kernel only."
      default: true
    - display_name: "Data science environment"
      description: "Provides access to Python, R and Julia kernels."
      kubespawner_override:
        image: jupyter/datascience-notebook:6fb3eca57bd3
    - display_name: "Custom SBDH data science environment - SBDH-DS-O"
      description: "Provides access to Python, R, Julia and Octave kernels."
      kubespawner_override:
        image: <PRIVATE_DOCKER_REGISTRY>/datascience-octave-notebook:v3
```
Upgrade cluster (the suggested `helm repo update` does not help) - FAILURE:
```
ablekh@mgmt:~/sbdh_jh_v2.3$ helm upgrade sbdh-jh-v2.3 jupyterhub/jupyterhub --version=0.9-38ae89e -f config.yaml
Error: failed to download "jupyterhub/jupyterhub" (hint: running `helm repo update` may help)
ablekh@mgmt:~/sbdh_jh_v2.3$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "jupyterhub" chart repository
...Successfully got an update from the "coreos" chart repository
...Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
ablekh@mgmt:~/sbdh_jh_v2.3$ helm upgrade sbdh-jh-v2.3 jupyterhub/jupyterhub --version=0.9-38ae89e -f config.yaml
Error: failed to download "jupyterhub/jupyterhub" (hint: running `helm repo update` may help)
```
Am I missing something obvious (or less obvious)? Please advise.
Hey @ablekh =D Hmmm, have you added the jupyterhub helm chart repository on this computer? It seems like it does not know how to find the jupyterhub chart.
```
helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/
helm repo update
```
Aside from this, I wonder about your `singleuser.storage.extraVolumes` configuration; I figure it should look just like your test pod's. I looked at the documentation I think is related, and I figure it does not currently provide a broad enough example.
What is written within `extraVolumes` becomes part of the Kubernetes pod definition as-is, so no manipulation takes place between the Helm chart -> KubeSpawner consuming these values -> KubeSpawner generating a pod specification using these `extraVolumes` and `extraVolumeMounts`.
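To illustrate that pass-through, a singleuser pod spawned with `extraVolumes`/`extraVolumeMounts` entries would end up with those entries verbatim in its pod spec — roughly like this (a sketch, using the volume name and claim name from this thread; the container name is illustrative):

```yaml
# Sketch of what KubeSpawner renders into the singleuser pod spec:
# the extraVolumes entries land under spec.volumes unchanged, and the
# extraVolumeMounts entries land under the container's volumeMounts unchanged.
spec:
  containers:
    - name: notebook
      volumeMounts:
        - name: jupyterhub-shared
          mountPath: /home/shared
  volumes:
    - name: jupyterhub-shared
      persistentVolumeClaim:
        claimName: jupyterhub-shared-volume
```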
Hey @consideRatio :-) I appreciate your help, as always. You are right on the chart repo - adding it fixes Helm's complaints. However, I'm quite surprised by this, since I had the repo added earlier (in prior terminal sessions) and thought that this information persists between sessions ... Apparently, this is not the case.
On the shared storage configuration, after I cleared the Helm issue, I'm getting the following error when the system attempts to attach the storage:
```
2019-03-12 19:26:38+00:00 [Warning] persistentvolumeclaim "jupyterhub-shared-volume" not found
```
I think I know what the problem is (I messed up a bit). Will try to fix this and share the results below ...
P.S. In the meantime, if you have any experience w/ named servers, could you take a look at https://github.com/jupyterhub/zero-to-jupyterhub-k8s/issues/1186?
Hmm ... I have re-worked the configuration as follows, but the error still persists:
```
2019-03-12 19:55:15+00:00 [Warning] persistentvolumeclaim "sbdh-k8s-shared-storage-claim" not found
```
"Missing" PVC apparently exists and is well:
```
root@mgmt:~/sbdh_jh_v2.3# kubectl get pvc
NAME                            STATUS   VOLUME                   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
sbdh-k8s-shared-storage-claim   Bound    sbdh-k8s-storage-share   100Gi      RWX                           7m48s
root@mgmt:~/sbdh_jh_v2.3# kubectl describe pvc sbdh-k8s-shared-storage-claim
Name:          sbdh-k8s-shared-storage-claim
Namespace:     default
StorageClass:
Status:        Bound
Volume:        sbdh-k8s-storage-share
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{"volume.beta.kubernetes.io/storage-class":""},"name":"sbdh-k8...
               pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-class:
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      100Gi
Access Modes:  RWX
Events:        <none>
Mounted By:    <none>
```
The PVC object was created (`kubectl apply -f azure-files-pvc.yaml`) from the following declaration:
```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: sbdh-k8s-shared-storage-claim
  # Set this annotation to NOT let Kubernetes automatically create
  # a persistent volume for this volume claim.
  annotations:
    volume.beta.kubernetes.io/storage-class: ""
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  selector:
    # To make sure we match the claim with the exact volume, match the label
    matchLabels:
      usage: sbdh-k8s-storage-share
```
Relevant part of Helm chart configuration:
```yaml
singleuser:
  storage:
    extraVolumes:
      - name: jupyterhub-shared
        persistentVolumeClaim:
          claimName: sbdh-k8s-shared-storage-claim
    extraVolumeMounts:
      - name: jupyterhub-shared
        mountPath: /home/jovyan/shared
```
Thoughts? cc: @consideRatio @yuvipanda @ryanlovett
I'm suspecting one issue, though: the storage class (SC) is missing from the PVC info. I thought it was possible to define a storage class directly in the PV declaration, but this is likely not true, as I now understand. Will define the SC separately and reference it in the PVC declaration. Getting closer ... :-)
What confuses me is that your successful standalone example was written with the following within its pod definition:
```yaml
volumes:
  - name: sbdh-k8s-azure-files-vol
    azureFile:
      secretName: azure-secret
      shareName: sbdh-k8s-storage-share
      readOnly: false
```
But when declaring the relevant part of your Helm chart config, which is simply a pass-through to the pod specification, you declare the following, which differs from your example:
```yaml
extraVolumes:
  - name: jupyterhub-shared
    persistentVolumeClaim:
      claimName: sbdh-k8s-shared-storage-claim
```
They should be the same, right? And that is what I meant about the example in the docs on writing `extraVolumes` etc. being too specific and not broad enough: it only contains an example involving a PVC, while your working PV + pod example contains no PVC at all. Btw, I lack any specific insights into AzureFile volumes, but from your working example, I assume you should change the `extraVolumes` part within the Helm chart values to match what you have in the working example's pod definition under `volumes`.
@consideRatio Thanks! Hmm ... I don't see what is inconsistent in the two snippets you shared above ... Please see my previous comment for my "new insights" ;-)
@ablekh my key question about your config is: why do you introduce a volume referencing a PVC in the Helm chart config of the singleuser pod, when you don't do that in the test pod that you named "sbdh-k8s-storage-pod" and declared "SUCCESS" for?
You don't seem to need a PVC at all, so my suggestion right now is to reproduce your success; to do that, you should write:
```yaml
extraVolumes:
  - name: sbdh-k8s-azure-files-vol
    azureFile:
      secretName: azure-secret
      shareName: sbdh-k8s-storage-share
      readOnly: false
```
rather than
```yaml
extraVolumes:
  - name: jupyterhub-shared
    persistentVolumeClaim:
      claimName: sbdh-k8s-shared-storage-claim
```
And never introduce a PVC resource at all, as you did not seem to have done that in the initial standalone example.
@consideRatio Thank you so much for your further advice. I made relevant changes and upgraded cluster.
However, spawning a single-user server fails due to lack of access to secrets:
```
2019-03-12 22:50:46+00:00 [Warning] MountVolume.SetUp failed for volume "sbdh-k8s-azure-files-vol" : Couldn't get secret sbdh-jh-v2-3/azure-secret
2019-03-12 22:51:06+00:00 [Normal] AttachVolume.Attach succeeded for volume "pvc-63034a1f-334c-11e9-8565-56b4ab6d8c78"
```
The strange thing is that the system fails to get the secret, which I previously created like this:
```
kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY
```
with result:
```
secret/azure-secret created
```
Perhaps the created secret is not accessible across namespaces ... (notice: it is looking in `sbdh-jh-v2-3`).
Yepp, they need to be in the same namespace, unless you reference the specific namespace with a not-often-seen syntax, if that is even possible, hmmm.
When you run the kubectl create command, you can pass a `-n mynamespace`, btw.
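For example, assuming the hub is deployed in the `sbdh-jh-v2-3` namespace (as the error message indicates), the same secret could be created there like this — a sketch reusing the variable names from the earlier command:

```shell
# Create the azure-secret in the namespace where the singleuser pods run,
# so the AzureFile volume plugin can find it (same names/variables as above).
kubectl create secret generic azure-secret \
  --namespace=sbdh-jh-v2-3 \
  --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME \
  --from-literal=azurestorageaccountkey=$STORAGE_KEY
```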
@consideRatio Thank you for confirming my thoughts. After some additional relatively small efforts (e.g., copying secrets to the target namespace, Helm chart configuration cleanup), I'm happy to report that I finally got shared storage (Azure Files-based) working on our pilot AKS cluster. I still want to test some aspects to make sure that everything works, but basic functionality seems to be there. I will keep everyone posted ...
After having gone through this ordeal, I came to the conclusion that the Z2JH documentation's section on storage is quite confusing and incomplete (including the examples). The most confusing aspect is that, by hiding some complexity via the Helm chart, the Z2JH approach is quite different from the standard non-Helm Kubernetes documentation out there, which implies manual creation of storage-focused K8s objects like PVs, PVCs and secrets. If the Z2JH chart takes care of all those automagically (via config.yaml), then the documentation should clearly say so. I hope this feedback will be helpful in further improving the docs. cc: @choldgraf
Closing this issue as Resolved. Feel free to comment further.
@ablekh thanks for investigating this — how was your final storage config? Did you need to create the PVC in the end? I'm mostly interested in how your extraVolumes definition looked, and if it referenced a PVC, how that looked.
Perhaps you could create an issue in the repo with your latest comment about the need to improve the storage docs, linking this issue?
I don't want this feedback to get lost :)
/erik from mobile
@consideRatio My pleasure. Thank you for helping. Will post the relevant part(s) of the final configuration later (spoiler: there was no need to create a PVC - actually, I removed the one that I had originally created). Will create the documentation improvement issue later as well. Don't worry - the feedback won't get lost. :-)
I am using Google Cloud and was getting the same `persistentvolumeclaim "example-volume" not found` error. I followed the documentation referenced by @consideRatio and was able to make it work. Make sure to add the namespace when you create the PVC, i.e. `kubectl apply -f pvc-demo.yaml --namespace=jhub`.
The `shared` directory is successfully mounted under home.
For anyone who is interested, this is what I did.
pvc-demo.yaml
```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-vol
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
```
Create the volume with the same namespace
```
kubectl apply -f pvc-demo.yaml --namespace=jhub
```
Relevant part of the helm chart - config.yaml
```yaml
singleuser:
  storage:
    capacity: 2Gi
    extraVolumes:
      - name: shared-vol
        persistentVolumeClaim:
          claimName: shared-vol
    extraVolumeMounts:
      - name: shared-vol
        mountPath: /home/shared
```
Upgrade helm
```
helm upgrade jhub jupyterhub/jupyterhub -f config.yaml
```
I still plan to post what has worked for me. When I get a chance (hectic time now).
BTW, @consideRatio: any thoughts on https://discourse.jupyter.org/t/jh-z2jh-helm-chart-mapping/1471?
Hi, did you ever have time to write any documentation about this?
@MattiasVahlberg Hi! Unfortunately, I haven't had a chance to do that. I got distracted by other things. Having said that, I can probably dig out my latest configuration, if you still need it. Please let me know.
@ablekh Hi, no worries. After some research I found out how it worked, could wish a little more of the official documentation sometimes!
Have a great weekend.
@MattiasVahlberg All right. I'm glad that you figured it out. You, too, have a great weekend.
@lakshaykc This worked for me, thanks a lot!