Describe the bug
I am trying to upgrade my JupyterHub Helm release to use a new Docker image, but no matter what method I use, the image is not being updated.
Infra info:
I upgrade my Docker image by publishing a new version (with a new tag) to the Docker repository, and then I update the tag in Helm's config.yaml:
```yaml
singleuser:
  image:
    name: <account>.dkr.ecr.eu-west-1.amazonaws.com/<repo>
    tag: <tag>  # <- bump this to the tag of my newest image
```
Then I upgrade the Helm release:
```
helm upgrade jhub jupyterhub/jupyterhub --version=0.7.0 --values config.yaml
```
This doesn't work; the old image is still being used when I start up a notebook server.
I have also tried deleting the Helm release and reinstalling:
```
helm delete --purge jhub && helm upgrade --install jhub jupyterhub/jupyterhub --namespace jhub --version=0.7.0 --values config.yaml
```
And I have also tried deleting both the namespace and the release and reinstalling:
```
helm delete --purge jhub && kubectl delete namespace jhub && helm upgrade --install jhub jupyterhub/jupyterhub --namespace jhub --version=0.7.0 --values config.yaml
```
I have also tried using this setting in config.yaml:
```yaml
hub:
  imagePullPolicy: Always
```
What is strange is that when I check the Docker images currently in use in my Kubernetes cluster, via `kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}"`, I see the correct Docker image repo and tag. But it is not the one being used for the notebook servers.
What could be happening? Is the old Docker image cached somehow?
I checked one of my pod descriptions and saw a strange event message:
```
Normal  Pulled  32m  kubelet, <<REDACTED>>  Container image "<AWS_ACCOUNT>.dkr.ecr.eu-west-1.amazonaws.com/<REPO>:NEW_TAG" already present on machine
```
How can this be? The image being referred to here is brand new. Somehow, the new image must be generating the same hash as the original one, which causes the k8s cluster to think the image does not need to be pulled... bug??
(Full describe output, with sensitive info redacted: https://gist.github.com/wierzba3/dac8513a3e9bde2453de8944352b41eb)
@consideRatio I'm going to move this issue to the Zero to JupyterHub repo.
Hey @wierzba3!
It is not clear to me if you want to use a custom Docker image for the hub pod or for the user pods. Changes to the hub pod's image go under `hub` in config.yaml, and changes to the user pods' image go under `singleuser` in config.yaml.
You need to use `singleuser.image.pullPolicy: "Always"` rather than `hub.imagePullPolicy: "Always"` if you want to force the correct image to be re-pulled when it was already cached on the node. See the reference docs for details: http://z2jh.jupyter.org/en/latest/reference.html#helm-chart-configuration-reference.
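In config.yaml, that would look something like this (a minimal sketch):

```yaml
singleuser:
  image:
    pullPolicy: Always
```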
Was it the user image you wanted to change? Then you should update `singleuser.image.name` and `singleuser.image.tag`. If it is the image of JupyterHub itself, it should instead be `hub.image.name` and `hub.image.tag` I think; verify this in the configuration reference.
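For example, a sketch with placeholder values (swap in your own registry, repo, and tag):

```yaml
# User (notebook server) image
singleuser:
  image:
    name: <account>.dkr.ecr.eu-west-1.amazonaws.com/<repo>
    tag: <new-tag>

# Hub image, if you really meant to replace that one
hub:
  image:
    name: <hub-image-name>
    tag: <hub-image-tag>
```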
Note that I have learned that if you want to inject certain information into the hub, like a custom config or template files, a better option than building a new image may be to use `hub.extraVolumes` and `hub.extraVolumeMounts` to mount that information into it. I think if you initially mount a blank 1 GB volume, you could use `kubectl cp` to write files there, and then restart the hub to see the changes.
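A rough sketch of what that could look like (the volume name, claim name, and mount path below are made up; these fields take ordinary Kubernetes volume and volumeMount specs):

```yaml
hub:
  extraVolumes:
    - name: hub-extra                # hypothetical volume name
      persistentVolumeClaim:
        claimName: hub-extra-pvc     # hypothetical PVC, created separately
  extraVolumeMounts:
    - name: hub-extra
      mountPath: /srv/extra          # hypothetical path inside the hub pod
```

Then something like `kubectl cp myfile.py <hub-pod>:/srv/extra/` would let you place files there before restarting the hub.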
But if you want to install additional packages in the hub, and do not want to do it as part of a startup script, then I guess you need to actually build a new image based on the old one.
@consideRatio I was unaware of the distinction between the hub Docker image and the user Docker image. Replacing `hub.imagePullPolicy: "Always"` with `singleuser.image.pullPolicy: "Always"` worked! Thanks so much!!
:D you're welcome @wierzba3