A configuration like the one below, with two profiles where one profile has a guarantee bigger than the default limit, doesn't work right now. When you launch a user's pod for the second profile, the Pod spec that gets sent to Kubernetes has the default memory limit in it instead of the memory limit set in the second profile.
singleuser:
  cpu:
    guarantee: 0.5
    limit: 2
  memory:
    guarantee: 2G
    limit: 3G
  profileList:
    - display_name: "Launches fast (2CPUs, 2G)"
      description: "Just the basics"
      default: true
    - display_name: "Launches more slowly (4CPUs, 9G)"
      description: "More toys!"
      kubespawner_override:
        cpu_guarantee: 0.8
        cpu_limit: 4
        mem_guarantee: '9G'
        mem_limit: '9G'
I've not investigated if this is a bug in the helm chart or kubespawner.
The workaround is to define the resources in each profile instead of having a "default".
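In `profile_list` form (as it would appear in a `jupyterhub_config.py`), the workaround amounts to repeating the resource settings in every profile's `kubespawner_override` so that nothing falls back to the chart-level `singleuser` defaults. A sketch, with values taken from the config above:

```python
# Sketch of the workaround: every profile carries its own resource
# settings in kubespawner_override, so no profile inherits the
# chart-level singleuser defaults. Values match the example above.
profile_list = [
    {
        "display_name": "Launches fast (2CPUs, 2G)",
        "description": "Just the basics",
        "default": True,
        "kubespawner_override": {
            "cpu_guarantee": 0.5, "cpu_limit": 2,
            "mem_guarantee": "2G", "mem_limit": "3G",
        },
    },
    {
        "display_name": "Launches more slowly (4CPUs, 9G)",
        "description": "More toys!",
        "kubespawner_override": {
            "cpu_guarantee": 0.8, "cpu_limit": 4,
            "mem_guarantee": "9G", "mem_limit": "9G",
        },
    },
]

# Every profile sets its own limit, so none can be clobbered by a default.
assert all("mem_limit" in p["kubespawner_override"] for p in profile_list)
```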
It sounds like the helm chart values are applied last, after the kubespawner overrides: an issue in z2jh!
Ping @metonymic-smokey, I think this is a very suitable issue to tackle to get to understand what goes on in the project!
The values provided to render the helm chart templates update a ConfigMap; the ConfigMap is mounted as files on the hub pod, and those files are read by jupyterhub_config.py. This way of configuring kubespawner probably ends up giving the kubespawner overrides too low a priority, or something like that, hmmm...
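To make the suspected ordering concrete, here is a toy illustration (not kubespawner's actual code) of why apply order matters when both the chart-level defaults and a profile override set the same attribute:

```python
# Toy illustration (NOT kubespawner's actual code) of why apply
# order matters: whichever source sets mem_limit last wins.
class Spawner:
    pass

def apply(spawner, settings):
    for key, value in settings.items():
        setattr(spawner, key, value)

defaults = {"mem_guarantee": "2G", "mem_limit": "3G"}   # singleuser.memory
override = {"mem_guarantee": "9G", "mem_limit": "9G"}   # kubespawner_override

# Expected behaviour: override applied last, so the profile wins.
s1 = Spawner()
apply(s1, defaults)
apply(s1, override)
print(s1.mem_limit)  # 9G

# The suspected bug: defaults applied last, so the 3G default
# clobbers the profile's 9G override.
s2 = Spawner()
apply(s2, override)
apply(s2, defaults)
print(s2.mem_limit)  # 3G
```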
Something else I've noticed, and I'm not sure if this is by design, is that the user placeholder pods don't use the hardware profiles like normal user pods would. I've got a set of 4 profiles for singleuser notebook pods with one of them being the default:
{
  "display_name": "micro",
  "slug": "micro",
  "description": "Useful for scale testing a lot of pods",
  "default": true,
  "kubespawner_override": {
    "cpu_guarantee": 0.015,
    "cpu_limit": 1,
    "mem_guarantee": "64M",
    "mem_limit": "1G"
  }
}
and is much smaller than the default here [1]. For load testing I wanted to auto-scale nodes up before creating 100 of these micro user notebook pods, and was (semi) surprised when the autoscaler started creating 3 new nodes rather than 1 (these are 32GB nodes). That's because, I'm assuming, each placeholder pod is guaranteed 1G of memory.
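A rough back-of-envelope check of that assumption (the 4G-per-node system overhead is a guess, and this ignores any spare capacity on existing nodes, which is likely why only 3 new nodes appeared rather than 4):

```python
import math

# Back-of-envelope: how many 32G nodes do 100 placeholder pods need?
# Assumptions: ~4G per node reserved for system daemons (a guess),
# and no headroom on existing nodes.
node_gb = 32
allocatable_gb = node_gb - 4

def nodes_needed(pods, request_gb):
    return math.ceil(pods * request_gb / allocatable_gb)

print(nodes_needed(100, 1.0))    # placeholders requesting 1G each -> 4
print(nodes_needed(100, 0.064))  # the micro profile's 64M guarantee -> 1
```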
The docs here [3] are a bit misleading where it says:
The user placeholders will have the same resources requests as the default user.
Though I guess that depends on what you consider "default" to be. I can understand that this is a limitation, though, because the user placeholder pods aren't created by KubeSpawner and thus don't use the hardware profiles.
Note that in our case we don't set resources values for singleuser because of _this_ issue about how it doesn't work with hardware profiles.
Would it be possible to decouple or override the singleuser resources for placeholder pods?
[1] https://github.com/jupyterhub/zero-to-jupyterhub-k8s/blob/master/jupyterhub/values.yaml#L305
[2] https://cloud.ibm.com/docs/containers?topic=containers-limitations
[3] https://zero-to-jupyterhub.readthedocs.io/en/latest/administrator/optimization.html#efficient-cluster-autoscaling
Is this still the case? To have a different resource limit/guarantee, do we have to remove the default one and define it per profile?
Thanks!
"default": True, under kubespawner_override.