When I create a KIND cluster, it sets the Node Allocatable memory to the maximum memory my laptop has. However, I want to limit it to something lower because I'm usually running other things on my laptop as well (like a browser).
Usually one would do this by specifying kubelet flags, as described here: https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/
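For context, the kind of reservation that page describes looks roughly like the sketch below (shown in KubeletConfiguration form; the flag equivalent would be --system-reserved=memory=4Gi, and the 4Gi value is only illustrative):

```yaml
# Illustrative only: reserve 4Gi of memory for non-Kubernetes processes,
# which shrinks the node's Allocatable memory accordingly.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  memory: 4Gi
```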
I tried looking in the docs but found nothing about how to override kubelet flags, so this is probably not the right way.
The reason I ask is that I'm trying to debug a deployment locally, but because kind thinks there is more RAM than there actually is, it over-allocates, and my computer runs out of memory and locks up before I can debug. I want to turn that over-allocation into a scheduling error by giving kind less memory to work with.
I think this and https://github.com/kubernetes-sigs/kind/issues/877 are the same question. Feel free to close if you agree.
Supporting this first-class might be a better answer than #877's current WIP approach.
Yeah, definitely, that's the missing part in that PR: it's not just about isolating the nodes, it's about "converting" them into VMs, and for that we need the kubelet to only "see" the allocated resources ... hehe, I didn't get that before 😅
Let me play with this https://kubernetes.io/docs/tasks/administer-cluster/reserve-compute-resources/#example-scenario and see how it goes
/assign
How would this look with kubeadmConfigPatch?
Answering my own question:
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        system-reserved: memory=4Gi
```
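If it helps, the cluster is then created from that file with the --config flag, e.g. `kind create cluster --config kind-config.yaml` (the file name here is just an example).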
Thanks @arianvp. I did this, and copied the block to my worker node as well. It reduced the allocatable memory for the control-plane node, but not for the worker node.
Any ideas?
```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        system-reserved: memory=8Gi
- role: worker
  kubeadmConfigPatches:
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        system-reserved: memory=8Gi
```
Figured it out. For worker nodes you need JoinConfiguration (not InitConfiguration).
See comment here
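Putting that together, the config above would look something like this (the 8Gi reservation is just the value from the earlier example, not a recommendation):

```yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  # control-plane nodes run kubeadm init, so they are patched via InitConfiguration
  - |
    kind: InitConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        system-reserved: memory=8Gi
- role: worker
  kubeadmConfigPatches:
  # worker nodes run kubeadm join, so the same patch has to target JoinConfiguration
  - |
    kind: JoinConfiguration
    nodeRegistration:
      kubeletExtraArgs:
        system-reserved: memory=8Gi
```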