Zero-to-jupyterhub-k8s: Should we stop running the root initContainer that blocks an insecure IP by default?

Created on 5 Oct 2020 · 6 comments · Source: jupyterhub/zero-to-jupyterhub-k8s

We currently have singleuser.cloudMetadata.enabled=false, which means we actively disable access to a metadata REST API endpoint on k8s nodes in the cloud that can expose potentially sensitive information. This is an endpoint that we already disable access to by default in our k8s NetworkPolicy for the singleuser pods.
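For reference, a minimal sketch of the relevant Helm values. The IP 169.254.169.254 is the conventional cloud metadata endpoint; the exact schema here is an assumption based on the chart version being discussed and may differ in other releases:

```yaml
# values.yaml sketch (assumed schema for the chart version discussed here)
singleuser:
  cloudMetadata:
    # false means metadata access is blocked, which adds the
    # privileged initContainer described below
    enabled: false
  networkPolicy:
    # the chart's NetworkPolicy also denies egress to the metadata IP,
    # but only if a NetworkPolicy controller enforces it
    enabled: true
```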

The drawback of disabling it the way we do is that it requires starting an extra initContainer for the user pods, and that container needs a lot more privileges than normal pods, which is the cause of #1408 for example.
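To illustrate the kind of privileges involved, a rough sketch of what such an initContainer looks like (not the chart's exact spec; the name and command here are illustrative, assuming the block is done with an iptables DROP rule on the metadata IP):

```yaml
# Illustrative sketch, not the chart's actual pod spec
initContainers:
  - name: block-cloud-metadata
    # drop outbound traffic to the metadata endpoint inside the pod's netns
    command: ["iptables", "--append", "OUTPUT", "--destination", "169.254.169.254", "--jump", "DROP"]
    securityContext:
      # modifying iptables requires root and elevated capabilities,
      # which is what trips PodSecurityPolicies (#1408)
      runAsUser: 0
      privileged: true
```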

I think it could make sense to disable this fix by default now, but a drawback is that the NetworkPolicy resources, while created by default, aren't enforced without a NetworkPolicy controller available in the k8s cluster, and such a controller isn't installed by default in all k8s clusters.

Overall though, I lean towards thinking we should stop having a default that adds this privileged initContainer to the user pods in favor of relying on the NetworkPolicy we create.
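The NetworkPolicy alternative would amount to an egress rule like the sketch below: allow all egress except the metadata IP. The selector labels are assumptions for illustration, and the rule is only enforced when a NetworkPolicy controller runs in the cluster:

```yaml
# Sketch of an egress rule denying the metadata endpoint
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: singleuser
spec:
  podSelector:
    matchLabels:
      component: singleuser-server  # assumed label
  policyTypes: ["Egress"]
  egress:
    - to:
        - ipBlock:
            cidr: 0.0.0.0/0
            except: ["169.254.169.254/32"]
```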

All 6 comments

I agree with disabling it by default, and documenting that if your cluster doesn't support egress NetworkPolicies you may need to enable the initContainer.

Giving access to the metadata service is insecure/dangerous. Why make that the default? Instead, my preference would be to add docs on how to change the config so you can deploy in a cluster that uses pod security policies, or, if you know you have a network policy controller, remove the extra init container.

This way the default is secure and the users who know what they are doing/care about the optimisation can enable that mode.

Why make that the default?

Mostly because it requires starting a container with root privileges before the user's container, which can cause PodSecurityPolicies to react. That forces users, without any prior knowledge of what this container is for, to learn what it does and then figure out what they can do about it.

The point is that it is secure by default when a NetworkPolicy controller is present, but I'm okay with the idea of keeping it around by default as a redundancy that will sometimes prove useful.

Since there isn't clear agreement, I think I'll opt for keeping it around. I'm updating my PR.

a drawback is that the NetworkPolicy resources, while created by default, aren't enforced without a NetworkPolicy controller available in the k8s cluster, and such a controller isn't installed by default in all k8s clusters.

This made me think that with the proposed change the default would become insecure, because k8s clusters don't have a NetworkPolicy controller installed by default.

I think users who operate a cluster that has pod security policies in place are more knowledgeable (or have access to support that is more knowledgeable), because someone had to turn on pod security policies in the first place. They also get a "good" error message (as in, we can make it easy for them to find an explanation in the docs based on the error) that explains what is happening and how to disable the initContainer.

The average deployer might not know that their out-of-the-box k8s cluster and the out-of-the-box z2jh config together lead to a dangerous state. And the error messages you get from running in this state aren't easy for us to have in the documentation. In my imagination it would be something like "i got hacked, help", which is hard to notice/describe in the docs.

@betatim I agree with you fully now :) I just pushed the changes to #1805!
