The kubelet API gives access to a wide variety of resources.
Allow authenticating requests to the kubelet API using any of:
- x509 client certificates signed by a configured CA
- bearer tokens, verified by delegating to the API server's TokenReview API
Allow authorizing requests to the kubelet API using one of:
- AlwaysAllow (the previous default behavior)
- delegation to the API server's SubjectAccessReview API
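For reference, a minimal sketch of how this is typically switched on via kubelet flags; the file paths below are illustrative placeholders, and the same settings can also be expressed in the kubelet configuration file:

```sh
# Illustrative kubelet flags only; file paths are placeholders.
kubelet \
  --anonymous-auth=false \
  --client-ca-file=/etc/kubernetes/pki/ca.crt \
  --authentication-token-webhook \
  --authorization-mode=Webhook \
  --kubeconfig=/etc/kubernetes/kubelet.conf
# --anonymous-auth=false          reject unauthenticated requests outright
# --client-ca-file                accept x509 client certs signed by the cluster CA
# --authentication-token-webhook  verify bearer tokens via the TokenReview API
# --authorization-mode=Webhook    authorize each request via the SubjectAccessReview API
# --kubeconfig                    credentials the kubelet uses for those delegated calls
```

On the API server side, `--kubelet-client-certificate` and `--kubelet-client-key` give the API server a client certificate that the kubelet will accept.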
cc @kubernetes/docs on docs PR
cc @kubernetes/feature-reviewers on this issue to get approval before checking this off
FEATURE_STATUS: IN_DEVELOPMENT
cc @kubernetes/sig-auth
> cc @kubernetes/sig-auth
Just noting approval for the feature.
@liggitt @deads2k are y'all planning on doing this work? I am cross-tracking this on the SIG node task list.
@philips yes, I'll be doing the work
Just wondering, will this be in 1.5? Trying to determine if our team needs to invest the time into securing these communications via SSH and firewall rules to prevent https://github.com/kayrus/kubelet-exploit, or if we can just hold off for a bit and utilize our existing TLS client certs once this lands.
> Just wondering, will this be in 1.5?
Yes, we are targeting 1.5.
@deads2k awesome, thanks!!
/cc @kubernetes/huawei
Does this feature target alpha for 1.5?
> Does this feature target alpha for 1.5?
@idvoretskyi at least alpha
@erictune @liggitt This mirrors the mechanism we've used in OpenShift since 1.0. Want to call it beta?
I'd be comfortable with beta. It is built on top of two beta APIs, and has been tested. The remaining work is automated load testing and default enablement in the various install/setup methods
No objection to beta. Ask node team too?
I am a lead on @kubernetes/sig-node and agree this should be beta.
I'm removing the team/SIG-Node label given that SIG-Auth is listed as the owner in the 1.5 feature spreadsheet; if we're using labels to describe areas of overlap, then we need something else in these issues to identify the owner.
I agree that the feature is probably stable, but it makes me a little uncomfortable that our main deployment targets don't turn it on by default, so we have no end-to-end tests in CI verifying it plays nicely with the rest of the system.
@liggitt can you confirm that this item targets stable in 1.6?
> @liggitt can you confirm that this item targets stable in 1.6?
Yes
keeping in beta status until the TokenReview and SubjectAccessReview APIs move to stable (now targeting 1.7)
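To make that dependency concrete, here is a hedged sketch of the kind of delegated check involved: for each request, the kubelet asks the API server whether the caller may access the corresponding nodes/&lt;subresource&gt; for that node. The user and node names below are placeholders, not taken from this thread:

```sh
# Hypothetical example: ask the API server the same question the kubelet's
# webhook authorizer would ask for a request to /pods on node "node-1" by
# user "demo-user" (both names are placeholders).
kubectl create -f - -o yaml <<'EOF'
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: demo-user
  resourceAttributes:
    verb: get
    resource: nodes
    subresource: proxy
    name: node-1
EOF
# The returned status.allowed field shows whether the request would be permitted.
```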
@liggitt @idvoretskyi moving to next milestone then
Hi guys!
So, what's the status on this? I saw some PR for docs which seems to have changed.
Having a secure Kubernetes cluster right out of the kubeadm 'box' would be great. On a side note, can one use Calico or any of the Networking and Network Policy addons for this? AFAIK, VXLAN-based ones are insufficient since VXLAN doesn't provide security (please correct me if I'm wrong), but I'm thinking something like Calico could help. Would a properly configured Calico be enough to make the "These connections are not currently safe to run over untrusted and/or public networks" warning from the "apiserver -> nodes, pods, and services" section that lives here obsolete?
> Having a secure Kubernetes cluster right out of the kubeadm 'box' would be great
That is the case. kubeadm uses the latest stable security features for pretty much every release.
Regarding the other questions you have, I don't think they are very relevant to this feature. This feature is about locking down the kubelet API endpoint, not network (Pod2Pod, Node2Node) communication.
@liggitt I guess we could close this. Stable in v1.6 which was released some time ago. Most providers (like kubeadm) enable this by default.
> Regarding the other questions you have, I don't think they are very relevant to this feature. This feature is about locking down the kubelet API endpoint, not network (Pod2Pod, Node2Node) communication.
Locking it down as in access control applied at the level of the kubelet API endpoint itself, right? Not preventing access to it, because I can still pretty much do a `wget --no-check-certificate https://<ip>:10250` on the nodes, etc.
It would be great to have a way to only access this through SSH tunnels, or perhaps HTTP client certificates, no? Although SSH tunnels sound more secure (as the port wouldn't be reachable from the outside).
I haven't really considered the implications/work of such a feature given the Kubernetes architecture; it's simply that, by my book, having no port to connect to is better than having one available.
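For what it's worth, a rough sketch of the tunnel idea, assuming the kubelet port is already firewalled from the outside and only SSH is exposed; the host names are placeholders:

```sh
# Hypothetical workflow: keep 10250 unreachable from outside the cluster, then
# reach the kubelet API from a workstation through an SSH tunnel to the node.
ssh -N -L 10250:127.0.0.1:10250 user@<node-ip> &
curl -k https://127.0.0.1:10250/pods   # still subject to kubelet authn/authz
```

Either way, the kubelet's own authentication and authorization still apply on top of any network restriction.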
The kubelet port at `https://<ip>:10250` can, with this feature (in the kubeadm case), only be accessed by client certificates signed by the cluster CA with `O=system:masters`, which basically means the API servers are the only ones that can access the kubelet ports by default.

> wget --no-check-certificate https://<ip>:10250

That will yield an `Unauthorized` response. If you try to use a client cert that is not in the `system:masters` organization, you will get `Forbidden` as the return.
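To make that concrete, a rough sketch of the difference, assuming a kubeadm-style setup with this feature enabled; the PKI paths are the usual kubeadm locations but should be verified on your own cluster:

```sh
# Without credentials: authentication fails.
curl -k https://<node-ip>:10250/pods
# -> 401 Unauthorized

# With the API server's kubelet client certificate (signed by the cluster CA,
# O=system:masters in the kubeadm case) the request is authenticated and authorized:
curl -k \
  --cert /etc/kubernetes/pki/apiserver-kubelet-client.crt \
  --key /etc/kubernetes/pki/apiserver-kubelet-client.key \
  https://<node-ip>:10250/pods
# -> 200 with the pod list

# A cert signed by the cluster CA but outside system:masters authenticates,
# yet fails the SubjectAccessReview check -> 403 Forbidden.
```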
> That will yield an Unauthorized response. If you try to use a client cert that is not in the system:masters organization you will get Forbidden as the return.
🤔 I was pretty sure I got a `404` instead of a `401` from running that wget against an out-of-the-box kubeadm 1.7.1 cluster. I guess I can always change the config, but it would be nice if kubeadm did it for us from the start, perhaps (?)
@joantune @luxas can you open issues against kubeadm for this? This issue is only to track the feature implemented in the kubelet.
By the way, the documentation does a good job of detailing the concepts of Nodes, Pods, and Clusters and how they interact with each other; however, I found no reference on how these interact with the host's world, i.e. it would be nice to see some diagrams or similar on how the various services and concepts (pods, nodes, services, processes [i.e. kubectl, kube-proxy, kubeadm]) interact with the host and its native network.
I have been looking quite thoroughly at the docs, yet I still have no clear picture of how that looks. Making it clearer would lower the barrier to learning and using it.
Can anyone point me to a diagram/doc? I'm asking because I want to know exactly what I should or shouldn't expose network-wise.
For instance, imagine that I have an equivalent of Amazon's VPCs: what should be exposed to the outside and what shouldn't? How does a service get exposed? (i.e. is there a Kubernetes process that all the traffic flows through [e.g. kube-proxy], or does it expose the pod's port directly?) Would it be OK to have kubectl only accessible internally (if I'm willing to do an SSH tunnel to one of the machines)? Would that still allow Kubernetes to work well?
These things would help sysadmins have a clear picture of whether their network and cluster configuration are secure, and be confident that they are exposing the least they should.
@joantune The best reference I've found for some of the questions you asked is this video:
https://www.youtube.com/watch?v=y2bhV81MfKQ
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
> keeping in beta status until the TokenReview and SubjectAccessReview APIs move to stable (now targeting 1.7)
@liggitt I'm pretty sure we can close this now as many deployments secure the kubelet API OOTB, and the APIs are v1/stable already.
> @liggitt I'm pretty sure we can close this now as many deployments secure the kubelet API OOTB, and the APIs are v1/stable already.
agree, closing