Design Doc: https://github.com/kubernetes/kubernetes/blob/master/docs/design/podaffinity.md
/pkg/apis/...
_FEATURE_STATUS is used for feature tracking and to be updated by @kubernetes/feature-reviewers._
FEATURE_STATUS: IN_DEVELOPMENT
More advice:
Design
Coding
Docs
cc @kubernetes/sig-scheduling
One thing which might make a common case of anti-affinity simpler is to allow expansion of the "spread" concept to an arbitrary label. That is, if I could say:
spread: { type: database }
That would let me express the idea of "don't run a pod with type: database on a node with any other pod of type: database", and thus allow a very simple way of expressing "don't put two Postgres pods on the same node, and don't put them on the same node as Cassandra or MySQL".
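For what it's worth, that intent maps onto the pod anti-affinity API; here is a minimal sketch using the field-based form of the API (the alpha version in the design doc used annotations), with the type: database label, pod name, and image purely illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: postgres-0          # illustrative name
  labels:
    type: database
spec:
  affinity:
    podAntiAffinity:
      # Hard rule: never co-locate this pod with any pod labeled type=database.
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            type: database
        topologyKey: kubernetes.io/hostname   # "same node" spread
  containers:
  - name: postgres
    image: postgres:9.6     # illustrative image
```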
I'd expect that there are a number of cases where a specific _class_ of applications tends to use the same resources. For example, I can imagine not wanting two busy http routers to go on the same node due to network competition, even if one is HAProxy and the other is Nginx.
... continued:
One refinement of this is that I can imagine wanting a user-controllable "weak" vs. "hard" spread rule. For example, in most of my Postgres deployments, I would rather be short one or two pods than put two Postgres pods on the same machine (a hard rule). On the other hand, for Etcd, I could imagine saying "don't put two pods from this class on the same node if you can help it", which would be a soft rule.
Both of the things you mentioned are supported. See the design doc linked to above (it had the wrong URL originally and I fixed it last night, so you may have read the wrong doc if you already looked at that).
Ah, I read the design doc and I couldn't find that particular feature. Keywords/lines?
"don't run a pod with type: database on a node with any other pod of type: database"
See "Can only schedule P onto nodes that are running pods that satisfy P1. (Assumes all nodes have a label with key node and value specifying their node name.)". Then substitute
"weak" vs. "hard" spread rule.
See the comment for PreferredDuringSchedulingIgnoredDuringExecution (that's the "soft" flavor), as compared to the other two, in the PodAffinity/PodAntiAffinity types in the API section of the doc.
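To make the contrast concrete, here is a minimal sketch of the "soft" flavor in the field-based form of the API (the weight and the type: database selector are illustrative); the scheduler treats this as a preference rather than a requirement:

```yaml
affinity:
  podAntiAffinity:
    # Soft rule: avoid nodes that already run a type=database pod if possible.
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            type: database
        topologyKey: kubernetes.io/hostname
```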
Keen, thanks!
@davidopp says this is done.
Sorry, I was mis-remembering what this issue is; the part of this that is described in #51 is done, but this one was not intended to be finished in 1.4. I've moved to 1.5 milestone.
Trying to implement pod (anti)affinity for DaemonSets too. Someone PTAL: https://github.com/kubernetes/kubernetes/pull/31136
Goal for 1.5 is to move this to Beta. More details in kubernetes/kubernetes#25319
/cc @rrati @jayunit100
@wojtek-t can you clarify at which stage this feature is going to be delivered in 1.5? @davidopp has it defined as beta, while in this conversation https://github.com/kubernetes/kubernetes/pull/31136#issuecomment-253549312 I see some comments raising concerns.
@idvoretskyi - most probably it won't get to beta, but that's not a final decision from what I know.
It's not going to be beta. There are a few features we recently decided to remove from the set we were going to move to beta in 1.5. I'll update the feature bugs shortly.
The details can be found here: https://github.com/kubernetes/kubernetes/issues/30819 and here: https://github.com/kubernetes/kubernetes/issues/34508
The general gist is: annotations as a mechanism for alpha-beta-GA api promotion has a number of issues, and @kubernetes/sig-api-machinery is working on a "happy-path" which is still TBD.
Yes, what @timothysc said. I'm removing the beta-in-1.5 label and the 1.5 milestone.
I have a use case for this feature which I don't think is covered by the current design, but please correct me if I'm wrong.
Imagine a simple example cluster with two nodes. I want to create deployments in this cluster with two pod replicas each. I want to require that the two pods are not on the same node. I can do this with pod anti-affinity based on a label like app=foo, but when I edit the deployment, creating a new replica set, the new pods can't be scheduled, because each node already has a pod with the label app=foo. I would have to change the deployment's labels and affinity rules each time I deploy.
What I really want is a way to require that pods with the same labels and the same pod-template-hash don't end up on the same node, but I don't think there's a way to express that in the current affinity system because there's no operator for "equal to the value of that label for this pod". In other words, I'd have to know the value of the pod-template-hash in advance somehow.
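Here is a minimal sketch of the setup being described, with illustrative names and the apps/v1 form of the Deployment API used for brevity: both the old and the new ReplicaSet stamp their pods with app=foo, so on a two-node cluster the new pods have nowhere to schedule.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: foo            # also matches pods from the previous ReplicaSet
            topologyKey: kubernetes.io/hostname
      containers:
      - name: foo
        image: nginx                # illustrative image
```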
My understanding of the way Deployments work for rolling update is that a second RS is created, initially with 0 replicas, and then the first RS is scaled down as the second RS is scaled up. So the total number of replicas across the two RSes is 2, except perhaps for transient conditions. Initially both are in the "old version," then one is in the "new version" and one is in the "old version", and finally both are in the "new version."
If you have maxSurge == 0, you get "up to but not more than N" behavior.
If you want to keep availability >= 100% of your original N, you'd need something that is unique for each RS as you note.
We don't really have downward API for regular fields, but I could imagine something like that (have pod affinity rules depend on the current value of a label for a pod).
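For reference, a sketch of the maxSurge == 0 setting mentioned above (the values are illustrative); the rollout then replaces pods in place rather than surging past N:

```yaml
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 0         # never run more than N replicas during the rollout
      maxUnavailable: 1   # so availability can temporarily drop below N
```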
Ah, thanks for the explanation.
As a workaround, could you have a pod label in your podTemplate with key "version" (or "generation", or something like that) and a value that is initially 0, plus a corresponding pod anti-affinity annotation with the same key/value pair, and then bump up both values (label and anti-affinity annotation) each time you modify the podTemplate? The value could be the hash of everything in the podTemplate except this one field, in which case I think it's basically equivalent to the feature @jimmycuadra requested. (Though a simple version number you bump up on each modification is simpler.)
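A minimal sketch of that workaround, written against the field-based API for readability (at the time this would have been the alpha annotation form); the version value is illustrative and has to be bumped in both places on every podTemplate change:

```yaml
template:
  metadata:
    labels:
      app: foo
      version: "1"              # bump to "2" (or a template hash) on the next edit
  spec:
    affinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app: foo
              version: "1"      # keep in sync with the label above
          topologyKey: kubernetes.io/hostname
```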
We will be moving this feature to beta in 1.6. Tracking issue is kubernetes/kubernetes#25319.
Current user guide documentation is here
Any chance of the use case I mentioned being on the roadmap for the stable release? The suggested workaround might be prone to error, and it'd be great to have the server aware of the user's intent. If not, would this be considered for a future iteration of this API? In that case, should I open a new issue somewhere to track it?
You can open a feature request in the kubernetes/kubernetes repo and link it to this issue. We could consider it if enough people want it. Personally I'd prefer if Deployment controller managed the label changes (i.e. automate the "workaround") and we didn't change the API for pod (anti-)affinity.
Given that inter-pod (anti)affinity is going to be beta soon, can we get back to kubernetes/kubernetes#34543 maybe? It's a quirk (unneeded dependency) that makes it hard to move inter-pod affinity to General Predicates for instance.
kubernetes/kubernetes#34543 (and moving to General Predicates) isn't an API change, so it's not strictly necessary for beta (i.e. it can be done after moving to beta).
Sorry we haven't reviewed that PR yet. We're working on getting more people up to speed on the scheduler code, but right now we only have the bandwidth to review things that are critical or trivial. I hope we'll get to it in the next couple of weeks.
Thanks for your patience...
@davidopp any update on this feature? Docs and release notes are required (please provide them in the features spreadsheet).
Updated spreadsheet with release note and link to documentation.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
Prevent issues from auto-closing with an /lifecycle frozen comment.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close