/cc
This work is part of the SLO improvements we have planned for Kubernetes, helping us meet the requirements of much denser clusters. The key goal for alpha is to prove that it will help us reach the next scale limit for Kubernetes; it will take work across several releases to get there.
@smarterclayton @kubernetes/sig-api-machinery-feature-requests @kubernetes/sig-scalability-feature-requests can you confirm that this feature targets 1.8?
If yes, please update the features tracking spreadsheet with the feature data; otherwise, let's remove this item from the 1.8 milestone.
Thanks
Yes, it was delivered for 1.8.
Beta for 1.9
Goals for 1.9 - expose in CLI for get.go
Goals for 1.10 - expose for all CLI commands, GA
@smarterclayton :wave: Please open a documentation PR and add a link to the 1.9 tracking spreadsheet. Thanks in advance!
@smarterclayton Bump for docs ☝️
/cc @idvoretskyi
@smarterclayton @kubernetes/sig-api-machinery-feature-requests @kubernetes/sig-scalability-feature-requests any updates on the docs status?
A friendly reminder that the docs deadline is tomorrow.
/cc @zacharysarah
Bah, I knew I forgot something yesterday. Ended up having to write more API docs since there wasn't a graceful place to hang the topic.
https://github.com/kubernetes/website/pull/6540
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
This was beta in 1.9; we have not yet moved it to GA. Docs are complete, and CLI commands support it.
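For reference, chunking surfaces as the limit and continue fields on list requests. A minimal sketch of consuming a chunked list with client-go (the function name and client wiring here are illustrative, and a recent client-go where List takes a context is assumed):

```go
// Illustrative sketch only: page through pods 500 at a time using the
// limit/continue list options. Clientset construction is assumed elsewhere.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func listPodsInChunks(ctx context.Context, cs kubernetes.Interface, namespace string) error {
	opts := metav1.ListOptions{Limit: 500} // ask the server for at most 500 items per page
	for {
		pods, err := cs.CoreV1().Pods(namespace).List(ctx, opts)
		if err != nil {
			return err
		}
		fmt.Printf("received %d pods in this chunk\n", len(pods.Items))
		if pods.Continue == "" { // an empty continue token means the list is complete
			return nil
		}
		opts.Continue = pods.Continue // send the token back to get the next chunk
	}
}
```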
@smarterclayton when are you estimating the GA stage?
1.11 unless we have a reason not to.
@smarterclayton
Any plans for this in 1.11?
If so, can you please ensure the feature is up-to-date with the appropriate labels:
stage/{alpha,beta,stable}
sig/*
kind/feature
cc @idvoretskyi
@deads2k and I chatted briefly. I think we need to discuss in api-machinery whether we want to go one step further and expose this for end users (like user interfaces) that want to do human-focused paging before we move to stable. So at minimum this is 1.12.
Thanks for the update, Clayton!
@smarterclayton --
It looks like this feature is currently in the Kubernetes 1.12 Milestone.
If that is still accurate, please ensure that this issue is up-to-date with all of the required information.
Please make sure all PRs for features have relevant release notes included as well.
Happy shipping!
/cc @justaugustus @kacole2 @robertsandoval @rajendar38
@smarterclayton Any plans to integrate this feature for extension API servers (https://github.com/kubernetes/apiserver)?
As I see, it also doesn't work for CRD objects.
EDIT: I found how to enable paging in an extension apiserver: the Paging flag in k8s.io/apiserver/pkg/storage/storagebackend/config.go has to be set to true.
But I still don't know how to enable it for CRD objects.
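For anyone hitting the same thing, a hedged sketch of that configuration (the etcd prefix is a placeholder, the codec is elided, and the Paging field follows the k8s.io/apiserver layout of this era, so it may move or disappear in later releases):

```go
// Hedged sketch: turn on list chunking for an extension apiserver's etcd
// storage backend. The prefix is a placeholder and the codec is elided;
// the Paging field may be renamed or removed in later apiserver releases.
package main

import (
	"k8s.io/apiserver/pkg/storage/storagebackend"
)

func newStorageBackendConfig() *storagebackend.Config {
	cfg := storagebackend.NewDefaultConfig("/registry/example.io", nil /* runtime.Codec */)
	cfg.Paging = true // honor limit/continue on LIST requests served from etcd3
	return cfg
}
```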
@smarterclayton --
Feature Freeze is today. Are we planning on graduating this feature in Kubernetes 1.12?
If so, can you make sure everything is up-to-date, so I can include it on the 1.12 Feature tracking spreadsheet?
Slack update:
deads2k [10:28 AM]
@justaugustus We wanted to solicit more comments first. We mentioned it in apimachinery here: https://docs.google.com/document/d/1x9RNaaysyO0gXHIr1y50QFbiL1x8OWnk2v3XnrdkT5Y/edit#heading=h.lgl8gmvpt98r and we should nail down conformance before doing so, though the API is backwards compatible.
justaugustus [10:33 AM]
@deads2k so hold until 1.13 at least?
deads2k [10:47 AM]
I expect so
Removed from the 1.12 milestone.
Are there any plans to add pagination support to custom resources?
Are there any plans to add pagination support to custom resources?
Yes. It was an oversight they weren't enabled already. Opened https://github.com/kubernetes/kubernetes/pull/67861 to enable them
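Once that is in, custom resources should honor the same limit and continue options as built-in types; a hedged sketch using the dynamic client (the GroupVersionResource, namespace, and client wiring are hypothetical, and a recent client-go where List takes a context is assumed):

```go
// Hedged sketch: page through a custom resource list with the dynamic client.
// The GroupVersionResource, namespace, and client construction are hypothetical.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
)

func listWidgetsInChunks(ctx context.Context, dyn dynamic.Interface) error {
	gvr := schema.GroupVersionResource{Group: "example.io", Version: "v1", Resource: "widgets"}
	opts := metav1.ListOptions{Limit: 100} // at most 100 items per page
	for {
		list, err := dyn.Resource(gvr).Namespace("default").List(ctx, opts)
		if err != nil {
			return err
		}
		fmt.Printf("received %d widgets in this chunk\n", len(list.Items))
		token := list.GetContinue()
		if token == "" { // empty continue token: the list is complete
			return nil
		}
		opts.Continue = token
	}
}
```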
Hi
This enhancement has been tracked before, so we'd like to check in and see if there are any plans for this to graduate stages in Kubernetes 1.13. This release is targeted to be more ‘stable’ and will have an aggressive timeline. Please only include this enhancement if there is a high level of confidence it will meet the release deadlines.
Please take a moment to update the milestones on your original post for future tracking and ping @kacole2 if it needs to be included in the 1.13 Enhancements Tracking Sheet
Thanks!
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle rotten
Hello @smarterclayton @cben , I'm the Enhancement Lead for 1.15. Is this feature going to be graduating alpha/beta/stable stages in 1.15? Please let me know so it can be tracked properly and added to the spreadsheet. This will also require a KEP to be included.
Once coding begins, please list all relevant k/k PRs in this issue so they can be tracked properly.
There is a problem with https://github.com/kubernetes/kubernetes/pull/67861: it was incomplete. It did not affect the sample-apiserver, and thus not anything derived from it.
I have been looking, and did not find a deep place where one common modification would cover all servers. In the main apiserver, for example, the pagination goes into the storage factory that appears in pkg.master.ExtraConfig; other servers have their own independent ExtraConfig structs, and the one for sample-apiserver is empty. Inquiring minds want to know: why doesn't that have a StorageFactory while the main apiserver's ExtraConfig does?
There is a problem with kubernetes/kubernetes#67861: it was incomplete. It did not affect the sample-apiserver, and thus not anything derived from it.
thanks, will open a fix
FYI, I drafted a shallow fix in https://github.com/MikeSpreitzer/kubernetes/tree/sample-apiserver-paging
why doesn't that have a StorageFactory while the main apiserver's ExtraConfig does?
the main API server's storage is significantly more complex, and not in good ways... the storage factory allowed the kube apiserver to configure things like:
most aggregated servers serve a smaller, more coherent set of resources, and don't need all of that complexity (if they really want it, apiserver/pkg/server/storage#NewDefaultStorageFactory is there for the using)
opened https://github.com/kubernetes/kubernetes/pull/77278 to fix the defaults, and to honor the feature gate in the sample-apiserver
Hi @smarterclayton - I'm an Enhancements shadow for 1.16.
Is this feature going to be graduating alpha/beta/stable stages in 1.16? Please let me know so it can be added to the 1.16 Tracking Spreadsheet.
Once development begins or if it already has, please list all relevant k/k PRs in this issue so they can be tracked properly.
I noticed there's no KEP linked in the issue description; as a reminder, every enhancement requires a KEP in an implementable state with Graduation Criteria explaining each alpha/beta/stable stage's requirements.
As a reminder, 1.16 milestone dates are: Enhancement Freeze 7/30 and Code Freeze 8/29.
Thanks!
@liggitt do we want to promote this to GA? I see no reason not to.
@liggitt do we want to promote this to GA? I see no reason not to.
depends on whether resolving the watch cache paging issue is considered a blocker for GA.
Hello @smarterclayton, 1.17 Enhancement Shadow here! 🙂
I wanted to reach out to see if this enhancement will be graduating to alpha/beta/stable in 1.17?
Please let me know so that this enhancement can be added to the 1.17 tracking sheet.
Thank you!
🔔 Friendly Reminder
A Kubernetes Enhancement Proposal (KEP) must meet the following criteria before Enhancement Freeze to be accepted into the release:
The KEP must be in an implementable state.
All relevant k/k PRs should be listed in this issue.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
@liggitt @smarterclayton I am assuming this is not being targeted for promotion in 1.18? Let me know either way so I can update the release team tracker.
given the issues being worked on in https://github.com/kubernetes/kubernetes/pull/86430, I would like to consider whether there are ways we need to (or can) improve this before GA
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Hey there @smarterclayton, 1.19 Enhancements shadow here. I wanted to check in and see if you think this Enhancement will be graduating in 1.19?
In order to have this be part of the release:
The current release schedule is:
If you do, I'll add it to the 1.19 tracking sheet (http://bit.ly/k8s-1-19-enhancements). Once coding begins please list all relevant k/k PRs in this issue so they can be tracked properly. 👍
Thanks!
Hey @smarterclayton, I'm following up on my previous update on this Enhancement being part of the v1.19 release.
Do you happen to have any update on the possibility of this being included in the v1.19 release?
Thanks again for your time and contributions. 🖖
Hey @smarterclayton, are there any plans with regards to the inclusion of this enhancement in the v1.19 release?
I noticed there's no KEP linked in the issue description; as a reminder, every enhancement requires a KEP in an implementable state with Graduation Criteria explaining each alpha/beta/stable stage's requirements.
Please note that Enhancements freeze is on May 19.
Thanks.
Hey @smarterclayton, Unfortunately the deadline for the 1.19 Enhancement freeze has passed. For now this is being removed from the milestone and 1.19 tracking sheet. If there is a need to get this in, please file an enhancement exception.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
Hi @liggitt & @smarterclayton
Any plans for this to graduate in 1.20?
Thanks,
Kirsten
Hi @liggitt & @smarterclayton
Circling back around any update on this?
Thanks!
Kirsten