We are running several identical Kubernetes clusters in which some pods need to do maintenance on a regular schedule (e.g. every 12 hours). But it would be nice if the jobs didn't all start at exactly the same time, since they put the receiving APIs under considerable load.
It would be great if Kubernetes CronJobs offered a jitter setting similar to what other cron implementations offer [1].
Thanks.
[1] https://www.freebsd.org/cgi/man.cgi?query=cron&sektion=8
/sig api-machinery
/sig apps
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
@jgrobbel I think the reason this is not getting much traction is that an enhancement like this needs to be filed against the enhancements repo as a KEP.
CronJob is part of SIG Apps. Would you be interested in taking this to the SIG Apps mailing list or meeting?
Hi @jgrobbel and @mattf -- I am working on contribex issue triage, and wanted to circle back as to the status of this PR.
If you have any updates as to where it's at in the process, if it can be closed or frozen, that would be great. Did this get brought up at the SIG Apps meeting, or was a KEP filed?
If this issue needs to be escalated or there is someone else/another SIG I should loop in, please let me know so that I can get this moving again! Thank you so much!
@celanthe There is no PR that I am aware of, I have merely raised the issue. Looks like it is not getting much traction so we may as well close it.
FWIW I have worked around it by adding random sleeps in all my k8s job container entrypoints which is good enough for me.
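For anyone landing here, that workaround can be sketched as an entrypoint wrapper script; the `MAX_JITTER` variable, its default, and the wrapper itself are illustrative and not from the original comment:

```shell
#!/bin/bash
# Hypothetical entrypoint wrapper: delay the real command by a random
# 0..MAX_JITTER-1 seconds so identical CronJobs across clusters don't
# all hit the downstream API at the same instant.
MAX_JITTER="${MAX_JITTER:-300}"    # illustrative default: up to 5 minutes
sleep "$(( RANDOM % MAX_JITTER ))"
exec "$@"                          # hand off to the container's real command
```

The container image's `ENTRYPOINT` would point at this wrapper, with the original command passed as arguments, so the jitter applies uniformly without touching the application itself.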
Thanks so much for the response @jgrobbel! Done!
/close
@celanthe: Closing this issue.
In response to this:
Thanks so much for the response @jgrobbel! Done!
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
FYI we are tracking this as part of Cronjob GA KEP https://github.com/kubernetes/enhancements/pull/978/files#