At the end of every release cycle the minimum supported Kubernetes version must be bumped.
The number of touched files and the overall size of the changes are large for something that happens late in the release cycle. The process needs to be simplified, possibly by using more consts and/or helper functions. Not doing so risks breakages late in the release cycle.
The idea behind all of those PRs is to bump up MinimumControlPlaneVersion and MinimumKubeletVersion. This still needs to be done by a PR at the end of the release cycle, but the idea here is to keep that PR tiny by turning the many hardcoded versions (that we otherwise bump manually now) into values that somehow depend upon MinimumControlPlaneVersion and MinimumKubeletVersion.
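As a rough illustration only (the package layout and version numbers here are placeholders, not the actual kubeadm source), the idea is that the skew-policy versions live in one central place, so the end-of-cycle PR only has to touch a few lines:

```go
// Hypothetical central constants file; bumping a release would only mean
// editing the version strings below.
package constants

import "k8s.io/apimachinery/pkg/util/version"

var (
	// CurrentKubernetesVersion is the version this kubeadm targets.
	CurrentKubernetesVersion = version.MustParseSemantic("v1.13.0")

	// MinimumControlPlaneVersion is the oldest control plane kubeadm can manage.
	MinimumControlPlaneVersion = version.MustParseSemantic("v1.12.0")

	// MinimumKubeletVersion is the oldest kubelet version kubeadm can deploy.
	MinimumKubeletVersion = version.MustParseSemantic("v1.12.0")
)
```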
The PR that solves this issue is going to turn those hardcoded versions into values derived from MinimumControlPlaneVersion and MinimumKubeletVersion.
Sample of the last few PRs:
/kind cleanup
/area releasing
@rosti thanks for filing this issue.
What about adding a few more details so this can qualify as a good first issue?
/unassign
/good-first-issue
@rosti:
This request has been marked as suitable for new contributors.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-good-first-issue command.
In response to this:
/unassign
/good-first-issue
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
from my experience thus far, good-first-issue only qualifies for change requests that are super simple. :[
i will remove the good-first-issue and see if someone takes it as help-wanted.
We could probably inject from the build.
I'll take this.
/assign
Thanks @bart0sh
/lifecycle active
@bart0sh @neolit123 @rosti @yagonobre
now that #71946 merged and #72299 is on a good track, I would like to open the discussion on some possible further improvements of this effort.
Considering that version skew/bump is now driven by two constants, MinimumKubeletVersion and CurrentKubernetesVersion:
1. What about having MinimumKubeletVersion computed dynamically from CurrentKubernetesVersion? This will require adding a sort of very simple "version math" (e.g. for computing the previous minor).
2. There are still a few hardcoded versions in cmd/kubeadm/app/phases/upgrade/policy_test.go and cmd/kubeadm/app/preflight/checks_test.go. What about using the same "version math" introduced above to get rid of those leftovers of the previous manual version bump?
3. What about having CurrentKubernetesVersion injected from the build as suggested by @timothysc and others?
What about having MinimumKubeletVersion computed dynamically from CurrentKubernetesVersion?
+1, but this might make these into variables.
There are still a few hardcoded versions in cmd/kubeadm/app/phases/upgrade/policy_test.go and cmd/kubeadm/app/preflight/checks_test.go. What about using the same "version math" introduced above to get rid of those leftovers of the previous manual version bump?
+1 for limiting usage as much as possible.
What about having CurrentKubernetesVersion injected from the build as suggested by @timothysc and others?
next to being used for help screens in the CLI, it's also a fallback in cases where the build is bogus, as per this PR:
https://github.com/kubernetes/kubernetes/pull/72454
(it's not fully agreed whether we should do that)
a bogus build also means that a version is not populated at build time at all.
so we can turn it into a variable, give it a default value on each release, but also have it populated with an actual value if the linker received one at build time.
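A minimal sketch of that approach (the variable name and package path are illustrative, not the real ones):

```go
package constants

// currentKubernetesVersion keeps a per-release default so "bogus" builds
// (no linker injection, e.g. builds from a source tarball) still get a sane
// value. It can be overridden at build time with something like:
//   go build -ldflags "-X <module path>/constants.currentKubernetesVersion=v1.14.0"
var currentKubernetesVersion = "v1.13.0" // default, bumped on each release
```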
+1 on all points of @fabriziopandini from me.
We have to be careful with the last point though. Fetching a version from git (or any other VCS for that matter) is a bit bogus and prone to errors, thus it's easy to do "bad builds" this way. Also, we won't be able to produce a valid build from source code tarballs.
What we need is some place, central to the K8s source base, that contains the current version components. Much like what the Linux kernel has been doing for years (like so).
This is the only reliable way in my view, but I suppose that we should also get SIG Release excited about it, because someone (ideally from the release team) needs to go and flip the numbers prior to a new release.
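For illustration only (the package and names are made up), such a central file could hold the version components separately, kernel-style:

```go
// A hypothetical central version file for the whole tree, flipped by the
// release team before each release (analogous to the Linux kernel's
// VERSION/PATCHLEVEL/SUBLEVEL fields in the top-level Makefile).
package k8sversion

const (
	Major      = 1
	Minor      = 13
	Patch      = 0
	PreRelease = "" // e.g. "beta.1"; empty for a final release
)
```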
@fabriziopandini
What about having MinimumKubeletVersion computed dynamically from CurrentKubernetesVersion?
Sounds good to me. However, the implementation might not be that small, and might not be worth the effort. For example, what would MinimumKubeletVersion be for the current version '2.0.0-pre1'?
There are still a few hardcoded versions in cmd/kubeadm/app/phases/upgrade/policy_test.go and cmd/kubeadm/app/preflight/checks_test.go. What about using the same "version math" introduced above to get rid of those leftovers of the previous manual version bump?
Makes sense to me. Still need more explanations about the math.
What about having CurrentKubernetesVersion injected from the build as suggested by @timothysc and others?
Agree with @rosti here. We need to be very careful. Having this constant in some central place for K8s sounds much better to me than messing with build systems. Just my $0.02.
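Coming back to the "version math" asked about above, here is a minimal sketch (previousMinor is a hypothetical helper built on the apimachinery version package); the '2.0.0-pre1' case shows exactly where a policy decision, not math, is needed:

```go
package constants

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/version"
)

// previousMinor derives a minimum supported version from the current one
// by stepping one minor release back, e.g. v1.13.x -> v1.12.0.
func previousMinor(v *version.Version) *version.Version {
	if v.Minor() == 0 {
		// A new major (e.g. "2.0.0-pre1") has no previous minor within the
		// same major; what to return here is a policy decision.
		return v
	}
	return version.MustParseSemantic(fmt.Sprintf("%d.%d.0", v.Major(), v.Minor()-1))
}
```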
@rosti, @fabriziopandini Are we done with this? If not, please let me know what else needs to be done to close this. Thanks.
@rosti, @fabriziopandini ^^^^
@bart0sh this looks mostly done (at least in the issue's original intent). I'll revise the effort here at the beginning of next week (and in the context of kubernetes/kubernetes#72685) and close the issue if necessary.
We should refactor these 3 tests:
TestEtcdSupportedVersion
TestGetAvailableUpgrades
TestEnforceVersionPolicies
moving to 1.15. only a small portion left here.
@yagonobre @bart0sh - could you fill out what's left to do here?
@rosti any update on this? Should we just close this issue?
@yagonobre promised during office hours to add the remaining items that we need to update (some unit tests), but we are mostly done here.
for now we have 2 tests that depend on the k8s version, so if we remove old etcd versions they'll break.
For me the big problem is TestGetAvailableUpgrades, which depends on the CoreDNS version of a given Kubernetes version. I propose transforming the CoreDNSVersion constant into a map, like the etcd versions.
TestGetAvailableUpgrades, which depends on the CoreDNS version of a given Kubernetes version. I propose transforming the CoreDNSVersion constant into a map, like the etcd versions.
+1 for CoreDNSVersion to become a map.
the same has to happen with KubeDNSVersion
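Something like this shape (the version pairings below are illustrative only), mirroring how kubeadm already maps Kubernetes minors to etcd versions:

```go
package constants

// Hypothetical: keyed by Kubernetes minor version, like SupportedEtcdVersion.
var CoreDNSVersion = map[uint8]string{
	12: "1.2.2", // illustrative pairings only
	13: "1.2.6",
}
```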
for now we have 2 tests that depend on the k8s version, so if we remove old etcd versions they'll break.
are those:
TestGetAvailableUpgrades
TestEnforceVersionPolicies
?
ideally, if we remove old etcd versions (i.e. SupportedEtcdVersion map values), we should also be bumping CurrentKubernetesVersion and the other constants such as MinimumKubeletVersion, so the tests should not break.
@neolit123 these tests use a hardcoded Kubernetes version. I'll try to refactor them using a map for core/kubeDNSVersion after I finish the reset phase.
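A possible lookup to go with the CoreDNSVersion map sketched above (the helper and the fallback constant are hypothetical), analogous to kubeadm's etcd version lookup:

```go
package constants

import "k8s.io/apimachinery/pkg/util/version"

// defaultCoreDNSVersion is an assumed fallback for Kubernetes minors that
// are not present in the CoreDNSVersion map.
const defaultCoreDNSVersion = "1.2.6"

// coreDNSVersionFor returns the CoreDNS version paired with the given
// Kubernetes version, falling back to the default for unknown minors.
func coreDNSVersionFor(k8sVersion *version.Version) string {
	if v, ok := CoreDNSVersion[uint8(k8sVersion.Minor())]; ok {
		return v
	}
	return defaultCoreDNSVersion
}
```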
added the latest notes/update about the constants bump here: https://github.com/kubernetes/kubernetes/pull/80833
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
i'm going to close this at this point.
as can be seen in our per-cycle punch card and related PRs, things are much simpler:
https://github.com/kubernetes/kubeadm/issues/1963
https://github.com/kubernetes/kubernetes/pull/83312/files
when kubeadm moves to k/kubeadm we can think of automating the population of version "constants", which may or may not be desired.