```yaml
# TODO: This is pinned to the 1.8 version of etcd. 1.9 was changed to
# 3.1.x. ETCD doesn't downgrade minor versions. So for downgrades, we
# pin this to the lower version. The long term fix is to change
# downgrade/upgrade to not upgrade/downgrade etcd.
- --env=TEST_ETCD_VERSION=3.0.17
```
So this is a fun comment to find in the v1.13 release cycle
/milestone v1.13
/priority important-soon
/kind cleanup
/sig release
/sig cluster-lifecycle
/sig gcp
These jobs seem to be ownerless, and it's unclear to me what the solution here is, which is why I'm opening an issue. Options:
Which upgrade / downgrade versions are we testing? If all of the versions in the current test matrix can use the same minor version of etcd, then maybe we can just remove this TODO and the flag altogether.
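For illustration only, here is a rough Go sketch of that check; it is not anything in test-infra, and the k8s-to-etcd version table is made up for the example (the real values live in each release's cluster scripts):

```go
// A hypothetical sketch of the check suggested above: if every Kubernetes
// version in the test matrix ships the same etcd minor version, the
// TEST_ETCD_VERSION pin could simply be dropped.
package main

import (
	"fmt"
	"strings"
)

// defaultEtcd is an illustrative mapping only; the real source of truth is
// each Kubernetes release's cluster scripts.
var defaultEtcd = map[string]string{
	"1.11": "3.2.18",
	"1.12": "3.2.24",
	"1.13": "3.2.24",
}

// minor returns "major.minor" from a full version string, e.g. "3.2".
func minor(v string) string {
	parts := strings.SplitN(v, ".", 3)
	if len(parts) < 2 {
		return v
	}
	return parts[0] + "." + parts[1]
}

func main() {
	seen := map[string]bool{}
	for k8s, etcd := range defaultEtcd {
		seen[minor(etcd)] = true
		fmt.Printf("k8s %s -> etcd %s (minor %s)\n", k8s, etcd, minor(etcd))
	}
	if len(seen) == 1 {
		fmt.Println("all versions share one etcd minor; the pin looks unnecessary")
	} else {
		fmt.Println("multiple etcd minors in the matrix; the pin (or a smarter fix) is still needed")
	}
}
```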
/milestone v1.14
I question the viability of never upgrading minor versions of etcd again. That said, perhaps this is a thing to be punted and revisited with etcdadm. I'd like us to decide one way or the other this release cycle.
/area jobs
/assign @krousey @jpbetz
to get your attention, ref: https://github.com/kubernetes/test-infra/pull/10225#issuecomment-441935814
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
/milestone clear
I was wondering if the right strategy for upgrade/downgrade tests would be to pick the etcd version supported by the lower k8s version. This strategy would be more accurate and manageable than pinning the version in the config file.
@wenjiaswe @spiffxp
Just talked to @imkin offline; I agree it would be a better approach if the upgrade/downgrade test could fetch the etcd version of the lower k8s version in the upgrade/downgrade pair and use that in the test. Otherwise, there is no guarantee that the pinned old etcd version would work with all supported k8s versions, right?
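A minimal Go sketch of that approach, assuming a hypothetical lookup table from Kubernetes minor version to its default etcd version; nothing here is an existing test-infra tool:

```go
// Given the two Kubernetes versions in an upgrade/downgrade pair, look up
// the etcd version shipped with the lower one and use that as
// TEST_ETCD_VERSION, so etcd never has to downgrade a minor version.
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// etcdFor maps a Kubernetes minor version to its default etcd version.
// Illustrative values only; the real source of truth is each release's
// cluster scripts.
var etcdFor = map[string]string{
	"1.8": "3.0.17",
	"1.9": "3.1.11",
}

// lessMinor reports whether k8s version a is lower than b, comparing
// "major.minor" numerically.
func lessMinor(a, b string) bool {
	pa, pb := strings.SplitN(a, ".", 2), strings.SplitN(b, ".", 2)
	amaj, _ := strconv.Atoi(pa[0])
	bmaj, _ := strconv.Atoi(pb[0])
	if amaj != bmaj {
		return amaj < bmaj
	}
	amin, _ := strconv.Atoi(pa[1])
	bmin, _ := strconv.Atoi(pb[1])
	return amin < bmin
}

// etcdForPair returns the etcd version of the lower Kubernetes version in
// the upgrade/downgrade pair.
func etcdForPair(from, to string) (string, error) {
	lower := from
	if lessMinor(to, from) {
		lower = to
	}
	v, ok := etcdFor[lower]
	if !ok {
		return "", fmt.Errorf("no etcd version known for k8s %s", lower)
	}
	return v, nil
}

func main() {
	v, err := etcdForPair("1.9", "1.8") // a 1.9 -> 1.8 downgrade job
	if err != nil {
		panic(err)
	}
	fmt.Printf("--env=TEST_ETCD_VERSION=%s\n", v)
}
```

For a 1.9 -> 1.8 downgrade job this prints `--env=TEST_ETCD_VERSION=3.0.17`, matching the value currently hard-coded in the config.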
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Looks like this is still true:
- https://github.com/kubernetes/test-infra/blob/dc70aede1650507b99c05146163343b1d4914d93/config/jobs/kubernetes/sig-cloud-provider/gcp/gpu/gcp-gpu-upgrade-downgrade.yaml#L198
- https://github.com/kubernetes/test-infra/blob/dc70aede1650507b99c05146163343b1d4914d93/config/jobs/kubernetes/sig-cloud-provider/gcp/gpu/gcp-gpu-upgrade-downgrade.yaml#L229
- https://github.com/kubernetes/test-infra/blob/dc70aede1650507b99c05146163343b1d4914d93/config/jobs/kubernetes/sig-cloud-provider/gcp/upgrade-gce.yaml#L334
- https://github.com/kubernetes/test-infra/blob/dc70aede1650507b99c05146163343b1d4914d93/config/jobs/kubernetes/sig-cloud-provider/gcp/upgrade-gce.yaml#L364
Do we have anyone to work on this?
/remove-lifecycle rotten
/milestone v1.18
/help
@justaugustus:
This request has been marked as needing help from a contributor.
Please ensure the request meets the requirements listed here.
If this request no longer meets these requirements, the label can be removed
by commenting with the /remove-help command.
In response to this:
Looks like this is still true:
- https://github.com/kubernetes/test-infra/blob/dc70aede1650507b99c05146163343b1d4914d93/config/jobs/kubernetes/sig-cloud-provider/gcp/gpu/gcp-gpu-upgrade-downgrade.yaml#L198
- https://github.com/kubernetes/test-infra/blob/dc70aede1650507b99c05146163343b1d4914d93/config/jobs/kubernetes/sig-cloud-provider/gcp/gpu/gcp-gpu-upgrade-downgrade.yaml#L229
- https://github.com/kubernetes/test-infra/blob/dc70aede1650507b99c05146163343b1d4914d93/config/jobs/kubernetes/sig-cloud-provider/gcp/upgrade-gce.yaml#L334
- https://github.com/kubernetes/test-infra/blob/dc70aede1650507b99c05146163343b1d4914d93/config/jobs/kubernetes/sig-cloud-provider/gcp/upgrade-gce.yaml#L364
Do we have anyone to work on this?
/remove-lifecycle rotten
/milestone v1.18
/help
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-priority important-soon
/priority critical-urgent
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.