Controller Runtime is getting ready to release a new version based on Kubernetes and client-go v1.18. Once that gets released, we should update it in this repository as well.
This version bump will come with some breaking changes, and we should discuss whether to get it into v0.3.5 and how to communicate the possible breaking changes to infrastructure providers.
/area dependency
/milestone v0.3.x
Also might be good to add a policy document on when we should allow such dependency updates.
/cc
Tried to update this locally, got the following compilation errors:
# sigs.k8s.io/cluster-api/third_party/kubernetes-drain
third_party/kubernetes-drain/cordon.go:90:24: not enough arguments in call to client.Patch
have (string, types.PatchType, []byte)
want (context.Context, string, types.PatchType, []byte, "k8s.io/apimachinery/pkg/apis/meta/v1".PatchOptions, ...string)
third_party/kubernetes-drain/cordon.go:92:25: not enough arguments in call to client.Update
have (*"k8s.io/api/core/v1".Node)
want (context.Context, *"k8s.io/api/core/v1".Node, "k8s.io/apimachinery/pkg/apis/meta/v1".UpdateOptions)
third_party/kubernetes-drain/drain.go:133:53: not enough arguments in call to d.Client.CoreV1().Pods(pod.ObjectMeta.Namespace).Delete
have (string, *"k8s.io/apimachinery/pkg/apis/meta/v1".DeleteOptions)
want (context.Context, string, "k8s.io/apimachinery/pkg/apis/meta/v1".DeleteOptions)
third_party/kubernetes-drain/drain.go:150:69: not enough arguments in call to d.Client.PolicyV1beta1().Evictions(eviction.ObjectMeta.Namespace).Evict
have (*"k8s.io/api/policy/v1beta1".Eviction)
want (context.Context, *"k8s.io/api/policy/v1beta1".Eviction)
third_party/kubernetes-drain/drain.go:163:66: not enough arguments in call to d.Client.CoreV1().Pods("k8s.io/apimachinery/pkg/apis/meta/v1".NamespaceAll).List
have ("k8s.io/apimachinery/pkg/apis/meta/v1".ListOptions)
want (context.Context, "k8s.io/apimachinery/pkg/apis/meta/v1".ListOptions)
third_party/kubernetes-drain/drain.go:208:47: not enough arguments in call to d.Client.CoreV1().Pods(namespace).Get
have (string, "k8s.io/apimachinery/pkg/apis/meta/v1".GetOptions)
want (context.Context, string, "k8s.io/apimachinery/pkg/apis/meta/v1".GetOptions)
third_party/kubernetes-drain/filters.go:175:62: not enough arguments in call to d.Client.AppsV1().DaemonSets(pod.ObjectMeta.Namespace).Get
have (string, "k8s.io/apimachinery/pkg/apis/meta/v1".GetOptions)
want (context.Context, string, "k8s.io/apimachinery/pkg/apis/meta/v1".GetOptions)
# sigs.k8s.io/cluster-api/test/framework
test/framework/deployment_helpers.go:116:109: not enough arguments in call to input.ClientSet.CoreV1().Pods(input.Deployment.ObjectMeta.Namespace).GetLogs(pod.ObjectMeta.Name, opts).Stream
have ()
want (context.Context)
I've been working on this a bit on https://github.com/kubernetes-sigs/cluster-api/pull/2877, just not against the cut release. Would be happy to take this issue on as part of that work.
ok sounds good, can we do a separate PR for it?
/assign @detiber
we should also update controller-tools to v0.3.0
Will definitely do a separate PR once I get things working and disentangled.
/lifecycle active
/milestone v0.4.x
@vincepri: The provided milestone is not valid for this repository. Milestones in this repository: [Next, v0.2.x, v0.3.7, v0.3.x, v0.4.0]
Use /milestone clear to clear the milestone.
In response to this:
/milestone v0.4.x
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/milestone v0.4.0
Hi :wave:!
Other than the context-adds in client-go v1.18, is there anything technically blocking the upgrade to v0.6.0?
Or are we just waiting as we haven't cut the release-0.3 branch yet to enable us to do breaking v0.4.x stuff on master?
@luxas yes, there were additional "breaking" changes to the controller-runtime API that would impact downstream consumers of the cluster-api API.
I believe the plan is to branch release-0.3 and start allowing breaking changes shortly after the upcoming v0.3.7 release, though I'm not sure whether we're planning to wait until after we've finished v1alpha4/v0.4.0 planning.
+1 to what @detiber mentioned, we're also hoping to get more changes in place in controller-runtime before adopting a new version. For example https://github.com/kubernetes-sigs/controller-runtime/issues/801 and updating its dependencies to Kubernetes v1.19 once it releases. It might take a bit though, given that we haven't even started v1alpha4 planning yet.
Thanks for the information! The context issue you linked is super interesting, maybe I can help getting some of those things forward. Let's see.
/close
Closing for now, we'll update to v0.7.x for v0.4.0 once it releases
@vincepri: Closing this issue.
In response to this:
/close
Closing for now, we'll update to v0.7.x once it releases