What steps did you take and what happened:
```shell
kind create cluster
./cmd/clusterctl/hack/local-overrides.py
clusterctl init --core cluster-api:v0.3.0 --bootstrap kubeadm-bootstrap:v0.3.0 --infrastructure aws:v0.5.0
clusterctl init --core cluster-api:v0.3.0 --bootstrap kubeadm-bootstrap:v0.3.0 --infrastructure aws:v0.5.0 -v 4 --force
```

We get the following error:
```
Updating: /v1, Kind=Service, capi-system/capi-webhook-service
Throttling request took 192.94293ms, request: PUT:https://127.0.0.1:59592/api/v1/namespaces/capi-system/services/capi-webhook-service
Error: failed to update provider object /v1, Kind=Service, capi-system/capi-webhook-service: Service "capi-webhook-service" is invalid: spec.clusterIP: Invalid value: "": field is immutable
```
This error makes sense on its own, because according to the docs `spec.clusterIP` is immutable and cannot be updated.
What did you expect to happen:
I expected the second update to succeed.
Environment:
- Kubernetes version (use `kubectl version`):

/kind bug
/area clusterctl
@wfernandes
IMO, the cleanest approach to re-install is to delete* first and then re-install, not to override using `init --force` (*unfortunately, delete is still in the pipeline).
NB: when clusterctl detects that you are trying to install a provider on top of an existing one (same provider/same namespace), it warns you with the message `Installing provider %q can lead to a non-functioning management cluster (you can use --force to ignore this error): There is already an instance of the %q provider installed in the %q namespace`
If you choose to force, clusterctl tries to do an upgrade instead of a create. TBH I don't know if it makes sense to try to make this smarter, given there is (or soon will be) a cleaner alternative: delete, then re-install.
@vincepri @ncdc opinions ^^^
I agree with what you wrote @fabriziopandini. What if we also get rid of --force?
I'm ok with that, but we should consider that clusterctl "blocks" init in the following cases
I'm ok with blocking (without the possibility to force in cases 1 and 2). We should probably drop 3, because after the recent changes it is no longer a problem.
Agreed.
/lifecycle active