When I run `clusterctl create cluster`, I get an error during the update cluster object endpoint step. Here is the error message:
F1011 07:16:50.831072 10601 create_cluster.go:64] unable to update bootstrap cluster endpoint: unable to update cluster endpoint: the server could not find the requested resource (put clusters.cluster.k8s.io test1)
I'm fairly sure this error occurred at https://github.com/kubernetes-sigs/cluster-api/blob/master/cmd/clusterctl/clusterdeployer/clusterclient/clusterclient.go#L404.
This is most likely a provider-side problem, but I can't find the root cause and am looking for help. Thanks a lot.
/kind bug
Copying from slack...
We had a similar problem: after the cluster gets pivoted, and we've not yet rebuilt the status fields on the cluster, the endpoint isn't available for clusterctl to grab.
We've hacked around it in https://github.com/kubernetes-sigs/cluster-api-provider-aws/pull/207, but we should make this retryable (I think there was consensus on this in the meeting on 10/10/2018).
@randomvariable Hi, I think I found the root cause.
It seems Kubernetes didn't support CRD subresources (status/scale) until 1.12, and the clusterctl bootstrap cluster is v1.10 by default. I don't know whether other people have also encountered this problem.
https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#subresources
/assign
@jessicaochen @roberthbailey Can you help me take a look, thanks.
Strange. I didn't see this problem using the GCP provider (using minikube with a 1.10 bootstrap cluster).
This is one of the places where I didn't swap in the dynamic client during the CRD migration. I'm curious if using the dynamic client helps here, or if the behavior would be the same as what you are seeing with the generated client.
@roberthbailey Thanks for replying. I tested this by curling `/apis/cluster.k8s.io/v1alpha1/namespaces/default/clusters/test1/status` on both 1.12 and 1.10, and only got a result on 1.12.
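For anyone who wants to reproduce the check, a sketch of the probe I ran (assuming `kubectl proxy` on its default port, and a cluster object named `test1` in the `default` namespace):

```shell
# Proxy the API server locally (default port 8001)
kubectl proxy &

# Probe the status subresource of the Cluster CRD.
# On a 1.10 cluster without the feature gate this returns NotFound;
# on 1.12 it returns the Cluster object.
curl http://127.0.0.1:8001/apis/cluster.k8s.io/v1alpha1/namespaces/default/clusters/test1/status
```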
But since it worked for the GCP provider, maybe I should do more testing... :slightly_frowning_face:
Hi @roberthbailey, we're now pretty sure the OpenStack provider works well on a 1.12 cluster. Is there any other reason that could lead to this problem?
I've tested this with 1.10 and 1.12 and I can confirm that PUT requests to a status/scale subresource fail on 1.10.
I'm not familiar with this part of the code/Kubernetes, but is there a way we can work around this without increasing the minimum required version?
> is there a way we can workaround this without increasing the minimum required version?
CRD subresources are available in 1.10, but only via enabling a feature gate. They are enabled by default from 1.11.
@nikhita thanks for getting back to us. Would `--feature-gates=CustomResourceSubresources=true` be enough?
> Would `--feature-gates=CustomResourceSubresources=true` be enough?
This + adding the `.spec.subresources.status` field in the CRD would work.
An example CRD with those fields can be found here: https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#subresources
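For reference, a minimal sketch of what enabling the status subresource on the Cluster CRD could look like (the `subresources` field is from the linked docs; the rest of the spec below is illustrative, not the actual cluster-api manifest):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: clusters.cluster.k8s.io
spec:
  group: cluster.k8s.io
  version: v1alpha1
  names:
    kind: Cluster
    plural: clusters
  scope: Namespaced
  subresources:
    # Exposes /status as a subresource; on 1.10 this also requires the
    # CustomResourceSubresources feature gate (enabled by default from 1.11)
    status: {}
```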
@nikhita Thanks a lot. :+1:
But clusterctl can't create the bootstrap cluster with the feature gate `CustomResourceSubresources=true`, right? So we may still need a PR to fix this issue.
> So maybe still need a pr to fix this issue.
As long as that flag is forward compatible, that's fine. I'll be updating the AWS docs to tell people to set minikube to launch with kubernetes version 1.12.x
> As long as that flag is forward compatible, that's fine. I'll be updating the AWS docs to tell people to set minikube to launch with kubernetes version 1.12.x
@Lion-Wei to be honest, I'd rather do what @randomvariable is going to do than enabling the feature on 1.10. I tested it with 1.12 and things seem to work(ish)
Looks like this is resolved by using minikube with k8s at 1.12.
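For reference, the two workarounds discussed above might look like this with minikube (the flag names are from the minikube docs; the exact patch versions are illustrative):

```shell
# Option 1 (preferred above): bootstrap with Kubernetes 1.12,
# where CRD subresources are on by default
minikube start --kubernetes-version v1.12.0

# Option 2: stay on 1.10 but enable the feature gate on the API server
minikube start --kubernetes-version v1.10.0 \
  --feature-gates=CustomResourceSubresources=true
```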