As per https://github.com/kubernetes-sigs/cluster-api/pull/3364#discussion_r472267814
This code was slightly refactored in #3364, but in the long term we should get rid of this or find an alternative that doesn't require manual code changes at every cert-manager release bump
/kind cleanup
/milestone Next
/area clusterctl
I feel like this issue and https://github.com/kubernetes-sigs/cluster-api/issues/3491 _could be_ intertwined with each other.
Could this be a flag in clusterctl? Or are there other ideas?
I think a possible approach is to derive the hard-coded info (version, manifest-hash) from the embedded manifest, so that when you upgrade the manifest there is no longer any need for manual code changes at every cert-manager release bump.
As part of the clusterctl version check #3484, we added a version.yaml file that could be used to hold the information that was read from the embedded cert-manager manifest if required for future use.
Also the version.yaml file is not meant to be modified by the user so it would be a good place to store it if necessary.
@wfernandes we don't have a version.yaml in the releases
https://github.com/kubernetes-sigs/cluster-api/releases
and this line assumes that if the file is not there, that's ok, and recreates it later:
https://github.com/kubernetes-sigs/cluster-api/blob/master/cmd/clusterctl/cmd/version_checker.go#L188
But the info is hard-coded, so should we put the version.yaml into the release, then download it
from the release and use it? Is that the desired way to go?
@jichenjc The version checker is run after any clusterctl command is executed. So if the file doesn't exist in the beginning, it will be created after.
However, after thinking about it some more, it probably doesn't make sense to store this hash, because the hash will probably be used only when ApplyUpgrade is called. So maybe it just makes sense to compute the hash every time, since I doubt clusterctl apply upgrade will be called frequently. That is, storing the hash probably doesn't save us very much.
I don't think putting the version.yaml file in the release is necessary because it is a file that is created by clusterctl and used to notify the users if they are using an older version of clusterctl.
Hope this helps.
@fabriziopandini @vincepri any comments on the info provided by @wfernandes above?
it seems there is nothing we can do now?
I agree with @wfernandes that it does not make sense to store additional info in version.yaml; instead, we should derive the hard-coded info (version, manifest-hash) from the embedded manifest whenever necessary.
so we need to find some other place to store such info? e.g. hash.yaml?
@jichenjc if I got the original intent of the comment right, it was to get rid of the following consts in the code and instead derive that info from the embedded manifest.
As per @wfernandes' comment, the idea of storing that info somewhere else does not really make sense.
@fabriziopandini thanks, so the suggestion is to derive that info from the embedded manifest.
I don't know what "embedded manifest" means, so I am working on learning how to use it. If you happen to know, please point me to it, thanks~
The embedded manifest is the manifest under cmd/clusterctl/config/assets, which is then converted into bindata and shipped in the binary.
The generated bindata file is cmd/clusterctl/config/zz_generated.bindata.go, and an example of how it is used is:
https://github.com/kubernetes-sigs/cluster-api/blob/0154e3dba372ed49324b8664e52828334f15044d/cmd/clusterctl/client/cluster/cert_manager.go#L331-L334
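To make the proposal concrete, here is a minimal sketch of deriving the two hard-coded values from the embedded manifest instead of from consts. The manifest bytes, the `helm.sh/chart` label format, and the helper names are assumptions for illustration; in clusterctl the bytes would come from the generated bindata `Asset` function, and the real manifest may expose the version through a different label.

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"regexp"
)

// certManagerManifest stands in for the embedded manifest; in clusterctl this
// would be returned by the generated bindata Asset(...) function for
// cmd/clusterctl/config/assets/cert-manager.yaml.
var certManagerManifest = []byte(`apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    helm.sh/chart: cert-manager-v0.16.1
`)

// deriveVersion extracts the cert-manager version from a chart label inside
// the manifest, replacing a hard-coded version const (hypothetical label
// format, shown for illustration only).
func deriveVersion(manifest []byte) string {
	re := regexp.MustCompile(`cert-manager-(v\d+\.\d+\.\d+)`)
	m := re.FindSubmatch(manifest)
	if m == nil {
		return ""
	}
	return string(m[1])
}

// deriveHash computes a digest of the embedded manifest bytes, replacing a
// hard-coded manifest-hash const; it changes automatically whenever the
// embedded manifest is bumped.
func deriveHash(manifest []byte) string {
	return fmt.Sprintf("%x", sha256.Sum256(manifest))
}

func main() {
	fmt.Println("version:", deriveVersion(certManagerManifest))
	fmt.Println("hash:", deriveHash(certManagerManifest))
}
```

With something like this, a cert-manager release bump would only require regenerating the embedded manifest, and both values would follow automatically.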
@jichenjc if I got the original intent of the comment right, it was to get rid of the following consts in the code and instead derive that info from the embedded manifest.
+1, we've also discussed that we should be able to remove the embedded manifests (bindata), and instead use the upstream cert-manager repository to retrieve the assets and install them. If there are any patches that need to be performed, those should be in the form of json-patch
If we are including the removal of the embedded manifests in the scope, I suggest taking some more time before starting to implement this issue: the current concept of a repository used for providers can't be easily adapted to cert-manager (which is not a provider), and we need something similar to support air-gapped environments.
Let's focus this issue on removing the embedded constants, rather than restructuring how we manage cert-manager, what do you think?
@vincepri @fabriziopandini do you think creating a ConfigMap would solve the problem?
Basically, define a ConfigMap at cmd/clusterctl/config/assets/cert-manager.yaml, then
parse it during startup and replace the hard-coded info with the ConfigMap info?
@jichenjc we basically need two pieces of information: the cert-manager version and the hash.
What about trying to compute those info from the embedded cert-manager manifest?
/close
given https://github.com/kubernetes-sigs/cluster-api/pull/3918 merged
@fabriziopandini: Closing this issue.