1. What kops version are you running? The command kops version will display this information.
Version 1.10.0 (git-8b52ea6d1)
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.
kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-18T11:37:06Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.6", GitCommit:"a21fdbd78dde8f5447f5f6c331f7eb6f80bd684e", GitTreeState:"clean", BuildDate:"2018-07-26T10:04:08Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
Create a cluster, then make a change to the network provider's configuration in the cluster manifest.
Example changes:
apiVersion: kops/v1alpha2
kind: Cluster
spec:
  ...
  networking:
    amazonvpc:
      # uncomment this after the cluster has been created
      # imageName: 602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:1.1.0
or
apiVersion: kops/v1alpha2
kind: Cluster
spec:
  ...
  networking:
    calico:
      # uncomment this after the cluster has been created
      # logSeverityScreen: WARNING
      crossSubnet: true
      prometheusMetricsEnabled: true
Apply the changes:
kops update cluster --yes
kops rolling-update cluster --yes --force
5. What happened after the commands executed?
The instance groups were rolled, but the network provider's configuration was not updated.
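For example, after the rolling update in the amazonvpc case the CNI DaemonSet still reports the old image. A quick way to check (assuming the amazon-vpc-cni DaemonSet is named aws-node in kube-system, which may differ in your cluster):

# check which image the CNI DaemonSet is actually running
kubectl -n kube-system get daemonset aws-node \
  -o jsonpath='{.spec.template.spec.containers[0].image}'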
6. What did you expect to happen?
The networking provider's DaemonSet should have been updated with the new configuration change.
9. Anything else we need to know?
I'm not sure if this is an issue in protokube or channels.
Here is what I believe is the relevant portion of the protokube logs:
{"log":"I0823 17:56:17.347666 1830 channels.go:31] checking channel: \"s3://KOPS_STATE_STORE/CLUSTER_NAME/addons/bootstrap-channel.yaml\"\n","stream":"stderr","time":"2018-08-23T17:56:17.368337068Z"}
{"log":"I0823 17:56:17.347734 1830 channels.go:45] Running command: channels apply channel s3://KOPS_STATE_STORE/CLUSTER_NAME/addons/bootstrap-channel.yaml --v=4 --yes\n","stream":"stderr","time":"2018-08-23T17:56:17.36836546Z"}
{"log":"I0823 17:56:17.689504 1830 channels.go:34] apply channel output was: I0823 17:56:17.377322 8747 addons.go:38] Loading addons channel from \"s3://KOPS_STATE_STORE/CLUSTER_NAME/addons/bootstrap-channel.yaml\"\n","stream":"stderr","time":"2018-08-23T17:56:17.692491816Z"}
I exec'ed into the protokube container and reran that channels command; it reported that no update was required, even though the DaemonSet is still out of date. I confirmed that the manifest at s3://KOPS_STATE_STORE/CLUSTER_NAME/addons/networking.amazon-vpc-routed-eni/k8s-1.7.yaml has the updated imageName from the first example in 4., but it seems that channels is not applying it.
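Roughly what I ran to re-check it (the docker lookup is just how I found the container; the channels invocation is taken verbatim from the log above, with the state store path redacted):

# on a master: find the protokube container and rerun the same channels command it logs
docker ps | grep protokube
docker exec -it <protokube-container-id> \
  channels apply channel s3://KOPS_STATE_STORE/CLUSTER_NAME/addons/bootstrap-channel.yaml --v=4 --yes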
Related: #4348 #5055
I think it has always been the case that you can't just modify the manifest file in the state store. You have to bump the version number in bootstrapchannelbuilder.go; otherwise the version comparison decides that no update is needed:
https://github.com/kubernetes/kops/blob/5e1a9315d38be65d49688c21a9f51e6f65909b7e/channels/pkg/channels/addon.go#L88
I suppose you could try modifying the bootstrap-channel.yaml file directly, but the next kops update run will overwrite it.
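For context, each addon entry under spec.addons in bootstrap-channel.yaml looks roughly like the sketch below (values here are illustrative, not copied from a real cluster). The comparison linked above keys off the version field, not the contents of the manifest the entry points at:

# illustrative addon entry from bootstrap-channel.yaml (example values)
- name: networking.amazon-vpc-routed-eni
  version: 1.1.0-kops.1
  selector:
    role.kubernetes.io/networking: "1"
  manifest: networking.amazon-vpc-routed-eni/k8s-1.7.yaml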
@buddyledungarees I realize that it's always been the case, but I would argue that this is a poor user experience and might warrant some change in that logic. As a cluster administrator, if I make a change to my ClusterSpec, I would expect the change to be applied to the cluster in an update or rolling-update, but that isn't happening. I don't think bumping a version in source code and recompiling the kops binary is a reasonable solution for this type of scenario.
Perhaps kops could automatically bump an addon's version anytime it is changed via the ClusterSpec? Or always reapply every addon during an update or rolling-update (perhaps with a special command-line argument)?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
I have a similar issue when I try to update the image used by instance groups. I modify the manifest file, apply it, then run update and rolling-update, but the image is not updated.
Is there a workaround for this until that is released?
I've always just edited the Kubernetes resource manually to match the updated ClusterSpec.
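For the amazonvpc example above, that manual edit could look something like this (assuming the CNI DaemonSet and its container are both named aws-node in kube-system; adjust for your setup):

# manually point the CNI DaemonSet at the image set in the ClusterSpec
kubectl -n kube-system set image daemonset/aws-node \
  aws-node=602401143452.dkr.ecr.us-west-2.amazonaws.com/amazon-k8s-cni:1.1.0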
Also @rifelpet, any ideas on how to automate that would be great :)
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
/reopen
Are there any plans for channels to notice that the manifest has changed? A workaround I have been using is to increase the addon version number every time I need to modify the addon manifest.
In newer versions of kops this should work just fine.
In newer versions of kops (I believe 1.14 and up) we check the sha256 of the manifest, so this shouldn't be necessary.
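If I'm reading the newer behavior right, the channel entry now records a hash of the rendered manifest alongside the version, so changing the addon manifest changes the hash and channels re-applies it. A sketch of what such an entry might look like (field name and values here are my assumption, not copied from a real channel file):

# assumed shape of a newer bootstrap-channel.yaml entry: the manifest hash changes
# whenever the rendered addon manifest changes, which triggers a re-apply
- name: networking.amazon-vpc-routed-eni
  version: 1.5.0-kops.1
  manifestHash: <sha256 of networking.amazon-vpc-routed-eni/k8s-1.7.yaml>
  manifest: networking.amazon-vpc-routed-eni/k8s-1.7.yaml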