Problem:
Patch commands fail in the _Rolling update_ section because the patched values are identical to the values already set on the object.
Example:
$ kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
statefulset "web" not patched
$ kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"k8s.gcr.io/nginx-slim:0.8"}]'
statefulset "web" not patched
If we check updateStrategy.type in the web StatefulSet, we can see it is already set to the value we are patching in:
$ kubectl get sts web --output=yaml
...
updateStrategy:
  rollingUpdate:
    partition: 0
  type: RollingUpdate
...
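The first command appears to fail because a strategic merge patch that sets a field to its current value produces an object identical to the existing one, and kubectl reports "not patched" when nothing changes. A minimal sketch of that idea in plain Python (a simplified overlay merge, not kubectl's actual implementation):

```python
# Simplified illustration: merging a patch whose values already match the
# object yields an identical object, so there is nothing to patch.
current = {"spec": {"updateStrategy": {"type": "RollingUpdate",
                                       "rollingUpdate": {"partition": 0}}}}
patch = {"spec": {"updateStrategy": {"type": "RollingUpdate"}}}

def merge(obj, patch):
    """Very simplified strategic-merge: recursively overlay dict values."""
    out = dict(obj)
    for key, value in patch.items():
        if isinstance(value, dict) and isinstance(out.get(key), dict):
            out[key] = merge(out[key], value)
        else:
            out[key] = value
    return out

merged = merge(current, patch)
print("patched" if merged != current else "not patched")  # prints: not patched
```

A patch carrying a different value (e.g. `{"type": "OnDelete"}`) would produce a merged object that differs from the original, which is when kubectl would report "patched".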
The second patch command fails because the Pods' current image is already k8s.gcr.io/nginx-slim:0.8, as we can see below:
$ kubectl describe pod
...
Containers:
  nginx:
    Container ID:  docker://5611c031f0bef3514dd29ca417f1c55b7d42cdd05d76650c5a1bd81a70dc5c1a
    Image:         k8s.gcr.io/nginx-slim:0.8
    Image ID:      docker-pullable://k8s.gcr.io/nginx-slim@sha256:8b4501fe0fe221df663c22e16539f399e89594552f400408303c42f3dd8d0e52
...
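The second case is the same no-op effect with a JSON patch: an RFC 6902 "replace" op whose value equals the current value leaves the document unchanged. A simplified sketch (dict/list path walking only, not kubectl's actual patch code):

```python
import copy

# Minimal StatefulSet-like document for illustration.
doc = {"spec": {"template": {"spec": {"containers": [
    {"name": "nginx", "image": "k8s.gcr.io/nginx-slim:0.8"}]}}}}

def json_patch_replace(doc, path, value):
    """Apply a single JSON Patch 'replace' op (simplified: dicts and lists)."""
    out = copy.deepcopy(doc)
    parts = path.strip("/").split("/")
    target = out
    for part in parts[:-1]:
        target = target[int(part)] if isinstance(target, list) else target[part]
    last = parts[-1]
    if isinstance(target, list):
        target[int(last)] = value
    else:
        target[last] = value
    return out

new = json_patch_replace(doc, "/spec/template/spec/containers/0/image",
                         "k8s.gcr.io/nginx-slim:0.8")
print("patched" if new != doc else "not patched")  # prints: not patched
```

Replacing with a value that differs from the current image (e.g. `:0.9`) changes the document, which matches the fix proposed below.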
Proposed Solution:
For the first case (patching updateStrategy.type) I don't have a clear solution.
For the second, I would simply change the image value to version 0.9. Example:
$ kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"k8s.gcr.io/nginx-slim:0.9"}]'
statefulset "web" patched
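For completeness, the same simplified sketch shows why this version of the command succeeds: replacing the image with a different value changes the object, so there is something to patch.

```python
import copy

# Same minimal StatefulSet-like document as the illustration above.
doc = {"spec": {"template": {"spec": {"containers": [
    {"name": "nginx", "image": "k8s.gcr.io/nginx-slim:0.8"}]}}}}

new = copy.deepcopy(doc)
new["spec"]["template"]["spec"]["containers"][0]["image"] = \
    "k8s.gcr.io/nginx-slim:0.9"
print("patched" if new != doc else "not patched")  # prints: patched
```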
Page to Update:
http://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.2", GitCommit:"5fa2db2bd46ac79e5e00a4e6ed24191080aa463b", GitTreeState:"clean", BuildDate:"2018-01-18T21:12:46Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"0b9efaeb34a2fc51ff8e4d34ad9bc6375459c4a4", GitTreeState:"clean", BuildDate:"2017-11-29T22:43:34Z", GoVersion:"go1.9.1", Compiler:"gc", Platform:"linux/amd64"}
I can open a PR to fix the second patch command, but I don't know how to fix the first one.
Regards
Could it be because I'm using server version 1.8 instead of the latest (currently 1.9)?
@rgo I don't think it's a version issue.
I launched a cluster using local-up-cluster.sh of Kubernetes v1.8.0 and used the yaml in this page. The results are as follows (both the client and server version are v1.8.0):
# kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
statefulset "web" not patched
# kubectl patch statefulset web --type='json' -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/image", "value":"k8s.gcr.io/nginx-slim:0.8"}]'
statefulset "web" patched
In v1.8.0 the image is gcr.io/google_containers/nginx-slim:0.8, so the patch of image k8s.gcr.io/nginx-slim:0.8 works.
And I tested with Kubernetes v1.9.2 (both the client and server version are v1.9.2) and used the yaml in this page. The results are the same as what you got: both commands report not patched.
@islinwb if you agree, I'm going to open a PR changing the image source back to what it was in the v1.8.0 documentation, OK?
Have you got any idea how to fix the other "not patched" case?
Thanks
@rgo 👋 Thanks in advance for opening a PR! Please reference this issue in the OP so we can be sure to track its progress.
@rgo fix it, pls.
No idea. I tried to change the updateStrategy to OnDelete, but it reported an error.
@islinwb Anyone can submit a PR to the docs, no need for admin status.
@zacharysarah Sorry I didn't make it clear.
@rgo just submit your PRs, no need to ask. :smiley:
@islinwb It's OK! I'm encouraging you to go for it! 😄
For this pending case:
kubectl patch statefulset web -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"}}}'
statefulset "web" not patched
Maybe we can add a note after the command:
_Note:_ If patching the update strategy to RollingUpdate fails, it may be because the update strategy is already set to RollingUpdate.
What do you think (~format and~ content)?
Update: For the _Note_ format, I've seen an example in the tutorial "Example: Deploying WordPress and MySQL with Persistent Volumes". I'll mimic it if the content is valid.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close