Charts: ReplicaSet not deleted when running helm upgrade

Created on 25 Jan 2018 · 3 comments · Source: helm/charts

Is this a request for help?: No

Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT

Version of Helm and Kubernetes:

kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.3", GitCommit:"f0efb3cb883751c5ffdbe6d515f3cb4fbe7b7acd", GitTreeState:"clean", BuildDate:"2017-11-08T18:39:33Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.3-rancher3", GitCommit:"772c4c54e1f4ae7fc6f63a8e1ecd9fe616268e16", GitTreeState:"clean", BuildDate:"2017-11-27T19:51:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

helm version
Client: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.7.2", GitCommit:"8478fb4fc723885b155c924d1c8c410b7a9444e6", GitTreeState:"clean"}

Which chart:
incubator/vault

What happened:
The old ReplicaSet is not deleted when running helm upgrade.

What you expected to happen:
The old ReplicaSet to be deleted.

How to reproduce it (as minimally and precisely as possible):

helm upgrade -f values.yaml incubator/vault
helm upgrade -f values.yaml wondering-fish incubator/vault

Anything else we need to know:
The pods are deleted, but the old ReplicaSet remains, showing green with 0/0 pods. A new ReplicaSet is created with the replica count set in values.yaml.
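
For reference, the leftover ReplicaSets can be listed with kubectl; the label selector and names below are illustrative, not taken from the actual cluster:

kubectl get rs -l release=wondering-fish
# NAME                         DESIRED   CURRENT   READY     AGE
# wondering-fish-vault-12345   0         0         0         2d    <- old revision, left behind after the upgrade
# wondering-fish-vault-67890   3         3         3         5m    <- new revision created by the upgrade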



All 3 comments

Hello @mickengland;

This sounds like standard Kubernetes behaviour: a Deployment keeps its old ReplicaSets around for rollbacks, up to spec.revisionHistoryLimit (default 10). See https://kubernetes.io/docs/concepts/workloads/controllers/deployment/#revision-history-limit

It would be possible for the chart to set the revision history limit as described in the link above, but I feel this sort of task is better delegated to the cluster administrator or some other control loop outside Helm.
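
If you do want to cap how many old ReplicaSets are retained, one option (a sketch; the Deployment name wondering-fish-vault is assumed, not confirmed by the chart) is to patch the Deployment directly:

kubectl patch deployment wondering-fish-vault -p '{"spec":{"revisionHistoryLimit":2}}'
# keeps only the two most recent old ReplicaSets; the Kubernetes default is 10

Setting the limit to 0 removes old ReplicaSets immediately, at the cost of losing kubectl rollout undo for those revisions (Helm keeps its own release history, so helm rollback is unaffected).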

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

This issue is being automatically closed due to inactivity.
