Is this a request for help?:
Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT
Version of Helm and Kubernetes:
Helm: 2.6.1
Kubernetes: 1.7.4
Which chart:
redis
What happened:
When deploying redis, the logic in secrets.yaml causes a password to be randomly generated if a password was not manually defined. When redeploying an existing redis instance, the redis chart generates a new password, which updates the value of the secret, but does not update the password redis uses to control authentication.
What you expected to happen:
When doing a deploy of redis, if the secret already exists, redis should use that value rather than generating a new value.
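To make the failure mode concrete, here is a hypothetical sketch of the kind of template logic described above (this is an illustration, not the chart's actual secrets.yaml; the template name and values keys are assumptions):

```yaml
# Hypothetical secrets.yaml sketch illustrating the reported behavior.
apiVersion: v1
kind: Secret
metadata:
  name: {{ template "redis.fullname" . }}
type: Opaque
data:
  {{- if .Values.password }}
  redis-password: {{ .Values.password | b64enc | quote }}
  {{- else }}
  # randAlphaNum is evaluated on EVERY render, including upgrades,
  # so a new value is written to the Secret each time.
  redis-password: {{ randAlphaNum 10 | b64enc | quote }}
  {{- end }}
```

Note that in Helm 3 a template can check for an existing Secret with the `lookup` function (`lookup "v1" "Secret" .Release.Namespace "name"`) and reuse its value; at the Helm 2.6.1 version reported here, no such function existed, which is why the chart cannot easily implement the expected behavior.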
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
Happens to me too when trying to helm upgrade a Sentry installation.
Is there any workaround to avoid this?
@ctrom https://github.com/kubernetes/helm/blob/master/docs/charts_tips_and_tricks.md#automatically-roll-deployments-when-configmaps-or-secrets-change Will this be helpful?
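For reference, the linked tip works by annotating the pod template with a checksum of the rendered secret, so the Deployment rolls whenever the secret's contents change (sketch adapted from the linked doc; the template path is an assumption and depends on the chart's file layout):

```yaml
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
```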
@navinkbe7 That doesn't fix the issue: Redis is (generally) an in-memory cache, so rolling the deployment when the password is updated would clear the cache. It would "fix" the password mismatch, but clearing the cache shouldn't happen just because a password changed.
@ctrom Redis is a persistent memory cache... shouldn't rolling the deployment start a new Redis pod that uses the same persistent volume (thus, no data loss)?
@ekampf That may end up being how we address this. Right now (due to the nature of the data we are storing in Redis) we are not doing any sort of persistence. Not enabling persistence has allowed us to reduce costs and operational overhead and the hope was that we would avoid rolling the deployment frequently and avoid having to prime the cache at least as much as possible.
All that aside, the simple fact is this chart does not support upgrades and I view that as the real issue here. Whatever practices are used for persistence, performing an upgrade should not break an active installation.
Should you happen to accidentally lock your services out of Redis here's how to restore access.
kubectl -n mynamespace exec -ti sentry-redis-XXX bash
grep requirepass /opt/bitnami/redis/etc/redis.conf
This is similar to #2636
In the redis chart, persistence applies to both data and config; see #4965 for a workaround for this problem.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
/remove-lifecycle stale
@Miouge1 I tried the workaround, if I understand correctly, that is to set
path: /bitnami/redis/data
subPath: redis/data
in persistence so that the config is not stored in the volume.
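Expressed as chart values, that workaround would look roughly like this (a sketch; the surrounding key structure is an assumption and varies by chart version):

```yaml
persistence:
  enabled: true
  path: /bitnami/redis/data
  subPath: redis/data
```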
I'm still having an issue with the password changing with each upgrade though.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Any further update will cause the issue/pull request to no longer be considered stale. Thank you for your contributions.
Related to https://github.com/helm/helm/issues/3053
You need to explicitly declare redis.password to work around this bug.
Setting a concrete Secret in Kubernetes should also work (not tested yet).
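A sketch of the explicit-password workaround via a values file (the key path assumes the Sentry chart forwards values to its redis subchart; the release and chart names are placeholders):

```yaml
# values.yaml passed to the sentry chart
redis:
  password: "choose-a-stable-password"
```

Applied with something like `helm upgrade my-sentry stable/sentry -f values.yaml`, so the same password is rendered on every upgrade instead of a fresh random one.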
@Behoston are you still seeing this issue in the latest version of the stable/redis chart?
@jlegrone Yup, today I tried to install Sentry and verify that we can upgrade it. Each time I do the upgrade, Sentry gets a brand-new random password for Redis, but the Redis password itself doesn't change.
@Behoston it looks like the Sentry chart is currently using v3.8.1 of the redis chart:
This was patched in version 3.9.0:
https://github.com/helm/charts/pull/7619/files#diff-69232e73032d3153bb76c53271f3ff58L2
Please file a new issue (or feel free to open a pr!) against the sentry chart to upgrade to the latest version of redis. This should resolve the issue you're seeing. 😄