Is this a request for help?: Yes
Is this a BUG REPORT or FEATURE REQUEST? (choose one): BUG REPORT
Version of Helm and Kubernetes:
Client: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.10.0", GitCommit:"9ad53aac42165a5fadc6c87be0dea6b115f93090", GitTreeState:"clean"}
Client Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.2", GitCommit:"17c77c7898218073f14c8d573582e8d2313dc740", GitTreeState:"clean", BuildDate:"2018-10-30T21:39:16Z", GoVersion:"go1.11.1", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.1", GitCommit:"4ed3216f3ec431b140b1d899130a69fc671678f4", GitTreeState:"clean", BuildDate:"2018-10-05T16:36:14Z", GoVersion:"go1.10.4", Compiler:"gc", Platform:"linux/amd64"}
Which chart:
stable/kong
What happened:
When installing Kong with the ingress controller, there are two migration Jobs; the one named migration-on-upgrade always fails, and thus helm install fails. The migration Job's pod ends with this error:
Error: [postgres error] could not retrieve current migrations: [postgres error] FATAL: password authentication failed for user "kong"
I checked the secret too, and it does contain a password. But when I try to run psql from a pod using the password from that secret, it also fails.
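For reference, this is roughly how I checked the secret and tested the connection (a sketch: the secret name assumes a release named kong, and the key may be postgres-password or postgresql-password depending on the postgresql subchart version):
# Decode the password stored in the PostgreSQL secret
kubectl get secret kong-postgresql -o jsonpath='{.data.postgresql-password}' | base64 --decode
# Try the same credentials from a throwaway pod inside the cluster;
# psql will prompt for the password decoded above
kubectl run psql-check --rm -it --restart=Never --image=postgres:11 -- \
  psql -h kong-postgresql -U kong kong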
What you expected to happen:
This Job should not fail and the installation should work fine. The password in the secret should be correct.
How to reproduce it (as minimally and precisely as possible):
Install the chart with the ingress controller enabled; the first time it works fine, but when the next deployment is done it starts failing. After deleting the Helm release and redeploying, it works fine again.
Anything else we need to know:
I think both migration scripts are the same, so another question: do we need both of them?
@prakharjoshi One migration Job is used for the migrations at the start, to bootstrap and run the initial set of migrations.
The other one is used with an upgrade hook, meaning that the migration runs every time you upgrade Kong or the Helm chart.
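If it helps to see this, the hooks attached to each Job can be inspected from the deployed release (a sketch; the Job names in the output depend on the chart version):
helm get hooks kong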
Regarding,
Install the chart with the ingress controller enabled; the first time it works fine, but when the next deployment is done it starts failing. After deleting the Helm release and redeploying, it works fine again.
Could you elaborate on what you mean by "when next deployment is done it start failing"?
Are you trying to deploy another Kong app in the same cluster or something else?
@hbagdi It was an issue with misconfiguration of the Postgres chart: I was not defining a password for Postgres, so on every deployment the secret created by the chart got a new password, which caused the migration Jobs to fail since they pick up the password from the secret.
I guess an update to the docs would be handy here to avoid such situations :)
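For anyone hitting this, a sketch of the fix: pin the PostgreSQL password explicitly so every upgrade renders the same Secret (the postgresql.postgresqlPassword key is the one used later in this thread; double-check it against the chart version in use):
helm upgrade kong stable/kong \
  --install \
  --set ingressController.enabled=true \
  --set postgresql.postgresqlPassword=<your-fixed-password>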
PR welcome!
It's still an issue; just running the following twice will fail:
helm upgrade kong stable/kong \
--install \
--set ingressController.enabled=true
@rafalwrzeszcz
I had the same issue. It comes from the fact that doing a helm upgrade of kong will also do an upgrade of postgresql.
The best way to debug is to compare the outputs of:
helm upgrade -f values.yaml kong stable/kong --dry-run
helm get kong
You will see that the section with kind: Secret has changed.
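Roughly, the comparison can look like this (a sketch; helm get manifest narrows the output to rendered resources, and --debug makes the dry run print the rendered manifests, so the diff will be noisy but the kind: Secret block is the part to look at):
helm upgrade -f values.yaml kong stable/kong --dry-run --debug > next.txt
helm get manifest kong > current.txt
diff current.txt next.txt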
A quick fix is to make sure that the Helm upgrade produces the same "Secret" as the previous release. One way to do it is:
helm upgrade -f values.yaml --set postgresql.postgresqlPassword=XXXX kong stable/kong
https://github.com/helm/charts/issues/6859
https://github.com/helm/charts/issues/5167
PS: Using postgresql.existingSecret does not work, since the chart then won't produce a resource of kind: Secret, which results in the existing secret being deleted on Kubernetes.
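For context, this is how existingSecret would normally be wired up (secret name and key below are placeholders, and the expected key name depends on the postgresql subchart); as noted above, it did not help with this chart version because the previously chart-managed Secret disappears on upgrade:
# Create the secret yourself so it is not owned (or deleted) by the release
kubectl create secret generic kong-postgres \
  --from-literal=postgresql-password=<your-fixed-password>
helm upgrade -f values.yaml --set postgresql.existingSecret=kong-postgres kong stable/kong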
Thanks @SeriousJul
A quick fix is to make sure that the Helm upgrade produces the same "Secret" as the previous release. One way to do it is:
helm upgrade -f values.yaml --set postgresql.postgresqlPassword=XXXX kong stable/kong
PR welcome to document this in the chart itself.