Since Helm doesn't allow installing a new release while an old one with the same name hasn't been purged, helm diff reports errors, and helmfile apply fails in turn. So we end up with something like this:
worker 2/17 finished
Error: "shared-alpha-elasticsearch" has no deployed releases
"shared-alpha-elasticsearch" has no deployed releases
Error: plugin "diff" exited with error
worker 6/17 finished
Error: "shared-beta-rabbitmq" has no deployed releases
"shared-beta-rabbitmq" has no deployed releases
Error: plugin "diff" exited with error
worker 3/17 finished
Error: "shared-dev-elasticsearch" has no deployed releases
"shared-dev-elasticsearch" has no deployed releases
Error: plugin "diff" exited with error
Do you have any thoughts on how to work around this?
The only fix I came up with is to do a delete --purge before reapplying, but it's not an ideal solution.
Edit: just found your post, https://github.com/databus23/helm-diff/issues/121#issuecomment-462047382 , not sure if my answer is really relevant :/
Yep, I just want to know whether the maintainers have any thoughts about more elegant solutions. This behavior is a bit annoying. We'll probably see the same weirdness when one of the releases is in the FAILED state; I haven't had a chance to check this out yet.
Perhaps this could be an additional flag for helmfile delete that checks whether the release has been purged and, if not, removes it completely (helm status works fine, so the check is possible). Then we could run that command in conjunction with helmfile apply to achieve idempotency. A rough sketch of the idea follows.
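For illustration, a minimal pre-apply cleanup along those lines might look like the sketch below. This is only an assumption-laden example, not helmfile's actual behavior: it assumes Helm 2.x (where delete --purge exists and helm status prints a "STATUS: ..." line), and the release name is just one taken from the errors above.

```sh
#!/bin/sh
# Sketch only: purge a release whose install failed, so that a
# subsequent `helmfile apply` can diff and install cleanly.
RELEASE="shared-alpha-elasticsearch"   # example name from the errors above

# Helm 2.x prints a "STATUS: FAILED" line for a failed release.
# Note: this also matches releases whose *upgrade* failed; in the
# first-install case purging is what you want, otherwise be careful.
if helm status "$RELEASE" 2>/dev/null | grep -q "STATUS: FAILED"; then
  helm delete --purge "$RELEASE"
fi

helmfile apply
```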
It seems helmfile works ok for releases with the FAILED state.
But when the INITIAL release ends up in the FAILED state, helmfile diff returns the error:
Error: "s3" has no deployed releases
"s3" has no deployed releases
Error: plugin "diff" exited with error
and helmfile apply also fails.
I'm not sure this is the cause, but it seems like the options for the helm command are out of order. Running helmfile in debug mode, I see:
helm upgrade --install --reset-values hello-python dummy/hello-python --namespace staging --values /var/folders/d3/yxwv5yn5715fk31lq36lnk780000gn/T/values951440509 --values /var/folders/d3/yxwv5yn5715fk31lq36lnk780000gn/T/values351191480 --values /var/folders/d3/yxwv5yn5715fk31lq36lnk780000gn/T/values322829751 --set rbac.create=false
I think that the release and chart need to come first, followed by the flags.
I am using helm version 2.12.3
@pdutta777 I don't believe that's the case. The error you are getting looks like what @andrewnazarov spoke of. If you install a chart and the very first install fails, you will get this error message from helm: "shared-alpha-elasticsearch" has no deployed releases
I solved my issue by making sure that the previous install was purged completely: helm delete hello-python --purge. After that, helmfile deploys successfully.
@pdutta777, but this is a manual action; it can't be done automatically. And I find helmfile a great tool for unattended workflows. Or at least I want it to be :)
Helm's --atomic flag, introduced in 2.13, will purge an initial release if it fails for whatever reason, meaning subsequent installs will succeed.
https://github.com/roboll/helmfile/pull/491 aims to support this as a configurable option for Helmfile releases
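For context, once that PR lands, the option would presumably be set in helmfile.yaml roughly like this. A sketch only: it assumes the option is exposed as an atomic key, both in helmDefaults and per release, mirroring the Helm flag; the release and chart names are just examples from this thread.

```yaml
# Sketch: enabling Helm's --atomic behaviour from helmfile.yaml
# (assumes the `atomic` key proposed in the PR above).
helmDefaults:
  atomic: true        # pass --atomic for every release by default

releases:
  - name: shared-alpha-elasticsearch   # example name from the thread
    chart: stable/elasticsearch
    atomic: false     # hypothetical per-release override
```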
Thanks for contributing the --atomic feature, @Evesy!
Now that the feature has been merged and released, I'm wondering what we should do about this issue next.
Could we perhaps safely close this as resolved if we made --atomic the default behavior of helmfile?
cc @andrewnazarov @pdutta777 @sstarcher
I'm not strongly opinionated, but I believe Helmfile should mirror the default behaviour of Helm when it comes to the actual execution of Helm binary commands, unless explicitly specified otherwise (or unless this pattern has already been broken with other flags).
Should people expect behavioural differences when moving a Helm release into a barebones Helmfile structure, or should any non-default behaviour have to be opted into via helmDefaults/releases?
I do think that Helm itself should purge a release by default if it fails on the very first run, since aside from debugging purposes there's no use for it to stick around: there's no upgrade path from that state.
I believe this should be resolved with the atomic flag.
@Evesy You've made a good point!
I believe Helmfile should mirror the default behaviour of Helm
I basically agree.
For example, we have a slightly related discussion in #511 for helmfile delete, saying helmfile delete should be consistent with helm delete.
My suggestion, both here and there, is to use different terminology where helmfile is opinionated. That's why it's helmfile sync and helmfile apply rather than helmfile upgrade. And since it isn't helmfile upgrade, I thought it would be OK to turn atomic on by default.
WDYT?
The --atomic flag looks good, but it won't fix the case where a release was deployed without it and failed.
If the initial release fails, then even if you add the --atomic flag to your helmfile config later, you will still be stuck with the failed release.
Setting it to true by default will probably help, but there won't be a silver bullet, I think. For releases that have already failed, a one-time cleanup like the sketch below would still be needed.
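To be concrete, that one-time cleanup might look like the following. A hedged sketch assuming Helm 2.x, where helm list supports the --failed and --short flags and delete --purge exists; it is destructive, so review the list before purging anything.

```sh
# Show releases currently stuck in FAILED state.
helm ls --failed

# Purge each failed release. Careful: this also removes releases whose
# *upgrade* failed but that still have a good deployed revision, so
# only run it wholesale if you know the failures are first installs.
for r in $(helm ls --short --failed); do
  helm delete --purge "$r"
done
```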
I do think that Helm itself should purge a release if it fails on the first ever run by default
Totally agree.
Is there any fix for this? The current setup does not result in idempotent deployments. How can we overcome this?
Never mind, it works idempotently after switching to apply.
So I experienced a timeout error on my apply, which was creating multiple releases: one service and two deploys. When I ran destroy, only the release for the service (the only resource successfully created) was destroyed. A subsequent apply could only create the service and returned the error that my release has no deployed releases.
Would it make sense, when this error comes up, for helmfile to suggest running helm del --purge $RELEASE_NAME?
I only ask because it's not intuitive: the error says a release doesn't exist, so you wouldn't think the solution is to purge the release with regular helm, especially after running destroy.
As an aside, I am pretty sure I ran into the same issue when I was getting up and running with helmfile for the first time.