A "zombie workflow" is one that starts but does not complete. Pods are scheduled and run to completion, but the workflow is not subsequently updated.
It is as if the workflow controller never sees the pod changes.
Impacted users:
All users have been running very large workflows.
Typically:
- "insignificant pod change" is not seen in the controller logs.
- "Deadline exceeded" is seen in the logs. Increasing the CPU and memory on the Kubernetes master node may fix this.

Things that don't appear to work (rejected hypotheses):

- --burst or --qps QPS settings.
- --workflow-workers or --pod-workers settings. These only impact concurrent processing.
- ALL_POD_CHANGES_SIGNIFICANT=true. Hypothesis: we're missing significant pod changes.
- INFORMER_WRITE_BACK=false.

Questions:
Users should try the following:
- argoproj/workflow-controller:v2.11.7 - this is faster than v2.11.6 and all previous versions. Suitable for production.
- argoproj/workflow-controller:latest.
- argoproj/workflow-controller:mot with env MAX_OPERATION_TIME=30s. Make sure it logs defaultRequeueTime=30s maxOperationTime=30s. Hypothesis: we need more time to schedule pods.
- argoproj/workflow-controller:easyjson. Hypothesis: JSON marshaling is very slow.

If none of this works, then we need to investigate deeper.
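For anyone unsure where these settings go, a minimal sketch, assuming the controller runs as a Deployment named workflow-controller in the argo namespace (adjust to your install):

# Minimal sketch, assuming the controller is a Deployment named
# "workflow-controller" in the "argo" namespace (adjust to your install).

# Switch to an engineering build:
kubectl -n argo set image deployment/workflow-controller \
  workflow-controller=argoproj/workflow-controller:mot

# Set the env var the :mot build reads:
kubectl -n argo set env deployment/workflow-controller MAX_OPERATION_TIME=30s

# Confirm the setting took effect in the controller logs:
kubectl -n argo logs deploy/workflow-controller | grep maxOperationTime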
Related:
Testing this on :latest.
- withItems is used on a step template, expecting to spawn 8172 nodes with children, for a total of ~40K nodes in the workflow (see the reproduction sketch after the node tree below)
- further checks (about every 10 minutes) show that the node count is going up slowly, and pauses for several minutes with the same symptom [1]
- eventually the work stalls around item 1993, ~~with no new pods being created for this workflow, even though the cluster has ample capacity~~. Update: it does progress further, but very, very slowly; at unit 2044 right now, 1 hour after submission. Lots of stalled nodes, like in [1], at various stages. I expect it to fully stall eventually.
- "Deadline exceeded" messages are still being logged
[1]
│ └──── get-work-unit
├── run-pipeline(1988:1988)
│ └──── get-work-unit
├── run-pipeline(1989:1989)
│ └──── get-work-unit
├── run-pipeline(1990:1990)
│ └──── get-work-unit
├── run-pipeline(1991:1991)
│ └──── get-work-unit
├── run-pipeline(1992:1992)
│ └──── get-work-unit
└── run-pipeline(1993:1993)
  └──── get-work-unit
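For context, a minimal sketch of the workflow shape described above; the names, image, and item count are illustrative placeholders, not the reporter's actual manifest:

# Hypothetical reproduction sketch: a steps template fans out with
# withItems, and each item runs a child step, producing the
# run-pipeline(N:N) -> get-work-unit node pairs shown in the tree.
cat <<'EOF' | kubectl -n argo create -f -
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: large-fanout-
spec:
  entrypoint: main
  templates:
    - name: main
      steps:
        - - name: run-pipeline
            template: pipeline
            withItems: [0, 1, 2]       # the real run fanned out over ~8172 items
    - name: pipeline
      steps:
        - - name: get-work-unit
            template: work-unit
    - name: work-unit
      container:
        image: alpine:3.12
        command: [sh, -c, "sleep 60"]
EOF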
Thank you. Can you try this?
Run argoproj/workflow-controller:mot with env MAX_OPERATION_TIME=30s. Make sure it logs defaultRequeueTime=30s maxOperationTime=30s. Hypothesis: we need more time to schedule pods.
Firstly, can you try INFORMER_WRITE_BACK=false?
I did have INFORMER_WRITE_BACK=false both set and not set, as I'm also testing #4565 in parallel :). It didn't make a difference.
Seeing the same behaviour on :no-sig. I can try :mot now, as you describe above.
By the way, I do have a couple of "zombie" workflows sitting here since yesterday, in hopes that one of these tests brings them back to life (I don't care about them otherwise, but they can be useful in this way). So far they haven't budged.
Here's another perhaps relevant point: for a workflow that is both 1) stuck, AND 2) has a pod that's running but "frozen" (for a reason unrelated to Argo, e.g. the Python process in the container locks up), I can get the workflow "unstuck" temporarily, to some degree, by forcing the pod to fail (exec -it, kill python). The step has a retryPolicy, so it is retried, and at the same time a few more of those "stuck" branches get started up. But definitely not all of them. (Update: the pods all completed while I was typing this; now there are no frozen pods, but the workflows still have many "stuck" branches.)

I confirmed this hack to work on 4 of my stuck workflows. All 4 of them had exactly one frozen pod (likely a coincidence). If a stuck workflow doesn't have any frozen pods, I have no way to manually kick it this way. To be sure, I am not suggesting that frozen containers are causing the workflows to get stuck, this is merely a "lucky" workaround.
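To make the workaround concrete, a sketch of the manual kick described above; the pod name and process name are placeholders:

# Sketch of the manual workaround above; pod and process names are
# placeholders. Forcing the frozen step's process to die makes the step
# fail, so its retryPolicy fires and the controller re-reconciles.
POD=my-workflow-run-pipeline-1234567890   # the frozen pod

# Kill the main process inside the frozen container (python, in this case;
# pkill may not exist in every image, so adjust to what the image has):
kubectl -n argo exec -it "$POD" -- pkill python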
@alexec I'm currently on your :grace engineering build as per #4565 testing - that is probably orthogonal, but just mentioning for completeness.
I'm going to try :mot now.
on :mot, with MAX_OPERATION_TIME=30s, confirmed in logs: defaultRequeueTime=30s maxOperationTime=30s
everything just lit up like a Christmas tree! :heart_eyes_cat:
all "stuck" steps started up (even from a 2-day-old workflow).
@alexec not to jinx it, but i think we have a winner!
Can I play this back:
- :mot with 30s fixed your problem.
- :grace did not fix the problem.

Correct!
We were able to replicate the findings here - we can verify that the CPU usage is way down with hundreds of concurrent workflows running.
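(For anyone comparing before and after, controller CPU can be eyeballed with metrics-server; the label below assumes the default Argo install.)

# Assumes metrics-server is installed; the label matches the default
# Argo install of the controller.
kubectl -n argo top pod -l app=workflow-controller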
@tomgoren can you confirm which engineering build you ran please?
@alexec:
workflow-controller: mot+4998b2d.dirty
BuildDate: 2020-11-19T17:56:18Z
GitCommit: 4998b2d6574adfe039b9c037251ecc717e7f1996
GitTreeState: dirty
GitTag: latest
GoVersion: go1.13.15
Compiler: gc
Platform: linux/amd64
time="2020-11-20T21:16:29Z" level=info defaultRequeueTime=30s maxOperationTime=30s
I've created a POC engineering build that offloads and archives workflows to S3 instead of MySQL or Postgres. My hypothesis is that offloading there may be faster for users running large (5,000+ node) workflows, or that archiving may be more useful to many users. On top of this, it may be cheaper for many users. I challenge you to prove me wrong. https://github.com/argoproj/argo/pull/4582
Offloading to S3 would be very interesting and welcome. Will try it when I can! Over the past weekend we ended up running workflows of enormous size on the :latest build, and the highest node count we were able to get to, even with MAX_OPERATION_TIME=600s, was about 50,000 nodes. It's probably unreasonable to expect Argo to handle more at this point, but if anyone was curious about testing the limits, here you go :) (We ended up having to split our workflows into much smaller chunks, for sanity and observability's sake.)
50,000 nodes.
Excellent. Can you share a screenshot?
We don't have those workflows in the cluster anymore, and I didn't take a screenshot! I'll try to run one of the larger workflows again over the holidays to take the screenshot.
I think this is now fixed.