I am attempting to run Argo Workflow pods on AWS Fargate.
I get the message below whenever I use a Fargate Profile, regardless of whether or not I mount a ConfigMap or Secret as a volume.
```yaml
$ kubectl get pod -o yaml
...
spec:
  ...
  volumes:
  - downwardAPI:
      defaultMode: 420
      items:
      - fieldRef:
          apiVersion: v1
          fieldPath: metadata.annotations
        path: annotations
    name: podmetadata
  - hostPath:
      path: /var/run/docker.sock
      type: Socket
    name: docker-sock
  - name: argo-token-nrjft
    secret:
      defaultMode: 420
      secretName: argo-token-nrjft
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-03-24T07:21:53Z"
    message: 'Pod not supported on Fargate: volumes not supported: docker-sock'
    reason: POD_UNSUPPORTED_ON_FARGATE
    status: "False"
    type: PodScheduled
  phase: Pending
  qosClass: BestEffort
...
```
Although I could successfully apply my manifest file and the workflow was created, the STATUS of the pod stayed Pending and it never started running.
I'm guessing that Argo always mounts a hostPath volume (the Docker socket) when running workflows, and Fargate does not allow that.
Has anyone encountered this and found a solution?
FYI, this is my environment:
Kubernetes: 1.15
argo: v2.7.0-rc1
And this is the manifest file I tried to apply:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: test
  generateName: test-
  namespace: test
spec:
  serviceAccountName: argo
  entrypoint: test
  templates:
  - name: test
    container:
      image: nginx:alpine
      command: [sh, -c]
      args: ["echo test"]
```
What I expected: to successfully run the workflow on Fargate.
@hayato1121 I'm facing the same issue on EKS Fargate. Were you able to find a workaround?
@AkhilKavuri
No. Because of this issue, I gave up on Fargate and use EC2 instead.
@hayato1121 : Change the workflow executor to K8sAPI : https://github.com/argoproj/argo/blob/master/docs/workflow-executors.md
The default mechanism used by the wait container to get pod logs and pod execution status (to make sure a step completes) is to mount the Docker socket, and that does not work on EKS with the default seccomp profile.
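For reference, switching the executor is done in the workflow controller's ConfigMap. A minimal sketch, assuming a default Argo v2.x install (ConfigMap name `workflow-controller-configmap` in the `argo` namespace; adjust to your deployment):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: argo
data:
  # Use the Kubernetes API executor instead of the default Docker executor,
  # so the wait container no longer needs a hostPath mount of
  # /var/run/docker.sock (which Fargate rejects).
  containerRuntimeExecutor: k8sapi
```

After applying this, restart the workflow controller so the new executor setting takes effect.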
@sfc-gh-pkrishnamurthy I see your recommendation to use k8sapi; does that mean you've successfully tested it in that way?
@sfc-gh-pkrishnamurthy nvm, I see your other issue here: https://github.com/argoproj/argo/issues/3265
@weisjohn I would say that switching to k8sapi was functionally successful. It works.
I had problems when the cluster was fairly overloaded; it causes issues like https://github.com/argoproj/argo/issues/3007 and https://github.com/argoproj/argo/issues/3265.
The wait container retries 5 times at a 10 ms interval, which is not great, but this is only a problem under load.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.