I am new to Argo and am trying the install on OpenShift (crc v4.2.2), with the usual actions to weaken security within the argo project. OpenShift v4 has removed Docker and uses CRI-O.
The basic Argo install looked OK (although I can't reach the GUI from the route I manually created), but when trying the basic workflows I see the following errors:
```
Pod: coinflip-c29nz-981967317
Namespace: argo
a minute ago, generated from kubelet on crc-shdl4-master-0 (9 times in the last 3 minutes)
MountVolume.SetUp failed for volume "docker-sock" : hostPath type check failed: /var/run/docker.sock is not a socket file

Pod: coinflip-c29nz-981967317
Namespace: argo
a minute ago, generated from kubelet on crc-shdl4-master-0
Unable to mount volumes for pod "coinflip-c29nz-981967317_argo(97255e6a-0d11-11ea-b49a-4aec3af0f6d9)": timeout expired waiting for volumes to attach or mount for pod "argo"/"coinflip-c29nz-981967317". list of unmounted volumes=[docker-sock]. list of unattached volumes=[podmetadata docker-sock argo-staging default-token-blhg2]
```
Just noticed that there is an Operator for Argo: https://github.com/jmckind/argocd-operator/blob/master/docs/guides/install-openshift.md
Now trying this approach. I will close this issue if it works :-)
OK, this is for Argo CD, which looks cool, but I still can't get Workflows working.
I have installed Argo Workflows in the same argocd project created by the operator.
I have run:

```shell
oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:argocd:argo
```

so that the workflow controller runs, but I am still seeing the errors given above when trying to create basic workflows:
```
Pod: hello-world-v9hmd
Namespace: argocd
less than a minute ago, generated from kubelet on crc-shdl4-master-0 (9 times in the last 3 minutes)
MountVolume.SetUp failed for volume "docker-sock" : hostPath type check failed: /var/run/docker.sock is not a socket file

a minute ago, generated from kubelet on crc-shdl4-master-0
Unable to mount volumes for pod "hello-world-v9hmd_argocd(22280a8f-0d1f-11ea-b3de-4aec3af0f6d9)": timeout expired waiting for volumes to attach or mount for pod "argocd"/"hello-world-v9hmd". list of unmounted volumes=[docker-sock]. list of unattached volumes=[podmetadata docker-sock default-token-rt4c6]
```
I have just tried something mentioned on another issue:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
data:
  config: |
    containerRuntimeExecutor: cri-o
```

No difference.
More Googling and I was on the right path: the default docker executor mounts /var/run/docker.sock from the host, which doesn't exist under CRI-O, so the executor has to be changed. I needed to change `cri-o` above to `k8sapi` and all is well, although I also needed to add:

```shell
oc adm policy add-cluster-role-to-user cluster-admin system:serviceaccount:argocd:default
```
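For anyone hitting the same thing, the executor can also (I believe) be switched in place with a patch rather than by applying a separate YAML file; the namespace and ConfigMap name here assume the operator-based install described above:

```shell
# Switch the workflow executor to k8sapi in one step.
# Assumes the controller lives in the argocd namespace, as in this thread.
kubectl -n argocd patch configmap workflow-controller-configmap \
  --type merge \
  -p '{"data":{"config":"containerRuntimeExecutor: k8sapi\n"}}'

# The workflow-controller watches its ConfigMap; if the change does not
# take effect, restart the controller deployment:
kubectl -n argocd rollout restart deployment workflow-controller
```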
I'm using a kind k8s cluster and hit the same problem.
Editing the ConfigMap as @stevef1uk said, the ConfigMap YAML is as follows:

```yaml
# config-executor.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
data:
  config: |
    containerRuntimeExecutor: k8sapi
```

Then `kubectl -n argo apply -f config-executor.yaml`, and the "/var/run/docker.sock is not a socket file" problem goes away.
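To confirm the fix, re-submitting the same kind of minimal workflow that produced the errors above should now run; this is the standard Argo hello-world example:

```yaml
# hello-world.yaml: the standard Argo example workflow, for re-testing the executor fix
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: hello-world-
spec:
  entrypoint: whalesay
  templates:
  - name: whalesay
    container:
      image: docker/whalesay:latest
      command: [cowsay]
      args: ["hello world"]
```

Submit it with `kubectl -n argo create -f hello-world.yaml` (note: `create`, not `apply`, because of `generateName`), and the pod should start without trying to mount docker-sock.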