Argo: Can't run example "hello-world" on OpenShift due to hostPath error

Created on 20 Mar 2019 · 10 comments · Source: argoproj/argo

Is this a BUG REPORT or FEATURE REQUEST?: BUG REPORT

What happened:
Following the "Getting started" section, I performed the following steps:

brew install argoproj/tap/argo
oc new-project argo
oc apply -n argo -f https://raw.githubusercontent.com/argoproj/argo/v2.2.1/manifests/install.yaml
oc create rolebinding default-admin --clusterrole=admin --serviceaccount=default:default

argo submit --watch https://raw.githubusercontent.com/argoproj/argo/master/examples/hello-world.yaml

Argo showed an error message:

pods "hello-world-jdbzm" is forbidden: unable to validate against any security context constraint: [spec.volumes[1]: Invalid value: "hostPath": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: "hostPath": hostPath volumes are not allowed to be used]

After looking at the workflow template I was puzzled, since neither a volume nor a hostPath is defined there. Other examples (coinflip) aborted with the same error.

What you expected to happen:

An output that the hello-world workflow has been processed successfully.

How to reproduce it (as minimally and precisely as possible):

  • Install OpenShift 3.11
  • Follow the steps mentioned above

Anything else we need to know?:

Environment:

  • Argo version:
$ argo version
argo: v2.2.1
  BuildDate: 2018-10-11T16:25:59Z
  GitCommit: 3b52b26190163d1f72f3aef1a39f9f291378dafb
  GitTreeState: clean
  GitTag: v2.2.1
  GoVersion: go1.10.3
  Compiler: gc
  Platform: darwin/amd64
  • Kubernetes version:
$ kubectl version -o yaml
clientVersion:
  buildDate: 2018-06-27T20:17:28Z
  compiler: gc
  gitCommit: 91e7b4fd31fcd3d5f436da26c980becec37ceefe
  gitTreeState: clean
  gitVersion: v1.11.0
  goVersion: go1.10.2
  major: "1"
  minor: "11"
  platform: darwin/amd64
serverVersion:
  buildDate: 2019-02-08T23:07:29Z
  compiler: gc
  gitCommit: d4cacc0
  gitTreeState: clean
  gitVersion: v1.11.0+d4cacc0
  goVersion: go1.10.3
  major: "1"
  minor: 11+
  platform: linux/amd64

Other debugging information (if applicable):

  • workflow result:
$ argo get <workflowname>
  • executor logs:
$ kubectl logs <failedpodname> -c init
$ kubectl logs <failedpodname> -c wait
  • workflow-controller logs:
$ kubectl logs -n kube-system $(kubectl get pods -l app=workflow-controller -n kube-system -o name)
time="2019-03-20T09:47:44Z" level=info msg="Processing workflow" namespace=argo workflow=hello-world-thv6l
time="2019-03-20T09:47:44Z" level=info msg="Updated phase  -> Running" namespace=argo workflow=hello-world-thv6l
time="2019-03-20T09:47:44Z" level=info msg="Failed to create pod hello-world-thv6l (hello-world-thv6l): pods \"hello-world-thv6l\" is forbidden: unable to validate against any security context constraint: [spec.volumes[1]: Invalid value: \"hostPath\": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: \"hostPath\": hostPath volumes are not allowed to be used]" namespace=argo workflow=hello-world-thv6l
time="2019-03-20T09:47:44Z" level=info msg="Pod node hello-world-thv6l (hello-world-thv6l) initialized Error (message: pods \"hello-world-thv6l\" is forbidden: unable to validate against any security context constraint: [spec.volumes[1]: Invalid value: \"hostPath\": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: \"hostPath\": hostPath volumes are not allowed to be used])" namespace=argo workflow=hello-world-thv6l
time="2019-03-20T09:47:44Z" level=info msg="Updated phase Running -> Error" namespace=argo workflow=hello-world-thv6l
time="2019-03-20T09:47:44Z" level=info msg="Updated message  -> pods \"hello-world-thv6l\" is forbidden: unable to validate against any security context constraint: [spec.volumes[1]: Invalid value: \"hostPath\": hostPath volumes are not allowed to be used spec.volumes[2]: Invalid value: \"hostPath\": hostPath volumes are not allowed to be used]" namespace=argo workflow=hello-world-thv6l
time="2019-03-20T09:47:44Z" level=info msg="Marking workflow completed" namespace=argo workflow=hello-world-thv6l
time="2019-03-20T09:47:44Z" level=info msg="Workflow update successful" namespace=argo workflow=hello-world-thv6l

All 10 comments

It's because the workflow pod tries to mount /var/run/docker.sock for better performance on commands like docker cp, and your service account doesn't have permission to mount host paths. Either grant your service account the hostmount-anyuid SCC, which should resolve the issue, or simply edit the workflow-controller configmap and add containerRuntimeExecutor: kubelet or containerRuntimeExecutor: k8sapi.
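The two suggested remedies can be sketched as commands (the service account name `default` and the namespace `argo` are assumptions based on the steps in the report, not confirmed in this comment):

```shell
# Option 1: allow the workflow pods' service account to mount host paths
oc adm policy add-scc-to-user hostmount-anyuid -z default -n argo

# Option 2: switch the executor so the docker.sock hostPath mounts
# are no longer needed
kubectl patch configmap workflow-controller-configmap -n argo \
  --patch '{"data": {"config": "containerRuntimeExecutor: kubelet"}}'
```

Note that the executor setting has to end up under the `config` key of the ConfigMap's `data`, which the patch above takes care of.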

Hi @VaibhavPage ,

thanks for your help, adding the containerRuntimeExecutor: kubelet worked for me.

Cheers!

@VaibhavPage, @devops-42, I have added "containerRuntimeExecutor: kubelet", but the issue is still not resolved.

Please find my configmap below:

apiVersion: v1
data:
  ContainerRuntimeExecutor: kubelet
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"ConfigMap","metadata":{"annotations":{},"name":"workflow-controller-configmap","namespace":"argo"}}
  creationTimestamp: "2019-05-13T06:22:38Z"
  name: workflow-controller-configmap
  namespace: argo
  resourceVersion: "1701032"
  selfLink: /api/v1/namespaces/argo/configmaps/workflow-controller-configmap
  uid: 81a2bc94-7547-11e9-b5f3-00505699ae46

Can you kindly check whether anything is missing in my configmap?

The ConfigMap approach didn't work for me either on OCP 3.11.

EDIT:
@gchandramohan found out what the problem was for both of us: the `ContainerRuntimeExecutor: kubelet` entry has to be nested under the `config` key, i.e.:

apiVersion: v1
data:
  config: |
    ContainerRuntimeExecutor: kubelet
kind: ConfigMap
...  # left out for brevity

Since I am having the same issue, it would be great if you could show us the solution. How exactly do we fix this?

See here for adding `containerRuntimeExecutor: kubelet`:
https://github.com/amalic/argo/commit/b3339659a0060259b50597d28d0072909cdea2d0

and here for k8sapi:
https://github.com/amalic/argo/commit/198767cb9cd0d1be02d6fc952a1fe368b7721cbb

Both worked for me, but I am sticking to k8sapi.

Hi, sorry for bringing this up again, but changing the runtime executor to "kubelet" in the namespace install manifest does not work for me.

Latest argo, k8s 1.15.9.

diff --git a/manifests/namespace-install.yaml b/manifests/namespace-install.yaml
index 9ab771b5..74dc2568 100644
--- a/manifests/namespace-install.yaml
+++ b/manifests/namespace-install.yaml
@@ -251,6 +251,9 @@ apiVersion: v1
 kind: ConfigMap
 metadata:
   name: workflow-controller-configmap
+data:
+  config: |
+    ContainerRuntimeExecutor: kubelet
 ---
 apiVersion: v1
 kind: Service

@pisymbol use k8sapi instead of kubelet
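Applied to the diff above, that suggestion would make the `data` section of the ConfigMap read as follows (note the lowercase `containerRuntimeExecutor` spelling, which is the key name Argo's configuration documents):

```yaml
# workflow-controller-configmap: run workflow pods via the Kubernetes API
# instead of mounting /var/run/docker.sock as a hostPath volume
data:
  config: |
    containerRuntimeExecutor: k8sapi
```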

@CermakM Tried both, still no luck. I got it to work by giving my namespace an unrestricted PSP with respect to volumes, but I thought k8sapi would cause the controller not to use hostPath, right?

@pisymbol yes, k8sapi executor will not use hostPath

@CermakM Is there something special I have to do beyond changing the namespace-install YAML's ConfigMap as above and reinstalling? That's what I did, and I still get the hostPath error, which is why I am confused (this was a completely fresh install too: think k delete -f namespace-install.yaml && k apply -f namespace-install.yaml -n argo, etc.).
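One thing worth checking in a situation like this (an assumption on my part, not something confirmed in the thread): the controller may not pick up ConfigMap changes until its pod is recreated, so after editing the config it can help to restart it, e.g.:

```shell
# Restart the workflow-controller so it re-reads its ConfigMap
# (assumes the deployment is named "workflow-controller" in the "argo" namespace)
kubectl -n argo rollout restart deployment workflow-controller
```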
