Argo: Hello World example stuck at ContainerCreating with microk8s

Created on 1 Apr 2020 · 3 comments · Source: argoproj/argo

What happened:

I installed microk8s 1.18.0 on Ubuntu 18.04 and enabled dns, storage, and ingress.
Then I went through https://github.com/argoproj/argo/blob/master/docs/getting-started.md.
I executed the following:

kubectl create namespace argo
kubectl apply -n argo -f https://raw.githubusercontent.com/argoproj/argo/stable/manifests/install.yaml
kubectl create rolebinding default-admin --clusterrole=admin --serviceaccount=default:default

I installed the argo binary, then I did microk8s.kubectl config view --raw > $HOME/.kube/config
Then I tried argo submit --watch https://raw.githubusercontent.com/argoproj/argo/master/examples/hello-world.yaml, which got stuck at ContainerCreating.

When I run kubectl describe pods, I see events like these:

  Type     Reason       Age               From                    Message
  ----     ------       ----              ----                    -------
  Normal   Scheduled    18s               default-scheduler       Successfully assigned default/hello-world-vxd48 to ubuntu-server
  Warning  FailedMount  2s (x6 over 18s)  kubelet, ubuntu-server  MountVolume.SetUp failed for volume "docker-sock" : hostPath type check failed: /var/run/docker.sock is not a socket file
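The FailedMount event above comes from the kubelet's hostPath "Socket" type check: the docker executor mounts /var/run/docker.sock, and on a containerd-based microk8s host that path is not a UNIX socket. A quick way to confirm this on the node is the shell `-S` test; the helper below is a small sketch (the function name is made up for illustration):

```shell
# check_sock PATH: report whether PATH is a UNIX socket, which is exactly
# what kubelet's hostPath "Socket" type check verifies before mounting
# docker-sock for Argo's docker executor.
check_sock() {
  if [ -S "$1" ]; then
    echo "socket"
  else
    echo "not a socket"
  fi
}

check_sock /var/run/docker.sock
```

On a stock microk8s 1.18 node this prints "not a socket" (or the path does not exist at all), which matches the event message.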

What you expected to happen:

I expected the example to run.

How to reproduce it (as minimally and precisely as possible):

Repeat the steps described above.

Anything else we need to know?:

Environment:

  • Argo version:
$ argo version
argo: v2.7.0
  BuildDate: 2020-03-31T23:35:04Z
  GitCommit: 4d1175eb68f6578ed5d599f877be9b4855d33ce9
  GitTreeState: clean
  GitTag: v2.7.0
  GoVersion: go1.13.4
  Compiler: gc
  Platform: linux/amd64

  • Kubernetes version:
$ kubectl version -o yaml
clientVersion:
  buildDate: "2020-03-25T14:58:59Z"
  compiler: gc
  gitCommit: 9e991415386e4cf155a24b1da15becaa390438d8
  gitTreeState: clean
  gitVersion: v1.18.0
  goVersion: go1.13.8
  major: "1"
  minor: "18"
  platform: linux/amd64
serverVersion:
  buildDate: "2020-03-25T14:50:46Z"
  compiler: gc
  gitCommit: 9e991415386e4cf155a24b1da15becaa390438d8
  gitTreeState: clean
  gitVersion: v1.18.0
  goVersion: go1.13.8
  major: "1"
  minor: "18"
  platform: linux/amd64

Other debugging information (if applicable):

  • workflow result:
argo get <workflowname>
  • executor logs:
kubectl logs <failedpodname> -c init
kubectl logs <failedpodname> -c wait
  • workflow-controller logs:
kubectl logs -n argo $(kubectl get pods -l app=workflow-controller -n argo -o name)


Message from the maintainers:

If you are impacted by this bug please add a 👍 reaction to this issue! We often sort issues this way to know what to prioritize.

Label: bug

All 3 comments

You should check your docker sock file path.

microk8s uses containerd. I installed microk8s 1.13, which still uses Docker, and then manually applied these symlinks:

sudo ln -s /var/snap/microk8s/common/var/lib/docker /var/lib/docker
sudo ln -s /var/snap/microk8s/current/docker.sock /var/run/docker.sock

as mentioned in this issue: https://github.com/kubeflow/kubeflow/issues/2347. After that it started to work.

My question is how to get it to work with the latest microk8s, if possible.

OK, I found a way to make it work with the latest microk8s. It seems that the default executor needs to be changed (https://github.com/argoproj/argo/blob/master/docs/workflow-executors.md): the default is docker, which doesn't work with the latest microk8s. To get the hello-world example (https://raw.githubusercontent.com/argoproj/argo/master/examples/hello-world.yaml) running, I added containerRuntimeExecutor: k8sapi to the workflow-controller-configmap, so that part looks like this now:

apiVersion: v1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
data:
  config: |
    containerRuntimeExecutor: k8sapi

It worked with k8sapi.
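Instead of editing the manifest by hand, the same setting can be merged into the existing ConfigMap with kubectl patch. This is a sketch under the assumption that the stock install.yaml created workflow-controller-configmap (and the workflow-controller Deployment) in the argo namespace; note that a merge patch replaces the whole config key, so any other settings in it would need to be included in the payload too. The script below only builds and prints the payload; the cluster-side commands are left in comments:

```shell
# Build the merge-patch payload that sets the container runtime executor.
# In the JSON string, \n becomes a real newline when kubectl parses it.
patch='{"data":{"config":"containerRuntimeExecutor: k8sapi\n"}}'
echo "$patch"

# Apply it against the cluster (assumes the argo namespace from install.yaml):
# kubectl patch configmap workflow-controller-configmap -n argo --type merge -p "$patch"
# Then restart the controller so it reloads its configuration:
# kubectl rollout restart deployment/workflow-controller -n argo
```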
