Argo: failed to save outputs: read /argo/secret: is a directory

Created on 6 Apr 2020  ·  7 Comments  ·  Source: argoproj/argo

Checklist:

  • [X] I've included the version.
  • [X] I've included reproduction steps.
  • [X] I've included the workflow YAML.
  • [X] I've included the logs.

What happened:
When I try to use artifact passing, the output can't be saved (using https://raw.githubusercontent.com/argoproj/argo/master/examples/artifact-passing.yaml, for instance).

My ConfigMap:

    artifactRepository:
      s3:
        bucket: XXXX
        endpoint: s3.eu-west-1.amazonaws.com
        accessKeySecret:
          name: argo-artifacts
          key: accesskey
          secretKeySecret:
          name: argo-artifacts
          key: secretkey

I've verified that my AWS credentials are valid; they appear in the dashboard and were added this way:

kubectl create secret generic argo-artifacts --from-file=./accesskey --from-file=./secretkey

What you expected to happen:
The generated artifacts are uploaded to the S3 bucket.

Environment:

  • Argo version:
argo: v2.5.1
  BuildDate: 2020-02-20T18:19:45Z
  GitCommit: fb496a244383822af5d4c71431062cebd6de0ee4
  GitTreeState: clean
  GitTag: v2.5.1
  GoVersion: go1.13.4
  Compiler: gc
  Platform: linux/amd64
  • Kubernetes version:
clientVersion:
  buildDate: "2020-03-25T14:58:59Z"
  compiler: gc
  gitCommit: 9e991415386e4cf155a24b1da15becaa390438d8
  gitTreeState: clean
  gitVersion: v1.18.0
  goVersion: go1.13.8
  major: "1"
  minor: "18"
  platform: linux/amd64
serverVersion:
  buildDate: "2020-03-25T14:50:46Z"
  compiler: gc
  gitCommit: 9e991415386e4cf155a24b1da15becaa390438d8
  gitTreeState: clean
  gitVersion: v1.18.0
  goVersion: go1.13.8
  major: "1"
  minor: "18"
  platform: linux/amd64

Logs

Name:                artifact-passing-98hkq
Namespace:           default
ServiceAccount:      default
Status:              Failed
Message:             child 'artifact-passing-98hkq-1179724389' failed
Created:             Mon Apr 06 11:47:44 +0200 (1 minute ago)
Started:             Mon Apr 06 11:47:44 +0200 (1 minute ago)
Finished:            Mon Apr 06 11:47:48 +0200 (1 minute ago)
Duration:            4 seconds

STEP                                          PODNAME                            DURATION  MESSAGE
 ✖ artifact-passing-98hkq (artifact-example)                                               child 'artifact-passing-98hkq-1179724389' failed
 └---⚠ generate-artifact (whalesay)           artifact-passing-98hkq-1179724389  4s        failed to save outputs: read /argo/secret: is a directory


Message from the maintainers:

If you are impacted by this bug please add a 👍 reaction to this issue! We often sort issues this way to know what to prioritize.

bug

Most helpful comment

My bad, it was an indentation problem in the workflow-controller-configmap.
Closing the issue; to avoid this, make sure your YAML indentation is correct.

All 7 comments

Can you provide the pod logs for the wait container?

Sure

time="2020-04-06T09:47:45Z" level=info msg="Starting Workflow Executor" version=vv2.7.0+4d1175e.dirty
time="2020-04-06T09:47:45Z" level=info msg="Creating a docker executor"
time="2020-04-06T09:47:45Z" level=info msg="Executor (version: vv2.7.0+4d1175e.dirty, build_date: 2020-04-01T00:13:15Z) initialized (pod: default/artifact-passing-98hkq-1179724389) with template:\n{\"name\":\"whalesay\",\"arguments\":{},\"inputs\":{},\"outputs\":{\"artifacts\":[{\"name\":\"hello-art\",\"path\":\"/tmp/hello_world.txt\"}]},\"metadata\":{},\"container\":{\"name\":\"\",\"image\":\"docker/whalesay:latest\",\"command\":[\"sh\",\"-c\"],\"args\":[\"sleep 1; cowsay hello world | tee /tmp/hello_world.txt\"],\"resources\":{}},\"archiveLocation\":{\"s3\":{\"endpoint\":\"s3.eu-west-1.amazonaws.com\",\"bucket\":\"photosets.myeggo.com\",\"accessKeySecret\":{\"name\":\"argo-artifacts\",\"key\":\"secretkey\"},\"secretKeySecret\":{\"key\":\"\"},\"key\":\"cosmos/artifact-passing-98hkq/artifact-passing-98hkq-1179724389\"}}}"
time="2020-04-06T09:47:45Z" level=info msg="Waiting on main container"
time="2020-04-06T09:47:47Z" level=info msg="main container started with container ID: 8da34b20e8ed6251c21061603b851fb9cc65742e5fd2577791a31cfbffc546cf"
time="2020-04-06T09:47:47Z" level=info msg="Starting annotations monitor"
time="2020-04-06T09:47:47Z" level=info msg="docker wait 8da34b20e8ed6251c21061603b851fb9cc65742e5fd2577791a31cfbffc546cf"
time="2020-04-06T09:47:47Z" level=info msg="Starting deadline monitor"
time="2020-04-06T09:47:47Z" level=info msg="Main container completed"
time="2020-04-06T09:47:47Z" level=info msg="No output parameters"
time="2020-04-06T09:47:47Z" level=info msg="Saving output artifacts"
time="2020-04-06T09:47:47Z" level=info msg="Staging artifact: hello-art"
time="2020-04-06T09:47:47Z" level=info msg="Copying /tmp/hello_world.txt from container base image layer to /tmp/argo/outputs/artifacts/hello-art.tgz"
time="2020-04-06T09:47:47Z" level=info msg="Archiving 8da34b20e8ed6251c21061603b851fb9cc65742e5fd2577791a31cfbffc546cf:/tmp/hello_world.txt to /tmp/argo/outputs/artifacts/hello-art.tgz"
time="2020-04-06T09:47:47Z" level=info msg="sh -c docker cp -a 8da34b20e8ed6251c21061603b851fb9cc65742e5fd2577791a31cfbffc546cf:/tmp/hello_world.txt - | gzip > /tmp/argo/outputs/artifacts/hello-art.tgz"
time="2020-04-06T09:47:47Z" level=info msg="Annotations monitor stopped"
time="2020-04-06T09:47:48Z" level=info msg="Archiving completed"
time="2020-04-06T09:47:48Z" level=error msg="executor error: read /argo/secret: is a directory"
time="2020-04-06T09:47:48Z" level=info msg="Killing sidecars"
time="2020-04-06T09:47:48Z" level=info msg="Alloc=3867 TotalAlloc=10745 Sys=70080 NumGC=4 Goroutines=9"
time="2020-04-06T09:47:48Z" level=fatal msg="read /argo/secret: is a directory"

@sylvainblot
Hi there,
we encountered this bug when deploying two Argo installations in the same cluster (in different namespaces).
Did you do that too?

Each workflow-controller was configured to use a specific namespace (as explained in https://github.com/argoproj/argo/issues/508).
The first installation works; the second one gives the error above.

@ProvoK I have a single installation. I just retried the install on a new machine with minikube and got the same error.

@sylvainblot
Got it. We were using minikube too yesterday; maybe it's something related to it?
We repeated the same procedure on a GKE cluster and the problem did not occur.

👍 That's what I was thinking too. I don't have the issue with an on-premise cluster deployed using NVIDIA DeepOps.

My bad, it was an indentation problem in the workflow-controller-configmap.
Closing the issue; to avoid this, make sure your YAML indentation is correct.
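For reference, the mis-indented ConfigMap in the report is still syntactically valid YAML, which is why nothing complained at deploy time: `secretKeySecret:` becomes an empty key nested inside `accessKeySecret`, and the duplicated `name`/`key` entries silently overwrite the access-key values, which matches the `archiveLocation` the wait container logged (`"accessKeySecret":{"key":"secretkey"},"secretKeySecret":{"key":""}`). A minimal sketch with PyYAML (assumed installed; PyYAML's last-duplicate-wins behavior for repeated mapping keys is also assumed), alongside the intended layout:

```python
import yaml

# The ConfigMap from the issue: "secretKeySecret:" and the lines after
# it are indented one level too deep, so they land inside accessKeySecret.
broken = """
artifactRepository:
  s3:
    bucket: XXXX
    endpoint: s3.eu-west-1.amazonaws.com
    accessKeySecret:
      name: argo-artifacts
      key: accesskey
      secretKeySecret:
      name: argo-artifacts
      key: secretkey
"""

s3_broken = yaml.safe_load(broken)["artifactRepository"]["s3"]
# PyYAML keeps the last value for duplicate keys, so the secret-key
# entries overwrite the access-key ones and secretKeySecret parses as null.
print(s3_broken["accessKeySecret"])
# {'name': 'argo-artifacts', 'key': 'secretkey', 'secretKeySecret': None}

# The intended layout: secretKeySecret is a sibling of accessKeySecret.
fixed = """
artifactRepository:
  s3:
    bucket: XXXX
    endpoint: s3.eu-west-1.amazonaws.com
    accessKeySecret:
      name: argo-artifacts
      key: accesskey
    secretKeySecret:
      name: argo-artifacts
      key: secretkey
"""

s3_fixed = yaml.safe_load(fixed)["artifactRepository"]["s3"]
print(s3_fixed["accessKeySecret"])  # {'name': 'argo-artifacts', 'key': 'accesskey'}
print(s3_fixed["secretKeySecret"])  # {'name': 'argo-artifacts', 'key': 'secretkey'}
```

Running the broken config through a YAML linter (e.g. yamllint) would also flag the duplicate `name`/`key` entries.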

