What happened:
error: error when retrieving current configuration of:
Resource: "argoproj.io/v1alpha1, Resource=workflows", GroupVersionKind: "argoproj.io/v1alpha1, Kind=Workflow"
Name: "", Namespace: "default"
from server for: "ci-example.yml": resource name may not be empty
What you expected to happen:
https://github.com/argoproj/argo/blob/master/examples/ci.yaml
How to reproduce it (as minimally and precisely as possible):
https://github.com/argoproj/argo/blob/master/examples/ci.yaml
Anything else we need to know?:
Environment:
$ argo version
$ kubectl version -o yaml
Other debugging information (if applicable):
argo get <workflowname>
kubectl logs <failedpodname> -c init
kubectl logs <failedpodname> -c wait
kubectl logs -n argo $(kubectl get pods -l app=workflow-controller -n argo -o name)
Message from the maintainers:
If you are impacted by this bug please add a 👍 reaction to this issue! We often sort issues this way to know what to prioritize.
This works for me:
$ argo submit examples/ci.yaml
Name: ci-example-rkwcz
Namespace: default
ServiceAccount: default
Status: Pending
Created: Thu Apr 16 08:39:58 -0700 (now)
Parameters:
revision: master
Could you please provide more details? For example, which version are you using?
Okay, I was trying kubectl apply -f ci.yaml. Should we only try your examples with the argo CLI?
correct
You can submit using kubectl, but you must add apiVersion and kind fields.
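For reference, a minimal sketch of the fields a Workflow manifest needs when submitted with kubectl rather than the argo CLI (the name and entrypoint values here are illustrative, not taken from the ci.yaml example):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  name: ci-example        # kubectl apply also requires an explicit name
spec:
  entrypoint: main        # illustrative; the real example defines its own templates
```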
Is this still a problem please?
@alexec I'm having the same problem. What do you mean by "you must add apiVersion and kind fields"? They are already there: https://github.com/argoproj/argo/blob/master/examples/ci.yaml#L1
Do you mean somewhere else? Thanks
@MrAmbiG @georgekaz
the error message says:
error: error when retrieving current configuration of:
Resource: "argoproj.io/v1alpha1, Resource=workflows", GroupVersionKind: "argoproj.io/v1alpha1, Kind=Workflow"
Name: "", Namespace: "default"
from server for: "ci-example.yml": resource name may not be empty
That last phrase of the error message is key: resource name may not be empty. Every resource in k8s must have a metadata.name value.
The resource you're trying to apply to your cluster has a generateName value, not a name value: https://github.com/argoproj/argo/blob/master/examples/ci.yaml#L4
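To make the distinction concrete, a minimal sketch of the two metadata variants (not the full example manifest):

```yaml
# Alternative 1: fails with kubectl apply ("resource name may not be empty"),
# but works with kubectl create (the API server appends a random suffix)
metadata:
  generateName: ci-example-
---
# Alternative 2: works with both kubectl apply and kubectl create
metadata:
  name: ci-example
```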
@weisjohn I'm aware of this, but it's in the example, so I expect it must have worked for someone, and generateName is a valid k8s concept: https://kubernetes.io/docs/reference/using-api/api-concepts/#generated-values. It should be creating a name for us. What is apparent, though, is that you cannot use generateName with kubectl apply, but you can use it with kubectl create. I'm trying to apply this manifest via ArgoCD, so maybe the problem is actually with ArgoCD choosing the wrong method to apply this resource kind. ArgoCD does allow you to create Jobs using generateName, as described here: https://argoproj.github.io/argo-cd/user-guide/resource_hooks/. I tested that and it works. In the end I've changed to using name in the workflow manifest to get past the problem for now, but it should work with generateName.
@georgekaz interesting, so if generateName is only applicable if using kubectl create, then ArgoCD must be using kubectl create under the hood?
@weisjohn, when I've used generateName in a Job spec as per the resource hooks, it's worked, so I guess it must be using create. However, when I use it with a Workflow spec it fails, so it must be using apply. There must be some logic ArgoCD uses to decide between the two, but in this case it's not working. Under the hood it must be interacting with the k8s API using create and apply.
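For comparison, a sketch of the Job-as-hook case from the resource hooks doc (the names and image here are illustrative); per the discussion above, hook resources are presumably created rather than applied, which would be why generateName is accepted for them:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  generateName: schema-migrate-        # illustrative; accepted because hooks use create
  annotations:
    argocd.argoproj.io/hook: PostSync
spec:
  template:
    spec:
      containers:
      - name: migrate                  # illustrative
        image: alpine:3
        command: [echo, done]
      restartPolicy: Never
```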
This looks fixed. Please re-open if not.
This appears to still be a problem when using generateName. Changing to using name seems to be a workaround, but shouldn't be necessary. Unless generateName isn't valid in this context, then it should probably be documented somewhere.
error when retrieving current configuration of: Resource: "tekton.dev/v1beta1, Resource=pipelineruns", GroupVersionKind: "tekton.dev/v1beta1, Kind=PipelineRun" Name: "", Namespace: "gitops-workshop" Object: &{map["apiVersion":"tekton.dev/v1beta1" "kind":"PipelineRun" "metadata":map["annotations":map["kubectl.kubernetes.io/last-applied-configuration":""] "generateName":"gitops-workshop-pipeline-run-" <... SNIP ...> from server for: "/dev/shm/216691940": resource name may not be empty