When I create a PipelineRun, the resource is created and stays there; if there are errors, the resource remains so I can debug them more easily.
If I try to execute the pipeline below, relying on the defaults with no special settings, the PipelineRun is created, exists for about two seconds, and then everything is cleaned up.
I get an error in the events that looks like this:
```
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "build-test-run-1599785445507-generate-bui-gktkw": Error response from daemon: OCI runtime create failed: container_linux.go:349: starting container process caused "process_linux.go:449: container init caused \"process_linux.go:438: writing syncT 'resume' caused \\\"write init-p: broken pipe\\\"\"": unknown
```
and:
```
Error: cannot find volume "tekton-internal-scripts" to mount into container "place-scripts"
```
I know there is no issue with the task because TaskRuns work fine.
```
Client Version: version.Info{Major:"1", Minor:"19", GitVersion:"v1.19.0", GitCommit:"e19964183377d0ec2052d1f1fa930c4d7575bd50", GitTreeState:"clean", BuildDate:"2020-08-26T14:30:33Z", GoVersion:"go1.15", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.9-eks-4c6976", GitCommit:"4c6976793196d70bc5cd29d56ce5440c9473648e", GitTreeState:"clean", BuildDate:"2020-07-17T18:46:04Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
```
Client version: 0.12.0
Pipeline version: v0.16.0
Triggers version: v0.8.0
```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: build-test
  namespace: tekton-pipelines
spec:
  tasks:
  - name: generate-build-id
    taskRef:
      name: generate-build-id
      kind: ClusterTask
    params:
    - name: base-version
      value: "test"
```
```yaml
apiVersion: tekton.dev/v1beta1
kind: ClusterTask
metadata:
  name: generate-build-id
  labels:
    app.kubernetes.io/version: "0.1"
  annotations:
    tekton.dev/pipelines.minVersion: "0.12.1"
    tekton.dev/tags: build-tool
    tekton.dev/displayName: "buildid"
spec:
  description: >-
    Given a base version, this task generates a unique build id by appending
    the base-version to the current timestamp.
  params:
  - name: base-version
    description: Base product version
    type: string
    default: "1.0"
  results:
  - name: timestamp
    description: Current timestamp
  - name: build-id
    description: ID of the current build
  steps:
  - name: get-timestamp
    image: bash:latest
    script: |
      #!/usr/bin/env bash
      ts=`date "+%Y%m%d-%H%M%S"`
      echo "Current Timestamp: ${ts}"
      echo ${ts} | tr -d "\n" | tee $(results.timestamp.path)
  - name: get-buildid
    image: bash:latest
    script: |
      #!/usr/bin/env bash
      ts=`cat $(results.timestamp.path)`
      buildId=$(inputs.params.base-version)-${ts}
      echo ${buildId} | tr -d "\n" | tee $(results.build-id.path)
```
```
tkn pipeline start build-test -n tekton-pipelines --use-param-defaults --showlog
PipelineRun started: build-test-run-hq6pz
Waiting for logs to be available...
Error: pipelineruns.tekton.dev "build-test-run-hq6pz" not found
```
Or I just create it in the GUI with similar settings. I have also tried with service accounts, with the same output.
It has something to do with ArgoCD provisioning the Pipeline resource.
If ArgoCD provisions the Pipeline resource, then nothing else can run the pipeline.
If ArgoCD provisions both the Pipeline and the PipelineRun, it works (but it is constantly syncing).
I think it's because ArgoCD adds the label "app.kubernetes.io/instance" to the Pipeline, which seems to cause the PipelineRun to be deleted instantly.
It looks like it's related to ArgoCD's auto-pruning, due to the propagation of labels.
If you pass the ownerReference information, it seems to work.
I.e.:

Get the uid of the Pipeline:

```
kubectl get Pipeline build-test -n tekton-pipelines -o json | jq ".metadata.uid" -r
```

Create a PipelineRun from the CLI:

```
tkn pipeline start build-test -n tekton-pipelines --use-param-defaults --dry-run
```

Modify the PipelineRun YAML to have, inside metadata:

```yaml
ownerReferences:
- apiVersion: tekton.dev/v1beta1
  blockOwnerDeletion: true
  controller: true
  kind: Pipeline
  name: build-test
  uid: my-uid
```

Then:

```
kubectl create -f ...
```

It might be nice for the CLI to have an option to add the ownerReferences itself, though I guess this is not a ticket for this repo?
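The injection step can be scripted with jq. The sketch below uses a stub manifest in place of the real `tkn ... --dry-run` output and `my-uid` in place of the real Pipeline uid, since both come from a live cluster; against a real cluster you would substitute the commands shown in the comments.

```shell
# The real uid would come from:
#   kubectl get Pipeline build-test -n tekton-pipelines -o jsonpath='{.metadata.uid}'
uid="my-uid"

# Stand-in for the dry-run output of:
#   tkn pipeline start build-test -n tekton-pipelines --use-param-defaults --dry-run
cat <<'EOF' > /tmp/run.json
{
  "apiVersion": "tekton.dev/v1beta1",
  "kind": "PipelineRun",
  "metadata": { "generateName": "build-test-run-" },
  "spec": { "pipelineRef": { "name": "build-test" } }
}
EOF

# Inject ownerReferences so the run is owned by the Pipeline rather than
# looking like an unmanaged resource that ArgoCD should prune.
jq --arg uid "$uid" '.metadata.ownerReferences = [{
  apiVersion: "tekton.dev/v1beta1",
  blockOwnerDeletion: true,
  controller: true,
  kind: "Pipeline",
  name: "build-test",
  uid: $uid
}]' /tmp/run.json > /tmp/run-with-owner.json

cat /tmp/run-with-owner.json
# Against a real cluster, finish with: kubectl create -f /tmp/run-with-owner.json
```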
The same issue exists in the GUI.
I have created follow-up issues to fix this on the client side.
I am not sure what the preferred solution to this issue is, but it would be nice for ArgoCD and Tekton to play nicely together, as I believe ArgoCD is a really strong CD tool and Tekton a really strong CI tool.
@mhaddon I had a similar issue, which I believed was connected to Argo pruning the resources it doesn't know about.
My reasoning was that Argo should ignore these "dynamic" resource types, i.e. PipelineRuns and TaskRuns.
I achieved that by configuring the Argo default project to ignore the runs:
```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: default
  namespace: argocd
spec:
  clusterResourceWhitelist:
  - group: '*'
    kind: '*'
  destinations:
  - namespace: '*'
    server: '*'
  namespaceResourceBlacklist:
  - group: tekton.dev
    kind: TaskRun
  - group: tekton.dev
    kind: PipelineRun
  sourceRepos:
  - '*'
```
Then they aren't taken into account, not even when comparing the sync status.
See https://github.com/sdaschner/tekton-argocd-example/tree/main/argocd for a full example.
I ran into the same issue but fixed it by excluding the PipelineRun from ArgoCD entirely. In my opinion, dynamic resources like this should be ignored by ArgoCD completely.
I used the patch below with kustomize to ignore the objects completely.
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
data:
  resource.exclusions: |
    - apiGroups:
      - "*"
      kinds:
      - "PipelineRun"
      - "TaskRun"
      clusters:
      - "*"
```
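For reference, a sketch of how such a patch might be wired up with kustomize; the patch file name `argocd-cm-patch.yaml` is hypothetical (it would contain the ConfigMap patch above), and the install manifest URL is the upstream ArgoCD stable one.

```yaml
# Hypothetical kustomization.yaml; argocd-cm-patch.yaml is assumed to hold
# the argocd-cm ConfigMap patch with resource.exclusions.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: argocd
resources:
- https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
patchesStrategicMerge:
- argocd-cm-patch.yaml
```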