What should be cleaned up or changed:
Unwanted command-line flags are introduced by dependencies (in this case github.com/tektoncd/pipeline).
Ideally, the fix would involve some refactoring in the dependency's repo.
Provide any links for context:
$ git status
On branch master
Your branch is up to date with 'origin/master'.
nothing to commit, working tree clean
$ git log --oneline -1
2ebe6361e (HEAD -> master, origin/master, origin/HEAD) Merge pull request #14192 from petr-muller/proper-bz-resoluitons
$ go mod why github.com/tektoncd/pipeline/pkg/apis/pipeline/v1alpha1/
# github.com/tektoncd/pipeline/pkg/apis/pipeline/v1alpha1
k8s.io/test-infra/prow/apis/prowjobs/v1
github.com/tektoncd/pipeline/pkg/apis/pipeline/v1alpha1
$ GOPROXY=https://proxy.golang.org go mod vendor
$ grep -irn "flag.String(" ./vendor/github.com/tektoncd/pipeline/pkg/apis/pipeline/v1alpha1/
./vendor/github.com/tektoncd/pipeline/pkg/apis/pipeline/v1alpha1/git_resource.go:33: gitImage = flag.String("git-image", "override-with-git:latest",
./vendor/github.com/tektoncd/pipeline/pkg/apis/pipeline/v1alpha1/artifact_pvc.go:30: bashNoopImage = flag.String("bash-noop-image", "override-with-bash-noop:latest", "The container image containing bash shell")
./vendor/github.com/tektoncd/pipeline/pkg/apis/pipeline/v1alpha1/gcs_resource.go:30: gsutilImage = flag.String("gsutil-image", "override-with-gsutil-image:latest", "The container image containing gsutil")
./vendor/github.com/tektoncd/pipeline/pkg/apis/pipeline/v1alpha1/build_gcs_resource.go:30: buildGCSFetcherImage = flag.String("build-gcs-fetcher-image", "gcr.io/cloud-builders/gcs-fetcher:latest",
./vendor/github.com/tektoncd/pipeline/pkg/apis/pipeline/v1alpha1/build_gcs_resource.go:32: buildGCSUploaderImage = flag.String("build-gcs-uploader-image", "gcr.io/cloud-builders/gcs-uploader:latest",
./vendor/github.com/tektoncd/pipeline/pkg/apis/pipeline/v1alpha1/cluster_resource.go:32: kubeconfigWriterImage = flag.String("kubeconfig-writer-image", "override-with-kubeconfig-writer:latest", "The container image containing our kubeconfig writer binary.")
Those flags are registered in package-level var blocks (such as git-image above), so importing the package adds them to the global flag set. Any command depending on it has to use a workaround like flag.NewFlagSet to exclude them.
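For anyone skimming, the pattern in those files looks roughly like this (a simplified sketch, not the actual Tekton source); merely importing such a package, even transitively, registers its flags on the global flag.CommandLine:

```go
// Simplified illustration of the pattern grep found above.
package gitresource

import "flag"

var (
	// Registered as a side effect of import; every binary that
	// (transitively) imports this package now exposes -git-image.
	gitImage = flag.String("git-image", "override-with-git:latest",
		"The container image containing git")
)
```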
$ grep -irn "flag.NewFlagSet" ./prow/cmd/ | grep -v "_test.go"
./prow/cmd/deck/main.go:282: o := gatherOptions(flag.NewFlagSet(os.Args[0], flag.ExitOnError), os.Args[1:]...)
./prow/cmd/mkpj/main.go:197: fs := flag.NewFlagSet(os.Args[0], flag.ExitOnError)
./prow/cmd/hook/main.go:96: o := gatherOptions(flag.NewFlagSet(os.Args[0], flag.ExitOnError), os.Args[1:]...)
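A minimal sketch of that workaround (gatherOptions and the -config-path flag are illustrative here, not the exact Prow code): the command parses only the flags it defines on its own FlagSet instead of the global flag.CommandLine, so the dependency-registered flags never show up in --help:

```go
package main

import (
	"flag"
	"os"
)

type options struct {
	configPath string
}

// gatherOptions registers only the flags this command owns on the supplied
// FlagSet; flags registered by dependencies on flag.CommandLine are ignored.
func gatherOptions(fs *flag.FlagSet, args ...string) options {
	var o options
	fs.StringVar(&o.configPath, "config-path", "", "Path to the config file.")
	fs.Parse(args) // with flag.ExitOnError, parse failures terminate the program
	return o
}

func main() {
	o := gatherOptions(flag.NewFlagSet(os.Args[0], flag.ExitOnError), os.Args[1:]...)
	_ = o.configPath
}
```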
@cjwagner @fejta @Katharine @clarketm
This is another reason we should reconsider the tekton dependencies :|
$ docker run -it gcr.io/k8s-prow/peribolos:v20190911-918053970 --help
Usage of /app/prow/cmd/peribolos/app.binary:
-bash-noop-image string
The container image containing bash shell (default "override-with-bash-noop:latest")
-build-gcs-fetcher-image string
The container image containing our GCS fetcher binary. (default "gcr.io/cloud-builders/gcs-fetcher:latest")
-build-gcs-uploader-image string
The container image containing our GCS uploader binary. (default "gcr.io/cloud-builders/gcs-uploader:latest")
boo!
Yeah that's definitely not great. Tekton is at least one thing that prevents us from doing minor module upgrades (since it transitively depends on k8s.io/kubernetes)
Filed an issue about this over in the pipeline repo.
/cc @cjwagner @bobcatfish @wbrefvem
Thoughts?
This isn't something we can fix on our end short of removing the Prow + Pipelines integration, right? If that's the case, it would be great to see this fixed by the Pipelines project, especially since we're probably not the only ones with these problems.
We can (and IMO should) just copy-paste the types we need from tekton into the third_party/ folder and snip our dependencies. It's causing us so many headaches importing them right now.
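Roughly what that could look like (package path and fields below are purely illustrative, not a concrete proposal): a trimmed local copy of just the struct definitions we serialize, with no init-time flag registration attached:

```go
// e.g. under third_party/tektoncd/pipeline/... ; illustrative only.
package pipeline

// PipelineRunSpec is a trimmed local copy of the fields Prow actually
// serializes; no package-level flag.String calls come along with it.
type PipelineRunSpec struct {
	ServiceAccountName string `json:"serviceAccountName,omitempty"`
	// ... only the fields Prow needs ...
}
```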
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.