Test-infra: mkpj outputs noisy warning(s)

Created on 13 Mar 2019 · 10 comments · Source: kubernetes/test-infra

What happened: mkpj prints a warning when it is used without a GitHub token

go run ./prow/cmd/mkpj/main.go \
    --job-config-path=./config/jobs/ \
    --config-path=./prow/config.yaml \
    --job=ci-kubernetes-e2e-gce-new-master-upgrade-cluster 

WARN[0000] empty -github-token-path, will use anonymous github client 
apiVersion: prow.k8s.io/v1
kind: ProwJob
metadata:
  annotations:
    prow.k8s.io/job: ci-kubernetes-e2e-gce-new-master-upgrade-cluster
  creationTimestamp: null
  labels:
    created-by-prow: "true"
    preset-k8s-ssh: "true"
    preset-service-account: "true"
    prow.k8s.io/job: ci-kubernetes-e2e-gce-new-master-upgrade-cluster
    prow.k8s.io/type: periodic
  name: 8a7ac664-45b5-11e9-9b79-a08cfdecc127
spec:
  agent: kubernetes
  cluster: default
  job: ci-kubernetes-e2e-gce-new-master-upgrade-cluster
  namespace: test-pods
  pod_spec:
    containers:
    - args:
      - --timeout=920
      - --bare
      - --scenario=kubernetes_e2e
      - --
      - --check-leaked-resources
      - --check-version-skew=false
      - --env=STORAGE_MEDIA_TYPE=application/vnd.kubernetes.protobuf
      - --env=TEST_ETCD_VERSION=3.0.17
      - --env=KUBE_ENABLE_CLUSTER_MONITORING=standalone
      - --extract=ci/latest
      - --extract=ci/k8s-stable1
      - --gcp-node-image=gci
      - --gcp-zone=us-west1-b
      - --provider=gce
      - --test_args=--ginkgo.focus=\[Slow\]|\[Serial\]|\[Disruptive\] --ginkgo.skip=\[Flaky\]|\[Feature:.+\]
        --kubectl-path=../../../../kubernetes_skew/cluster/kubectl.sh --minStartupPods=8
      - --timeout=900m
      - --upgrade_args=--ginkgo.focus=\[Feature:ClusterUpgrade\] --upgrade-target=ci/latest
      env:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /etc/service-account/service-account.json
      - name: E2E_GOOGLE_APPLICATION_CREDENTIALS
        value: /etc/service-account/service-account.json
      - name: USER
        value: prow
      - name: JENKINS_GCE_SSH_PRIVATE_KEY_FILE
        value: /etc/ssh-key-secret/ssh-private
      - name: JENKINS_GCE_SSH_PUBLIC_KEY_FILE
        value: /etc/ssh-key-secret/ssh-public
      image: gcr.io/k8s-testimages/kubekins-e2e:v20190301-76bc03340-master
      name: ""
      resources: {}
      volumeMounts:
      - mountPath: /etc/service-account
        name: service
        readOnly: true
      - mountPath: /etc/ssh-key-secret
        name: ssh
        readOnly: true
    volumes:
    - name: service
      secret:
        secretName: service-account
    - name: ssh
      secret:
        defaultMode: 256
        secretName: ssh-key-secret
  type: periodic
status:
  startTime: "2019-03-13T17:29:22Z"
  state: triggered

What you expected to happen: mkpj should print only the ProwJob, so the output can be piped cleanly to another tool.
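For what it's worth, logrus-style warnings like the one above normally go to stderr, so the YAML on stdout can already be piped cleanly by separating the streams. A minimal sketch, where mock_mkpj is a stand-in for the real binary:

```shell
# Stand-in for mkpj: warning on stderr, ProwJob YAML on stdout.
mock_mkpj() {
  echo 'WARN[0000] empty -github-token-path, will use anonymous github client' >&2
  echo 'apiVersion: prow.k8s.io/v1'
}

# Silencing stderr leaves only the YAML, suitable for piping.
mock_mkpj 2>/dev/null
```

The same `2>/dev/null` redirect applied to the real mkpj invocation would hide the warning without touching the YAML.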

How to reproduce it (as minimally and precisely as possible): use mkpj

Please provide links to example occurrences, if any:

Anything else we need to know?:

/area prow

area/prow kind/bug lifecycle/rotten

All 10 comments

Passing /dev/null as the --github-token-path is a workaround.
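Concretely, the workaround might look like this (the flag name is taken from the warning message; the guard makes the sketch a no-op outside a kubernetes/test-infra checkout):

```shell
# Point the token flag at /dev/null so it is non-empty and the warning is
# avoided; the rest of the invocation mirrors the one in the report.
if [ -d ./prow/cmd/mkpj ]; then
  go run ./prow/cmd/mkpj/main.go \
      --github-token-path=/dev/null \
      --job-config-path=./config/jobs/ \
      --config-path=./prow/config.yaml \
      --job=ci-kubernetes-e2e-gce-new-master-upgrade-cluster
fi
```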

I thought that the logging went to stderr?

Even if it does, why are we creating a GitHub client at all? I wouldn't expect kicking off a CI job to involve API calls.

It is very useful for filling out the refs automatically instead of by hand. I personally prefer using git ls-remote, but the author and other reviewers wanted to do it this way ...
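A hedged sketch of the git ls-remote alternative, using a throwaway local repository in place of a real remote (the repo path and commit are illustrative):

```shell
# git ls-remote resolves refs without needing any GitHub API client.
tmp=$(mktemp -d)
git init -q "$tmp/repo"
git -C "$tmp/repo" -c user.email=ci@example.com -c user.name=ci \
    commit -q --allow-empty -m init

# Prints "<sha>\tHEAD" -- the kind of base SHA a ProwJob's refs would need.
git ls-remote "$tmp/repo" HEAD
```

Against a real remote, the same command takes a URL like https://github.com/kubernetes/test-infra.git and a ref name instead of the local path.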

Makes sense. Will try to look soon and see if we can at least defer creating the client until it is actually needed.

Or perhaps add a log-level flag to mkpj.

All of those sound like good options to me

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
