Test-infra: start deck failed

Created on 18 Jul 2019 · 12 Comments · Source: kubernetes/test-infra

{"component":"deck","file":"prow/kube/config.go:143","func":"k8s.io/test-infra/prow/kube.LoadClusterConfigs","level":"info","msg":"Loading cluster contexts...","time":"2019-07-18T15:26:42Z"}
{"component":"deck","error":"open /etc/cookie/secret: no such file or directory","file":"prow/cmd/deck/main.go:316","func":"main.main","level":"fatal","msg":"Could not read cookie secret file","time":"2019-07-18T15:26:42Z"}

When I run kubectl apply -f https://github.com/kubernetes/test-infra/blob/master/prow/cluster/starter.yaml?raw=, the deck pod status is CrashLoopBackOff.
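
One workaround, sketched below, is to create the secret that deck is trying to read before the pod restarts. This assumes the deck Deployment in starter.yaml mounts a Secret named cookie with a key named secret at /etc/cookie and that the pods carry an app=deck label; verify the volumes/volumeMounts and labels in the manifest before running it.

# Sketch only: create the cookie secret deck expects at /etc/cookie/secret.
# Secret name "cookie", key "secret", and label app=deck are assumptions;
# check them against the deck Deployment in starter.yaml first.
openssl rand -base64 64 | tr -d '\n' > cookie-secret.txt
kubectl create secret generic cookie --from-file=secret=cookie-secret.txt
kubectl delete pod -l app=deck   # let the Deployment recreate the pod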

area/prow kind/bug lifecycle/rotten

All 12 comments

I have the same problem when setting up Prow with tackle. Running bazel run //prow/cmd/tackle gives me logs similar to #13492:

...
Checking github credentials...
Store your GitHub token in a file e.g. echo $TOKEN > /path/to/github/token
Input /path/to/github/token to upload into cluster: go/src/github.com/test-infra/secret
INFO[0029] User()                                        client=github
Prow will act as yufan-bot on github
Applying github token into oauth-token secret...secret/oauth-token created
Ensuring hmac secret exists at hmac-token...INFO[0030] Creating new hmac-token secret with random data... 
exists
Looking for prow's hook ingress URL... FATA[0031] Could not get ingresses                       error="the server could not find the requested resource"
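
The "could not find the requested resource" error when listing ingresses often means the ingress API group/version that tackle queries is not served by the cluster (or is hidden by RBAC). A quick check, sketched below and not what tackle itself runs, is to ask the API server which ingress resources it actually exposes:

# Hedged check: list the ingress resource groups/versions this cluster serves,
# then try listing ingresses directly to see the raw error.
kubectl api-resources | grep -i ingress
kubectl get ingress --all-namespaces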

I thought I hadn't given tackle enough time to wait for the ingress to come up, so I retried and waited for the ingress to be created, but found the following error when inspecting the logs with kubectl logs service/deck:

time="2019-07-19T00:59:04Z" level=info msg="Spyglass registered viewer buildlog with title Build Log."
time="2019-07-19T00:59:04Z" level=info msg="Spyglass registered viewer coverage with title Coverage."
time="2019-07-19T00:59:04Z" level=info msg="Spyglass registered viewer junit with title JUnit."
time="2019-07-19T00:59:04Z" level=info msg="Spyglass registered viewer metadata with title Metadata."
{"component":"deck","file":"prow/kube/config.go:143","func":"k8s.io/test-infra/prow/kube.LoadClusterConfigs","level":"info","msg":"Loading cluster contexts...","time":"2019-07-19T00:59:04Z"}
{"component":"deck","error":"open /etc/cookie/secret: no such file or directory","file":"prow/cmd/deck/main.go:316","func":"main.main","level":"fatal","msg":"Could not read cookie secret file","time":"2019-07-19T00:59:04Z"}

/cc @Katharine

This should've been fixed by #13521, but tackle probably needs to be updated appropriately to avoid relying on deprecated configuration.

/area prow
(may want to consider a tackle label)

/assign @Katharine @mirandachrist

Can we start setting this value explicitly so we stop having this error?

{
 insertId:  "45vvifg5o9py2k"  
 jsonPayload: {
  component:  "deck"   
  file:  "prow/cmd/deck/main.go:123"   
  func:  "main.(*options).Validate"   
  level:  "error"   
  msg:  "You haven't set --cookie-secret-file, but you're assuming it is set to '/etc/cookie/secret'. Add --cookie-secret-file=/etc/cookie/secret to your deck instance's arguments. Your configuration will stop working at the end of October 2019."   
 }
 labels: {…
 logName:  "projects/k8s-prow/logs/deck"  
 receiveTimestamp:  "2019-08-08T20:22:56.217449554Z"  
 resource: {…
 severity:  "ERROR"  
 timestamp:  "2019-08-08T20:22:16Z"  
}
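
For reference, here is a minimal sketch of setting that flag explicitly on a running deck Deployment with a JSON patch. It assumes deck is the first container in the pod spec and already defines an args list; editing the checked-in deployment YAML and re-applying it is the more durable fix.

# Sketch only: append --cookie-secret-file to the deck container's args.
# Container index 0 and an existing args list are assumptions; adjust for
# your manifest, and prefer updating the source YAML over live patching.
kubectl patch deployment deck --type=json -p='[
  {"op": "add",
   "path": "/spec/template/spec/containers/0/args/-",
   "value": "--cookie-secret-file=/etc/cookie/secret"}
]'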

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

This is fixed, right, Katharine?

It may actually still be broken in tackle; I haven't checked.

It's fixed in everything in production.

/remove-lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
