Kops: Custom secrets from S3 are not populated to `known_tokens.csv` with Kops 1.9

Created on 2 May 2018 · 14 comments · Source: kubernetes/kops

  1. What kops version are you running? The command kops version will display this information.

Version 1.9.0 (git-cccd71e67)

  2. What Kubernetes version are you running? kubectl version will print the version if a cluster is running or provide the Kubernetes version specified as a kops flag.

Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.10", GitCommit:"044cd262c40234014f01b40ed7b9d09adbafe9b1", GitTreeState:"clean", BuildDate:"2018-03-19T17:44:09Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
  3. What cloud provider are you using?

AWS

  4. What commands did you run? What is the simplest way to reproduce this issue?

Change the kops binary from 1.8.1 to 1.9, then update the cluster and do a rolling update.

In our S3 bucket we have custom secrets that were populated to /srv/kubernetes/known_tokens.csv when the cluster was updated with kops < 1.9.
After the rolling update, /srv/kubernetes/known_tokens.csv was missing our secrets and we had to roll the cluster out again with kops 1.8.1.
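
For context (not part of the original report): known_tokens.csv is a Kubernetes static token file, one entry per line in the form token,user,uid followed by an optional quoted group list. An entry for a custom secret might look like the line below, where the token, user, and group values are placeholders:

    0123456789abcdef,custom-deployer,custom-deployer,"system:masters"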

  5. What happened after the commands executed?

The commands themselves are fine, but the logic for populating kops secrets has changed.
We found the merged PR that is causing us trouble:
https://github.com/kubernetes/kops/pull/3835/files#diff-a7e5ed2b01f8673379c76c3d0b880c8cR270

  6. What did you expect to happen?

I expected that upgrading kops would not break Kubernetes functionality and that secrets would be populated as in version 1.8.1.

  7. Anything else we need to know?

This should be in the upgrade notes because it is a breaking change for people using secrets from S3.


If there is a way to populate secrets with 1.9 we can upgrade kops, but until then it is blocking for me.

lifecycle/rotten

Most helpful comment

any news? did somebody find a workaround?

All 14 comments

@justinsb This is the issue with custom auth tokens we've encountered in kops 1.9, I was talking to you about during lunch at KubeCon.

Hitting the same issue here, looking into a possible systemd unit solution to make it happen; will report back if I make progress.
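
To make that workaround idea concrete (this sketch is not from the thread; the file name merge_tokens.go, the token value, and the user/group names are all assumptions): a systemd oneshot unit or timer on the masters could run a small helper that re-appends any missing custom entries to /srv/kubernetes/known_tokens.csv, roughly like this Go program:

    // merge_tokens.go: hypothetical helper for a systemd-unit workaround.
    // It appends custom static-token entries to known_tokens.csv when they
    // are not already present. The token, user, and group are placeholders.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const tokensFile = "/srv/kubernetes/known_tokens.csv"

        // Entries in the Kubernetes static token file format:
        // token,user,uid,"group1,group2"
        custom := []string{
            `REPLACE_WITH_TOKEN,custom-deployer,custom-deployer,"system:masters"`,
        }

        data, err := os.ReadFile(tokensFile)
        if err != nil {
            fmt.Fprintln(os.Stderr, "read:", err)
            os.Exit(1)
        }
        existing := string(data)

        f, err := os.OpenFile(tokensFile, os.O_APPEND|os.O_WRONLY, 0600)
        if err != nil {
            fmt.Fprintln(os.Stderr, "open:", err)
            os.Exit(1)
        }
        defer f.Close()

        // Guard against a file that does not end with a newline.
        if len(existing) > 0 && !strings.HasSuffix(existing, "\n") {
            fmt.Fprintln(f)
        }

        // Append only the entries that are not already in the file.
        for _, line := range custom {
            if !strings.Contains(existing, line) {
                fmt.Fprintln(f, line)
            }
        }
    }

Note that the API server reads the static token file only at startup, so kube-apiserver would still need to be restarted after the file changes, and a later kops update can rewrite the file again, which is why this would only be a stopgap.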

any news? did somebody find a workaround?

is it working in kops 1.10?

Any updates to this? We have hit the exact same issue

No updates, just workarounds.

The workaround being?

@followsound
Either use kops 1.8 or create user certificates.
I don't find either of them satisfying.

Haha, yep I've gone with user certs

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

so no intention to bring secrets back?

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.
