I'm trying to add Docker Hub credentials by following the docs at https://github.com/kubernetes/kops/blob/master/docs/security.md#docker-configuration, but I can't get the update to apply.
$ kops create secret --name kube1.k8s.my.tld dockerconfig -f ~/.docker/config.json --force \
  && echo $?
0
$ kops get secret dockerconfig -oplaintext
Using cluster from kubectl context: kube1.k8s.my.tld
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "XXXXXXX"
    }
  }
}
So far so good, but:
$ kops rolling-update cluster --name kube1.k8s.my.tld --yes
NAME               STATUS  NEEDUPDATE  READY  MIN  MAX  NODES
master-eu-west-1a  Ready   0           1      1    1    1
master-eu-west-1b  Ready   0           1      1    1    1
master-eu-west-1c  Ready   0           1      1    1    1
nodes              Ready   0           3      3    3    3
No rolling-update required.
It seems to work if I add --force to the rolling-update.
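For reference, this is the workaround; --force makes kops roll every node even when it detects no changes:
$ kops rolling-update cluster --name kube1.k8s.my.tld --force --yes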
Kops version: 1.9.0
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T12:22:21Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
@justinsb is this a good first issue?
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
I am facing the same issue; only --force seems to work.
But forcing the update reboots the whole cluster, which takes a very long time, so it's not a good workaround.
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
This bug still exists in master.
/reopen
/remove-lifecycle rotten
@tsuna: You can't reopen an issue/PR unless you authored it or you are a collaborator.
/open
I encountered this bug as well.
/reopen
@olemarkus: Reopened this issue.
We would need to put a hash of (or other data derived from) the secret into the bootstrap script, possibly by inserting it into the NodeUpConfig.
Secrets currently don't store any metadata, so it would have to be a hash. That unfortunately leaks information about the secret and would allow brute-force attacks. Better would be to add the modification time or a randomly-generated ID to the fi.Secret struct and use that.
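A minimal sketch of that idea, assuming a simplified stand-in for the real fi.Secret (the actual struct and the plumbing into NodeUpConfig will differ):

package fi

import (
	"crypto/rand"
	"encoding/hex"
)

// Secret is a simplified stand-in for fi.Secret; the ID field is the
// hypothetical addition discussed above.
type Secret struct {
	Data []byte

	// ID is rotated on every write. Unlike a hash of Data, it leaks no
	// information about the secret, so it cannot be brute-forced offline.
	ID string
}

// ReplaceData stores new secret material and rotates the ID, so anything
// that embeds the ID (such as the node bootstrap config) shows a diff
// that rolling-update can detect.
func (s *Secret) ReplaceData(data []byte) error {
	buf := make([]byte, 16)
	if _, err := rand.Read(buf); err != nil {
		return err
	}
	s.Data = data
	s.ID = hex.EncodeToString(buf)
	return nil
}

Because the ID is random rather than derived from the secret, exposing it in the bootstrap script reveals nothing about the secret material, yet any rewrite of the secret still changes it.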