What kops version are you running? The command kops version will display this information.
1.8.0-beta.2
What cloud provider are you using?
AWS
What commands did you run? What is the simplest way to reproduce this issue?
Manifest excerpt (./manifests/kops/${K8S_NAME}.yaml):

spec:
  sshKeyName: mykey

kops create -f ./manifests/kops/${K8S_NAME}.yaml
kops update cluster ${K8S_NAME} --yes

The update fails with:

SSH public key must be specified when running with AWS (create with `kops create secret --name clustername sshpublickey admin -i ~/.ssh/id_rsa.pub`)
What did you expect to happen?
Existing AWS keypair is used for the public key, and the kops secret is automatically created
Anything else we need to know?
I'm not sure of the intention of #3215. Currently it works as advertised, but it's unclear that the user still needs to supply the public key locally. Perhaps a note in the docs would suffice, saying there's no reason to set sshKeyName in the spec unless the specific use case described in https://github.com/kubernetes/kops/issues/2309#issuecomment-293244610 applies.
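For now my workaround, matching the error message's suggestion, is to also register a local public key as the kops sshpublickey secret even though sshKeyName is set in the spec. A minimal sketch (the cluster name, manifest path and key path are placeholders for my own values):

# Create the cluster object from the manifest that already sets spec.sshKeyName
kops create -f ./manifests/kops/${K8S_NAME}.yaml

# Workaround: also create the sshpublickey secret from a local key,
# even though the EC2 key pair named by sshKeyName already exists in the account
kops create secret --name ${K8S_NAME} sshpublickey admin -i ~/.ssh/id_rsa.pub

# With the secret in place, the update no longer fails
kops update cluster ${K8S_NAME} --yes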
Hi @afirth, thanks for raising an issue! I believe this is related to issue #3882.
@chrislovecnm what is the current behaviour? Is the secret now created in kops automatically on cluster update (or does it use sshKeyName if specified)? Can the kops sshpublickey secret be removed altogether?
Hi @KashifSaadat, thanks for looking. I opened this based on a Slack thread with @justinsb; the behaviour I was seeing was not what Chris describes, so I can only assume I've got a mistake in my spec file.
Excerpt:
spec:
  api:
    elb: {}
  [...]
  sshAccess:
  - 0.0.0.0/0
  sshKeyName: sillyname
where sillyname is the name of an existing EC2 key pair in the account.
Someone suggested I try the ARN too, but that didn't seem to work either (same error): sshKeyName: arn:aws:ec2:us-east-1:123456789012:key-pair/sillyname
I am not certain of the behavior, since there have been several code changes. It used to be:

create -f mycluster.yaml
create secret
update cluster

You had to do those three in order. We need to test off of master, as we just merged another PR into master. And document it ;)
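Spelled out as full commands, that order would be roughly the following (mycluster.example.com and the key path are just example values):

# 1. Create the cluster object in the kops state store from a manifest
kops create -f mycluster.yaml

# 2. Register the SSH public key as the kops sshpublickey secret
kops create secret --name mycluster.example.com sshpublickey admin -i ~/.ssh/id_rsa.pub

# 3. Apply the changes to AWS
kops update cluster mycluster.example.com --yes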
I was surprised to see that I still needed to create an sshpublickey secret when specifying the sshKeyName in the cluster yaml. Glad to see this is slated for 1.8.1.
I have the same problem. I'm also getting this error when using Terraform to create the cluster and specifying an existing key with sshKeyName:

status code: 400, request id: 123456789

I would expect Terraform to look up the existing key, or the key ID to be required in the kops spec.
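For reference, my flow looks roughly like this (cluster name and output path are placeholders); creating the secret first is the only way I've found past the error:

# Create the cluster spec, then work around the missing sshpublickey secret
kops create -f mycluster.yaml
kops create secret --name mycluster.example.com sshpublickey admin -i ~/.ssh/id_rsa.pub

# Generate Terraform config instead of applying directly with kops
kops update cluster mycluster.example.com --target=terraform --out=out/terraform

# Apply the generated config with Terraform
cd out/terraform && terraform init && terraform apply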
FYI this will go into 1.9
Hi, I have the same issue in 1.9; it still exists.
Confirmed. 1.9.0 still has this issue. The AWS key is specified in the cluster YAML, but I still get "SSH public key must be specified" when issuing kops update:

SSH public key must be specified when running with AWS (create with kops create secret --name mykeyname sshpublickey admin -i ~/.ssh/id_rsa.pub)

kops version
Version 1.9.0 (git-cccd71e67)

spec:
  sshKeyName: mykeyname
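In case it helps with triage, this is roughly how I'm checking for the secret (cluster name is a placeholder):

# List kops secrets for the cluster; with only sshKeyName set there is no sshpublickey entry
kops get secrets --name mycluster.example.com

# After registering a local key, the secret appears and kops update succeeds
kops create secret --name mycluster.example.com sshpublickey admin -i ~/.ssh/id_rsa.pub
kops update cluster mycluster.example.com --yes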
Confirmed. 1.9.0 still has this issue.
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
In response to this:
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.