kops create cluster --ssh-public-key flag seems to work only with ~/.ssh/id_rsa.pub

Created on 4 May 2017 · 13 comments · Source: kubernetes/kops

I've been trying to specify an existing .pem file for AWS access when creating a cluster. However, when I use anything other than ~/.ssh/id_rsa.pub as the key file, I get an error:

MBP2:~ blee$ cp pol.pub ~/.ssh/
MBP2:~ blee$ kops create cluster --zones=us-east-1c --ssh-public-key=~/.ssh/pol.pub --master-size=t2.micro --node-size=t2.micro us1.dev1.abcd.com --yes
I0503 19:09:27.008744 20407 create_cluster.go:493] Inferred --cloud=aws from zone "us-east-1c"
I0503 19:09:27.008838 20407 create_cluster.go:630] Using SSH public key: /Users/blee/.ssh/pol.pub
I0503 19:09:27.418360 20407 subnets.go:183] Assigned CIDR 172.20.32.0/19 to subnet us-east-1c

error addding SSH public key: error parsing public key: ssh: no key found

Same command, using ~/.ssh/id_rsa.pub, works fine:

MBP2:~ blee$ kops create cluster --zones=us-east-1c --ssh-public-key=~/.ssh/id_rsa.pub --master-size=t2.micro --node-size=t2.micro us1.dev1.abc.com --yes
I0503 19:07:23.232168 20393 create_cluster.go:493] Inferred --cloud=aws from zone "us-east-1c"
I0503 19:07:23.232457 20393 create_cluster.go:630] Using SSH public key: /Users/blee/.ssh/id_rsa.pub
I0503 19:07:23.586156 20393 subnets.go:183] Assigned CIDR 172.20.32.0/19 to subnet us-east-1c


A new kubernetes version is available: 1.5.4
Upgrading is recommended (try kops upgrade cluster)

More information: https://github.com/kubernetes/kops/blob/master/permalinks/upgrade_k8s.md#1.5.4


error doing DNS lookup for NS records for "dev1.abcd.com": lookup dev1.abcd.com on 10.0.0.1:53: no such host

Same result when using kops create secret:

MBP2:~ blee$ kops create secret --name us1.dev1.abcd.com sshpublickey admin -i pol.pub

error adding SSH public key: error parsing public key: ssh: no key found

MBP2:~ blee$ kops create secret --name us1.dev1.abcd.com sshpublickey admin -i ~/.ssh/id_rsa.pub
MBP2:~ blee$
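For anyone else hitting this: the "ssh: no key found" message appears to come from the SSH public key parser, which expects the one-line OpenSSH authorized_keys format (ssh-rsa AAAA...). A quick way to check what format a .pub file is actually in (pol.pub below just stands for whatever file is passed to --ssh-public-key):

head -n 1 ~/.ssh/pol.pub         # OpenSSH format starts with "ssh-rsa AAAA..."
ssh-keygen -l -f ~/.ssh/pol.pub  # prints a fingerprint only if the file is a readable key

A PEM/PKCS#8 key instead starts with "-----BEGIN PUBLIC KEY-----", which would explain exactly this parsing error.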

lifecycle/rotten

Most helpful comment

I was having this issue and then converted my public key to the OpenSSH ssh-rsa format, and it worked as expected.

Converted via: ssh-keygen -f my-key.pub -i -mPKCS8 > my-key2.pub

All 13 comments

I am facing the same issue.

NKS-DEV:kube NKS$ kops update cluster --ssh-public-key ~/.ssh/id_rsa.pub kopsclusters4.dev.company.net
--ssh-public-key on update is deprecated - please use kops create secret --name kopsclusters4.dev.company.net sshpublickey admin -i ~/.ssh/id_rsa.pub instead

error addding SSH public key: error parsing public key: ssh: no key found
NKS-DEV:kube NKS$ kops create secret --name kopsclusters4.dev.company.net sshpublickey admin -i ~/aws/keys/kops.pub -v 10
I0504 17:05:42.081759 31875 s3context.go:157] Found bucket "kopsclusters.dev.company.net" in region "us-east-1"
I0504 17:05:42.081799 31875 s3fs.go:173] Reading file "s3://kopsclusters.dev.company.net/kopsclusters4.dev.company.net/config"

error adding SSH public key: error parsing public key: ssh: no key found
NKS-DEV:kube NKS$

Same issue for me

joeg-mac:~ jgard$ kops create cluster --zones eu-west-1a $NAME --ssh-public-key=~/.ssh/shared_rsa.pub
I0602 11:31:47.687927    5564 create_cluster.go:331] Inferred --cloud=aws from zone "eu-west-1a"
I0602 11:31:47.688038    5564 cluster.go:391] Assigned CIDR 172.20.32.0/19 to zone eu-west-1a
I0602 11:31:48.914050    5564 populate_cluster_spec.go:196] Defaulting DNS zone to: ZMJYBLAHG96E1HD
W0602 11:31:48.924273    5564 channel.go:84] Multiple matching images in channel for cloudprovider "aws"
W0602 11:31:48.924307    5564 channel.go:84] Multiple matching images in channel for cloudprovider "aws"

error addding SSH public key: error parsing public key: ssh: no key found

And when I add the secret...

joeg-mac:~ jgard$ kops create secret --name dev.k8s.emea.company.run sshpublickey admin -i ~/.ssh/shared_rsa.pub

error adding SSH public key: error parsing public key: ssh: no key found

Scratch that, I had a badly formed key. Working now.

I am facing the same issue.
➜ orchestrate-k8s-pub git:(master) ✗ kops create cluster ${NAME} \
--cloud aws \
--master-zones "us-west-1a" \
--master-count 1 \
--zones $ZONES \
--dns-zone $(terraform output public_zone_id) \
--topology public \
--ssh-public-key ./ssh_keys/k8s_keys.pub \
--admin-access "XXX.XXX.XXX.XXX/XX" \
--target=terraform \
--out=. \
--encrypt-etcd-storage
I0705 16:58:49.362501 8572 create_cluster.go:841] Using SSH public key: ./ssh_keys/k8s_keys.pub
I0705 16:58:52.193341 8572 subnets.go:183] Assigned CIDR 172.20.32.0/19 to subnet us-west-1a
I0705 16:58:52.193360 8572 subnets.go:183] Assigned CIDR 172.20.64.0/19 to subnet us-west-1b
I0705 16:59:01.829802 8572 executor.go:91] Tasks: 0 done / 65 total; 34 can run
I0705 16:59:01.831636 8572 dnszone.go:236] Check for existing route53 zone to re-use with name ""
I0705 16:59:02.129450 8572 dnszone.go:243] Existing zone "mywebsite.mydomain.com." found; will configure TF to reuse
I0705 16:59:03.670258 8572 vfs_castore.go:422] Issuing new certificate: "kops"
I0705 16:59:03.760232 8572 vfs_castore.go:422] Issuing new certificate: "kube-controller-manager"
I0705 16:59:03.862197 8572 vfs_castore.go:422] Issuing new certificate: "kube-scheduler"
I0705 16:59:03.877366 8572 vfs_castore.go:422] Issuing new certificate: "kubelet"
I0705 16:59:03.970995 8572 vfs_castore.go:422] Issuing new certificate: "kube-proxy"
I0705 16:59:04.049860 8572 vfs_castore.go:422] Issuing new certificate: "kubecfg"
I0705 16:59:04.131456 8572 vfs_castore.go:422] Issuing new certificate: "master"
I0705 16:59:07.649433 8572 executor.go:91] Tasks: 34 done / 65 total; 13 can run
I0705 16:59:07.651173 8572 executor.go:91] Tasks: 47 done / 65 total; 16 can run
I0705 16:59:10.314403 8572 executor.go:91] Tasks: 63 done / 65 total; 2 can run
I0705 16:59:10.314752 8572 executor.go:91] Tasks: 65 done / 65 total; 0 can run
I0705 16:59:10.322944 8572 target.go:269] Terraform output is in .
I0705 16:59:10.909472 8572 update_cluster.go:229] Exporting kubecfg for cluster
Kops has set your kubectl context to mywebsite.mydomain.com

Terraform output has been placed into .
Run these commands to apply the configuration:
cd .
terraform plan
terraform apply

Suggestions:

  • validate cluster: kops validate cluster
  • list nodes: kubectl get nodes --show-labels
  • ssh to the master: ssh -i ~/.ssh/id_rsa [email protected]
    The admin user is specific to Debian. If not using Debian please use the appropriate user based on your OS.
  • read about installing addons: https://github.com/kubernetes/kops/blob/master/docs/addons.md
Why do I still need to refer to id_rsa to connect over SSH? The fingerprint of the key pair is also different from the key I specified:

ssh to the master: ssh -i ~/.ssh/id_rsa [email protected]

To use my own key I have to run this delete/create/update cycle every single time:

kops delete secret --name mywebsite.mydomain.com sshpublickey admin
kops create secret --name mywebsite.mydomain.com sshpublickey admin -i ssh_keys/k8s_keys.pub
kops update cluster --yes
kops rolling-update cluster --name mywebsite.mydomain.com --yes

Are there any plans to address this? Or any workaround?
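(The ~/.ssh/id_rsa path in the "ssh to the master" suggestion looks like a generic hint rather than the key that was actually configured. Assuming the private half of ./ssh_keys/k8s_keys.pub sits at ./ssh_keys/k8s_keys, and that the API record follows the usual api.<cluster-name> pattern, the connection would presumably be:

ssh -i ./ssh_keys/k8s_keys admin@api.mywebsite.mydomain.com   # admin is the default user on the Debian images

Both the key path and the api. hostname here are assumptions; substitute whatever your setup actually uses.)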

Same issue here. Also, I got an error: error reading SSH key file "/.ssh/id_rsa.pub": open /.ssh/id_rsa.pub: no such file or directory. Note how the path doesn't start with my home directory. I did not specify the --ssh-public-key flag.

I was having this issue and then converted my public key to the OpenSSH ssh-rsa format, and it worked as expected.

Converted via: ssh-keygen -f my-key.pub -i -mPKCS8 > my-key2.pub
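For reference, a minimal sketch of that conversion, assuming my-key.pub is an RSA public key in PEM/PKCS#8 form (all filenames are placeholders):

ssh-keygen -i -m PKCS8 -f my-key.pub > my-key2.pub   # import PKCS#8 into the one-line OpenSSH format
head -n 1 my-key2.pub                                # should now start with "ssh-rsa AAAA..."
kops create secret --name <cluster-name> sshpublickey admin -i my-key2.pub

If all you have is the private key (for example an AWS-generated .pem), the public half can be re-derived directly in the expected format with ssh-keygen -y -f my-key.pem > my-key.pub.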

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

/reopen

@miguelbernadi: you can't re-open an issue/PR unless you authored it or you are assigned to it.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

It seems the .pem format does not work. Are there plans to support it?

Same error here; any help appreciated.

SSH public key must be specified when running with AWS (create with kops create secret --name advith.k8s.local sshpublickey admin -i ~/.ssh/id_rsa.pub)
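If there is no key pair yet, a minimal sketch to get past that message (the ~/.ssh/kops_rsa path is arbitrary; any OpenSSH-format public key works):

ssh-keygen -t rsa -b 4096 -f ~/.ssh/kops_rsa -N ""                                    # creates kops_rsa and kops_rsa.pub
kops create secret --name advith.k8s.local sshpublickey admin -i ~/.ssh/kops_rsa.pub
kops update cluster --name advith.k8s.local --yes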
