Kops: sshKeyName throws Secret Error

Created on 23 Oct 2017 · 20 Comments · Source: kubernetes/kops

Thanks for submitting an issue!

-------------BUG REPORT --------------------

  1. Fill in as much of the template below as you can.
apiVersion: kops/v1alpha2
kind: Cluster
metadata:
    name: johnd-kops.k8s.local
    creationTimestamp: 2017-10-23T21:07:46Z
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    RBAC: {}
  channel: stable
  cloudLabels:
    Team: conductor-testing
  cloudProvider: aws
  configBase: s3://conductor-testing-kops-state/johnd-kops.k8s.local
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-us-east-1d
      name: d
    name: main
  - etcdMembers:
    - instanceGroup: master-us-east-1d
      name: d
    name: events
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: 1.7.8
  masterInternalName: blue-johnd-kops.k8s.local
  masterPublicName: johnd-kops.k8s.local
  networkCIDR: 172.31.0.0/16
  networkID: omitted
  networking:
    calico: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshKeyName: john-testing-key
  sshAccess:
  - omitted
  subnets:
  - cidr: 172.31.100.0/24
    name: us-east-1d
    type: Private
    zone: us-east-1d
  - cidr: 172.31.100.0/24
    name: utility-us-east-1d
    type: Utility
    zone: us-east-1d
  topology:
    dns:
      type: Public
    masters: private
    nodes: private
  fileAssets:
  - name: bootstrap.yaml
    # Note: if no path is specified, the default path is /srv/kubernetes/assets/<name>
    path: /etc/kubernetes/manifests/boostrap.yml
    roles: [Master]
    content: |
      apiVersion: v1
      kind: Namespace
      metadata:
        name: something

---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-10-23T19:15:02Z
  name: nodes
  labels:
    kops.k8s.io/cluster: johnd-kops.k8s.local  
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20170721
  machineType: t2.medium
  maxSize: 3
  minSize: 3
  role: Node
  subnets:
  - us-east-1d

---
apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-10-23T19:15:02Z
  name: master-us-east-1d
  labels:
    kops.k8s.io/cluster: johnd-kops.k8s.local 
spec:
  image: 099720109477/ubuntu/images/hvm-ssd/ubuntu-xenial-16.04-amd64-server-20170721
  machineType: t2.medium
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - us-east-1d
  1. What kops version are you running? use kops version
    1.8.0
  2. What Kubernetes version are you running? use kubectl version
    1.8
  3. What cloud provider are you using?
    aws
  4. What commands did you execute (Please provide cluster manifest kops get --name my.example.com, if available) and what happened after commands executed?
    kops update cluster $NAME --yes

Output
SSH public key must be specified when running with AWS (create with `kops create secret --name johnd-kops.k8s.local sshpublickey admin -i ~/.ssh/id_rsa.pub`)

  1. What you expected to happen:
    Cluster to be created
  2. How can we reproduce it (as minimally and precisely as possible):
    kops create -f [myspecfromabove.yml]
  3. Anything else we need to know:
    I think it's just an exception that needs to be tuned. Happy to contribute any other information.
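
For clarity, the reproduction boils down to roughly the following commands (a sketch only; the manifest filename is a placeholder and the cluster name is taken from the spec above):

```bash
# Register the cluster and instance group specs with the kops state store
kops create -f myspecfromabove.yml

# Apply the configuration; this is the step that raises the validation error
kops update cluster johnd-kops.k8s.local --yes
# => SSH public key must be specified when running with AWS (...)
```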
lifecycle/rotten

Most helpful comment

This is still an issue. I thought this was going to get fixed in 1.9.0.

All 20 comments

I have a PR in to specify an SSH key name on create, but it has not gone through yet.

This is not a bug; it is the designed three-step process for creating a cluster from YAML.
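
For reference, that three-step flow looks roughly like this (a sketch; the cluster name and key path are taken from the report above, and step 2 is currently required even when sshKeyName points at an existing EC2 key pair):

```bash
# 1. Register the cluster and instance group specs with the state store
kops create -f myspecfromabove.yml

# 2. Register an SSH public key secret for the cluster
kops create secret --name johnd-kops.k8s.local sshpublickey admin -i ~/.ssh/id_rsa.pub

# 3. Build the cluster
kops update cluster johnd-kops.k8s.local --yes
```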

I'm hitting the same thing. I'm specifying sshKeyName as a key I already have in AWS, but kops seems to ignore it: it asks me to run the same create secret command pointing at the public key on my machine, and then in AWS the nodes end up with the usual auto-generated key name rather than my predefined AWS key.

Also, when doing a kops get <cluster> -o yaml, I can see that the sshKeyName field isn't included.
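
To confirm which key pair actually ended up on the nodes, a query along these lines can help (a sketch; the KubernetesCluster tag and the cluster name are assumptions about how kops tags its instances):

```bash
# List each instance in the cluster together with the EC2 key pair attached to it
aws ec2 describe-instances \
  --filters "Name=tag:KubernetesCluster,Values=johnd-kops.k8s.local" \
  --query 'Reservations[].Instances[].[InstanceId,KeyName]' \
  --output table
```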

So is your pull request to make the sshKeyName field valid when creating a cluster? Is it currently not working on create?

I am guessing you are on a version that does not have the SSH key name support released. It was put into the code base recently, and I am not certain whether it is in the 1.8 alpha.

I can verify that if you run kops create secret --name $NAME sshpublickey admin -i ~/.ssh/id_rsa.pub (using kops 1.8.0-beta.1), cluster creation completes successfully and does not create a new key inside AWS; it uses the existing key pair defined in sshKeyName.

Can someone clarify whether the kops create secret command should be necessary, or whether it's simply a remnant of previous behavior that will eventually be cleaned up?

I am running against what is on master and get this issue too. @chrislovecnm

```
kops version
Version 1.8.0-beta.2 (git-23319a097)
```

I built latest because I thought your fix from a couple of days ago would work.

My cluster spec looks something like this when I do `kops get <cluster> -o yaml`:

```
...
sshAccess:
- 0.0.0.0/0
sshKeyName: a14b50515-test-main-k8s
```

When I create a new cluster and then run `kops update cluster --yes`, I get the following error:

```
SSH public key must be specified when running with AWS (create with kops create secret --name a14b50515-test-main-k8s.main.test.<domain>.io sshpublickey admin -i ~/.ssh/id_rsa.pub)
```

@chrislovecnm I know you have an issue to clear up how this should work...
If you specify the existing AWS EC2 key name in the cluster spec or on the CLI when you create the cluster, should you still have to create the secret containing the public key?

Just following up: I am using v1.8 as mentioned, and this is still present even in the GA release. I want to use only the sshKeyName key that already exists in my AWS account; I shouldn't have to supply id_rsa.pub just to pass validation.

Following up on the above posters' issue, I am having the same problem: I pass sshKeyName in my YAML, run update, and get SSH public key must be specified when running with AWS.

EDIT: As a follow-up, doing what @twang-rs mentions above fixes the issue for me as well. I'm assuming a flag just didn't get set somewhere.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

This is still an issue. I thought this was going to get fixed in 1.9.0.

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

/remove-lifecycle rotten

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

This is still present in version 1.10.0

This is still present in version 1.11.0

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

