Kops: Go panic when deleting IG

Created on 16 May 2019  ·  4 comments  ·  Source: kubernetes/kops

1. What kops version are you running? The command kops version will display
this information.

➭ kops version
Version 1.12.0

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

➭ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-19T22:12:47Z", GoVersion:"go1.12.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.8", GitCommit:"a89f8c11a5f4f132503edbc4918c98518fd504e3", GitTreeState:"clean", BuildDate:"2019-04-23T04:41:47Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

3. What cloud provider are you using?

AWS

4. What commands did you run? What is the simplest way to reproduce this issue?

kops delete ig blue

5. What happened after the commands executed?

Go Panic

6. What did you expect to happen?

I expected the IG to be deleted without a panic.

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2017-07-14T22:39:40Z
  name: <redacted>
spec:
  api:
    dns: {}
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://<redacted>
  docker:
    logDriver: json-file
    storage: overlay2
  etcdClusters:
  - etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-west-2a
      name: a
    manager:
      image: kopeio/etcd-manager:latest
    name: main
  - etcdMembers:
    - encryptedVolume: true
      instanceGroup: master-us-west-2a
      name: a
    manager:
      image: kopeio/etcd-manager:latest
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeAPIServer:
    runtimeConfig:
      batch/v2alpha1: "true"
  kubelet:
    anonymousAuth: false
  kubernetesApiAccess:
  <redacted>
  kubernetesVersion: 1.12.8
  masterInternalName: api.internal.<redacted>
  masterPublicName: api.<redacted>
  networkCIDR: 172.20.0.0/16
  networking:
    kubenet: {}
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  <redacted>
  subnets:
  - cidr: 172.20.32.0/19
    name: us-west-2a
    type: Public
    zone: us-west-2a
  - cidr: 172.20.64.0/19
    name: us-west-2b
    type: Public
    zone: us-west-2b
  - cidr: 172.20.96.0/19
    name: us-west-2c
    type: Public
    zone: us-west-2c
  topology:
    dns:
      type: Public
    masters: public
    nodes: public

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2019-03-03T02:41:31Z
  labels:
    kops.k8s.io/cluster: <redacted>
  name: blue
spec:
  image: <redacted>
  machineType: m5.large
  maxSize: 4
  minSize: 4
  nodeLabels:
    kops.k8s.io/instancegroup: blue
  role: Node
  rootVolumeSize: 50
  subnets:
  - us-west-2a

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2019-05-15T16:12:37Z
  labels:
    kops.k8s.io/cluster: <redacted>
  name: green
spec:
  image: <redacted>
  machineType: m5.large
  maxSize: 4
  minSize: 4
  nodeLabels:
    kops.k8s.io/instancegroup: green
  role: Node
  rootVolumeSize: 50
  subnets:
  - us-west-2a

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2017-07-14T22:39:41Z
  labels:
    kops.k8s.io/cluster: <redacted>
  name: master-us-west-2a
spec:
  image: <redacted>
  machineType: t2.medium
  maxSize: 1
  minSize: 1
  role: Master
  subnets:
  - us-west-2a

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

➭ kops -v 10 delete ig blue
I0516 09:48:45.735320    4162 loader.go:359] Config loaded from file /Users/drew/.kube/config
Using cluster from kubectl context: secondary.k8s.vreal.engineering

Do you really want to delete instance group "blue"? This action cannot be undone. (y/N)
y
I0516 09:48:47.573892    4162 factory.go:68] state store s3://vreal-k8s-config
I0516 09:48:47.575374    4162 s3context.go:338] GOOS="darwin", assuming not running on EC2
I0516 09:48:47.575395    4162 s3context.go:170] defaulting region to "us-east-1"
I0516 09:48:48.005129    4162 s3context.go:210] found bucket in region "us-west-2"
I0516 09:48:48.005325    4162 s3fs.go:220] Reading file "s3://vreal-k8s-config/secondary.k8s.vreal.engineering/config"
I0516 09:48:48.325344    4162 s3fs.go:220] Reading file "s3://vreal-k8s-config/secondary.k8s.vreal.engineering/instancegroup/blue"
I0516 09:48:48.414171    4162 aws_cloud.go:1209] Querying EC2 for all valid zones in region "us-west-2"
I0516 09:48:48.415152    4162 request_logger.go:45] AWS request: ec2/DescribeAvailabilityZones
InstanceGroup "blue" found for deletion
I0516 09:48:48.705138    4162 aws_cloud.go:479] Listing all Autoscaling groups matching cluster tags
I0516 09:48:48.706966    4162 request_logger.go:45] AWS request: autoscaling/DescribeTags
I0516 09:48:49.081586    4162 request_logger.go:45] AWS request: autoscaling/DescribeAutoScalingGroups
I0516 09:48:49.216985    4162 cloud_instance_group.go:66] unable to find node for instance: i-0117cc3f607e088d5
I0516 09:48:49.217004    4162 cloud_instance_group.go:66] unable to find node for instance: i-0338c9274211d5ea6
I0516 09:48:49.217007    4162 cloud_instance_group.go:66] unable to find node for instance: i-052dba221bde594d4
I0516 09:48:49.217011    4162 cloud_instance_group.go:66] unable to find node for instance: i-05d24d1e9ce8ce9d7
I0516 09:48:49.217212    4162 delete.go:51] Deleting "blue"
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x8 pc=0x2ba3d01]

goroutine 1 [running]:
k8s.io/kops/upup/pkg/fi/cloudup/awsup.deleteGroup(0x4eac920, 0xc0001896c0, 0xc000189730, 0x7649a13, 0xc000727810)
    /private/tmp/kops-20190515-64118-7iwvtq/kops-1.12.0/src/k8s.io/kops/upup/pkg/fi/cloudup/awsup/aws_cloud.go:353 +0xa1
k8s.io/kops/upup/pkg/fi/cloudup/awsup.(*awsCloudImplementation).DeleteGroup(0xc0001896c0, 0xc000189730, 0x47a3a37, 0xb)
    /private/tmp/kops-20190515-64118-7iwvtq/kops-1.12.0/src/k8s.io/kops/upup/pkg/fi/cloudup/awsup/aws_cloud.go:345 +0x88
k8s.io/kops/pkg/instancegroups.(*DeleteInstanceGroup).DeleteInstanceGroup(0xc000ed3c50, 0xc001118000, 0x48085fc, 0x24)
    /private/tmp/kops-20190515-64118-7iwvtq/kops-1.12.0/src/k8s.io/kops/pkg/instancegroups/delete.go:53 +0x3d2
main.RunDeleteInstanceGroup(0xc000cfbce0, 0x4ddd780, 0xc0000f0000, 0xc000debe00, 0x1, 0xc00005c2a0)
    /private/tmp/kops-20190515-64118-7iwvtq/kops-1.12.0/src/k8s.io/kops/cmd/kops/delete_instancegroup.go:163 +0x3c9
main.NewCmdDeleteInstanceGroup.func1(0xc0007e3180, 0xc000e18bd0, 0x1, 0x3)
    /private/tmp/kops-20190515-64118-7iwvtq/kops-1.12.0/src/k8s.io/kops/cmd/kops/delete_instancegroup.go:101 +0xe8
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).execute(0xc0007e3180, 0xc000e0a980, 0x3, 0x4, 0xc0007e3180, 0xc000e0a980)
    /private/tmp/kops-20190515-64118-7iwvtq/kops-1.12.0/src/k8s.io/kops/vendor/github.com/spf13/cobra/command.go:760 +0x2ae
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).ExecuteC(0x78d4800, 0x790ab28, 0x0, 0x0)
    /private/tmp/kops-20190515-64118-7iwvtq/kops-1.12.0/src/k8s.io/kops/vendor/github.com/spf13/cobra/command.go:846 +0x2ec
k8s.io/kops/vendor/github.com/spf13/cobra.(*Command).Execute(...)
    /private/tmp/kops-20190515-64118-7iwvtq/kops-1.12.0/src/k8s.io/kops/vendor/github.com/spf13/cobra/command.go:794
main.Execute()
    /private/tmp/kops-20190515-64118-7iwvtq/kops-1.12.0/src/k8s.io/kops/cmd/kops/root.go:97 +0x95
main.main()
    /private/tmp/kops-20190515-64118-7iwvtq/kops-1.12.0/src/k8s.io/kops/cmd/kops/main.go:25 +0x20

9. Anything else we need to know?
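
The trace points at a nil pointer dereference inside deleteGroup (aws_cloud.go:353), reached immediately after the "unable to find node for instance" warnings, which suggests a group member whose node lookup failed is being dereferenced without a check. Below is a minimal, hypothetical Go sketch of that failure pattern and the guard that avoids it; the Node and CloudInstance types are illustrative stand-ins, not the actual kops types.

package main

import "fmt"

// Node stands in for a registered Kubernetes node. The pointer is nil for
// instances that log "unable to find node for instance".
type Node struct {
	Name string
}

// CloudInstance loosely mirrors a member of a cloud instance group: the EC2
// instance always exists, but its Node pointer may be nil.
type CloudInstance struct {
	ID   string
	Node *Node
}

func deleteMembers(members []*CloudInstance) {
	for _, m := range members {
		// Dereferencing m.Node unconditionally here panics with
		// "invalid memory address or nil pointer dereference" whenever
		// the node lookup failed. A guard avoids the crash:
		if m.Node == nil {
			fmt.Printf("skipping instance %s: no registered node\n", m.ID)
			continue
		}
		fmt.Printf("draining node %s for instance %s\n", m.Node.Name, m.ID)
	}
}

func main() {
	deleteMembers([]*CloudInstance{
		{ID: "i-0117cc3f607e088d5"}, // Node left nil, as in the log above
	})
}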

All 4 comments

Thanks for reporting @drewfisher314 - I'll get a 1.12.1 up sharpish!

This was fixed in 1.12.1

/close

@granular-ryanbonham: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
