kops validate cluster fails after kubectl config rename-context

Created on 26 Oct 2019 · 7 comments · Source: kubernetes/kops

1. What kops version are you running? The command kops version will display
this information.

Version 1.14.0 (git-d5078612f)
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running, or provide the Kubernetes version specified
as a kops flag.

Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:13:49Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.6", GitCommit:"96fac5cd13a5dc064f7d9f4f23030a6aeface6cc", GitTreeState:"clean", BuildDate:"2019-08-19T11:05:16Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}

3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?

kubectl config rename-context k8s.example.com test
kops validate cluster

5. What happened after the commands executed?

Using cluster from kubectl context: k8s.example.com

Validating cluster k8s.example.com


Cannot load kubecfg settings for "k8s.example.com": context "k8s.example.com" does not exist

6. What did you expect to happen?
Expected kops to pick up the renamed context rather than looking for a context whose name matches the cluster name. The cluster name is stored separately from the context name, as the output of kubectl config get-contexts shows:

CURRENT   NAME                 CLUSTER              AUTHINFO             NAMESPACE
*         test                 k8s.example.com      k8s.example.com    
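For illustration, the two names live under separate keys in ~/.kube/config and can be inspected independently (a minimal sketch; the jsonpath expressions assume a standard kubeconfig layout):

kubectl config view -o jsonpath='{.contexts[*].name}'             # prints: test
kubectl config view -o jsonpath='{.contexts[*].context.cluster}'  # prints: k8s.example.com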

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2019-10-11T15:44:13Z
  generation: 14
  name: k8s.example.com
spec:
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": [
            "sts:AssumeRole"
          ],
          "Resource": [
            "arn:aws:iam:::role/example-*"
          ]
        }
      ]
  api:
    loadBalancer:
      sslCertificate: secret
      type: Internal
  authorization:
    rbac: {}
  channel: stable
  cloudProvider: aws
  configBase: s3://kops.example.internal/k8s.example.com
  dnsZone: k8s.example.com
  etcdClusters:
  - cpuRequest: 200m
    etcdMembers:
    - instanceGroup: master-us-east-1a
      name: a
    - instanceGroup: master-us-east-1d
      name: d
    - instanceGroup: master-us-east-1f
      name: f
    memoryRequest: 100Mi
    name: main
    provider: Manager
  - cpuRequest: 100m
    etcdMembers:
    - instanceGroup: master-us-east-1a
      name: a
    - instanceGroup: master-us-east-1d
      name: d
    - instanceGroup: master-us-east-1f
      name: f
    memoryRequest: 100Mi
    name: events
    provider: Manager
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeDNS:
    provider: CoreDNS
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
  kubernetesApiAccess:
  - secret
  kubernetesVersion: 1.14.6
  masterInternalName: internal.k8s.example.com
  masterPublicName: k8s.example.com
  networkCIDR: secret
  networkID: secret
  networking:
    weave:
      mtu: 8912
  nonMasqueradeCIDR: secret
  sshAccess:
  - secret
  subnets:
  - cidr: secret
    egress: secret
    id: secret
    name: us-east-1a
    type: Private
    zone: us-east-1a
  - cidr: secret
    egress: secret
    id: secret
    name: us-east-1d
    type: Private
    zone: us-east-1d
  - cidr: secret
    egress: secret
    id: secret
    name: us-east-1f
    type: Private
    zone: us-east-1f
  - cidr: secret
    id: secret
    name: utility-us-east-1a
    type: Utility
    zone: us-east-1a
  - cidr: secret
    id: secret
    name: utility-us-east-1d
    type: Utility
    zone: us-east-1d
  - cidr: secret
    id: secret
    name: utility-us-east-1f
    type: Utility
    zone: us-east-1f
  topology:
    dns:
      type: Private
    masters: private
    nodes: private

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2019-10-11T15:44:13Z
  generation: 4
  labels:
    kops.k8s.io/cluster: k8s.example.com
  name: master-us-east-1a
spec:
  image: kope.io/k8s-1.14-debian-stretch-amd64-hvm-ebs-2019-08-16
  machineType: m5.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-1a
  role: Master
  subnets:
  - us-east-1a

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2019-10-11T15:44:13Z
  generation: 4
  labels:
    kops.k8s.io/cluster: k8s.example.com
  name: master-us-east-1d
spec:
  image: kope.io/k8s-1.14-debian-stretch-amd64-hvm-ebs-2019-08-16
  machineType: m5.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-1d
  role: Master
  subnets:
  - us-east-1d

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2019-10-11T15:44:13Z
  generation: 4
  labels:
    kops.k8s.io/cluster: k8s.example.com
  name: master-us-east-1f
spec:
  image: kope.io/k8s-1.14-debian-stretch-amd64-hvm-ebs-2019-08-16
  machineType: m5.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-us-east-1f
  role: Master
  subnets:
  - us-east-1f

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2019-10-11T15:44:13Z
  generation: 9
  labels:
    kops.k8s.io/cluster: k8s.example.com
  name: nodes
spec:
  image: kope.io/k8s-1.14-debian-stretch-amd64-hvm-ebs-2019-08-16
  machineType: m5.xlarge
  maxSize: 6
  minSize: 6
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  subnets:
  - us-east-1a
  - us-east-1d
  - us-east-1f

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

I1025 18:10:05.343606   13022 loader.go:359] Config loaded from file /home/user/.kube/config
Using cluster from kubectl context: k8s.example.com

I1025 18:10:05.343655   13022 factory.go:68] state store s3://kops.example.internal
I1025 18:10:05.343680   13022 s3context.go:325] unable to read /sys/devices/virtual/dmi/id/product_uuid, assuming not running on EC2: open /sys/devices/virtual/dmi/id/product_uuid: permission denied
I1025 18:10:05.343688   13022 s3context.go:170] defaulting region to "us-east-1"
I1025 18:10:05.457588   13022 s3context.go:210] found bucket in region "us-east-1"
I1025 18:10:05.457607   13022 s3fs.go:220] Reading file "s3://kops.example.internal/k8s.example.com/config"
I1025 18:10:05.487513   13022 s3fs.go:257] Listing objects in S3 bucket "kops.example.internal" with prefix "k8s.example.com/instancegroup/"
I1025 18:10:05.529487   13022 s3fs.go:285] Listed files in s3://kops.example.internal/k8s.example.com/instancegroup: [s3://kops.example.internal/k8s.example.com/instancegroup/master-us-east-1a s3://kops.example.internal/k8s.example.com/instancegroup/master-us-east-1d s3://kops.example.internal/k8s.example.com/instancegroup/master-us-east-1f s3://kops.example.internal/k8s.example.com/instancegroup/nodes]
I1025 18:10:05.529568   13022 s3fs.go:220] Reading file "s3://kops.example.internal/k8s.example.com/instancegroup/master-us-east-1a"
I1025 18:10:05.551204   13022 s3fs.go:220] Reading file "s3://kops.example.internal/k8s.example.com/instancegroup/master-us-east-1d"
I1025 18:10:05.617388   13022 s3fs.go:220] Reading file "s3://kops.example.internal/k8s.example.com/instancegroup/master-us-east-1f"
I1025 18:10:05.634653   13022 s3fs.go:220] Reading file "s3://kops.example.internal/k8s.example.com/instancegroup/nodes"
Validating cluster k8s.example.com

I1025 18:10:05.656357   13022 validate_cluster.go:113] instance group: kops.InstanceGroupSpec{Role:"Master", Image:"kope.io/k8s-1.14-debian-stretch-amd64-hvm-ebs-2019-08-16", MinSize:(*int32)(0xc000b0881c), MaxSize:(*int32)(0xc000b08810), MachineType:"m5.large", RootVolumeSize:(*int32)(nil), RootVolumeType:(*string)(nil), RootVolumeIops:(*int32)(nil), RootVolumeOptimization:(*bool)(nil), Volumes:[]*kops.VolumeSpec(nil), VolumeMounts:[]*kops.VolumeMountSpec(nil), Subnets:[]string{"us-east-1a"}, Zones:[]string(nil), Hooks:[]kops.HookSpec(nil), MaxPrice:(*string)(nil), AssociatePublicIP:(*bool)(nil), AdditionalSecurityGroups:[]string(nil), CloudLabels:map[string]string(nil), NodeLabels:map[string]string{"kops.k8s.io/instancegroup":"master-us-east-1a"}, FileAssets:[]kops.FileAssetSpec(nil), Tenancy:"", Kubelet:(*kops.KubeletConfigSpec)(nil), Taints:[]string(nil), MixedInstancesPolicy:(*kops.MixedInstancesPolicySpec)(nil), AdditionalUserData:[]kops.UserData(nil), SuspendProcesses:[]string(nil), ExternalLoadBalancers:[]kops.LoadBalancer(nil), DetailedInstanceMonitoring:(*bool)(nil), IAM:(*kops.IAMProfileSpec)(nil), SecurityGroupOverride:(*string)(nil), InstanceProtection:(*bool)(nil)}

I1025 18:10:05.656487   13022 validate_cluster.go:113] instance group: kops.InstanceGroupSpec{Role:"Master", Image:"kope.io/k8s-1.14-debian-stretch-amd64-hvm-ebs-2019-08-16", MinSize:(*int32)(0xc000b793dc), MaxSize:(*int32)(0xc000b793d0), MachineType:"m5.large", RootVolumeSize:(*int32)(nil), RootVolumeType:(*string)(nil), RootVolumeIops:(*int32)(nil), RootVolumeOptimization:(*bool)(nil), Volumes:[]*kops.VolumeSpec(nil), VolumeMounts:[]*kops.VolumeMountSpec(nil), Subnets:[]string{"us-east-1d"}, Zones:[]string(nil), Hooks:[]kops.HookSpec(nil), MaxPrice:(*string)(nil), AssociatePublicIP:(*bool)(nil), AdditionalSecurityGroups:[]string(nil), CloudLabels:map[string]string(nil), NodeLabels:map[string]string{"kops.k8s.io/instancegroup":"master-us-east-1d"}, FileAssets:[]kops.FileAssetSpec(nil), Tenancy:"", Kubelet:(*kops.KubeletConfigSpec)(nil), Taints:[]string(nil), MixedInstancesPolicy:(*kops.MixedInstancesPolicySpec)(nil), AdditionalUserData:[]kops.UserData(nil), SuspendProcesses:[]string(nil), ExternalLoadBalancers:[]kops.LoadBalancer(nil), DetailedInstanceMonitoring:(*bool)(nil), IAM:(*kops.IAMProfileSpec)(nil), SecurityGroupOverride:(*string)(nil), InstanceProtection:(*bool)(nil)}

I1025 18:10:05.656557   13022 validate_cluster.go:113] instance group: kops.InstanceGroupSpec{Role:"Master", Image:"kope.io/k8s-1.14-debian-stretch-amd64-hvm-ebs-2019-08-16", MinSize:(*int32)(0xc000b0918c), MaxSize:(*int32)(0xc000b09180), MachineType:"m5.large", RootVolumeSize:(*int32)(nil), RootVolumeType:(*string)(nil), RootVolumeIops:(*int32)(nil), RootVolumeOptimization:(*bool)(nil), Volumes:[]*kops.VolumeSpec(nil), VolumeMounts:[]*kops.VolumeMountSpec(nil), Subnets:[]string{"us-east-1f"}, Zones:[]string(nil), Hooks:[]kops.HookSpec(nil), MaxPrice:(*string)(nil), AssociatePublicIP:(*bool)(nil), AdditionalSecurityGroups:[]string(nil), CloudLabels:map[string]string(nil), NodeLabels:map[string]string{"kops.k8s.io/instancegroup":"master-us-east-1f"}, FileAssets:[]kops.FileAssetSpec(nil), Tenancy:"", Kubelet:(*kops.KubeletConfigSpec)(nil), Taints:[]string(nil), MixedInstancesPolicy:(*kops.MixedInstancesPolicySpec)(nil), AdditionalUserData:[]kops.UserData(nil), SuspendProcesses:[]string(nil), ExternalLoadBalancers:[]kops.LoadBalancer(nil), DetailedInstanceMonitoring:(*bool)(nil), IAM:(*kops.IAMProfileSpec)(nil), SecurityGroupOverride:(*string)(nil), InstanceProtection:(*bool)(nil)}

I1025 18:10:05.656627   13022 validate_cluster.go:113] instance group: kops.InstanceGroupSpec{Role:"Node", Image:"kope.io/k8s-1.14-debian-stretch-amd64-hvm-ebs-2019-08-16", MinSize:(*int32)(0xc000b79d5c), MaxSize:(*int32)(0xc000b79d50), MachineType:"m5.xlarge", RootVolumeSize:(*int32)(nil), RootVolumeType:(*string)(nil), RootVolumeIops:(*int32)(nil), RootVolumeOptimization:(*bool)(nil), Volumes:[]*kops.VolumeSpec(nil), VolumeMounts:[]*kops.VolumeMountSpec(nil), Subnets:[]string{"us-east-1a", "us-east-1d", "us-east-1f"}, Zones:[]string(nil), Hooks:[]kops.HookSpec(nil), MaxPrice:(*string)(nil), AssociatePublicIP:(*bool)(nil), AdditionalSecurityGroups:[]string(nil), CloudLabels:map[string]string(nil), NodeLabels:map[string]string{"kops.k8s.io/instancegroup":"nodes"}, FileAssets:[]kops.FileAssetSpec(nil), Tenancy:"", Kubelet:(*kops.KubeletConfigSpec)(nil), Taints:[]string(nil), MixedInstancesPolicy:(*kops.MixedInstancesPolicySpec)(nil), AdditionalUserData:[]kops.UserData(nil), SuspendProcesses:[]string(nil), ExternalLoadBalancers:[]kops.LoadBalancer(nil), DetailedInstanceMonitoring:(*bool)(nil), IAM:(*kops.IAMProfileSpec)(nil), SecurityGroupOverride:(*string)(nil), InstanceProtection:(*bool)(nil)}

I1025 18:10:05.660187   13022 loader.go:359] Config loaded from file /home/user/.kube/config

Cannot load kubecfg settings for "k8s.example.com": context "k8s.example.com" does not exist


Most helpful comment

Sorry for taking so long to respond to this one. kops expects a context named the same as the cluster, so you cannot rename it to something else. Running kops export kubecfg --name <cluster name> will fix the config file for you by re-adding the context.

All 7 comments

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

/remove-lifecycle rotten

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Sorry for taking so long to respond to this one. kops expects a context named the same as the cluster, so you cannot rename it to something else. Running kops export kubecfg --name <cluster name> will fix the config file for you by re-adding the context.
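For anyone hitting the same error, a minimal sketch of that fix (cluster name and state store taken from this report; substitute your own values):

# Re-add a kubeconfig context whose name matches the cluster name,
# which is the context kops validate cluster looks up.
kops export kubecfg --name k8s.example.com --state s3://kops.example.internal
# Validation should then find the context again.
kops validate cluster --name k8s.example.com --state s3://kops.example.internal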

Thanks @olemarkus ! I was eventually able to figure this out on my own, but totally forgot to update the issue. :sweat_smile: Now that it is documented here, maybe at least it will help someone else in their search!

kops export kubecfg --name

Exactly. Simply adding a new context in my ~/.kube/config file with the exact same name as my cluster made my kops validate cluster --state=s3://BUCKET-NAME command pass. A sketch of the same fix using kubectl follows below.

Thank you @olemarkus @austinorth !
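For reference, a minimal sketch of that manual fix using kubectl instead of hand-editing the file (context, cluster, and user names assumed from the get-contexts output above):

# Option 1: rename the context back so it matches the cluster name.
kubectl config rename-context test k8s.example.com
# Option 2: create a context with the expected name, pointing at the
# existing cluster and user entries.
kubectl config set-context k8s.example.com --cluster=k8s.example.com --user=k8s.example.com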
