Kops: KubernetesCluster tag is being created on new clusters using 1.10.0 beta 1

Created on 14 Aug 2018 · 17 comments · Source: kubernetes/kops

1. What kops version are you running? The command kops version will display this information.

Version 1.10.0-beta.1

2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.

v1.10.5

3. What cloud provider are you using?

AWS

4. What commands did you run? What is the simplest way to reproduce this issue?

  • kops create -f file.yml
  • kops create secret ...
  • kops update cluster cluster.name --yes

5. What happened after the commands executed?

The cluster was created with the deprecated KubernetesCluster tag.

6. What did you expect to happen?

no deprecated tag

7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2018-08-13T10:03:59Z
  name: k8s.XXX.org
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudLabels:
    Owner: me-i-guess
    Team: devops
  cloudProvider: aws
  configBase: s3://xxx-dev-kops/k8s.xxxx.org
  etcdClusters:
  - etcdMembers:
    - instanceGroup: master-ap-northeast-1a
      name: a
    - instanceGroup: master-ap-northeast-1c
      name: c
    - instanceGroup: master-ap-northeast-1d
      name: d
    image: k8s.gcr.io/etcd:3.2.14
    name: main
  - etcdMembers:
    - instanceGroup: master-ap-northeast-1a
      name: a
    - instanceGroup: master-ap-northeast-1c
      name: c
    - instanceGroup: master-ap-northeast-1d
      name: d
    image: k8s.gcr.io/etcd:3.2.14
    name: events
  iam:
    allowContainerRegistry: true
    legacy: false
  kubeDNS:
    provider: CoreDNS
  kubernetesApiAccess:
  - 0.0.0.0/0
  kubernetesVersion: v1.10.5
  masterPublicName: api.k8s.xxxx.org
  networkCIDR: 172.17.0.0/16
  networkID: vpc-xxxxx
  networking:
    calico:
      crossSubnet: true
  nonMasqueradeCIDR: 100.64.0.0/10
  sshAccess:
  - 0.0.0.0/0
  subnets:
  - cidr: 172.17.36.0/22
    name: ap-northeast-1a
    type: Private
    zone: ap-northeast-1a
  - cidr: 172.17.40.0/22
    name: ap-northeast-1c
    type: Private
    zone: ap-northeast-1c
  - cidr: 172.17.44.0/22
    name: ap-northeast-1d
    type: Private
    zone: ap-northeast-1d
  - cidr: 172.17.32.0/25
    name: utility-ap-northeast-1a
    type: Utility
    zone: ap-northeast-1a
  - cidr: 172.17.32.128/25
    name: utility-ap-northeast-1c
    type: Utility
    zone: ap-northeast-1c
  - cidr: 172.17.33.0/25
    name: utility-ap-northeast-1d
    type: Utility
    zone: ap-northeast-1d
  topology:
    bastion:
      bastionPublicName: bastion.k8s.xxxx.org
    dns:
      type: Private
    masters: private
    nodes: private

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-08-13T10:04:01Z
  labels:
    kops.k8s.io/cluster: k8s.xxxx.org
  name: bastion
spec:
  image: ami-xxxx
  machineType: t2.micro
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: bastions
  role: Bastion
  rootVolumeSize: 8
  subnets:
  - utility-ap-northeast-1a
  - utility-ap-northeast-1c
  - utility-ap-northeast-1d

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-08-13T10:04:01Z
  labels:
    kops.k8s.io/cluster: k8s.xxxx.org
  name: master-ap-northeast-1a
spec:
  associatePublicIp: false
  image: ami-xxxx
  machineType: t2.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    hbdata.machine.class: t2.medium
    kops.k8s.io/instancegroup: master-ap-northeast-1a
  role: Master
  rootVolumeSize: 100
  subnets:
  - ap-northeast-1a

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-08-13T10:04:01Z
  labels:
    kops.k8s.io/cluster: k8s.xxxx.org
  name: master-ap-northeast-1c
spec:
  associatePublicIp: false
  image: ami-xxxx
  machineType: t2.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    hbdata.machine.class: t2.medium
    kops.k8s.io/instancegroup: master-ap-northeast-1c
  role: Master
  rootVolumeSize: 100
  subnets:
  - ap-northeast-1c

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-08-13T10:04:02Z
  labels:
    kops.k8s.io/cluster: k8s.xxxx.org
  name: master-ap-northeast-1d
spec:
  associatePublicIp: false
  image: ami-xxx
  machineType: t2.medium
  maxSize: 1
  minSize: 1
  nodeLabels:
    hbdata.machine.class: t2.medium
    kops.k8s.io/instancegroup: master-ap-northeast-1d
  role: Master
  rootVolumeSize: 100
  subnets:
  - ap-northeast-1d

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-08-13T10:04:02Z
  labels:
    kops.k8s.io/cluster: k8s.xxxx.org
  name: node-t2.medium
spec:
  associatePublicIp: false
  image: ami-xxxx
  machineType: t2.medium
  maxSize: 2
  minSize: 2
  nodeLabels:
    hbdata.class: t2.medium
    kops.k8s.io/instancegroup: nodes
  role: Node
  rootVolumeSize: 150
  subnets:
  - ap-northeast-1a
  - ap-northeast-1c

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-08-13T10:04:02Z
  labels:
    kops.k8s.io/cluster: k8s.xxx.org
  name: node-t2.small
spec:
  associatePublicIp: false
  image: ami-xxxx
  machineType: t2.small
  maxSize: 2
  minSize: 2
  nodeLabels:
    hbdata.class: t2.small
    kops.k8s.io/instancegroup: nodes
  role: Node
  rootVolumeSize: 100
  subnets:
  - ap-northeast-1a
  - ap-northeast-1c

8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.

One log line that illustrates the point (search for KubernetesCluster):

I0814 10:52:21.603221   14486 executor.go:178] Executing task "LoadBalancerAttachment/api-master-ap-northeast-1d": *awstasks.LoadBalancerAttachment {"Name":"api-master-ap-northe\
ast-1d","Lifecycle":"Sync","LoadBalancer":{"Name":"api.k8s.XXXX.org","Lifecycle":"Sync","LoadBalancerName":"api-k8s-XXXX-org-dncn3m","DNSName":"api-k8s-XXXX-org-dncn3m-146753430\
1.ap-northeast-1.elb.amazonaws.com","HostedZoneId":"Z14GRHDCWA56QT","Subnets":[{"Name":"utility-ap-northeast-1c.k8s.XXXX.org","Lifecycle":"Sync","ID":"subnet-01500a6dd984ee6f6",\
"VPC":{"Name":"k8s.XXXX.org","Lifecycle":"Sync","ID":"vpc-d2bdf2b5","CIDR":"172.17.0.0/16","EnableDNSHostnames":null,"EnableDNSSupport":true,"Shared":true,"Tags":null},"Availabi\
lityZone":"ap-northeast-1c","CIDR":"172.17.32.128/25","Shared":false,"Tags":{"KubernetesCluster":"k8s.XXXX.org","Name":"utility-ap-northeast-1c.k8s.XXXX.org","SubnetType":"Utili\
ty","kubernetes.io/cluster/k8s.XXXX.org":"owned","kubernetes.io/role/elb":"1"}},{"Name":"utility-ap-northeast-1a.k8s.XXXX.org","Lifecycle":"Sync","ID":"subnet-06ac75476425e6da4"\
,"VPC":{"Name":"k8s.XXXX.org","Lifecycle":"Sync","ID":"vpc-d2bdf2b5","CIDR":"172.17.0.0/16","EnableDNSHostnames":null,"EnableDNSSupport":true,"Shared":true,"Tags":null},"Availab\
ilityZone":"ap-northeast-1a","CIDR":"172.17.32.0/25","Shared":false,"Tags":{"KubernetesCluster":"k8s.XXXX.org","Name":"utility-ap-northeast-1a.k8s.XXXX.org","SubnetType":"Utilit\
y","kubernetes.io/cluster/k8s.XXXX.org":"owned","kubernetes.io/role/elb":"1"}},{"Name":"utility-ap-northeast-1d.k8s.XXXX.org","Lifecycle":"Sync","ID":"subnet-081633b8242489b15",\
"VPC":{"Name":"k8s.XXXX.org","Lifecycle":"Sync","ID":"vpc-d2bdf2b5","CIDR":"172.17.0.0/16","EnableDNSHostnames":null,"EnableDNSSupport":true,"Shared":true,"Tags":null},"Availabi\
lityZone":"ap-northeast-1d","CIDR":"172.17.33.0/25","Shared":false,"Tags":{"KubernetesCluster":"k8s.XXXX.org","Name":"utility-ap-northeast-1d.k8s.XXXX.org","SubnetType":"Utility\
","kubernetes.io/cluster/k8s.XXXX.org":"owned","kubernetes.io/role/elb":"1"}}],"SecurityGroups":[{"Name":"api-elb.k8s.XXXX.org","Lifecycle":"Sync","ID":"sg-05fd06a7aefda45dc","D\
escription":"Security group for api ELB","VPC":{"Name":"k8s.XXXX.org","Lifecycle":"Sync","ID":"vpc-d2bdf2b5","CIDR":"172.17.0.0/16","EnableDNSHostnames":null,"EnableDNSSupport":\
true,"Shared":true,"Tags":null},"RemoveExtraRules":["port=443"],"Shared":null,"Tags":{"KubernetesCluster":"k8s.XXXX.org","Name":"api-elb.k8s.XXXX.org","kubernetes.io/cluster/k8s\
.XXXX.org":"owned"}}],"Listeners":{"443":{"InstancePort":443,"SSLCertificateID":""}},"Scheme":null,"HealthCheck":{"Target":"SSL:443","HealthyThreshold":2,"UnhealthyThreshold":2,\
"Interval":10,"Timeout":5},"AccessLog":null,"ConnectionDraining":null,"ConnectionSettings":{"IdleTimeout":300},"CrossZoneLoadBalancing":null,"SSLCertificateID":""},"AutoscalingG\
roup":{"Name":"master-ap-northeast-1d.masters.k8s.XXXX.org","Lifecycle":"Sync","MinSize":1,"MaxSize":1,"Subnets":[{"Name":"ap-northeast-1d.k8s.XXXX.org","Lifecycle":"Sync","ID":\
"subnet-01a766febc2740858","VPC":{"Name":"k8s.XXXX.org","Lifecycle":"Sync","ID":"vpc-d2bdf2b5","CIDR":"172.17.0.0/16","EnableDNSHostnames":null,"EnableDNSSupport":true,"Shared":\
true,"Tags":null},"AvailabilityZone":"ap-northeast-1d","CIDR":"172.17.44.0/22","Shared":false,"Tags":{"KubernetesCluster":"k8s.XXXX.org","Name":"ap-northeast-1d.k8s.XXXX.org","S\
ubnetType":"Private","kubernetes.io/cluster/k8s.XXXX.org":"owned","kubernetes.io/role/internal-elb":"1"}}],"Tags":{"KubernetesCluster":"k8s.XXXX.org","Name":"master-ap-northeast\
-1d.masters.k8s.XXXX.org","Owner":"me-i-guess","Team":"devops","k8s.io/cluster-autoscaler/node-template/label/hbdata.machine.class":"t2.medium","k8s.io/cluster-autoscaler/node-t\
emplate/label/kops.k8s.io/instancegroup":"master-ap-northeast-1d","k8s.io/role/master":"1"},"Granularity":"1Minute","Metrics":["GroupDesiredCapacity","GroupInServiceInstances","\
GroupMaxSize","GroupMinSize","GroupPendingInstances","GroupStandbyInstances","GroupTerminatingInstances","GroupTotalInstances"],"LaunchConfiguration":{"Name":"master-ap-northeas\
t-1d.masters.k8s.XXXX.org","Lifecycle":"Sync","UserData":{"Name":"","Resource":{}},"ImageID":"ami-XXXX","InstanceType":"t2.medium","SSHKey":{"Name":"kubernetes.k8s.XXXX.org-a8:5\
5:83:7c:5c:af:d7:0f:0c:bb:29:1c:e0:76:59:ea","Lifecycle":"Sync","PublicKey":{"Name":"","Resource":"ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC9oewEkynAKKWLiTUw1oXszkndZpYcrpsIEPlk+e\
GGrjSYpZ5NEl9v5LDEieNjiKqPQeBHYrTNdAz4oss5UBzmrbnLnknKqP/I32fM/PSD/8cVItG6rJ8rNWK0s4cEcnu7Vk3xgLI0TbPD3QZi0Uw/lzab7hSRU+TJ7B2VeRs7ZgTUcOeVqjK4ZhWG/PyYIr8ahOAHll3ADSqjNWte1MsJhbs\
gGPqvgQAwQVJ6vRh+mcXTAKaJhUzfahD/yTePnSRMdTcxZ6HElbSGcTZ5FN2k769wzaDNRbGFGsp8dnEOvL0rfYueWW86iFFnicfa/2GFNkQz8d2ZpbYvCQ3D4u90a+tM7+6fUDIoF4WS5PZQaK2I5OW/4OxWCzN/nCVHzhbiL6uifELA\
h8TUElZ0R3oTNPpN++Zq4ebJv3sAptBXZR8vpFjwYtB3Yuned1CD9JmvlJxiksUDBv9vdbzHEUzbxI+bolZ2dMUkOckOf3Pl8DmM2BsvFH9w6diR+rggDL+lDs6WlWYnDwruPadSm1pK4W4D4plyGeLqVDTpK1s+zrGulazNowTxsuH+v\
R5mWoLCQhts75EHBFtShkbO2A0ZimvnnnOxzPgxfUwz5vl1UOxBOxCm7SGvFfQNSTt8+nQiGazyCgTmwGlzEnCbeGGUYLEGnTW9kvlmSci4s5ITxw== [email protected]\n"},"KeyFingerprint":"fb:d2:0\
c:31:87:84:1f:22:27:c9:08:f8:93:3b:48:a7"},"SecurityGroups":[{"Name":"masters.k8s.XXXX.org","Lifecycle":"Sync","ID":"sg-0dccea9447ce9e8c5","Description":"Security group for mast\
ers","VPC":{"Name":"k8s.XXXX.org","Lifecycle":"Sync","ID":"vpc-d2bdf2b5","CIDR":"172.17.0.0/16","EnableDNSHostnames":null,"EnableDNSSupport":true,"Shared":true,"Tags":null},"Rem\
oveExtraRules":["port=22","port=443","port=2380","port=2381","port=4001","port=4002","port=4789","port=179"],"Shared":null,"Tags":{"KubernetesCluster":"k8s.XXXX.org","Name":"mas\
ters.k8s.XXXX.org","kubernetes.io/cluster/k8s.XXXX.org":"owned"}}],"AssociatePublicIP":false,"IAMInstanceProfile":{"Name":"masters.k8s.XXXX.org","Lifecycle":"Sync","ID":"AIPAJ5U\
45NMTLWD6SUICW","Shared":false},"InstanceMonitoring":null,"RootVolumeSize":100,"RootVolumeType":"gp2","RootVolumeIops":null,"RootVolumeOptimization":null,"SpotPrice":"","ID":"ma\
ster-ap-northeast-1d.masters.k8s.XXXX.org-20180814025207","Tenancy":null},"SuspendProcesses":null},"Subnet":null,"Instance":null}

9. Anything else do we need to know?

reference https://github.com/kubernetes/kops/blob/master/docs/run_in_existing_vpc.md


All 17 comments

So we do still set the legacy tag in the case where the resource is owned:
https://github.com/kubernetes/kops/blob/master/pkg/model/context.go#L228-L240

The thought was that some people might be relying on it, so we didn't want to stop setting it immediately. In the case where the resource is owned we know there will only be one cluster using it, so it's safe to set the legacy tag (this is why we switched to the new tag format in the first place: we can't have KubernetesCluster=cluster1 and KubernetesCluster=cluster2 on the same resource).

Not sure if we should change that and stop setting the legacy tag entirely. I worry that there's likely someone out there relying on it!
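
To make that branching concrete, here is a minimal, self-contained Go sketch of the behaviour described above. The function name buildCloudTags and its exact shape are hypothetical, not the actual kops code in pkg/model/context.go:

package main

import "fmt"

// buildCloudTags is a simplified, hypothetical version of the tagging logic:
// the new kubernetes.io/cluster/<name> tag is always set (its value records
// whether the resource is owned or shared), while the legacy KubernetesCluster
// tag is only added for owned resources, because only one cluster can ever set
// that single key on a given resource.
func buildCloudTags(clusterName string, shared bool) map[string]string {
	tags := map[string]string{}

	if shared {
		tags["kubernetes.io/cluster/"+clusterName] = "shared"
	} else {
		tags["kubernetes.io/cluster/"+clusterName] = "owned"
		// Owned resources belong to exactly one cluster, so keeping the
		// legacy tag here cannot conflict with another cluster's tags.
		tags["KubernetesCluster"] = clusterName
	}

	return tags
}

func main() {
	fmt.Println(buildCloudTags("k8s.example.org", false)) // owned: both tag styles
	fmt.Println(buildCloudTags("k8s.example.org", true))  // shared: new-style tag only
}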

Does kops have a written deprecation policy? If it does not then it needs one. It can be as simple as:

Kops does the following to deprecate a feature/breaking change:

  • Make a loud note in the release docs that the feature is deprecated and when it will be removed
  • Kops spits out a warning about using deprecated features

    • A feature gate to turn the warning on would be nice, but is more work

  • In the next release, or the one after it (pick one), the feature is removed.
  • Follow this procedure

Open a ticket tagged cleanup-for-X or warn-for-Y so that when master becomes the dev branch for X or Y, the first thing done is closing those tickets

What you have now is that someone may be using a feature that was documented as deprecated two releases ago, yet there is no plan in place to remove it. You are pushing pain onto your user base. As a v1.10 greenfield install:

  • Why can I not trust the project's documentation?
  • Why do I need to manually manage my automated install?
  • Will I get a nasty surprise if I set up a second cluster and KubernetesCluster=cluster2 gets set and then I delete cluster2?

There needs to be a hard deadline for when the feature will be removed, or people won't bother to fix their installs.

After rereading this, it may sound a bit too confrontational; I'm just trying to be clear, not to start an argument.

I have seen a lot of people use this tag as a cost allocation tag, so it is probably not a good idea to completely remove it.

The project has officially deprecated it, so it should either be gone, or the decision should be revisited and the docs updated in favor of the kubernetes.io/cluster/ tag.

With kubernetes.io/cluster/<cluster-name> you'd have to add every single tag as a cost allocation tag. It would be quite hard to build billing dashboards with that as well.
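
A small self-contained Go sketch of why a single fixed key is convenient for billing and monitoring filters; the resource IDs and cluster names are made up for illustration:

package main

import (
	"fmt"
	"strings"
)

// Hypothetical resources with their AWS tags.
var resourceTags = map[string]map[string]string{
	"i-aaa": {"KubernetesCluster": "cluster1", "kubernetes.io/cluster/cluster1": "owned"},
	"i-bbb": {"KubernetesCluster": "cluster2", "kubernetes.io/cluster/cluster2": "owned"},
	"i-ccc": {"Team": "devops"}, // not part of any cluster
}

func main() {
	// Legacy scheme: one fixed key that can be activated once as a cost
	// allocation tag; the cluster name is the VALUE, so a single filter
	// groups everything.
	byLegacy := map[string][]string{}
	for id, tags := range resourceTags {
		if c, ok := tags["KubernetesCluster"]; ok {
			byLegacy[c] = append(byLegacy[c], id)
		}
	}
	fmt.Println("grouped by KubernetesCluster:", byLegacy)

	// New scheme: the cluster name is part of the KEY, so every new cluster
	// adds another tag key that has to be activated and added to dashboards
	// or filters separately.
	byNew := map[string][]string{}
	for id, tags := range resourceTags {
		for k := range tags {
			if strings.HasPrefix(k, "kubernetes.io/cluster/") {
				name := strings.TrimPrefix(k, "kubernetes.io/cluster/")
				byNew[name] = append(byNew[name], id)
			}
		}
	}
	fmt.Println("grouped by kubernetes.io/cluster/*:", byNew)
}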

While it is deprecated by Kubernetes, there is no removal planned.

@olemarkus As I understand it, "deprecated" in product terms means this is going away, so make whatever changes you need in order to adjust to the new state of things. The change was made to fix a problem (two or more clusters per VPC/subnet), and until the change is completed you have the worst of both possible worlds. So should the change be finished or reverted? The state it is in now is bad.

Why would I not want to allocate cost by cluster? Kubernetes is infrastructure; knowing to bill QA for this cluster, dev for those two, and project duck head for the last three makes sense.

The dashboard/billing argument is poor; either I have:

  • fewer than 10 clusters, which is very easy to handle by hand or by fixing the script
  • more than 10 clusters, which needs automation, i.e. fix the script
  • more than 10 clusters and everything done by hand, in which case fix the process

The first two are manageable, and the third is not a kops problem.

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

/remove-lifecycle stale

Same for monitoring. It is much easier to filter clusters using the KubernetesCluster tag as a single filter variable than to add each kubernetes.io/cluster/* tag as a separate filter. So it is at least worth keeping as a feature.

@kivagant-ba Maybe yes and maybe no; it depends on the other problems it causes.

In some ways the cost accounting problem is also handled better with kubernetes.io/cluster/*: you can split the cost of shared resources between clusters. The issue is that if the legacy tag is set by the second cluster and you then delete that cluster, you may break other clusters, e.g. all the NAT gateways go away. It comes down to whether kops clusters should be sharing resources at all: if yes, it should be done safely, and if no, it should be made very clear that "you can't do that".
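
A toy Go sketch of that deletion risk (this is not how kops delete cluster actually works; the resource names and tag values are made up):

package main

import "fmt"

// Hypothetical resources in a VPC shared by two clusters.
var resources = map[string]map[string]string{
	"nat-gateway-1": {
		// Shared infrastructure: both clusters mark it "shared" under the new
		// scheme, but the single-valued legacy key can only name one of them.
		"KubernetesCluster":              "cluster2",
		"kubernetes.io/cluster/cluster1": "shared",
		"kubernetes.io/cluster/cluster2": "shared",
	},
	"asg-cluster2-nodes": {
		"KubernetesCluster":              "cluster2",
		"kubernetes.io/cluster/cluster2": "owned",
	},
}

// selectByLegacyTag picks everything whose KubernetesCluster value matches;
// on a shared VPC this can sweep up infrastructure other clusters still need.
func selectByLegacyTag(cluster string) []string {
	var doomed []string
	for id, t := range resources {
		if t["KubernetesCluster"] == cluster {
			doomed = append(doomed, id)
		}
	}
	return doomed
}

// selectByOwnership only picks resources the cluster actually owns.
func selectByOwnership(cluster string) []string {
	var doomed []string
	for id, t := range resources {
		if t["kubernetes.io/cluster/"+cluster] == "owned" {
			doomed = append(doomed, id)
		}
	}
	return doomed
}

func main() {
	fmt.Println("delete by legacy tag:", selectByLegacyTag("cluster2")) // includes nat-gateway-1
	fmt.Println("delete by ownership:", selectByOwnership("cluster2"))  // only asg-cluster2-nodes
}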

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
