Thanks for submitting an issue! Please fill in as much of the template below as
you can.
------------- BUG REPORT TEMPLATE --------------------
What kops version are you running? The command kops version will display it.
kops 1.9.0
What version of Kubernetes are you running? kubectl version will print it.
k8s 1.9.3
Client Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.6", GitCommit:"9f8ebd171479bec0ada837d7ee641dec2f8c6dd1", GitTreeState:"clean", BuildDate:"2018-03-21T15:21:50Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"9", GitVersion:"v1.9.3", GitCommit:"d2835416544f298c919e2ead3be3d0864b52323b", GitTreeState:"clean", BuildDate:"2018-02-07T11:55:20Z", GoVersion:"go1.9.2", Compiler:"gc", Platform:"linux/amd64"}
What cloud provider are you using?
AWS
What commands did you run?
kops get ig nodes -o yaml
The result for spec.image is an AMI built for k8s 1.8:
spec:
  image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-02-08
I expected to find an AMI for k8s 1.9, like this one:
k8s-1.9-debian-stretch-amd64-hvm-ebs-2018-04-11
https://github.com/kubernetes/kops/blob/master/channels/stable seems not to have been updated before the release of kops 1.9.0.
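The channel file is plain YAML that maps a kubernetesVersion range to an image name, so one way to see what the stable channel currently hands out is to fetch it directly. The curl/grep below is just an illustrative filter, not an official interface:

# Show the image entries for k8s 1.9 (surrounding lines carry
# providerID and the kubernetesVersion range they apply to):
curl -s https://raw.githubusercontent.com/kubernetes/kops/master/channels/stable \
  | grep -B1 -A2 'kope.io/k8s-1.9'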
Please run kops get --name my.example.com -o yaml to display your cluster manifest:

apiVersion: kops/v1alpha2
kind: Cluster
metadata:
  creationTimestamp: 2018-04-13T09:51:10Z
  name: k8s-init.lab-aws.domdom.com
spec:
  api:
    loadBalancer:
      type: Public
  authorization:
    rbac: {}
  channel: stable
  cloudLabels:
    Environment: lab
  cloudProvider: aws
  configBase: s3://cgasmi-kops/k8s-init.lab-aws.domdom.com
  etcdClusters:

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-04-13T09:51:12Z
  labels:
    kops.k8s.io/cluster: k8s-init.lab-aws.domdom.com
  name: master-eu-central-1a
spec:
  image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-02-08
  machineType: m4.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-eu-central-1a
  role: Master
  rootVolumeSize: 30
  subnets:

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-04-13T09:51:12Z
  labels:
    kops.k8s.io/cluster: k8s-init.lab-aws.domdom.com
  name: master-eu-central-1b
spec:
  image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-02-08
  machineType: m4.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-eu-central-1b
  role: Master
  rootVolumeSize: 30
  subnets:

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-04-13T09:51:12Z
  labels:
    kops.k8s.io/cluster: k8s-init.lab-aws.domdom.com
  name: master-eu-central-1c
spec:
  image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-02-08
  machineType: m4.large
  maxSize: 1
  minSize: 1
  nodeLabels:
    kops.k8s.io/instancegroup: master-eu-central-1c
  role: Master
  rootVolumeSize: 30
  subnets:

---

apiVersion: kops/v1alpha2
kind: InstanceGroup
metadata:
  creationTimestamp: 2018-04-13T09:51:13Z
  labels:
    kops.k8s.io/cluster: k8s-init.lab-aws.domdom.com
  name: nodes
spec:
  image: kope.io/k8s-1.8-debian-jessie-amd64-hvm-ebs-2018-02-08
  machineType: m4.large
  maxSize: 3
  minSize: 3
  nodeLabels:
    kops.k8s.io/instancegroup: nodes
  role: Node
  rootVolumeSize: 50
  subnets:
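All four instance groups above are pinned to the 1.8 jessie image. A possible workaround while the stable channel lags is to set spec.image yourself; a minimal sketch, assuming the standard kops workflow and that the 2018-04-11 stretch image is the one you want:

# Edit each instance group and point spec.image at the newer AMI, e.g.:
kops edit ig nodes --name k8s-init.lab-aws.domdom.com
#   spec:
#     image: kope.io/k8s-1.9-debian-stretch-amd64-hvm-ebs-2018-04-11
# Preview and apply the change, then roll the instances onto the new image:
kops update cluster --name k8s-init.lab-aws.domdom.com --yes
kops rolling-update cluster --name k8s-init.lab-aws.domdom.com --yes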
As stated in https://github.com/kubernetes/kops/blob/master/docs/images.md,
kope.io => 383156758163
The image k8s-1.9-debian-stretch-amd64-hvm-ebs-2018-04-11 you mentioned is owned by 487596255802.
Is this an official account?
Regards
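One way to check who actually owns a given AMI is to ask EC2 directly; a sketch assuming a configured AWS CLI and that the image is published in eu-central-1:

# List the image(s) matching that name, with the owning account ID:
aws ec2 describe-images \
  --region eu-central-1 \
  --filters "Name=name,Values=k8s-1.9-debian-stretch-amd64-hvm-ebs-2018-04-11" \
  --query 'Images[].[ImageId,OwnerId,Name]' \
  --output table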
The base image doesn't matter so much for the actual installation: all the relevant k8s pieces are installed through nodeup, and they come up just fine and report 1.9.3. But I agree, it's confusing for people who don't know that implementation detail.
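A quick way to confirm that is to compare the kubelet version against the OS image on the running nodes; kubectl reports both side by side:

# The VERSION column shows the kubelet (v1.9.3 here) while OS-IMAGE
# shows the Debian jessie base image the node booted from:
kubectl get nodes -o wide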
This image doesn't support M5s (nvme) though, so it'd be great if it got updated.
It would be really helpful for kops to check the instance type when an AMI is not compatible, or at least to document this clearly in the images README. A kops upgrade brought down our cluster due to this issue.
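Until then, the EC2 API can at least tell you whether an instance type requires NVMe for EBS before you switch to it; one way to check, assuming a recent AWS CLI:

# m4 attaches EBS as classic block devices, while m5 exposes them via
# NVMe, which the old jessie image lacks drivers for:
aws ec2 describe-instance-types \
  --instance-types m4.large m5.large \
  --query 'InstanceTypes[].[InstanceType,EbsInfo.NvmeSupport]' \
  --output table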
It looks like kope.io/k8s-1.9-debian-stretch-amd64-hvm-ebs-2018-05-27 does work for m5 / c5 now, just fyi...
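If you do move to that image on m5/c5, one way to confirm the NVMe driver is actually in use is to look at the block devices on a node (assumes SSH access):

# On an NVMe-backed instance the root EBS volume shows up as nvme0n1
# rather than xvda:
lsblk -o NAME,TYPE,SIZE,MOUNTPOINT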
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale
Just to add another data point to @cg-rsands' comment: we got caught out by kubernetes/kubernetes#56850, which requires a change to the systemd kubelet definitions when upgrading to k8s 1.10.
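For anyone hitting the same thing, the unit that nodeup writes can be inspected directly on a node to see what changed across the upgrade; this is standard systemd tooling, nothing kops-specific:

# Show the kubelet unit file (including any drop-ins) and recent logs:
systemctl cat kubelet.service
journalctl -u kubelet --since "1 hour ago" --no-pager | tail -n 50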
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.
Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
@fejta-bot: Closing this issue.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.