Containers-roadmap: EKS Support for Kubernetes 1.13

Created on 12 Dec 2018 · 27 comments · Source: aws/containers-roadmap

Aware 1.13 was only released last week, but thought it would be good to see it somewhere on the roadmap.

EKS Proposed

Most helpful comment

Amazon Elastic Container Service for Kubernetes (EKS) now supports Kubernetes version 1.13.7 for all clusters. Additionally, you can use ECR PrivateLink and Kubernetes PodSecurityPolicies with EKS 1.13 clusters.

Kubernetes version 1.13 allows you to use ECR PrivateLink to securely pull images to run your applications, and allows you to enable Kubernetes Pod Security Policies to validate pod creation and update requests against a set of rules. Additional version highlights include the beta launch of Kubernetes DryRun, TaintBasedEvictions, and Raw Block Volume support. Learn more about Kubernetes version 1.13 in the Kubernetes project release notes.

Learn more about the Kubernetes versions available for production workloads on Amazon EKS and how to update your cluster to version 1.13 in the EKS documentation.
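To make the Pod Security Policies feature concrete, here is a minimal sketch of a restrictive PodSecurityPolicy; this is not from the announcement, and the policy name and rule choices are illustrative assumptions:

```yaml
# Hypothetical example: a restrictive PodSecurityPolicy.
# The name and rule choices are illustrative assumptions.
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false              # reject privileged containers
  allowPrivilegeEscalation: false
  runAsUser:
    rule: MustRunAsNonRoot       # require containers to run as non-root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                       # allow only common non-host volume types
    - configMap
    - secret
    - emptyDir
    - persistentVolumeClaim
```

Note that a PodSecurityPolicy only takes effect once the PodSecurityPolicy admission controller is enabled and a Role/ClusterRole grants `use` on the policy to the relevant service accounts.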

Notes

  • The EKS APIs support creating 1.13 clusters in all AWS regions today.
  • The AWS console does not yet support creating EKS 1.13 clusters in all regions; we are rolling out console support for creating 1.13 clusters to all regions over the next week.
  • eksctl supports 1.13 clusters as of version 0.1.36.
  • We will close this issue when 1.13 cluster creation is supported in the console in all regions.
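For the ECR PrivateLink support mentioned above, pulling images privately requires interface VPC endpoints for ECR in the cluster's VPC. A hedged sketch with the AWS CLI follows; every ID below is a placeholder, and the region in the service name is an assumption:

```shell
# Hypothetical sketch: create an interface VPC endpoint so worker nodes
# can pull from ECR over PrivateLink. All IDs below are placeholders.
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0example \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.ecr.dkr \
  --subnet-ids subnet-0example1 subnet-0example2 \
  --security-group-ids sg-0example \
  --private-dns-enabled
```

ECR stores image layers in S3, so a fully private setup also needs an S3 gateway endpoint in the same VPC.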

All 27 comments

Especially with the NLB fixes in it. See #62.

Will this also cover migration to Pod Security Policy?

Any ETA for 1.13 now that it is GA? Also, 1.10/1.11 do not support volumeBindingMode: WaitForFirstConsumer, which makes StatefulSets with affinity/anti-affinity unusable :(
Pods get scheduled on one node, while the PV is created in a completely different availability zone.

@md2k WaitForFirstConsumer was beta in 1.10, so it should work already?
https://kubernetes.io/docs/concepts/storage/storage-classes/#local
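For reference, the topology-aware binding discussed here is configured on the StorageClass. A minimal sketch for the in-tree EBS provisioner follows; the class name is a placeholder:

```yaml
# Illustrative StorageClass using the in-tree EBS provisioner with
# delayed (topology-aware) binding. The name is a placeholder.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-delayed
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer   # delay PV creation until a pod is scheduled
```

With delayed binding, the PV is provisioned in the availability zone of the node the pod actually lands on, rather than in an arbitrary zone at PVC creation time.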

When I try to set this on EKS 1.11.5 I get an error; am I doing something wrong? I'm going to double-check, but it would still be very useful to know any estimate for more recent GA versions of Kubernetes on AWS.

Did some tests, and it actually doesn't work. With WaitForFirstConsumer in 1.11, Pods wait for PVs while the PVC waits for its first consumer:

PVC Event:   Normal     WaitForFirstConsumer  69s (x63 over 16m)  persistentvolume-controller  waiting for first consumer to be created before binding
Pod Event:   Warning  FailedScheduling  4m13s (x25 over 4m41s)  default-scheduler  0/8 nodes are available: 2 Insufficient memory, 3 node(s) didn't find available persistent volumes to bind, 5 node(s) didn't match node selector.

StorageClass:

kubectl describe storageclasses.storage.k8s.io
Name:            gp2
IsDefaultClass:  Yes
Annotations:     kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"name":"gp2"},"parameters":{"fsType":"ext4","type":"gp2"},"provisioner":"kubernetes.io/aws-ebs","reclaimPolicy":"Delete","volumeBindingMode":"WaitForFirstConsumer"}
,storageclass.kubernetes.io/is-default-class=true
Provisioner:           kubernetes.io/aws-ebs
Parameters:            fsType=ext4,type=gp2
AllowVolumeExpansion:  <unset>
MountOptions:          <none>
ReclaimPolicy:         Delete
VolumeBindingMode:     WaitForFirstConsumer
Events:                <none>

Experiencing the same issue: WaitForFirstConsumer does not create a PV. Both the Pod and the PVC stay pending.
So AWS does not seem to support this, even though it is mentioned in the Kubernetes documentation.

WaitForFirstConsumer also, hilariously, doesn't work with StatefulSets on 1.11.8-eks in my testing. As others noted, the PVC and the StatefulSet each wait for the other.

Events:
  Type       Reason                Age                  From                         Message
  ----       ------                ----                 ----                         -------
  Normal     WaitForFirstConsumer  2s (x16 over 3m20s)  persistentvolume-controller  waiting for first consumer to be created before binding
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  19s (x7 over 2m30s)  default-scheduler  0/3 nodes are available: 3 node(s) didn't find available persistent volumes to bind.

Ran into the same issue. I was hoping that WaitForFirstConsumer would solve the problem of volumes being created in a zone where the nodes do not have sufficient resources to actually hold the pod meant to consume the volume.

Also, 1.13 has FlexVolume support that helps us resize EBS volumes while in use. Quoting kubernetes.io (https://kubernetes.io/docs/concepts/storage/persistent-volumes/#resizing-an-in-use-persistentvolumeclaim):

'Expanding in-use PVCs for FlexVolumes is added in release 1.13. To enable this feature use ExpandInUsePersistentVolumes and ExpandPersistentVolumes feature gates. The ExpandPersistentVolumes feature gate is already enabled by default. If the ExpandInUsePersistentVolumes is set, FlexVolume can be resized online without pod restart.'
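As a hedged illustration of the expansion flow described in that quote: once the StorageClass sets allowVolumeExpansion: true and the feature gates are enabled, resizing is just raising the PVC's storage request. The claim name and sizes below are placeholders:

```yaml
# Illustrative sketch: expanding a PVC in place. Assumes its StorageClass
# sets allowVolumeExpansion: true and the feature gates quoted above are on.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-example             # placeholder name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 20Gi              # was 10Gi; raising this value triggers the resize
```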

Four months is an eternity in the k8s world in terms of features. kops has 1.13 support while GKE/AKS/EKS are still on 1.12 -- why are the folks doing some of the heavy core contributions so gun-shy?

AFAICT, kops 1.13.0-alpha.1, gke 1.13.4-gke.10 (Public preview), AKS is 1.12 right now. EKS supporting 1.12 isn't out of line, really.

kops isn't even GA for 1.12 yet (glad to see they've got early alphas for 1.13 and 1.14, hopefully their pace will pick up)

1.13 has been GA for two weeks on Azure AKS :)

How about v1.14 ?

How about v1.14 ?

You should see: https://github.com/aws/containers-roadmap/issues/212

It looks like AWS/EKS is being more informative about release timings. 🎉 This AWS blog article promises 1.13 for June, with 1.10 being forcibly discontinued in July. It also suggests 1.14 will land ~90 days after 1.13. So there looks to be no chance of 1.14 before September 2019, about 6 months after 1.14 went GA.

https://aws.amazon.com/blogs/compute/updates-to-amazon-eks-version-lifecycle/

The cadence suggested by this article seems unsustainable, since it suggests 90-day spacing between releases (based on the past cadence of 4 releases a year). But from 2019 that changed, and the target is 11 weeks (~77 days) between releases. The article does say it uses 90 days 'for simplicity', so hopefully '90 days' actually means 'less than 90 days' 😄

Would love to see 1.13 rollout soon.

I've switched to a manual bare-metal cluster and now I have v1.14.2.

Timeline

Based on this timeline, we can anticipate 1.13 this month! :shipit:

It's a race between EKS 1.13 and K8S 1.15! Kubernetes 1.15 comes out of beta into GA in one week (17 June).

The AWS blog article suggests EKS aims to track about 6 months behind GA. If so, that would be EKS 1.14 in September and EKS 1.15 just before New Year.

The kubectl install instructions are not yet updated with the latest version:
https://docs.aws.amazon.com/eks/latest/userguide/install-kubectl.html

Do you have any plans to update it?
https://github.com/awsdocs/amazon-eks-user-guide/blob/master/doc_source/install-kubectl.md


Why don't you point to the official kubectl version?
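Until the EKS page is updated, a minimal sketch of installing a matching upstream kubectl from the official release bucket; the version, OS, and architecture below are assumptions, so adjust them for your machine:

```shell
# Download the upstream kubectl matching the 1.13 cluster version.
# v1.13.7 / linux / amd64 are assumptions; adjust as needed.
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.13.7/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
kubectl version --client
```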

It's a race between EKS 1.13 and K8S 1.15! Kubernetes 1.15 comes out of beta into GA in one week (17 June).

The winner is... EKS 1.13, beating K8s 1.15 by, I think, ~12 hours?

K8s 1.15 was supposed to come out Monday 17 June, but a last-minute blocking issue delayed it to today, 19 June, allowing EKS 1.13 to slip in yesterday on 18 June for the win! Congratulations to all involved 😄

@joaovitor I've merged your pull request into the Amazon EKS docs on GitHub. This change should hit our public docs site in an hour or two.

@tabern i noticed the ECR PrivateLink/SDK fix backport for 1.13 didn't seem to land upstream (https://github.com/kubernetes/kubernetes/pull/73755#issuecomment-480360891). Does this mean the issue was fixed in a different way, or is the EKS AMI using a custom-patched kubelet build?

@lstoll this was a backport patch that we applied to EKS clusters only, as it was not accepted upstream per https://github.com/kubernetes/kubernetes/pull/73755#issuecomment-480360891.
