What happened?
eksctl automatically upgraded the control plane version while running eksctl update cluster
What you expected to happen?
It should not upgrade the control plane version unless explicitly asked to.
How to reproduce it?
Run eksctl create cluster -f eks_config.yaml with the following config:
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: us-east-1-prod-eks
  region: us-east-1
  version: "1.12"
vpc:
  cidr: "10.0.0.0/16"
Then try to update the cluster with eksctl update cluster -f eks_config.yaml:
[ℹ] using region us-east-1
[!] NOTE: config file is only used for finding cluster name and region, deep cluster configuration changes are not yet implemented
[ℹ] re-building cluster stack "eksctl-us-east-1-prod-eks-cluster"
[✔] all resources in cluster stack "eksctl-us-east-1-prod-eks-cluster" are up-to-date
[ℹ] (plan) would upgrade cluster "us-east-1-prod-eks" control plane from current version "1.12" to "1.13"
[!] no changes were applied, run again with '--approve' to apply the changes
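One way to keep automation from approving an unintended upgrade is a small pre-flight check before ever passing --approve. This is a minimal sketch, not an eksctl feature: the helper name is hypothetical, and it assumes the live version would be fetched with the aws CLI.

```shell
#!/bin/sh
# Hypothetical guard: since 'eksctl update cluster --approve' upgrades to the
# next Kubernetes version regardless of the version pinned in the config file,
# only run it when an upgrade is actually intended.

# should_approve DESIRED CURRENT
#   returns 0 (and prints a note) when the pinned and live versions differ,
#   i.e. an upgrade is really wanted; returns 1 when they already match.
should_approve() {
  desired="$1"
  current="$2"   # in practice: aws eks describe-cluster --name <name> --query 'cluster.version' --output text
  if [ "$desired" = "$current" ]; then
    echo "cluster already at $desired; skipping 'eksctl update cluster --approve'"
    return 1
  fi
  echo "upgrade intended: $current -> $desired"
  return 0
}

should_approve "1.12" "1.12" || true   # would skip the --approve run
should_approve "1.13" "1.12" || true   # would proceed with --approve
```

With this kind of guard, the pipeline only reaches the --approve invocation when the config version has deliberately been bumped.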
Related #909
Hi @dhanvi, thank you for your report. I see in the last line of the logs that it says
[!] no changes were applied, run again with '--approve' to apply the changes
Did you check whether it actually upgraded the cluster version or not? You can see the version by running eksctl get cluster us-east-1-prod-eks.
It didn't upgrade the cluster version without the --approve flag. (It does upgrade with the approve flag)
Ideally I would expect eksctl not to upgrade the cluster (even with the --approve flag), since the YAML defines the version as 1.12.
Oh, I see. I think that makes sense. There are several commands where the config file is only used to get the region and the cluster name. We will discuss it in the team.
The behaviour of the upgrade ignoring the configured version is totally non-intuitive, so I would consider this a bug.
We are trying to use eksctl as part of a larger automation piece to manage EKS clusters. Having to review the changes and approve them without having any control over what version of EKS we are upgrading to is something very difficult to script.
> Oh, I see. I think that makes sense. There are several commands where the config file is only used to get the region and the cluster name. We will discuss it in the team.
Has this been discussed with the team?
As mentioned in the Slack channel, I ran into this behavior. I expected that the version in the cluster config file would be respected. Today, an update of eksctl may also upgrade the cluster's control plane version. E.g. switching to eksctl v0.17.0 forcefully upgraded all clusters to 1.15 when eksctl update cluster was run.
Our current CI pipeline runs the following commands (with some scripting around them) to ensure a cluster matches the given cluster configuration file:
eksctl create cluster -f cluster.yml
eksctl update cluster -f cluster.yml
eksctl create nodegroup -f cluster.yml
eksctl delete nodegroup --wait --only-missing -f cluster.yml
eksctl utils update-kube-proxy --cluster=$clustername
eksctl utils update-aws-node --cluster=$clustername
eksctl utils update-coredns --cluster=$clustername
eksctl upgrade nodegroup -f cluster.yml --kubernetes-version=$ourVersion
With this approach we have to pin the eksctl version. The current ad-hoc nature of the eksctl commands makes it really hard to maintain multiple clusters in an automated way. It is hard to write the target configuration down in a file and maintain that target state via merge requests.
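One detail of that pipeline, deriving $ourVersion from the config file itself rather than hard-coding it, can be sketched as follows. The sed parsing and file paths are illustrative assumptions; a real YAML parser such as yq would be more robust.

```shell
#!/bin/sh
set -eu
# Hypothetical helper: read the pinned control-plane version from the cluster
# config so it can be passed explicitly to 'eksctl upgrade nodegroup'.

# config_version FILE -> prints the value of the first `version:` key
config_version() {
  sed -n 's/^[[:space:]]*version:[[:space:]]*"\{0,1\}\([0-9][0-9.]*\)"\{0,1\}.*/\1/p' "$1" | head -n 1
}

# Demo with a sample config matching the one in this issue.
cat > /tmp/cluster.yml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: us-east-1-prod-eks
  region: us-east-1
  version: "1.12"
EOF

ourVersion="$(config_version /tmp/cluster.yml)"
echo "pinned version: $ourVersion"
# ...which would then feed the last pipeline step, e.g.:
#   eksctl upgrade nodegroup -f cluster.yml --kubernetes-version="$ourVersion"
```

Keeping the version in a single place in the config file is what makes the merge-request-driven workflow described above possible.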
Hi @hikhvar ,
Thank you for this feedback, it's really useful. This command has been creating confusion for a long time. It is commonly mistaken to be something like the kubectl update command but it is in fact an "upgrade control plane" command.
$ eksctl update cluster --help
Upgrade control plane to the next Kubernetes version if available. Will also perform any updates needed in the cluster stack if resources are missing.
Usage: eksctl update cluster [flags]
General flags:
-n, --name string EKS cluster name
-r, --region string AWS region
-f, --config-file string load configuration from a file (or stdin if set to '-')
--approve Apply the changes
--timeout duration maximum waiting time for any long-running operation (default 35m0s)
AWS client flags:
-p, --profile string AWS credentials profile to use (overrides the AWS_PROFILE environment variable)
Common flags:
-C, --color string toggle colorized logs (valid options: true, false, fabulous) (default "true")
-h, --help help for this command
-v, --verbose int set log level, use 0 to silence, 4 for debugging and 5 for debugging with AWS debug logging (default 3)
Use 'eksctl update cluster [command] --help' for more information about a command.
eksctl update cluster only tries to update the CloudFormation stacks with top-level resources if they are missing, but it will not apply the changes in the config file thoroughly, nor will it delete resources that are no longer needed.
I think what you would like to have is something like the eksctl apply command that we haven't implemented yet.
Regarding reading the version from the config file I think that makes a lot of sense, I will raise this again with the team.
I think implementing an eksctl apply command would help a lot of people. Moreover, it would allow us to implement GitOps for clusters. Our chain of commands tries to mimic an eksctl apply behaviour. However, we fail to achieve that.
Closing as duplicate of https://github.com/weaveworks/eksctl/issues/909 and duplicate of #462