Here is an example which should be documented; it uses only pre-existing IAM and VPC resources:
apiVersion: eksctl.io/v1alpha4
kind: ClusterConfig
metadata:
  name: test-cluster-c-1
  region: eu-north-1
vpc:
  securityGroup: "sg-0f2ae54eb340e8191"
  sharedNodeSecurityGroup: "sg-02a47cee779b317a7"
  subnets:
    private:
      eu-north-1c:
        id: "subnet-065fe8f12d0910d06"
        cidr: "10.1.128.0/19"
      eu-north-1b:
        id: "subnet-06ebc3649c1321fc5"
        cidr: "10.1.96.0/19"
iam:
  serviceRoleARN: "arn:aws:iam::123:role/eksctl-test-cluster-a-3-cluster-ServiceRole-5YEWP7CFA24K"
nodeGroups:
  - name: ng2-private
    instanceType: m5.large
    desiredCapacity: 1
    privateNetworking: true
    securityGroups:
      withShared: true
      withLocal: false
      attachIDs: [sg-0b85ff315ea644478]
    iam:
      instanceProfileARN: "arn:aws:iam::123:instance-profile/eksctl-test-cluster-a-3-nodegroup-ng2-private-NodeInstanceProfile-Y4YKHLNINMXC"
      instanceRoleARN: "arn:aws:iam::123:role/eksctl-test-cluster-a-3-nodegroup-NodeInstanceRole-DNGMQTQHQHBJ"
We cannot just publish this as-is: it's unclear to the user what all the security groups and IAM ARNs are, so we need to explain the concepts before we publish it. Perhaps we could create some graphical aid to explain this.
Other fields to document:
- SSH keys
- IAM add-ons
- AZs
Would love this. I'm currently having trouble translating cmd flags to config file to git-based infrastructure versioning.
As a starting point, can you point to me (in Go code) where the config struct is parsed?
Would love this. I'm currently having trouble translating cmd flags to config file to git-based infrastructure versioning.
It's not direct, and you already can do more with config than you can do with flags. In the future we'd like to reduce the number of flags.
As a starting point, can you point to me (in Go code) where the config struct is parsed?
I'm glad you asked! Please see github.com/weaveworks/eksctl/pkg/apis/eksctl.io/v1alpha4.
Thanks a lot for such a prompt response! I'll study that - I also found this, leaving it here for the next soul to see:
https://godoc.org/github.com/weaveworks/eksctl/pkg/apis/eksctl.io/v1alpha4
That soul is me! If you have any tips on how to translate them, I would love more information. I'm trying to use the config file as much as possible. Was looking for documentation.
This is what I've built for myself; it doesn't contain all the possible configuration parameters:
apiVersion: eksctl.io/v1alpha4
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-west-2
vpc:
  id: "vpc-xxxxxxxx"
  cidr: "xxx.xxx.xxx.xxx/21"
  subnets:
    public:
      us-west-2c: {id: "subnet-xxx"}
      us-west-2b: {id: "subnet-xxx"}
      us-west-2a: {id: "subnet-xxx"}
nodeGroups:
  - name: dev-t3-micro
    ami: ami-xxx
    labels: {pool: dev-t3-micro}
    instanceType: t3.micro
    desiredCapacity: 3
    minSize: 1
    maxSize: 5
    volumeSize: 50
    volumeType: gp2
    iam:
      withAddonPolicies:
        imageBuilder: true
        autoScaler: true
        externalDNS: true
  - name: prod-c5-4xlarge
    ami: ami-xxx
    labels: {pool: prod-c5-4xlarge}
    instanceType: c5.4xlarge
    desiredCapacity: 3
    minSize: 1
    maxSize: 3
    volumeSize: 50
    volumeType: gp2
    iam:
      withAddonPolicies:
        imageBuilder: true
        autoScaler: true
        externalDNS: true
You can map all the fields in the YAML config to the fields in this section.
So I'm personally quite confused by what is intended by withLocal for node groups:
securityGroups:
  withShared: true
  withLocal: false
Note that the casing in the above example file is incorrect in some places, e.g. AutoScaler ☞ autoScaler, so be careful.
If I have withLocal: false and privateNetworking: true then my pods fail to start in the created EKS cluster:
Warning FailedCreatePodSandBox 1m kubelet, ip-162-28-155-157.ec2.internal Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "2099408ab6381b2bdd4a9ca458f876bd91609b5947e34e0ad867cbe3776b192f" network for pod "hello-from-kube-774d68dd97-9r8t2": NetworkPlugin cni failed to set up pod "hello-from-kube-774d68dd97-9r8t2_kube-public" network: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused", failed to clean up sandbox container "2099408ab6381b2bdd4a9ca458f876bd91609b5947e34e0ad867cbe3776b192f" network for pod "hello-from-kube-774d68dd97-9r8t2": NetworkPlugin cni failed to teardown pod "hello-from-kube-774d68dd97-9r8t2_kube-public" network: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused"]
Normal SandboxChanged 2s (x10 over 1m) kubelet, ip-162-28-155-157.ec2.internal Pod sandbox changed, it will be killed and re-created.
If I change to withLocal: true (and also correct VolumeType ☞ volumeType) then they seem to work OK.
Here's what worked for me for a pre-existing shared VPC and pre-existing shared IAM roles, as described in #573:
apiVersion: eksctl.io/v1alpha4
kind: ClusterConfig
metadata:
  name: my-test
  region: us-east-1
vpc:
  id: "vpc-11111"
  cidr: "152.28.0.0/16"
  subnets:
    private:
      us-east-1d:
        id: "subnet-1111"
        cidr: "152.28.152.0/21"
      us-east-1c:
        id: "subnet-11112"
        cidr: "152.28.144.0/21"
      us-east-1a:
        id: "subnet-11113"
        cidr: "152.28.136.0/21"
iam:
  serviceRoleARN: "arn:aws:iam::11111:role/eks-base-service-role"
nodeGroups:
  - name: my-test-m5-private
    labels: {pool: my-test-m5-private}
    instanceType: m5.large
    desiredCapacity: 3
    minSize: 1
    maxSize: 15
    volumeSize: 50
    volumeType: gp2
    iam:
      instanceProfileARN: "arn:aws:iam::11111:instance-profile/eks-nodes-base-role"
      instanceRoleARN: "arn:aws:iam::1111:role/eks-nodes-base-role"
    privateNetworking: true
    securityGroups:
      withShared: true
      withLocal: true
      attachIDs: ['sg-11111', 'sg-11112']
    allowSSH: true
    sshPublicKeyName: 'my-instance-key'
    tags:
      'environment:basedomain': 'example.org'
So I'm personally quite confused by what is intended by withLocal for node groups:
securityGroups:
  withShared: true
  withLocal: false
If you just want to attach extra SGs, you shouldn't need to set withLocal and withShared. These settings are for those who need to use pre-existing SGs only.
The shared SG is one that we create for all nodegroups; it allows cluster-wide connectivity between all nodes in all nodegroups. Normally you want withShared: true, and it is the default. If you disable it, you isolate the given nodegroup from other nodegroups, unless you use a custom SG that caters for what you need.
And the local SG is one that we create specifically for a given nodegroup only; it is there to control external access as well as access to the control plane. E.g. when you enable SSH, the local SG is where port 22 will be open. Just like withShared, withLocal is enabled by default.
Hope this makes it clearer. As I said above, documentation is needed and help would be hugely appreciated, but we might want to create a new website first, so we can have topical pages.
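For example, here is a minimal sketch (the SG ID and name are placeholders, not from this thread) of a nodegroup that disables both eksctl-managed SGs and relies entirely on a pre-existing custom SG:
nodeGroups:
  - name: isolated-ng
    instanceType: m5.large
    desiredCapacity: 1
    securityGroups:
      withShared: false
      withLocal: false
      attachIDs: ['sg-00000000']  # this custom SG must cater for node-to-node and control-plane traffic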
Very helpful!
So I observed that when I specified a vpc securityGroup and sharedNodeSecurityGroup, and omitted withShared: and withLocal: in the nodegroup, it tried to create a security group in the worker node stack.
I explicitly attached the sharedNodeSecurityGroup to the nodegroup:
securityGroups:
  withShared: false
  withLocal: false
  attachIDs: ['sg-1111111']
And that seems to work. I'm not sure if the sharedNodeSecurityGroup is implied in the attachIDs or if I need to be explicit.
With the IAM stuff, this empowers unprivileged users, who have no access to create IAM roles or security groups, to spin up ephemeral EKS clusters!
Those clusters that re-use security groups could communicate with one another across clusters. For my use case, that's not problematic, and it might be very useful for heptio/gimbal, and multi-cluster istio, etc.
And that seems to work. I'm not sure if the sharedNodeSecurityGroup is implied in the attachIDs or if I need to be explicit.
It is implied.
With the IAM stuff, this empowers unprivileged users, who have no access to create IAM roles or security groups, to spin up ephemeral EKS clusters!
Yes, that use-case was fully thought of, apologies that we didn't find the time to document it properly yet.
I love eksctl! I think you've spent your time and effort very wisely. I hope to contribute some documentation once my own understanding is accurate.
By the way, when I tested omitting the sharedNodeSecurityGroup from the attachIDs, it did not add that security group to the worker nodes. I had attached other security groups in that list, so maybe it's only implied when you don't list any attachIDs?
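If that's right, then when you do pass attachIDs and still want the shared SG, it seems safest to list it explicitly; a sketch with placeholder IDs:
securityGroups:
  withShared: false
  withLocal: false
  # when attachIDs is non-empty, include the shared SG yourself if you want it
  attachIDs: ['sg-sharednode1111', 'sg-extra2222']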
Also, somewhat counterintuitively, the nodegroup config sshPublicKeyName does not select an existing EC2 keypair (in fact it seems to do nothing). However, sshPublicKeyPath is the equivalent of --ssh-public-key, so you use it to select an existing EC2 keypair, named cfn-ec2-key for instance:
allowSSH: true
sshPublicKeyPath: 'cfn-ec2-key'
@StevenACoffman yes, SSH attributes are in need of improvement (#386).
I think this list is missing version:
apiVersion: eksctl.io/v1alpha4
kind: ClusterConfig
metadata:
  name: my-test
  region: us-east-1
  version: 1.11
I am having trouble with this config file; the IAM policies do not get attached:
apiVersion: eksctl.io/v1alpha4
kind: ClusterConfig
metadata:
  name: demo
  region: us-west-2
  version: '1.11'
nodeGroups:
  - name: dev-t3-small
    labels: {pool: dev-t3-micro}
    instanceType: t3.small
    desiredCapacity: 3
    minSize: 1
    maxSize: 5
    volumeSize: 50
    VolumeType: gp2
    iam:
      withAddonPolicies:
        imageBuilder: true
        AutoScaler: true
        ExternalDNS: true
That example you copied was wrong, please use 'autoScaler' and 'externalDNS'.
VolumeType should also be volumeType. @mofirouz can you edit the case of your example? It is causing some confusion.
Apologies all - must have been a copy-paste issue. Updated above.
Can anyone explain what the imageBuilder and EBS fields denote in the configuration for nodegroups?
iam:
  withAddonPolicies:
    imageBuilder: true
    ebs: true
Hi Romil! The 'imageBuilder' one allows full ECR access; you would want this if you are looking to run some sort of CI that needs to push images to ECR. And 'ebs' is for the new EBS CSI driver, which I am not familiar with, but I know it's an up-and-coming replacement for the built-in EBS driver.
@errordeveloper Yes, I was planning to create different nodegroups: one with all the necessary components that jenkins-x requires, which would run the CI jobs, and another that would run the application containers. I guess the second nodegroup does not need imageBuilder access.
Thanks for the help.
Yes, indeed, only the Jenkins X nodegroup needs push access to ECR. Pull access is naturally required by all EKS nodes.
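A minimal sketch of that split (names and sizes are illustrative, not from this thread):
nodeGroups:
  - name: jenkins-x-ci
    instanceType: m5.large
    desiredCapacity: 1
    iam:
      withAddonPolicies:
        imageBuilder: true  # full ECR access so CI can push images
  - name: apps
    instanceType: m5.large
    desiredCapacity: 3
    # no imageBuilder here: application nodes only need the default pull access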
@errordeveloper EBS suggests consistency is going off the rails quite a bit? Most things in the config are lower camel case, but then we have some acronyms in caps like xxxYyyDNS and xxxYyyARN, then some things that aren't even acronyms are in capitals too, like xxxID and xxxSSH, then EBS even starts with a capital in direct contrast to iam, vpc etc. It's OCD hell 😄
Are there any conventions or guidelines you aim to follow for the project?
@errordeveloper EBS suggests consistency is going off the rails quite a bit?
No, it is ebs.
Are there any conventions or guidelines you aim to follow for the project?
The conventions are borrowed from the Kubernetes API.
I've updated @romil-punetha's comment above, so it now says ebs.
@whereisaaron thanks for pointing this out!
Is there a Kind: NodeGroup available, as there is Kind: ClusterConfig? I need a YAML file to create a nodegroup.
Please take a look at the docs; you can create new nodegroups by adding an appropriate definition to your ClusterConfig and passing it to 'eksctl create nodegroup'.
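A minimal sketch of that flow, assuming the cluster already exists (all names here are placeholders):
apiVersion: eksctl.io/v1alpha4
kind: ClusterConfig
metadata:
  name: my-cluster
  region: us-west-2
nodeGroups:
  - name: extra-ng  # the new nodegroup to add to the existing cluster
    instanceType: m5.large
    desiredCapacity: 2
Then run something like eksctl create nodegroup --config-file=cluster.yaml.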
@errordeveloper It worked. Thanks.
This worked for me. And yes, I had to use "sshPublicKeyPath" to specify an existing SSH key pair.
apiVersion: eksctl.io/v1alpha4
kind: ClusterConfig
metadata:
  name: stang-eksctl-jx-swa-vpc-cluster-01
  region: us-east-1
  version: "1.12"
vpc:
  cidr: "10.99.232.0/23"
availabilityZones: ["us-east-1a", "us-east-1b"]
nodeGroups:
Worth noting somewhere that if you choose to set attachPolicyARNs, you must include the default node policies:
nodeGroups:
  - name: my-special-nodegroup
    iam:
      attachPolicyARNs: ['arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy', 'arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy']
We tried to create an EKS cluster with the config below. As far as we can check, it works like a charm: all worker nodes were created and all CloudFormation stacks completed. However, the result on the command line is still pending:
apiVersion: eksctl.io/v1alpha4
kind: ClusterConfig
metadata:
  name: eks-cluster-production
  region: ap-southeast-1
vpc:
  securityGroup: "sg-XXXX"
  sharedNodeSecurityGroup: "sg-YYYY"
  id: "vpc-9999"  # (optional, must match VPC ID used for each subnet below)
  cidr: "10.21.X.X/X"  # (optional, must match CIDR used by the given VPC)
  subnets:
    # must provide 'private' and/or 'public' subnets by availability zone as shown
    private:
      ap-southeast-1a:
        id: "subnet-9999"
        cidr: "10.21.X.0/24"  # (optional, must match CIDR used by the given subnet)
      ap-southeast-1b:
        id: "subnet-9999"
        cidr: "10.21.X.0/24"  # (optional, must match CIDR used by the given subnet)
      ap-southeast-1c:
        id: "subnet-9999"
        cidr: "10.21.X.0/24"  # (optional, must match CIDR used by the given subnet)
nodeGroups:
ubuntu@ip-10-21-2-125:~$ eksctl create cluster -f wkeworkshoprd.yml
[ℹ] using region ap-southeast-1
[✔] using existing VPC (vpc-0a68285c895967781) and subnets (private:[subnet-9999 subnet-9999 subnet-9999] public:[])
[!] custom VPC/subnets will be used; if resulting cluster doesn't function as expected, make sure to review the configuration of VPC/subnets
[ℹ] nodegroup "EKS-XXXX" will use "ami-XXXX" [AmazonLinux2/1.12]
[ℹ] creating EKS cluster "eks-cluster-XXXX" in "ap-southeast-1" region
[ℹ] will create a CloudFormation stack for cluster itself and 1 nodegroup stack(s)
[ℹ] if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=ap-southeast-1 --name=eks-cluster-production1'
[ℹ] 2 sequential tasks: { create cluster control plane "eks-cluster-production1", create nodegroup "EKS-WORKER" }
[ℹ] building cluster stack "eksctl-eks-cluster-production1-cluster"
[ℹ] deploying stack "eksctl-eks-cluster-production1-cluster"
[ℹ] building nodegroup stack "eksctl-eks-cluster-production1-nodegroup-EKS-WORKER"
[ℹ] deploying stack "eksctl-eks-cluster-production1-nodegroup-EKS-WORKER"
[✔] all EKS cluster resource for "eks-cluster-production1" had been created
[✔] saved kubeconfig as "/home/ubuntu/.kube/config"
[ℹ] adding role "arn:aws:iam::635780203112:role/eksworkshop-admin" to auth ConfigMap
[ℹ] nodegroup "EKS-WORKER" has 0 node(s)
[ℹ] waiting for at least 1 node(s) to become ready in "EKS-XXXX"
Another sample of attachPolicyARNs in nodeGroups:
nodeGroups:
  - name: ngplatformnonprod09232
    instanceType: m5.xlarge
    minSize: 2
    maxSize: 8
    privateNetworking: true
    iam:
      attachPolicyARNs:
        - arn:aws:iam::aws:policy/ElasticLoadBalancingFullAccess
        - arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
        - arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
        - arn:aws:iam::1111111111:policy/kube2iam
      withAddonPolicies:
        autoScaler: true
        imageBuilder: true
Hi folks, is it possible to put tags on subnets during cluster creation? I'd like to provision them to use with alb-ingress.
Thanks in advance.
Is it creating a new SSH key if you are passing sshPublicKeyName: 'my-instance-key'?
I am passing my public key:
ssh:
  allow: true
  publicKeyPath: ~/.ssh/my-key
to my config file and it ends up creating a new key instead of using an existing one.
Hi folks, is it possible to put tags on subnets during cluster creation? I'd like to provision them to use with alb-ingress.
Thanks in advance.
I was looking for an option to add those tags as well, since I too want to use the ALB ingress controller. I think the kubernetes.io/role/elb and kubernetes.io/role/internal-elb tags are present by default: https://github.com/weaveworks/eksctl/pull/445/files
But I'm not sure yet if the kubernetes.io/cluster/<cluster-name> tag is added too. I can see that the security groups are getting that tag. But I can't find the same happening to all subnets.
AWS docs state that it is required on all subnets though...
Edit: If I'm reading this right, then all subnets are tagged with the kubernetes.io/cluster/<cluster-name> tag by default on cluster creation. Thus all tags are present for using the alb-ingress-controller.
I am struggling to add security groups while creating the cluster with eksctl. I have a pre-existing VPC and one control-plane SG. How many SGs should I create in the VPC for EKS, and where should I mention them in the YAML file: which one goes under vpc and which one under the worker nodegroup? Kindly elaborate.
How can I pass parameters, like with CloudFormation?
cluster.yaml
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${CLUSTER_NAME}
  region: ${AWS_REGION}
managedNodeGroups:
  - name: nodegroup
    desiredCapacity: ${NODEGROUP_DESIRED_CAPACITY}
    iam:
      withAddonPolicies:
        albIngress: true
cluster-parameters.json
[
  {
    "ParameterKey": "CLUSTER_NAME",
    "ParameterValue": "k1-cluster"
  },
  {
    "ParameterKey": "AWS_REGION",
    "ParameterValue": "us-east-1"
  },
  {
    "ParameterKey": "NODEGROUP_DESIRED_CAPACITY",
    "ParameterValue": 4
  }
]
Running eksctl with parameters
$ eksctl create cluster -f cluster.yaml --parameters file://cluster-parameters.json
Hi @danilobrinu, parameterizing or templating the config file is not supported in eksctl, but feel free to use other tools like ksonnet or jk config.
@martina-if any plans to support passing parameters?
@danilobrinu No, no plans so far to support that since there are plenty of tools out there for it. We do support templating for quickstart profiles though.
@martina-if ksonnet is discontinued and jk-config doesn't have support for Amazon EKS :(
@danilobrinu I haven't used it myself so I might be wrong, but what I thought is that you could use any YAML or JSON templating tool to generate the actual YAML or JSON before passing that to eksctl, so there is no need for that tool to have actual support for EKS.
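For instance, a sketch using envsubst with the template above (this assumes your eksctl version can read the config from stdin via -f -; if not, write the rendered output to a file first):
$ export CLUSTER_NAME=k1-cluster AWS_REGION=us-east-1 NODEGROUP_DESIRED_CAPACITY=4
$ envsubst < cluster.yaml | eksctl create cluster -f -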
I'm going to close this issue as we now have schema docs, we're working on improving those docs (#1472), and we have examples in /examples. Anyone who brought up specific issues here, feel free to create a new issue, add settings to an existing example, or create a new config.