Containers-roadmap: [EKS] [request]: Manage aws-auth ConfigMap with CloudFormation

Created on 5 Mar 2019 · 18 comments · Source: aws/containers-roadmap

Tell us about your request

CloudFormation resources to register IAM roles in the aws-auth ConfigMap.

Which service(s) is this request for?
EKS

Tell us about the problem you're trying to solve. What are you trying to do, and why is it hard?

A Kubernetes cluster managed by EKS is able to authenticate users with IAM roles. This is very useful to grant access to Lambda functions. However, as described in the documentation, every IAM role has to be registered manually in a ConfigMap with the name aws-auth.

For every IAM role we add to the CloudFormation stack, we have to add an entry like this:

mapRoles: |
  - rolearn: "arn:aws:iam::11223344:role/stack-FooBarFunction-AABBCCDD"
    username: lambdafoo
    groups:
      - system:masters
  - ...

This process is a bit tedious, and it is hard to automate.

It would be much better if those IAM roles could be registered directly in the CloudFormation template. For example, with something like this:

LambdaKubeUser:
  Type: AWS::EKS::MapRoles::Entry
  Properties:
    Cluster: !Ref EKSCluster
    RoleArn: !GetAtt FunctionRole.Arn
    UserName: lambdafoo
    Groups:
      - system:masters

This way, CloudFormation would add and remove entries in the ConfigMap as necessary, with no extra manual steps.

A similar AWS::EKS::MapUsers::Entry resource could be used to register IAM users in mapUsers.
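For illustration, such an entry might look like the following (the resource type and property names are just as hypothetical as the MapRoles example above):

LambdaConsoleUser:
  Type: AWS::EKS::MapUsers::Entry
  Properties:
    Cluster: !Ref EKSCluster
    # hypothetical property; mirrors the "userarn" field in mapUsers
    UserArn: !GetAtt DeployUser.Arn
    UserName: deployer
    Groups:
      - system:masters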

With this addition, we could also automate the extra step of registering the IAM role of the worker nodes when a new EKS cluster is created:

NodeInstanceKubeUser:
  Type: AWS::EKS::MapRoles::Entry
  Properties:
    Cluster: !Ref EKSCluster
    RoleArn: !GetAtt NodeInstanceRole.Arn
    UserName: system:node:{{EC2PrivateDNSName}}
    Groups:
      - system:bootstrappers
      - system:nodes
Labels: EKS, Proposed

All 18 comments

@ayosec have you created something to automate this as of now? I'm running into this when setting up a cluster using CloudFormation. Do you mind sharing your current approach?

have you created something to automate this as of now?

Unfortunately, no. I haven't found a reliable way to make it 100% automatic.

Do you mind sharing your current approach?

My current approach is to generate the ConfigMap using a template:

  1. All relevant ARNs are available in the outputs of the stack.
  2. A Ruby script reads those outputs and fills a template (roughly the sketch below).
  3. Finally, the generated YAML is applied with kubectl apply -f -.
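The generated YAML ends up looking roughly like this (the ARN and username are placeholders reused from the original request) before it is piped into kubectl apply:

# filled in from the stack outputs by the script, then applied with kubectl apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: "arn:aws:iam::11223344:role/stack-FooBarFunction-AABBCCDD"
      username: lambdafoo
      groups:
        - system:masters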

Adding this feature to CloudFormation would allow the same feature to be added to AWS CDK. This would greatly simplify the process of adding/removing nodes, for example.

I also thought about this. An API to manage the ConfigMap for aws-iam-authenticator is interesting, but I think it would be a bit clunky. I am using Terraform to create an EKS cluster, and this approach is a lot nicer:
https://github.com/terraform-aws-modules/terraform-aws-eks/pull/355

I'd love this

Anybody from AWS care to comment on this feature request?

With the release of managed nodes with CloudFormation support, EKS now automatically handles updating the aws-auth config map to join nodes to a cluster.

Does this satisfy the initial use case here, or is there a separate ask to manage adding users to the aws-auth config map via CloudFormation?

@mikestef9 I think https://github.com/aws/containers-roadmap/issues/554 is one of the similar issues that shows why this kind of option would be useful.

@mikestef9

Does this satisfy the initial use case here, or is there a separate ask to manage adding users to the aws-auth config map via CloudFormation?

My main use case is with Lambda functions.

The managed nodes feature is pretty cool, and very useful for new EKS clusters, but most of our modifications to the aws-auth ConfigMap are to add or remove roles for Lambda functions.

@mikestef9 It would also be useful to allow additional people/roles to run kubectl commands.

Right now we have a CI deploy role, but we want to allow other SAML-based users to use kubectl as well.

After cluster creation, we apply the following with kubectl:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: {{ ClusterAdminRoleArn }}
      username: system:node:{{ '{{EC2PrivateDNSName}}' }}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::{{ AccountId }}:role/{{ AdminRoleName }}
      username: admin
      groups:
        - system:masters
    - rolearn: arn:aws:iam::{{ AccountId }}:role/{{ CIRoleName }}
      username: ci
      groups:
        - system:masters
    - rolearn: arn:aws:iam::{{ AccountId }}:role/{{ ViewRoleName }}
      username: view

But I'd much rather have this ConfigMap created by me during cluster creation.

@mikestef9 Some relevant issues related to EKS users debugging authentication problems (https://github.com/kubernetes-sigs/aws-iam-authenticator/issues/174 and https://github.com/kubernetes-sigs/aws-iam-authenticator/issues/275) that imo are data points in favor of API and Cloudformation management of auth mappings (and configurable admin role: https://github.com/aws/containers-roadmap/issues/554).

This ^^. How can we get this implemented? Can anyone from AWS tell us whether this will be supported at the CloudFormation template level, or whether a workaround at the eksctl level is needed?

@nemo83 The AWS team has tagged this issue as "Researching", so that's the current stage of this issue.

I'm also looking into automating updates to this configmap from CloudFormation. Doing so via a Lambda function seems doable.

My main concern with automation is race conditions on the contents of the configmap when applying updates, since the content has to be parsed; a strategic merge is not possible. If the configuration were implemented as one or more CRDs (one per entry), it would be easier to apply a patch. In that case, existing efforts on Kubernetes support for CloudFormation, such as kubernetes-resources-provider, could be reused.
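To illustrate why a merge does not help here: mapRoles is a single string value inside the ConfigMap, so a merge patch (for example via kubectl patch --type merge) can only replace the whole string, never add or remove one role entry inside it. A patch body like the following (placeholder values) would wipe out every other entry:

# applying this as a merge patch replaces the entire mapRoles string
data:
  mapRoles: |
    - rolearn: "arn:aws:iam::11223344:role/only-this-entry-would-remain"
      username: lambdafoo
      groups:
        - system:masters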

Update: we gave up on writing a lambda to update the configmap. The code became too complex and fragile. We now template it separately.

Update 2: I had a concern about automatically updating the configmap: if it becomes corrupt, it can prevent API access. With the current behavior of AWS (1 Sept 2020) there is a way of recovering from an aws-auth configmap corruption:

aws-auth configmap recovery (tested 1 sept 2020)

The prerequisite is to have a pod in the cluster running with a serviceaccount that can update the aws-auth configmap. Ideally something that you can interact with, like k8s-dashboard or in our case ArgoCD.
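A minimal sketch of that prerequisite, assuming a dedicated serviceaccount (the name aws-auth-editor is made up here; in practice it would be the serviceaccount of the dashboard/ArgoCD-like tool):

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: aws-auth-editor
  namespace: kube-system
rules:
  # only allow reading and updating the aws-auth ConfigMap itself
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["aws-auth"]
    verbs: ["get", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: aws-auth-editor
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: aws-auth-editor
subjects:
  - kind: ServiceAccount
    name: aws-auth-editor
    namespace: kube-system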

Then, if aws-auth becomes corrupt, you can hopefully still update the configmap that way.

If that is not possible because the nodes have lost their access, you can use an EKS-managed node group to restore node access to the Kubernetes API: create an EKS-managed node group of just one node with the role that is also used by your cluster nodes. (Note: this is not recommended by AWS, but we abuse AWS's power to update the configmap on the managed control plane.)
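In CloudFormation terms, that temporary recovery node group could look something like the sketch below (the logical IDs and parameter references are assumptions; the only important part is reusing the existing node role):

RecoveryNodeGroup:
  Type: AWS::EKS::Nodegroup
  Properties:
    ClusterName: !Ref EKSCluster
    NodegroupName: aws-auth-recovery
    # reuse the role your existing worker nodes already use
    NodeRole: !GetAtt NodeInstanceRole.Arn
    Subnets: !Ref NodeSubnets
    ScalingConfig:
      MinSize: 1
      DesiredSize: 1
      MaxSize: 1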

AWS will now add this role to the aws-auth configmap:

apiVersion: v1
kind: ConfigMap
metadata:
  annotations:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    ...
    # this entry is added by AWS automatically
    - rolearn: < NodeInstanceRole ARN >
      username: system:node:{{EC2PrivateDNSName}}
      groups:
      - system:bootstrappers
      - system:nodes

Deleting that node group will remove that entry again (AWS warns you about this), so the serviceaccount access is required to establish another method of cluster access, such as the kubectl CLI. Update the aws-auth configmap to gain that access; then the node group can be removed, which in turn removes the aws-auth configmap entry that was automatically created earlier. Now the remaining access (e.g. the kubectl CLI) can be used to permanently fix the configmap and ensure the nodes have access.

Note: if a service is automatically but incorrectly updating the configmap, it would be harder, if not impossible, to recover.

I would go an extra mile and ask AWS to create an API to manage aws-auth, with IAM actions associated with it.

In case I delete the IAM role/user associated with the cluster creation (detail: this user/role is not visible afterwards; you have to save this info outside the cluster, or tag the cluster with it), and if I don't add another admin to the cluster, I am now locked out of the cluster.

For me, this is a major issue, because I use federated auth; users (and my day-to-day account) are ephemeral. My user can be recreated without warning with another name/ID.

The idea is: can AWS add IAM actions like ESHttpGet/ESHttpPost? (Example from Elasticsearch, because it is third-party software.)
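To make that concrete, a purely hypothetical policy statement in a CloudFormation template could look like the sketch below; the eks:DescribeAuthMapping / eks:UpdateAuthMapping actions do not exist today and are only meant to illustrate the ask:

ClusterAuthAdminPolicy:
  Type: AWS::IAM::ManagedPolicy
  Properties:
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Effect: Allow
          Action:
            # hypothetical actions, analogous to es:ESHttpGet / es:ESHttpPost
            - eks:DescribeAuthMapping
            - eks:UpdateAuthMapping
          Resource: !GetAtt EKSCluster.Arn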

Hey @hellupline

We are actually working on exactly that right now, an EKS API to manage IAM users and their permissions to an EKS cluster. This will allow you to manage IAM users via IaC tools like CloudFormation

I wonder why this isn't possible with EKS clusters (but is with self-hosted k8s clusters on AWS):

https://github.com/kubernetes-sigs/aws-iam-authenticator#crd-alpha
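For reference, an identity mapping with that CRD (alpha) feature looks roughly like this (sketched from the aws-iam-authenticator README; values are the placeholders from the original request):

apiVersion: iamauthenticator.k8s.aws/v1alpha1
kind: IAMIdentityMapping
metadata:
  name: lambdafoo
spec:
  # same fields as a mapRoles entry, but as one Kubernetes object per mapping
  arn: arn:aws:iam::11223344:role/stack-FooBarFunction-AABBCCDD
  username: lambdafoo
  groups:
    - system:masters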

Even looking at the CDK implementation of auth mapping, it would be simple to get rid of some limitations that exist right now (stack barrier, imported clusters, ...).

So if something like CloudFormation support for auth mapping is implemented (I support this), it would be good if it didn't conflict with the CRDs that I hope are coming to EKS soon.
