aws-cli: EKS update-kubeconfig overwrites user details when using multiple roles against the same cluster

Created on 16 Apr 2019 · 16 comments · Source: aws/aws-cli

I have an EKS cluster where I manage user access using IAM roles. This is roughly what my mapRoles configuration looks like:

- rolearn: arn:aws:iam::123456789123:role/k8s-admin
  username: admin
  groups:
    - system:masters
- rolearn: arn:aws:iam::123456789123:role/k8s-developer
  username: developer
  groups:
    - developers
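
For context, a mapRoles block like this normally lives in the aws-auth ConfigMap in the kube-system namespace, and the deployed version can be inspected with standard kubectl (nothing specific to this issue):

# Show the aws-auth ConfigMap that holds the mapRoles entries above.
kubectl -n kube-system get configmap aws-auth -o yaml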

To use kubectl with EKS, I need to assume the right role. Therefore, I'll have to provide the role ARN when updating the kubeconfig:

aws eks update-kubeconfig --name mycluster --role-arn arn:aws:iam::123456789123:role/k8s-admin --alias mycluster-admin

So far so good. However, when I try to add my second role to the same kubeconfig...

aws eks update-kubeconfig --name mycluster --role-arn arn:aws:iam::123456789123:role/k8s-developer --alias mycluster-developer

...the user information from the previous update is overwritten. You can see this in the resulting kubeconfig file:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64>
    server: https://XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX.YYY.us-east-1.eks.amazonaws.com
  name: arn:aws:eks:us-east-1:123456789123:cluster/mycluster
contexts:
- context:
    cluster: arn:aws:eks:us-east-1:123456789123:cluster/mycluster
    user: arn:aws:eks:us-east-1:123456789123:cluster/mycluster
  name: mycluster-admin
- context:
    cluster: arn:aws:eks:us-east-1:123456789123:cluster/mycluster
    user: arn:aws:eks:us-east-1:123456789123:cluster/mycluster
  name: mycluster-developer
current-context: mycluster-developer
kind: Config
preferences: {}
users:
- name: arn:aws:eks:us-east-1:123456789123:cluster/mycluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - token
      - -i
      - eks
      - -r
      - arn:aws:iam::123456789123:role/k8s-developer
      command: aws-iam-authenticator
      env: null

Both contexts remain in the config file, but there's only one user which is attached to both contexts. There should be two users with distinct role ARNs.

I haven't verified this from the code, but I'm guessing the user is overwritten because the same user name is used for both updates.
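
That guess is easy to check with plain kubectl: after running both update-kubeconfig commands, listing the user entry names returns only the single cluster-ARN user, even though two contexts exist. A minimal check, assuming the kubeconfig above is the active one:

# Print the names of all user entries; only the cluster ARN appears once,
# despite two contexts having been written.
kubectl config view -o jsonpath='{.users[*].name}'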

Labels: customization, eks-kubeconfig, feature-request, service-api

Most helpful comment

@dgomesbr @sarmad-abualkaz - I contacted the EKS team to check in on this and the associated PR. Thanks!

All 16 comments

@jkpl - Thank you for your post. There would be two users with distinct role ARNs if you updated the kubeconfig for two distinct clusters. But in this case, since you are updating the same cluster, the role is replaced with the new one and the existing user entry is overwritten.

This issue has been automatically closed because there has been no response to our request for more information from the original author. With only the information that is currently in the issue, we don't have enough information to take action. Please reach out if you have or find the answers we need so that we can investigate further.

Hi! Sorry for the late response. Yes, that seems to be the case: updating the config for two different clusters creates distinct contexts, but updating the config with a different role for the same cluster updates the existing context. I can see how that would be useful if you want to edit a cluster's config.

Any chance the command could also support adding multiple contexts to the config for the same cluster?

Hi team,

I also experienced something like this. Steps to reproduce:

I created my .kube/config using:

aws eks update-kubeconfig --name hugoprudente-cluster-12

I then manually updated it to use my --role-arn so that my other users can access the cluster:

- name: arn:aws:eks:eu-west-1:004815162342:cluster/hugoprudente-cluster-12
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-west-1
      - eks
      - get-token
      - --cluster-name
      - hugoprudente-cluster-12
      - --role-arn
      - arn:aws:iam::004815162342:role/Admin
      command: aws

If I run aws eks update-kubeconfig --name hugoprudente-cluster-12 again, the --role-arn gets removed:

- name: arn:aws:eks:eu-west-1:004815162342:cluster/hugoprudente-cluster-12
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      args:
      - --region
      - eu-west-1
      - eks
      - get-token
      - --cluster-name
      - hugoprudente-cluster-12
      command: aws

Hey Hugo! Any reason why you manually update it instead of using the --role-arn flag with update-kubeconfig?

aws eks update-kubeconfig --name hugoprudente-cluster-12 --role-arn arn:aws:iam::004815162342:role/Admin

It seems that you need to always provide the role ARN to update-kubeconfig or it will be removed from the existing config.

To give this some more priority: we are experiencing the same issue in our setup. We have two clusters, prod and test, divided into multiple namespaces, with permissions set up per namespace using IAM role bindings. So when one of our developers needs access to two namespaces on the same cluster, they have to re-run the setup command each time before accessing either one (roughly as sketched below), because the user name is based on the cluster name and updating the settings "updates" the rolearn. This doesn't feel correct.
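
For illustration only; the role names and account ID below are hypothetical, not taken from this thread:

# Hypothetical role/account: before working against namespace A...
aws eks update-kubeconfig --name prod --role-arn arn:aws:iam::111122223333:role/team-a-access --alias prod-team-a

# ...and again before namespace B; this second run rewrites the same user entry.
aws eks update-kubeconfig --name prod --role-arn arn:aws:iam::111122223333:role/team-b-access --alias prod-team-b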

@yourilefers I agree ... our organization is attempting to structure our RBAC in a similar way and running directly into this problem...

We are also experiencing this issue. I don't understand why the cluster ARN is used as the user name; it prevents us from setting a different user name for each context.

In case it helps anyone else, we stopped using update-kubeconfig and instead update our users' kubeconfig files with the following commands (per cluster, per role):

# Set the cluster object in the kube config.
kubectl config set-cluster "${cluster_arn}" --server="${cluster_domain}"

# Add the certificate data for the indicated cluster, data pulled from the `aws eks describe-cluster` output.
kubectl config set "clusters.${cluster_arn}.certificate-authority-data" "${cluster_cert_data}"

# Set the user's credentials.
kubectl config set-credentials "${desired_context_alias}" --exec-command=aws --exec-arg=--region --exec-arg="${cluster_region}" --exec-arg=--profile --exec-arg="${role_profile_name}" --exec-arg=eks --exec-arg=get-token --exec-arg=--cluster-name --exec-arg="${cluster_name}" --exec-api-version=client.authentication.k8s.io/v1alpha1

# Set the context that combines the cluster information and the credentials set in the above commands.
# The namespace is 100% optional.
kubectl config set-context "${desired_context_alias}" --cluster="${cluster_arn}" --namespace="${desired_role_namespace}" --user="${desired_context_alias}"

Needless to say, that was a LOT of steps to take. Apologies if the above code doesn't quite work for you, as I had to generalize the code we have internally before posting it publicly (i.e. some variable references might be inconsistent due to hand editing without running the code). The above assumes the users have their AWS profiles set up locally through one of the mechanisms available for that.
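
In case it helps, here is one way the variables in that sketch could be populated; the values are hypothetical and the variable names simply mirror the commands above:

# Hypothetical values; adjust to your own cluster, region, profile, and namespace.
cluster_name="mycluster"
cluster_region="us-east-1"
role_profile_name="k8s-admin"
desired_context_alias="mycluster-admin"
desired_role_namespace="default"

# Pull the cluster ARN, API endpoint, and base64-encoded CA bundle from EKS.
cluster_arn=$(aws eks describe-cluster --name "${cluster_name}" --region "${cluster_region}" --query 'cluster.arn' --output text)
cluster_domain=$(aws eks describe-cluster --name "${cluster_name}" --region "${cluster_region}" --query 'cluster.endpoint' --output text)
cluster_cert_data=$(aws eks describe-cluster --name "${cluster_name}" --region "${cluster_region}" --query 'cluster.certificateAuthority.data' --output text)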

@kdaily fyi

@dgomesbr @sarmad-abualkaz - I contacted the EKS team to check in on this and the associated PR. Thanks!


Hi @kdaily, any updates from the EKS team on this issue?

@sarmad-abualkaz - sorry, no update.

IMO, this behavior is extremely surprising in the first place and really warrants being treated as a bug rather than a feature request. I appreciate that there are additional features that would make handling "principle of least privilege" patterns against a cluster easier, but fundamentally the behavior here is surprising and likely causes unexpected consequences for those of us attempting to follow RBAC best practices.

Hi @ryanmt, thanks for the feedback. Does the proposed solution in #5413 work as you would expect?

@kdaily It looks like a good fit to me. That's effectively what I've done when I manually reconciled the bug in a kubeconfig.
