What happened?
When running eksctl utils update-kube-proxy against an old EKS cluster (created Sept 2018, originally running 1.11, now running 1.12), I get the following error:
[ℹ] using region eu-west-1
[✖] getting list of API resources for raw REST client: Unauthorized
What you expected to happen?
I expected it to query for the current version of kube-proxy and determine if an update was required.
How to reproduce it?
Unsure. It is only happening on this old cluster, a new cluster created yesterday in the same account behaves as expected.
Anything else we need to know?
➜ terraform version
Terraform v0.11.7
+ provider.aws v2.28.1
+ provider.template v2.1.0
Versions
Please paste in the output of these commands:
$ eksctl version
[ℹ] version.Info{BuiltAt:"", GitCommit:"", GitTag:"0.6.0"}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-19T14:00:14Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12+", GitVersion:"v1.12.10-eks-825e5d", GitCommit:"825e5de08cb05714f9b224cd6c47d9514df1d1a7", GitTreeState:"clean", BuildDate:"2019-08-18T03:58:32Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Logs
➜ eksctl utils update-kube-proxy --name bts-ecommerce-staging-app_server-cluster -v 4
2019-09-20T16:15:45+01:00 [ℹ] using region eu-west-1
2019-09-20T16:15:47+01:00 [▶] role ARN for the current session is "arn:aws:sts::000000000000:assumed-role/ADFS-Developer/ME"
2019-09-20T16:15:48+01:00 [▶] cluster = {
Arn: "arn:aws:eks:eu-west-1:000000000000:cluster/server-cluster",
CertificateAuthority: {
Data: "XXXXXXX"
},
CreatedAt: 2018-11-21 14:09:17 +0000 UTC,
Endpoint: "https://XXXXXXXX.yl4.eu-west-1.eks.amazonaws.com",
Logging: {
ClusterLogging: [{
Enabled: false,
Types: [
"api",
"audit",
"authenticator",
"controllerManager",
"scheduler"
]
}]
},
Name: "server-cluster",
PlatformVersion: "eks.4",
ResourcesVpcConfig: {
EndpointPrivateAccess: false,
EndpointPublicAccess: true,
SecurityGroupIds: ["sg-000000000"],
SubnetIds: ["subnet-000000000","subnet-0000000"],
VpcId: "vpc-000000"
},
RoleArn: "arn:aws:iam::000000000000:role/server-cluster-role",
Status: "ACTIVE",
Version: "1.12"
}
2019-09-20T16:15:48+01:00 [✖] getting list of API resources for raw REST client: Unauthorized
Same problem here with version 0.26.0 when creating nodegroups. It only happens with one cluster; the other works as expected.
Same problem, eksctl 0.26.0, after upgrading the control plane from 1.15 to 1.16. Similar output from -v 4: it seems to connect to the AWS API successfully, but fails (without debug logging) when trying to talk to the Kubernetes API.
@zhujik This issue was specifically about update-kube-proxy with a k8s 1.11 -> 1.12 upgrade. Exactly which k8s versions are you using, which commands are you running, what do your configs look like, and what do the logs say?
@forsberg You're also getting this with update-kube-proxy? When using eksctl write-kubeconfig can you connect to the cluster?
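For anyone following along, a quick way to run that check (a sketch; cluster name and region are taken from the log above, and the write-kubeconfig flags may differ between eksctl versions):

```shell
# Write a kubeconfig entry for the cluster, then try a trivial API call.
eksctl utils write-kubeconfig --name server-cluster --region eu-west-1
kubectl get nodes
```

If kubectl get nodes also returns Unauthorized, the problem is authentication against the Kubernetes API rather than eksctl itself.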
I managed to solve my problem now: I was giving eksctl an AWS profile (from ~/.aws/config) that had full administrative rights in AWS, but its role_arn was not listed in the aws-auth configmap, so I guess the Kubernetes API server rejected the request for that reason.
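For anyone hitting the same thing, checking the mapping is straightforward (a sketch; this requires an identity that is already authorized to read the ConfigMap):

```shell
# Inspect which IAM roles/users are mapped into the cluster.
kubectl -n kube-system get configmap aws-auth -o yaml
```

The role ARN your eksctl session resolves to should appear under mapRoles (or the user ARN under mapUsers).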
I was also getting it with update-kube-proxy
@michaelbeaumont thank you for your reply. Maybe this warrants a separate issue; however, I'll explain my case.
K8S Version: 1.17
command:
eksctl create nodegroup --config-file ng-worker.yaml
config file is just a NodeGroup.
my kube.config is configured as follows:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: xxx
server: https://xxx.eu-central-1.eks.amazonaws.com
name: arn:aws:eks:eu-central-1:xxxx:cluster/dev
- cluster:
certificate-authority-data: xxx
server: https://xxx.eu-central-1.eks.amazonaws.com
name: arn:aws:eks:eu-central-1:xxxx:cluster/test
- cluster:
certificate-authority-data: xxx
server: https://xxx.eu-central-1.eks.amazonaws.com
name: arn:aws:eks:eu-central-1:xxxx:cluster/prod
contexts:
- context:
cluster: arn:aws:eks:eu-central-1:xxxx:cluster/dev
user: arn:aws:eks:eu-central-1:xxxx:cluster/dev
name: dev
- context:
cluster: arn:aws:eks:eu-central-1:xxxx:cluster/prod
user: arn:aws:eks:eu-central-1:xxxx:cluster/prod
name: prod
- context:
cluster: arn:aws:eks:eu-central-1:xxxx:cluster/test
user: arn:aws:eks:eu-central-1:xxxx:cluster/test
name: test
current-context: test
kind: Config
preferences: {}
users:
- name: arn:aws:eks:eu-central-1:xxxx:cluster/dev
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- --region
- eu-central-1
- eks
- get-token
- --cluster-name
- dev
- --role
- arn:aws:iam::xxxx:role/dev-cluster-admin
command: aws
env: null
- name: arn:aws:eks:eu-central-1:xxxx:cluster/prod
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- --region
- eu-central-1
- eks
- get-token
- --cluster-name
- prod
- --role
- arn:aws:iam::xxxx:role/prod-cluster-admin
command: aws
env: null
- name: arn:aws:eks:eu-central-1:xxxx:cluster/test
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- --region
- eu-central-1
- eks
- get-token
- --cluster-name
- test
- --role
- arn:aws:iam::xxxx:role/test-cluster-admin
command: aws
env: null
The aws-auth configmaps of the clusters do not contain mapUsers, because all users authenticate to the clusters via roles. E.g. I have permission to assume the "test-cluster-admin" role, which has permissions for EKS, and the aws-auth configmap contains an entry like so:
data:
mapRoles: |
- groups:
- system:masters
rolearn: arn:aws:iam::xxxx:role/test-cluster-admin
username: clusterAdmin
When I enable authenticator logging for the EKS control plane, I can see that eksctl is trying to connect to the cluster with my user's ARN, something like arn:aws:iam::xxxx:user/zhujik, which of course is not authorized to perform the action. This is just for the test cluster; dev and prod are fine, though I did not check whether the same authentication happens with those clusters. Maybe EKS lets me manipulate them because I created them (I don't remember if I created the test cluster myself, though).
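A quick way to see in advance which ARN the authenticator will log (a sketch, assuming eksctl signs with the ambient AWS credentials, as the log output above suggests; the profile name here is hypothetical):

```shell
# The identity eksctl will present when no role is assumed:
aws sts get-caller-identity
# The identity it would present for a specific profile:
aws sts get-caller-identity --profile test-cluster-admin
```

Whichever ARN is printed must be mapped in the cluster's aws-auth configmap.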
@zhujik - How do you tell eksctl which Role ARN it should assume? In my case, I have entries in ~/.aws/config similar to this:
[profile dev-admin]
role_arn = arn:aws:iam::123456789:role/DevAdminRole
source_profile = dev-admin
Obviously there's also a source profile which has actual credentials, in my case with aws-google-auth.
The ARN is then listed as being part of system:masters in my aws-auth configmap, and I run eksctl with --profile dev-admin as command line argument.
There are other ways of doing this as well, I guess you can assume the role via the aws CLI and ensure the right environment variables are set. I'm unsure how much .kube/config is involved here.
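The environment-variable route mentioned above could look roughly like this (a sketch, reusing the DevAdminRole ARN from the ~/.aws/config snippet earlier; the session name is arbitrary):

```shell
# Assume the role once and export the temporary credentials,
# so subsequent aws/eksctl calls use them.
creds=$(aws sts assume-role \
  --role-arn arn:aws:iam::123456789:role/DevAdminRole \
  --role-session-name eksctl-session \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
export AWS_ACCESS_KEY_ID=$(echo "$creds" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$creds" | cut -f3)

eksctl create nodegroup --config-file ng-worker.yaml
```

Passing --profile to eksctl is less error-prone, since the temporary credentials above expire and have to be refreshed by hand.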
@forsberg I don't; I assumed that eksctl used the kubeconfig. Next time I will try with --profile. Thank you for the suggestion.
I also don't know, but the error message itself, "getting list of API resources for raw REST client: Unauthorized", suggests that eksctl uses the Kubernetes REST API directly rather than going through the kubeconfig. And why wouldn't it, given that it only talks to EKS-managed clusters, where the authentication method is always known.
Note that there are two command-line flags which may be of interest, from eksctl create cluster --help:
AWS client flags:
-p, --profile string AWS credentials profile to use (overrides the AWS_PROFILE environment variable)
--cfn-role-arn string IAM role used by CloudFormation to call AWS API on your behalf
@zhujik I'm going to close this as resolved. To anyone reading this, please comment if @forsberg's solution didn't help!
@forsberg's solution seems like it would work for me as well, but even splitting the CFN policies into a separate role seems to create a pretty complex IAM setup if you don't want k8s admins to also be EKS admins. :+1: for a solution that respects what's in the kubeconfig. What are the main issues with always using the kubeconfig as it stands now?