If $HOME/.aws is mounted read-only, and a profile does an assume role like so:

```
[master]
....
[eu-staging]
role_arn = arn:aws:iam::REDACTED
source_profile = master
region = eu-west-1
```

then the command:

```
aws --profile eu-staging ec2 describe-instances
```

fails with:

```
[Errno 30] Read-only file system: '/home/spinnaker/.aws/cli'
```
Please address the temp-file requirement: in Kubernetes land we mount secrets as folders and can't make them read-write.
Can you give me some more details of what your setup is? I am not familiar with kubernetes at all. My concern is that this will be really inefficient, as you'll need to do the assume-role call before every single CLI operation. Would an environment variable/config option that allows you to disable caching be sufficient for your needs?
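One way to sidestep the cache entirely is to do the assume-role call yourself and hand the temporary credentials to the CLI through environment variables. This is a minimal sketch, assuming boto3 is available and a `master` profile is configured; the role ARN and session name are placeholders:

```python
# Sketch: perform the assume-role call once and export the temporary
# credentials as environment variables, so the AWS CLI never needs to
# write its ~/.aws/cli/cache files.

def creds_to_env(creds):
    """Map an STS Credentials dict to the env vars the AWS CLI reads."""
    return {
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["SessionToken"],
    }

def assume_role_env(role_arn, session_name="readonly-home"):
    """Call STS via the source profile and return the env-var mapping."""
    import boto3  # imported here so creds_to_env stays dependency-free
    sts = boto3.Session(profile_name="master").client("sts")
    resp = sts.assume_role(RoleArn=role_arn, RoleSessionName=session_name)
    return creds_to_env(resp["Credentials"])
```

A wrapper script could merge `assume_role_env(...)` into `os.environ` before invoking the CLI, trading the per-call cache for a one-off STS call per session.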
I have a similar issue. The home directory of a service account does not get mounted, so when the awscli tries to write the `$HOME/.aws/cli/cache` files I get the following error:

```
[Errno 13] Permission denied: '/home/username'
```
I am using the environment variables to redirect the credentials and config files.
For me, an environment variable that redirects the whole `.aws` folder to a new location would be a good solution.
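The redirection described above can be sketched with the two env vars the CLI and SDKs already honor, `AWS_CONFIG_FILE` and `AWS_SHARED_CREDENTIALS_FILE`; the `/tmp/aws` paths below are just example writable locations, and note this redirects the two files, not the cache directory itself:

```python
# Point the CLI/SDK config and credentials files at a writable location
# outside the read-only $HOME.
import os

os.environ["AWS_CONFIG_FILE"] = "/tmp/aws/config"
os.environ["AWS_SHARED_CREDENTIALS_FILE"] = "/tmp/aws/credentials"
```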
@stealthycoin I've dug into the code, and the issue is caused by the caching mechanism of awscli/boto3. When we mount secrets/configmaps into Kubernetes containers, such as the credentials/config mounted to `~/.aws`, it is done read-only (by k8s, and this is not configurable). Running the awscli sometimes creates `~/.aws/cli/cache`, which fails on a read-only filesystem.

Sorry @stealthycoin, to answer your question: yes, a flag that disables caching would solve this problem.
@pieterza I ran into the same problem trying to use an AWS config file with Spinnaker's Clouddriver deployed in K8s. I mounted my config file to `$HOME/.aws` as you would, and noticed the secret mounts as root, so the `spinnaker` user doesn't have rights to do anything with the `.aws` directory, resulting in the error you mentioned.

Using Spinnaker custom configuration, I have my config file mounted to a non-standard AWS location and use the `AWS_CONFIG_FILE` environment variable to point to the config file. This way, when an AWS CLI command is run, the `spinnaker` user will create and own the `.aws` folder, allowing the `cli` folder to be created and commands to run successfully. I tested this using `aws ecr get-authorization-token --profile {SOME_PROFILE}` and it worked as expected.

Here's my `clouddriver.yml` as an example:
```yaml
env:
  AWS_CONFIG_FILE: /home/spinnaker/tmp/config
kubernetes:
  volumes:
  - id: aws-profile
    mountPath: /home/spinnaker/tmp
    type: secret
```
I know this specifically doesn't solve the problem but at least it's a workaround.
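The workaround above can be sketched outside Spinnaker as well: launch the CLI with `AWS_CONFIG_FILE` pointing at the mounted secret. The paths and profile name are taken from the comment above and are placeholders:

```python
# Run an AWS CLI command with AWS_CONFIG_FILE pointing at the secret
# mounted in a non-standard, writable location.
import os
import subprocess

env = dict(os.environ, AWS_CONFIG_FILE="/home/spinnaker/tmp/config")
cmd = ["aws", "ecr", "get-authorization-token", "--profile", "SOME_PROFILE"]
# subprocess.run(cmd, env=env, check=True)  # uncomment where the CLI exists
```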
Thanks a lot @jhindulak, I think this will work nicely!