Containers-roadmap: env-file support

Created on 4 Nov 2015 · 54 comments · Source: aws/containers-roadmap

I'm opening this issue to pick up a point made in #127: supporting the --env-file parameter. As pointed out there, this would be useful e.g. for adding environment variables that contain sensitive information. Sensitive environment variables could then be stored in a private S3 bucket and pulled in from there, either directly or via a mounted volume.

If the --env-file parameter is supported, the documentation on Task Definition Parameters could also be improved. Under environment it is mentioned that putting sensitive information there is not recommended; however, it does not point to a solution for how to handle it otherwise.

Extract from issue #127:

[...] Ideally it would allow an s3 endpoint:

"containerDefinitions":[
  {
    "env_file":[
      { "bucket":"my-bucket", "key":"myenvlist" }
    ]
  }
]

Elastic Beanstalk lets you do something similar in the Dockerrun.aws.json for docker private repository configuration:

"Authentication":{
  "Bucket":"my-bucket",
  "Key":"mydockercfg"
},
Labels: Coming Soon, ECS

All 54 comments

I'm looking for env-file support as well.

:+1:

+1

I have the same issue. I'm running a db-connected task on ECS and I don't want to embed my DB credentials in the compose file / task definition. I'm currently using ecs-cli, but as far as I know there's no support for encrypting the environment variables.

When I've worked with CI systems that use Docker (Travis, for example), they usually provide a mechanism for encrypting environment variables so that they can be embedded in config and decrypted when passed into the container (see Travis's documentation on encryption keys). I'm wondering if AWS does or could offer a similar feature for encrypting sensitive information destined for the container.

+1 would like to be able to use KMS or something similar to encrypt env vars

Does anyone have a good workaround for this?

I've sometimes used a pattern where the entry point of my container fetches an env file from S3 and sources it before running my actual command. The location of this file can be passed as an env var, and IAM permissions used to control access, e.g (from memory, so it might not work as is):

CMD ["/bin/sh", "-c", "aws s3 cp --region eu-west-1 ${ENV_FILE_PATH} ./env.sh && . ./env.sh && exec command-to-run"]

It isn't ideal, but seems to work OK.
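The same fetch-and-source pattern can be sketched as a small Python entrypoint, which makes the parsing explicit. Everything in the sketch below is illustrative: ENV_FILE_BUCKET, ENV_FILE_KEY and command-to-run are hypothetical names, and boto3 is assumed to be installed in the image.

```python
import os


def parse_env_file(text):
    """Parse simple KEY=VALUE lines; blank lines and # comments are ignored."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:  # skip malformed lines that have no '='
            env[key.strip()] = value.strip()
    return env


def main():
    # Hypothetical wiring: fetch the file from S3, merge it into the
    # environment, then replace this process with the real command.
    import boto3  # assumed available inside the image

    obj = boto3.client("s3").get_object(
        Bucket=os.environ["ENV_FILE_BUCKET"], Key=os.environ["ENV_FILE_KEY"]
    )
    os.environ.update(parse_env_file(obj["Body"].read().decode("utf-8")))
    os.execvp("command-to-run", ["command-to-run"])
```

Because os.execvp replaces the process, the fetched variables never outlive the container's main command.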

I plan to use https://github.com/zeroturnaround/configo in my containers. It's a more general solution for loading environment variables from etcd, file, DynamoDB or Vault. Unfortunately, S3 is not supported yet.

I'm not sure env-file is supported in the Docker Engine API; I believe --env-file is strictly a feature of the docker command-line client. It also helps little in a clustered environment, since you would still need to put the file on each host.

I'd prefer loading the environment variables from S3 instead, if you were to add this feature to ECS.

+1 would be a great feature

It would be cool to know what the maintainers think about this issue in terms of relevance/priority. I am really needing this and I might be able to submit a patch.

+💯

:100: This would be an awesome feature!

I think it should be implemented very closely to what @tbinna recommended, although I would include support for using KMS to decrypt the envfile before running the container.

Perhaps

"containerDefinitions":[
  {
    "env_file":[
      {
        "s3": {
          "bucket": "my-bucket",
          "key": "myenvlist"
        },
        "kmsArn": "<kmsArn>"  // Optional
      }
    ]
  }
]

This way, if you have sensitive data in the file, you encrypt it and upload it to S3. The container can pull it down, use KMS at runtime to decrypt it, and pass the file directly via --env-file.
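The KMS decrypt step described here could look like the following sketch. kms.decrypt is a real boto3 call, but the wiring around it is an assumption; the client is injectable so the logic can be exercised without AWS credentials.

```python
def decrypt_env_blob(ciphertext, kms_client=None):
    """Decrypt an env file that was encrypted with KMS (e.g. via aws kms encrypt).

    kms_client is injectable for testing; by default a real boto3 KMS client
    is created, which requires AWS credentials and IAM access to the key.
    """
    if kms_client is None:
        import boto3  # assumed available at runtime
        kms_client = boto3.client("kms")
    # KMS embeds the key id in the ciphertext, so no ARN is needed for the
    # Decrypt call itself; the proposed kmsArn would mainly drive IAM policy.
    resp = kms_client.decrypt(CiphertextBlob=ciphertext)
    return resp["Plaintext"].decode("utf-8")
```

Note that direct KMS Encrypt is limited to 4 KiB of plaintext, so larger env files would need envelope encryption or S3's own server-side encryption support.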

Thoughts from the Amazon team? If I were to submit a PR for the agent level changes, might that help see it implemented at the task definition level?

@itsjamie maybe rather than a separate kmsArn key, something in the s3 object could specify the SSE value, whether that's a KMS ARN or AES256, etc. This is part of the S3 API, so it might make more sense as part of the s3 block.

Another option might be to use parameter store.

Would be extremely useful to have env-file support, and an even bigger win to have that data come from Parameter Store in a TaskDefinition.

I created a PoC with CloudFormation that creates a Lambda function to fetch values from the ParameterStore, and CFN then uses the Lambda function as a CustomResource. The same CFN template also creates the TaskDefinition (and the Cluster, Service, ALB etc.). This way it's possible to inject SecureText ParameterStore values into the TaskDefinition ENV (or to any other CloudFormation resource).

This is definitely not the most secure way to implement this, as the Lambda needs to be able to read/decrypt values for all the Tasks in the CFN template. I would prefer to use IAM Roles for Tasks and grant each Task access to only its own parameters, decrypting them with the help of ecs-agent using the Task's IAM Role when creating the container. Another downside is that the decrypted values can currently be seen in the TaskDefinition settings in the AWS console. Still, this implementation needs no changes to the actual container, and it's possible to use single key/value pairs instead of a full env-file.

All current solutions I have seen involve having a bootstrap container that fetches secrets and writes them to a file. Then you need some way to get the env vars into the target service container. Volumes are one obvious way to do this.

It would then be expected you need tools inside the target container to load the env vars from a volume.

This can work if your service container bundles tools to source a file. If the target service is a bare-bones container with only a binary, such as a Go app, then there is no method to load env vars into it on the fly.

Currently this is a hard blocker for me being able to use ECS at all. Bundling plaintext secrets into task definitions is -not- a solution.

Direct KMS integration would be great but at the very least there needs to be a way to load environment vars from a volume or file on disk. Then a bootstrap container could do the legwork.

@cbbarclay I think the parameter store would be a much better solution, as the host EC2 instance running the cluster wouldn't need to store the env var file. Essentially, what @mrburrito describes in #328 would solve the problem without exposing the variables to the host filesystem. Much like the link @myronahn provided, but running in the ECS agent rather than a special container.
Perhaps if we had something like the following.

"ContainerDefinitions":[
  {
    "Environment":[
        {
           "Name": "PRIVATE_VAR",
           "Value": {
               "Type": "parameter-store",
               "Name": "some.value",
               "Decrypt": true
           }
        }
    ]
  }
]

Then we could just manage the access through the task's role.

Just my two cents.

Better integration between EC2 Parameter Store and ECS would be great. Please consider this.

Better integration of all docker run command-line parameters would be great.

Docker offers many smart and interesting possibilities, ECS shoots them down, thanks Amazon.

Hi! As we also ran into the issue of securely passing env variables, we've just released a small utility that handles that problem.

It uses AWS Parameter Store to inject env vars on container startup. Check the sample Dockerfile at: https://github.com/Droplr/aws-env

Cheers! :-)

Please consider this. It would be very nice to help maintain clean docker images and be able to inject environment variables from the /etc/ecs/ecs.config file by specifying the environment file in the container / task definition.

Not having a secure way to do this is a bug. Perhaps we should dupe this request as a bug to get some AWS love.

There are several related issues, including #1209 and #328.

over two years and no response from aws team? wow...

This thing is dead as a doornail. They're all in on EKS now. But that won't
be ready for months at the earliest.


Please see #1209. We'd love your feedback there to help guide our approach.

ECS is far from being abandoned. Many big companies depend on it, including Amazon retail.

Closing this issue; this functionality is supported by the generic secrets feature. Feel free to reopen if this request is specific to S3 and not just an implementation detail.

Secrets isn't a solution to this; it's not supported across all of ECS and requires customization of existing services/tasks beyond just the task definition update (task role? ECS agent version). Please reconsider supporting an environment file.

I come back and visit this issue from time to time, and I'm always disappointed by the way AWS decided (for some reason) not to support running a Docker image with the env-file flag. All the solutions provided above are workarounds, not solutions.
Even injecting SSM params into the task definition is a workaround, because there are a couple of restrictions here:

  1. You can only retrieve Secrets Manager secrets by using the GetParameter and GetParameters API actions. Modification operations and advance querying API actions, such as DescribeParameters or GetParametersByPath, are not supported for Secrets Manager.

  2. Parameters that reference Secrets Manager secrets can't use the Parameter Store versioning or history features.

ECS secrets now supports parameter store and secrets manager in the latest ECS AMIs as well as in the newest Fargate platform version.

We’d like to understand the use cases that this doesn't address. Is it portability between dev/prod environments? Ease of use? What interaction would you prefer to see here instead? Some ideas:

  • using 'env-file' like syntax inside of an ssm parameter
  • using 'env-file' backed by an s3 object
  • using 'GetParametersByPath' and recursively adding secrets for a given path
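The third idea could be sketched like this. get_parameters_by_path is the real SSM API (with Recursive, WithDecryption and NextToken pagination), but the naming convention and the injectable client are assumptions made for illustration:

```python
def collect_parameters(path, ssm_client=None):
    """Recursively fetch decrypted SSM parameters under `path` and map each
    parameter's last path segment to its value, following NextToken pages."""
    if ssm_client is None:
        import boto3  # a real client requires AWS credentials
        ssm_client = boto3.client("ssm")
    env, token = {}, None
    while True:
        kwargs = {"Path": path, "Recursive": True, "WithDecryption": True}
        if token:
            kwargs["NextToken"] = token
        resp = ssm_client.get_parameters_by_path(**kwargs)
        for param in resp["Parameters"]:
            # e.g. /myapp/prod/DB_PASS -> DB_PASS (assumed naming convention)
            env[param["Name"].rsplit("/", 1)[-1]] = param["Value"]
        token = resp.get("NextToken")
        if not token:
            return env
```

Access could then be scoped per task by granting the task role ssm:GetParametersByPath on its own path prefix only.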

For sensitive information, why not use parameter store?

@petderek @adnxn The use case that isn't addressed is a mass import of environment variables. I'm currently trying to run a Docker image on AWS that takes upwards of 15 variables through an env file for configuration, and AWS doesn't let me. I'm surprised it's still not implemented; it's something that leads me to consider GCP or Azure.

@srrengar I think it would be best to support both env-file and cluster-wide env definitions, which could also allow cluster-wide env files.

Idk if we are doing it wrong, but we have around 20 cluster specific and around 10 task definition specific env variables. Some of those task definitions have multiple container definitions that mostly use the same env variables as well. Also we have staging & production clusters. I had to copy around so many things that my eyes went black. There must be a better way.

@srrengar When I try to run a task after adding an env file spec in the task definition, I am getting an error, simply titled 'Reasons : ["ATTRIBUTE"]'.

Hi @shafi-khan,
This error message means the instance is missing required attributes to launch the task. The new envfiles feature requires a new instance attribute "ecs.capability.env-files.s3".
Are you using the latest ECS Optimized AMI? Agent version 1.39.0 onward supports this feature.

@yunhee-l I am currently using v1.37.0. Is updating to the latest AMI all I need to do? Do I need to specify that attribute somewhere?

Never mind, it works after updating to v1.39.0. Thanks @yunhee-l.

Hi folks, we just released environment files for containers using the EC2 launch type, with Fargate support coming soon:

https://aws.amazon.com/about-aws/whats-new/2020/05/amazon-elastic-container-service-supports-environment-files-ec2-launch-type/

https://docs.aws.amazon.com/AmazonECS/latest/developerguide/taskdef-envfiles.html

Can you please point me to some documentation that shows how to do it in a yaml config?

I've tried this:

EnvironmentFiles:
  - Value: "--s3 arn--"
    Type: "s3"

(tried the same with camelCase as well)
I keep getting - "Encountered unsupported property EnvironmentFiles"

@TusharMehtani - the docs clearly state what format the env file needs to be in. Thus, you need to respect that format.

@tehmaspc - I think my question wasn't clear. I'll try to clarify it. My question isn't about the format of the env file, its about the Cloudformation ECS Task Definition template. The documentation declares the following JSON to be a valid template:

"environmentFiles": [
  {
    "value": "arn:aws:s3:::s3_bucket_name/envfile_object_name.env",
    "type": "s3"
  }
],

Since I usually write my task definitions in YAML, I tried to write the JSON as the following YAML Equivalent:

EnvironmentFiles:
    - Value: "arn:aws:s3:::s3_bucket_name/envfile_object_name.env"
      Type: "s3"

This gives the error - Encountered unsupported property "EnvironmentFiles".
I'm using Container Agent v1.41.0 which is fine as per the docs (req >=v1.39.0). The env file is UTF-8 encoded and follows the proper format as per documentation.
I'm certain this isn't an issue with the env file as the same thing works when I'm creating the task definition using the ECS UI. So, pretty sure the issue is with the template file.
Is the YAML format not supported here for some reason? This would be strange!

Here is a simplified version of my complete task definition YAML file for reference:

Description: >
  This is an example of a long running ECS service that serves a JSON API.

Parameters:
  VPC:
    Description: The VPC that the ECS cluster is deployed to
    Type: AWS::EC2::VPC::Id

  Cluster:
    Description: Please provide the ECS Cluster ID that this service should run on
    Type: String

  DesiredCount:
    Description: How many instances of this task should we run across our cluster?
    Type: Number
    Default: 1

  MyServiceImage:
    Description: URI of the Docker Image of Service you want to deploy
    Type: String
    Default: service/service-v1

Resources:
  Service:
    Type: AWS::ECS::Service
    Properties:
      Cluster: !Ref Cluster
      DeploymentConfiguration:
        MaximumPercent: 100
        MinimumHealthyPercent: 0
      DesiredCount: !Ref DesiredCount
      TaskDefinition: !Ref TaskDefinition

  TaskDefinition:
    Type: AWS::ECS::TaskDefinition
    Properties:
      Family: service-defn-dev
      ContainerDefinitions:
        - Name: service-backend
          Essential: true
          Image: !Ref MyServiceImage
          MemoryReservation: 128
          PortMappings:
            - ContainerPort: 8080
              HostPort: 8080
          MountPoints:
            - ContainerPath: "/container/path/"
              SourceVolume: "mount-point"
          EnvironmentFiles:
            - Value: "arn:aws:s3:::s3_bucket_name/envfile_object_name.env"
              Type: "s3"
          Environment:
            - Name: ENV_VAR_1
              Value: ENV_VAR_VAL_1
            - Name: ENV_VAR_2
              Value: ENV_VAR_VAL_2
          DependsOn:
            - ContainerName: pre-req-service
              Condition: START

      Cpu: "1024"
      Memory: "128"
      Volumes:
        - Host:
            SourcePath: "/host/path/to/mount"
          Name: "mount-point"

Hi TusharMehtani,
Cloudformation support for env files is not yet available. We are working on it however, and will be made available soon.

Thanks for the update @yunhee-l. Looking forward to this, will try to use some alternative for now.

The docs say environment files are not supported by Fargate even though they are S3 resources.

Can we expect Fargate support in the near future?


Yes Fargate support is in development

Keeping secrets in S3 is a bad idea. Better to just use Secrets Manager for it.

Yes Fargate support is in development

Do you have a separate GitHub issue for that we can follow, @srrengar?

Hi everyone, thank you for your patience. This feature is now available in Fargate as of today.

https://aws.amazon.com/about-aws/whats-new/2020/11/aws-fargate-for-amazon-ecs-launches-features-focused-on-configuration-and-metrics/

