Terraform-provider-aws: S3 backend with service account vs. assumed role

Created on 13 Jun 2017 · 4 comments · Source: hashicorp/terraform-provider-aws

_This issue was originally opened by @daxroc as hashicorp/terraform#15162. It was migrated here as part of the provider split. The original body of the issue is below._


Hi there,

We've been using the terraform remote state with s3 to great benefit in recent efforts. I've come across an issue with either the resource or our IAM (cough separation of duties).

We've chosen to centralise S3 into one account and use a bucket policy to allow other accounts access - let's call this the Federated bucket.

Bucket Policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Federated Bucket Pemissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": [
                    "arn:aws:iam::<ACC_A>:root",
                    "arn:aws:iam::<ACC_B>:root",
                    "arn:aws:iam::<ACC_C>:user/service_account"
                ]
            },
            "Action": "*",
                "Resource": [
                    "arn:aws:s3:::examplebucket",
                    "arn:aws:s3:::examplebucket/*"
                ]
        },
        {
            "Sid": "EnsureWritesOwnedByOwner",
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::examplebucket/*"
            ],
            "Condition": {
                "StringEquals": {
                    "s3:x-amz-acl": [
                        "bucket-owner-full-control"
                    ]
                }
            }
        }
    ]
}
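
As posted, the second statement is an Allow with no Principal, which by itself doesn't force writers to grant the owner anything; the conventional pattern is a Deny on s3:PutObject whenever the ACL is missing. A rough sketch in Terraform (bucket name as above, resource name hypothetical):

resource "aws_s3_bucket_policy" "enforce_owner_acl" {
  bucket = "examplebucket"

  # Deny any write that does not grant the bucket owner full control,
  # so cross-account writers cannot create objects only they can read.
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyWritesWithoutOwnerFullControl",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::examplebucket/*",
      "Condition": {
        "StringNotEquals": {
          "s3:x-amz-acl": "bucket-owner-full-control"
        }
      }
    }
  ]
}
POLICY
}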

We're using several AWS accounts, with one providing end-user IAM. Now, when I assume an Administrator role on the CLI and use the credentials provided, there is no confusion: Terraform writes the state to the bucket as those assumed credentials and the bucket policy is applied.

This is expected behaviour and it's the desired outcome. Any assumed role can write and read the states this way - huzzah all is well in this bright new terraformed landscape.

Now, when a service account tries to execute the same Terraform code, assuming the role from the service account's credentials, things go awry.

So the service account has the privilege to assume a role within the sub-accounts. Without the S3 backend this works fine - this user can plan, apply, etc.
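
For context, a minimal sketch of the user policy granting that privilege - names and the <ACC_ID> placeholder are illustrative, not our actual policy:

resource "aws_iam_user_policy" "allow_assume_provisioner" {
  name = "allow-assume-provisioner"
  user = "service_account"

  # Let the service account assume the Provisioner role in the sub-account.
  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::<ACC_ID>:role/Provisioner"
    }
  ]
}
POLICY
}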

The aws provider

provider "aws" {
    region = "eu-west-1"
    assume_role {
      role_arn     = "arn:aws:iam::<ACC_ID>:role/Provisioner"
    }
}

Now, when I add the following backend configuration, I can write the state file to the bucket:

terraform {
  backend "s3" {
    bucket       = "examplebucket"
    key          = "app/example/app123.tfstate"
    region       = "eu-west-1"
    encrypt      = true
    acl          = "bucket-owner-full-control"
  }
}

The exception here seems to be that the written state object is no longer subject to the bucket policy - this means that only this service account's user can read the state object, while others receive a 403 on s3:GetObject (the acl option is required for the bucket owner to retain read privilege).

This restrictive behaviour is the default with the S3 backend: only the uploader can read the objects unless ACLs are set at the time of writing, or so I've been told.

This also seems to cause issues with the terraform_remote_state data source, as it tries to read as the service account user and not the assumed role.
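
One workaround sketch: assuming the S3 backend's role_arn option (used in a comment below) is also accepted in the data source's config - which takes the same options as the backend - the read would then happen as the assumed role:

data "terraform_remote_state" "vpc" {
  backend = "s3"

  config {
    bucket = "examplebucket"
    key    = "env/vpc.tfstate"
    region = "eu-west-1"

    # Read the state as the assumed role, not the service account user.
    role_arn = "arn:aws:iam::<ACC_ID>:role/Provisioner"
  }
}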

Terraform Version

Terraform version: 0.9.6

Affected Resource(s)

terraform backend
provider aws
data terraform_remote_state

Terraform Configuration Files

The following is enough to reproduce this issue given the accounts.
Terraform is executed with the service account's user credentials exported as environment variables.

provider "aws" {
    region = "${data.terraform_remote_state.vpc.region}"
    assume_role {
      role_arn     = "arn:aws:iam::<account-a>:role/Administrator"
    }
}
terraform {
  backend "s3" {
    bucket       = "examplebucket"
    key          = "test/test.tfstate"
    region       = "eu-west-1"
    encrypt      = true
    acl          = "bucket-owner-full-control"
  }
}
data "terraform_remote_state" "vpc" {
  backend = "s3"
  config {
    bucket = "examplebucket"
    key = "env/vpc.tfstate"
    region = "eu-west-1"
  }
}

References

Similar to hashicorp/terraform#5136

Labels: question, service/s3, upstream-terraform

All 4 comments

@daxroc - I'm having the same issue. Did you ever come to a solution?

Adding "role_arn" directly into the backend configuration solved my issue:

terraform {
  backend "s3" {
    bucket = "backend-prod-architecture"
    key    = "stuff/stuff"
    region = "eu-central-1"
    role_arn = "arn:aws:iam::XXXXXXXXX:role/AdministratorAccessForXXXXXX"
  }
}

I got an error once that could explain why:

The backend configuration is loaded by Terraform extremely early, before the core of Terraform can be initialized. This is necessary because the backend dictates the behavior of that core.
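
Given that, applying role_arn to the original configuration would look something like this sketch (placeholders as in the issue); the backend assumes the role itself, independently of the provider's assume_role block:

terraform {
  backend "s3" {
    bucket  = "examplebucket"
    key     = "app/example/app123.tfstate"
    region  = "eu-west-1"
    encrypt = true
    acl     = "bucket-owner-full-control"

    # The backend assumes this role on its own; the provider's assume_role
    # block is never consulted for state operations, since the backend is
    # initialized before providers.
    role_arn = "arn:aws:iam::<ACC_ID>:role/Provisioner"
  }
}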

Thank you for using Terraform and for opening up this question! It appears this question has been answered.

Issues on GitHub are intended to be related to bugs or feature requests with the provider codebase. Please use https://discuss.hashicorp.com/c/terraform-providers for community discussions and questions around Terraform.

If you believe that your issue was miscategorized as a question or closed in error, please create a new issue using one of the following provided templates: bug report or feature request. Please make sure to provide us with the appropriate information so we can best determine how to assist with the given issue.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!
