Terraform: Terraform does not push new statefile to S3

Created on 6 Apr 2017 · 18 comments · Source: hashicorp/terraform

Terraform Version

0.9.2

Initially, I was using Terraform 0.8.8 and configured the remote statefile using this command:

terraform remote config \
-backend=s3 \
-backend-config="bucket=abc-tio" \
-backend-config="key=tfstate/prod/common.tfstate" \
-backend-config="region=us-east-1" \
-backend-config="profile=aws-cm-abc"

After accidentally upgrading to Terraform 0.9.2, my statefile situation is now a complete mess.

I created a statefile config:

backend = "s3"
bucket  = "abc-tio"
key     = "tfstate/prod/common.tfstate"
region  = "us-east-1"
profile = "aws-cm-abc"

When I first ran terraform init -backend-config=statefile.config nothing happened.

Then I deleted the local .terraform directory and ran terraform init -backend-config=statefile.config again; this time it reported that Terraform initialization completed.

Afterward, I could use Terraform 0.9.2 without issues. However, Terraform never pushed my statefile to S3 again.

Today I reviewed my S3 bucket, and the statefile there is still the one written by version 0.8.8.

I tried again in another way, within another repo, following your instructions here:

I configured my statefile within my .tf file:

terraform {
  backend "s3" {
    bucket = "abc-tio"
    key    = "tfstate/prod/ANOTHERSTATEFILE.tfstate"
    region = "us-east-1"
    profile = "aws-cm-abc"
  }
}

I then ran terraform init -backend-config=statefile.config and was prompted to download the statefile from the S3 bucket. Of course I did, and afterward I had a statefile on my local machine and could update my environment just as I wanted. However, Terraform does NOT push my statefile to S3 any longer (to tfstate/prod/ANOTHERSTATEFILE.tfstate).

This also revealed another issue: terraform refresh doesn't detect my environment correctly after this.

However, I tried to repeat this in the original repo and it didn't work.

I really wish you could provide complete instructions/a demo on terraform init, as right now this is cumbersome and very frustrating to fix. I have a team of ~40 people working across 250 AWS accounts with multiple statefiles and statefile buckets, and we are all puzzled by this whole piece.

Best regards,

backend/s3 bug


All 18 comments

So, just to be clear.

The value from statefile.config seems to be used ONLY AT FIRST (when I first run terraform init)

After that, I have to define the backend within my .tf file to get it to work? Because if I don't have the block

terraform {
  backend "s3" {
    bucket = "abc-tio"
    key    = "tfstate/prod/ANOTHERSTATEFILE.tfstate"
    region = "us-east-1"
    profile = "aws-cm-abc"
  }
}

within my .tf file, Terraform does NOT push my statefile to S3 (even though I initialized Terraform with a remote statefile).

Is this the original design?

I struggled with this conversion today as well. I finally got it working by adding a .tf file with the terraform {} block and backend details, starting with no S3 file and no existing state, and running terraform init with no args (this seems key); it finally hooked up the state file to S3 correctly. Once it inits and syncs, it automatically pushes/pulls state on terraform apply/terraform plan as it's supposed to.
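The sequence described above can be sketched roughly as follows (the bucket and key names here are placeholders, not real ones):

```shell
# backend.tf: hypothetical file holding the backend block (names are examples)
cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket = "my-state-bucket"
    key    = "prod/terraform.tfstate"
    region = "us-east-1"
  }
}
EOF

rm -rf .terraform   # start clean: no cached backend config, no local state
terraform init      # no -backend-config arguments (this seems to be the key)
terraform plan      # state should now be pulled from and pushed to S3
```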

I would love to use this new functionality, but it doesn't seem as "simple" as advertised.

I have tried to create my statefile.config in all different directories. I have tried creating it as just a .tf file. I have tried just using a data.terraform_remote_state resource. I've tried with and without local state files.
Nothing gives me any indication that it's not actually working; it just doesn't. When I turn on debug logging, I can see it's not actually fetching a backend (modified for privacy):

$ terraform init
2017/04/11 14:30:09 [INFO] Terraform version: 0.9.2  6365269541c8e3150ebe638a5c555e1424071417+CHANGES
2017/04/11 14:30:09 [INFO] Go runtime version: go1.8
2017/04/11 14:30:09 [INFO] CLI args: []string{"<user dir>/terraform", "init"}
2017/04/11 14:30:09 [DEBUG] Detected home directory from env var: <home dir>
2017/04/11 14:30:09 [DEBUG] Detected home directory from env var: <home dir>
2017/04/11 14:30:09 [DEBUG] Attempting to open CLI config file: <home dir>/.terraformrc
2017/04/11 14:30:09 [DEBUG] Detected home directory from env var: <home dir>
2017/04/11 14:30:09 [INFO] CLI command args: []string{"init"}
2017/04/11 14:30:09 [DEBUG] command: loading backend config file: <home dir>/<project dir>
2017/04/11 14:30:09 [INFO] command: backend initialized: <nil>
2017/04/11 14:30:09 [INFO] command: backend <nil> is not enhanced, wrapping in local

2017/04/11 14:30:09 [DEBUG] plugin: waiting for all plugin processes to complete...
Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.

My statefile.config looks (mostly) like this:

terraform {
  backend "s3" {
    bucket = "dev"
    key    = "project/project-test"
    region = "us-east-1"
    encrypt = true
    profile = "test"
  }
}

Any suggestions on what I may be doing wrong/how to get terraform to actually use the backend I specify?

@asharti :

The solution in our case was to rename statefile.config to statefile.tf and use code similar to what you had above.

After the initial terraform init, we are now able to use the remote statefile. But this means we have to keep this file around all the time.

@tanmng - you don't have to. Terraform will download the statefile automatically on every apply/destroy if it is not present.

Thanks for the tip @tanmng. I tried renaming to just statefile and nothing changed. But when I tried renaming the file to statefile.tf, I seem to be hitting a different roadblock now.

On initialization, and every terraform command afterwards, I am getting some notice that the s3.Backend is "not enhanced", so it just quietly defaults to not using S3.

Initializing the backend...
2017/04/12 09:15:27 [INFO] Ignoring AWS metadata API endpoint at default location as it doesn't return any instance-id
2017/04/12 09:15:27 [INFO] command: backend initialized: *s3.Backend
2017/04/12 09:15:27 [INFO] command: backend *s3.Backend is not enhanced, wrapping in local

I've been digging through source code trying to figure out where/how it is deciding my backend isn't "enhanced" and why it is just defaulting back to a local backend when that is clearly not what I want. This seems like something Terraform should be more verbose about (I am having to turn up the log level to see that it's not actually initializing an S3 backend as it has been instructed).

Anyone have any ideas what an "enhanced" backend would be and how to get my configuration to meet these requirements?

@asharti

Sorry for the mistake from my previous comment, I meant "rename to statefile.tf"

The content of our statefile.tf right now is:

#------------------------------------------------------------------------------
# File: xxxxxxxx/common/statefile.tf
# Description: Configure remote statefile/backend
#
# Please only create this AFTER YOU HAVE RUN THE BOOTSTRAP MODULE
#------------------------------------------------------------------------------
terraform {
  backend "s3" {
    bucket  = "xxxxxx-tio-reaxys-xxxxxxxx"
    key     = "tfstate/prod/aws-cm-xxxxod-fab-prod.tfstate"
    region  = "us-east-1"
    profile = "aws-cm-xxxxx"
  }
}

In your case, since you have encrypt = true, maybe you need to also specify the KMS key ID for encryption to work. Can you please try to disable that and change to a new key value, then try terraform init again and see if your state file is available at the new key?

@tanmng Appreciate the help, but I'm still seeing the same INFO output which makes it seem like TF is just quietly not using the S3 backend. I tried both without encrypt and with both encrypt and my kms_key_id. Both times, I saw this in the init output, as well as plan:

2017/04/12 14:24:32 [INFO] command: backend initialized: *s3.Backend
2017/04/12 14:24:32 [INFO] command: backend *s3.Backend is not enhanced, wrapping in local

Still nothing in my S3 bucket. I guess I'll have to wait to use this feature once the kinks have been worked out.

Thanks again, though!

I also get this. There's a message "Refreshing state... (ID: i-" and the tfstate + tfstate.backup files are created locally but don't get copied to the S3 bucket. It doesn't look like a credentials issue, since the EC2 instance gets created.

Any progress here?

I recently upgraded to 0.9.5 and I have the same issue.

I have added the backend configuration in my "main.tf" file as:

## main.tf
terraform {
    required_version = ">= 0.9.5"
    backend "s3" {}
}

I provide the environment and the ID on each execution because they are used to configure S3; that's why I need to configure the backend on each run. I run:

terraform init \
    -backend-config="bucket=${TF_VAR_environment}-bucket" \
    -backend-config="key=devops/terraform-state/${TF_VAR_service_id}/terraform.tfstate" \
    -backend-config="region=${TF_VAR_region}"

And I get:

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.

At this point, if it's the first run and there isn't any Terraform state in S3, I don't have any terraform.tfstate locally or in S3.

After that, I run Terraform and I get:

terraform apply

...Creating all resources...

Apply complete! Resources: 149 added, 0 changed, 0 destroyed.

The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.

State path: 

Outputs:
...

Checking S3, no terraform.tfstate file has been uploaded, and my local terraform.tfstate file has not been synced with the remote state. It starts with:

{
    "version": 3,
    "terraform_version": "0.9.5",
    "serial": 0,
    "lineage": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
    "modules": [
        { ...

The fact that this has not been uploaded to S3 automatically scares me a lot. Any idea what the problem is? Maybe I'm doing something wrong.

I have the same issue as blaltarriba above: partial configuration with a backend config file doesn't push the state to S3.

$ terraform --version
Terraform v0.9.6

My main Terraform configuration file has the S3 remote backend defined like so:

terraform {
  backend "s3" {}
}

And the S3 backend config is defined in terraform-sf-staging.tf:

bucket = "mybucket"
key    = "mykey"
region = "us-east-1"

My AWS credentials are configured in ~/.aws/credentials.
When I call terraform init as below:

$ terraform init -backend-config=terraform-sf-staging.tf
Initializing the backend...


Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.

Locally, I have a .terraform directory, but no state has been uploaded to S3.

Now, if I run another terraform command, like plan, the config appears lost:

$ terraform plan -var-file=staging.tfvars
1 error(s) occurred:

* module root: 3 error(s) occurred:

* Unknown root level key: bucket
* Unknown root level key: key
* Unknown root level key: region
$

Note: if I add the config directly in my main Terraform configuration file, a statefile is uploaded to S3.

Any update on this?

It looks like starting from Terraform v0.9.3, Terraform doesn't recognize a pre-existing state file (i.e. a local state file) during backend initialization.
As a workaround, you can continue using newer Terraform versions, but you need to execute $ terraform state push terraform.tfstate.backup after running $ terraform init
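As a sketch, the workaround amounts to the following (file names as in a default local-state setup):

```shell
# Initialize the new backend, then re-upload the pre-0.9.3 local state to it
terraform init -backend-config=statefile.config
terraform state push terraform.tfstate.backup
```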

Hi Everyone,

I apologize for the confusion here around remote state, and for not getting involved in this thread earlier. There seem to be a number of different issues here, which may or may not be related. There are also a number of conflicting details that may stem from typos, operator error, or even remote API failure.

@tanmng: if terraform init did nothing, it means it was already initialized somehow from a previous command.

@tanmng: @gdmello: You can't use a .tf suffix on the variables files you use for backend config, as it's not valid HCL, and will interfere with the loading of the configuration.
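To illustrate jbardin's point: a partial backend config file contains only bare key/value assignments, which is not a valid top-level .tf configuration, so it needs a non-.tf name (the file name and values below are examples):

```hcl
# backend.hcl (NOT backend.tf): bare assignments for use with -backend-config
bucket  = "abc-tio"
key     = "tfstate/prod/common.tfstate"
region  = "us-east-1"
profile = "aws-cm-abc"
```

It would then be passed in as terraform init -backend-config=backend.hcl.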

@blaltarriba @tanmng: Are you certain there is only 1 terraform and backend config block in your config files? Does the backend config in .terraform/terraform.tfstate look correct? (that file is _not_ your actual state, just a config cache for terraform).

@bitbrain: Which issue are you having, and looking for information on?

@AllaeddineEL: I don't think any of the above were having trouble migrating from local state. If you're having a problem initializing a backend starting from local state, I would suggest open a new issue as this one is primarily about remote state migration and already has too many sub-parts.

I've spent significant time trying to reproduce the above issues with the listed versions, but haven't had any luck, so there is likely some usage or config detail that I'm missing. If anyone can still reproduce the problem with 0.9.11 or 0.10-beta, I would appreciate the detailed steps and configuration.

Thanks

@jbardin I managed to get the S3 remote backend running with Terraform.

Given an S3 bucket called my-bucket and a DynamoDB table called my-dynamo-db-table; also set the AWS environment variables accordingly before running this.

main.tf:

terraform {
  backend "s3" {}
}

provider "aws" {
  region = "eu-west-1"
}

asdf.tfvars:

bucket = "my-bucket"
key = "path/to/my/terraform.tfstate"
# encrypt on the backend
encrypt = true
# use state locking
dynamodb_table = "my-dynamo-db-table"
region = "eu-west-1"

And then run:

terraform init -backend-config=asdf.tfvars
terraform plan

Terraform will not generate a local terraform.tfstate file. Instead, it will maintain a statefile in the S3 bucket.

Terraform Version used = v0.9.11

If it can help here: my state file appears on S3 after terraform apply.
Nothing is present in S3 after terraform init and plan.
If I understand the logic:
terraform init => just initializes the S3 config to target
terraform plan => just shows the plan but does not store it because the config is set to S3, so nothing is in the local store. You can just read it before the next apply step.
terraform apply => does the job and stores the state in S3.

terraform init -backend-config=someS3config.tfvars
=> nothing in local, nothing in s3 bucket
terraform plan
=> nothing in local, nothing in s3 bucket
terraform apply
=> nothing in local, file in s3 OK
terraform destroy
=> nothing in local, file in s3 OK (with bucket versioning: 2 file versions, with the current state set to empty)
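One way to verify what the configured backend actually sees, assuming the AWS CLI is installed and the bucket/key below are replaced with real values:

```shell
# List the state object in the bucket (hypothetical bucket/key)
aws s3 ls s3://my-bucket/path/to/my/terraform.tfstate

# Ask Terraform itself to print the state from the configured backend
terraform state pull | head -n 5
```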

Hi @branciard,

just shows the plan but does not store it because the config is set to S3

No, terraform plan doesn't store anything regardless of the backend config. You can _choose_ to store the plan with the -out flag, which can be used as an argument for apply.

I'm going to close this issue for now, as we have a number of diverging state related questions that aren't necessarily related. If anyone is having an issue shown here with a current release, feel free to file a new issue, or reply here and we can re-evaluate this issue or open a new one.

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
