Hi there,
Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.
Terraform version: 0.9.0
terraform {
  backend "s3" {
    bucket     = "terraform-xxxxxx"
    key        = "dev/terraform.tfstate"
    encrypt    = "true"
    kms_key_id = "xxx"
    region     = "xxx"
  }
}
Upgrading an existing stack from 0.8.8 to 0.9.0:
Running `terraform init`:

Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.

Terraform has been successfully initialized!

`terraform plan` works as normal. `terraform apply` prints:

Deprecation warning: This environment is configured to use legacy remote state.
Remote state changed significantly in Terraform 0.9. Please update your remote
state configuration to use the new 'backend' settings. For now, Terraform
will continue to use your existing settings. Legacy remote state support
will be removed in Terraform 0.11.
You can find a guide for upgrading here:
https://www.terraform.io/docs/backends/legacy-0-8.html
Running `terraform apply` several times gives the same warning. The version field in the state file has been updated to 0.9.0:
"terraform_version": "0.9.0",
But the state file still has the "remote" section. Compared with a brand-new stack created by Terraform 0.9.0, the new state has no "remote" section at all.
`terraform apply` is successful: no errors, only the warning.
How do I get the old state file converted to 0.9.0 properly?
The state file gets its version updated, but it still has the old format.
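A quick way to see whether a state file is still in the old format is to inspect its top-level keys: pre-0.9 remote state lives in a top-level "remote" block. A minimal sketch in Python (the helper name is mine, not part of Terraform):

```python
import json

def needs_legacy_migration(state_json: str) -> bool:
    """Return True if a tfstate file still contains the legacy
    top-level "remote" block that pre-0.9 remote state used."""
    state = json.loads(state_json)
    return "remote" in state

# example: a state whose terraform_version was bumped but whose
# legacy "remote" block was left behind (the symptom described above)
old_style = '{"version": 3, "terraform_version": "0.9.0", "remote": {"type": "s3"}}'
new_style = '{"version": 3, "terraform_version": "0.9.0"}'
```

This only checks for the legacy block; it says nothing about whether the configured backend itself is correct.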
Details are in the Debug Output.
The stack runs in AWS.
The old Terraform stack used s3 as the remote backend to store the tfstate file.
I can't install a new version of the terraform command on the CI/CD build agent, so I run Terraform from the container `hashicorp/terraform:0.9.0`:
TERRAFORM_CMD="docker run --rm -w /app -v $(pwd):/app hashicorp/terraform:0.9.0"
$TERRAFORM_CMD init
Hey @SydOps, happy to help here. I'm not seeing the same behavior and could use some help to get there... here is what I have.
Note that my end result was that Terraform guided me properly to upgrade the state, migrated my legacy state, and then removed my old legacy state config (though the data still remained, it doesn't delete it).
I was unable to see your behavior... do you mind telling me what I'm missing? Thanks!
1. I setup an existing remote state with Terraform 0.8.8
$ terraform remote config -backend=local -backend-config='path=legacy'
$ terraform apply
I verified that the local state is not being used and that the "legacy" file has my state properly.
2. I ran terraform plan with 0.9.0 to see what would happen
I see this:
$ terraform plan
Deprecation warning: This environment is configured to use legacy remote state.
Remote state changed significantly in Terraform 0.9. Please update your remote
state configuration to use the new 'backend' settings. For now, Terraform
will continue to use your existing settings. Legacy remote state support
will be removed in Terraform 0.11.
You can find a guide for upgrading here:
https://www.terraform.io/docs/backends/legacy-0-8.html
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
null_resource.A: Refreshing state... (ID: 1917627528621563423)
No changes. Infrastructure is up-to-date.
3. I added a backend config to my file and ran plan again
I added this config:
terraform {
  backend "local" {
    path = "legacy"
  }
}
And plan said this:
$ terraform plan
Backend reinitialization required. Please run "terraform init".
Reason: Initial configuration for backend "local"
The "backend" is the interface that Terraform uses to store state,
perform operations, etc. If this message is showing up, it means that the
Terraform configuration you're using is using a custom configuration for
the Terraform backend.
Changes to backend configurations require reinitialization. This allows
Terraform to setup the new configuration, copy existing state, etc. This is
only done during "terraform init". Please run that command now then try again.
If the change reason above is incorrect, please verify your configuration
hasn't changed and try again. At this point, no changes to your existing
configuration or state have been made.
Failed to load backend: Initialization required. Please see the error message above.
4. I ran terraform init
$ terraform init
Initializing the backend...
New backend configuration detected with legacy remote state!
Terraform has detected that you're attempting to configure a new backend.
At the same time, legacy remote state configuration was found. Terraform will
first configure the new backend, and then ask if you'd like to migrate
your remote state to the new backend.
Do you want to copy the legacy remote state from "local"?
Terraform can copy the existing state in your legacy remote state
backend to your newly configured backend. Please answer "yes" or "no".
Enter a value: yes
Do you want to copy state from "local" to "local"?
Pre-existing state was found in "local" while migrating to "local". An existing
non-empty state exists in "local". The two states have been saved to temporary
files that will be removed after responding to this query.
One ("local"): /var/folders/3q/c37yygcs4vl7cvzq3k3wdnyr0000gn/T/terraform652916411/1-local.tfstate
Two ("local"): /var/folders/3q/c37yygcs4vl7cvzq3k3wdnyr0000gn/T/terraform652916411/2-local.tfstate
Do you want to copy the state from "local" to "local"? Enter "yes" to copy
and "no" to start with the existing state in "local".
Enter a value: yes
Successfully configured the backend "local"! Terraform will automatically
use this backend unless the backend configuration changes.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.
5. I ran terraform plan
$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
null_resource.A: Refreshing state... (ID: 1917627528621563423)
No changes. Infrastructure is up-to-date.
This means that Terraform did not detect any differences between your
configuration and real physical resources that exist. As a result, Terraform
doesn't need to do anything.
Conclusion
Everything upgraded properly and no more deprecation warnings once I updated.
Please let me know what I'm missing!
Thanks, @mitchellh
I updated part of Important Factoids with more information:
- In the old Terraform stack we used s3 as the remote backend to store the tfstate file.
- I can't install a new version of the terraform command on the CI/CD build agent, so I run Terraform from the container `hashicorp/terraform:0.9.0`:
TERRAFORM_CMD="docker run --rm -w /app -v $(pwd):/app hashicorp/terraform:0.9.0"
$TERRAFORM_CMD init
Back to my problem: I saw the message from step 4 the first time I ran `terraform init`. If I re-run the init command, I don't see it again.
With `terraform plan`, the remote state file is not updated. So I ran `terraform apply`; this time the state is updated in the s3 bucket and its version is changed to 0.9.0, but the state file still has the "remote" section.
Another difference I found after `terraform apply`:
- In Terraform 0.8.8, the tfstate file is at `.terraform/terraform.tfstate`.
- In Terraform 0.9.0, there are two state files:
  - `./terraform.tfstate` - has all the updated status.
  - `.terraform/terraform.tfstate` - a nearly empty state file with backend details only.
So with the old version, cleaning `.terraform` is enough. But with 0.9.0, if I need to build another environment (for example, qa), it seems I have to delete `./terraform.tfstate` as well, otherwise it will affect the next build.
Hey @SydOps. I just tried S3 + Docker-based TF and still can't reproduce your scenario... I'm confused but really want to help you here. A couple weird things:
- I never get a local `./terraform.tfstate`, so this is weird and seems to be a key detail here.
- `.terraform/terraform.tfstate` should only have backend details, so that is correct for you.
Are you passing any flags to apply/plan/etc.?
I'm sorry to ask for this but is there any way you can try to reproduce this with a fresh environment (using both the 0.8.8 and 0.9 binaries as I did above) so I can just try a step-by-step approach?
I pass a var file, `-var-file=dev.tfvars`, when running plan/apply.
I also run `terraform plan` with `-out=path` and `terraform apply` with that plan file.
I'm experiencing the same issue, where `terraform init` doesn't seem to do anything (it doesn't prompt for anything but exits cleanly).
Since our setup is a bit more complicated, I'll try to extract a minimal reproducible example and share the findings.
@mitchellh Can you please give an example of configuring the s3 backend with and without state locking?
I used the config below, but I could not see the remote state configured properly after running `terraform init`.
cat set-remote.tf
terraform {
  backend "s3" {
    bucket = "xxxxxxx"
    key    = "xxxxx/terraform.tfstate"
    region = "eu-west-1"
    lock   = "false"
  }
}
It just output like this:
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.
If you run `terraform plan` after this, it just creates the terraform.tfstate file locally instead of creating it in the s3 bucket.
Please let me know your thoughts on this, thanks!
People keep saying things like "Add this to config"
terraform {
  backend "s3" {
    bucket     = "terraform-xxxxxx"
    key        = "dev/terraform.tfstate"
    encrypt    = "true"
    kms_key_id = "xxx"
    region     = "xxx"
  }
}
But what is "config"?! I have tons of state files, and tf files etc... what is all this talk about a "config" file?
@voltechs Sorry, by "config" we mean "TF configuration files" (.tf files).
@sachincab
I created a file with the same name, and same configuration. This is my output
test → terraform init
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.
And I run plan/apply and it creates the data in S3.
Exactly the same as yours, so I'm assuming something is different again...
- I never get a local ./terraform.tfstate, so this is weird and seems to be a key detail here
- .terraform/terraform.tfstate should only have backend details so that is correct, for you
I wrote this user-group email just now and only then found this thread. These two points look very similar to what I am seeing in my pared-down repo here, even though my repo demonstrates a non-upgrade approach.
Hey @stephenchu I saw your email! I'm planning on taking a look right after lunch here so we can get a fix in as part of 0.9.1 (assuming I can repro).
@voltechs
Also, please read the comments in the issue below. There are two ways to set the backend config.
https://github.com/hashicorp/terraform/pull/12067#issuecomment-287246299
Ah @stephenchu you're experiencing a bug that was just recently fixed where apply from a plan file didn't respect backend configurations. This is fixed in 0.9.1!
@SydOps is this perhaps how you were getting a local state? Were you running an apply with a plan file?
@mitchellh
Thanks for the updates. I will run plan/apply again on version 0.9.1 to confirm whether my problem is fixed.
Yes, I run plan with -out and apply with that plan file.
By the way, @mitchellh
Can we have a new Docker image with Terraform v0.9.1, `hashicorp/terraform:0.9.1`?
Yes we will, it appears that Alpine had some issues with 0.9.0.
@SydOps I'm confident your issue is the same as what was fixed yesterday. 0.9.1 is building now, will be released within the hour, and should resolve this. Also, you'll be happy to know I added `-backend-config=key=value` back and it's part of the init command.
@mitchellh
Terraform v0.9.1 here, using the `hashicorp/terraform:light` image. I still get this. I've migrated from 0.8.x just now following the guide. All went well. I did not choose to copy the state file as I was already using an s3 bucket for state.
I run `terraform init`, followed by `terraform plan -out terra.plan` and `terraform apply terra.plan`. I get:
Failed to load backend: The plan file contained both a legacy remote state and backend configuration.
This is not allowed. Please recreate the plan file with the latest version of
Terraform.
The `.terraform` directory inside the module only has a terraform.tfstate file with the backend config and a modules folder with symlinks.
However, not using the plan output, i.e. `terraform plan && terraform apply`, works.
@jurajseffer
I managed to fix this problem by downloading the tfstate file from S3, removing the "remote" section from the top
"remote": {
    "type": "s3",
    "config": {
        "bucket": "terraform-remote-state",
        "encrypt": "true",
        "key": "terraform.tfstate",
        "region": "us-west-2"
    }
},
then uploading the modified version. After this, I can output and apply plan files successfully. This is the root bug: the 0.8.x => 0.9.x migration doesn't remove the legacy "remote" section, and/or the documentation doesn't mention this at all.
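The manual edit described above can also be scripted. A minimal sketch, assuming the state file has already been downloaded from S3 (the function name is mine, not a Terraform API):

```python
import json

def strip_legacy_remote(state_json: str) -> str:
    """Drop the legacy top-level "remote" block from a pre-0.9
    state file, leaving everything else untouched."""
    state = json.loads(state_json)
    state.pop("remote", None)  # no-op if the block is already gone
    return json.dumps(state, indent=4)
```

After writing the result back and re-uploading it to the bucket, plan files should no longer mix legacy remote state with a backend configuration. As always, back up the state file before modifying it.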
Ah hah, there is a new issue tracking this I'm taking a look https://github.com/hashicorp/terraform/issues/12871
@mitchellh
I'm still facing the problem when setting remote state with s3 in configurations where modules are used in the main.tf files.
terraform init output:
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.
However, if I use the same remote-state.tf file for testing, it sets the s3 remote backend properly.
terraform init output:
terraform init
Initializing the backend...
Successfully configured the backend "s3"! Terraform will automatically
use this backend unless the backend configuration changes.
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your environment. If you forget, other
commands will detect it and remind you to do so if necessary.
Please let me know your thoughts; I will be able to provide more information on this.
@sachincab I don't understand what incorrect behavior exists, please go into more detail. Thank you!
Just tried to upgrade from TF 0.8.8 to 0.9.1 and encountered this problem. Even though this issue is closed, it doesn't look like it has been solved in 0.9.1.
Our situation:
Some observations:
Workaround to upgrade without editing state file manually:
$ terraform remote pull
$ terraform remote config -disable # disabling remote config removes the remote state entry from state
$ chtf 0.9.1 # switch to terraform 0.9.1
$ terraform init
$ terraform plan -out plan.out
$ terraform apply plan.out
The conclusion is that the remote state entry in the original pre-0.9 state file is not removed when migrating the state to the (new) backend, which causes an invalid plan file to be generated. As apply can handle it correctly, it looks like a bug in the plan command?
@koenijn Thanks for your feedback! We fixed the removal of the "remote" section on migration in master and it will be part of 0.9.2 next week.
Thanks for the quick fix! I was hitting this today, right before an upgrade.
@mitchellh What do you think of a mailing list to notify users of deprecations and upcoming breaking changes? My team doesn't have well-formed expectations regarding the frequency at which we'll run plan or apply on our infrastructure, so it would be great to know ASAP of changes like this.
@jdori Details of all changes (including upcoming releases it seems):
https://github.com/hashicorp/terraform/blob/master/CHANGELOG.md
And if you want notifications, you can subscribe to an atom feed for all new releases:
@kinghajj
I must be missing something fundamental here. I've been using Terraform since pre 0.7 (yay, me) and now I get the deprecation warning. I followed the instructions with no success and now I've tried your suggestion as well with no luck.
I use s3 for state storage and also see the same state file on my local drive. Did you remove the remote section from both the s3 copy and the local copy?
I'm currently on 0.9.9 and would really like to remove this deprecation warning.
I really don't understand how this issue can be considered closed when following the instructions does not return desired results.
I followed the instructions. Backed up my state file. Ran terraform init (which did not ask me any questions at all). Yet, when I run terraform plan I continue to get the same deprecation warning.
My state file contains the following:
"backend": "s3",
"config.%": "3",
"config.bucket": "<fancy bucket name>",
"config.key": "terraform.tfstate",
"config.region": "us-east-2",
"environment": "default",
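For reference, the `config.%` entry above is Terraform's flattened-map ("flatmap") encoding: `config.%` stores the number of elements and each `config.<key>` stores one value. A rough sketch of decoding it (my own helper, not Terraform's internal code):

```python
def expand_flatmap(flat: dict, prefix: str) -> dict:
    """Expand Terraform's flattened-map encoding, where "<prefix>.%"
    holds the element count and "<prefix>.<key>" holds each value."""
    count_key = prefix + ".%"
    result = {}
    for key, value in flat.items():
        if key == count_key:
            continue
        if key.startswith(prefix + "."):
            result[key[len(prefix) + 1:]] = value
    # sanity check against the recorded element count
    if count_key in flat:
        assert len(result) == int(flat[count_key])
    return result
```

So the fragment above decodes to a `config` map with three entries (bucket, key, region), which is the backend config, not the legacy "remote" block.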
@mitchellh Sorry to keep harping on this issue but I can't get rid of the deprecation warning. As you can see from my two previous messages, I followed the upgrade instructions and terraform init does nothing.
I'm sure I'm missing something silly. Thanks!
Any update on this? I can't find any other ticket for this issue which is why I keep writing here
@jmvbxx
I understand your frustration, but the information you provided is not detailed enough, and this is not the right place to discuss and fix your issue.
I recommend you raise a question on Stack Overflow with details: your tf code, *.tfstate file, errors, and any other information you can provide.
Good luck
I don't understand why this isn't the correct place. My issue is exactly what the title of this issue is. I also provided my code snippet in a previous message.
Regardless, I'll do what you suggest and try my luck there.
Thank you
Hi @jmvbxx! Sorry the previous fixes aren't working for you.
Given the age of this issue, and that you're using a newer Terraform version than was apparently last used to do these steps, I think it's best to start a fresh issue for this since you may have found a regression or a new bug introduced since 0.9.2. That will give us a fresh place to dig into the problem and avoid creating notification noise for the many people who previously participated in _this_ issue.
I understand that the symptoms seem similar, but often different bugs can have similar symptoms due to the complexity of Terraform's state migration codepaths.
Thank you @apparentlymart! I'll do that right away and stop bugging in this dusty old ticket :-)
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.