_NOTE: This ticket concerns running Terraform from a parent directory, with the configuration in a subfolder._
```
Terraform v0.12.3
+ provider.aws v2.17.0
```
```hcl
terraform {
  backend "local" {}
}

provider "aws" {}

resource "aws_sns_topic" "test" {
  name = "test"
}
```
The plan is shown:

```
$ terraform show "test.plan"
+ aws_sns_topic.test
      id:     <computed>
      arn:    <computed>
      name:   "test"
      policy: <computed>
```
An error stating:

```
Backend reinitialization required
```
```
$ terraform init ./subfolder-terraform

Initializing the backend...

Successfully configured the backend "local"! Terraform will automatically
use this backend unless the backend configuration changes.

Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "aws" (terraform-providers/aws) 2.17.0...

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.

* provider.aws: version = "~> 2.17"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
```
```
$ terraform validate ./subfolder-terraform

Success! The configuration is valid.
```
```
$ terraform plan -out=test.plan ./subfolder-terraform

Acquiring state lock. This may take a few moments...
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_sns_topic.test will be created
  + resource "aws_sns_topic" "test" {
      + arn    = (known after apply)
      + id     = (known after apply)
      + name   = "test"
      + policy = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

This plan was saved to: test.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "test.plan"
```
```
$ terraform show "test.plan"

Backend reinitialization required. Please run "terraform init".
Reason: Unsetting the previously set backend "local"

The "backend" is the interface that Terraform uses to store state,
perform operations, etc. If this message is showing up, it means that the
Terraform configuration you're using is using a custom configuration for
the Terraform backend.

Changes to backend configurations require reinitialization. This allows
Terraform to setup the new configuration, copy existing state, etc. This is
only done during "terraform init". Please run that command now then try again.

If the change reason above is incorrect, please verify your configuration
hasn't changed and try again. At this point, no changes to your existing
configuration or state have been made.

Error: Initialization required. Please see the error message above.
```
However, `terraform plan "test.plan"` completes successfully.

Hi @scottschreckengaust!
I am sorry you've come across this unexpected behavior. Terraform is unfortunately rather inconsistent about which commands can work from a different directory and which commands must be run from the configuration's working directory. The current advice is to always work from your configuration directory.
`show` is such a command that must be run in the same directory as your configuration, because it loads your state and config. I am going to re-label this as a documentation issue. We should clarify this expectation in the `show` docs and see about improving the error message.
Facing the same problem. `terraform show "test.plan"` doesn't work with a subdirectory, and there is no way to provide ./subfolder-terraform to the `show` command.
I can understand why `show` needs your configuration to show state, but given that a plan file contains a state snapshot and all of your configuration files, why does it need anything but the plan itself?
I'm looking to write tooling to view / approve plan files, and requiring access to the originating configuration makes things impractical.
I have the same use-case as @ThisGuyCodes. We had built this functionality based on plan outputs in the 0.11.x version branch. Would like to see this carried forward into future releases.
@ThisGuyCodes - that's a good question! The component that the plan is missing is the full provider schema, which is used in several places to format the json output. I'm not saying we can't work around it to provide this enhancement at some point, but that's why it's required now. Terraform does not store the provider schemas in the plan or state.
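For anyone building tooling around this today, one possible workaround sketch: Terraform 0.12+ can export the provider schemas from an initialized working directory with `terraform providers schema -json`, so a viewer could be handed both the plan and the schemas it needs. Note that `my-plan-viewer` below is a hypothetical tool, not anything Terraform ships:

```shell
# Assumption: run from a directory where `terraform init` has completed.
# Export the provider schemas that `terraform show` would otherwise
# reload from the configuration's working directory.
terraform providers schema -json > schemas.json

# Hand both artifacts to an external (hypothetical) plan viewer;
# `terraform show -json test.plan` itself still requires the config dir.
my-plan-viewer --plan test.plan --schemas schemas.json
```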
And for everyone who is interested in this enhancement: please react to the original issue with a :+1: - we use those to help prioritize which features to work on next.
My request: please create separate command line flags for local state and configuration directory in all terraform commands.
Many commands allow `terraform` to be executed anywhere as long as state and configuration are identified at the command line. Typically this involves a `-state` flag, plus the use of a final optional `dir` argument pointing to the configuration. This combination of flags allows projects to be created where state and configuration are separate, and where `terraform` is being called from any working directory. Failing to provide this same separation for `state` commands means it is possible to create Terraform projects which operate fine with the majority of commands, then become hard to work with when use cases requiring state manipulation are encountered.
This is a major bummer. It means reorganizing projects which may support production use, or resorting to sketchy hijinks to convert from remote to local state, or to merge state and config directories which may have been separated. It's an invitation for Terraform users to shoot themselves in the foot.
Please be consistent. Please permit the same flexibility in `state` commands that you allow in the rest of the operations supported by the CLI.
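To illustrate the asymmetry being requested here, a sketch under 0.12 behavior (paths are hypothetical examples):

```shell
# Works in 0.12: local state file and configuration directory
# are both identified explicitly on the command line.
terraform plan -state=./state/terraform.tfstate -out=test.plan ./subfolder-terraform

# Some state subcommands also accept -state for a local state file:
terraform state list -state=./state/terraform.tfstate

# No equivalent exists for show: it must be run from the
# initialized configuration directory.
terraform show test.plan
```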
I believe this also affects `terraform output` when you have remote state, unfortunately:
```
terraform output ./terraform/
Backend reinitialization required. Please run "terraform init".
Reason: Unsetting the previously set backend "s3"
The "backend" is the interface that Terraform uses to store state,
perform operations, etc. If this message is showing up, it means that the
Terraform configuration you're using is using a custom configuration for
the Terraform backend.
Changes to backend configurations require reinitialization. This allows
Terraform to setup the new configuration, copy existing state, etc. This is
only done during "terraform init". Please run that command now then try again.
If the change reason above is incorrect, please verify your configuration
hasn't changed and try again. At this point, no changes to your existing
configuration or state have been made.
Error: Initialization required. Please see the error message above.
```
@teamterraform Are there any plans to fix this? Sadly it still exists :(
@guyzyl and others on here: I've been going through and trying to create concrete reproduction cases for issues, and it has greatly accelerated the rate of fixing bugs. Given the number of upvotes and the console output I totally believe this is real, but I tried to create a reproduction case for it and the behavior I saw was that it showed the plan exactly as is being requested and did not request a reinitialization. My assumption is that my reproduction case is incorrect, and I would really appreciate help fixing it so that we can work on this.
I put the repro case in https://github.com/danieldreier/terraform-issue-reproductions/tree/master/21966 and would very much appreciate a PR that fixes it to reproduce the issue people are describing here. I've tried this on 0.12.3, 0.12.26, and 0.13.0 beta 1.
Here is the output when I run the `run.sh` script in that reproduction case:
```
bash-3.2$ bash run.sh
+ terraform init ./subfolder

Initializing the backend...

Initializing provider plugins...
- Using previously-installed hashicorp/null v2.1.2

The following providers do not have any version constraints in configuration,
so the latest version was installed.

To prevent automatic upgrades to new major versions that may contain breaking
changes, we recommend adding version constraints in a required_providers block
in your configuration, with the constraint strings suggested below.

* hashicorp/null: version = "~> 2.1.2"

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
+ terraform validate ./subfolder
Success! The configuration is valid.
+ terraform plan -out=test.plan ./subfolder

Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # null_resource.example will be created
  + resource "null_resource" "example" {
      + id = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

------------------------------------------------------------------------

This plan was saved to: test.plan

To perform exactly these actions, run the following command to apply:
    terraform apply "test.plan"

+ terraform show test.plan

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # null_resource.example will be created
  + resource "null_resource" "example" {
      + id = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
```
If anyone can help with this I would appreciate it.
Hey @danieldreier, I just tried your example myself and you are correct in saying that it does not reproduce the issue. I was able to reproduce it by adding a `gcs` backend to `21966-subfile.tf` (on Terraform version 0.12.26). Here's the full error:
```
Backend reinitialization required. Please run "terraform init".
Reason: Unsetting the previously set backend "gcs"

The "backend" is the interface that Terraform uses to store state,
perform operations, etc. If this message is showing up, it means that the
Terraform configuration you're using is using a custom configuration for
the Terraform backend.

Changes to backend configurations require reinitialization. This allows
Terraform to setup the new configuration, copy existing state, etc. This is
only done during "terraform init". Please run that command now then try again.

If the change reason above is incorrect, please verify your configuration
hasn't changed and try again. At this point, no changes to your existing
configuration or state have been made.

Error: Initialization required. Please see the error message above.
```
so when are we going to get consistency with `terraform show`? :100: :)

- `terraform fmt -recursive terraform/` :+1:
- `terraform validate terraform/` :+1:
- `terraform init terraform/` :+1:
- `terraform plan --out=terraform.plan terraform/` :+1:
- `terraform show terraform/` :-1: :(
- `terraform state pull terraform/` :-1: :(

the dreaded

```
Backend reinitialization required. Please run "terraform init".
Reason: Unsetting the previously set backend "http"
```
Not everyone likes polluting the root directory of their git projects with code. Any workarounds for people like us?
@freibuis this isn't the best solution, but the way I solved it in our pipelines is by `cd`-ing into the directory.
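A minimal sketch of that workaround, assuming the subfolder layout from the original report (configuration in `./subfolder-terraform`, plan file written into it):

```shell
# Init and plan still work with the directory argument from the parent.
terraform init ./subfolder-terraform
terraform plan -out=./subfolder-terraform/test.plan ./subfolder-terraform

# Run `show` from inside the configuration directory; the subshell
# keeps the caller's working directory unchanged.
(cd ./subfolder-terraform && terraform show test.plan)
```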
```
Backend reinitialization required. Please run "terraform init".
Reason: Initial configuration of the requested backend "azurerm"
....
```
It seems that after running `terraform init` at least once, `terraform workspace list` will work, but why?
@guyzyl thank you for that pointer. I've updated the reproduction case to use the local state backend, and am now able to reproduce this issue as stated. Thank you!
Ran into this issue today also. I had to copy the Terraform module directory into my CWD just to run `show` on a plan output I was provided. Initialization seems unnecessary.
It seems like there are a few different threads of conversation going on here, so I'm not sure if this actually addresses the original problem this issue represents, but regarding the consistency of the command line options I can at least say that Terraform 0.14 is introducing a new global option `-chdir=`, which is supported for all Terraform commands and causes Terraform to switch to the given working directory before actually running the command:

```
terraform -chdir=terraform/ init
terraform -chdir=terraform/ validate
terraform -chdir=terraform/ plan --out=../terraform.plan
terraform -chdir=terraform/ show
terraform -chdir=terraform/ state pull
terraform -chdir=terraform/ fmt
```
You can preview this in 0.14.0-beta1, though please be careful not to run commands that generate new state snapshots unless you plan to keep working with 0.14.0-beta1, because new state snapshots will not be compatible with Terraform 0.13 releases.
The inconsistently-supported positional command line arguments remain where they were for 0.14, but we intend to remove them in a future major release, in favor of the `-chdir` option which works for all commands and -- unlike the old command line arguments -- also works consistently for all file access done by Terraform.
This issue isn't about the `-chdir` option, so if you try it and have feedback that isn't directly related to the behavior of `terraform show <plan>`, then please open a _new_ issue to discuss that, so that this one can continue to be about the `terraform show` situation in particular.
@apparentlymart omg this is going to make life SO MUCH BETTER