Cross-referencing https://github.com/terraform-providers/terraform-provider-terraform/issues/17, because I have no clue where this one goes.

Terraform version: 0.10.6

In the configuration that produces the remote state:
output "staging_default_security_group_id" {
value = "${module.strat_staging_vpc.default_security_group_id}"
}
output "staging_public_subnet_ids" {
value = [
"${module.strat_staging_vpc.public_subnets}",
]
}
output "staging_private_subnet_ids" {
value = [
"${module.strat_staging_vpc.private_subnets}",
]
}
In a top-level .tf:
data "terraform_remote_state" "strat_ops_vpc" {
backend = "s3"
config {
bucket = "com-strat-terraform"
key = "ops/vpc/terraform.tfstate"
region = "${var.region}"
}
}
module "rds" {
source = "../../../../modules/rds/snapshot"
apply_immediately = "true"
environment = "staging"
instance_class = "db.m3.medium"
skip_final_snapshot = "true"
final_snapshot_identifier = "health-staging-final-5"
name = "health-staging"
security_group_ids = ["${data.terraform_remote_state.strat_ops_vpc.staging_default_security_group_id}"]
snapshot_identifier = "${var.rds_snapshot_identifier}"
subnet_ids = ["${data.terraform_remote_state.strat_ops_vpc.staging_private_subnet_ids}"]
engine_version = "9.5.7"
}
-- other modules --
On the .tf with the outputs, apply works and shows them:
```
11:55:01 Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
11:55:01
11:55:01 Outputs:
11:55:01
11:55:01 staging_default_security_group_id = sg-3b3b6f4
11:55:01 staging_nat_enable = true
11:55:01 staging_private_subnet_ids = [
11:55:01     subnet-eae465a,
11:55:01     subnet-2decb31
11:55:01 ]
11:55:01 staging_public_subnet_ids = [
11:55:01     subnet-f6e766e,
11:55:01     subnet-2cecb30
11:55:01 ]
11:55:01 staging_security_group_ids = [
11:55:01     sg-3b3b6f4
11:55:01 ]
```
Using `terraform apply -target=module.rds` fails and never attempts to look up the remote state outputs:
```
11:55:11 [terraform] Running shell script
11:55:12 + cd app/health/webserver/staging
11:55:12 + terraform apply --var elasticache_snapshot_identifier=automatic.health-production-2017-10-27-06-00 --var redshift_snapshot_identifier=rs:stratashift-2017-10-27-05-41-52 --var rds_snapshot_identifier=rds:health-production-2017-10-27-02-05 --var version_label=health-99b7672-2017-10-26T18:14:03.666173 -target=module.rds -target=aws_redshift_cluster.stratashift -no-color
11:55:13 Error running plan: 3 error(s) occurred:
11:55:13
11:55:13 * aws_redshift_subnet_group.stratashift: 1 error(s) occurred:
11:55:13
11:55:13 * aws_redshift_subnet_group.stratashift: Resource 'data.terraform_remote_state.strat_ops_vpc' does not have attribute 'staging_private_subnet_ids.0' for variable 'data.terraform_remote_state.strat_ops_vpc.staging_private_subnet_ids.0'
11:55:13 * module.rds.var.security_group_ids: Resource 'data.terraform_remote_state.strat_ops_vpc' does not have attribute 'staging_default_security_group_id' for variable 'data.terraform_remote_state.strat_ops_vpc.staging_default_security_group_id'
11:55:13 * module.rds.var.subnet_ids: Resource 'data.terraform_remote_state.strat_ops_vpc' does not have attribute 'staging_private_subnet_ids' for variable 'data.terraform_remote_state.strat_ops_vpc.staging_private_subnet_ids'
```
You can see it never did any lookups. Running the same command with no `-target=...` works exactly as expected:
```
13:09:52 [terraform] Running shell script
13:09:52 + cd app/health/webserver/staging
13:09:52 + terraform apply --var elasticache_snapshot_identifier=automatic.health-production-2017-10-27-06-00 --var redshift_snapshot_identifier=rs:stratashift-2017-10-27-02 --var rds_snapshot_identifier=rds:health-production-2017-10-27-02-05 --var version_label=health-22222 -no-color
13:09:53 data.terraform_remote_state.health_beanstalk: Refreshing state...
13:09:53 data.terraform_remote_state.strat_spectrum_iam_role: Refreshing state...
13:09:53 data.terraform_remote_state.strat_ops_vpc: Refreshing state...
13:09:53 data.terraform_remote_state.strat_ops_lambda_to_slack: Refreshing state...
13:09:53 data.aws_ami.redis: Refreshing state...
13:09:53 data.aws_ami.beanstalk: Refreshing state...
13:09:53 aws_iam_role.main-ec2-role: Refreshing state... (ID: health-staging-webserver-ec2)
...logs more lookups...
```
What should have happened?

Able to target resources/modules individually.

What actually happened?

See the debug output above.
+1. Still the same behaviour for 0.11.2: remote state lookup fails when targeting specific modules, though there's a caveat in the 0.10.0 upgrade notes about `-target` being "not intended for routine use":

> The -target option available on several Terraform subcommands has changed behavior and now matches potentially more resources. In particular, given an option -target=module.foo, resources in any descendent modules of foo will also be targeted, where before this was not true. After upgrading, be sure to look carefully at the set of changes proposed by terraform plan when using -target to ensure that the target is being interpreted as expected. Note that the -target argument is offered for exceptional circumstances only and is not intended for routine use.
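As a small illustration, with hypothetical module names: if `module.foo` itself instantiates a nested `module.bar`, a single target flag now covers both:

```sh
# Under the post-0.10 behavior this plans resources in module.foo
# and in its descendants, e.g. resources under module.foo.module.bar.
terraform plan -target=module.foo
```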
Is this still an issue? I am trying to target one module that contains a Launch Configuration so that it uses the latest AMI. However, when I target that module, I get 'up-to-date'. But when I take the -target out, Terraform checks the remote state and notices there are changes that need to be made. Any idea how I can get around this?
Bug in the core as flagged above.
Any news about this? 0.11.11 seems to be affected by this bug too!
Still an issue on v0.11.13, but the solution proposed in https://github.com/terraform-providers/terraform-provider-terraform/issues/17#issuecomment-342491403 worked for me.
Just hit this myself and noticed that if I explicitly `-target` both the resource I want to create _and_ the `data.terraform_remote_state.whatever` data source, it does the refresh and works.

Edit: so to be clear, instead of

```sh
terraform apply -target=provider.my_resource
```

I run

```sh
terraform apply -target=provider.my_resource -target=data.terraform_remote_state.shared
```
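Applied to the configuration from the original report, that would presumably look like:

```sh
# Target the remote state data source alongside the module, so the
# data source gets refreshed before module.rds is planned.
terraform apply \
  -target=module.rds \
  -target=data.terraform_remote_state.strat_ops_vpc
```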
Wow, I spent a fair amount of time debugging my config before I ran into this.

Couldn't Terraform at least issue a warning to refresh manually? Better yet, it could read the remote state automatically whenever a targeted resource references it.
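Another workaround in that spirit, sketched under the assumption that 0.11's `terraform refresh` honors `-target` the same way (the double-`-target` apply above may be more reliable):

```sh
# Refresh only the remote state data source, so its outputs are
# persisted into the local state first...
terraform refresh -target=data.terraform_remote_state.shared

# ...then run the targeted apply as before.
terraform apply -target=provider.my_resource
```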
Hi all! Just looking through some older issues today and found this one.
Based on the behavior described, this sounds like a bug in 0.11 and earlier where the -target handling didn't understand that module.rds depends on data.terraform_remote_state.strat_ops_vpc because those input variables were not considered "part of the module" for the purpose of resolving dependencies.
The relevant parts of Terraform were completely redesigned as part of the configuration language revamp in Terraform 0.12, so I strongly suspect that this problem was addressed in 0.12. Even if there is still a similar issue, it would manifest in a very different way under 0.12 because the codepath that produces the error string "Resource A does not have attribute B for variable C" no longer exists in Terraform 0.12. If a similar problem does still exist then someone opening an issue for it would not recognize it as the same root cause as this one, because the observed behavior would be very different.
For those reasons, I'm going to close this issue now. If you find that you're facing what seems to be similar behavior in Terraform v0.12 or later, please open a new issue and complete the bug report template so we can see how things are playing out in the new codepaths in the new version. Thanks!
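For reference, under 0.12's syntax remote state outputs are read through the data source's `outputs` attribute, so the references from the original report would be written along these lines:

```hcl
# 0.12+ style: outputs live under the data source's "outputs" attribute,
# and expressions no longer need "${...}" interpolation quoting.
security_group_ids = [data.terraform_remote_state.strat_ops_vpc.outputs.staging_default_security_group_id]
subnet_ids         = data.terraform_remote_state.strat_ops_vpc.outputs.staging_private_subnet_ids
```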
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.