terraform version
Terraform v0.12.5
+ provider.null v2.1.2
module1:

provider "null" {}

resource "null_resource" "name" {
  count    = 1
  triggers = { description = var.description }

  provisioner "local-exec" { command = "echo name" }
}

variable "description" {
  type = string
}

output "ids" {
  value = [for s in null_resource.name : s.id]
}
module2:

provider "null" {}

resource "null_resource" "hi" {
  count    = length(var.resources)
  triggers = { t = var.resources[count.index] }

  provisioner "local-exec" {
    command = "echo greetings ${var.resources[count.index]}"
  }
}

variable "resources" {
  type = list(string)
}
module3:

module "r" {
  source      = "../module1"
  description = var.description
}

module "hi" {
  source    = "../module2"
  resources = module.r.ids
}

variable "description" {
  type = string
}
https://gist.github.com/eddytrex/43c5bd47b273607705ae0f9bf1931503
Error: Cycle: module.r.output.ids, module.hi.var.resources, module.hi.null_resource.hi (prepare state), module.hi.null_resource.hi[0] (destroy), module.r.null_resource.name[0] (destroy), module.r.null_resource.name[0]
Expected behavior: the null_resource in module1 is renewed, the new values are passed to module2, and then the null_resource in module2 is renewed.
Actual behavior: the cycle error above.
Please list the full steps required to reproduce the issue, for example:
terraform init
terraform apply -var description=first module3
terraform apply -var description=second module3

Tested with Terraform 0.12.6 with the same behavior.
Using lifecycle create_before_destroy on the null_resource "name", it works fine. But it may not be desirable to use create_before_destroy on other resources.
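A minimal sketch of that workaround in module1, assuming create_before_destroy is simply added to the existing "name" resource (the rest of the module is unchanged):

resource "null_resource" "name" {
  count    = 1
  triggers = { description = var.description }

  # Workaround: create the replacement instance before destroying the old
  # one, which happens to break the destroy-ordering cycle.
  lifecycle {
    create_before_destroy = true
  }

  provisioner "local-exec" { command = "echo name" }
}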
Thanks for the added info @eddytrex, that might be very useful in tracking this down.
You are correct that you should not need to use create_before_destroy to fix a cycle like this; it just coincidentally happens to break the cycle when destroying the instances.
I'm testing with 0.12.7. Interestingly, if null_resource.hi in module2 does not use a count/for_each that depends on the resources variable, it works without the cycle error.
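For reference, a hedged sketch of what such a no-count variant of module2 might look like; the triggers still depend on var.resources, but there is only a single instance:

resource "null_resource" "hi" {
  # Hypothetical variant without count/for_each: the trigger still depends
  # on the resources variable, joined into a single string.
  triggers = { t = join(",", var.resources) }

  provisioner "local-exec" {
    command = "echo greetings ${join(",", var.resources)}"
  }
}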
Closed by #22976
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.