Given a file called templates/corrupted.json inside a module, containing invalid JSON, and the following Terraform configuration in that module:
data "template_file" "task_definition" {
template = "${file("${path.module}/templates/corrupted.json")}"
}
resource "aws_ecs_task_definition" "app_definition" {
family = "my-taskdefinition"
container_definitions = "${data.template_file.task_definition.rendered}"
}
resource "aws_ecs_service" "app_service" {
name = "myservice"
cluster = "mycluster"
task_definition = "${aws_ecs_task_definition.app_definition.arn}"
I expect Terraform to give me a reasonable error message indicating that the JSON provided is invalid. However, I get this error instead:
Error: Error running plan: 1 error(s) occurred:
* module.aws_ecs_app.aws_ecs_service.app_service: 1 error(s) occurred:
* module.aws_ecs_app.aws_ecs_service.app_service: Resource 'aws_ecs_task_definition.app_definition' not found for variable 'aws_ecs_task_definition.app_definition.arn'
I suspect that Terraform is technically unable to create the aws_ecs_task_definition and then fails later when trying to resolve the dependency tree. Ideally, it should fail much earlier and indicate that the JSON rendered via the template is corrupted.
@bitbrain thanks for pointing this out! You could indeed be correct here. There are definitely certain scenarios in the graph right now where errors from resources and data sources bubble down the graph into non-intuitive error messages.
This one is a bit more complex to solve, and it might actually be an issue that needs to be resolved in Terraform core, but I've flagged it as such so that it can be examined at a later time to see whether we can solve it in the resulting resource or data source itself.
Thanks!
Adding my 2 cents: I ended up here because I had the same error, and after double-checking my container definitions I did indeed have a syntax error in there.
After correcting it, terraform successfully output the plan.
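If you want Terraform itself to catch this kind of mistake earlier, one option is to parse the template file explicitly so the plan fails with a clear JSON parse error. This is only a minimal sketch, assuming Terraform 0.12 or later (where jsondecode() is available) and a template whose raw contents are meant to be plain JSON; the local name is illustrative:

locals {
  # Fails at plan time with an explicit JSON parse error if the file is not valid JSON.
  # Adjust the path to match your module layout.
  container_definitions_check = jsondecode(file("${path.module}/templates/corrupted.json"))
}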
Hi folks! 👋 Sorry for the unexpected behavior here.
It turns out this is likely an issue upstream in Terraform core, rather than anything to do with the AWS provider specifically. This scenario basically occurs any time a resource attribute implements a ValidateFunc and that resource is then referenced by another resource. When the first resource fails that validation, Terraform core prefers to return the invalid-resource-reference message instead of the resource validation error.
I'm going to close this issue out as we have a few upstream tracking issues relating to this:
I would suggest commenting on and upvoting those for the latest updates on this. 👍
I've spent so much time trying to understand what the problem is...
I don't know what the problem is, but this thread just saved me some headache.
To test the JSON output, one can insert a null_resource (h/t) such as:
resource "null_resource" "test_template" {
triggers = {
json = "${data.template_file.my_template.rendered}"
}
}
// existing template definition:
data "template_file" "my_template" {
template = "${file("${path.module}/my_template.tpl.json")}"
vars {
...
}
}
(You might need to terraform init now if you haven't installed the null provider before.)
And then target the null_resource:
$ terraform plan -target=module.foo.module.bar.null_resource.test_template
Refreshing Terraform state in-memory prior to plan...
...
Terraform will perform the following actions:
  + module.foo.module.bar.null_resource.test_template
      id:            <computed>
      triggers.%:    "1"
      triggers.json: "<your JSON here>"
With a couple of find/replaces for the encoded newlines and quotes you should be able to lint your JSON to see if it is valid.
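If the find/replace step is a nuisance, a similar trick is to write the rendered template to a real file and run a JSON linter against it directly. This is just a sketch, not something from the original comment: it assumes the local provider is installed, and the resource name and output filename are illustrative:

resource "local_file" "rendered_template" {
  # Writes the rendered JSON to disk so it can be linted without any unescaping.
  content  = "${data.template_file.my_template.rendered}"
  filename = "${path.module}/rendered_template.json"
}

After applying just that resource (for example with terraform apply -target=module.foo.module.bar.local_file.rendered_template), the file can be checked with any ordinary JSON linter.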
Another way this error can occur is if your template is valid JSON, but the JSON doesn't represent a valid Task Definition. In that case, next try targeting the aws_ecs_task_definition resource:
$ terraform plan -target=module.foo.module.bar.aws_ecs_task_definition.my_task_definition
...
Error: Error running plan: 1 error(s) occurred:
* module.foo.module.bar.aws_ecs_task_definition.my_task_definition: ECS Task Definition container_definitions is invalid: Error decoding JSON: json: cannot unmarshal string into Go struct field PortMapping.ContainerPort of type int64
Here I incorrectly used a string for a port value.
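A way to sidestep this whole class of mistake is to build container_definitions with Terraform's jsonencode() instead of a hand-written JSON template, so the emitted JSON is always syntactically valid and HCL numbers stay numbers. A minimal sketch, assuming Terraform 0.12 or later syntax; the container name, image, and ports are illustrative:

resource "aws_ecs_task_definition" "app_definition" {
  family = "my-taskdefinition"

  # jsonencode() always produces syntactically valid JSON, and numeric values
  # such as containerPort are emitted as numbers rather than strings.
  container_definitions = jsonencode([
    {
      name      = "app"          # illustrative container name
      image     = "nginx:latest" # illustrative image
      essential = true
      portMappings = [
        {
          containerPort = 80 # a number, not "80"
          hostPort      = 80
        }
      ]
    }
  ])
}

AWS still validates the task definition schema when it is registered, so this only removes the JSON syntax and string-vs-number pitfalls, not every possible error.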
All the referenced issues in https://github.com/terraform-providers/terraform-provider-aws/issues/3281#issuecomment-397652696 are closed, and I still got stuck on this on Friday.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. Thanks!