Terraform version: 0.8.5
This may be an issue with Terraform's ordering of resources.
I wrote an Ignition configuration, but it had a problem (CoreOS did not boot) and I had to change the configuration. At first, I had something like:
resource "ignition_file" "squarescale-environment" {
  count      = "${var.cluster_size}"
  filesystem = "rooCLUSTER_NODE_NAMEt"
  path       = "/etc/squarescale.env"

  content {
    content = <<EOF
CLUSTER_NODE_NAME=core-${count.index + 1}
EOF
  }
}
resource "ignition_file" "aws-environment" {
  filesystem = "root"
  path       = "/etc/aws_credentials.env"

  content {
    content = <<EOF
AWS_REGION=${var.aws_region}
AWS_ACCESS_KEY=${var.aws_access_key}
AWS_ACCESS_KEY_ID=${var.aws_access_key}
AWS_SECRET_ACCESS_KEY=${var.aws_secret_key}
EOF
  }
}
resource "ignition_config" "core" {
  count = "${var.cluster_size}"

  systemd = [
    # ...
  ]

  files = [
    "${element(ignition_file.squarescale-environment.*.id, count.index)}",
    "${ignition_file.aws-environment.id}",
    # ...
  ]
}
There is an obvious typo in the ignition_file (filesystem = "rooCLUSTER_NODE_NAMEt"), but terraform succeeds anyway. The machine won't boot. After correcting the ignition_file to filesystem = "root", I run terraform again (terraform apply, directly) and get the following error (selected lines from the output):
ignition_file.squarescale-environment.2: Refreshing state... (ID: c37c07ad2c0b8610b7ee18857ab46661e802c2d6fb0f4b269eef96d3fe07402a)
ignition_file.squarescale-environment.0: Refreshing state... (ID: 7082ef34e3e53a2a875ba75f036d1ec5f3cd512416e6ec0952db90b70c192b5e)
ignition_file.squarescale-environment.1: Refreshing state... (ID: d98ecf8f6a20b35e1cbfe7970c3dd4060a3414c9850828446c2c9f229fa0bbdb)
ignition_file.aws-environment: Refreshing state... (ID: 6973659e50d41717166bb73700a57c3003ef7b2f612e66888bcbf03569551765)
ignition_config.core.0: Refreshing state... (ID: 9d2cb39731b79a13ab367b048a1824623aed2c65bb6c9c6c000468346739d7e3)
ignition_config.core.2: Refreshing state... (ID: a03bfa9f8374b6a488511f38451727a11683e766c459431386322bee7b015ed7)
ignition_config.core.1: Refreshing state... (ID: 4b3bcbc40e24f09b5a2b30ab4dc07ccdefbd9beba5d34eca479ec4726688eeb8)
...
ignition_file.squarescale-environment.0: Creating...
content.#: "" => "1"
content.0.content: "" => "SQSC_FULL_NODE_NAME=shanti-dev6-staging-1\nSQSC_PARENT_ETCD=local-core.staging.sqsc.squarely.io\nSQSC_CLUSTER_TOKEN=77716bdb976c99fe3a23f11744fb016ee381f20d\nCLUSTER_NODE_NAME=core-1\nCLUSTER_ETCD_CONFIG_PATH=/clusters/77716bdb976c99fe3a23f11744fb016ee381f20d/nodes/core-1/config\n"
content.0.mime: "" => "text/plain"
filesystem: "" => "root"
path: "" => "/etc/squarescale.env"
ignition_file.squarescale-environment.1: Creating...
content.#: "" => "1"
content.0.content: "" => "SQSC_FULL_NODE_NAME=shanti-dev6-staging-2\nSQSC_PARENT_ETCD=local-core.staging.sqsc.squarely.io\nSQSC_CLUSTER_TOKEN=77716bdb976c99fe3a23f11744fb016ee381f20d\nCLUSTER_NODE_NAME=core-2\nCLUSTER_ETCD_CONFIG_PATH=/clusters/77716bdb976c99fe3a23f11744fb016ee381f20d/nodes/core-2/config\n"
content.0.mime: "" => "text/plain"
filesystem: "" => "root"
path: "" => "/etc/squarescale.env"
ignition_file.squarescale-environment.2: Creating...
content.#: "" => "1"
content.0.content: "" => "SQSC_FULL_NODE_NAME=shanti-dev6-staging-3\nSQSC_PARENT_ETCD=local-core.staging.sqsc.squarely.io\nSQSC_CLUSTER_TOKEN=77716bdb976c99fe3a23f11744fb016ee381f20d\nCLUSTER_NODE_NAME=core-3\nCLUSTER_ETCD_CONFIG_PATH=/clusters/77716bdb976c99fe3a23f11744fb016ee381f20d/nodes/core-3/config\n"
content.0.mime: "" => "text/plain"
filesystem: "" => "root"
path: "" => "/etc/squarescale.env"
ignition_file.squarescale-environment.1: Creation complete
ignition_file.squarescale-environment.2: Creation complete
ignition_file.squarescale-environment.0: Creation complete
ignition_file.squarescale-environment.1 (deposed #0): Destroying...
ignition_file.squarescale-environment.1 (deposed #0): Destruction complete
ignition_file.squarescale-environment.2 (deposed #0): Destroying...
ignition_file.squarescale-environment.0 (deposed #0): Destroying...
ignition_config.core.0: Modifying...
files.0: "7082ef34e3e53a2a875ba75f036d1ec5f3cd512416e6ec0952db90b70c192b5e" => "235f0282bb7361ec4433a0c6bd851fee5afcb6a00c7c4a05108eca645df8a531"
ignition_file.squarescale-environment.2 (deposed #0): Destruction complete
ignition_config.core.1: Modifying...
files.0: "d98ecf8f6a20b35e1cbfe7970c3dd4060a3414c9850828446c2c9f229fa0bbdb" => "ae4dee4faf4e5e5064b6b2f2e1def78b108b95a84ad809a8572aded6ec06d34a"
ignition_file.squarescale-environment.0 (deposed #0): Destruction complete
ignition_config.core.2: Modifying...
files.0: "c37c07ad2c0b8610b7ee18857ab46661e802c2d6fb0f4b269eef96d3fe07402a" => "c10e4975add707e0ab849b2ec06316086ce4186fc74f30e70cd496f95a232c95"
Error applying plan:
3 error(s) occurred:
* ignition_config.core.0: invalid file "6973659e50d41717166bb73700a57c3003ef7b2f612e66888bcbf03569551765", unknown file id
* ignition_config.core.1: invalid file "6973659e50d41717166bb73700a57c3003ef7b2f612e66888bcbf03569551765", unknown file id
* ignition_config.core.2: invalid file "6973659e50d41717166bb73700a57c3003ef7b2f612e66888bcbf03569551765", unknown file id
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Changing the filesystem property shouldn't have caused this terraform error.
I am running terraform through terragrunt, automatically triggered by a custom service.
I can also confirm that any other change to the content of an ignition_file triggers the same error.
This is 100% reproducible. Put the following file in an empty directory as test.tf:
variable "cluster_size" {
  default = 3
}
resource "ignition_file" "squarescale-environment" {
  count      = "${var.cluster_size}"
  filesystem = "rooCLUSTER_NODE_NAMEt"
  path       = "/etc/squarescale.env"

  content {
    content = <<EOF
CLUSTER_NODE_NAME=core-${count.index + 1}
EOF
  }
}
resource "ignition_file" "aws-environment" {
  filesystem = "root"
  path       = "/etc/aws_credentials.env"

  content {
    content = <<EOF
CLUSTER_SIZE=${var.cluster_size}
EOF
  }
}
resource "ignition_config" "core" {
  count = "${var.cluster_size}"

  systemd = [
    # ...
  ]

  files = [
    "${element(ignition_file.squarescale-environment.*.id, count.index)}",
    "${ignition_file.aws-environment.id}",
    # ...
  ]
}
Then run terraform-0.8.5 apply; it should say:
ignition_file.squarescale-environment.2: Creating...
content.#: "" => "1"
content.0.content: "" => "CLUSTER_NODE_NAME=core-3\n"
content.0.mime: "" => "text/plain"
filesystem: "" => "rooCLUSTER_NODE_NAMEt"
path: "" => "/etc/squarescale.env"
ignition_file.aws-environment: Creating...
content.#: "" => "1"
content.0.content: "" => "CLUSTER_SIZE=3\n"
content.0.mime: "" => "text/plain"
filesystem: "" => "root"
path: "" => "/etc/aws_credentials.env"
ignition_file.squarescale-environment.2: Creation complete
ignition_file.squarescale-environment.1: Creating...
content.#: "" => "1"
content.0.content: "" => "CLUSTER_NODE_NAME=core-2\n"
content.0.mime: "" => "text/plain"
filesystem: "" => "rooCLUSTER_NODE_NAMEt"
path: "" => "/etc/squarescale.env"
ignition_file.squarescale-environment.0: Creating...
content.#: "" => "1"
content.0.content: "" => "CLUSTER_NODE_NAME=core-1\n"
content.0.mime: "" => "text/plain"
filesystem: "" => "rooCLUSTER_NODE_NAMEt"
path: "" => "/etc/squarescale.env"
ignition_file.aws-environment: Creation complete
ignition_file.squarescale-environment.1: Creation complete
ignition_file.squarescale-environment.0: Creation complete
ignition_config.core.0: Creating...
files.#: "0" => "2"
files.0: "" => "b00008a0ba30d6876b15918db9f9f5706b721742f80b212b0bf766aa4196cdf1"
files.1: "" => "e61fcc43178bba5e791999f038e84025ebb5e0f26b02f71491ea411d9ace60f9"
rendered: "" => "<computed>"
ignition_config.core.2: Creating...
files.#: "0" => "2"
files.0: "" => "780fd24630c88551785fb3957c5b3c71e7521cc56146fa8f1ff594cbb4e9ff4d"
files.1: "" => "e61fcc43178bba5e791999f038e84025ebb5e0f26b02f71491ea411d9ace60f9"
rendered: "" => "<computed>"
ignition_config.core.0: Creation complete
ignition_config.core.2: Creation complete
ignition_config.core.1: Creating...
files.#: "0" => "2"
files.0: "" => "2b7c243d9aa05909d6309d63ddbed8144fd4adb793ebf1973367caade86a79d5"
files.1: "" => "e61fcc43178bba5e791999f038e84025ebb5e0f26b02f71491ea411d9ace60f9"
rendered: "" => "<computed>"
ignition_config.core.1: Creation complete
Apply complete! Resources: 7 added, 0 changed, 0 destroyed.
The state of your infrastructure has been saved to the path
below. This state is required to modify and destroy your
infrastructure, so keep it safe. To inspect the complete state
use the `terraform show` command.
State path: terraform.tfstate
Change something in the file; for example, run sed -i s/CLUSTER_SIZE=/SIZE=/ test.tf, then retry terraform-0.8.5 apply. It should fail with:
ignition_file.aws-environment: Refreshing state... (ID: e61fcc43178bba5e791999f038e84025ebb5e0f26b02f71491ea411d9ace60f9)
ignition_file.squarescale-environment.1: Refreshing state... (ID: 2b7c243d9aa05909d6309d63ddbed8144fd4adb793ebf1973367caade86a79d5)
ignition_file.squarescale-environment.2: Refreshing state... (ID: 780fd24630c88551785fb3957c5b3c71e7521cc56146fa8f1ff594cbb4e9ff4d)
ignition_file.squarescale-environment.0: Refreshing state... (ID: b00008a0ba30d6876b15918db9f9f5706b721742f80b212b0bf766aa4196cdf1)
ignition_config.core.2: Refreshing state... (ID: 025f1f5209e5932d426ead4e76b6b7fbc1efaed665167e79981af42b0c808b40)
ignition_config.core.0: Refreshing state... (ID: c56a8587625c62ccec2cd49af67425b6aea68961ebdada36844ff3e2cb9b29dc)
ignition_config.core.1: Refreshing state... (ID: 3acb6dacb9275ca6273f42af55572c4ce4ffada80e8ceb0ac9096f731004ce6e)
ignition_file.aws-environment: Destroying...
ignition_file.aws-environment: Destruction complete
ignition_file.aws-environment: Creating...
content.#: "" => "1"
content.0.content: "" => "SIZE=3\n"
content.0.mime: "" => "text/plain"
filesystem: "" => "root"
path: "" => "/etc/aws_credentials.env"
ignition_file.aws-environment: Creation complete
ignition_config.core.1: Modifying...
files.1: "e61fcc43178bba5e791999f038e84025ebb5e0f26b02f71491ea411d9ace60f9" => "01164a732b47d18e88c6f60da09d8b082ebb150c757747724078ab03cecc1217"
ignition_config.core.0: Modifying...
files.1: "e61fcc43178bba5e791999f038e84025ebb5e0f26b02f71491ea411d9ace60f9" => "01164a732b47d18e88c6f60da09d8b082ebb150c757747724078ab03cecc1217"
ignition_config.core.2: Modifying...
files.1: "e61fcc43178bba5e791999f038e84025ebb5e0f26b02f71491ea411d9ace60f9" => "01164a732b47d18e88c6f60da09d8b082ebb150c757747724078ab03cecc1217"
Error applying plan:
3 error(s) occurred:
* ignition_config.core.0: invalid file "b00008a0ba30d6876b15918db9f9f5706b721742f80b212b0bf766aa4196cdf1", unknown file id
* ignition_config.core.1: invalid file "2b7c243d9aa05909d6309d63ddbed8144fd4adb793ebf1973367caade86a79d5", unknown file id
* ignition_config.core.2: invalid file "780fd24630c88551785fb3957c5b3c71e7521cc56146fa8f1ff594cbb4e9ff4d", unknown file id
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Attached is a zip file (issue11518.zip) with the test.tf file after the sed command and both tfstates. It also contains trace.log, the terraform output with TF_LOG=TRACE; it is also available as a gist here: trace.log. Put them all in a directory, and running terraform-0.8.5 apply there should trigger the error.
You can work around this by tainting (terraform taint) the resource whose id appears in the error message, as many times as needed until there are no more errors. Basically, this means tainting all the resources that appear in the logs as refreshed, as in the example:
ignition_file.aws-environment: Refreshing state... (ID: e61fcc43178bba5e791999f038e84025ebb5e0f26b02f71491ea411d9ace60f9)
ignition_file.squarescale-environment.1: Refreshing state... (ID: 2b7c243d9aa05909d6309d63ddbed8144fd4adb793ebf1973367caade86a79d5)
ignition_file.squarescale-environment.2: Refreshing state... (ID: 780fd24630c88551785fb3957c5b3c71e7521cc56146fa8f1ff594cbb4e9ff4d)
ignition_file.squarescale-environment.0: Refreshing state... (ID: b00008a0ba30d6876b15918db9f9f5706b721742f80b212b0bf766aa4196cdf1)
ignition_config.core.2: Refreshing state... (ID: 025f1f5209e5932d426ead4e76b6b7fbc1efaed665167e79981af42b0c808b40)
ignition_config.core.0: Refreshing state... (ID: c56a8587625c62ccec2cd49af67425b6aea68961ebdada36844ff3e2cb9b29dc)
ignition_config.core.1: Refreshing state... (ID: 3acb6dacb9275ca6273f42af55572c4ce4ffada80e8ceb0ac9096f731004ce6e)
Just run:
terraform-0.8.5 taint ignition_file.aws-environment
terraform-0.8.5 taint ignition_file.squarescale-environment.1
terraform-0.8.5 taint ignition_file.squarescale-environment.2
terraform-0.8.5 taint ignition_file.squarescale-environment.0
terraform-0.8.5 taint ignition_config.core.2
terraform-0.8.5 taint ignition_config.core.0
terraform-0.8.5 taint ignition_config.core.1
You can generate these commands by piping your tfstate into (grep 'ignition_.*{' | cut -d'"' -f2 | xargs -n 1 printf "terraform-0.8.5 taint %s\n").
I believe this is caused by the fact that an ignition_config takes the ids of the other ignition resources. This prevents terraform from converging during the planning phase, because to compute the rendered ignition config (used in an AWS EC2 instance, for example) it needs to provision the ignition resources first. And because those are resources, it cannot provision them during the planning phase.
This also forces terraform to need multiple applies to converge cleanly. For instance, I just modified the ignition config and tainted all the ignition resources so terraform could run. That apply only recreates the ignition resources; it does not recreate the EC2 instances that depend on them. I need to apply one more time so terraform detects that the rendered ignition config has changed and the EC2 instances need to be reprovisioned.
A solution, IMO, would be to convert all the ignition resources to data sources (which can run during the planning phase). Ignition resources are not true resources: provisioning an ignition_file has no side effects; it only causes its rendered property to change. This is strictly equivalent to template_file, which is a data source.
@mcuadros: is there a specific reason the ignition configuration is designed as resources and not data sources?
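To sketch that suggestion (assuming the data sources would mirror the resource schema, which is the direction the provider eventually took), the example above would become:

```hcl
data "ignition_file" "squarescale-environment" {
  count      = "${var.cluster_size}"
  filesystem = "root"
  path       = "/etc/squarescale.env"

  content {
    content = <<EOF
CLUSTER_NODE_NAME=core-${count.index + 1}
EOF
  }
}

data "ignition_config" "core" {
  count = "${var.cluster_size}"

  files = [
    "${element(data.ignition_file.squarescale-environment.*.id, count.index)}",
  ]
}
```

Because data sources are evaluated at plan time, the rendered config would be known before any real resource is provisioned.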
This also happens when you have Ignition resources in separate modules. For example, if I have my ignition_systemd_units in a module separate from my ignition_config.
Sorry, I was at FOSDEM and had a very busy week. I will take care of this today.
@mildred I will try to make a PR today solving this issue, maybe by transforming the resources into data sources. Thanks for the hint.
@mcuadros thank you for your fix.
Could someone here merge it?
I'm using Ignition as a data source, but I still get this after destroying and recreating instances:
Error applying plan:
2 error(s) occurred:
* module.kubernetes.data.ignition_config.worker: data.ignition_config.worker: invalid file "d5b9ec1c34b2f50caabc5d6a26c8c61fd782cd23286aecc18d2720a15b9238c4", unknown file id
* module.kubernetes.data.ignition_config.controller: data.ignition_config.controller: invalid file "d5b9ec1c34b2f50caabc5d6a26c8c61fd782cd23286aecc18d2720a15b9238c4", unknown file id
Another apply fixes this, but it's still a bit annoying.
I can confirm the above-mentioned issue happens to me too.
@jangrewe What is your exact use case there?
I'm running into this issue in a very similar context, also while generating ignition config for K8S nodes :)
@alexsomesan Well, not sure what to say; my use case is generating ignition configs for K8s nodes, too ;-)
@mcuadros @stack72
Would it be reasonable to reopen this?
We've got two independent reports that this issue is still manifesting (mine and @jangrewe's).
Ok, I will take a look to see what's going on.
FWIW I am seeing the same issue
@alexclifford @jangaraj @andrewrynhard I was unable to replicate this with a simple config. Can anyone provide a simple config that fails?
Terraform v0.9.4-dev (9aaf220efbe3079c1f645227b0d468a365b5b44d+CHANGES)
The message "invalid file %q, unknown file id" is triggered when an ignition_file can't be found in the cache.
A very common mistake that produces this error is passing something other than an ignition_file id to the files key of an ignition_config. For example:
data "ignition_config" "example" {
  files = [
    "${data.ignition_systemd_unit.example.id}",
    "${data.ignition_file.hello.id}",
    "${data.ignition_file.bar.id}",
  ]
}
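For reference, a corrected version would pass the systemd unit id through the systemd key instead (a sketch, based on the ignition_config schema shown earlier in this thread):

```hcl
data "ignition_config" "example" {
  systemd = [
    "${data.ignition_systemd_unit.example.id}",
  ]

  files = [
    "${data.ignition_file.hello.id}",
    "${data.ignition_file.bar.id}",
  ]
}
```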
@mcuadros I am seeing this when generating an ignition_file and then passing the id into a module. The use case is that I have three different node types that all share some common configs:
data "ignition_config" "example" {
  files = [
    "${var.ignition_file_bar_id}",
  ]
}
Ok, that is not 100% true. In addition to passing in the id: when the unit file throws the invalid file ... error, if I change any field in it (like name), it works. I can't quite pin it down, but there is definitely a bug.
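For context, the wiring described above looks roughly like this; all names here are hypothetical, and the point is only that the file id crosses a module boundary through a variable:

```hcl
# Root module: build the shared file once and hand its id to the module.
data "ignition_file" "bar" {
  filesystem = "root"
  path       = "/etc/bar.env"

  content {
    content = "BAR=1\n"
  }
}

module "node" {
  source               = "./node"
  ignition_file_bar_id = "${data.ignition_file.bar.id}"
}

# Inside ./node: the variable the ignition_config above consumes.
variable "ignition_file_bar_id" {}
```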
Hi all, I've also been experiencing the same issue. I have been able to consistently replicate the error by creating a resource outside of a module, passing a reference to that resource into the module, referencing the variable in a template, creating an ignition_file from the template, and then creating what seemed to be an unrelated ignition_systemd_unit resource.
Hopefully others are able to replicate with this example https://github.com/jwieringa/terraform-bug-11518 or discover a simpler failure case.
Completed with Terraform version 0.9.3.
@jwieringa thanks! That was really helpful.
The issue happens when the ignition resources are resolved in different stages because they depend on an external variable.
I made a fix, but I am not 100% sure it will work with more complex configurations. I plan to make some other changes to avoid this kind of problem.
@alexsomesan can you test with the Tectonic configuration?
@mcuadros I tested as discussed offline.
Looks good on my side. It fixes the issue I was mentioning earlier.
Looking forward to the PR.
Fix just got merged! Thank you @mcuadros!
Still experiencing this issue in 0.10.7 and 0.10.4. I was using tectonic-installer as a submodule. It worked fine for a few days, but since I started passing in variables I consistently get this Ignition error. Is there a fix?
I believe the resource was kept for compatibility. Are you by any chance using the ignition provider as a resource instead of a data source?
If you have:
resource "ignition_systemd_unit" "example" {
  name    = "example.service"
  content = "[Service]\nType=oneshot\nExecStart=/usr/bin/echo Hello World\n\n[Install]\nWantedBy=multi-user.target"
}
this is incorrect. You should have instead:
data "ignition_systemd_unit" "example" {
  name    = "example.service"
  content = "[Service]\nType=oneshot\nExecStart=/usr/bin/echo Hello World\n\n[Install]\nWantedBy=multi-user.target"
}
@mildred I'm using it as data, not as a resource. It seems like not using it in a submodule has fixed the problem for the time being.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.