Terraform v0.11.14 and Terraform v0.12.0
provider "google" {
credentials = "${file("key.json")}"
project = "lgo-gce"
}
provider "google-beta" {
credentials = "${file("key.json")}"
project = "lgo-gce"
}
data "google_compute_network" "default" {
name = "default"
}
resource "google_compute_firewall" "web_access" {
name = "web-access"
allow {
protocol = "tcp"
ports = [ 80 ]
}
network = "${data.google_compute_network.default.name}"
source_ranges = [ "0.0.0.0/0" ]
}
resource "google_compute_instance_template" "web" {
name_prefix = "web-"
machine_type = "f1-micro"
disk {
source_image = "debian-cloud/debian-9"
}
network_interface {
network = "${data.google_compute_network.default.name}"
access_config {
}
}
metadata_startup_script = "apt-get update ; apt-get install -y nginx"
}
resource "google_compute_health_check" "web_check" {
name = "web-check"
check_interval_sec = 5
timeout_sec = 5
healthy_threshold = 2
unhealthy_threshold = 2
http_health_check {
request_path = "/"
port = "80"
}
}
resource "google_compute_instance_group_manager" "web_group" {
provider = "google-beta"
name = "web-group"
base_instance_name = "web"
target_size = 1
version {
name = "default"
instance_template = "${google_compute_instance_template.web.self_link}"
}
zone = "europe-west1-d"
auto_healing_policies {
health_check = "${google_compute_health_check.web_check.self_link}"
initial_delay_sec = 30
}
lifecycle {
ignore_changes = [
"version.0.name"
]
}
}
This configuration is saved as oops.tf and used below.
The code is first applied with Terraform 0.11.14 and then applied again, without any change, with Terraform 0.12.0.
When applying with Terraform 0.12.0, a replacement of the instances is planned if they have previously been replaced by GCE.
mkdir t11 t12
curl -sLo t11/terraform.zip https://releases.hashicorp.com/terraform/0.11.14/terraform_0.11.14_linux_amd64.zip
curl -sLo t12/terraform.zip https://releases.hashicorp.com/terraform/0.12.0/terraform_0.12.0_linux_amd64.zip
unzip -d t11 t11/terraform.zip
unzip -d t12 t12/terraform.zip
t11/terraform -v
t12/terraform -v
t11/terraform init
t11/terraform apply
gcloud beta compute instance-groups managed rolling-action replace web-group --zone=europe-west1-d
Following this step, the version name will no longer be "default" as originally created, but will have a new generated name based on time. As of writing this issue, I obtained a version called "0/2019-05-24 15:00:03.158055+00:00". This is exactly why we used lifecycle / ignore_changes in the first place: it prevents instance recreation when it is not needed. The following step shows the expected behaviour with 0.11.14.
t11/terraform apply
The output is:
google_compute_health_check.web_check: Refreshing state... (ID: web-check)
data.google_compute_network.default: Refreshing state...
google_compute_firewall.web_access: Refreshing state... (ID: web-access)
google_compute_instance_template.web: Refreshing state... (ID: web-20190524145803821600000001)
google_compute_instance_group_manager.web_group: Refreshing state... (ID: lgo-gce/europe-west1-d/web-group)
Apply complete! Resources: 0 added, 0 changed, 0 destroyed.
t12/terraform 0.12upgrade
t12/terraform validate
The validate output reports an error about the old-style ignore_changes expression:
Error: Attribute name required
on oops.tf line 66, in resource "google_compute_instance_group_manager" "web_group":
66: ignore_changes = ["version.0.name"]
Dot must be followed by attribute name.
We then replace version.0.name with version[0].name, as sketched below, and obtain a successful result from validate:
Success! The configuration is valid.
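For clarity, this is presumably how the lifecycle block reads after that substitution (the rest of the resource is unchanged):
lifecycle {
  ignore_changes = [
    version[0].name,
  ]
}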
But then, lifecycle/ignore_changes is no longer enforced when using t12/terraform apply:
data.google_compute_network.default: Refreshing state...
google_compute_health_check.web_check: Refreshing state... [id=web-check]
google_compute_firewall.web_access: Refreshing state... [id=web-access]
google_compute_instance_template.web: Refreshing state... [id=web-20190524145803821600000001]
google_compute_instance_group_manager.web_group: Refreshing state... [id=lgo-gce/europe-west1-d/web-group]
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
~ update in-place
Terraform will perform the following actions:
# google_compute_instance_group_manager.web_group will be updated in-place
~ resource "google_compute_instance_group_manager" "web_group" {
base_instance_name = "web"
fingerprint = "uNsNuiZ0RZU="
id = "lgo-gce/europe-west1-d/web-group"
instance_group = "https://www.googleapis.com/compute/v1/projects/lgo-gce/zones/europe-west1-d/instanceGroups/web-group"
name = "web-group"
project = "lgo-gce"
self_link = "https://www.googleapis.com/compute/v1/projects/lgo-gce/zones/europe-west1-d/instanceGroupManagers/web-group"
target_pools = []
target_size = 1
wait_for_instances = false
zone = "europe-west1-d"
auto_healing_policies {
health_check = "https://www.googleapis.com/compute/beta/projects/lgo-gce/global/healthChecks/web-check"
initial_delay_sec = 30
}
update_policy {
max_surge_fixed = 1
max_surge_percent = 0
max_unavailable_fixed = 1
max_unavailable_percent = 0
min_ready_sec = 0
minimal_action = "REPLACE"
type = "PROACTIVE"
}
~ version {
instance_template = "https://www.googleapis.com/compute/v1/projects/lgo-gce/global/instanceTemplates/web-20190524145803821600000001"
~ name = "0/2019-05-24 15:00:03.158055+00:00" -> "default"
}
}
Plan: 0 to add, 1 to change, 0 to destroy.
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value: no
Apply cancelled.
As said above, this is happening during the upgrade process from 0.11.14 to 0.12.0.
According to the docs, the value of ignore_changes changed from a string to an attribute name. I changed my configuration accordingly from:
lifecycle {
  ignore_changes = [
    "spec.0.template.0.spec.0.container.2.image",
  ]
}
to:
lifecycle {
  ignore_changes = [
    spec[0].template[0].spec[0].container[2].image,
  ]
}
but this seems to have no effect. terraform apply clobbers the modified image attribute with the value specified in the configuration, when it should leave it untouched.
I'm also having this issue:
resource "aws_batch_compute_environment" "m_reg0" {
compute_environment_name = "compute_env"
lifecycle {
ignore_changes = [ compute_resources[0].desired_vcpus, compute_resources[0].min_vcpus ]
}
module.ml_pipeline_default.aws_batch_compute_environment.m_reg0: Modifying... [id=compute_env]
Error: : Manually scaling down compute environment is not supported. Disconnecting job queues from compute environment will cause it to scale-down to minvCpus.
status code: 400, request id: _redacted_
edit: ignore_changes = all seems to work
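For reference, a minimal sketch of that whole-resource form; note it suppresses diffs for every attribute of the resource, so it is much coarser than ignoring individual attributes:
lifecycle {
  # ignore all attribute changes on this resource after creation
  ignore_changes = all
}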
FWIW, this is also an issue for us when managing K8S resources in a cluster that is also being managed by Rancher2.
For 0.11 we had the following which worked well...
lifecycle {
  ignore_changes = [
    "metadata.0.annotations.field.cattle.io/publicEndpoints",
  ]
}
Now, I'm unable to determine the equivalent for 0.12.
If run unaltered, the error is:
Error: Attribute name required
on ../../modules/bootstrap/node-metrics.tf line 73, in resource "kubernetes_daemonset" "node-exporter":
73: "metadata.0.annotations.field.cattle.io/publicEndpoints",
Dot must be followed by attribute name.
On removing the '.0' part, it then becomes...
Error: Invalid character
on ../../modules/bootstrap/node-metrics.tf line 73, in resource "kubernetes_daemonset" "node-exporter":
73: "metadata.annotations.field.cattle.io/publicEndpoints",
Expected an attribute access or an index operator.
After some RTFMing, it seems I need to use attribute names instead of strings, so I tried this...
lifecycle {
  ignore_changes = [
    metadata[0].annotations["field.cattle.io/publicEndpoints"],
  ]
}
Which results in the following error:
Error: Invalid expression
on ../../modules/bootstrap/node-metrics.tf line 73, in resource "kubernetes_daemonset" "node-exporter":
73: metadata[0].annotations["field.cattle.io/publicEndpoints"],
A static variable reference is required.
As a workaround, I even tried just ignoring all the annotations.
lifecycle {
  ignore_changes = [
    metadata[0].annotations,
  ]
}
It doesn't give me an error, but it doesn't work as expected either: when run, it just offers to clobber the attributes it's supposed to be ignoring.
Can anyone tell me if I'm doing something wrong, or whether this is now just plain broken in 0.12?!
This is blocking me from upgrading to 0.12 as well; the change in ignore_changes attribute references from .0. to [0] means it will clobber tons of resources.
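To illustrate the syntax change being referenced (the attribute path here is hypothetical):
lifecycle {
  # 0.11 style: a quoted string with numeric indexes
  # ignore_changes = ["some_block.0.some_attribute"]

  # 0.12 style: an attribute reference with bracketed indexes
  ignore_changes = [some_block[0].some_attribute]
}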
This was milestoned for 0.12.1, but 0.12.1 has been released and did not address this issue. Could we get an update on the timeline for addressing this issue?
@apparentlymart - My use case is still not working with 0.12.3. Copying code from further up in this thread, these attributes are still triggering an update:
resource "aws_batch_compute_environment" "m_reg0" {
compute_environment_name = "compute_env"
lifecycle {
ignore_changes = [ compute_resources[0].desired_vcpus, compute_resources[0].min_vcpus ]
}
This seems similar to the pattern described in the initial issue.
@apparentlymart I can confirm what @jyoungs is saying; 0.12.3 does not fix our use case either. Due to a bug in terraform-aws-provider (see https://github.com/terraform-providers/terraform-provider-aws/issues/4392) we need to use ignore_changes on an aws_kinesis_firehose_delivery_stream resource, and the following doesn't work anymore in Terraform 0.12 (tested with every minor release of Terraform 0.12, up to 0.12.3):
resource "aws_kinesis_firehose_delivery_stream" "name" {
extended_s3_configuration {
processing_configuration {
processors {
parameters {
parameter_name = "LambdaArn"
parameter_value = "${var.lambda_arn}:$LATEST"
}
parameters {
parameter_name = "BufferSizeInMBs"
parameter_value = "1"
}
parameters {
parameter_name = "BufferIntervalInSeconds"
parameter_value = "60"
}
}
}
}
lifecycle {
ignore_changes = [extended_s3_configuration[0].processing_configuration[0].processors[0].parameters]
}
}
@apparentlymart I can also confirm it does not work with the kubernetes provider. For instance, the following worked for deployments/statefulsets in 0.11:
lifecycle {
  ignore_changes = ["spec.0.template.0.spec.0.container.0.image"]
}
but
lifecycle {
  ignore_changes = [spec[0].template[0].spec[0].container[0].image]
}
does not work in 0.12.3 (or any other 0.12 version)
Also seeing this issue on 0.12.3. Here is an example from azurerm_redis_cache
resource "azurerm_redis_cache" "redis" {
name = "redis-${var.name}-${var.environment}"
location = var.location
resource_group_name = var.resource_group_name
capacity = var.capacity
family = var.family
sku_name = "Premium"
redis_configuration {
rdb_backup_enabled = true
rdb_backup_frequency = 60
rdb_backup_max_snapshot_count = 1
rdb_storage_connection_string = "DefaultEndpointsProtocol=https;BlobEndpoint=${azurerm_storage_account.redis.primary_blob_endpoint};AccountName=${azurerm_storage_account.redis.name};AccountKey=${azurerm_storage_account.redis.primary_access_key}"
}
tags = var.default_tags
lifecycle {
ignore_changes = [redis_configuration[0].rdb_storage_connection_string]
}
}
This previously worked in 0.11.x to ignore rdb_storage_connection_string, but it no longer ignores the value in 0.12.3. The only thing that appears to work is if I specify the top-level redis_configuration, as sketched below.
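That coarser form would presumably look like this; note it ignores changes to the whole redis_configuration block, not just the connection string:
lifecycle {
  ignore_changes = [redis_configuration]
}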
I guess this might be related to issue: https://github.com/hashicorp/terraform/issues/21421
@apparentlymart would you please consider reopening this issue ?
The following ignore_changes option doesn't work after upgrading to 0.12:
resource "azurerm_app_service" "app-service" {
...
lifecycle {
ignore_changes = [
"site_config[0].linux_fx_version",
]
}
}
Same issue here with:
lifecycle {
  ignore_changes = [
    clone[0].template_uuid,
    custom_attributes,
    disk,
    wait_for_guest_ip_timeout,
    wait_for_guest_net_routable,
  ]
}
When I run the plan:
[...]
      + wait_for_guest_ip_timeout   = 0
      + wait_for_guest_net_routable = true
        wait_for_guest_net_timeout  = 5

      ~ clone {
          - linked_clone  = false -> null
          ~ template_uuid = "xxxxxxxxxxxxxxxxxxxxx" -> "yyyyyyyyyyyyyyyyyyyyyyyy" # forces replacement
            timeout       = 30
        }

      ~ disk {
            [...]
        }
[...]
@wgebis - try dropping the quotes
lifecycle {
  ignore_changes = [
    site_config[0].linux_fx_version,
  ]
}
Thanks @rossigee. That was the issue. Now it works. 👌
@rossigee one more thing: after upgrading to 0.12.4, both options work, with and without quotes.
I'm still seeing issues even after upgrading to 0.12.4, where ignore_changes is still not working as expected like it previously did in 0.11.x. Here's an example from azurerm_function_app where it is still not ignoring app_settings.WEBSITE_RUN_FROM_ZIP as requested.
resource "azurerm_function_app" "function" {
count = var.function_count
name = "${var.name}${count.index + 1}-${var.environment}"
resource_group_name = var.resource_group_name
location = var.location
app_service_plan_id = element(azurerm_app_service_plan.function.*.id, count.index)
storage_connection_string = element(
azurerm_storage_account.function.*.primary_connection_string,
count.index,
)
tags = var.default_tags
version = var.function_version
app_settings = merge(
var.app_settings,
{
"FUNCTIONS_WORKER_RUNTIME" = var.function_runtime
"WEBSITE_NODE_DEFAULT_VERSION" = var.function_node_version
"APPINSIGHTS_INSTRUMENTATIONKEY" = element(
azurerm_application_insights.function.*.instrumentation_key,
count.index,
)
"StorageConnectionString" = element(
azurerm_storage_account.function.*.primary_connection_string,
count.index,
)
},
)
lifecycle {
ignore_changes = [ app_settings.WEBSITE_RUN_FROM_ZIP ]
}
}
And then during planning it's trying to modify the ignored item to null:
~ resource "azurerm_function_app" "function" {
...
~ app_settings = {
...
"WEBSITE_NODE_DEFAULT_VERSION" = "8.11.1"
- "WEBSITE_RUN_FROM_ZIP" = "https://<redacted>" -> null
}
...
}
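If ignoring the single key keeps failing, a coarser fallback, by analogy with the top-level redis_configuration workaround earlier in this thread and not something verified here, would be to ignore the whole app_settings map:
lifecycle {
  # ignores every key in app_settings, not just WEBSITE_RUN_FROM_ZIP
  ignore_changes = [app_settings]
}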
Anyone else still seeing issues here?
If you are still seeing unexpected behavior with ignore_changes, please open a new issue and complete the issue template so that we can understand how your situation differs from the one that was addressed in #21788.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.