Terraform: Data sources not refreshed on destroy

Created on 7 Dec 2020 · 4 comments · Source: hashicorp/terraform

Terraform Version

Terraform v0.14.0

Terraform Configuration Files

provider "kubernetes" {
  load_config_file = false
  host             = data.terraform_remote_state.cluster.outputs.endpoint
}
data "terraform_remote_state" "cluster" {
  backend = "s3"
  config = {
    bucket = var.remote_state_bucket
    key    = var.remote_state_key
    region = var.remote_state_region
  }
}
resource "kubernetes_namespace" "test" {
  metadata {
    name = "test"
  }
}

Debug Information

Error: Kubernetes cluster unreachable: the server has asked for the client to provide credentials

Expected Behavior

The destroy run should refresh the terraform_remote_state data source and use its outputs in the provider block.

Actual Behavior

When a destroy is run, the terraform_remote_state data source is not refreshed when the plan is created. When the plan is executed, the provider fails because there is no data from the remote state.

Additional Context

We use one Terraform run to create a Kubernetes cluster, then a second run to create Kubernetes resources on that cluster. The terraform_remote_state data source is used to get the cluster endpoint from the state of the first run. When running a destroy, the terraform_remote_state data source is not refreshed and the Kubernetes provider fails.
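The two-run setup described above can be sketched as follows. This is an illustrative fragment, not the reporter's actual configuration; the resource and output names (`aws_eks_cluster.default`, `endpoint`) are assumptions chosen to match the remote-state lookup shown earlier:

```hcl
# Run 1: creates the cluster and exports its endpoint to state.
# Run 2 reads this output via data.terraform_remote_state.cluster.outputs.endpoint.
resource "aws_eks_cluster" "default" {
  # ...cluster configuration...
}

output "endpoint" {
  value = aws_eks_cluster.default.endpoint
}
```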

bug confirmed v0.14

All 4 comments

Did some more testing. When using a data source from the same Terraform run, it also doesn't refresh.
The Kubernetes provider is never usable because data.aws_eks_cluster_auth.default_auth.token doesn't get refreshed on a destroy.

provider "kubernetes" {
  load_config_file       = false
  host                   = aws_eks_cluster.default.endpoint
  cluster_ca_certificate = base64decode(aws_eks_cluster.default.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.default_auth.token
}
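For context, the token referenced above typically comes from an `aws_eks_cluster_auth` data source paired with the cluster resource. A minimal sketch, assuming the names match the provider block:

```hcl
# Fetches a short-lived authentication token for the EKS cluster;
# this is the data source that is not refreshed during a destroy.
data "aws_eks_cluster_auth" "default_auth" {
  name = aws_eks_cluster.default.name
}
```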

Our full test pipeline is blocked by this bug. Would love to see it fixed. 👍

Hello,

For anyone who isn't aware: as a workaround, running a refresh, or applying an empty plan, immediately before the destroy will update the data source in the state.
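The workaround can be sketched with Terraform 0.14 CLI commands; this assumes `terraform init` has already been run in the working directory:

```shell
# Refresh the state so data sources (including terraform_remote_state)
# are re-read before the destroy plan is built...
terraform refresh

# ...then run the destroy as usual.
terraform destroy
```

Alternatively, a plain `terraform apply` that produces no changes (an empty plan) has the same effect of updating the data sources in state.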

The destroy process itself has many shortcomings, but because many of the other configurations that fail never worked previously, they are not noticed as regressions. In this case, the removal of the separate refresh phase means that data sources are not updated during an explicit destroy operation. While removing refresh fixed numerous other bugs, it seems to have created another case that does not work well with the destroy command.

I think we can re-introduce a sort of "data refresh" into the destroy-plan graph, and in the process also take care of some other outstanding issues, like locals and outputs not being usable in provider configurations during destroy.

@jbardin Thank you for the workaround, this seems to help us with the issue we have.
