Terraform version: 0.6.16
```hcl
resource "aws_instance" "vpn" {
  ami               = "[REDACTED]"
  instance_type     = "t2.micro"
  subnet_id         = "[REDACTED]"
  security_groups   = ["[REDACTED]"]
  key_name          = "vpn-key"
  source_dest_check = false
}

resource "aws_eip" "vpn" {
  instance   = "${aws_instance.vpn.id}"
  vpc        = true
  depends_on = ["aws_instance.vpn"]
}

resource "cloudflare_record" "vpn" {
  domain = "[REDACTED]"
  name   = "vpn"
  value  = "${aws_eip.vpn.public_dns}"
  type   = "CNAME"
  ttl    = 1
}
```
I ran this build only to replace an EC2 instance with a new one running a new AMI. I expected the instance to be replaced and the Elastic IP reassigned to it. The Cloudflare record should not have been modified, since it is tied to the EIP. In short, I expected the build to replace the EC2 instance just as it would have in 0.6.15.
error:
Neither the Cloudflare record nor the zone has been modified outside of Terraform. I can reproduce this in every Terraform project I have with a cloudflare_record resource, so it is not limited to a particular instance. All resources work correctly if I roll back to Terraform 0.6.15.
Perhaps this relates somehow to #5508 or #6449 as those look to be the only changes to the cloudflare_record resource in 0.6.16?
I can reproduce this issue with non-AWS resources as well.
I can reproduce this too.
Also, if you:

1. `terraform apply` with v0.6.16
2. `terraform plan` with v0.6.15

...the cloudflare_record resource shows as "new", and a subsequent `terraform apply` will fail with a name collision.
https://github.com/hashicorp/terraform/pull/6449#issuecomment-216592074 notes a change in a library in the cloudflare provider that impacts how records are looked up. If state data isn't getting translated between v0.6.15's "get by name" and v0.6.16's "get by ID", I think that'd explain what we're seeing here.
I found a rather messy way to work around this issue.
First, go to the DNS Records section on the CloudFlare website and look in your browser's developer tools for an XHR request to /api/v4/zones/xxx. Alternatively, you can query the CloudFlare API directly.
In the response payload, find the result key. You will see an array of the existing DNS records. Note the ID of the record you would like to associate with the existing Terraform resource:
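As an illustrative sketch (the filename and record values here are hypothetical, not from the thread), if you save that response payload to a file, a jq one-liner can list each record's ID alongside its type and name so you can pick the right one:

```shell
# Hypothetical sketch: records.json holds the saved CloudFlare v4 API response.
# Print each DNS record's ID next to its type and name.
jq -r '.result[] | "\(.id)\t\(.type)\t\(.name)"' records.json
```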

Next, go to your terraform.tfstate, find the CloudFlare Record resource you would like to update, and replace the values of the `primary.id` and `primary.attributes.id` attributes with the ID you noted above.
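As a rough sketch, the same hand-edit can be scripted with jq. The resource address `cloudflare_record.vpn`, the new ID, and the assumption that the resource lives in the root module (the first `modules` entry of the legacy pre-0.7 state layout) are all placeholders for your own setup:

```shell
# Hypothetical sketch: rewrite both ID fields of a cloudflare_record resource
# in a legacy (pre-0.7) terraform.tfstate. NEW_ID and the resource address
# "cloudflare_record.vpn" are placeholders; back up your state file first.
NEW_ID="abc123"
jq --arg id "$NEW_ID" '
  .modules[0].resources["cloudflare_record.vpn"].primary
  |= (.id = $id | .attributes.id = $id)
' terraform.tfstate > terraform.tfstate.new
```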

After these two steps, `terraform plan` and `terraform apply` work without any problem for me.
Thank you for the workaround @premist, worked great for me!
I'm always nervous hand-editing terraform.tfstate, but since this issue was due to a lib upgrade that adds support for CloudFlare's "proxying" feature to the Terraform provider, it seems worth the pain!
Related to this change: if you have any domains that are currently proxied by CloudFlare, you'll need to add `proxied = true` to your Terraform resource, or else your plan execution will disable proxying (since the new `proxied` option defaults to false):
```
$ terraform plan
~ module.application.cloudflare_record.webapp_root
    name:    "staging" => "staging.example.com"
    proxied: "true" => "false"
```
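For example, a currently-proxied record would need something like the following (the resource name and values here are illustrative placeholders, not taken from the thread):

```hcl
# Illustrative example; domain, name, and value are placeholders.
resource "cloudflare_record" "webapp_root" {
  domain  = "example.com"
  name    = "staging"
  value   = "origin.example.com"
  type    = "CNAME"
  ttl     = 1
  proxied = true  # keep CloudFlare proxying enabled; the option defaults to false
}
```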
I'm seeing the same issue with my cloudflare resources.
I am also seeing this issue.
Hello friends –
I have patched this in the master branch, sorry for the trouble!
@premist your work-around saved the day for me, many thanks!
@catsby thank you for the fix!
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.