Terraform-provider-google: 400 Error when adding a 2nd disk using custom raw encryption key.

Created on 28 Jul 2019  ·  7 comments  ·  Source: hashicorp/terraform-provider-google


Community Note

  • Please vote on this issue by adding a 👍 reaction to the original issue to help the community and maintainers prioritize this request
  • Please do not leave "+1" or "me too" comments, they generate extra noise for issue followers and do not help prioritize the request
  • If you are interested in working on this issue or have submitted a pull request, please leave a comment
  • If an issue is assigned to the "modular-magician" user, it is either in the process of being autogenerated, or is planned to be autogenerated soon. If an issue is assigned to a user, that user is claiming responsibility for the issue. If an issue is assigned to "hashibot", a community member has claimed the issue already.

Terraform Version

terraform version
Terraform v0.12.5

  • provider.google v2.11.0

Affected Resource(s)

  • google_compute_disk
  • google_compute_instance

Terraform Configuration Files

provider "google" {
  credentials = "${file(var.account_file)}"
  project     = "light-reality-248113"
  region      = "us-central1"
  version     = "~>2.11.0"
}


resource "google_compute_disk" "data-disk" {
    count = "${length(var.disksconfig)}"

    name  = "${format("data-disk%03d", count.index)}"
    zone  = "us-central1-a"
    type  = "pd-ssd"
    size  = "${var.disksconfig[count.index].size}"
    disk_encryption_key  {
        raw_key     = "${var.encryptionkey}"
    }
}

resource "google_compute_instance" "vm" {
    name = "vm"
    zone  = "us-central1-a"
    machine_type = "f1-micro"
    boot_disk {
        disk_encryption_key_raw = "${var.encryptionkey}"
        initialize_params {
            image = "debian-cloud/debian-9"
        }
    }

    network_interface {
        network = "default"
    }

    dynamic "attached_disk" {
        for_each = google_compute_disk.data-disk.*
        content {
            source                  =  attached_disk.value.self_link
            device_name             =  attached_disk.value.name
            disk_encryption_key_raw = "${var.encryptionkey}"
        }
    }
}

variables.tf

variable "disksconfig" {
    type = list(object(
        {
            mount = string
            size  = number
        }
    ))
}

variable "encryptionkey" {
    default = ""
}

variable "account_file" {

}

stage1.tfvars

disksconfig = [
    {
        mount = "/mount/point"
        size  = 2
    }
]

stage2.tfvars

disksconfig = [
    {
        mount = "/mount/point"
        size  = 2
    }, {
        mount = "/mount/point2"
        size  = 2
    }
]

Debug Output


https://gist.github.com/eddytrex/6b1ba1f44fac795901b1b3436c7fba5d

Panic Output

Error: Error updating scheduling policy: googleapi: Error 400: 'projects/light-reality-248113/zones/us-central1-a/disks/vm' is protected with a customer supplied encryption key, but none was provided., resourceIsEncryptedWithCustomerEncryptionKey

  on main.tf line 21, in resource "google_compute_instance" "vm":
  21: resource "google_compute_instance" "vm" {

Expected Behavior


A new 2 GB disk is added using the provided encryption key.

Actual Behavior


A 400 Bad Request error is returned. It complains that the encryption key was not provided when trying to update the scheduling policy.

Steps to Reproduce

  1. terraform apply -auto-approve -var-file="./tfvars/stage1.tfvars"
  2. terraform apply -auto-approve -var-file="./tfvars/stage2.tfvars"

Important Factoids


The encryptionkey and account_file variables were passed via the TF_VAR_encryptionkey and TF_VAR_account_file environment variables.

The 2nd disk was created, but not attached to the VM.

bug

All 7 comments

I think this is actually unrelated to the second disk; I think it's just updating the compute instance while it has a CSEK-encrypted disk attached. It looks like the compute instance is incorrectly detecting that its scheduling info needs to be updated, and is trying to update it. But for reasons I don't understand (I can't find any documentation about it!), you can't change a compute instance's scheduling info if it has a CSEK-encrypted disk attached?

It sounds like there are two parts to this fix:

  1. Figure out why we're getting the spurious scheduling info update in the first place.
  2. Figure out why the scheduling info API call is failing when a CSEK encrypted disk is attached, and try to offer a better error message. [edit] or, based on the error message, it sounds like we could potentially supply an encryption key with that request and have it succeed? So that may be a possible solution to this part, too.
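Until the provider is fixed, one possible workaround sketch (untested, and assuming the spurious diff really is on the scheduling block) would be to tell Terraform to ignore scheduling changes on the instance entirely:

```hcl
# Hypothetical workaround sketch, NOT a confirmed fix: if the spurious
# update is on the instance's scheduling block, ignoring changes to it
# should prevent Terraform from issuing the failing setScheduling call.
resource "google_compute_instance" "vm" {
  # ... existing arguments as in the reproduction config above ...

  lifecycle {
    ignore_changes = [scheduling]
  }
}
```

Note this also suppresses legitimate scheduling changes, so it only makes sense as a stopgap while the root cause is investigated.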

Yes, you are right.
I tried without the other disks, with only the encrypted boot disk, and I see the same behavior on the 2nd apply.

I also tried to update the scheduling with the gcloud CLI and got the same error.

Hi, maybe the scheduling info is updating because the definition of node_affinities:

"node_affinities": {
    Type:                       schema.TypeSet, 
    Optional:                  true,
    ForceNew:               true,
    Elem:                       instanceSchedulingNodeAffinitiesElemSchema(),
    DiffSuppressFunc:   emptyOrDefaultStringSuppress(""),
},

in https://github.com/terraform-providers/terraform-provider-google/blob/master/google/resource_compute_instance.go#L468-L473

does not have a Set function like the one in the helpers:

https://github.com/terraform-providers/terraform-provider-google/blob/master/google/compute_instance_helpers.go#L132
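If the missing Set function really is the cause, the suggested change might look like the following sketch (hypothetical, not a confirmed patch); schema.HashResource is the SDK's standard hash function for set elements backed by a resource schema:

```go
// Hypothetical sketch of the suggested fix: give the set an explicit
// hash function, similar to the helper in compute_instance_helpers.go,
// so element identity (and therefore diffing) is stable.
"node_affinities": {
    Type:             schema.TypeSet,
    Optional:         true,
    ForceNew:         true,
    Elem:             instanceSchedulingNodeAffinitiesElemSchema(),
    Set:              schema.HashResource(instanceSchedulingNodeAffinitiesElemSchema()),
    DiffSuppressFunc: emptyOrDefaultStringSuppress(""),
},
```

Whether this actually suppresses the spurious scheduling diff would need to be verified against the provider's acceptance tests.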

I think that looks the way it should. If I had to guess offhand, I'd say that the diff-squashing behavior broke between 0.11 and 0.12 and we hadn't noticed yet, but I'll keep investigating and see if I can track it down.

@paddycarver,
Did you find the problem? Any idea when we might have a fix?

@eddytrex @fstutz-pp I am able to repro the issue with the provider v2.11.0, but not with v3.16.0. Looks like this issue has been fixed. Could you please upgrade your provider and see if that helps? I am closing this issue based on the testing result. Please feel free to reopen it if you still see the issue with new versions of the provider. Thanks

I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.

If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!
