```
$ terraform version
Terraform v0.12.5
```
main.tf

```hcl
provider "google" {
  credentials = file(var.account_file)
  project     = "light-reality-248113"
  region      = "us-central1"
  version     = "~> 2.11.0"
}

resource "google_compute_disk" "data-disk" {
  count = length(var.disksconfig)
  name  = format("data-disk%03d", count.index)
  zone  = "us-central1-a"
  type  = "pd-ssd"
  size  = var.disksconfig[count.index].size

  disk_encryption_key {
    raw_key = var.encryptionkey
  }
}

resource "google_compute_instance" "vm" {
  name         = "vm"
  zone         = "us-central1-a"
  machine_type = "f1-micro"

  boot_disk {
    disk_encryption_key_raw = var.encryptionkey

    initialize_params {
      image = "debian-cloud/debian-9"
    }
  }

  network_interface {
    network = "default"
  }

  dynamic "attached_disk" {
    for_each = google_compute_disk.data-disk
    content {
      source                  = attached_disk.value.self_link
      device_name             = attached_disk.value.name
      disk_encryption_key_raw = var.encryptionkey
    }
  }
}
```
variables.tf

```hcl
variable "disksconfig" {
  type = list(object({
    mount = string
    size  = number
  }))
}

variable "encryptionkey" {
  default = ""
}

variable "account_file" {
}
```
stage1.tfvars

```hcl
disksconfig = [
  {
    mount = "/mount/point"
    size  = 2
  }
]
```
stage2.tfvars

```hcl
disksconfig = [
  {
    mount = "/mount/point"
    size  = 2
  },
  {
    mount = "/mount/point2"
    size  = 2
  }
]
```
https://gist.github.com/eddytrex/6b1ba1f44fac795901b1b3436c7fba5d
```
Error: Error updating scheduling policy: googleapi: Error 400: 'projects/light-reality-248113/zones/us-central1-a/disks/vm' is protected with a customer supplied encryption key, but none was provided., resourceIsEncryptedWithCustomerEncryptionKey

  on main.tf line 21, in resource "google_compute_instance" "vm":
  21: resource "google_compute_instance" "vm" {
```
Added a new 2 GB disk using the provided encryption key.
Got a Bad Request error: it complains that the encryption key was not provided when trying to set the scheduling policy.

```shell
terraform apply -auto-approve -var-file="./tfvars/stage1.tfvars"
terraform apply -auto-approve -var-file="./tfvars/stage2.tfvars"
```
The encryptionkey and account_file variables were passed as the TF_VAR_encryptionkey and TF_VAR_account_file environment variables.
The second disk was created, but not attached to the VM.
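For context, a CSEK `raw_key` must be a 256-bit key, base64-encoded. A minimal sketch of generating one and exporting it the way described above (this is an assumption about how the key in this repro was produced, not taken from the report):

```shell
# Generate 32 random bytes and base64-encode them (no line wrapping),
# then expose the result to Terraform via the TF_VAR_ convention.
export TF_VAR_encryptionkey="$(head -c 32 /dev/urandom | base64 -w0)"
```

Terraform picks up any `TF_VAR_<name>` environment variable as the value of `variable "<name>"`.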
I think this is actually unrelated to the second disk; I think it's just updating the compute instance while it has a CSEK-encrypted disk. It looks like the compute instance is incorrectly detecting that its scheduling info needs to be updated, and trying to update it. But for reasons I don't understand (I can't find any documentation about it!), you can't change a compute instance's scheduling info if it has a CSEK-encrypted disk attached?
It sounds like there are two parts to this fix:
Yes, you are right.
I tried without the other disk, with only the (encrypted) boot disk, and I see the same behavior on the second apply.
I also tried updating the scheduling policy with the gcloud CLI and got the same error.
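For reference, the gcloud attempt looked roughly like this (`set-scheduling` is the relevant subcommand; the exact flag values here are an assumption, since the original command was not posted):

```shell
# Attempt to update the instance's scheduling policy directly.
# Against an instance with a CSEK-encrypted disk attached, this fails with
# the same resourceIsEncryptedWithCustomerEncryptionKey error as Terraform.
gcloud compute instances set-scheduling vm \
  --zone=us-central1-a \
  --maintenance-policy=MIGRATE
```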
Hi, maybe the scheduling info keeps updating because the definition of node_affinities:

```go
"node_affinities": {
	Type:             schema.TypeSet,
	Optional:         true,
	ForceNew:         true,
	Elem:             instanceSchedulingNodeAffinitiesElemSchema(),
	DiffSuppressFunc: emptyOrDefaultStringSuppress(""),
},
```

does not have a Set function like the helpers do.
I think that looks how it should. If I had to guess offhand, I'd say that the diff squashing behavior broke between 0.11 and 0.12 and we hadn't noticed yet, but I'll keep investigating and see if I can track it down.
@paddycarver ,
Did you find the problem? Any clue when we might have a fix?
@eddytrex @fstutz-pp I am able to repro the issue with the provider v2.11.0, but not with v3.16.0. Looks like this issue has been fixed. Could you please upgrade your provider and see if that helps? I am closing this issue based on the testing result. Please feel free to reopen it if you still see the issue with new versions of the provider. Thanks
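For anyone landing here, the upgrade amounts to bumping the provider version constraint in the config from the original report (everything except the `version` line is unchanged):

```hcl
provider "google" {
  credentials = file(var.account_file)
  project     = "light-reality-248113"
  region      = "us-central1"
  # v2.11.0 reproduces the error; v3.16.0 was reported above not to.
  version     = "~> 3.16.0"
}
```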