Hi there,
Thank you for opening an issue. Please note that we try to keep the Terraform issue tracker reserved for bug reports and feature requests. For general usage questions, please see: https://www.terraform.io/community.html.
$ terraform -v
Terraform v0.11.1
+ provider.google v1.4.0
+ provider.local v1.0.0
Please list the resources as a list, for example:
resource "google_container_cluster" "bolcom-sbf3pp" {
  name               = "bolcom-sbf3pp"
  zone               = "europe-west1-c"
  project            = "bolcom-frieps"
  min_master_version = "1.8.4-gke.0"
  enable_legacy_abac = false
  network            = "sbf3pp-network"
  subnetwork         = "bolcom-sbf3pp-subnetwork"

  master_auth {
    username = ""
    password = ""
  }

  addons_config {
    http_load_balancing {
      disabled = false
    }

    horizontal_pod_autoscaling {
      disabled = true
    }

    kubernetes_dashboard {
      disabled = true
    }
  }

  maintenance_policy {
    daily_maintenance_window {
      start_time = "02:00"
    }
  }

  ip_allocation_policy {
    cluster_secondary_range_name  = "bolcom-sbf3pp-pods"
    services_secondary_range_name = "bolcom-sbf3pp-services"
  }

  master_authorized_networks_config {
    cidr_blocks = [
      { cidr_block = "91.195.1.40/30", display_name = "office" },
      { cidr_block = "185.14.171.68/32", display_name = "shd-gcp-jump-001" },
    ]
  }

  lifecycle {
    ignore_changes = ["node_count"]
  }

  node_pool {
    name       = "default-pool"
    node_count = 1

    autoscaling {
      min_node_count = 1
      max_node_count = 3
    }

    node_config {
      preemptible  = false
      disk_size_gb = 100
      machine_type = "n1-standard-1"

      oauth_scopes = [
        "https://www.googleapis.com/auth/compute",
        "https://www.googleapis.com/auth/devstorage.read_only",
        "https://www.googleapis.com/auth/logging.write",
        "https://www.googleapis.com/auth/monitoring",
      ]

      tags = ["bolcom-sbf3pp-default-pool"]
    }
  }
}
See https://gist.github.com/wmuizelaar/b32116014ac72767b9dd26e6e585b5e1
Nothing should have happened, since the node_count change should be ignored due to the lifecycle configuration.
The node_count was changed, and therefore the current node count will be decreased to 1 again.
Run terraform apply, then terraform apply again. I would think that initial_node_count could be used to 'fix' this issue, but that option is deprecated. So maybe I'm doing things wrong, but if I create a cluster without the node_count parameter, the cluster size gets set to '0', and auto-scaling doesn't scale it up to '1'.
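One thing that may explain the diff (a sketch, not a confirmed fix: it assumes the 0.11 flattened-attribute path syntax applies to nested node_pool blocks): node_count lives inside the node_pool block, so the bare "node_count" entry in ignore_changes may not match it. Referencing the full attribute path could look like:

```hcl
resource "google_container_cluster" "bolcom-sbf3pp" {
  # ... other arguments as in the config above ...

  lifecycle {
    # Assumption: target the flattened path of the first node pool's
    # node_count, rather than the bare attribute name.
    ignore_changes = ["node_pool.0.node_count"]
  }
}
```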
Confirmation that using 'initial_node_count' fixes the issue, but since that one is deprecated, I would like some advice on how to set up this config :-)
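For reference, a minimal sketch of the workaround described above (assuming the deprecated initial_node_count argument is still accepted by provider.google v1.4.0):

```hcl
node_pool {
  name = "default-pool"

  # Deprecated, but used only at creation time, so later changes to the
  # live node count made by the autoscaler don't show up as a plan diff.
  initial_node_count = 1

  autoscaling {
    min_node_count = 1
    max_node_count = 3
  }
}
```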
Question: if you set autoscaling but not node_count, what would you expect to happen? It seems like we have two options:
I like the second one so that users don't have to worry about whether they set initial_node_count or node_count for their node pools (in case they add autoscaling later, for example), but I also only want to do that if it doesn't lead to unexpected behavior.
I like the second one best as well.
I'm going to lock this issue because it has been closed for _30 days_ ⏳. This helps our maintainers find and focus on the active issues.
If you feel this issue should be reopened, we encourage creating a new issue linking back to this one for added context. If you feel I made an error 🤖 🙉 , please reach out to my human friends 👉 [email protected]. Thanks!