When running the following command against my aks cluster:
az aks update \
--resource-group Development \
--name Test \
--enable-cluster-autoscaler \
--min-count 1 \
--max-count 3
I get the following error message:
> Operation failed with status: 'Bad Request'. Details: AgentPool 'agentpool' has set auto scaling as enabled but is not on Virtual Machine Scale Sets, this is not allowed
I am wondering what this means: not allowed on Virtual Machine Scale Sets? When creating the cluster I used the portal and just provisioned my VMs directly from the "Scale" tab. There was no option to enable/disable VM Scale Sets.
Thanks for reaching out. We are currently investigating and will update you shortly.
As noted in the opening section:
> AKS clusters that support the cluster autoscaler must use virtual machine scale sets and run Kubernetes version 1.12.4 or later. This scale set support is in preview. To opt in and create clusters that use scale sets, install the aks-preview Azure CLI extension.
Creating an AKS cluster through the portal doesn't meet these requirements. You must install the aks-preview
extension and create the cluster through the CLI. You can do this from the Cloud Shell in the portal if you prefer.
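For example, reusing the resource group and cluster name from the original question (a minimal sketch; while scale set support is in preview, the exact flags may vary with the aks-preview extension version):
# install or update the aks-preview CLI extension that adds scale set support
az extension add --name aks-preview
# create a new cluster on virtual machine scale sets with the autoscaler enabled
az aks create \
--resource-group Development \
--name Test \
--node-count 1 \
--enable-vmss \
--enable-cluster-autoscaler \
--min-count 1 \
--max-count 3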
As also noted in the article:
> The cluster autoscaler is a Kubernetes component. Although the AKS cluster uses a virtual machine scale set for the nodes, don't manually enable or edit settings for scale set autoscale in the Azure portal or using the Azure CLI. Let the Kubernetes cluster autoscaler manage the required scale settings.
What you're currently seeing on a regular AKS cluster without scale set support is manual scaling of the node count. When you create a cluster that uses scale sets, you don't use any of the scale options in the portal; let the cluster autoscaler manage the scale settings from within the cluster itself.
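If you later want to change the node count range the autoscaler works within, do that through the AKS commands rather than the scale set itself, for example (names as above):
# adjust the autoscaler's node range on a cluster that already has it enabled
az aks update \
--resource-group Development \
--name Test \
--update-cluster-autoscaler \
--min-count 1 \
--max-count 5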
@Karishma-Tiwari-MSFT #please-close
@JoshLefebvre We will now close this issue. If there are further questions regarding this matter, please tag me in a comment. I will reopen it and we will gladly continue the discussion.
This should not be closed. I installed the aks-preview extension and followed this guide step by step: https://docs.microsoft.com/en-us/azure/aks/cluster-autoscaler, however, I get exactly the same message:
"Operation failed with status: 'Bad Request'. Details: AgentPool 'agentpool' has set auto scaling as enabled but is not on Virtual Machine Scale Sets, this is not allowed."
What else can I try?
@diegotrujillor We are looking into your issue and will update you shortly.
Thanks, I was able to use the autoscaler once I deleted the cluster and recreated it, so you can skip looking into the issue.
Any chance in the future of enabling the cluster autoscaler on existing clusters that were created without --enable-vmss from the beginning? Or perhaps an update that accepts --enable-vmss, so that --enable-cluster-autoscaler wouldn't require deleting the cluster at all?
@diegotrujillor You can directly use the Kubernetes cluster autoscaler for clusters that aren't using VMSS - https://github.com/kubernetes/autoscaler/tree/master/cluster-autoscaler/cloudprovider/azure
The cluster autoscaler behavior built into AKS is unlikely to get a migration path to VMSS, as the underlying infrastructure components are quite different. Rather than individual VM resources for each Kubernetes node, a virtual machine scale set resource controls the create and delete operations of the VM nodes and handles the required network connections.
This should not be closed, as I am facing the same issue.
az aks update --resource-group prod-rg --name prod-aks-cluster --enable-cluster-autoscaler --min-count 3 --max-count 4
AKS cluster version: 1.13.12, Azure CLI version: 2.0.77
Operation failed with status: 'Bad Request'. Details: AgentPool 'agentpool' has set auto scaling as enabled but is not on Virtual Machine Scale Sets, this is not allowed. Please see https://aka.ms/aks-vmss-enablement for more details.
yes, please reopen this
az aks nodepool add \
--resource-group WMS \
--cluster-name aks2 \
--name aks2pool2 \
--node-count 3 \
--kubernetes-version 1.14.7 \
--max-pods 30
Operation failed with status: 'Bad Request'. Details: AgentPool APIs supported only for clusters with VMSS agentpools. For more information, please check https://aka.ms/multiagentpoollimitations
@sumanentc @stel-lest How was your AKS cluster created, using VMSS? Please let me know.
If it is not, please refer to this document on troubleshooting: https://docs.microsoft.com/en-us/azure/aks/troubleshooting#im-receiving-errors-trying-to-use-features-that-require-virtual-machine-scale-sets
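You can check whether an existing cluster's node pools are on scale sets with a query along these lines (resource group and cluster name taken from the earlier command):
# "VirtualMachineScaleSets" means the pool is on scale sets; "AvailabilitySet" means individual VMs
az aks show \
--resource-group prod-rg \
--name prod-aks-cluster \
--query "agentPoolProfiles[].type" \
--output tsv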
Same issue here when I try to increase the node count from 3 to 4 using Terraform.
Terraform output:
Error: Error updating Default Node Pool "svl-development" (Resource Group "svl-development-rg"): containerservice.AgentPoolsClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="AgentPoolAPIsNotSupported" Message="AgentPool APIs supported only for clusters with VMSS agentpools. For more information, please check https://aka.ms/multiagentpoollimitations
variable "agent_pools" {
type = list(object({
name = string
node_count = number
vm_size = string
type = string
os_disk_size_gb = number
}))
default = [
{
name = "pool1"
node_count = 3
vm_size = "Standard_D4_v3"
type = "AvailabilitySet"
os_disk_size_gb = 30
}
]
}
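The type = "AvailabilitySet" entry in that variable appears to be what triggers the error; the AgentPool APIs require scale set based pools. A rough sketch of the change (note that, as with the CLI reports above, switching the pool type generally means recreating the cluster or node pool rather than updating it in place):
default = [
  {
    name            = "pool1"
    node_count      = 3
    vm_size         = "Standard_D4_v3"
    type            = "VirtualMachineScaleSets" # was "AvailabilitySet"
    os_disk_size_gb = 30
  }
]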