AKS support for RBAC and Azure AD integration is now available in the Azure CLI, but the `azurerm_kubernetes_cluster` resource does not expose the same features.
An example of using the Azure CLI to stand up a simple AKS cluster with RBAC and Azure AD integration would be:
```shell
az aks create --resource-group "$name" --name "$name" \
  --no-ssh-key \
  --enable-rbac \
  --aad-server-app-id "$aad_server_app_id" \
  --aad-server-app-secret "$aad_server_app_secret" \
  --aad-client-app-id "$aad_client_app_id" \
  --aad-tenant-id "$aad_tenant_id" \
  --service-principal "$service_principal" \
  --client-secret "$client_secret"
```
Of these options, only the service principal/secret configuration is available in the current resource schema.
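For reference, a minimal sketch of what the resource supports today (names and values are illustrative), where the `service_principal` block is the only part of the CLI example above with a Terraform equivalent:

```hcl
resource "azurerm_kubernetes_cluster" "example" {
  name                = "example-aks"
  location            = "${azurerm_resource_group.example.location}"
  resource_group_name = "${azurerm_resource_group.example.name}"
  dns_prefix          = "exampleaks"

  # linux_profile and agent_pool_profile omitted for brevity.

  # The only overlap with the CLI example above; there is currently
  # no equivalent for the --aad-* or --enable-rbac flags.
  service_principal {
    client_id     = "00000000-0000-0000-0000-000000000000"
    client_secret = "..."
  }
}
```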
Dependent on #1389
:+1:
I'd like to help/work on this issue. Could this be a potential updated resource configuration? The assumption would be that the user creates a server and client application with the necessary permissions prior to applying the Terraform plan.
```hcl
resource "azurerm_kubernetes_cluster" "test" {
  name                = "testaks"
  location            = "${azurerm_resource_group.testrg.location}"
  resource_group_name = "${azurerm_resource_group.testrg.name}"
  dns_prefix          = "mytestaks23"

  linux_profile {
    admin_username = "adam.user"

    ssh_key {
      key_data = "..."
    }
  }

  agent_pool_profile {
    ...
  }

  service_principal {
    ...
  }

  azure_ad_rbac {
    server_app_id     = "00000000-0000-0000-0000-000000000000"
    server_app_secret = "000000000000000000000000000000000000000000"
    client_app_id     = "00000000-0000-0000-0000-000000000000"
    tenant_id         = "00000000-0000-0000-0000-000000000000"
  }

  tags {
    Environment = "Development"
  }
}
```
:+1:
@timcurless

> I'd like to help/work on this issue

Awesome, thanks :+1:

> Could this be a potential updated resource configuration
Possibly. The design of the Azure CLI gives the impression there's room for other RBAC providers in the future - so I'd probably suggest this becomes:
```hcl
resource "azurerm_kubernetes_cluster" "test" {
  rbac {
    enabled = true

    azure_ad {
      client_id         = "..."
      server_app_id     = "..."
      server_app_secret = "..."
      tenant_id         = "..."
    }
  }
}
```
What do you think?
@tombuildsstuff Great point. I went back and forth about adding the `enabled = true` logic, but your point about multiple providers makes it a no-brainer. I'll proceed with your example. Thanks!
What about something like the addonProfiles implementation? (#1502)
```hcl
resource "azurerm_kubernetes_cluster" "test" {
  rbac_config {
    provider_name = "azure_ad" # Or just 'name'
    enabled       = true

    config {
      Key = "Value" # Open-ended config block
    }
  }
}
```
This way the resource can support any future implementations of RBAC. `enabled` is not strictly necessary as it's implied, but I left it in the addonProfile implementation.
It does make it slightly more difficult to fail fast, but also means less maintenance in the future :+1: :+1:
I see your point, @lfshr. I need to study the addonProfiles implementation a little more, but I can see the advantages with regards to future upkeep. I also like the idea of approaching this in the same manner for consistency.
Interestingly the docs (link) still show the `--enable-rbac` parameter, while the CLI help notes it is deprecated in favor of `--disable-rbac`, because RBAC is now enabled by default. I am running az CLI v2.0.41, and I don't see a specific version called out for the aks plugin/package.
I think this would suggest, to me at least, the following configs:
1) Disabling RBAC

```hcl
resource "azurerm_kubernetes_cluster" "test" {
  rbac_config { # Or just 'rbac'
    enabled = false
  }
}
```

2) Enabling AAD integration with RBAC

```hcl
resource "azurerm_kubernetes_cluster" "test" {
  rbac_config {
    # enabled = true implied
    provider_name = "azure_ad"

    config {
      Key = "Value"
    }
  }
}
```
@timcurless @lfshr so in general we avoid passing in a key-value pair block, for a few reasons:

- fields in an open-ended map can't be validated up front, which means bad values only surface as errors from the API at apply time rather than during `terraform plan`
- we're unable to set `ForceNew` in the schema where fields can't be modified; changes to those fields will cause the API errors from the previous point to be returned

As such I believe this would be the better approach here:
```hcl
resource "azurerm_kubernetes_cluster" "test" {
  rbac {
    enabled = true

    azure_ad {
      client_id         = "..."
      server_app_id     = "..."
      server_app_secret = "..."
      tenant_id         = "..."
    }
  }
}
```
Thinking about it that approach probably also makes the most sense for #1502 too - what do you think @lfshr?
@tombuildsstuff Yeah, I ended up taking a pretty similar approach; currently I went with the following. My thought was that `enable_rbac` is really a Kubernetes thing, while the `aad_profile` is really an Azure thing: RBAC can be enabled/disabled irrespective of a provider.
```hcl
resource "azurerm_kubernetes_cluster" "test" {
  enable_rbac = true # optional, force new, default true

  aad_profile { # optional, but needs a validation function
    client_id         = "..."
    server_app_id     = "..."
    server_app_secret = "..."
    tenant_id         = "..."
  }
}
```
Right now I'm just working through some of the `kube_config` and `kube_config_raw` logic that changes depending on the use of RBAC.
@tombuildsstuff my only issue with this for #1502 is that the layout of addonProfiles seems to indicate there are going to be many more addonProfile types added in the future. Removing the name and key/value pair config, and including the config types in the schema, will likely be a nightmare to keep up with. The reasoning for doing it this way came down to two things: allowing future addonProfiles to be added by Microsoft without any input from the Terraform side, and the fact that the documentation of the config schemas is fairly poor. Upon reflection I think applying the same model here probably wouldn't be the best option. aadProfile is well documented and has a well-defined schema.
If you look at the ARM reference you'll see what I mean. addonProfiles is just... empty.
https://docs.microsoft.com/en-us/azure/templates/microsoft.containerservice/managedclusters
Ultimately it's your decision though, I'll follow your lead.
@lfshr

> If you look at the ARM reference you'll see what I mean. addonProfiles is just... empty.
Unfortunately most of the API docs are auto-generated at this point, so there's generally not that much more information in them than exists in the SDK (since they both come from the same Swagger).
> @tombuildsstuff my only issue with this for #1502 is that the layout of addonProfiles seems to indicate there are going to be many more addonProfile types added in the future. Removing the name and key/value pair config, and including the config types in the schema, will likely be a nightmare to keep up with. The reasoning for doing it this way came down to two things: allowing future addonProfiles to be added by Microsoft without any input from the Terraform side, and the fact that the documentation of the config schemas is fairly poor. Upon reflection I think applying the same model here probably wouldn't be the best option. aadProfile is well documented and has a well-defined schema.
So I'm not opposed to making it a key-value pair, and I'd agree there's potentially more coming in the future, but I think in this instance it'd be better to go with the strongly typed object (we can always change it in the future if need be), since that allows us to support nested objects where needed (vs a map, which at present requires all values to be of the same type, e.g. string). It also allows us to work around the case of fields not being returned from the API (such as API keys) by using Sets where needed.
There's a tradeoff either way, but I think in this instance we'd have a better UX with a native object, until we have a reason to say otherwise?
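To make the tradeoff concrete, here is a hedged sketch of the two shapes being compared (block and attribute names are illustrative, not an actual schema): a typed block is validated at plan time, can nest structures, and can mix value types, while an open map defers all validation to the API and forces every value through a single type such as string.

```hcl
# Strongly typed: fields are validated at plan time, nesting is
# possible, and values can be non-string (e.g. the bool below).
rbac {
  enabled = true

  azure_ad {
    client_id = "..."
    tenant_id = "..."
  }
}

# Open-ended map: nothing is validated until the API rejects it,
# nesting isn't possible, and all values must share one type.
rbac_config {
  config {
    clientId = "..."
    tenantId = "..."
  }
}
```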
@tombuildsstuff hmm. You make a fair point about nesting. I can't guarantee there will never be a case where a nested parameter is added to the config.
You've swayed me. I'll have a think about how I can sensibly achieve this in #1502.
Enabling RBAC with any provider appears to modify the users section of the kube_config. In other words, with RBAC enabled the following kube_config attributes are no longer present: the client certificate data, client key data, and token (see the EDIT below).
Instead the kube_config users section will contain the Azure AD server/client IDs and server secret. Naturally I expect other RBAC providers will contain further unique data.
How is this typically handled in Terraform with regard to unstructured attributes? For example, the following output will error when RBAC is enabled:
```hcl
output "client_certificate" {
  value = "${azurerm_kubernetes_cluster.aks_container.kube_config.0.client_certificate}"
}
```
By Terraform standards can an Output be allowed to return nil? I can't say I recall ever seeing that.
I'm just trying to lay out potential solutions before actually jumping in and writing code.
Thanks
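One potential stopgap, sketched in 0.11-era syntax (the `rbac_enabled` variable is hypothetical, not part of the provider), is a ternary that falls back to an empty string when RBAC is enabled:

```hcl
variable "rbac_enabled" {
  default = true
}

output "client_certificate" {
  # Returns "" when RBAC is enabled and the API provides no certificate.
  value = "${var.rbac_enabled ? "" : azurerm_kubernetes_cluster.aks_container.kube_config.0.client_certificate}"
}
```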
EDIT: Here are two resultant kube_configs with RBAC disabled and enabled respectively:
```yaml
users:
- name: clusterUser_demo-rg_aks2
  user:
    client-certificate-data: ...
    client-key-data: ...
    token: 1234123412341234123412341234
```

```yaml
users:
- name: clusterUser_demo-rg_aks1
  user:
    auth-provider:
      config:
        apiserver-id: bb...30f
        client-id: ae...cc
        tenant-id: b8...02
      name: azure
```
Hi, any updates on this? Do you know when this issue/feature will be closed (implemented)?
I got a little sidetracked but still have most of this done. I'm just looking for guidance on how to handle selectively allowing outputs for configurations where they will be blank. Is this just something to address in the docs?
In short, how would one handle the following output in an RBAC situation where this variable is empty or nil (with RBAC enabled the Azure API won't return a client cert)?
```hcl
output "client_certificate" {
  value = "${azurerm_kubernetes_cluster.aks_container.kube_config.0.client_certificate}"
}
```
@tombuildsstuff Do you know the answer to this?
@timcurless in this case we should be able to set these values to an empty string, which is our default case when there's no value, and should allow this to work as expected :)
@timcurless, any updates on this? Thanks for looking into this!
Any update? I'd also like this included in my Terraform pipeline.
Is there a branch for this?
Sorry for the delay. I worked out most of the rest of this this morning but haven't been able to test. Hoping to do that tonight. I can submit a PR for what I have.
:+1:
Support for Role Based Access Control has been added by @timcurless in #1820, which has been merged and will form part of the v1.19 release of the AzureRM Provider. Thanks again for this contribution @timcurless - apologies this took a little while for us to get to!
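For anyone landing here later, a rough sketch of the configuration shape that shipped, as far as I can tell (attribute names may differ slightly from the released schema):

```hcl
resource "azurerm_kubernetes_cluster" "example" {
  # name, location, dns_prefix, linux_profile, agent_pool_profile,
  # and service_principal omitted for brevity.

  role_based_access_control {
    azure_active_directory {
      client_app_id     = "00000000-0000-0000-0000-000000000000"
      server_app_id     = "00000000-0000-0000-0000-000000000000"
      server_app_secret = "..."
      tenant_id         = "00000000-0000-0000-0000-000000000000"
    }
  }
}
```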
Thanks!
Yes! Thanks everyone for the effort, this will make a big difference with a number of customers!
Thanks for all the work to verify, test, and merge from everyone here! This is my first TF contribution and I'm hoping for many more!
Thanks for the effort @timcurless! Is it possible to enable RBAC but without AD integration?
Looks like it is addressed in https://github.com/terraform-providers/terraform-provider-azurerm/pull/2347
Is there any way of outputting the clusterAdmin kubeconfig? The attributes exported appear to be for clusterUser.
At the moment it seems one must run `az aks get-credentials --admin` in order to retrieve the clusterAdmin credentials.
My use case is to create role bindings via `kubernetes_cluster_role_binding` in the kubernetes provider.
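For context, the wiring that use case needs looks something like the sketch below (resource names are illustrative); the catch is that `kube_config` exports the clusterUser credentials, which on an RBAC cluster generally lack the rights to create cluster-wide role bindings:

```hcl
provider "kubernetes" {
  host = "${azurerm_kubernetes_cluster.example.kube_config.0.host}"

  # These are the clusterUser credentials; for cluster role bindings
  # they would need to carry clusterAdmin rights instead.
  client_certificate     = "${base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_certificate)}"
  client_key             = "${base64decode(azurerm_kubernetes_cluster.example.kube_config.0.client_key)}"
  cluster_ca_certificate = "${base64decode(azurerm_kubernetes_cluster.example.kube_config.0.cluster_ca_certificate)}"
}

resource "kubernetes_cluster_role_binding" "example" {
  metadata {
    name = "example-admins"
  }

  role_ref {
    api_group = "rbac.authorization.k8s.io"
    kind      = "ClusterRole"
    name      = "cluster-admin"
  }

  subject {
    kind      = "Group"
    name      = "00000000-0000-0000-0000-000000000000" # AAD group object ID (illustrative)
    api_group = "rbac.authorization.k8s.io"
  }
}
```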
@haodeon I'm facing the same issue you are, for the same use case.
I was just looking at the code and it appears that swapping this hardcoded variable might do the trick:
https://github.com/terraform-providers/terraform-provider-azurerm/blob/master/azurerm/resource_arm_kubernetes_cluster.go#L518
I just ran a test config I had here with the "patched" plugin and successfully got the clusterAdmin kubeconfig :) Maybe they can parameterize that value in a future version.
In the meantime, you can patch it manually and build the provider yourself.
@Jack64 Thanks for the info and letting me know of your test result.
I opened a feature request a few days ago (#2421) to export clusterAdmin. In the issue someone also proposed local-exec as a workaround, as sketched below.
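For reference, that local-exec workaround might look something like this (a rough sketch, assuming the az CLI is installed and authenticated on the machine running Terraform; resource names are illustrative):

```hcl
resource "null_resource" "admin_credentials" {
  # Re-run whenever the cluster is recreated.
  triggers {
    cluster_id = "${azurerm_kubernetes_cluster.example.id}"
  }

  provisioner "local-exec" {
    command = "az aks get-credentials --resource-group ${azurerm_kubernetes_cluster.example.resource_group_name} --name ${azurerm_kubernetes_cluster.example.name} --admin"
  }
}
```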
I think exporting clusterAdmin separately is the best long-term solution and hope the feature gets implemented.
Since support for RBAC within Kubernetes Clusters has shipped, and there are new issues open tracking requests for new functionality in the Kubernetes Cluster resource, rather than adding additional comments here (since this issue is resolved) I'm going to lock this issue for the moment - please open a new issue if you're looking for support for any new functionality :)