Is your feature request related to a problem? Please describe.
Currently, managed identity is supported for Storage and Cosmos DB, but the Node.js API doesn't seem to support it yet. I am only able to create the client objects by passing the secret keys, as mentioned in the Cosmos DB and storage-queue docs.
For queue, I was able to use the access token that I got from Managed Identity authentication by using TokenCredential like this:
```
// Acquire a token via the VM's managed identity for the storage resource.
const creds = await msRestNodeAuth.loginWithVmMSI({ resource: 'https://storage.azure.com/' });
const token = await creds.getToken();
// Pass the raw access token string into the storage-queue pipeline.
const tokenCred = new storageQueue.TokenCredential(token.accessToken);
const pipeline = storageQueue.StorageURL.newPipeline(tokenCred);
```
(But with this workaround we may need to manage the token expiration ourselves.)
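To illustrate what "managing the token expiration" would mean in practice, here is a minimal, hypothetical helper (not part of any SDK) that decides whether a cached MSI token is still safe to reuse before building a new `TokenCredential`. The function name and the skew buffer are assumptions for illustration only.

```javascript
// Hypothetical helper: is a cached token still usable, given its expiry
// timestamp (in ms since epoch)? A clock-skew buffer makes the token
// count as expired slightly early, so in-flight requests don't fail
// right at the boundary.
function isTokenUsable(expiresOnMs, skewMs = 2 * 60 * 1000) {
  return Date.now() + skewMs < expiresOnMs;
}

// Example: a token valid for another 10 minutes is usable;
// one valid for only 1 minute is not (with the default 2-minute buffer).
const tenMinutesOut = Date.now() + 10 * 60 * 1000;
const oneMinuteOut = Date.now() + 1 * 60 * 1000;
isTokenUsable(tenMinutesOut); // true
isTokenUsable(oneMinuteOut);  // false
```

The caller would check this before each request and only call `creds.getToken()` again when it returns `false`.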
For cosmos:
It looks like Cosmos DB doesn't natively support AD auth yet (https://docs.microsoft.com/en-us/azure/active-directory/managed-identities-azure-resources/tutorial-windows-vm-access-cosmos-db). The doc just says we can get the master/secondary key and use that for authentication.
@XiaoningLiu, @vinjiang from the storage team, can you please take a look at this issue?
@southpolesteve, @christopheranderson from the cosmos team, can you please take a look at this issue?
@devaradhanm I believe TokenCredential is the way the storage-queue JS SDK works with Managed Identity authentication. It's by design to let SDK users control the expiry of tokens. We had a long discussion about implementing an auto-refresh inside the storage SDK, but finally gave up, because we don't want to cause customers unexpected memory leaks: there is no good weak-reference mechanism in JavaScript.
@XiaoningLiu, shouldn't this behavior be consistent across other Azure services? For example, to create a KeyVaultClient, I don't manage the credential token expiration, since I pass the credentials object:
```
import { KeyVaultClient } from '@azure/keyvault';
import * as msRestNodeAuth from '@azure/ms-rest-nodeauth';

const creds = await msRestNodeAuth.loginWithVmMSI({ resource: 'https://vault.azure.net' });
const client = new KeyVaultClient(creds);
```
In the above code, the credentials object takes care of refreshing and caching the token. KeyVaultClient just seems to get the token from the credentials object every time it makes a REST call.
Thanks for bringing this up. Currently, all storage SDKs use a similar design. We will investigate how to improve.
This behavior is expected for Cosmos. We currently do not support managed identity via the SDK. You'll need to pass the Cosmos specific access keys.
Thanks @southpolesteve. Is there any ETA for this support in Cosmos DB?
@XiaoningLiu, is there any update on this? I would like to know how this memory leak (if it exists) is solved in other services that support a credentials object (Batch, Key Vault, etc.). Can you provide more information about the memory leak you were facing? Can you also look at https://github.com/Azure/azure-sdk-for-js/tree/master/sdk/keyvault/keyvault to see if they have the same memory leak issue?
@devaradhanm no ETA at this time. If you need to grab the keys programmatically, you can do that via the ARM APIs https://docs.microsoft.com/en-us/rest/api/cosmos-db-resource-provider/databaseaccounts/listkeys
@devaradhanm
The KeyVault sample you mentioned, leveraging ms-rest-nodeauth, tries to get a new token before every request. This is not acceptable for storage high-throughput scenarios. See this:
https://github.com/Azure/ms-rest-nodeauth/blob/cd6b21b33066b01885dd5d9db4e05aa9d0825e9e/lib/credentials/msiTokenCredentials.ts#L134
Storage parallel upload/download are high-IOPS operations, and every request needs to be signed with a token. The best way to refresh the token is to set a timer instead of getting a new token every time. When we designed the storage SDK, our first thought was to let the storage SDK manage a timer that refreshes the token periodically. However, in that design, the resources referenced by the timer cannot be GCed without manually stopping the timer. We gave up the timer design in the end. The memory leak topic I referred to is about this timer.
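The rejected timer design can be sketched in a few lines. This is a hypothetical illustration (not SDK code) of the leak concern: the interval callback closes over `this`, so the credential object stays reachable forever unless the caller remembers to call `stop()`.

```javascript
// Sketch of the timer-based refresh design discussed above (hypothetical).
// The interval callback keeps `this` reachable, so the object can never be
// garbage-collected unless stop() is called manually -- the leak concern.
class TimerRefreshingCredential {
  constructor(fetchToken, intervalMs) {
    this.fetchToken = fetchToken;     // () => string, supplied by the caller
    this.token = fetchToken();        // fetch an initial token eagerly
    this.timer = setInterval(() => {
      this.token = this.fetchToken(); // periodic background refresh
    }, intervalMs);
    this.timer.unref();               // Node.js: don't keep the process alive
  }
  stop() {
    clearInterval(this.timer);        // must be called to allow GC
  }
}

// Usage sketch with a fake token source:
let fetches = 0;
const cred = new TimerRefreshingCredential(() => "token-" + (++fetches), 60000);
cred.token; // "token-1"
cred.stop();
```

Because JavaScript has no reliable weak-reference/finalizer mechanism the SDK could use here, forgetting `stop()` silently pins the object, which is why the design was dropped in favor of letting users own the token lifetime.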
@XiaoningLiu, even though they call the getToken API, internally it caches the token and only gets a new one whenever the token has expired. Check this for example - https://github.com/Azure/ms-rest-nodeauth/blob/cd6b21b33066b01885dd5d9db4e05aa9d0825e9e/lib/credentials/applicationTokenCredentials.ts#L47
But I also see that this is not the case for all the credential types. For example, https://github.com/Azure/ms-rest-nodeauth/blob/cd6b21b33066b01885dd5d9db4e05aa9d0825e9e/lib/credentials/msiVmTokenCredentials.ts#L99 doesn't seem to cache the token. Shouldn't this be considered a bug in that code, with caching supported just like in applicationTokenCredentials? If so, then storage could support taking a credentials object and leave the responsibility of caching the token within the credentials object, right?
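The cache-on-getToken pattern being described, where the credential refreshes lazily only when the cached token is near expiry rather than on every call or on a timer, can be sketched like this. The class and parameter names are hypothetical, used here only to illustrate the pattern.

```javascript
// Hypothetical sketch of the lazy-caching pattern: refresh only when the
// cached token is near expiry. No timer exists, so nothing pins the object
// and there is no leak concern.
class CachingTokenProvider {
  constructor(fetchToken, skewMs = 2 * 60 * 1000) {
    this.fetchToken = fetchToken; // () => { accessToken, expiresOnMs }
    this.skewMs = skewMs;         // refresh slightly before actual expiry
    this.cached = null;
  }
  getToken() {
    const now = Date.now();
    if (!this.cached || now + this.skewMs >= this.cached.expiresOnMs) {
      this.cached = this.fetchToken(); // refresh lazily, on demand
    }
    return this.cached.accessToken;
  }
}

// Usage: the underlying fetcher is hit once; later calls reuse the cache.
let fetches = 0;
const provider = new CachingTokenProvider(() => ({
  accessToken: "token-" + (++fetches),
  expiresOnMs: Date.now() + 60 * 60 * 1000, // valid for an hour
}));
provider.getToken(); // "token-1" (fetched)
provider.getToken(); // "token-1" (served from cache, no second fetch)
```

This is the shape applicationTokenCredentials follows; the point of the comment above is that msiVmTokenCredentials could do the same.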
I took a look at the ms-rest-nodeauth package to see which credentials currently support caching and which don't.
Caching is currently supported for these credentials:
Credentials that do not have caching:
@devaradhanm Yes, token should be cached.
I understand this inconvenience, but I don't have a perfect answer for this.
We can say "ms-rest-nodeauth doesn't provide credentials compatible with the storage SDK", or "the storage SDK isn't compatible with ms-rest-nodeauth". The credentials in these two SDKs have different meanings and designs.
The good news is that both ms-rest-nodeauth and the storage SDK depend on @azure/ms-rest-js and share the same Pipeline definition. A Pipeline includes an array of RequestPolicyFactory, while the credentials in ms-rest-nodeauth implement ServiceClientCredentials. Before sending out a request, @azure/ms-rest-js wraps ServiceClientCredentials into a RequestPolicyFactory.
My preferred way is for the ms-rest-nodeauth library to wrap these credentials into a RequestPolicyFactory instead of ServiceClientCredentials, or to provide a helper method for the wrapping, just like the sample code. Then these credentials could be injected into the storage SDK pipeline array, or used as a storage SDK credential, very easily.
Or developers could also do the wrapping manually.
Then you may ask: can the storage SDK accept credentials from ms-rest-nodeauth, or do the wrapping itself? Technically it's possible, but I do have concerns: the token types returned by ms-rest-nodeauth may vary, and some of them may not be supported by the storage service yet; and the storage SDK has its own TokenCredential design, which accepts a token string. That's the existing boundary.
Thanks for explaining, @XiaoningLiu. We can add the workaround on our end. But just to make it clear: do you suggest that everyone who consumes the storage package write this wrapper? Do you have this in your backlog, with plans to add this support so it is more aligned with other Azure library package usage?
@devaradhanm I put it into our backlog for more discussion, but currently I don't have an accurate date. Before that, you can do the wrapping yourself easily, like this (JavaScript code):
```
const Azure = require("@azure/storage-blob");

// Wraps a ms-rest-nodeauth credential into a storage SDK credential.
class CustomizedCredential extends Azure.Credential {
  constructor(authenticationProvider) {
    super();
    this.authenticationProvider = authenticationProvider;
  }
  create(nextPolicy, options) {
    return new CustomizedCredentialPolicy(nextPolicy, options, this.authenticationProvider);
  }
}

class CustomizedCredentialPolicy extends Azure.CredentialPolicy {
  constructor(nextPolicy, options, authenticationProvider) {
    super(nextPolicy, options);
    this.authenticationProvider = authenticationProvider;
  }
  signRequest(request) {
    // Delegate request signing to the ms-rest-nodeauth credential.
    return this.authenticationProvider.signRequest(request);
  }
}

const creds = await msRestNodeAuth.loginWithVmMSI({ resource: 'https://storage.azure.com/' });
const pipeline = Azure.StorageURL.newPipeline(new CustomizedCredential(creds));
const serviceURL = new Azure.ServiceURL(
  "https://account.blob.core.windows.net",
  pipeline
);
```
I didn't test the above sample code, but it should work for the wrapping. It wraps a credential provided by ms-rest-nodeauth into a storage SDK credential.
Thanks @XiaoningLiu
@XiaoningLiu Thank you for providing the workaround! Could you please share the work item link? Then we can close this issue. Thanks!
The released v12.0.0-preview.1 already integrates authentication with the @azure/identity package; refer to this sample.
@XiaoningLiu Thank you for the information! Closing the issue. Please reopen if there are more questions.