Describe the bug
My main complaint about managed disks (MD) and storage accounts seems to have already been solved by Azure, but it isn't documented. Further, after some time, I still can't piece together whether it's actually possible with the CLI.
az disk create ... --for-upload true ... returns a SAS URL that looks like this:
https://md-impexp-t4nfwj1qcc3j.blob.core.windows.net/pk5qjbl3lnhn/abcd?sv=2017-04-17&sr=b&si=1d2b08af-d097-4d3f-b468-a7d4c91c7d04&sig=%2FjClxzQFf3O1ulyHn3xR1287XPdweHLnJKdYfj3ErZI%3D
Does the CLI support using that SAS URL directly with az storage blob upload or az storage blob copy start? If not, I don't see how this scenario can actually be completed with the CLI.
If I try to decompose the URL into its parts, here's what happens:
[azureuser@xxnixos-22711:~/azure/v3]$ strg=md-impexp-t4nfwj1qcc3j
[azureuser@xxnixos-22711:~/azure/v3]$ cntr=pk5qjbl3lnhn
[azureuser@xxnixos-22711:~/azure/v3]$ blob=abcd
[azureuser@xxnixos-22711:~/azure/v3]$ sastoken=%2FjClxzQFf3O1ulyHn3xR1287XPdweHLnJKdYfj3ErZI%3D
[azureuser@xxnixos-22711:~/azure/v3]$ ./az.sh storage blob upload \
> --account-name "${strg}" \
> --container-name "${cntr}" \
> --name "$blob" \
> --sas-token "$sastoken" \
> --file "$f"
ERROR:
You do not have the required permissions needed to perform this operation.
Depending on your operation, you may need to be assigned one of the following roles:
"Storage Blob Data Contributor (Preview)"
"Storage Blob Data Reader (Preview)"
"Storage Queue Data Contributor (Preview)"
"Storage Queue Data Reader (Preview)"
If you want to use the old authentication method and allow querying for the right account key, please use the "--auth-mode" parameter and "key" value.
ERROR: The command failed with an unexpected error. Here is the traceback:
ERROR: 'CommandResultItem' object is not iterable
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/knack/cli.py", line 212, in invoke
self.output.out(cmd_result, formatter=formatter, out_file=out_file)
File "/usr/local/lib/python3.6/site-packages/knack/output.py", line 132, in out
output = formatter(obj)
File "/usr/local/lib/python3.6/site-packages/knack/output.py", line 38, in format_json
input_dict = dict(result) if hasattr(result, '__dict__') else result
TypeError: 'CommandResultItem' object is not iterable
WARNING:
To open an issue, please run: 'az feedback'
Extra note: I tried with --auth-mode key and it failed in the same way.
And trying to test with curl is proving extremely frustrating. The docs for REST storage are near useless: https://docs.microsoft.com/en-us/rest/api/storageservices/put-page Look at the links for those headers and tell me how anyone is supposed to know what to do. The page for Auth gives zero examples, not even acceptable values. The page for versioning gives no examples, only lists the latest version, and completely muddles "api-version", "x-ms-version", and the role of SAS tokens and sv query param values. Do you actually expect people to be able to use Storage via REST after reading those docs?
So far all I've managed to get through REST is "ApiNotSupportedForAccount" and "AccountIsDisabled", but it's unlikely that I've actually managed to craft a proper request, so who knows.
sasurl="https://md-impexp-t4nfwj1qcc3j.blob.core.windows.net/pk5qjbl3lnhn/abcd?sv=2017-04-17&sr=b&si=1d2b08af-d097-4d3f-b468-a7d4c91c7d04&sig=%2FjClxzQFf3O1ulyHn3xR1287XPdweHLnJKdYfj3ErZI%3D"
echo "lolazure" > /tmp/test.log
curl -v \
-X PUT \
-H "Content-Type: application/octet-stream" \
-H "x-ms-date: $(date -Ru | sed 's/\+0000/GMT/')" \
-H "x-ms-version: 2018-03-28" \
-H "x-ms-blob-type: BlockBlob" \
--data-binary "@/tmp/test.log" \
"${sasurl}"
results in:
< HTTP/1.1 400 This API is not supported for the account
< Content-Length: 236
< Content-Type: application/xml
< Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
< x-ms-error-code: ApiNotSupportedForAccount
< x-ms-request-id: f4f836f4-d01e-00ef-78f6-4da601000000
< x-ms-version: 2018-03-28
< Date: Thu, 08 Aug 2019 14:38:29 GMT
<
<?xml version="1.0" encoding="utf-8"?>
<Error><Code>ApiNotSupportedForAccount</Code><Message>This API is not supported for the account
RequestId:f4f836f4-d01e-00ef-78f6-4da601000000
* Connection #0 to host md-impexp-t4nfwj1qcc3j.blob.core.windows.net left intact
Time:2019-08-08T14:38:30.0489889Z</Message></Error>
Maybe I've completely misunderstood the point of az disk create --for-upload?
First of all, your SAS token should be sv=2017-04-17&sr=b&si=1d2b08af-d097-4d3f-b468-a7d4c91c7d04&sig=%2FjClxzQFf3O1ulyHn3xR1287XPdweHLnJKdYfj3ErZI%3D, which is the string after the ?. Can you try again with that SAS token?
Also, we will support copying with a SAS URL in the next release, which ships next Tuesday. You can try Azure CLI 2.0.71 then; hopefully it helps.
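In other words, the pieces that the CLI flags expect can be cut out of the SAS URL with plain shell parameter expansion. A minimal sketch (the sig value here is a redacted placeholder; the URL shape is assumed to match what grant-access returns):

```shell
# A SAS URL of the shape grant-access returns (sig redacted/fake):
sasurl='https://md-impexp-t4nfwj1qcc3j.blob.core.windows.net/pk5qjbl3lnhn/abcd?sv=2017-04-17&sr=b&sig=%2Fabc%3D'

rest="${sasurl#https://}"      # drop the scheme
account="${rest%%.*}"          # host label before the first dot
path_query="${rest#*/}"        # container/blob?sv=...
sastoken="${path_query#*\?}"   # everything after the first '?'
path="${path_query%%\?*}"      # container/blob
container="${path%%/*}"
blob="${path#*/}"

echo "$account"    # md-impexp-t4nfwj1qcc3j
echo "$container"  # pk5qjbl3lnhn
echo "$blob"       # abcd
echo "$sastoken"   # sv=2017-04-17&sr=b&sig=%2Fabc%3D
```

These then slot into --account-name/--container-name/--name/--sas-token as in the commands above.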
@Juliehzl Oh wow, just clicked on this to see if there was movement since you were assigned and then your reply popped in. Thanks for the fast response, I'll test ASAP and proceed accordingly! :)
@Juliehzl same issue: (I got a new SAS token/url as well)
./az.sh disk grant-access \
--resource-group $g \
--name $n \
--access-level Write \
--duration-in-seconds $timeout | jq -r .accessSas
https://md-impexp-t4nfwj1qcc3j.blob.core.windows.net/pk5qjbl3lnhn/abcd?sv=2017-04-17&sr=b&si=1d2b08af-d097-4d3f-b468-a7d4c91c7d04&sig=%2FjClxzQFf3O1ulyHn3xR1287XPdweHLnJKdYfj3ErZI%3D
strg="md-impexp-t4nfwj1qcc3j"
cntr="pk5qjbl3lnhn"
blob="abcd"
sastoken="sv=2017-04-17&sr=b&si=1d2b08af-d097-4d3f-b468-a7d4c91c7d04&sig=%2FjClxzQFf3O1ulyHn3xR1287XPdweHLnJKdYfj3ErZI%3D"
f="/nix/store/25555pmghk80y69g47w837s85yk544w0-azure-image/disk.vhd"
f="$(readlink "${f}")"
./az.sh storage blob upload \
--account-name "${strg}" \
--container-name "${cntr}" \
--name "$blob" \
--sas-token "$sastoken" \
--file "$f"
ERROR: The command failed with an unexpected error. Here is the traceback:
ERROR: This API is not supported for the account ErrorCode: ApiNotSupportedForAccount
<?xml version="1.0" encoding="utf-8"?>
<Error><Code>ApiNotSupportedForAccount</Code><Message>This API is not supported for the account
RequestId:755ea91e-a01e-0087-44ca-4ec091000000
Time:2019-08-09T15:51:39.1067012Z</Message></Error>
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/knack/cli.py", line 206, in invoke
cmd_result = self.invocation.execute(args)
File "/usr/local/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 578, in execute
raise ex
File "/usr/local/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 636, in _run_jobs_serially
results.append(self._run_job(expanded_arg, cmd_copy))
File "/usr/local/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 627, in _run_job
cmd_copy.exception_handler(ex)
File "/usr/local/lib/python3.6/site-packages/azure/cli/command_modules/storage/__init__.py", line 240, in new_handler
handler(ex)
File "/usr/local/lib/python3.6/site-packages/azure/cli/command_modules/storage/__init__.py", line 183, in handler
raise ex
File "/usr/local/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 606, in _run_job
result = cmd_copy(params)
File "/usr/local/lib/python3.6/site-packages/azure/cli/core/commands/__init__.py", line 305, in __call__
return self.handler(*args, **kwargs)
File "/usr/local/lib/python3.6/site-packages/azure/cli/core/__init__.py", line 485, in default_command_handler
return op(**command_args)
File "/usr/local/lib/python3.6/site-packages/azure/cli/command_modules/storage/operations/blob.py", line 360, in upload_blob
return type_func[blob_type]()
File "/usr/local/lib/python3.6/site-packages/azure/cli/command_modules/storage/operations/blob.py", line 353, in upload_block_blob
return client.create_blob_from_path(**create_blob_args)
File "/usr/local/lib/python3.6/site-packages/azure/multiapi/storage/v2018_11_09/blob/pageblobservice.py", line 1013, in create_blob_from_path
premium_page_blob_tier=premium_page_blob_tier)
File "/usr/local/lib/python3.6/site-packages/azure/multiapi/storage/v2018_11_09/blob/pageblobservice.py", line 1118, in create_blob_from_stream
encryption_data=encryption_data
File "/usr/local/lib/python3.6/site-packages/azure/multiapi/storage/v2018_11_09/blob/pageblobservice.py", line 1477, in _create_blob
return self._perform_request(request, _parse_base_properties)
File "/usr/local/lib/python3.6/site-packages/azure/multiapi/storage/v2018_11_09/common/storageclient.py", line 430, in _perform_request
raise ex
File "/usr/local/lib/python3.6/site-packages/azure/multiapi/storage/v2018_11_09/common/storageclient.py", line 358, in _perform_request
raise ex
File "/usr/local/lib/python3.6/site-packages/azure/multiapi/storage/v2018_11_09/common/storageclient.py", line 344, in _perform_request
HTTPError(response.status, response.message, response.headers, response.body))
File "/usr/local/lib/python3.6/site-packages/azure/multiapi/storage/v2018_11_09/common/_error.py", line 115, in _http_error_handler
raise ex
azure.common.AzureHttpError: This API is not supported for the account ErrorCode: ApiNotSupportedForAccount
<?xml version="1.0" encoding="utf-8"?>
<Error><Code>ApiNotSupportedForAccount</Code><Message>This API is not supported for the account
RequestId:755ea91e-a01e-0087-44ca-4ec091000000
Time:2019-08-09T15:51:39.1067012Z</Message></Error>
WARNING:
To open an issue, please run: 'az feedback'
Also, I see the change you're talking about... I really, really don't get why the CLI is going to just assume azcopy is around and available for basic copy calls to Azure Storage. I could just use azcopy myself, no? How many different, rapidly changing CLI tools do I need to replicate a VHD into an MD in Azure? Oh god, it goes out and downloads binaries on the fly and tries to exec them. No, just no. This won't even work on my system (NixOS).
Can someone from MD at least confirm that this is supported and/or has been tested?
@colemickens We have discussed this problem internally. SAS URLs are now supported in the az storage copy command, which is based on azcopy but may not work on your system. We plan to support SAS URLs in the original az storage blob upload soon; I hope we can ship it in the next release.
Yeah but how about all of the rest of this bug?
I just don't get why an entire extra tool is needed for an operation and REST call that is already in the CLI. As far as I can tell, regexing the URL would've been far easier than taking a dependency on an entire extra tool.
Also, the API still doesn't seem to work with an empty MD created for upload...
@zezha-msft Do we have API supporting this?
(Just in case it helps clarify, my StackOverflow question has some more detail, as well as the specific bits of documentation that lead me to believe it should be possible: https://stackoverflow.com/questions/57479279/does-it-work-az-disk-create-for-upload-az-disk-grant-access-access-level) Thanks for looking into this!
Hi @colemickens, please do not expose your SAS on GitHub, as anyone can access that managed disk now. Please revoke the SAS accordingly.
@Juliehzl could you please clarify? The managed disk (blob) is just a page blob.
(Thanks for the guidance, but the SAS token was only valid for an hour (or at least it was supposed to be, based on the arguments I passed to grant-access), so it is long since expired. Anyway, it was just 50GB of zeros anyhow... :smile: I do take care to wait for them to expire, or to scrub the SAS/access tokens, when pasting here and elsewhere.)
@colemickens I've contacted the managed disk PM to shed some light on your scenario.
@colemickens The direct upload to managed disk feature is currently in the beta phase. We haven't even announced it publicly. That is why you are experiencing rough edges in the customer experience. Thanks for trying out the feature by looking at the REST API. We appreciate it!
Now, let's talk about the problem:
az storage blob upload uses a storage API that first creates a blob on the target and then copies the data from source to the target. We don't allow clients to create a blob in md-impexp-* storage accounts, as these are special accounts used only for exporting/importing managed disks. Clients can only call Put Blob on the blobs pre-created by the system. That is why you are getting the error message.
We are planning to fix this problem soon by using the Put Blob API. @Juliehzl already mentioned it.
In the meantime, we recommend using the command below, which uses azcopy under the hood. The latest version of azcopy is already aware of managed disks in the upload state. It might work on NixOS.
az storage copy -s https://yourstorageaccountname.blob.core.windows.net/test/d.vhd -d managedDiskSASUri
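For the curl route earlier in the thread: a managed-disk upload blob is a page blob, so the raw REST call would be Put Page rather than a block-blob Put Blob. A rough sketch of what a single Put Page request might look like, pieced together from the public Storage REST docs (an assumption; the thread does not confirm that md-impexp- accounts accept hand-rolled Put Page calls, and page ranges must be 512-byte aligned):

```
# carve out the first 512 KiB of the VHD (page ranges must be 512-aligned)
dd if=disk.vhd of=/tmp/page0.bin bs=512K count=1

# PUT it as a page range; note the added comp=page query parameter
curl -X PUT \
  -H "x-ms-version: 2018-03-28" \
  -H "x-ms-page-write: update" \
  -H "x-ms-range: bytes=0-524287" \
  --data-binary @/tmp/page0.bin \
  "${sasurl}&comp=page"
```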
Thanks for the clarification. However, I still can't get it to work, even after packaging and trying azcopy:
$ f="/home/azureuser/disk.vhd"
$ sasurl="...redacted... output from the grant-access command"
$ azcopy copy "${f}" "${sasurl}"
INFO: Scanning...
Job 252a3040-c949-6c46-485c-e83308db66a5 has started
Log file is located at: /home/azureuser/.azcopy/252a3040-c949-6c46-485c-e83308db66a5.log
0 Done, 1 Failed, 0 Pending, 0 Skipped, 1 Total, 2-sec Throughput (Mb/s): 33 (Disk may be limiting speed)
Job 252a3040-c949-6c46-485c-e83308db66a5 summary
Elapsed Time (Minutes): 0.5336
Total Number Of Transfers: 1
Number of Transfers Completed: 0
Number of Transfers Failed: 1
Number of Transfers Skipped: 0
TotalBytesTransferred: 0
Final Job Status: Failed
Turns out someone on StackOverflow actually provided a working example. Despite azcopy being "aware of managed disks in upload state", apparently you still have to tell it to upload a VHD as a page blob. https://stackoverflow.com/a/57584855/472873
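Presumably the key detail in that answer is azcopy's --blob-type flag (hedging here, as the exact invocation comes from the linked answer rather than this thread), something like:

```
azcopy copy "${f}" "${sasurl}" --blob-type PageBlob
```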
So... new issue. I can't seem to make a SIG image out of this. It doesn't work when I try to use a managed disk for the disk version instance, and I also can't create an "image" from a managed disk uploaded via the APIs we're discussing here. I just get this every time:
# diskid is the full Azure resource ID of a disk that was uploaded with `for-upload`, `grant-access` and `azcopy`
$ ./az.sh image create \
--resource-group "${group}" \
--name "${diskname}" \
--source "${diskid}" \
--os-type "linux" \
--debug
...
DEBUG: msrest.http_logger : {
"startTime": "2019-08-23T14:28:15.8016214+00:00",
"endTime": "2019-08-23T14:28:15.8797671+00:00",
"status": "Failed",
"error": {
"code": "InternalExecutionError",
"message": "An internal execution error occurred."
},
"name": "0c2137e6-5f69-4e84-8bc6-0909fbc59649"
}
@ramankumarlive any help here?
@colemickens thanks for the feedback, we've already fixed the VHD detection and it will be shipped in the next release.
I'll let @ramankumarlive take a look at the other issue, as I'm not very familiar with managed disks.
@colemickens I believe you haven't revoked the SAS. The new managed disk is still in the upload state. Having said that, the error message is certainly not good. We will get it fixed.
az disk revoke-access -n mydiskname -g resourcegroupname
@ramankumarlive @zezha-msft @Juliehzl: Thank you all for the help. I have successfully uploaded an image and replicated it to many regions using a SIG. I will be finishing this up and pushing it into NixOS's nixpkgs in the coming week or so, where it could serve as a full e2e example of custom images without managing storage accounts in Azure. I really appreciate the direct, quick help here, thanks so much.
(@ramankumarlive also, to circle back, I think you were right. I added the revoke-access to my script and it seems to work reliably.)
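For anyone following along, the flow that ends up working in this thread can be sketched roughly as follows (a sketch, not a definitive script: resource names, SKU, and durations are placeholders, and the assumption is that --upload-size-bytes is the VHD file size, footer included):

```
group=nixosvhds            # placeholder resource group
diskname=nixos-test        # placeholder disk name
bytes=$(stat -c %s disk.vhd)   # VHD size, including its footer

az disk create -g "$group" -n "$diskname" \
  --for-upload --upload-size-bytes "$bytes" --sku standard_lrs

sasurl=$(az disk grant-access -g "$group" -n "$diskname" \
  --access-level Write --duration-in-seconds 3600 \
  --query accessSas -o tsv)

azcopy copy disk.vhd "$sasurl" --blob-type PageBlob

az disk revoke-access -g "$group" -n "$diskname"

az image create -g "$group" -n "${diskname}-image" --os-type linux \
  --source "$(az disk show -g "$group" -n "$diskname" --query id -o tsv)"
```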
One more issue I've run into. Here's what happens:
- az disk create ... succeeds (no error that the disk already exists).
- The az disk grant-access command times out and fails with an "unexpected error occurred while processing disk".
Actually, no: the same script that was working ~15 days ago is now not working. The grant-access call keeps failing. I updated the docker container I'm running, but changed nothing else:
ERROR: Deployment failed. Correlation ID: 7e73bc1e-a39b-442d-a8d3-c277cf164aaf. Error while creating storage object https://md-impexp-vxpc2srvfczm.blob.core.windows.net/fsqdxbbklb03/abcd
This is a new error that I haven't seen before. cc: @ramankumarlive .
This time I was able to upload the disk, but creating the image failed (despite revoking the access-token as expected)
+ ./az.sh image create --resource-group nixosvhds --name nixos_1903_20190911_103149 --source /subscriptions/aff271ee-e9be-4441-b9bb-42f5af4cbaeb/resourceGroups/nixosvhds/providers/Microsoft.Compute/disks/nixos_1903_20190911_103149 --os-type linux
ERROR: Deployment failed. Correlation ID: 450c6847-6f83-4a44-9fb7-fc3de20cf7a4. An unexpected error occurred while processing disk. Please retry later.
And now I'm 12 minutes into a blob upload that usually takes 30 seconds (and the "upload" is a s2s blob copy from another MD blob, so it's entirely Azure's fault that this transfer of a 1GB image is taking 13+ minutes).
EDIT: Tried again with another new blob. Still 15+ minutes and no signs of success. This is agonizing.
And after nearly 20 minutes, the disk upload finished and then the image creation failed. Again. If there weren't people repeatedly asking for this...
az sig image-version create is similarly taking a huge amount of time. This is incredibly painful - it's very slow and prone to api failures.
The latest copy of scripts and instructions are here: https://github.com/colemickens/nixpkgs/tree/azure/nixos/maintainers/scripts/azure.
Is there an expectation I should have around "how long does it take to replicate an existing OS image (already in Azure) and use it in Azure via a Shared Image Gallery" because best-case scenario right now is over 40 minutes...
Overall, even with the complete removal of manual storage account management, this is a huge exercise in patience compared to the AWS/AMI equivalent process.
This time I even got an internal stack trace:
./az.sh sig image-version create --resource-group nixos-user-vhds --gallery-name nixosvhds --gallery-image-definition nixos --gallery-image-version 1.0.0 --target-regions WestCentralUS WestUS2 WestUS --replica-count 2 --managed-image /subscriptions/aff271ee-e9be-4441-b9bb-42f5af4cbaeb/resourceGroups/nixos-user-vhds/providers/Microsoft.Compute/images/nixos_1903_20190911_103149
ERROR: Deployment failed. Correlation ID: 2111a04e-e68d-4f47-be0d-9c342cd990b6. DiskServiceInternalError: An unexpected error occurred while processing disk. Please retry later.
InternalDetail: Polling on operation https://westus2.disk.compute.azure.com:10019/subscriptions/aff271ee-e9be-4441-b9bb-42f5af4cbaeb/providers/Microsoft.Compute/locations/westus2/DiskOperations/04e1c3aa-55da-4912-8b25-e1303edb6eff?monitor=true&api-version=2018-04-01 timed out. Last error seen: DiskServiceInternalError: An unexpected error occurred while processing disk. Please retry later.
InternalDetail: {"exceptionType":"Microsoft.Azure.XStoreClientHelper.StorageAccountServiceClient.StorageAccountServiceException","errorDetail":"Microsoft.Azure.XStoreClientHelper.StorageAccountServiceClient.StorageAccountServiceException: StorageAccountOperationInternalError: Internal error occurred while accessing storage account 'md-52kphkjcksqv'.\nInternalDetail: {\"ClassName\":\"System.TimeoutException\",\"Message\":\"GET https://md-52kphkjcksqv.account.core.windows.net/keys?include=user,system&subscription=3160d8ec-e595-4090-a6b3-4044847479a4&api-version=2014-10-10 timed out.\",\"Data\":null,\"InnerException\":null,\"HelpURL\":null,\"StackTraceString\":null,\"RemoteStackTraceString\":null,\"RemoteStackIndex\":0,\"ExceptionMethod\":null,\"HResult\":-2146233083,\"Source\":null,\"WatsonBuckets\":null} \n --> System.TimeoutException: GET https://md-52kphkjcksqv.account.core.windows.net/keys?include=user,system&subscription=3160d8ec-e595-4090-a6b3-4044847479a4&api-version=2014-10-10 timed out.\r\n --- End of inner exception stack trace ---\r\n at Microsoft.Azure.XStoreClientHelper.StorageAccountServiceClient.StorageAccountEndpointErrorHandler.HandleTimeout(Tracer tracer, HttpRequestMessage request, TimeoutException timeoutException)\r\n at Microsoft.WindowsAzure.ResourceProvider.Common.ReliableHttpClient.<CallWithRetriesFullResponse>d__14`2.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at Microsoft.WindowsAzure.ResourceProvider.Common.ReliableHttpClient.<CallWithRetriesFullResponse>d__13`2.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at Microsoft.WindowsAzure.ResourceProvider.Common.ReliableHttpClient.<CallWithRetries>d__11`2.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at Microsoft.Azure.XStoreClientHelper.StorageAccountServiceClient.StorageAccountEndpointClient.<GetStorageAccountKeys>d__11.MoveNext()\r\n--- End of 
stack trace from previous location where exception was thrown ---\r\n at Microsoft.Azure.XStoreClientHelper.StorageAccountKeysCache.<>c__DisplayClass10_1.<<GetStorageAccountKeys>b__0>d.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at Microsoft.Azure.XStoreClientHelper.StorageHelper.<CaptureStorageAccountRelatedErrors>d__28.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at Microsoft.Azure.XStoreClientHelper.StorageHelper.<CaptureStorageAccountRelatedErrors>d__28.MoveNext()\r\n--- End of stack trace from previous location where exception was thrown ---\r\n at Microsoft.Azure.XStoreClientHelper.StorageAccountKeysCache.<GetStorageAccountKeys>d__10.MoveNext()\r\n--- End of stack trace from previous loc...
I spent 3+ hours today trying to just upload an image and then replicate it as a user image via SIG. Multiple API failures. Multiple instances where I waited 20+ minutes for an operation to succeed. And then I come back and it has still found a new way to fail.
I don't even know what to say. If there weren't people asking me to push these changes to them, I would rm -rf the entire thing and never look at Azure again.
@colemickens sorry for the inconvenience, could you please submit a support ticket?
@colemickens Sorry for your inconvenience again.
One more issue I've run into:
- Create a disk "nixosDisk1". Upload the blob. Revoke the access token.
- Many days later, try again, with the same disk name.
Here's what happens:
- az disk create ... succeeds (no error that the disk already exists).
- The az disk grant-access command times out and fails with an "unexpected error occurred while processing disk".
@qwordy is the expert in Compute and I think he can give us some instruction for this. @qwordy Do you know what happened here?
Any update on this? Is azcopy still strangely required for this one blob upload scenario?
Thanks for the feedback! We are routing this to the appropriate team for follow-up. cc @mjconnection, @Drewm3, @avirishuv, @axayjo, @vaibhav-agar.
Sorry for the delay here. The internal dashboard on the compute side was not showing this issue because it was originally tagged as storage...
@yonzhan, could you outline the compute issue here? This appeared to be a storage issue at least through the initial thread.
Hi all, just putting some activity into this thread as I've recently encountered this issue. I attempted to use both the URI and the query string. I was able to use the same SAS token with the azcopy tool to upload a blob to my storage container, so @colemickens it appears there haven't been any changes yet.
@Drewm3 I believe the reason for the Compute tag is that the functionality fails to use the SAS token to authenticate the blob upload even when --auth-mode key is set.
As an update from me, and in case it's helpful to someone else: azcopy can't handle page blobs piped from standard input, so now I use blobxfer. I can now store images compressed with zstd, and then zstdcat into blobxfer to upload. blobxfer properly handles input from stdin and skips the empty sections of the extracted image, so I get the benefits of having a pre-sized image without having to store it decompressed anywhere.
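The empty-section skipping that blobxfer does can be illustrated with a small standalone sketch: walk an image in fixed-size chunks and only treat as upload-worthy the ones that contain non-zero bytes. (The 4 MiB chunk size and the file path are arbitrary choices for illustration, not anything Azure or blobxfer mandates.)

```shell
chunk=$((4 * 1024 * 1024))     # illustrative 4 MiB chunk size
f=/tmp/fake-disk.img

# build a demo image: 8 MiB of zeros with a little data in the second chunk
dd if=/dev/zero of="$f" bs=1M count=8 2>/dev/null
printf 'hello' | dd of="$f" bs=1M seek=5 conv=notrunc 2>/dev/null

size=$(stat -c %s "$f")
i=0
nonzero=""
while [ $((i * chunk)) -lt "$size" ]; do
  # a chunk is "empty" if it contains only NUL bytes
  bytes=$(dd if="$f" bs="$chunk" skip="$i" count=1 2>/dev/null | tr -d '\0' | wc -c)
  if [ "$bytes" -gt 0 ]; then
    nonzero="$nonzero $i"      # this chunk would actually be uploaded
  fi
  i=$((i + 1))
done
echo "chunks with data:$nonzero"   # -> chunks with data: 1
```

Only the chunk holding real data is reported; an uploader built on this idea never transfers the zero regions, which is why a pre-sized sparse image stays cheap to push.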
Another update... blobxfer is basically abandoned and doesn't work with the latest Azure python libraries (and doesn't build at all in nixpkgs right now)... so I'm back to having no sane way to upload a page blob, while keeping it compressed on disk. Calling this frustrating is an understatement; this is trivial for AWS or GCP.
An update: the Python track 2 SDK supports from_blob_url and a stream as the upload data input. The CLI will support it soon!
@yonzhan is this fixed?
@Juliehzl will provide update shortly.