Azure.RequestFailedException: Operation could not be completed within the specified time.
RequestId:7ea85520-601f-0095-4c19-79a760000000
Time:2020-08-23T06:51:33.3743708Z
Status: 500 (Operation could not be completed within the specified time.)
ErrorCode: OperationTimedOut
Headers:
Server: Windows-Azure-HDFS/1.0,Microsoft-HTTPAPI/2.0
x-ms-error-code: OperationTimedOut
x-ms-request-id: 7ea85520-601f-0095-4c19-79a760000000
x-ms-version: 2019-07-07
x-ms-client-request-id: b5ab9d35-f982-4316-9da1-3046c8882d0d
Date: Sun, 23 Aug 2020 06:51:32 GMT
Content-Length: 193
Content-Type: application/json; charset=utf-8
at Azure.Storage.Files.DataLake.DataLakeRestClient.Path.AppendDataAsync_CreateResponse(ClientDiagnostics clientDiagnostics, Response response)
at Azure.Storage.Files.DataLake.DataLakeRestClient.Path.AppendDataAsync(ClientDiagnostics clientDiagnostics, HttpPipeline pipeline, Uri resourceUri, Stream body, String version, Nullable`1 position, Nullable`1 timeout, Nullable`1 contentLength, Byte[] transactionalContentHash, String leaseId, String requestId, Boolean async, String operationName, CancellationToken cancellationToken)
at Azure.Storage.Files.DataLake.DataLakeFileClient.AppendInternal(Stream content, Nullable`1 offset, Byte[] contentHash, String leaseId, IProgress`1 progressHandler, Boolean async, CancellationToken cancellationToken)
at Azure.Storage.Files.DataLake.DataLakeFileClient.AppendAsync(Stream content, Int64 offset, Byte[] contentHash, String leaseId, IProgress`1 progressHandler, CancellationToken cancellationToken)
at Azure.Storage.Files.DataLake.DataLakePartitionedUploader.AppendBlockAsync(ChunkedStream block, Int64 offset, String leaseId, IProgress`1 progressHandler, CancellationToken cancellationToken)
at Azure.Storage.Files.DataLake.DataLakePartitionedUploader.UploadInParallelAsync(Stream content, Int32 blockSize, PathHttpHeaders httpHeaders, DataLakeRequestConditions conditions, IProgress`1 progressHandler, CancellationToken cancellationToken)
at Azure.Storage.Files.DataLake.DataLakePartitionedUploader.UploadAsync(Stream content, PathHttpHeaders httpHeaders, IDictionary`2 metadata, String permissions, String umask, DataLakeRequestConditions conditions, IProgress`1 progressHandler, CancellationToken cancellationToken)
at Azure.Storage.Files.DataLake.DataLakeFileClient.StagedUploadAsync(Stream content, PathHttpHeaders httpHeaders, IDictionary`2 metadata, String permissions, String umask, DataLakeRequestConditions conditions, IProgress`1 progressHandler, StorageTransferOptions transferOptions, Boolean async, CancellationToken cancellationToken)
at Azure.Storage.Files.DataLake.DataLakeFileClient.UploadAsync(String path, PathHttpHeaders httpHeaders, DataLakeRequestConditions conditions, IProgress`1 progressHandler, StorageTransferOptions transferOptions, CancellationToken cancellationToken)
To Reproduce
Call Azure.Storage.Files.DataLake.DataLakeFileClient.UploadAsync with a ~300MB file over a slow connection (< 2 Mbps upload; note that this is megabits per second).
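(A minimal repro sketch under those conditions; the account, filesystem, and file names below are placeholders, not from the original report:)

```csharp
using System;
using System.Threading.Tasks;
using Azure.Storage;
using Azure.Storage.Files.DataLake;

class Repro
{
    static async Task Main()
    {
        var serviceClient = new DataLakeServiceClient(
            new Uri("https://<account>.dfs.core.windows.net"),
            new StorageSharedKeyCredential("<account>", "<key>"));

        DataLakeFileClient fileClient = serviceClient
            .GetFileSystemClient("<filesystem>")
            .GetFileClient("large.bin");

        // ~300MB local file, default StorageTransferOptions; over a < 2 Mbps
        // uplink this is the call that fails with the 500 OperationTimedOut above.
        await fileClient.UploadAsync(@"C:\data\large.bin");
    }
}
```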
Environment:
Azure.Storage.Files.DataLake 12.2.2
.NET Core SDK (reflecting any global.json):
Version: 3.1.401
Commit: 5b6f5e5005
Runtime Environment:
OS Name: Windows
OS Version: 10.0.18363
OS Platform: Windows
RID: win10-x64
Base Path: C:\Program Files\dotnet\sdk\3.1.401\
Host (useful for support):
Version: 3.1.7
Commit: fcfdef8d6b
.NET Core SDKs installed:
1.1.11 [C:\Program Files\dotnet\sdk]
1.1.14 [C:\Program Files\dotnet\sdk]
2.1.202 [C:\Program Files\dotnet\sdk]
2.1.503 [C:\Program Files\dotnet\sdk]
2.1.514 [C:\Program Files\dotnet\sdk]
2.1.515 [C:\Program Files\dotnet\sdk]
3.1.302 [C:\Program Files\dotnet\sdk]
3.1.401 [C:\Program Files\dotnet\sdk]
.NET Core runtimes installed:
Microsoft.AspNetCore.All 2.1.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.18 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.19 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.All 2.1.21 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.All]
Microsoft.AspNetCore.App 2.1.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.18 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.19 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.21 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.6 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 3.1.7 [C:\Program Files\dotnet\shared\Microsoft.AspNetCore.App]
Microsoft.NETCore.App 1.0.13 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 1.0.16 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 1.1.10 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 1.1.13 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.0.9 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.7 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.18 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.19 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 2.1.21 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.6 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.NETCore.App 3.1.7 [C:\Program Files\dotnet\shared\Microsoft.NETCore.App]
Microsoft.WindowsDesktop.App 3.1.6 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Microsoft.WindowsDesktop.App 3.1.7 [C:\Program Files\dotnet\shared\Microsoft.WindowsDesktop.App]
Note that azcopy has a similar issue: https://github.com/Azure/azure-storage-azcopy/issues/701
I enabled diagnostics logging on the blob endpoint and was able to track the correlated error for a 290MB file. Fields of interest (a client-side logging sketch follows the list):
StatusCode: 500
StatusText: ClientTimeoutError
DurationMs: 180267
ServerLatencyMs: 114
URI: https://REDACTED.dfs.core.windows.net/REDACTED/REDACTED/REDACTED.MOV?action=append&position=8388608
UserAgentHeader: azsdk-net-Storage.Files.DataLake/12.2.2 (.NET Core 3.1.6; Microsoft Windows 10.0.18363)
ServiceType: blob
RequestHeaderSize: 2485
RequestBodySize: 6078377
ContentLengthHeader: 8388608
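(Side note for anyone correlating these server-side entries with client requests: the SDK can emit the matching client-side request logs, including x-ms-client-request-id, via Azure.Core's event source listener. A minimal sketch, assuming console output suffices, and not necessarily what was used above:)

```csharp
using System.Diagnostics.Tracing;
using Azure.Core.Diagnostics;

// Prints Azure SDK pipeline events (request URIs, client request IDs,
// response status codes) to the console; keep it alive while the upload runs.
using var listener = AzureEventSourceListener.CreateConsoleLogger(EventLevel.Informational);
```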
I did not set any StorageTransferOptions. As far as I can tell from the code and its defaults, that should have meant a single HTTP request with a 290MB body, yet the log above shows an append at position=8388608 with an 8MB content length. Plus the logged request body size doesn't match that content length (6078377 vs 8388608), and both are strangely small for a 290MB upload. I also did not set the timeout query param, and I set both the transport HTTP client and the retry NetworkTimeout timeouts to infinite.

Adding @seanmcc-msft to take a look.
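(For concreteness, the infinite-timeout configuration described in that comment would look roughly like this; the account values are placeholders, not the reporter's actual code:)

```csharp
using System;
using System.Net.Http;
using System.Threading;
using Azure.Core.Pipeline;
using Azure.Storage;
using Azure.Storage.Files.DataLake;

var serviceUri = new Uri("https://<account>.dfs.core.windows.net");
var credential = new StorageSharedKeyCredential("<account>", "<key>");

// Both the transport HttpClient timeout and the pipeline's per-request
// network timeout set to infinite, so neither should cause the 500.
var httpClient = new HttpClient { Timeout = Timeout.InfiniteTimeSpan };
var options = new DataLakeClientOptions
{
    Transport = new HttpClientTransport(httpClient),
};
options.Retry.NetworkTimeout = Timeout.InfiniteTimeSpan;

var serviceClient = new DataLakeServiceClient(serviceUri, credential, options);
```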
Hi @ohadschn,
I think you are correct that this is a server-side timeout. To avoid this issue, I recommend setting InitialTransferSize and MaximumTransferSize in StorageTransferOptions for your upload. Below is a sample.
-Sean
```csharp
using Azure.Storage;
using Azure.Storage.Files.DataLake;
using Azure.Storage.Files.DataLake.Models;
using NUnit.Framework;
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;

namespace DataLakeUpload
{
    class Program
    {
        static async Task Main(string[] args)
        {
            string accountName = "";
            string accountKey = "";
            Uri serviceUri = new Uri($"https://{accountName}.dfs.core.windows.net");
            StorageSharedKeyCredential sharedKeyCredential = new StorageSharedKeyCredential(accountName, accountKey);
            DataLakeServiceClient serviceClient = new DataLakeServiceClient(serviceUri, sharedKeyCredential);
            DataLakeFileSystemClient fileSystemClient = serviceClient.GetFileSystemClient("ssdadf2345");

            try
            {
                // Create file system.
                await fileSystemClient.CreateAsync();
                DataLakeFileClient fileClient = fileSystemClient.GetFileClient("myfile");

                // Cap every request at 4 MB so no single append can outlive the service timeout.
                DataLakeFileUploadOptions options = new DataLakeFileUploadOptions
                {
                    TransferOptions = new StorageTransferOptions
                    {
                        InitialTransferSize = 4 * 1024 * 1024,
                        MaximumTransferSize = 4 * 1024 * 1024
                    }
                };

                byte[] data = GetRandomBuffer(16 * 1024 * 1024);
                Stream initialStream = new MemoryStream(data);
                await fileClient.UploadAsync(initialStream, options: options);

                // Download and verify the data.
                MemoryStream downloadedStream = new MemoryStream();
                await fileClient.ReadToAsync(downloadedStream);
                AssertSequenceEqual(data, downloadedStream.ToArray());
            }
            finally
            {
                await fileSystemClient.DeleteAsync();
            }
        }

        public static byte[] GetRandomBuffer(long size)
        {
            Random random = new Random(Environment.TickCount);
            var buffer = new byte[size];
            random.NextBytes(buffer);
            return buffer;
        }

        public static void AssertSequenceEqual<T>(IEnumerable<T> expected, IEnumerable<T> actual)
        {
            Assert.AreEqual(expected.Count(), actual.Count(),
                "Actual sequence length does not match expected sequence length");

            (int index, T expected, T actual)[] firstErrors = expected
                .Zip(actual, (e, a) => (expected: e, actual: a))
                .Select((x, i) => (index: i, x.expected, x.actual))
                .Where(x => !x.expected.Equals(x.actual))
                .Take(5)
                .ToArray();

            Assert.IsFalse(firstErrors.Any(),
                "Actual sequence does not match expected sequence at locations\n" +
                string.Join("\n", firstErrors.Select(e => $"{e.index} => expected = {e.expected}, actual = {e.actual}")));
        }
    }
}
```
Thank you @seanmcc-msft, I am aware of this option - the question is what should be done about this server issue other than such a workaround. Specifically, the docs suggest we should never hit such a limit:
> Calls to write a blob, write a block, or write a page are permitted 10 minutes per megabyte to complete. If an operation is taking longer than 10 minutes per megabyte on average, it will time out.
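Taking that allowance at face value against the log entry above: the append's ContentLengthHeader was 8388608 bytes (8 MiB), so it should have been permitted roughly 8 × 10 = 80 minutes, yet DurationMs shows the request failing after about 180 seconds.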
Also, shouldn't the library account for this out of the box, e.g. with smaller default InitialTransferSize / MaximumTransferSize and/or performing more retries?
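(For completeness, a sketch of what the client-side retry knobs in that question look like; the values are illustrative only, and the account placeholders are not from the thread:)

```csharp
using System;
using Azure.Core;
using Azure.Storage;
using Azure.Storage.Files.DataLake;

var serviceUri = new Uri("https://<account>.dfs.core.windows.net");
var credential = new StorageSharedKeyCredential("<account>", "<key>");

// Raise the retry count above the Azure.Core default of 3 and keep
// exponential backoff between attempts.
var options = new DataLakeClientOptions();
options.Retry.MaxRetries = 10;
options.Retry.Mode = RetryMode.Exponential;

var serviceClient = new DataLakeServiceClient(serviceUri, credential, options);
```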