Azure-sdk-for-net: Azure.Storage.Blobs.BlobClient.UploadAsync throws System.Net.Sockets.SocketException

Created on 19 Dec 2019 · 18 comments · Source: Azure/azure-sdk-for-net

Appears to be related to, but not exactly the same as, #9142.

Azure.Storage.Blobs.BlobClient.UploadAsync(Stream, Boolean, CancellationToken) is intermittently throwing the following exception with file sizes of 36.9 MB and 37.5 MB.

Exception

System.Threading.Tasks.TaskCanceledException: The operation was canceled.
 ---> System.IO.IOException: Unable to read data from the transport connection: The I/O operation has been aborted because of either a thread exit or an application request..
 ---> System.Net.Sockets.SocketException (995): The I/O operation has been aborted because of either a thread exit or an application request.
   --- End of inner exception stack trace ---
   at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.ThrowException(SocketError error, CancellationToken cancellationToken)
   at System.Net.Sockets.Socket.AwaitableSocketAsyncEventArgs.GetResult(Int16 token)
   at System.Net.Security.SslStream.<FillBufferAsync>g__InternalFillBufferAsync|215_0[TReadAdapter](TReadAdapter adap, ValueTask`1 task, Int32 min, Int32 initial)
   at System.Net.Security.SslStream.ReadAsyncInternal[TReadAdapter](TReadAdapter adapter, Memory`1 buffer)
   at System.Net.Http.HttpConnection.FillAsync()
   at System.Net.Http.HttpConnection.ReadNextResponseHeaderLineAsync(Boolean foldedHeadersAllowed)
   at System.Net.Http.HttpConnection.SendAsyncCore(HttpRequestMessage request, CancellationToken cancellationToken)
   --- End of inner exception stack trace ---
   at System.Net.Http.HttpConnection.SendAsyncCore(HttpRequestMessage request, CancellationToken cancellationToken)
   at System.Net.Http.HttpConnectionPool.SendWithNtConnectionAuthAsync(HttpConnection connection, HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)
   at System.Net.Http.HttpConnectionPool.SendWithRetryAsync(HttpRequestMessage request, Boolean doRequestAuth, CancellationToken cancellationToken)
   at System.Net.Http.RedirectHandler.SendAsync(HttpRequestMessage request, CancellationToken cancellationToken)
   at System.Net.Http.HttpClient.FinishSendAsyncUnbuffered(Task`1 sendTask, HttpRequestMessage request, CancellationTokenSource cts, Boolean disposeCts)
   at Azure.Core.Pipeline.HttpClientTransport.ProcessAsync(HttpMessage message)
   at Azure.Core.Pipeline.RequestActivityPolicy.ProcessNextAsync(HttpMessage message, ReadOnlyMemory`1 pipeline, Boolean isAsync)
   at Azure.Core.Pipeline.RequestActivityPolicy.ProcessAsync(HttpMessage message, ReadOnlyMemory`1 pipeline, Boolean isAsync)
   at Azure.Core.Pipeline.BufferResponsePolicy.ProcessAsync(HttpMessage message, ReadOnlyMemory`1 pipeline)
   at Azure.Core.Pipeline.LoggingPolicy.ProcessAsync(HttpMessage message, ReadOnlyMemory`1 pipeline, Boolean async)
   at Azure.Core.Pipeline.LoggingPolicy.ProcessAsync(HttpMessage message, ReadOnlyMemory`1 pipeline)
   at Azure.Core.Pipeline.HttpPipelineSynchronousPolicy.ProcessAsync(HttpMessage message, ReadOnlyMemory`1 pipeline)
   at Azure.Core.Pipeline.RetryPolicy.ProcessAsync(HttpMessage message, ReadOnlyMemory`1 pipeline, Boolean async)
   at Azure.Core.Pipeline.RetryPolicy.ProcessAsync(HttpMessage message, ReadOnlyMemory`1 pipeline, Boolean async)
   at Azure.Core.Pipeline.HttpPipelineSynchronousPolicy.ProcessAsync(HttpMessage message, ReadOnlyMemory`1 pipeline)
   at Azure.Core.Pipeline.HttpPipelineSynchronousPolicy.ProcessAsync(HttpMessage message, ReadOnlyMemory`1 pipeline)
   at Azure.Storage.Blobs.BlobRestClient.BlockBlob.UploadAsync(ClientDiagnostics clientDiagnostics, HttpPipeline pipeline, Uri resourceUri, Stream body, Int64 contentLength, Nullable`1 timeout, Byte[] transactionalContentHash, String blobContentType, String blobContentEncoding, String blobContentLanguage, Byte[] blobContentHash, String blobCacheControl, IDictionary`2 metadata, String leaseId, String blobContentDisposition, String encryptionKey, String encryptionKeySha256, Nullable`1 encryptionAlgorithm, Nullable`1 tier, Nullable`1 ifModifiedSince, Nullable`1 ifUnmodifiedSince, Nullable`1 ifMatch, Nullable`1 ifNoneMatch, String requestId, Boolean async, String operationName, CancellationToken cancellationToken)
   at Azure.Storage.Blobs.Specialized.BlockBlobClient.UploadInternal(Stream content, BlobHttpHeaders blobHttpHeaders, IDictionary`2 metadata, BlobRequestConditions conditions, Nullable`1 accessTier, IProgress`1 progressHandler, Boolean async, CancellationToken cancellationToken)
   at Azure.Storage.Blobs.Specialized.BlockBlobClient.UploadAsync(Stream content, BlobHttpHeaders httpHeaders, IDictionary`2 metadata, BlobRequestConditions conditions, Nullable`1 accessTier, IProgress`1 progressHandler, CancellationToken cancellationToken)
   at Azure.Storage.Blobs.PartitionedUploader.UploadAsync(Stream content, BlobHttpHeaders blobHttpHeaders, IDictionary`2 metadata, BlobRequestConditions conditions, IProgress`1 progressHandler, Nullable`1 accessTier, CancellationToken cancellationToken)
   at Azure.Storage.Blobs.BlobClient.StagedUploadAsync(Stream content, BlobHttpHeaders blobHttpHeaders, IDictionary`2 metadata, BlobRequestConditions conditions, IProgress`1 progressHandler, Nullable`1 accessTier, Nullable`1 singleUploadThreshold, StorageTransferOptions transferOptions, Boolean async, CancellationToken cancellationToken)

Code Snippet

var blobClient = new Azure.Storage.Blobs.BlobClient(new Uri("https://<storageAcct>.blob.core.windows.net/<container>/<blobName>?<sas>"));

await using (var fileStream = System.IO.File.OpenRead("<path>"))
{
    await blobClient.UploadAsync(fileStream, true, CancellationToken.None);
}

Setup

  • OS: Windows 10 Version 1803 (OS Build 17134.1184)
  • IDE : Visual Studio 2019 v16.4.2
  • Azure.Storage.Blobs 12.1.0
  • .NET Core 3.1 Console Application
Labels: Client, Storage, customer-reported

Most helpful comment

I was also stuck on this issue. The fix is very simple: just set your stream position to 0.

fileStream.Position = 0;

All set; it will work.

All 18 comments

Thank you for reporting this issue, @beckettk.

I've slightly modified your sample to the following loop, and have yet to hit the exception.

var containerClient = new Azure.Storage.Blobs.BlobContainerClient(new Uri("https://<storageAcct>.blob.core.windows.net/<container>?<containersas>"));
int i = 0;
while (true)
{
    var blobClient = containerClient.GetBlobClient($"MyBlob-{i++}");
    await using (var fileStream = System.IO.File.OpenRead("<my 37MB file>"))
    {
        await blobClient.UploadAsync(fileStream, true, CancellationToken.None);
    }
}

Is it possible for you to provide a Fiddler trace of the failed upload, or anything that allows me to view the web request itself? (Don't forget to redact any signatures/auth headers)

Additionally, how intermittent is this issue? I've done 50 attempts so far in this loop and haven't encountered the issue.

@reyou I am also unable to reproduce this issue with the file mentioned in your SO thread. A web trace of the failed request would prove helpful in tracking down the cause.

@jaschrep-msft
Do you mean using a tool like Fiddler to capture the trace? Or is there a logger related to that?

Fiddler's display of the raw request/response is perfect.

@jaschrep-msft, I can reliably recreate this issue on my system.

I have traced the request/response with Fiddler. Here is a sample request/response scenario.

REQUEST

    PUT https://<storageAcct>.blob.core.windows.net/<container>/<blobName>?<sas> HTTP/1.1
    Host: <storageAcct>.blob.core.windows.net
    x-ms-blob-type: BlockBlob
    x-ms-version: 2019-02-02
    x-ms-client-request-id: e81bc21c-5331-4684-8907-ffd6fa1f2208
    x-ms-return-client-request-id: true
    User-Agent: azsdk-net-Storage.Blobs/12.1.0+ee51b9a6328321b8bc491824116ec9038f2fed5b (.NET Core 3.1.1; Microsoft Windows 10.0.17134)
    Content-Length: 38771467

RESPONSE - Overall Elapsed: 0:07:25.119

    HTTP/1.1 201 Created
    Content-Length: 0
    Content-MD5: WHHcDGfrHi9e8zJpVQ8sCw==
    Last-Modified: Wed, 15 Jan 2020 02:32:25 GMT
    ETag: "0x8D79963285CD733"
    Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
    x-ms-request-id: c3479df1-501e-0131-0b4a-cb8caa000000
    x-ms-client-request-id: e81bc21c-5331-4684-8907-ffd6fa1f2208
    x-ms-version: 2019-02-02
    x-ms-content-crc64: xsaeQ/qoKB4=
    x-ms-request-server-encrypted: true
    Date: Wed, 15 Jan 2020 02:32:24 GMT

Note that the captured response is 201, but the BlobClient threw a "System.Threading.Tasks.TaskCanceledException: The operation was canceled." after exactly 100 seconds.

When attempting to upload this file (stored locally on the system SSD), the attempt fails with the error I reported every single time, after exactly 100 seconds.

When attempting to upload this file (stored remotely on SharePoint and accessed via the WebDAV protocol), the attempt fails with the same error every single time, but with some variation in timing (e.g., 206 s, 152 s, 229 s, 163 s).

As long as the process that is uploading files is still running, it appears that the socket for these requests stays open and eventually the file uploads with the Blob service returning 201 after the client upload was cancelled.

I suspected that Fiddler (and the SSL Tunnel) may be influencing the results of these tests. But...

With Fiddler, requests to upload local files from the system SSD always fail after exactly 100 seconds (despite the upload continuing).

Without Fiddler, uploads of local files from the system SSD fail more intermittently, but the error nevertheless still occurs. All failures occur after exactly 100 seconds; all successful attempts take less than 100 seconds.

So it would seem that I am hitting a default timeout for web requests.

NOTE: I see that the request to upload the file is being done in one single PUT operation. Under what conditions should I expect to see the file being chunked?

I should add that I am working with a pretty slow connection, with sub-5 Mbps upload speeds.

It may well be that this issue could be ameliorated by a faster connection. But I still wonder what is required to force a file to be uploaded in small chunks rather than in one go.

I think my issue was that I was running a .NET Core 3 application on the Azure Free Tier.
While I am not certain of the cause, I was getting the same error locally, but it just worked yesterday (maybe I was using Fiddler and it somehow fixed it locally). When I then tried pushing the file through a Web App on Azure, I received a 404 error; Free Tier apps probably do not accept more than a certain Content-Length.

I will investigate more on this.

@beckettk regarding chunked uploads, the file needs to be larger than 256 MB, as that is the largest size a single upload can handle. We currently have settings for how to divide up the load once you surpass that threshold, but a single upload is used until you hit that point. We are exploring options for allowing users to configure when to chunk uploads.
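For reference, the stack trace above shows that the SDK already threads a StorageTransferOptions through StagedUploadAsync, and the 12.x UploadAsync overloads accept one to tune the parallel (chunked) path. A minimal sketch of what that looks like; the specific property names and the 4 MiB / 4-way values here are illustrative and should be checked against the SDK version you are on:

    // Sketch only: tunes how the SDK splits and parallelizes an upload
    // once the blob exceeds the single-shot threshold. Values below are
    // illustrative assumptions, not recommendations.
    var transferOptions = new StorageTransferOptions
    {
        MaximumConcurrency = 4,                  // parallel block uploads
        MaximumTransferLength = 4 * 1024 * 1024  // bytes per block
    };

    await using (var fileStream = System.IO.File.OpenRead("<path>"))
    {
        await blobClient.UploadAsync(
            fileStream,
            transferOptions: transferOptions,
            cancellationToken: CancellationToken.None);
    }

Note this only affects uploads that are large enough to be chunked at all; below the single-upload threshold the request still goes out as one PUT, as seen in the Fiddler traces in this thread.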

Regarding this issue, can you please clarify whether or not your data has been successfully uploaded to storage? I'm not seeing this mentioned and it would be good to know if data isn't getting through or if we're holding onto something we shouldn't after completion.

@jaschrep-msft - Yes, the data was successfully uploaded. I have found the file in storage with the same ETag value reported in the response above. I have also confirmed that the Content-MD5 reported in the response above matches the MD5 hash of my local file.

To recap. A request is made to upload the file (from local system drive) using the following code.

var blobClient = new Azure.Storage.Blobs.BlobClient(new Uri("https://<storageAcct>.blob.core.windows.net/<container>/<blobName>?<sas>"));

await using (var fileStream = System.IO.File.OpenRead("<path>"))
{
    await blobClient.UploadAsync(fileStream, true, CancellationToken.None);
}

In Fiddler, I see the following session (request only) with _storageAcct_.blob.core.windows.net

PUT https://<storageAcct>.blob.core.windows.net/<container>/<blobName>?<sas> HTTP/1.1
Host: <storageAcct>.blob.core.windows.net
x-ms-blob-type: BlockBlob
x-ms-version: 2019-02-02
x-ms-client-request-id: e81bc21c-5331-4684-8907-ffd6fa1f2208
x-ms-return-client-request-id: true
User-Agent: azsdk-net-Storage.Blobs/12.1.0+ee51b9a6328321b8bc491824116ec9038f2fed5b (.NET Core 3.1.1; Microsoft Windows 10.0.17134)
Content-Length: 38771467

100 seconds later (give or take a tenth) the application logs the following error.

System.Threading.Tasks.TaskCanceledException: The operation was canceled.
 ---> System.IO.IOException: Unable to read data from the transport connection: The I/O operation has been aborted because of either a thread exit or an application request..
 ---> System.Net.Sockets.SocketException (995): The I/O operation has been aborted because of either a thread exit or an application request.

The application moves to upload more files. 7 minutes and 25 seconds later (my application is still executing as it continues to upload files), Fiddler logs the following __201 Created response__ in the _storageAcct_.blob.core.windows.net session.

HTTP/1.1 201 Created
Content-Length: 0
Content-MD5: WHHcDGfrHi9e8zJpVQ8sCw==
Last-Modified: Wed, 15 Jan 2020 02:32:25 GMT
ETag: "0x8D79963285CD733"
Server: Windows-Azure-Blob/1.0 Microsoft-HTTPAPI/2.0
x-ms-request-id: c3479df1-501e-0131-0b4a-cb8caa000000
x-ms-client-request-id: e81bc21c-5331-4684-8907-ffd6fa1f2208
x-ms-version: 2019-02-02
x-ms-content-crc64: xsaeQ/qoKB4=
x-ms-request-server-encrypted: true
Date: Wed, 15 Jan 2020 02:32:24 GMT

Why is the Task being cancelled after 100 seconds when clearly the upload is still being carried out even after the System.Threading.Tasks.TaskCanceledException is raised?

The fact that the upload continues and succeeds is great. That is exactly what I want.

The fact that a System.Threading.Tasks.TaskCanceledException is raised is bad, as the application (and the end user) thinks the upload has failed.

I've slightly modified your sample to the following loop, and have yet to hit the exception.

var containerClient = new Azure.Storage.Blobs.BlobContainerClient(new Uri("https://<storageAcct>.blob.core.windows.net/<container>?<containersas>"));
int i = 0;
while (true)
{
    var blobClient = containerClient.GetBlobClient($"MyBlob-{i++}");
    await using (var fileStream = System.IO.File.OpenRead("<my 37MB file>"))
    {
        await blobClient.UploadAsync(fileStream, true, CancellationToken.None);
    }
}

I've done 50 attempts so far in this loop and haven't encountered the issue.

@jaschrep-msft - You indicated above that you ran this test on a 37 MB file, but I have a feeling your connection is more than capable of uploading a file that size in well under 100 seconds.

As you can see, with my connection it takes over 7 minutes to upload. This may be due to the fact that my application has moved on to uploading other files, but it is probably more down to the fact that the upload performance over my connection (WISP) is very slow.

Have you tried a larger file? One that is still less than 256 MB, to avoid chunking. I am curious whether you get the same System.Threading.Tasks.TaskCanceledException after 100 seconds.

@tg-msft can you shed light on this issue? It looks like Azure.Storage just passes a CancellationToken down to Azure.Core and lets it handle timeouts and the like.

100 seconds is the HttpClient Timeout default. You need to tweak your Transport if you want a longer timeout. You can do that via BlobClientOptions when creating your client:

new BlobClientOptions
{
    Transport = new HttpClientTransport(
        new HttpClient { Timeout = TimeSpan.FromSeconds(102) })
};

The best practice would be to create a static HttpClient instance with a longer Timeout somewhere that you share across all your Azure clients.
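That shared-instance pattern might look like the following sketch. The holder class name, the endpoint placeholders, and the 10-minute timeout are all illustrative assumptions; the point is one long-lived HttpClient reused across every Azure client in the process:

    // Sketch: a single static HttpClient so sockets are pooled and the
    // timeout is configured in one place. Names and values are illustrative.
    public static class AzureTransport
    {
        private static readonly HttpClient SharedHttpClient = new HttpClient
        {
            Timeout = TimeSpan.FromMinutes(10) // longer than the 100 s default
        };

        public static BlobClientOptions Options => new BlobClientOptions
        {
            Transport = new HttpClientTransport(SharedHttpClient)
        };
    }

    // Usage:
    var blobClient = new Azure.Storage.Blobs.BlobClient(
        new Uri("https://<storageAcct>.blob.core.windows.net/<container>/<blobName>?<sas>"),
        AzureTransport.Options);

Reusing one HttpClient also avoids the socket-exhaustion problems that come from constructing a new HttpClient per request.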

@tg-msft - Thank you. Once I observed that System.Threading.Tasks.TaskCanceledExceptions were being raised right at 100 seconds, I figured that this was down to the underlying HttpClient used to interact with the Blob Service REST API. Due to time constraints, I hadn't been able to dig further into what options are available to configure the blob client. Thanks for pointing me in the right direction.

@jaschrep-msft @seanmcc-msft - Give me a chance to try using BlobClientOptions to pass an extended timeout value, and I'll let you know how it goes.

I can verify that using BlobClientOptions to pass a singleton HttpClient with an extended timeout period resolves my issue.

Thanks to everyone involved in helping resolve this issue.

I was also stuck on this issue. The fix is very simple: just set your stream position to 0.

fileStream.Position = 0;

All set; it will work.

100 seconds is the HttpClient Timeout default. You need to tweak your Transport if you want a longer timeout. You can do that via BlobClientOptions when creating your client:

new BlobClientOptions
{
    Transport = new HttpClientTransport(
        new HttpClient { Timeout = TimeSpan.FromSeconds(102) })
};

The best practice would be to create a static HttpClient instance with a longer Timeout somewhere that you share across all your Azure clients.

Unfortunately, 100 seconds also happens to be the default RetryOptions.NetworkTimeout value, which meant that I was still timing out with the above even after setting Timeout.InfiniteTimeSpan on the HttpClient. This did the trick, though:

new DataLakeClientOptions // or whatever options class suitable for your client
{
    Transport = new HttpClientTransport( new HttpClient { Timeout = Timeout.InfiniteTimeSpan } ),
    Retry = { NetworkTimeout = Timeout.InfiniteTimeSpan } // <== this was the key
}

100 seconds is the HttpClient Timeout default. You need to tweak your Transport if you want a longer timeout. You can do that via BlobClientOptions when creating your client:

new BlobClientOptions
{
    Transport = new HttpClientTransport(
        new HttpClient { Timeout = TimeSpan.FromSeconds(102) })
};

The best practice would be to create a static HttpClient instance with a longer Timeout somewhere that you share across all your Azure clients.

Is there a way to achieve the same when using the HttpWebRequest-based transport which is the default on .NET Framework since Azure.Core 1.5.0?
