Mvc: FileStreamResult very slow

Created on 29 Mar 2017 · 61 comments · Source: aspnet/Mvc

On a local machine, create a brand new ASP.NET Core website.
In any controller action do:

        new FileExtensionContentTypeProvider().TryGetContentType("test.m4v", out string contentType);
        Response.Headers["Content-Disposition"] = "attachment; filename=\"movie.m4v\"";
        return new FileStreamResult(System.IO.File.OpenRead(@"C:\somemovie.m4v"), contentType);

This will pop up a download save dialog on request. The local SSD can do 1800MB/s. The ASP.NET Core download tops out at 17MB/s.

Labels: 3 - Done, investigate

Most helpful comment

On Windows 10 x64, I downloaded video content of size 987MB using the code here.

I tried ASP.NET Core 1.1 and the current dev branch (there have been recent changes as part of this PR which I thought may affect the results; like use of StreamCopyOperation)

Note: When Content-Length is not set, the Transfer-Encoding header is set to "chunked"
Here's what I found:

[Image: filestreamdownload results]

Conclusions:

  • Setting Content-Length helps when using IIS
  • Increasing the buffer size helps when using IIS and Transfer-Encoding is chunked
  • Kestrel is faster

cc @Eilon

All 61 comments

On a local box you are doing both send and receive, so 17MB/s is a network transfer rate of 136Mbit/s, plus both the disk read and the disk write.

Are you running in release mode (-c Release)? Are you going direct to Kestrel or via IIS/IIS Express/Nginx? Are you running from the command line or Visual Studio?

Is the file being virus scanned (https://github.com/aspnet/Mvc/issues/6042#issuecomment-290718585), both on load and save?

Are you downloading directly, or via a browser where it's doing its own secondary scanning, etc.?

Tried direct through Kestrel, and via IIS and IIS Express; all give the same results.

Disabled all antivirus

Tried debug and release mode

Tried command line and VS

Tried on a Windows 10 machine (64GB RAM, i7 6950X, SSD) and a watercooled Windows Server 2016 beast.

Tried via browser as that's how the users will download from a website. Firefox.

Tested the SSD: manually copy/pasting the 1GB file takes less than a second.

If I read it into memory first:

            var memory = new MemoryStream();
            result.Response.ContentStream.CopyTo(memory);
            memory.Seek(0, SeekOrigin.Begin);

            // Return the file stream
            return new FileStreamResult(memory, result.Response.ContentType);

Then I get 100MB/s if the file is < 900MB. Above that it goes back to 17MB/s or worse (probably due to having over 1GB of stuff in RAM).

Still, it should download at well over that. I can download files from the other side of the world on a loaded server at 210MB/s, so locally I should be seeing practically the SSD speeds, or at worst 200MB/s, not 17.

Does setting up the FileStream to be read async (https://github.com/aspnet/FileSystem/blob/dev/src/Microsoft.Extensions.FileProviders.Physical/PhysicalFileInfo.cs#L48-L57) make a difference?

I'm also using the very latest VS and dotnet 1.1.1

Setting it up as:

new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite, 1, FileOptions.Asynchronous | FileOptions.SequentialScan);

Slows it down to 7MB/s

Increasing the buffer to 10240 increases it to 39MB/s.
Increasing the buffer to 102400 increases it to 92MB/s.
Increasing the buffer to 409600 causes the same stuttering effect as loading over 1GB into RAM: it pauses, goes, pauses, goes, and averages 40MB/s.

So we are getting somewhere. It is how it's reading the file from disk: reading the entire file from that same stream using CopyTo is instant (as fast as reading it from the SSD), but passing that same stream into FileStreamResult gives much worse performance.
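
For reference, a minimal sketch of the construction being varied in these tests (the path is illustrative; only the buffer-size argument changes between runs):

    // using System.IO;
    // 64KB asynchronous, sequential-scan read; swap 65536 for 10240 / 102400 / 409600
    // to reproduce the buffer-size comparison above.
    var stream = new FileStream(@"C:\somemovie.m4v", FileMode.Open, FileAccess.Read,
        FileShare.ReadWrite, 65536, FileOptions.Asynchronous | FileOptions.SequentialScan);
    return new FileStreamResult(stream, contentType);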

Just to get the max throughput metrics of your loopback, could you run NTttcp?

You want to extract the exe, then in two command windows enter the NTttcp-v5.33\amd64fre directory and run, for the server:

ntttcp.exe -s -m 1,*,127.0.0.1 -l 128k -a 1 -t 15

Then for client

ntttcp.exe -r -m 1,*,127.0.0.1 -rb 2M -a 1 -t 15

And you should get an output similar to

   Bytes(MEG)    realtime(s) Avg Frame Size Throughput(MB/s)
================ =========== ============== ================
    11926.000000      15.000       1409.344          795.067

On the Windows 10 machine:

    Copyright Version 5.33
    Network activity progressing...

    Thread  Time(s)  Throughput(KB/s)  Avg B / Compl
    ======  =======  ================  =============
    0       15.001   234465.436        17856.987

    Totals:

    Bytes(MEG)   realtime(s)  Avg Frame Size  Throughput(MB/s)
    ===========  ===========  ==============  ================
    3434.781250  15.001       1394.081        228.970

    Throughput(Buffers/s)  Cycles/Byte  Buffers
    =====================  ===========  =========
    3663.522               47.876       54956.500

    DPCs(count/s)  Pkts(num/DPC)  Intr(count/s)  Pkts(num/intr)
    =============  =============  =============  ==============
    1965.936       87.604         75595.294      2.278

    Packets Sent  Packets Received  Retransmits  Errors  Avg. CPU %
    ============  ================  ===========  ======  ==========
    2583424       2583515           54           23      19.171

On the Server 2016 machine:

    Copyright Version 5.33
    Network activity progressing...

    Thread  Time(s)  Throughput(KB/s)  Avg B / Compl
    ======  =======  ================  =============
    0       15.002   748273.732        26348.401

    Totals:

    Bytes(MEG)    realtime(s)  Avg Frame Size  Throughput(MB/s)
    ============  ===========  ==============  ================
    10962.502472  15.001       1379.706        730.785

    Throughput(Buffers/s)  Cycles/Byte  Buffers
    =====================  ===========  ===========
    11692.556              3.758        175400.040

    DPCs(count/s)  Pkts(num/DPC)  Intr(count/s)  Pkts(num/intr)
    =============  =============  =============  ==============
    21.199         26199.673      14214.452      39.073

    Packets Sent  Packets Received  Retransmits  Errors  Avg. CPU %
    ============  ================  ===========  ======  ==========
    8331504       8331496           4            0       62.600

Both max out downloading at 92MB/s with the 102400 buffer on the stream, yet the server loopback is capable of 730MB/s.

Running it through the web (over a fiber line, though) the speed is insanely slow: 1MB/s. So it gets worse when going over the internet too.

What if you go a bit more wild?

// using System.IO.MemoryMappedFiles

new FileExtensionContentTypeProvider().TryGetContentType("test.m4v", out string contentType);
Response.Headers["Content-Disposition"] = "attachment; filename=\"movie.m4v\"";

var mmf = MemoryMappedFile.CreateFromFile(@"C:\somemovie.m4v");

Response.OnCompleted(
    (state) => {
        ((MemoryMappedFile)state).Dispose();
        return Task.CompletedTask;
    }, mmf);

return new FileStreamResult(mmf.CreateViewStream(), contentType);

Runs at around 70MB/s for 1-2 seconds, then pauses for 3-4 seconds, then does it again, over and over, similar to the large buffer issue. So it averages back out to the same speed.

You have server GC on?

Out of the box web template from VS. How do I enable/disable it?

Just for curiosity, why is the buffer size 4KB in FileStreamResultExecutor and 64KB in StaticFileContext?

Ok so I found a culprit. Kaspersky created a network adapter it was routing everything through. Removed that and I see improvements.

Original code: 38MB/s
MemoryMapped: 127MB/s
Async Filestream with 102400 buffer: 113MB/s

However, on the Windows server machines with faster loopbacks and running Server 2016, I get far less improvement. They didn't have Kaspersky on, and I disabled Defender, but that changed nothing. The servers get worse:

Original code: 14MB/s
MemoryMapped: 13MB/s
Async Filestream with 102400 buffer: 34MB/s

I've tested this on 2 different Windows Server 2016 machines with identical results; the faster Windows 10 results (when using memory mapped or the async file stream) I have only tested on 1 Windows 10 PC. I'll test on another tomorrow.

So even on my Windows 10 dev machine, 127MB/s is half the speed the local loop is capable of, and on the servers it's 21x slower than the loop is capable of.

Also, the other bug with the memory-mapped approach is that Response.OnCompleted never fires, at all. So after running it once, the file is locked by the previous memory-mapped file, and obviously we keep the memory usage too.
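
One possible workaround for the disposal problem (a sketch only, using the same hypothetical file and contentType as above) is to tie the MemoryMappedFile to the request lifetime with Response.RegisterForDispose instead of Response.OnCompleted, since registered disposables are cleaned up when the request ends:

    // using System.IO.MemoryMappedFiles;
    var mmf = MemoryMappedFile.CreateFromFile(@"C:\somemovie.m4v");

    // Dispose the mapping when the request ends, rather than relying on OnCompleted.
    Response.RegisterForDispose(mmf);

    return new FileStreamResult(mmf.CreateViewStream(), contentType);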

You can try using the TwoGBFileStream to test without File IO and see what that does.

Also OverlappedFileStreamResult from AspNetCore.Ben.Mvc to see what overlapping reads and writes does with a larger buffer size (usage in samples)

Though I didn't see much difference between them, with Chrome being the main bottleneck (at 100MB/s) on loopback:

[Image: Task Manager screenshot]

But it may help isolate the issue...

Using wrk from WSL (Bash on Ubuntu on Windows 10)

ben@BEN-LAPTOP:~$ wrk -c 1 -t 1 http://localhost:5000/Home/OverlappedFileStreamResult  --latency --timeout 120
Running 10s test @ http://localhost:5000/Home/OverlappedFileStreamResult
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    12.81s     0.00us  12.81s   100.00%
    Req/Sec     0.00      0.00     0.00    100.00%
  Latency Distribution
     50%   12.81s
     75%   12.81s
     90%   12.81s
     99%   12.81s
  1 requests in 12.81s, 2.00GB read
Requests/sec:      0.08
Transfer/sec:    159.90MB
ben@BEN-LAPTOP:~$ wrk -c 1 -t 1 http://localhost:5000/Home/FileStreamResult  --latency --timeout 120
Running 10s test @ http://localhost:5000/Home/FileStreamResult
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    12.89s     0.00us  12.89s   100.00%
    Req/Sec     0.00      0.00     0.00    100.00%
  Latency Distribution
     50%   12.89s
     75%   12.89s
     90%   12.89s
     99%   12.89s
  1 requests in 12.89s, 2.00GB read
Requests/sec:      0.08
Transfer/sec:    159.22MB

Both types come out about the same for me. 159MB/s being about 1.27 GBit/s on the network/loopback

Short test on Windows 10 machine so far...
Firefox using TwoGBFileStream gets 113MB/s
Chrome however hits the local loop limit (almost) of 209MB/s using 8% CPU. So I think it's safe to say that removed the bottleneck there and it's in the reading of the file.

wrk for bash doesn't seem to be there any more. Got developer mode on, installed bash, can run it, but typing wrk fails as it's not a command; it says:

luke@LUKE-M:/mnt/c/Windows/System32$ wrk
No command 'wrk' found, did you mean:
Command 'wrc' from package 'wine1.6' (universe)
Command 'ark' from package 'ark' (universe)
Command 'irk' from package 'irker' (universe)
Command 'wmk' from package 'wml' (universe)
wrk: command not found

I spun up an Ubuntu docker container instead, but that doesn't have wrk either.

I'll do the tests with OverlappedFileStream, run both on the Windows server machines (which had much worse performance originally), and see what I get.

So on the servers totally different result:

Original
firefox: 17MB/s
chrome: 18MB/s

With 102400 buffer
firefox: 36MB/s
chrome: 48MB/s

With TwoGBFileStream
firefox: 13MB/s
chrome: 14MB/s

With overlapped:
firefox: 33MB/s
chrome: 36MB/s
(identical to original change of just adding a 102400 async buffer)

Now run all this again without it going through IIS (so kestrel direct from command line)

Original
firefox: 17MB/s (no change)
chrome: 18MB/s (no change)

With 102400 buffer
firefox: 36MB/s (no change)
chrome: 56MB/s (slight improvement)

With TwoGBFileStream
firefox: 38MB/s (3x improvement)
chrome: 48MB/s (4x improvement)

With overlapped:
firefox: 38MB/s (slight improvement)
chrome: 48MB/s (slight improvement)

So it seems on Server 2016 we have totally different bottlenecks; however, in none of the tests so far can Server 2016 (which is what this will run on) get above 40-50MB/s, even when not reading from disk.

I ran this test on a local watercooled server and a server on Amazon AWS, two totally different specs, both very powerful, and got identical results.

I did notice, though, that on the servers the bottleneck seemed to be the IIS worker and the browser both reaching 35% CPU. Go straight through Kestrel and the bottleneck is still the browser. Yet with the same setup running on Windows 10, the download goes at 210MB/s and the browser's CPU is only 8%.

What's worse is that accessing those Server 2016 websites from anything but local (so over the internet), the download speed is between 300KB/s and 1MB/s max. The server's processes show no CPU usage (3-4% max) and neither does the client, so there is another bottleneck in that path too.

wrk for bash doesn't seem to be there any more.

You have to install it per the linux steps https://github.com/wg/wrk/wiki/Installing-Wrk-on-Linux

What's worse is that accessing those Server 2016 websites from anything but local (so over the internet), the download speed is between 300KB/s and 1MB/s max.

That's more likely due to the nature of the TCP protocol and the round-trip time.

If you want to optimize for large file transfers on a single connection with round-trip latency, at the expense of more memory consumed, you will have to tune a few things.

If you are on the Windows Server 2016 Anniversary Update, you'll already have the Initial Congestion Window at 10 MSS; otherwise you may want to increase it with the following 3 PowerShell commands (Windows Server 2012+):

New-NetTransportFilter -SettingName Custom -LocalPortStart 80 -LocalPortEnd 80 -RemotePortStart 0 -RemotePortEnd 65535
New-NetTransportFilter -SettingName Custom -LocalPortStart 443 -LocalPortEnd 443 -RemotePortStart 0 -RemotePortEnd 65535
Set-NetTCPSetting -SettingName Custom -InitialCongestionWindow 10 -CongestionProvider CTCP

On Windows Server 2016 you should be able to go up to

 -InitialCongestionWindow 64

However you probably don't want to go too high.

You can also configure the amount Kestrel will put on the wire before awaiting (defaults to 64 KB) e.g.

.UseKestrel(options =>
{
    // 3 * 1MB = 3MB
    options.Limits.MaxResponseBufferSize = 3 * 1024 * 1024;
})

Then you want to check the network adapter settings and bump up anything like:
Max Transmit, Send Descriptors, Transmit Buffers, Send Buffers, etc. (varies by NIC).

You may also want to adjust other settings like your TCP receive window (assuming you've done the earlier ones), e.g.

Set-NetTCPSetting -SettingName Custom -AutoTuningLevelLocal Experimental -InitialCongestionWindow 10 -CongestionProvider CTCP 

But this is more about TCP tuning than the http server itself...

Ok I'll try all that, but we still have the underlying issue that the speed is slow when actually reading a file?

As @Yves57 pointed out earlier, the buffer size is likely too small at 4KB per read; Kestrel will only put 3 uncompleted writes on the wire, so that may be 12KB on the wire max using FileStreamResult. Though that is what I am trying to determine.

Also it uses Stream.CopyToAsync, which I think has been improved in .NET Core App 2.0, but a larger buffer size would help. There's also StreamCopyOperation, which StaticFiles uses and which this should probably use but doesn't.
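
As an app-side interim measure, a rough sketch (hypothetical path and content type; the 64KB buffer mirrors the size discussed above) of skipping FileStreamResult and copying to the response body with an explicit buffer size:

    // using System.IO; using System.Threading.Tasks;
    // Controller action that streams the file without FileStreamResult.
    public async Task Download()
    {
        Response.ContentType = "application/octet-stream";

        using (var stream = new FileStream(@"C:\somemovie.m4v", FileMode.Open, FileAccess.Read,
            FileShare.ReadWrite, 65536, FileOptions.Asynchronous | FileOptions.SequentialScan))
        {
            // Setting Content-Length avoids chunked transfer encoding.
            Response.ContentLength = stream.Length;
            await stream.CopyToAsync(Response.Body, 65536, HttpContext.RequestAborted);
        }
    }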

Re: garbage collection; if you are using a project.json project, ensure it has

  "runtimeOptions": {
    "configProperties": {
      "System.GC.Server": true
    }
  },

And if you are using the newer csproj then the Web project type should already cover it

<Project Sdk="Microsoft.NET.Sdk.Web">
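
As a quick sanity check, GCSettings.IsServerGC reports at runtime whether server GC actually took effect; for example, something like this at startup (a sketch only):

    // using System.Runtime;
    // Logs whether the server (rather than workstation) GC is in use.
    Console.WriteLine($"Server GC enabled: {GCSettings.IsServerGC}");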

Also you can try the new FileResult, which may use the SendFile API (for example behind IIS), and see how that goes.
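
If that means returning the file by path (e.g. PhysicalFileResult), a minimal sketch (path and download name are illustrative) would be:

    // Returning the file by physical path lets the server use its SendFile path where supported.
    return new PhysicalFileResult(@"C:\somemovie.m4v", contentType)
    {
        FileDownloadName = "movie.m4v"
    };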

Send file no longer works on IIS since it isn't in proc. It does work with WebListener though

@angelsix, as an aside to the question of whether FileStreamResult is slow (which hopefully can be resolved if it is):

Other options to consider for serving the files are: adding ResponseCaching if you have memory to spare; the StaticFileMiddleware; or getting IIS to serve the static files directly. The latter two are covered in @RickStrahl's blog: More on ASP.NET Core Running under IIS.
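
For the StaticFileMiddleware route, a minimal sketch (the folder and request path are illustrative) in Startup.Configure might look like:

    // using Microsoft.Extensions.FileProviders;
    // Serve files from a media folder directly, bypassing MVC for those requests.
    app.UseStaticFiles(new StaticFileOptions
    {
        FileProvider = new PhysicalFileProvider(@"C:\media"),
        RequestPath = "/media"
    });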

More generally, I was asking myself why all the XxxxResultExecutor classes are in the Internal namespace and not in a ResultExecutor namespace. All the code (like here) is developed to be compatible with dependency injection. So for example, if FileStreamResultExecutor.ExecuteAsync() were virtual, it would be possible to call "services.AddSingleton();" on startup. Same remark for PhysicalFileResult / PhysicalFileResultExecutor.
It would be nice to be able to optimize performance in some specific cases, add custom logging, add a custom cache, etc.

If FileStreamResultExecutor.ExecuteAsync() were virtual

Yeah, I was thinking that when making the test overlapped version; its methods should either be virtual, or it should register with an interface, e.g. IFileStreamResultExecutor, rather than registering against a non-overridable class. Instead you have to create a new result type and call that.

Ok so with increasing the file stream buffer to 64k using new FileStream(path, FileMode.Open, FileAccess.Read, FileShare.ReadWrite, 65536, FileOptions.Asynchronous | FileOptions.SequentialScan); I get an acceptable speed. Around 120-150MB/s stable on Windows 10

I'd like to see more, but I think that will do for now.

Changing the max response buffer in Kestrel did nothing. None of the other suggestions improved on this speed either, so this seems to be the fastest speed possible right now.

Doing the TwoGBFileStream direct from RAM reached the local link limit on Windows 10, so I think the bottleneck in the current situation is still the reading of the file stream; but the suggestions so far have not gained us more than 150MB/s, and I know the network can at least reach the 230MB/s we see with the TwoGBFileStream.

I'm moving on to trying to fix the Windows Server 2016 issues now. Nothing I've done so far can get the files to transfer faster than 58MB/s locally, and I cannot even get files to transfer faster than 1MB/s over the internet, even on a fresh Amazon AWS 2016 server and my fiber internet, so there are definitely issues there.

The changes for TCP/Kestrel response buffers would be for network rather than loopback

Ah ok I'll add that back and do some internet speed testing too

@davidfowl @rynowak Does it make sense to create an issue / PR about the possibility of creating derived classes of the XxxxResultExecutor classes (see comments above)?
I think that there is no compatibility problem because they are currently internal classes.

@Yves57 why would you want those classes to be public? Why not just write a custom IActionResult?

If there isn't an alterable api what's the point in looking it up from DI?

Might as well just call a static method... (I'd argue looking it up from DI is good; but there is no point if it can't ever be changed)

@davidfowl I agree with @benaadams: why add them to DI if it is "forbidden" to customize them?

DI has nothing to do with end users being able to customize or override things. If the types were truly internal (with the keyword) we would do the same thing. The extensibility point is IActionResult. Those executor classes IMO might as well be static helpers (as @benaadams alluded to).

They are also pubternal, so you can override them if you want. You just can't build 3rd party components assuming that those APIs won't break. The way I see it, we leave these APIs here so that applications can access them, but libraries that take a dependency on them potentially need to update every time we make new versions of things.

Moving them out of the .Internal namespace requires us to make sure they are extensible and supported across releases.

They are also pubternal so you can override them if you want.

You can't override them. The types are registered against a concrete class, so an override needs to inherit from it. However, the methods are not virtual, so they can't be overridden. Hiding them with new instead of override won't work, as they are called through the parent type.

I don't mind them being in an .Internal namespace; I just want some way to override the registration with an alternative class.

So if you wanted to test or use different scenarios (overlapped read and send; a read-through cache from slow disk to fast disk, if you have a bulk of assets you are carrying around and aren't sure what's still in use; memory-mapped files; logging specific resource usage; etc.), it may be useful to override the XxxxResultExecutor registration rather than doing a search and replace of all uses of XxxxResult in the code.

Another example: read from large cloud cold storage to local fast disk on demand if not present, for a new web node setup, without copying all the assets at start up, but still using common return types (e.g. FileResult):

if (IsDevelopment())
{
    services.AddSingleton<PhysicalFileResultExecutor, DevStorageFileResultExecutor>();
}
else if (IsInternal)
{
    services.AddSingleton<PhysicalFileResultExecutor, InternalStorageFileResultExecutor>();
}
else
{
    services.AddSingleton<PhysicalFileResultExecutor, LocalCacheOrAzureStorageCoolFileResultExecutor>();
}

Like I said, they aren't abstractions and were never meant to be overridable. Just copy the file, it's like 30 lines of code.
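
For reference, a rough sketch of what copying it could look like: a standalone action result that streams with a 64KB buffer and sets Content-Length (the type name and buffer size are illustrative, not the framework's implementation):

    // using System.IO; using System.Threading.Tasks;
    // using Microsoft.AspNetCore.Mvc;
    public class BufferedFileStreamResult : IActionResult
    {
        private readonly Stream _stream;
        private readonly string _contentType;

        public BufferedFileStreamResult(Stream stream, string contentType)
        {
            _stream = stream;
            _contentType = contentType;
        }

        public async Task ExecuteResultAsync(ActionContext context)
        {
            var response = context.HttpContext.Response;
            response.ContentType = _contentType;

            if (_stream.CanSeek)
            {
                // Avoids chunked transfer encoding when the length is known.
                response.ContentLength = _stream.Length;
            }

            using (_stream)
            {
                await _stream.CopyToAsync(response.Body, 65536, context.HttpContext.RequestAborted);
            }
        }
    }

Used from an action as return new BufferedFileStreamResult(stream, contentType);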

Ok so although the server is still slower, I don't know why it didn't dawn on me before, but I'd never checked running the site through full IIS on the Windows 10 machine that's quicker. I've just done that, and IIS on Windows 10 is also slow. Not quite as slow as the server, but still slow. So I'd say IIS itself is making things a lot slower. I've played with all the settings I can find and there's no change.

Windows 10 / Windows Server (MB/s)
Kestrel 174 / 48
IIS 79 / 38

I've tried:

  • Toggling TCP Fast Open
  • Toggling ECN
  • Changing buffer size for kestrel response
  • Change number of worker processes in app pool

Any ideas why IIS is so much slower and how to improve it?

Also, any suggestions on improving the localhost/127.0.0.1 loopback speed on Windows 10? Out of the box all my machines top out at 180MB/s. I read that Windows 7 had NetDMA, which could easily reach 800MB/s, but it was removed in Windows 8 and later in favour of Fast TCP loopback, which isn't enabled natively and has to be supported by the software, so regular TCP loopback tests still top out at 180MB/s. I think that's the limit I am reaching on Windows 10 when running Kestrel directly.

Never mind, I just realised this would be kind of pointless anyway as the end purpose will be over real ethernet, at best 1Gbps, which is 125MB/s, so I'm above the speed I need to be on the Windows 10 machines at least. Just need to solve the issue with the 2016 server machines.

IIS is still limited to 10 simultaneous connections on Windows. It won't blow up and throw errors when it runs over as it used to, but it will not create more than 10 connections from what I understand. You can generate a ton of traffic with those 10 connections, but there's an upper limit. I'm using WebSurge and topping out at 30,000 requests with IIS, unless I get into kernel cached content where I can get to 75,000 plus on the local machine (because that somehow appears to bypass IIS, or perhaps it's just serving that much faster from the kernel cache).

I don't think that relates to my issue though. I run Kestrel through IIS on a new 2016 server, with no other connections, just one, to download a file, and it maxes out at 38MB/s on a machine that tests as capable of TCP traffic up to 125MB/s.

Also, the exact same code runs on Windows 10 in IIS at 75MB/s, so totally different results. Then remove IIS on Windows 10 and you get 125MB/s, and 58MB/s on Server 2016.

@jbagga - can you take a look?

On Windows 10 x64, I downloaded video content of size 987MB using the code here.

I tried ASP.NET Core 1.1 and the current dev branch (there have been recent changes as part of this PR which I thought may affect the results; like use of StreamCopyOperation)

Note: When Content-Length is not set, the Transfer-Encoding header is set to "chunked"
Here's what I found:

[Image: filestreamdownload results]

Conclusions:

  • Setting Content-Length helps when using IIS
  • Increasing the buffer size helps when using IIS and Transfer-Encoding is chunked
  • Kestrel is faster

cc @Eilon

/cc @pan-wang @shirhatti

A few takeaways from the experiment done by @jbagga

  • We should set the content length where possible. It makes a big difference if you're behind IIS, and is a modest improvement if you're not. It also has better behavior with browser progress bars.

  • We should use a larger buffer size than 1KB. Bumping up the buffer size showed a big improvement with chunked responses, and was a wash or a small improvement with a fixed content length. This is a relatively easy way to improve the scenario where we're worst today (19.1MB/s -> 47MB/s).

#6347 results in an expected speed of ~60 MB/s for IIS and ~134 MB/s for Kestrel for the same resource as my comment above.

I think what we have here is a great improvement, so I think it's ok to close this bug.

@angelsix - please let us know if you have any further feedback on this change.

Thanks I will test this coming week but I expect to see the same improvements everyone else has. Thanks for looking into this

Is there any chance of this fix getting into ASP.NET Core 2.0? I am writing a middleware which looks at Content-Length after a response has finished and logs statistics about some file-serving requests into a database; Content-Length not being there except for Range requests (which explicitly set it as part of handling ranges) limits the usability of this.

(In addition, it would speed up the site when run with Visual Studio Code attached as a debugger, where sending a 26 MB file with FileStreamResult in my testing is on the order of ~30 seconds for my scenario, compared to 0.6 seconds (50x faster) when still running the unoptimized Debug version with Development "environment" but without a debugger attached. This is not a primary reason and this is probably more Visual Studio Code's or the C# extension's fault, but I can't deny that it would help with this too.)

@JesperTreetop https://github.com/aspnet/Mvc/blob/dev/src/Microsoft.AspNetCore.Mvc.Core/Internal/FileResultExecutorBase.cs#L91
Content-Length is set for all requests with a valid file length but is overwritten with the length of the range for range requests. Have you tried the 2.0.0 preview? If you have, and it does not work for you, please file a new issue with more details and I'd be happy to investigate the problem.

@jbagga That's exactly the line I want in 2.0. The same file in 2.0.0 preview 2 (which I'm currently using) doesn't have it (SetContentLength is only called from SetRangeHeaders which of course is only called for range requests) and therefore also does not have this fix. As long as 2.0 is being spun from the dev branch, I'm guessing this fix will make its way in, but I don't know if 2.0 comes from the dev branch or by manually porting fixes to some sort of continuation of the 2.0.0-preview2 tag, which seems like something Microsoft would do with higher bug bars ahead of releases.

@pranavkm Wonderful - thanks!

@jbagga I have tried changing the buffer size and other things, plus changing the .NET version from 1.1 to 2.0, but I am getting a max of 20 MB/sec when downloading the file.
Does anyone have a git repo to test this issue?
Thanks in advance.

@chintan3100 I don't have a repo, but I did see massive slowdown when running with a debugger attached from VS Code. Running without a debugger attached (dotnet run in a terminal or Ctrl+F5 in Visual Studio) sped it up a lot for me, even without changing the build configuration to Release.

@JesperTreetop: I have checked on Azure in release mode with an S3 plan, and tested on an Azure VM.
But it shows only a max of 20MB/sec. I have tried a lot, but no change in download speed :(

@chintan3100 there are other factors also; how fast can you read from the disk in that setup? Is it faster than 20MB/s? Can you save to disk faster than 20MB/s on the client?

20MB/s is 160 Mbps; is the bandwidth capped either by the receiver or the server? etc.

Also, what is the RTT latency between client and server? That will determine the maximum throughput due to the TCP bandwidth-delay product.
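
For a rough sense of scale, if only about 64 KB (Kestrel's default response buffer, mentioned above) is in flight per round trip, then with an illustrative 60 ms RTT a single connection tops out around:

    throughput ≈ window / RTT = 64 KB / 0.060 s ≈ 1 MB/s

which is in the same ballpark as the ~1MB/s reported over the internet earlier in the thread.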

I tested the same API with .NET Framework 4.0 and it gets a 150 MB/sec download speed, whereas in .NET Core it is 20 MB/sec.

@chintan3100 Posting the code you're using for the action in both ASP.NET MVC 5 and ASP.NET Core 2 would probably help diagnosing this, at least how the action result is constructed.

    public ActionResult Index()
    {
        var stream = new FileStream(@"You File Path", FileMode.Open, FileAccess.Read,
            FileShare.ReadWrite, 65536, FileOptions.Asynchronous | FileOptions.SequentialScan);
        return File(stream, "application/octet-stream", "aa.zip");
    }

Please run the same code in a new .NET Core project and an ASP.NET project.
It simply reads the file and returns the stream. Please specify a large file in the path.

This is what I am using in both ASP.NET MVC 5 and .NET Core 2.0.
Result:
20 MB/sec in .NET Core
150+ MB/sec in the ASP.NET MVC 5 API
