Aspnetcore: Batch support

Created on 19 Jun 2017 · 40 comments · Source: dotnet/aspnetcore

_From @rosieks on January 8, 2015 17:19_

ASP.NET Web API 2 introduced support for batching. There was an issue for a batch implementation in Mvc, but it was closed: https://github.com/aspnet/Mvc/issues/508

Despite that, it would be great to have this useful feature.

_Copied from original issue: aspnet/Home#262_

affected-few area-middleware enhancement severity-major

Most helpful comment

@Tratcher - I think Batch Support would still be useful because one could treat the entire batch as a single transaction, committing or rolling back related changes together.

All 40 comments

_From @Praburaj on January 8, 2015 18:20_

/CC @danroth27 @yishaigalatzer

_From @damienbod on May 27, 2015 21:7_

How will this be supported?
Greetings Damien

_From @OneCyrus on May 20, 2016 11:12_

+1
Batch support is essential for modern web apps. Any progress or new information?

_From @rdefreitas on May 20, 2016 11:35_

The original issue suggested doing this through middleware. It might not be too difficult to implement.

_From @brianoconnell on August 25, 2016 17:32_

Have any further plans been made to support batching, either via middleware or otherwise?

_From @gilmishal on June 4, 2017 11:39_

This shouldn't be too difficult; I would have done it myself if it weren't for my lack of time. It seems to require a router implementation that maps /batch to multiple RouteContexts by parsing the text input (JSON or HTTP-format text), each with its own handler based on the default configuration (i.e. MVC).
It might be a little more difficult than that, because it would require support for multiple responses per HttpContext (as far as I know the HttpContext is shared across controllers, which makes this more or less impossible at the moment).

_From @josh-sachs on June 14, 2017 22:56_

+1 important.

_From @khushalpatel1981 on June 18, 2017 12:22_

+2 Very important. This has stopped us from migrating from .NET 4.5 to Core 1.1 and moving to production.

I've managed to parse a batch request (I suppose that part is fairly straightforward), but what escapes me at the moment is how to forward all the requests internally and batch their responses. Any idea how to make internal requests from middleware and record the responses?

I chatted with @Tratcher and we think it would look something like this:

  • Create a new DefaultHttpContext
  • Mock out any features you need to
  • Call next with the new context
  • Serialize back the response.

@danroth27 @Tratcher Thank you for the quick reply. Seems like a solution to me, but I can't really wrap my head around "Mock out any features you need". Could you elaborate or point me to a sample? Thanks again.

That bit is going to be pretty in-depth; you have to re-implement or copy many of the underlying features provided by the server. See https://github.com/aspnet/HttpAbstractions/tree/dev/src/Microsoft.AspNetCore.Http.Features

At a bare minimum you must implement IHttpRequestFeature and IHttpResponseFeature, or at least provide the associated data using a default implementation: https://github.com/aspnet/HttpAbstractions/blob/dev/src/Microsoft.AspNetCore.Http/Features/HttpRequestFeature.cs
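
To make that concrete, here is a minimal sketch of the approach under discussion (assuming ASP.NET Core 2.x-era APIs; BatchMiddleware is a hypothetical name, the multipart parsing and response serialization are elided, and the sub-request values are hard-coded placeholders):

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.Features;

public class BatchMiddleware
{
    private readonly RequestDelegate _next;

    public BatchMiddleware(RequestDelegate next) => _next = next;

    public async Task Invoke(HttpContext context)
    {
        if (!context.Request.Path.StartsWithSegments("/batch"))
        {
            await _next(context);
            return;
        }

        // For each sub-request parsed out of the batch body (parsing elided),
        // create a fresh context backed by the two minimum features.
        var features = new FeatureCollection();
        features.Set<IHttpRequestFeature>(new HttpRequestFeature
        {
            Method = "GET",
            Scheme = context.Request.Scheme,
            Path = "/api/values",        // would come from the parsed sub-request
            Protocol = "HTTP/1.1",
            Body = Stream.Null
        });
        features.Set<IHttpResponseFeature>(new HttpResponseFeature
        {
            Body = new MemoryStream()    // captures the sub-response body
        });

        var subContext = new DefaultHttpContext(features)
        {
            RequestServices = context.RequestServices
        };

        // Run the sub-request through the rest of the pipeline (e.g. MVC).
        await _next(subContext);

        // ... serialize subContext.Response (status, headers, body) back into
        // the multipart batch response written to context.Response ...
    }
}
```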

As I'm one of those who need a BatchHandler too, I made a proof of concept in a few hours which basically copies much of the code from the TestServer: https://github.com/aspnet/Hosting/tree/dev/src/Microsoft.AspNetCore.TestHost

Technically it is fairly simple, at least as far as I tried; most of the TestServer code actually does pretty much the same thing.
The main problem is that I had to add the old System.Net.Http.Formatting.dll back in, because it contains all the nice MultipartContent helpers, including HttpMessageContent and everything else required to properly parse a multipart batch operation and create the response.
Simply put, without anyone porting those helpers over to Core, this will never work, and as far as I know there are no plans to port them.
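
For context, this is roughly how those old helpers are used on the client side to compose a batch request (a sketch relying on System.Net.Http.Formatting; the /api/batch endpoint and the value URLs are placeholders):

```csharp
using System.Net.Http;
using System.Threading.Tasks;

public static class BatchClientSample
{
    // Sketch: build a multipart/mixed batch request from HttpMessageContent parts.
    public static async Task<HttpResponseMessage> SendBatchAsync(HttpClient client)
    {
        var batchContent = new MultipartContent("mixed")
        {
            new HttpMessageContent(new HttpRequestMessage(HttpMethod.Get, "http://localhost/api/values/1")),
            new HttpMessageContent(new HttpRequestMessage(HttpMethod.Get, "http://localhost/api/values/2"))
        };

        var batchRequest = new HttpRequestMessage(HttpMethod.Post, "http://localhost/api/batch")
        {
            Content = batchContent
        };

        // The response is itself multipart/mixed, with one application/http
        // part per batched sub-response.
        return await client.SendAsync(batchRequest);
    }
}
```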

POC:
BatchHandler.zip

It basically just calls a few GET methods on the demo values controller.

My bottom line on this POC is simple, I can't port my WebAPI 2.0 application to ASP.NET Core 2.0 without a batch handler. Personally I think that ASP.NET Core 2.0 is still lacking many nice "Quality-of-Life" features which were provided by HttpRequestMessage and HttpResponseMessage.

@Tornhoof do you need to parse incoming multipart or generate outgoing multipart? ASP.NET Core has a few different layers of API for parsing incoming multipart. See HttpRequest.Form, MultipartReader, etc.

There isn't a multipart writer, but that part is much easier to write.
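
For example, iterating over the incoming sections with MultipartReader looks roughly like this (a sketch assuming the 2.x-era header APIs; the boundary is pulled from the Content-Type header, and parsing the application/http payload of each section is still left to you):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.WebUtilities;
using Microsoft.Net.Http.Headers;

public static class BatchRequestReader
{
    // Sketch: walk the multipart sections of an incoming batch request.
    public static async Task ReadSectionsAsync(HttpRequest request)
    {
        var contentType = MediaTypeHeaderValue.Parse(request.ContentType);
        var boundary = HeaderUtilities.RemoveQuotes(contentType.Boundary).Value;

        var reader = new MultipartReader(boundary, request.Body);

        MultipartSection section;
        while ((section = await reader.ReadNextSectionAsync()) != null)
        {
            // Each section should be "application/http; msgtype=request".
            // section.Body contains the embedded request line, headers and body;
            // this is the part the old HttpMessageContent helpers used to parse.
        }
    }
}
```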

@Tratcher
I need to do both :)
Let's look at the steps for a second:

  1. Splitting the incoming multipart request: MultipartReader/MultipartSection (included in ASP.NET Core)
  2. Parsing the application/http multipart section (the individual batched HTTP request): HttpMessageContent (not included in ASP.NET Core), using https://github.com/ASP-NET-MVC/aspnetwebstack/blob/master/src/System.Net.Http.Formatting/HttpMessageContent.cs (see the sketch after this list)
  3. Sending it to the ASP.NET Core pipeline (taken from the TestServer project; included in ASP.NET Core)
  4. Producing the multipart output (not included in ASP.NET Core; using parts from the old stack for that)
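
A naive sketch of what step 2 boils down to, given only a section's raw stream (this hand-rolls the parse that HttpMessageContent used to do, and ignores edge cases like query strings and transfer encodings):

```csharp
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http.Features;

public static class HttpSectionParser
{
    // Sketch: parse an "application/http; msgtype=request" section into a request feature.
    public static async Task<HttpRequestFeature> ParseAsync(Stream section)
    {
        var reader = new StreamReader(section);

        // Request line, e.g. "POST /webapp/api/mystuff HTTP/1.1"
        var requestLine = (await reader.ReadLineAsync()).Split(' ');
        var feature = new HttpRequestFeature
        {
            Method = requestLine[0],
            Path = requestLine[1],
            Protocol = requestLine[2]
        };

        // Headers up to the blank line that separates them from the body.
        string line;
        while (!string.IsNullOrEmpty(line = await reader.ReadLineAsync()))
        {
            var separator = line.IndexOf(':');
            feature.Headers[line.Substring(0, separator)] = line.Substring(separator + 1).Trim();
        }

        // Whatever remains is the body of the batched sub-request.
        feature.Body = new MemoryStream(Encoding.UTF8.GetBytes(await reader.ReadToEndAsync()));
        return feature;
    }
}
```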

The old stack gives me methods and classes to parse the section into an HttpRequestMessage (a typed object with all the nice properties describing the request), and methods and classes to serialize an HttpResponseMessage into an HttpContent with all the proper headers etc. in place.

That's what I meant by "quality-of-life" features: no ReadAsByteArrayAsync, no ReadAsStringAsync, no nice typed headers, etc. With the old stack it was much easier to work with HttpRequestMessage/HttpResponseMessage: sending one via HttpClient, receiving and responding in a controller. Now the new HttpRequest and HttpResponse are completely separate types, and no easy method of conversion exists (yet).

I see serialization code in HttpMessageContent, but not parsing code. Where's that?

Yes, this is the parser method. Sorry, I forgot to mention that it is implemented in the extension.

@davidfowl it looks like your generic HTTP parsers would be useful here.

A smaller update from my side:

  • I replaced the old WebAPI 2 reading/writing part in the middleware with my own code. The tests still include the old API.

I updated the POC for that:
BatchHandler.zip

I'm not really sure if the complicated async offload, response stream, etc. is really necessary, as this is still copied from the TestServer project. As there is no "client" and the batch is processed sequentially for now, I guess it could be simplified quite a bit.

@Tornhoof would it be possible to put your PoC on Github for ease of browsing?

Update from my side:
The POC in https://github.com/Tornhoof/HttpBatchHandler is starting to look OK; all my tests pass (both in the handler and in my main application).
What's in there?

  1. OnXXX Events are added
  2. Tests are added

What's missing?

  1. Error cases (these are a bit undefined in the old WebAPI handler; the basic ones exist)
  2. Converting the tests so they no longer use the old WebAPI NuGets to create multipart messages

I'm asking myself where to put the handler once it is in good enough shape for public consumption.
I currently see four options:

  1. Try to contribute it to this repository, including the necessary code changes etc.
  2. Try to get it into WebApiContrib.Core
  3. Create my own NuGet package
  4. Hope that my repo is enough of a starting point for someone "official" to work on this issue.

I guess the first one is probably preferable, as it will increase the code quality, since obviously people with vastly more experience in ASP.NET Core are present.

https://github.com/aspnet-contrib would also be an option

Thanks for the suggestion of aspnet-contrib.

So I replaced the old WebAPI NuGets for multipart messages with new implementations; this should be independent of the old libraries now.

One question: why are there no ConfigureAwait(false) calls in the middlewares of ASP.NET Core?
As far as I understand it, the best practice is to use ConfigureAwait(false) in libraries, and ASP.NET Core has no custom synchronization context that would require ConfigureAwait(true). Just asking, because I usually add explicit ConfigureAwait calls to everything and I don't know what to do in the middleware.

There is a difference I saw on a test system recently between my batch handler for Core and the one from the old ASP.NET.
Assume the following URL: http://www.mysystem.com/webapp/api/mystuff
In the old handler the batched request would look like this:

Content-Type: application/http; msgtype=request

POST /webapp/api/mystuff HTTP/1.1
Host: mysystem
Content-Type: application/json; charset=utf-8

{"Id":129,"Name":"Name4752cbf0-e365-43c3-aa8d-1bbc8429dbf8"}

The host is mysystem, and the URI in the HTTP request line includes the path-and-query part of the full URI (including webapp).

In the handler for Core, Kestrel apparently does not know much about the webapp part, and routing internally only works without it. So it looks like this:

Content-Type: application/http; msgtype=request

POST /api/mystuff HTTP/1.1
Host: mysystem
Content-Type: application/json; charset=utf-8

{"Id":129,"Name":"Name4752cbf0-e365-43c3-aa8d-1bbc8429dbf8"}

Now there are two questions:

  1. Is that correct behaviour? It's different from the old one; I guess some compatibility mechanism should be in place?
  2. What about calculated URLs in the response (e.g. the Location header)? I guess the URL should be a valid external one (e.g. http://www.mysystem.com/webapp/api/mystuff).
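
For what it's worth, in ASP.NET Core the /webapp prefix surfaces as HttpRequest.PathBase rather than as part of Path, so an externally valid URL (for example for a Location header) has to be rebuilt from the individual parts; a small sketch using the UriHelper extensions:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.Extensions;

public static class ExternalUrlSample
{
    public static string GetExternalUrl(HttpRequest request)
    {
        // For http://www.mysystem.com/webapp/api/mystuff hosted under /webapp:
        //   request.PathBase == "/webapp"
        //   request.Path     == "/api/mystuff"
        // GetEncodedUrl() recombines Scheme + Host + PathBase + Path + QueryString,
        // which is what a Location header visible to external clients should use.
        return request.GetEncodedUrl();
    }
}
```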

@Tornhoof you have lots of .ConfigureAwait(false) calls in your code, but I don't think they're necessary because ASP.NET Core does not have a SynchronizationContext.

@SlimShaggy Yes, but I prefer explicit ConfigureAwait calls over implicit ones, and I annotate them with false as much as possible. The general recommendation (AFAIK) is still to use ConfigureAwait(false) in library code where possible.

Hi all,
I've implemented a sample client/server project for this. It uses a standard middleware and a few other things. The code is available at https://github.com/GiancarloLelli/aspnetbatchingmiddleware; happy to discuss and answer questions.

@GiancarloLelli, in my opinion, using HttpClient internally to make loopback requests rather defeats the purpose of batching from a performance point of view. Each connection will take time and resources to establish.

I want to add my 2 cents and state that batch support is really important for us as well. Would love to see it implemented soon.

Any progress on official support for batch APIs?

Has anyone done an updated analysis on the usefulness of batching, especially now that HTTP/2 is achieving broad adoption? Batching in HTTP/1.1 seems to get you two things: 1) Maximum TCP packet density, and 2) fewer round trips to mitigate high latency connections.

For 1, HTTP/2 has a number of request compression techniques to reduce the number of bytes sent on the wire. While not a direct replacement for batching's packet density, it should offset those savings.

Case 2 is where it gets really interesting. HTTP/1.1 suffered over high latency connections because connections were poorly utilized. In most scenarios the client sends one request at a time, waits for the response, and then sends the next request. On high latency connections this multiplies the cost of every request. HTTP pipelining addressed this using an approach similar to batching, but was not widely adopted.

HTTP/2 addresses connection utilization via multiplexing. It allows hundreds of requests to be sent before you have to stop and wait for responses, and those responses can come as soon as they're ready; they don't have to wait for in-order delivery. This avoids the HTTP/1.1 latency issues by parallelizing transmission and processing, and the out-of-order responses can give you a big performance boost over pipelining or batching.

HTTP/2 is enabled by default in all modern browsers, in IIS and HttpSys (Win10), and Kestrel (3.0).

@Tratcher - I think Batch Support would still be useful because one could treat the entire batch as a single transaction, committing or rolling back related changes together.

For us, batching is a way to process long requests which might otherwise fail with a "URL too long" error, caused by the spec's length limitation for GET requests.

A batch implementation should enforce similar restrictions as a server or risk introducing exploits. Batching should not be used to bypass server restrictions.

@Tratcher not sure I get your point. It's similar to the way OData batching is supposed to be used according to the official spec.

@Lonli-Lokli restrictions on fields like URL length are implemented to ensure that components in the stack don't spend an arbitrary amount of time and resources processing the URL. Data structures and algorithms designed for 1 KB URLs, or for 0-100 query key-value pairs, are not going to hold up if the URL is suddenly 32 KB or there are 1000 query key-value pairs. This can result in performance, reliability, and security issues in your site.
