With 1.0.0-preview bits, it is currently harder than it needs to be to use ML.NET inside an ASP.NET service or application. The first problem users hit is whether they can cache a PredictionEngine statically and reuse it for multiple requests. As described in #1789, you cannot use a PredictionEngine on multiple threads at the same time. Doing so will cause problems in your application.
Thus the recommendation is to use a pooling technique, but writing a pool from scratch is rather hard and potentially error-prone.
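To make the pooling idea concrete, here is an illustrative, hand-rolled pool over any non-thread-safe object (in practice the pooled type would be a `PredictionEngine<TSrc, TDst>`). This is a minimal sketch, not the proposed library API; names like `SimpleObjectPool` and `Use` are assumptions for illustration:

```C#
using System;
using System.Collections.Concurrent;

// Minimal sketch of an object pool for instances that are not thread-safe,
// such as PredictionEngine. Each caller borrows an instance, uses it, and
// returns it; concurrent callers never share the same instance.
public sealed class SimpleObjectPool<T> where T : class
{
    private readonly ConcurrentBag<T> _items = new ConcurrentBag<T>();
    private readonly Func<T> _factory;

    public SimpleObjectPool(Func<T> factory) => _factory = factory;

    public TResult Use<TResult>(Func<T, TResult> action)
    {
        // Borrow an existing instance, or create one if all are in use.
        if (!_items.TryTake(out T item))
            item = _factory();
        try
        {
            return action(item);
        }
        finally
        {
            _items.Add(item); // return the instance for reuse
        }
    }
}
```

With a pool like this, a request handler would call `pool.Use(engine => engine.Predict(input))` instead of touching a shared engine directly. The hard parts the proposal alludes to (sizing, disposal, reloading a changed model) are deliberately omitted here, which is exactly why a library solution is attractive.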
Also, by default the events raised by MLContext's Log are not connected to whatever logging infrastructure the ASP.NET app/service uses, so that output goes nowhere and is lost.
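Today each app has to write this glue by hand. A minimal sketch, assuming an `ILogger` is already available (the `MLContext.Log` event and `LoggingEventArgs.Message` exist in Microsoft.ML; the extension method name here is an assumption):

```C#
using Microsoft.Extensions.Logging;
using Microsoft.ML;

public static class MLContextLoggingExtensions
{
    // Forward every ML.NET log message to the app's ILogger so it shows up
    // alongside the rest of the ASP.NET logs instead of being dropped.
    public static MLContext WithLogForwarding(this MLContext mlContext, ILogger logger)
    {
        mlContext.Log += (sender, e) => logger.LogTrace(e.Message);
        return mlContext;
    }
}
```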
We propose to add a new library (Microsoft.ML.Extensions?, Microsoft.Extensions.ML?) that is aware of both Microsoft.ML and Microsoft.Extensions.DependencyInjection/Microsoft.Extensions.Logging and glues the two together. This should make it much easier to consume ML.NET models inside ASP.NET apps/services, as well as any other app model that integrates with the Microsoft.Extensions.* libraries.
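The proposal's example below uses an ASP.NET controller, but the same pool type would work in any Microsoft.Extensions-based host. A sketch, assuming the proposed `PredictionEnginePool` type and app-defined `SentimentIssue`/`SentimentPrediction` classes (the `Text` property is an assumption for illustration):

```C#
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;

// Sketch: the proposed pool injected into a generic-host background
// service rather than an ASP.NET controller.
public class SentimentWorker : BackgroundService
{
    private readonly PredictionEnginePool<SentimentIssue, SentimentPrediction> _pool;

    public SentimentWorker(PredictionEnginePool<SentimentIssue, SentimentPrediction> pool)
    {
        _pool = pool;
    }

    protected override Task ExecuteAsync(CancellationToken stoppingToken)
    {
        // The pool hands out a free engine per call, so this is safe even if
        // other hosted services or controllers predict concurrently.
        SentimentPrediction prediction =
            _pool.Predict(new SentimentIssue { Text = "This works great!" });
        return Task.CompletedTask;
    }
}
```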
Adding a new ML.NET model into an ASP.NET application could be as simple as two steps:

1. In `Startup.ConfigureServices`, register the prediction engine pool with the service container:

```C#
// This method gets called by the runtime. Use this method to add services to the container.
public void ConfigureServices(IServiceCollection services)
{
    services.AddPredictionEnginePool<SentimentIssue, SentimentPrediction>();
    // other service configuration
}
```
2. In any controller that needs to make a prediction, inject the PredictionEngine pool in the constructor, and use it where necessary:
```C#
[ApiController]
public class PredictionController : ControllerBase
{
    private readonly PredictionEnginePool<SentimentIssue, SentimentPrediction> _predictionEnginePool;

    public PredictionController(PredictionEnginePool<SentimentIssue, SentimentPrediction> predictionEnginePool)
    {
        _predictionEnginePool = predictionEnginePool;
    }

    [HttpGet()]
    public ActionResult<SentimentPrediction> GetSentiment([FromQuery] SentimentIssue input)
    {
        return _predictionEnginePool.Predict(input);
    }
}
```
Something else to consider: loading the model .zip file from sources other than a file path.

cc @glennc @CESARDELATORRE @glebuk @TomFinley
I certainly support this feature. 👍
Related to this blog post explaining the scenario; this feature (a .NET integration package built on DI) will simplify it a lot for users:
I like this. In terms of where it should be, I vote for Microsoft.ML.SOMETHING as the namespace. For the foreseeable future, ML.NET will move faster than ASP.NET, and keeping / building the integration on the ML.NET side is probably more stable.
As far as distributing the work among the pool, would it be random based on resources available or would there be a strategy? Would that functionality instead be better served by something like a load balancer rather than logic built into the pool?
@markus Agreed, we should host this in the place where it will be able to evolve more nimbly.
@luisquintanilla - Load balancing makes sense when you are distributing load across multiple machines/servers, or across distributed nodes in an orchestrator such as when using containers. But in this case, it is a single pool of objects on a single machine, with the same memory and the same resources. I don't think we need any load balancing here; it would add complexity without a real need.