Quarkus: MP-Context-Propagation support

Created on 10 Jan 2019 · 75 Comments · Source: quarkusio/quarkus

Right now, RESTEasy relies on context propagation via ReactiveContexts, which supports only RESTEasy. MP-Context-Propagation should supersede this, but it has not been released yet.

There's an implementation in SmallRye-Context-Propagation which already supports CDI context propagation (via Weld).

We should look into supporting ArC and transaction (and, when it lands, security) context propagation, especially since we are going to push reactive programming: people will notice that they can't use CDI, transactions, etc. inside their reactive pipelines.
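To make the pain concrete, here is a minimal sketch (hypothetical resource and bean names) of what happens today when a request-scoped bean is touched from a reactive continuation:

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.CompletionStage;
    import javax.inject.Inject;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;

    @Path("/orders")
    public class OrderResource {

        @Inject
        Cart cart; // assume Cart is a @RequestScoped bean

        @GET
        public CompletionStage<String> list() {
            // supplyAsync runs on a ForkJoinPool thread with no active
            // request context, so touching the bean there fails with
            // ContextNotActiveException unless the context is propagated.
            return CompletableFuture.supplyAsync(() -> cart.summary());
        }
    }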

area/reactive kind/enhancement

Most helpful comment

MERGED!

All 75 comments

This could be required for the first release, but given that we only have propagation support for RESTEasy, that MP-Concurrency is not released yet (we could fork it), and that it would require a merge in RESTEasy anyway, it might be really hard to get this done in time for the first release. We can just hope nobody uses the reactive API in the first release ;)

Ping @cescoffier for his opinion as Grand Reactive Guru.

I would say: not required for the first release, but soon after that (March-April). Do you think it would be a possible target?

About forking MP-Concurrency, I'm mostly doing that for reactive messaging.

Yeah, that's my thought as well, I agree.

But that means we start looking at it ;)

About RESTEasy, we can look at the converters at the same time (thinking about using them in reactive messaging, BTW).

@manovotn I've pushed an initial impl at https://github.com/FroMage/quarkus/tree/smallrye-context-propagation but we will need a special module for Arc context propagation, and possibly a new one for Jta too, because ATM it depends on Weld.

There are two things not working ATM:

  • the CDI extension (Arc does not support extensions I think)
  • the context propagation

Do you think you could take a look please?

The JTA bits are test-only; maybe we can move the tests to the /tests/ module and cheat that way? Otherwise it uses simple CDI API calls like CDI.current(), which are supported by Arc as well.
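For reference, the kind of plain, portable CDI call in question (TransactionManager is just an example bean type here):

    import javax.enterprise.inject.spi.CDI;
    import javax.transaction.TransactionManager;

    public class JtaLookup {

        // Programmatic lookup through the portable CDI API; behaves the
        // same on Weld and on Arc.
        static TransactionManager transactionManager() {
            return CDI.current().select(TransactionManager.class).get();
        }
    }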

Other than that, I will take over the Arc context propagation bit, starting from your branch.
Should find some time for it on Tue/Wed! :-)

Oh, I did not know that Arc supported CDI.current(); then you're right, we could move the Weld deps to test scope, no?

It has test scope even now - https://github.com/smallrye/smallrye-context-propagation/blob/master/jta/pom.xml#L41
What I meant was moving those tests to the smallrye-context-propagation/tests module, since then you are left with only the CDI dependency and should be able to reuse the module. At least I think so.

@FroMage I implemented the Arc context propagation here (two commits) - https://github.com/manovotn/quarkus/tree/contextPropagationArc
It's on top of your branch and passes the ContextUnitTest you added for it. I've also updated the test with a small correction and another test case for a cleared CDI context.
What else do we need before we make a PR out of it?

OK, let me pull them into my branch, rebase, and see where we stand.

Well, how do we do without the CDI extension? Is it key? Can we rewrite it for ArC?

I am not 100% sure but I think it should be possible. I know you can add beans via extensions, but we also need to read injection points, which I am not sure is possible (@mkouba do you think this little extension can be Arc-ed?).
I wouldn't consider it a blocker for the scope of integration in this issue though.

Hehe, little extension? Well, you can inspect all injection points and register custom beans. So I believe it should be possible.

@manovotn so it looks like I have to do at least three things:

  • merge 04d3d8acf33c74444a1408badef17e22b4118a49 to extend the request context life for ArC in case of async requests (not sure why it works ATM)
  • change our transactional support, this one is much tougher because it relies on CDI interceptors, which don't really have a notion of async
  • integrate the quarkus thread pool into the smallrye cp managed executor

And it turns out that https://github.com/stuartwdouglas/quarkus-old/commit/04d3d8acf33c74444a1408badef17e22b4118a49 does not work because exchange is null, in DeploymentManagerImpl:

            }).call(null, null);

Huh, it's called twice with a null exchange, but those are not real requests. I can just bail out of those calls.

@manovotn or @mkouba how can I pass a servlet context from UndertowDeploymentTemplate to TransactionalInterceptorBase? It looks like I need to stuff it in InvocationContext but I'm really not sure how.

@manovotn so it looks like I have to do at least three things:

* merge [04d3d8a](https://github.com/quarkusio/quarkus/commit/04d3d8acf33c74444a1408badef17e22b4118a49) to extend the request context life for ArC in case of async requests (not sure why it works ATM)

* change our transactional support, this one is much tougher because it relies on CDI interceptors, which don't really have a notion of async

So what we currently have in SR for transactions isn't good enough? I thought Quarkus uses Narayana as well and we had some example with async there (the fullstack test I think?).

* integrate the quarkus thread pool into the smallrye cp managed executor

So we want to leverage one big pool from Quarkus and allow building all ManagedExecutor instances from that? If so, it's going to be quite difficult to manage several ME instances and their limitations on top of this one pool. I thought we were more in favor of the ThreadContext approach here.

@manovotn or @mkouba how can I pass a servlet context from UndertowDeploymentTemplate to TransactionalInterceptorBase? It looks like I need to stuff it in InvocationContext but I'm really not sure how.

If you mean javax.interceptor.InvocationContext, then you can use the data there (getContextData()) to pass key/value pairs through the interceptor invocation chain.
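For illustration, a minimal interceptor sketch of that pattern (the "servlet.context" key and the lookup helper are made up for the example):

    import java.util.Map;
    import javax.interceptor.AroundInvoke;
    import javax.interceptor.Interceptor;
    import javax.interceptor.InvocationContext;

    @Interceptor
    // interceptor binding annotation omitted for brevity
    public class ContextDataInterceptor {

        @AroundInvoke
        Object around(InvocationContext ctx) throws Exception {
            Map<String, Object> data = ctx.getContextData();
            // stash a value; later interceptors in the same chain can read
            // it back with ctx.getContextData().get("servlet.context")
            data.put("servlet.context", lookupServletContext());
            return ctx.proceed();
        }

        private Object lookupServletContext() {
            return null; // placeholder for however the value is obtained
        }
    }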

So what we currently have in SR for transactions isn't good enough? I thought Quarkus uses Narayana as well and we had some example with async there (the fullstack test I think?).

Well, propagation is one thing, but we also have to make sure the server part that commits/rolls back transactions runs at the end of the async request, and not earlier. So that's outside the scope of the thread provider, but still has to be done.

So we want to leverage one big pool from Quarkus and allow building all ManagedExecutor instances from that? If so, it's going to be quite difficult to manage several ME instances and their limitations on top of this one pool. I thought we were more in favor of the ThreadContext approach here.

Why would it be harder? They just use another executor service provided by quarkus. Makes it safer to use.

If you mean javax.interceptor.InvocationContext, then you can use the data there (getContextData()) to pass key/value pairs through the interceptor invocation chain.

Yeah, but I want to pass the value from the UndertowDeploymentTemplate and I've no access to the interceptor context there. Only the request context.

Ah, I succeeded with ServletRequestContext.current() but it means quarkus-narayana-jta has to depend on quarkus-undertow.

it means quarkus-narayana-jta has to depend on quarkus-undertow

-1 :-(

Well, either that, or we make it an optional dependency, or we make an abstraction that can tell us if there's a request that turned async and how to get notified when it's done, OR WORSE we add support for async CDI interceptors ;)

Unless you have another idea?

@manovotn so about the managed executors, what I want to be able to do is to make sure that users don't go around creating executor services and thread pools when quarkus has a thread pool for that.

For context, for those not familiar with MP-CP: you can build ManagedExecutor instances which give you CompletionStage instances that have context propagation and run on specific (non-JDK-default) thread executors when you call any CS.*Async() method. ATM, SmallRye-CP will always create a new fixed thread pool for each ManagedExecutor, but in Quarkus I think we want to use the Quarkus thread pool as much as possible.
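A short usage example, using the package name the API eventually shipped under (org.eclipse.microprofile.context):

    import java.util.concurrent.CompletionStage;
    import org.eclipse.microprofile.context.ManagedExecutor;
    import org.eclipse.microprofile.context.ThreadContext;

    public class ManagedExecutorExample {

        public CompletionStage<String> demo() {
            // captures the current contexts; every *Async() stage created
            // from this executor runs with those contexts restored
            ManagedExecutor executor = ManagedExecutor.builder()
                    .maxAsync(4)
                    .propagated(ThreadContext.ALL_REMAINING)
                    .build();

            return executor.supplyAsync(() -> "on a managed thread")
                    .thenApplyAsync(String::toUpperCase); // still context-aware
        }
    }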

Now, I can do it two ways:

  • Extend SmallRye-CP so that if a certain (new) service provider is found, we delegate all managed executor building to it, and Quarkus would return managed executors that use the same quarkus thread pool and cannot be shut down. But then it becomes impossible to create new thread pools via this mechanism (which I think is a good thing).
  • Just declare CDI producers in Quarkus that produce a managed executor that uses the quarkus thread pool, which would be injected via the normal @Inject ManagedExecutor (especially if we don't have the CDI extension, but I wonder how it would work with the extension which ATM does the building/name-registering), but users would still be able to build managed executors using the API which would not use the Quarkus thread pool.
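To make the second option concrete, a minimal producer sketch; note the caveat in the comment, which is exactly the limitation under discussion:

    import javax.enterprise.context.ApplicationScoped;
    import javax.enterprise.inject.Produces;
    import org.eclipse.microprofile.context.ManagedExecutor;

    public class ManagedExecutorProducer {

        // Satisfies plain @Inject ManagedExecutor injection points.
        // Caveat: the builder cannot currently be pointed at the Quarkus
        // pool, so this instance creates its own pool.
        @Produces
        @ApplicationScoped
        ManagedExecutor defaultManagedExecutor() {
            return ManagedExecutor.builder().maxAsync(8).build();
        }
    }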

WDYT? And I guess I probably need to ask @vietj @dmlloyd and @stuartwdouglas if they have any input about this question.

There could be good reasons to allow users to make separated thread pools. Solution 2 seems better in that regard, though I don't know if we want to support doing it programmatically (as opposed to having e.g. named thread pools configured in the configuration the same way the built-in pool is configured). Either way though, we would want all managed executors to use our thread pool implementation class if possible.

@FroMage for the extension from SR CP, I talked to Martin and this is not currently possible. I created https://github.com/quarkusio/quarkus/issues/2243 to allow Arc to do that. Once that's in place, it should be pretty easy to achieve.

As for the executors, I understand what you want to do and why, I just don't see how you are going to control, say, 5 different ManagedExecutor instances which all realistically run on _the same_ underlying Quarkus pool, since they all allow imposing certain restrictions such as queues and maxAsync.

* Extend SmallRye-CP so that if a certain (new) service provider is found, we delegate all managed executor building to it, and Quarkus would return managed executors that use the same quarkus thread pool and cannot be shut down. But then it becomes impossible to create new thread pools via this mechanism (which I think is a good thing).

Not being able to shut down the executor is against the spec, BTW.
I also don't understand what you mean by "impossible to create new thread pools via this mechanism"? If you use the builder pattern to create a new one, what happens?
You are supposed to be able to have several different executors, each with varying settings such as maxAsync and maxQueued.

* Just declare CDI producers in Quarkus that produce a managed executor that uses the quarkus thread pool, which would be injected via the normal `@Inject ManagedExecutor` (especially if we don't have the CDI extension, but I wonder how it would work with the extension which ATM does the building/name-registering), but users would still be able to build managed executors using the API which would not use the Quarkus thread pool.

-1, this solves next to nothing and you are basically re-doing what the extension does. It would also block us from bringing in that extension (it would clash in many cases). Once we bring in the extension, it can build executors from the Quarkus thread pool just like the builder will (however that ends up working).

So it sounds like MP-CP supports wrapping the same pool with multiple different executors? I guess I missed that. :)

Would that be in addition to the ability to define separate underlying pools?

So it sounds like MP-CP supports wrapping the same pool with multiple different executors? I guess I missed that. :)

Yeah, and the IBM implementors told us that's what they're doing. Which makes a lot of sense IMO.

As for the executors, I understand what you want to do and why, I just don't see how you are going to control, say, 5 different ManagedExecutor instances which all realistically run on the same underlying Quarkus pool, since they all allow imposing certain restrictions such as queues and maxAsync.

I guess I'll find a "view" executor service somewhere on stack overflow ;) It's bound to exist. Or we make a single injectable ManagedExecutor that uses the Quarkus thread pool and does not care about min/max settings (it can't be configured itself), and then any users of the API who create ManagedExecutors will get new thread pools.

Not being able to shut down the executor is against spec BTW.

Pretty sure it's not, for pre-defined container-managed executors. IBM said they were doing exactly this and it makes a lot of sense.

So it sounds like MP-CP supports wrapping the same pool with multiple different executors? I guess I missed that. :)

Yeah, and the IBM implementors told us that's what they're doing. Which makes a lot of sense IMO.

So maybe this is a dumb question, and if so I apologize, but: what happens when you shut down an executor which is wrapping a pool that is shared? Surely it's allowed to just shut down the wrapping executor?

I guess I'll find a "view" executor service somewhere on stack overflow ;) It's bound to exist.

I think I already have some code in jboss-threads that might be reusable for this purpose. If not though, it's not really difficult to implement.

what happens when you shut down an executor which is wrapping a pool that is shared? Surely it's allowed to just shut down the wrapping executor?

I'll go ahead and claim that you can't shut down a container-managed executor. It should either silently pretend to be shut down and do nothing, or throw.

what happens when you shut down an executor which is wrapping a pool that is shared? Surely it's allowed to just shut down the wrapping executor?

I'll go ahead and claim that you can't shut down a container-managed executor. It should either silently pretend to be shut down and do nothing, or throw.

What I meant is that you need to pretend you did shut it down, basically keep a boolean around and forbid any further use once the user has called shutdown. There are some TCKs for this, I think. Obviously, you are not going to kill the whole Quarkus thread pool on a whim ;-)
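A minimal sketch of that "pretend shutdown" idea (class name hypothetical):

    import java.util.concurrent.Executor;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.RejectedExecutionException;
    import java.util.concurrent.atomic.AtomicBoolean;

    public class PretendShutdownExecutor implements Executor {

        private final ExecutorService sharedPool; // the container-managed pool
        private final AtomicBoolean shutdown = new AtomicBoolean();

        public PretendShutdownExecutor(ExecutorService sharedPool) {
            this.sharedPool = sharedPool;
        }

        @Override
        public void execute(Runnable task) {
            if (shutdown.get()) {
                throw new RejectedExecutionException("executor is shut down");
            }
            sharedPool.execute(task);
        }

        public void shutdown() {
            shutdown.set(true); // record the state, leave the shared pool alone
        }

        public boolean isShutdown() {
            return shutdown.get();
        }
    }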

As for the executors, I understand what you want to do and why, I just don't see how you are going to control, say, 5 different ManagedExecutor instances which all realistically run on the same underlying Quarkus pool, since they all allow imposing certain restrictions such as queues and maxAsync.

I guess I'll find a "view" executor service somewhere on stack overflow ;) It's bound to exist. Or we make a single injectable ManagedExecutor that uses the Quarkus thread pool and does not care about min/max settings (it can't be configured itself), and then any users of the API who create ManagedExecutors will get new thread pools.

Making just one defeats the purpose - you might as well then use ThreadContext and delegate to the Quarkus thread pool directly. Having several differently configured executors has its use cases.

The view layer is IMO the best approach, so if it ain't that difficult, all the better!

Frankly I don't think min/max make _any_ sense if the underlying thread pool is shared and container-managed. I can't think of any use-case for that.

What I can totally accept is that there may be need for several ManagedExecutor if Quarkus gains multiple thread pools. Vert.x has a worker pool and an IO pool, for example.

you might as well then use ThreadContext and delegate to the Quarkus thread pool directly

Well, if you _really_ want to call async methods (and frankly in tests I do that a lot, though I would never do it in production code) then it's much easier to use the ManagedExecutor API, because you don't need to pass an executor around to every method.

What I meant is that you need to pretend you did shut it down, basically keep a boolean around and forbid any further use once the user has called shutdown. There are some TCKs for this, I think. Obviously, you are not going to kill the whole Quarkus thread pool on a whim ;-)

Well I mean if we do support "view" executors with their own tasks, limits, etc. then it makes sense to be able to shut down the "view", including things like returning submitted-but-not-run tasks from shutdownNow etc. I think it's probably going to be somewhat more complex than just a boolean though.
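For the flavor of what such a "view" involves, a very reduced sketch (not the eventual jboss-threads implementation): a per-view concurrency cap and queue on top of a shared delegate, with shutdown handling and the task bookkeeping for shutdownNow omitted.

    import java.util.ArrayDeque;
    import java.util.Queue;
    import java.util.concurrent.Executor;

    public class ViewExecutorSketch implements Executor {

        private final Executor delegate;
        private final int maxConcurrency;
        private final Queue<Runnable> queue = new ArrayDeque<>();
        private int running;

        public ViewExecutorSketch(Executor delegate, int maxConcurrency) {
            this.delegate = delegate;
            this.maxConcurrency = maxConcurrency;
        }

        @Override
        public synchronized void execute(Runnable task) {
            if (running < maxConcurrency) {
                running++;
                delegate.execute(() -> run(task));
            } else {
                queue.add(task); // held in the view, not in the shared pool
            }
        }

        private void run(Runnable task) {
            try {
                task.run();
            } finally {
                Runnable next;
                synchronized (this) {
                    next = queue.poll();
                    if (next == null) {
                        running--; // nothing queued, release the slot
                    }
                }
                if (next != null) {
                    delegate.execute(() -> run(next)); // reuse the slot
                }
            }
        }
    }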

Or we make a single injectable ManagedExecutor that uses the Quarkus thread pool and does not care about min/max settings (it can't be configured itself), and then any users of the API who create ManagedExecutors will get new thread pools.

Making just one defeats the purpose - you might as well then use ThreadContext and delegate to the Quarkus thread pool directly. Having several differently configured executors has its use cases.

The view layer is IMO the best approach, so if it ain't that difficult, all the better!

I think both should be supported. A view layer works well for many cases, but separate pools have advantages as well:

  • Starvation due to blocking tasks becomes impossible (I know, reactive blah blah but I think these thread pools will naturally be used for blocking tasks as well, because if you're 100% reactive then you normally don't even need a thread pool to begin with)
  • They can be used for priority management and task categorization (e.g. a low-priority pool which handles 95% and a high-priority pool which handles 5%)

So I think it's smart to be able to support both.

I think I already have some code in jboss-threads that might be reusable for this purpose. If not though, it's not really difficult to implement.

Where? @dmlloyd ? Not immediately locatable in https://github.com/jbossas/jboss-threads/tree/master/src/main/java/org/jboss/threads

Ah, I succeeded with ServletRequestContext.current() but it means quarkus-narayana-jta has to depend on quarkus-undertow.

So the funny thing is that the servlet async listener gets notified of completion/error, but its notion of error is different from the notion of user error: if RESTEasy maps the exception to a Response, then it will not throw to the container, and so the servlet async listener sees completion but no error. That's not acceptable in our case, because here it would mean the transaction interceptor does not see the exceptions.
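The callback shape in question, for readers who don't have the servlet API in their head (only the comments are specific to this discussion; the class name is hypothetical):

    import java.io.IOException;
    import javax.servlet.AsyncEvent;
    import javax.servlet.AsyncListener;

    public class TxCompletionListener implements AsyncListener {

        @Override
        public void onComplete(AsyncEvent event) throws IOException {
            // the commit/rollback decision has to happen here, but an
            // exception that RESTEasy already mapped to a Response is no
            // longer visible at this point
        }

        @Override
        public void onError(AsyncEvent event) throws IOException {
            // only reached for exceptions that escape to the container
        }

        @Override
        public void onStartAsync(AsyncEvent event) throws IOException {
        }

        @Override
        public void onTimeout(AsyncEvent event) throws IOException {
        }
    }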

I think I already have some code in jboss-threads that might be reusable for this purpose. If not though, it's not really difficult to implement.

Where? @dmlloyd ? Not immediately locatable in https://github.com/jbossas/jboss-threads/tree/master/src/main/java/org/jboss/threads

I was thinking of https://github.com/jbossas/jboss-threads/blob/2.x/src/main/java/org/jboss/threads/OrderedExecutor.java which was deleted in the current master. But upon reread it looks like this doesn't do what I remember it doing (it's focused on sequential execution rather than being an actual "view").

OK thanks, pity.

If you guys want me to, I can look into implementing a view executor. I have a pretty clear idea of what is involved, at any rate.

I'd love it if you could, because you would have to review my bug-ridden code and fix it anyway, and it would take longer ;)

Okay :)

Should I just add it to jboss-threads or is there a place you prefer it to be located?

Dunno, should I use it or copy it? Is jboss-threads already a dependency of quarkus-core?

It is. We use it for the thread pool implementation already.

OK, in that case it's already a dependency of anything Quarkus, so yes, that's a fine place.

OK, so I made it work. Now I want to make narayana not depend on undertow+jaxrs, which means I need to abstract this functionality:

  • is a request running?
  • is it async?
  • notify me when it is done

I suspect I can make some service which will be implemented by jaxrs and undertow to provide notifications (I really need both to get exceptions properly), but I'm not sure where to put this abstraction: quarkus-core, quarkus-arc, or somewhere else?
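For illustration, one possible shape of that service (all names hypothetical):

    import java.util.function.Consumer;

    // Implemented by the undertow/jaxrs modules, consumed by the JTA
    // interceptor through the abstraction layer.
    public interface RequestContextNotifier {

        boolean isRequestActive();

        boolean isRequestAsync();

        // the callback receives the failure, or null on normal completion
        void onRequestDone(Consumer<Throwable> callback);
    }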

@stuartwdouglas @dmlloyd opinions?

Once I solve this issue I can make a mergeable PR.

It could live in ArC because this is used by a CDI interceptor, and perhaps other interceptors will need that abstraction. @mkouba WDYT?

@FroMage The problem is that undertow (deployment module) depends on ArC...

Undertow can depend on ArC and implement a service provider for it, so it can tell ArC how to listen for async request notifications. I don't see the problem.

Ok. What I meant is that the abstraction API could not contain anything from undertow if you put the abstraction in the ArC deployment module.

Sure, I cannot make ArC depend on undertow, of course :)

The view executor implementation is complete, I'm just writing tests now.

OK, I have a version ready for review at #2321.

@dmlloyd great! But I'm not sure how we will be able to use it, since ATM the configuration annotations are blocked by #2243, and the builder API does not allow for a custom ExecutorService, which is why for now I've hard-coded that the injectable ManagedExecutor is backed by the Quarkus thread pool, but it is not built using the MP-CP builder API.

We can extend the SmallRye-CP builder API, or push for an upstream change to support this.

Why couldn't we use the builder API with the executor service? E.g. build new ManagedExecutors from the Quarkus pool?
We should be able to provide these services once when bootstrapping SR CP via some config, or perhaps extend SmallRyeContextManager to have a getter for it which returns null in the SR impl and is overridden here in an extending class. You would then use the services whenever it's not null.

I mean, we sure don't want to get to the point where you get different things from the builder API and from injection, right?
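The override idea above, sketched in isolation (both class names are stand-ins: the base one plays the role of SmallRyeContextManager, and the getter is the hypothetical new hook):

    import java.util.concurrent.ExecutorService;

    class BaseContextManager {
        // the base implementation has no opinion, so it returns null
        public ExecutorService getDefaultExecutorService() {
            return null;
        }
    }

    class QuarkusContextManager extends BaseContextManager {

        private final ExecutorService quarkusPool;

        QuarkusContextManager(ExecutorService quarkusPool) {
            this.quarkusPool = quarkusPool;
        }

        @Override
        public ExecutorService getDefaultExecutorService() {
            return quarkusPool; // used by the builder whenever non-null
        }
    }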

Why couldn't we use the builder API with the executor service? E.g. build new ManagedExecutors from the Quarkus pool?

Because there's no API in place for that in the MP-CP builder API: you can specify min/max threads but not an ExecutorService. So either we add this API in SmallRye-CP, or I make it a global deployment option, but then nobody will be able to use the builder API to get a non-Quarkus-backed thread pool.

I haven't gotten to review the PR yet, or even the SmallRye implementation, but it seems to me that we absolutely need to be able to specify a custom ExecutorService implementation one way or another. Maybe not as a plain ExecutorService though because that API doesn't allow for passing configuration in to the implementation, but maybe some factory interface that accepts the configuration and/or builder and returns the ExecutorService to use.

Well, in SmallRye-CP or MP-CP? I agree (doesn't matter where), but I also think this does not need to block this PR, since it's already useful and we want it to be tested ASAP.

SmallRye IMO

https://github.com/smallrye/smallrye-context-propagation/issues/50 then, but that will probably mean something like:

    ((SmallRyeManagedExecutorBuilder) ManagedExecutor.builder())
            .min(...).max(...).executor(...).build()

In order to get the new methods. I don't care that much, I think this is rare code.

That's basically what I'm doing for config already.

Yeah, I don't see any better option.

Why couldn't we use the builder API with the executor service? E.g. build new ManagedExecutors from the Quarkus pool?

Because there's no API in place for that in the MP-CP builder API: you can specify min/max threads but not an ExecutorService. So either we add this API in SmallRye-CP, or I make it a global deployment option, but then nobody will be able to use the builder API to get a non-Quarkus-backed thread pool.

My initial understanding was that you want quarkus-backed thread pools only. That's why I suggested baking it into it. Is there a use/need for creating other pools as well?

That's what I mentioned originally: we currently don't have any API to specify if we want a new thread pool or not. I can make it so the quarkus thread pool is always the default, and allow users to override it (for the API, not for container-managed instances).

@dmlloyd I've tried to use ViewExecutor in SmallRye-CP to constrain any custom ExecutorService that gets passed in, but it's only an Executor and not an ExecutorService, so I'm not sure how to use it, since I need all the extra methods.

It extends AbstractExecutorService so it should implement ExecutorService?

DUH

I blame my IDE and poor assign-to-local-variable.

MERGED!
