Describe the bug
Creating a clustered app (we need the event bus to communicate and a shared lock between different instances), choosing Infinispan as the cluster manager (adding the dependency io.vertx:vertx-infinispan).
When the application starts, it throws a NoSuchMethodError while parsing the default Infinispan config file, and the Quarkus start crashes.
The dependency version is the one defined in the universe BOM (vertx.version 3.9.2):
https://github.com/quarkusio/quarkus/blob/master/bom/application/pom.xml#L106
which defines Infinispan version 9.4.10.Final:
https://github.com/vert-x3/vertx-infinispan/blob/3.9.2/pom.xml#L33
But in the Quarkus universe BOM the dependency version is overridden to 10.1.5.Final:
https://github.com/quarkusio/quarkus/blob/master/bom/application/pom.xml#L125
(my educated guess is that the infinispan-client extension needs it)
In the ParserRegistry class of the newer version there is no parse(InputStream is) method, which ClusterInfinispanManager invokes for the configured file (see the stacktrace).
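A NoSuchMethodError like this means the signature existed when vertx-infinispan was compiled but is absent from the Infinispan jar actually on the classpath. A generic way to check which signatures a deployed classpath exposes (MethodProbe is a hypothetical helper written for illustration, not part of any of the libraries involved):

```java
// Runtime probe: does `className` expose a public `methodName` with these parameter types?
public class MethodProbe {

    static boolean hasMethod(String className, String methodName, Class<?>... paramTypes) {
        try {
            Class.forName(className).getMethod(methodName, paramTypes);
            return true;
        } catch (ClassNotFoundException | NoSuchMethodException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // Present in every JDK:
        System.out.println(hasMethod("java.lang.String", "substring", int.class));
        // The signature vertx-infinispan 3.9.x expects; false when only Infinispan 10.x
        // is on the classpath (and also false here, where Infinispan is absent entirely):
        System.out.println(hasMethod("org.infinispan.configuration.parsing.ParserRegistry",
                "parse", java.io.InputStream.class));
    }
}
```

Running such a probe inside the application would confirm whether the 10.x ParserRegistry won the dependency mediation.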
Expected behavior
The application should start and look for an existent cluster with the default configuration
```
2020-07-21 16:45:54.595 INFO [,,] 80514 --- [org.inf.CLUSTER] (vert.x-worker-thread-0) ISPN000094: Received new cluster view for channel ISPN: [juanzu-45490|1] (2) [juanzu-45490, juanzu-8256]
2020-07-21 16:45:54.609 INFO [,,] 80514 --- [org.inf.rem.tra.jgr.JGroupsTransport] (vert.x-worker-thread-0) ISPN000079: Channel ISPN local address is juanzu-8256, physical addresses are [127.0.0.1:7800]
```
Actual behavior
_The application does NOT start_ and crashes:

```
Caused by: java.lang.NoSuchMethodError: 'org.infinispan.configuration.parsing.ConfigurationBuilderHolder org.infinispan.configuration.parsing.ParserRegistry.parse(java.io.InputStream)'
	at io.vertx.ext.cluster.infinispan.InfinispanClusterManager.lambda$join$8(InfinispanClusterManager.java:228)
	at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$2(ContextImpl.java:313)
	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	... 1 more
```
See the end of this report for the full stacktrace.
To Reproduce
Steps to reproduce the behavior:
Start the application with:

```
-Dquarkus.vertx.cluster.clustered=true -Dquarkus.vertx.cluster.host=127.0.0.1 -Dquarkus.vertx.cluster.port=7800 -Djava.net.preferIPv4Stack=true
```

Configuration

```properties
# Add your application.properties here, if applicable.
quarkus.vertx.cluster.clustered=true
```

Mainly, the only config value needed is the clustering enablement option.
Environment (please complete the following information):
Output of `uname -a` or `ver`:

```
Linux juanzu.fedora 5.7.7-100.fc31.x86_64 #1 SMP Wed Jul 1 20:37:05 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
```

Output of `java -version`:

```
openjdk version "11.0.7" 2020-04-14
OpenJDK Runtime Environment 18.9 (build 11.0.7+10)
OpenJDK 64-Bit Server VM 18.9 (build 11.0.7+10, mixed mode, sharing)
```

Output of `mvnw --version` or `gradlew --version`:

```
Apache Maven 3.6.3 (cecedd343002696d0abb50b32b541b8a6ba2883f)
Maven home: /home/jzuriaga/.m2/wrapper/dists/apache-maven-3.6.3-bin/1iopthnavndlasol9gbrbg6bf2/apache-maven-3.6.3
Java version: 11.0.7, vendor: Oracle Corporation, runtime: /usr/lib/jvm/java-11-openjdk-11.0.7.10-1.fc31.x86_64
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "5.7.7-100.fc31.x86_64", arch: "amd64", family: "unix"
```
Additional context
A workaround is to override the Infinispan dependencies in the POM; then the application starts and the cluster is created.
```xml
...
```

**Full Stacktrace**

```
Caused by: java.lang.RuntimeException: Failed to start quarkus
at io.quarkus.runner.ApplicationImpl.doStart(ApplicationImpl.zig:679)
at io.quarkus.runtime.Application.start(Application.java:90)
at io.quarkus.runtime.ApplicationLifecycleManager.run(ApplicationLifecycleManager.java:91)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:61)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:38)
at io.quarkus.runtime.Quarkus.run(Quarkus.java:106)
at io.quarkus.runner.GeneratedMain.main(GeneratedMain.zig:29)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:566)
at io.quarkus.runner.bootstrap.StartupActionImpl$3.run(StartupActionImpl.java:145)
at java.base/java.lang.Thread.run(Thread.java:834)
Caused by: java.util.concurrent.CompletionException: java.lang.NoSuchMethodError: 'org.infinispan.configuration.parsing.ConfigurationBuilderHolder org.infinispan.configuration.parsing.ParserRegistry.parse(java.io.InputStream)'
at java.base/java.util.concurrent.CompletableFuture.reportJoin(CompletableFuture.java:412)
at java.base/java.util.concurrent.CompletableFuture.join(CompletableFuture.java:2044)
at io.quarkus.vertx.core.runtime.VertxCoreRecorder.initialize(VertxCoreRecorder.java:119)
at io.quarkus.vertx.core.runtime.VertxCoreRecorder$VertxSupplier.get(VertxCoreRecorder.java:313)
at io.quarkus.vertx.core.runtime.VertxCoreRecorder$VertxSupplier.get(VertxCoreRecorder.java:300)
at io.quarkus.vertx.runtime.VertxRecorder.configureVertx(VertxRecorder.java:36)
at io.quarkus.deployment.steps.VertxProcessor$build-626852132.deploy_0(VertxProcessor$build-626852132.zig:130)
at io.quarkus.deployment.steps.VertxProcessor$build-626852132.deploy(VertxProcessor$build-626852132.zig:36)
at io.quarkus.runner.ApplicationImpl.doStart(ApplicationImpl.zig:538)
... 12 more
Caused by: java.lang.NoSuchMethodError: 'org.infinispan.configuration.parsing.ConfigurationBuilderHolder org.infinispan.configuration.parsing.ParserRegistry.parse(java.io.InputStream)'
at io.vertx.ext.cluster.infinispan.InfinispanClusterManager.lambda$join$8(InfinispanClusterManager.java:228)
at io.vertx.core.impl.ContextImpl.lambda$executeBlocking$2(ContextImpl.java:313)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
... 1 more
```
/cc @karesti, @wburns
Which version of Quarkus are you using?
Current 1.6.1.Final is using Vert.x 3.9.1.
My guess would be that Vert.x Infinispan is using an older version of the Infinispan client than the one we are using in Quarkus (10.1.5.Final).
@karesti @wburns @vietj should you update the Infinispan used in Vert.x Infinispan?
> Which version of Quarkus are you using?
> Current 1.6.1.Final is using Vert.x 3.9.1.

@gsmet I've detected it in 1.6.0.Final and checked that it is also happening in 1.6.1.Final.

> My guess would be that Vert.x Infinispan is using an older version of the Infinispan client than the one we are using in Quarkus (10.1.5.Final).

Yes, the problem is that the universe BOM is forcing the newer dependency.
Something similar happens when you use Hazelcast, see here:
@juazugas have you tried to force the infinispan-core version in your POM? Like this:

```xml
<dependency>
  <groupId>org.infinispan</groupId>
  <artifactId>infinispan-core</artifactId>
  <version>9.4.20.Final</version>
</dependency>
```
To better understand your use case, what kind of Quarkus application are you developing? Does it use Vert.x APIs directly? Or just the event bus and @ConsumeEvent methods?
Hi @tsegismont,
Thanks for the response. Yes, I have tried it and it works; that is the workaround that allows running the application with the Infinispan cluster manager (actually overriding infinispan-core and infinispan-commons in dependencyManagement).
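For reference, the dependencyManagement override described above might look like this (a sketch; 9.4.20.Final is the version suggested earlier in this thread, and whether both artifacts need pinning depends on your dependency tree):

```xml
<!-- Pin the Infinispan artifacts expected by vertx-infinispan 3.9.x,
     so they take precedence over the 10.x versions from the Quarkus BOM. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.infinispan</groupId>
      <artifactId>infinispan-core</artifactId>
      <version>9.4.20.Final</version>
    </dependency>
    <dependency>
      <groupId>org.infinispan</groupId>
      <artifactId>infinispan-commons</artifactId>
      <version>9.4.20.Final</version>
    </dependency>
  </dependencies>
</dependencyManagement>
```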
> To better understand your use case, what kind of Quarkus application are you developing? Does it use Vert.x APIs directly? Or just the event bus and @ConsumeEvent methods?
This application needs to coordinate access to a registration resource. Specifically, I'm using the shared data API from Vert.x to create a distributed Lock. This application needs at least two instances running, so I took advantage of Vert.x clustering. There are also future plans to use the event bus for sending "messages" to all the running instances.
Actually, I have a problem now with consuming messages from a clustered eventbus.
I've configured and got the cluster working with Quarkus and Hazelcast. But when I produce and consume a message, nothing happens on the consuming side.
Here is my code, I hope someone can help me:
My producer:
```java
@Path("/producer")
public class EventbusProducer {

    // the original snippet used `log` without declaring it; assumes an SLF4J logger
    static final Logger log = LoggerFactory.getLogger(EventbusProducer.class);

    @Inject
    EventBus bus;

    @POST
    @Consumes(TEXT_PLAIN)
    public Uni<Message<String>> send(String msg) {
        log.info("Sending: {}", msg);
        return bus.request("hello", msg);
    }
}
```
application.properties (same for producer and consumer):

```properties
quarkus.vertx.cluster.clustered=true
#quarkus.vertx.cluster.port=8081
#quarkus.vertx.cluster.host=localhost
#quarkus.vertx.cluster.public-host=localhost
#quarkus.vertx.prefer-native-transport=true
quarkus.http.port=8080
```
My consumer:
```java
@ApplicationScoped
public class EventbusConsumer {

    static final Logger log = LoggerFactory.getLogger(EventbusConsumer.class);

    @Inject
    Vertx vertx;

    public void init(@Observes StartupEvent ev) {
        log.info("this.vertx.isClustered() = {}", vertx.isClustered());
        vertx.eventBus().consumer("hello")
            .handler(h -> h.reply("hello from consumer: " + h.body()))
            .completionHandler(res -> {
                if (res.succeeded()) {
                    log.info("in completed");
                    log.info(res.result().toString());
                } else {
                    log.error("in failed");
                }
            });
    }

    @ConsumeEvent("hello")
    public Uni<String> receive(String msg) {
        log.info("Received: {}", msg);
        return Uni.createFrom().item(() -> msg.toUpperCase());
    }
}
```
As you can see, I have tried two ways to consume a message from the event bus, and neither works.
When I start the producer and the consumer, I get the following on the console:
```
2020-08-18 12:08:44,826 INFO [nl.pro.ver.res.EventbusProducer] (executor-thread-199) Sending: Hello Bob
2020-08-18 12:08:44,861 ERROR [org.jbo.res.res.i18n] (vert.x-eventloop-thread-30) RESTEASY002020: Unhandled asynchronous exception, sending back 500: (NO_HANDLERS,-1) No handlers for address hello
	at io.vertx.core.eventbus.impl.EventBusImpl.deliverMessageLocally(EventBusImpl.java:411)
	at io.vertx.core.eventbus.impl.EventBusImpl.deliverMessageLocally(EventBusImpl.java:360)
	at io.vertx.core.eventbus.impl.clustered.ClusteredEventBus.onSubsReceived(ClusteredEventBus.java:240)
	at io.vertx.core.eventbus.impl.clustered.ClusteredEventBus.lambda$null$6(ClusteredEventBus.java:224)
	at io.vertx.spi.cluster.hazelcast.impl.HazelcastAsyncMultiMap.lambda$null$6(HazelcastAsyncMultiMap.java:156)
	at io.vertx.core.impl.ContextImpl.executeTask(ContextImpl.java:366)
	at io.vertx.core.impl.EventLoopContext.lambda$executeAsync$0(EventLoopContext.java:38)
	at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
	at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
	at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
	at java.base/java.lang.Thread.run(Thread.java:834)
2020-08-18 12:08:45,976 INFO [com.haz.int.par.imp.MigrationThread] (hz._hzInstance_1_dev.migration) [192.168.2.31]:5701 [dev] [3.12.2] All migration tasks have been completed. (lastRepartitionTime=Tue Aug 18 12:08:44 CEST 2020, completedMigrations=271, totalCompletedMigrations=542, elapsedMigrationTime=845ms, totalElapsedMigrationTime=1555ms)
```
Edit: I've changed the @PostConstruct into @Observes, and I still get the same error.
> This application needs to coordinate access to a registration resource. Specifically, I'm using the shared data API from Vert.x to create a distributed Lock.
@karesti does the Infinispan client support locks/counters now? Perhaps for such cases it would make sense.
> Also, there are future plans to use the event bus for sending "messages" to all the running instances.
Do you have experience with Vert.x and the event bus already? I'm asking because you need to know that delivery in clustered mode is best-effort, so you need to be able to lose messages.
@Serkan80 you need to configure the cluster host otherwise Vert.x will assume localhost.
@tsegismont
> @Serkan80 you need to configure the cluster host otherwise Vert.x will assume localhost.
I've done the following and I still get the same error:

```properties
quarkus.vertx.cluster.host=localhost
quarkus.vertx.cluster.public-host=192.168.2.31
```

I've also tried this:

```properties
quarkus.vertx.cluster.host=192.168.2.31
```

Still no success. But whether I set a value for _cluster.host_ or not, I see during startup that the cluster can find both of my instances. I see this on my console:

```
Members {size:2, ver:8} [
	Member [192.168.2.31]:5701 - dca28d28-99f9-4b4d-ac2a-acc1122bc407 this
	Member [192.168.2.31]:5702 - 3faef08c-75f6-4ac6-8d3c-9bec5016f781
]
```

But when I send a message via the event bus I still get the same error message (no handler found for address).
@Serkan80 can you share a reproducer on GitHub?
@tsegismont Here you are:
https://github.com/Serkan80/quarkus-vertx-eventbus.git
@Serkan80 so it seems the issue isn't related to config but to the way consumers are registered.
When a method is annotated with @ConsumeEvent("hello") the local attribute defaults to true and so the Vert.x consumer registration isn't propagated to the other nodes.
Then, due to a limitation of Vert.x 3, only the first registration for a given address is propagated, and only if it is not a local one.
And in Quarkus, annotated consumers are registered before any method of your bean is invoked.
This is why no consumer is recorded in the cluster manager at all.
Change the annotation to @ConsumeEvent(value = "hello", local = false) and it should work. At least it works on my machine with vertx-infinispan CM (with all POM workarounds).
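The `local` default that causes this can be seen with a plain-Java mirror of the annotation (the @ConsumeEvent below is a hypothetical stand-in written for illustration, not the real Quarkus class):

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Hypothetical stand-in for Quarkus' @ConsumeEvent, reduced to the `local` attribute.
@Retention(RetentionPolicy.RUNTIME)
@interface ConsumeEvent {
    String value() default "";
    boolean local() default true; // when unspecified, the consumer stays node-local
}

public class LocalDefaultDemo {

    @ConsumeEvent("hello")                        // local defaults to true: not propagated
    void localConsumer(String msg) { }

    @ConsumeEvent(value = "hello", local = false) // registration propagated cluster-wide
    void clusteredConsumer(String msg) { }

    public static void main(String[] args) throws Exception {
        for (var m : LocalDefaultDemo.class.getDeclaredMethods()) {
            ConsumeEvent ce = m.getAnnotation(ConsumeEvent.class);
            if (ce != null) {
                System.out.println(m.getName() + " local=" + ce.local());
            }
        }
    }
}
```

Since the default is applied silently, a consumer that looks correctly annotated ends up invisible to the other cluster nodes unless local = false is spelled out.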
@tsegismont thanks! It's working now, but it would be very helpful if this were documented in the EventBus guide. Right now it looks like a hidden feature that nobody knows about, while it is built and meant to be used.
@Serkan80 a former tech lead of mine used to say "if it's not documented it doesn't exist" :smile:
Seriously, the ability to use the cluster manager from Vert.x has not been announced in blogs or documented on purpose: the vertx-infinispan and vertx-hazelcast dependency versions do not align with those of the Quarkus extensions. In other words, the clustering config and extension points are present, but there is no "Quarkus-ready" cluster manager yet.