I've started an inspector session using the command-line args:
```
java -Dpolyglot.inspect.Suspend=false -Dpolyglot.inspect=9229 ...
```
And I see the output:
```
Debugger listening on port 9229.
To start debugging, open the following URL in Chrome:
  chrome-devtools://devtools/bundled/js_app.html?ws=127.0.0.1:9229/462d5aee-12ffc44f9f
Debugger listening on port 9229.
To start debugging, open the following URL in Chrome:
  chrome-devtools://devtools/bundled/js_app.html?ws=127.0.0.1:9229/eadd4fb-12d849ff29
```
So it seems the inspector prints this banner once per engine (that is why you see it twice above).
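For context, a minimal sketch that would show the same double banner, assuming each `Context` is created without an explicit `Engine` and therefore gets its own implicit one (the class name and eval payloads are made up):

```java
import org.graalvm.polyglot.Context;

public class TwoBanners {
    public static void main(String[] args) {
        // Run with: java -Dpolyglot.inspect=9229 -Dpolyglot.inspect.Suspend=false TwoBanners
        // Each Context below gets its own implicit Engine, so the inspector
        // prints its "Debugger listening on port 9229" banner twice.
        try (Context first = Context.create("js");
             Context second = Context.create("js")) {
            first.eval("js", "1 + 1");
            second.eval("js", "2 + 2");
        }
    }
}
```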
Then, in VS Code, use the following launch.json config:
```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "type": "node",
      "request": "attach",
      "name": "Attach to ES4X",
      "port": 9229
    }
  ]
}
```
It will error out, most of the time with:

```
Processing of 'Debugger.enable' has caused Engine is already closed.
```

which is untrue, as the application is still running as expected.
A full log of the debugging session is attached.
More info: when you manage to get it to connect (which requires timing to work; most of the time I need to connect ASAP, while the JVM is still loading) and then disconnect from the VS Code interface, you'll see:
```
Communication with the client broken, or an bug in the handler code java.lang.IllegalStateException: Engine is already closed.
    at com.oracle.truffle.polyglot.PolyglotEngineImpl.checkState(PolyglotEngineImpl.java:591)
    at com.oracle.truffle.polyglot.PolyglotInstrument.lookup(PolyglotInstrument.java:155)
    at com.oracle.truffle.polyglot.PolyglotImpl$EngineImpl.lookup(PolyglotImpl.java:440)
    at com.oracle.truffle.api.instrumentation.TruffleInstrument$Env.lookup(TruffleInstrument.java:349)
    at com.oracle.truffle.tools.chromeinspector.TruffleRuntime.disable(TruffleRuntime.java:94)
    at com.oracle.truffle.tools.chromeinspector.server.InspectServerSession.sendClose(InspectServerSession.java:82)
    at com.oracle.truffle.tools.chromeinspector.server.WebSocketServer$InspectWebSocket.onClose(WebSocketServer.java:352)
    at fi.iki.elonen.NanoWSD$WebSocket.doClose(NanoWSD.java:162)
    at fi.iki.elonen.NanoWSD$WebSocket.readWebsocket(NanoWSD.java:259)
    at fi.iki.elonen.NanoWSD$WebSocket.access$200(NanoWSD.java:65)
    at fi.iki.elonen.NanoWSD$WebSocket$1.send(NanoWSD.java:88)
    at fi.iki.elonen.NanoHTTPD$HTTPSession.execute(NanoHTTPD.java:957)
    at fi.iki.elonen.NanoHTTPD$ClientHandler.run(NanoHTTPD.java:192)
    at java.lang.Thread.run(Thread.java:748)
```
This suggests there is a mismatch between the polyglot context and the thread the server is running on?
You're starting two Engines in your application, right? We have not really tested that use case yet.
Indeed, I'm starting 2 contexts. The first runs at startup to collect some engine-specific information and prepare the application's run, and is then stopped. The application itself is started on the second context, which is never stopped until the end of the application's life.
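To make that lifecycle concrete, a rough sketch of the pattern (class name, the collected information, and the eval payloads are hypothetical):

```java
import org.graalvm.polyglot.Context;

public class AppBootstrap {
    public static void main(String[] args) {
        // Context #1: runs at startup, collects engine-specific information,
        // and is stopped as soon as that is done.
        String engineInfo;
        try (Context probe = Context.create("js")) {
            engineInfo = probe.eval("js", "'collected engine info'").asString();
        } // closed here; its implicit Engine is closed with it
        System.out.println("prepared run with: " + engineInfo);

        // Context #2: runs the application and is only stopped at end of life.
        Context app = Context.create("js");
        Runtime.getRuntime().addShutdownHook(new Thread(app::close));
        app.eval("js", "/* application entry point */ 0;");
    }
}
```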
@entlicher I can confirm that using just 1 context makes the issue go away. https://github.com/reactiverse/es4x/pull/51
However, there are cases where I need several contexts. One is performance: in that case I use 1 context per core, so I can handle several requests in parallel.
Yes, I'll fix that.
Currently, it opens a separate URL for every Engine. It's not very flexible, but it provides isolated debugging sessions. We can also explore the possibility of merging the debugging of multiple Engines into a single session.
@pmlopes if you create the context with an explicit engine like this:

```java
Context.newBuilder().engine(engine).build()
```

you can share the debugging session across multiple contexts. As a side effect, the code cache will be shared across all of those contexts.
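Adapting that to the one-context-per-core setup mentioned above, a minimal sketch (the pool size and the request dispatch are illustrative, not part of the fix):

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Engine;

public class SharedEngineContexts {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();
        try (Engine engine = Engine.create()) {
            Context[] contexts = new Context[cores];
            for (int i = 0; i < cores; i++) {
                // All contexts share the explicit Engine, so the inspector
                // exposes a single debugging session (and the code cache
                // is shared as well).
                contexts[i] = Context.newBuilder("js").engine(engine).build();
            }
            // ... dispatch requests, one context per core ...
            for (Context ctx : contexts) {
                ctx.close();
            }
        }
    }
}
```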
👋 I am creating multiple contexts in parallel (let's say 300 in the worst case) and still receiving this error on 19.3.0. It occurs very rarely (0.005% of context creations, by my calculation). Is this expected behavior with the fix?
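For reference, a rough sketch of what "multiple contexts in parallel" looks like on my side (the thread-pool size, the shared engine, and the eval payload are all illustrative assumptions):

```java
import org.graalvm.polyglot.Context;
import org.graalvm.polyglot.Engine;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelContextStress {
    public static void main(String[] args) throws InterruptedException {
        try (Engine engine = Engine.create()) {
            ExecutorService pool = Executors.newFixedThreadPool(32);
            for (int i = 0; i < 300; i++) {
                pool.submit(() -> {
                    try (Context ctx = Context.newBuilder("js").engine(engine).build()) {
                        ctx.eval("js", "1 + 1");
                    } catch (IllegalStateException e) {
                        // The rare failure mode reported above.
                        System.err.println("context creation failed: " + e.getMessage());
                    }
                });
            }
            pool.shutdown();
            pool.awaitTermination(5, TimeUnit.MINUTES);
        }
    }
}
```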