Chrome 70 ships with Web Locks:
The Web Locks API allows scripts running in one tab or worker to asynchronously acquire a lock, hold it while work is performed, then release it. While held, no other script executing in the same origin can acquire the same lock, which allows a web app running in multiple tabs or workers to coordinate work and the use of resources.
navigator.locks.request('my_resource', async lock => {
  // The lock has been acquired.
  await do_something();
  await do_something_else();
  // Now the lock will be released.
});
Would it make sense for Node to ship such an API for workers (or even child processes)?
cc @nodejs/workers @nodejs/open-standards
Would it make sense for Node to ship such an API for workers (or even child processes)?
I think so, yes. I think an implementation that supports child processes would be significantly different (and harder to do), so if we want to support that, it might help if we decided that upfront.
I think this sort of lock would make a much nicer API than Atomics.wait for synchronizing threads, plus I like the fact that it uses a disposer.
Some things to note:
It uses AbortSignal, which we currently don't have; arguably we should have it for other things too.
cc @inexorabletash @annevk @jakearchibald (if any of you don't like pings, let me know and I won't in the future) - I would just really appreciate some feedback from people involved.
I think an implementation that supports child processes would be significantly different
I've done some work on a cross-process semaphore that might be applicable (no Windows support).
The resource name is a string, which I'm not a huge fan of since it means threaded code can accidentally interfere with other threaded code - it would be great if it took a symbol or an object.
I don't see how it could be possible. There is no way to have two reference-equal objects between two workers, and the only way to do it with a symbol is to do Symbol.for(name), which is basically the same as using a string.
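To make that constraint concrete, here is a minimal sketch (an editorial illustration, not from the original comment) of why object identity cannot survive the thread boundary in worker_threads:

// Illustrative only: objects are structured-cloned when they cross a thread boundary.
const { Worker, isMainThread, workerData } = require('worker_threads');

if (isMainThread) {
  const key = { name: 'my_resource' };
  // The worker receives a clone of `key`, never the same reference, so an
  // object-keyed lock could not match across threads.
  new Worker(__filename, { workerData: key });
} else {
  console.log(workerData); // { name: 'my_resource' } - same shape, different identity
}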
I don't see how it could be possible. There is no way to have two reference-equal objects between two workers and the only way to do it with a symbol is to do Symbol.for(name), which is basically the same as using a string.
Hmm, for example:
Similarly, if symbols could be passed to workers and stay equal that would also work.
I totally get the point though.
I am conflicted about this API. One of the greatest features of Node.js is that it is really hard to create a deadlock. I'm not sure if making it _easy_ is something that we should really aim for. In the context of servers, this is extremely powerful and extremely dangerous at the same time. A browser user can always refresh a tab; the thousands of users of a server can all get stuck.
I would like to ask some questions:
can this feature be implemented in the ecosystem?
is there significant Browser usage to consider it before adding?
has it been required by our community in any form?
is this needed to provide a good API for workers?
what happens if we try to acquire the same lock in two different places within the same thread?
The implementation of this should be extremely easy to debug. I think it should also have some safety measures in place to detect deadlock situations and crash the process.
For diagnostics purposes, it should have at least async_hooks support, and possibly trace events too.
I'm -1 on having this for child processes or cluster. Supporting that cross-platform would be very complicated.
+1 for workers, as long as the implementation does not pollute global scope in any form.
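As a rough sketch of the "no global scope pollution" point (the module and property names here are hypothetical, not a proposal), the API could be exposed from a core module rather than a navigator-like global:

// Hypothetical module-scoped exposure instead of a global navigator.locks.
const { locks } = require('worker_threads');

locks.request('my_resource', async lock => {
  // critical section
});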
If there are concerns with a locking API using strings, and also with it using objects, why not allow both?
// Assume it would try to lock only first index???
locks.requestFromSharedArrayBuffer(SharedArrayBuffer, ...);
locks.request(string, ...);
I'd be interested in taking this up.
can this feature be implemented in the ecosystem?
With a native addon, yes, though I'm not certain that a native addon would cover everything. It should be explored.
is there significant Browser usage to consider it before adding?
No. Chrome and Opera are the only implementing browsers so far.
has it been required by our community in any form?
Unclear.
is this needed to provide a good API for workers?
I believe it will be. Whether this specific API or not, locks will at some point be necessary.
what happens if we try to acquire the same lock in two different places within the same thread?
That's unclear.
I'd like to see more experimentation with this before it lands.
is this needed to provide a good API for workers?
I believe it will be. Whether this specific API or not, locks will at some point be necessary.
I agree. And while we do have Atomics available, this API will provide a much better user experience out of the box.
what happens if we try to acquire the same lock in two different places within the same thread?
In Chrome, requests for the same lock seem to be queued and executed in the order they were made:
function wait(s) { return new Promise((res) => setTimeout(res, s * 1000)); }

navigator.locks.request('test', async () => {
  console.log('before 1');
  await wait(1);
  console.log('after 1');
});

navigator.locks.request('test', async () => {
  console.log('before 2');
  await wait(1);
  console.log('after 2');
});
Output:
before 1
after 1
before 2
after 2
Hey folks - happy to answer questions. In case you haven't found it, there's an early spec and the repo is at https://github.com/WICG/web-locks which has issues tracking the history and decisions, and also some proposed next steps for the API.
what happens if we try to acquire the same lock in two different places within the same thread
As @targos mentions, it would be blocked; this is covered in the explainer (see _"If a tab is holding an exclusive lock, what happens if another lock request for the same resource is made?"_) but I don't think that's reflected explicitly in the spec.
Can someone clarify the differences with Atomics? Does anyone use Atomics?
@GrosSacASac
Can someone clarify the differences with Atomics? Does anyone use Atomics?
Atomics are already supported in worker_threads - you can use them. They're usually useful for different use cases, and Web Locks caters to the "I need a mutex" use case much better (see the sketch after this comment for the contrast).
Worker threads are opt-in anyway - most users will not run into deadlocks or race conditions originating in worker_threads. They do, however, enable a whole new kind of workload in Node.
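To make the contrast concrete, this is roughly what the hand-rolled equivalent looks like with Atomics today (a minimal sketch, assuming an Int32Array backed by a SharedArrayBuffer shared between threads); note that Atomics.wait blocks the whole thread, while the Web Locks style shown at the top is fully async:

// A minimal blocking mutex over a shared Int32Array: 0 = unlocked, 1 = locked.
function lock(i32, index = 0) {
  while (Atomics.compareExchange(i32, index, 0, 1) !== 0) {
    // Blocks this entire thread until another thread calls Atomics.notify.
    Atomics.wait(i32, index, 1);
  }
}

function unlock(i32, index = 0) {
  Atomics.store(i32, index, 0);
  Atomics.notify(i32, index, 1);
}

// In each thread: const i32 = new Int32Array(sharedBufferFromMainThread);
// lock(i32); try { /* critical section */ } finally { unlock(i32); }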
... most users will not run into deadlocks or race conditions originating in worker_threads ...
While I can understand where the optimism is coming from, this is a dangerous assumption. I've often found that users are far more creative at finding new and amazing ways of getting themselves into trouble ;-)
The one case that I am most concerned with here, however, is not deadlocking the worker threads but deadlocking the main event loop thread. If we deadlock there then we're in definite trouble.
@benjamingr ... have you given some thought to how the AbortController would work here?
Powerful features always have dangerous behaviour; I don't think that deadlocks alone disqualify this.
For abort controllers, I think we should leave the feature out and use https://github.com/tc39/proposal-cancellation, or we can wait for that proposal and then work on the feature.
I don't think anyone is suggesting that deadlocks alone disqualify anything... we just need to have it addressed as part of this moving forward. At the very least, a reliable way of aborting from the main thread needs to be looked at.
Absolutely! We should just take deadlocks into account in the implementation, and provide a safety net for our users (even in a later PR). As an example, you do not want a situation where a process cannot exit because there are resources allocated within a deadlock.
We should also strive to provide this info to the inspector - powerful debugging tools are needed.
I would leave cancellation out for the time being; it's a nice add-on.
IMHO those are things we would need for these to get out of experimental, not for the initial PR.
We can provide an API that warns if a lock is held for more than a given time, to help find deadlocks, or do actual deadlock detection (a rough sketch of the warning approach follows below).
As for AbortController - I'm fine with starting without it and adding it at a later point. We should bring that up in @nodejs/open-standards.
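A minimal sketch of that kind of safety net (the wrapper and its parameters are illustrative assumptions, not a proposed API):

// Warn if a lock callback holds the lock for longer than warnAfterMs.
async function requestWithWarning(locks, name, warnAfterMs, callback) {
  return locks.request(name, async (lock) => {
    const timer = setTimeout(() => {
      console.warn(`Lock "${name}" held for more than ${warnAfterMs} ms; possible deadlock.`);
    }, warnAfterMs);
    if (timer.unref) timer.unref(); // in Node, don't keep the process alive just for the warning
    try {
      return await callback(lock);
    } finally {
      clearTimeout(timer);
    }
  });
}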
@addaleax had mentioned a few times implementing a technique for deadlock detection. I'm +1 on waiting on the implementation of anything like AbortController and cancellation in general, but it would need to be addressed before this could come out of experimental status.
That deadlock detection (still something I really want to get in, WIP @ addaleax/node@fc0d2a25d7aba4b67e0a589d1bd6d368e3f1a9d8) would currently account only for Atomics.wait – I don't think we'd want to mix these in with the Web Locks, right?
Note that for deadlocks we settled on query() to detect holders, and the controversial steal option to grab a lock despite other holders. This may be insufficient for what you're thinking of, though.
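For reference, this is roughly how those two escape hatches look in the browser API today (a Node implementation would presumably expose a similar shape):

// Inside an async function:
const snapshot = await navigator.locks.query();
console.log(snapshot.held);    // locks currently held: { name, mode, clientId }, ...
console.log(snapshot.pending); // requests still waiting in the queue

// Forcibly take the lock even though another context holds it;
// the previous holder's request is aborted.
await navigator.locks.request('my_resource', { steal: true }, async lock => {
  // we now hold the lock
});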
@inexorabletash ... so long as there is a thread that is able to call query() and use steal, then that works. If there is only the main event loop thread and a single worker, and both are deadlocked, then those won't help without some additional form of deadlock detection.
The API is entirely async so nothing is ever truly blocked - event loop keeps spinning, microtask queues drain, etc.
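A small illustration of that point: even with a request stuck behind a holder that never releases, the thread keeps doing other work.

// The first request never releases (its callback's promise never settles).
navigator.locks.request('res', () => new Promise(() => {}));
// The second request waits in the queue forever - but nothing is blocked.
navigator.locks.request('res', async () => console.log('never runs'));
setInterval(() => console.log('event loop still spinning'), 1000); // keeps firing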
Ah, ok, I think that was the missing bit for me, then :-)
Given the example...
navigator.locks.request('my_resource', async lock => {
  // The lock has been acquired.
  await do_something();
  await do_something_else();
  // Now the lock will be released.
});
A deadlock would mean only that the callback would never be invoked as it would be waiting forever, not that the actual thread would deadlock and block. Yes?
If that's the case, I think we're golden but a cancellation mechanism is still ideal later on.
Correct.
You can also await the request() call, but that's just sugar over getting a Promise back.
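For illustration, the promise returned by request() settles with whatever the callback returns, once the lock has been released:

// Inside an async function:
const result = await navigator.locks.request('my_resource', async lock => {
  await do_something();
  return 42;
});
// result === 42, and the lock has already been released at this point.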
A deadlock would mean only that the callback would never be invoked as it would be waiting forever, not that the actual thread would deadlock and block. Yes?
If that's the case, I think we're golden but a cancellation mechanism is still ideal later on.
I think there are a lot of edge cases to consider still. Does a thread owning a lock keep a process spinning? Should it?
How should locks behave with respect to 'uncaughtException' and 'unhandledRejection'? Specifically, what happens if a developer implements a log-and-keep-going strategy (which we currently have for promises) and the lock is never released?
I think the process should exit with an error if the only thing keeping it alive is the lock.
I can confirm that in my current implementation a deadlock won't keep the process alive. I can try to add an error message to that.
@devsnek I think that would be brilliant.
@devsnek are you still working on this?
cc @kromdaniel
FYI @benjamingr @addaleax @mcollina @aqrln @lundibundi
Here is a Web Locks API implementation in pure JavaScript using worker_threads, Atomics and SharedArrayBuffer: https://github.com/metarhia/web-locks
I tried to separate locks from threads as much as possible; we just need to attach workers to the LockManager by calling locks.attach(worker) after Worker instantiation. We can also hide the attach call in a Worker subclass with object composition, or call attach from the Worker constructor directly.
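A rough usage sketch based only on the description above (the import path is an assumption; see the linked repo for the real API):

const { Worker } = require('worker_threads');
const { locks } = require('web-locks'); // assumed import path

const worker = new Worker('./worker.js');
locks.attach(worker); // attach the worker to the LockManager after instantiation

locks.request('my_resource', async lock => {
  // work coordinated between the main thread and attached workers
});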