n-api: add API for asynchronous functions (simpler version) #17887
Conversation
Force-pushed from abb2e1f to ca6f1a8
This PR does not address this issue. @bnoordhuis suggested that I add a mutex to allow the secondary thread to wait for the call to happen on the loop thread. I can do that, however,
This PR addresses this issue, because it leaves the creation/destruction of the vector holding the
This PR addresses this issue because it leaves the calling and return value processing up to the user. I also made another PR which keeps the marshalling and the return value processing separate. Note that the situation wherein the callback(s) on the loop thread may never get called is one that cannot be avoided in either PR. I mean, that's the nature of the call being asynchronous, right? If you make a call from the secondary thread and the loop thread cleans up by calling
Correct.
@gabrielschulhof I think it would be great to allow users to provide additional data/context information in
hmm.. looks like
The idea with `napi_call_threadsafe_function()` is that the secondary
thread first calls `napi_get_threadsafe_function_data()`, manipulates the
data at that pointer, then calls `napi_call_threadsafe_function()`. When
the callback gets called on the loop thread, it also calls
`napi_get_threadsafe_function_data()`, retrieves the data created by the
secondary thread, and calls into JS.
This way, each call into JS can be different from the other.
We did indeed discuss adding a queue to the implementation so as to ensure
a one-to-one ratio of calls on the secondary thread to calls on the loop
thread. I'd still rather leave the data management to the implementation,
though.
As you've pointed out later, this only works with one call at a time. In the case of multiple async threads sending events in parallel, the data needs to be a queue; otherwise the threads would overwrite each other's data. Even a single thread sending events in parallel with the libuv event loop results in a race condition.
The problem is that this pattern forces applications with the async event pattern into doing data management work that the API should be doing. The underlying issue seems to be that uv_async_send unfortunately doesn't allow you to pass an additional void *. This is a major oversight in libuv, in my opinion. For any kind of callback, the user should be able to pass their own data with each invocation. Without this support, the user is forced into using a queue to compensate for the missing functionality. Ideally, uv_async_send should support sending a void * with the handle, and N-API should just consume it.
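As an illustration of the bookkeeping this pushes onto addon authors, here is a minimal libuv-only sketch (all names here are hypothetical, and no N-API is involved): because `uv_async_send()` carries no per-call payload and may coalesce several sends into one wakeup, each event has to go onto a mutex-protected queue that the async callback drains on the loop thread. A single shared data slot, as in the pattern described a few comments up, would be overwritten under concurrent calls.

```c
#include <stdlib.h>
#include <uv.h>

typedef struct event_s {
  void* payload;
  struct event_s* next;
} event_t;

typedef struct {
  uv_async_t async;  /* wakes the loop thread; carries no payload of its own */
  uv_mutex_t lock;   /* protects head/tail below */
  event_t* head;
  event_t* tail;
} event_channel_t;

/* Loop thread: drain everything queued since the last wakeup. */
static void channel_drain(uv_async_t* handle) {
  event_channel_t* ch = (event_channel_t*)handle->data;
  uv_mutex_lock(&ch->lock);
  event_t* ev = ch->head;
  ch->head = ch->tail = NULL;
  uv_mutex_unlock(&ch->lock);
  while (ev != NULL) {
    event_t* next = ev->next;
    /* ...call into JS with ev->payload here... */
    free(ev);
    ev = next;
  }
}

static int channel_init(uv_loop_t* loop, event_channel_t* ch) {
  ch->head = ch->tail = NULL;
  if (uv_mutex_init(&ch->lock) != 0) return -1;
  int err = uv_async_init(loop, &ch->async, channel_drain);
  if (err != 0) { uv_mutex_destroy(&ch->lock); return err; }
  ch->async.data = ch;  /* so the callback can find the channel */
  return 0;
}

/* Any thread: enqueue one event and wake the loop thread. */
static void channel_send(event_channel_t* ch, void* payload) {
  event_t* ev = malloc(sizeof(*ev));
  if (ev == NULL) return;
  ev->payload = payload;
  ev->next = NULL;
  uv_mutex_lock(&ch->lock);
  if (ch->tail == NULL) ch->head = ev; else ch->tail->next = ev;
  ch->tail = ev;
  uv_mutex_unlock(&ch->lock);
  uv_async_send(&ch->async);  /* several sends may result in one drain */
}
```

This queue, mutex, and drain loop are essentially what later revisions of this PR move into the N-API layer so that every addon does not have to reimplement them.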
cc @h27han |
Force-pushed from ca6f1a8 to 4134195
It looks like a queue of This means that:
Yes, having that would be super awesome. Thanks!
Force-pushed from 4134195 to e3e7324
@yizhang82 @simon-p-r how does the newest incarnation look? I've added the queue, the
doc/api/n-api.md
Outdated
<!-- it's very convenient to have all the anchors indexed -->
<!--lint disable no-unused-definitions remark-lint-->
## Asynchronous Thread-safe Function Calls
JavaScript functions can normally only be called from a native addon's main
Nit: Can you leave a blank line after each heading?
src/node_api.cc
Outdated
status = napi_generic_failure;
}
destroy_mutex:
uv_mutex_destroy(&ts_fn->mutex);
Can we use the helpers from `node_mutex.h` for these things? They really improve readability for C++; all these `goto`s are really hard to wrap your head around…
Those abort if the mutex is not created successfully. N-API returns `napi_generic_failure`.
Force-pushed from e3e7324 to 33f5e4f
@addaleax I have replaced the
Force-pushed from 33f5e4f to a226f89
doc/api/n-api.md
Outdated
-->
```C
NAPI_EXTERN napi_status
napi_delete_threadsafe_function(napi_env env,
What will happen if the queue is not empty and one calls delete?
Then the queue will be deleted shallowly and there will be no further calls into JS. At this point, the secondary threads should already be stopped, if the `napi_threadsafe_function` was used correctly.
src/node_api.cc
Outdated
CHECK_ENV(env);
CHECK_ARG(env, ts_fn);

napi_status status = napi_delete_reference(env, ts_fn->ref);
Will this result in a crash when there are pending items? Taking the mutex might help.
It probably makes sense to wait for the queue to drain:
- I can protect myself from the race between delete and call, but I can't protect myself from the main event loop draining the remaining items clashing with delete. I might be able to keep a count and track my own pending items, and only call delete when the count is 0 (with proper locks), but I can only protect my own function call; once the function returns, all bets are off. IMHO this should be handled by the API layer instead of pushing it to all applications that consume this API.
- The function calls in the queue should eventually get drained - otherwise user data can leak.
This shouldn't result in a crash if the `napi_threadsafe_function` is used as documented - that is, the threads are terminated before `napi_delete_threadsafe_function()` is called. For that same reason I need not lock the mutex. `napi_delete_threadsafe_function()` must always be called from the loop thread, so it's synchronous with respect to the idle callback and the async callback. So, after `napi_delete_threadsafe_function()` completes, libuv guarantees that there will be no more calls to the idle handler (which would access the queue), nor to the async callback (which would set up an idle handler).
The items in the queue would indeed leak in such a case.
I don't think it's necessarily a good idea to wait for the queue to drain. I think being able to abort is a good thing - perhaps if there's a fatal error somewhere. In normal usage, the user would be aware of the consequences of deleting the thread-safe function.
So, I think we should either document that it's the user's responsibility to avoid leaks, perhaps by making the last item on the queue be a terminator, or provide a delete-once-empty API in addition to `napi_delete_threadsafe_function()`, like `napi_delete_threadsafe_function_deferred()`.
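To make the terminator idea concrete, here is a rough sketch, assuming the call signatures of the thread-safe function API as eventually documented (a per-call `void*` plus a blocking/non-blocking mode); the intermediate revision under review here differs in detail, so the delete call itself is only indicated in a comment.

```c
#include <node_api.h>
#include <stdlib.h>

/* Producer thread: enqueue real items, then a NULL sentinel as the last item. */
static void producer(napi_threadsafe_function ts_fn) {
  for (int i = 0; i < 100; i++) {
    int* item = malloc(sizeof(*item));
    if (item == NULL) break;
    *item = i;
    napi_call_threadsafe_function(ts_fn, item, napi_tsfn_blocking);
  }
  napi_call_threadsafe_function(ts_fn, NULL, napi_tsfn_blocking);  /* terminator */
}

/* Loop thread: invoked once per queued item. */
static void call_js(napi_env env, napi_value js_cb, void* context, void* data) {
  (void)context;
  if (data == NULL) {
    /* Terminator: every earlier item has already been processed, so this is
     * the point at which deleting the thread-safe function (from the loop
     * thread, as required) cannot strand any user data in the queue. */
    return;
  }
  if (env != NULL) {  /* env may be NULL during teardown */
    napi_value global, arg;
    napi_get_global(env, &global);
    napi_create_int32(env, *(int*)data, &arg);
    napi_call_function(env, global, js_cb, 1, &arg, NULL);
  }
  free(data);  /* the producer heap-allocated each item, so nothing leaks */
}
```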
src/node_api.cc
Outdated
});
});

return napi_clear_last_error(env);
I might be missing it - is there a place ts_fn gets deleted?
`ts_fn == handle` in the innermost lambda. @bnoordhuis advised me to stop making the assumption throughout the code that `&ts_fn->async == ts_fn`, but I don't believe I can avoid making that assumption when deleting the handle, because all I get from libuv is a `uv_handle_t* handle` for the innermost lambda.
Actually, I found a way of removing the assumption.
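For reference, one common way to avoid relying on `&ts_fn->async == ts_fn` (a sketch only, not necessarily the approach this PR settled on) is to walk back from the member pointer with `offsetof`, or equivalently to stash a back-pointer in the handle's `data` field:

```c
#include <stddef.h>
#include <stdlib.h>
#include <uv.h>

typedef struct {
  int placeholder;   /* the uv_async_t is deliberately NOT the first member */
  uv_async_t async;
} ts_fn_t;           /* hypothetical stand-in for the internal struct */

static void close_cb(uv_handle_t* handle) {
  /* libuv only hands us the uv_handle_t*, which points at &ts_fn->async;
   * walk back to the enclosing struct instead of assuming it comes first. */
  ts_fn_t* ts_fn = (ts_fn_t*)((char*)handle - offsetof(ts_fn_t, async));
  free(ts_fn);
}
```

Either variant lets the `uv_async_t` sit anywhere inside the owning structure.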
Bundle a `uv_async_t`, a `uv_idle_t`, a `uv_mutex_t`, a `uv_cond_t`, and a `v8::Persistent<v8::Function>` to make it possible to call into JS from another thread. The API accepts a void data pointer and a callback which will be invoked on the loop thread and which will receive the `napi_value` representing the JavaScript function to call so as to perform the call into JS. The callback is run inside a `node::CallbackScope`. A `std::queue<void*>` is used to store calls from the secondary threads, and an idle loop is started by the `uv_async_t` callback on the loop thread to drain the queue, calling into JS with each item. Items can be added to the queue blockingly or non-blockingly. The thread-safe function can be referenced or unreferenced, with the same semantics as libuv handles. Re: nodejs/help#1035 Re: #20964 Fixes: #13512 PR-URL: #17887 Reviewed-By: Matteo Collina <[email protected]> Reviewed-By: Michael Dawson <[email protected]>
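As a usage sketch of what that commit message describes, this is roughly how an addon wires the pieces together, using the signatures from the published N-API documentation (intermediate revisions in this thread differ in detail). Error checking is omitted, and the per-item `call_js` callback is assumed to be defined elsewhere, e.g. as sketched earlier.

```c
#include <node_api.h>

/* Invoked on the loop thread with each void* queued by secondary threads;
 * defined elsewhere (see the call_js sketch above). */
void call_js(napi_env env, napi_value js_cb, void* context, void* data);

static napi_value setup(napi_env env, napi_callback_info info) {
  size_t argc = 1;
  napi_value js_func, resource_name;
  napi_get_cb_info(env, info, &argc, &js_func, NULL, NULL);
  napi_create_string_utf8(env, "example", NAPI_AUTO_LENGTH, &resource_name);

  napi_threadsafe_function ts_fn;
  napi_create_threadsafe_function(env,
                                  js_func,        /* JS function to call */
                                  NULL,           /* async_resource */
                                  resource_name,  /* async_resource_name */
                                  0,              /* unlimited queue */
                                  1,              /* one producer thread */
                                  NULL, NULL,     /* thread finalize data/cb */
                                  NULL,           /* context */
                                  call_js,
                                  &ts_fn);

  /* Like a libuv handle, the thread-safe function keeps the event loop alive
   * while referenced; napi_unref_threadsafe_function(env, ts_fn) lets the
   * process exit even though producers may still hold it. */
  return NULL;
}
```

Producers then call `napi_call_threadsafe_function()` with a heap-allocated item for each event and finally `napi_release_threadsafe_function()` when they are done.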
private field 'async_context' is not used [-Wunused-private-field] PR-URL: #21597 Refs: #17887 Reviewed-By: Gabriel Schulhof <[email protected]> Reviewed-By: Daniel Bevenius <[email protected]>
Notable changes:
* build:
  * Node.js should now be about 60% faster to startup than the previous version, thanks to the use of V8's code cache feature for core modules. [#21405](#21405)
* dns:
  * An experimental promisified version of the dns module is now available. Give it a try with `require('dns').promises`. [#21264](#21264)
* fs:
  * `fs.lchown` has been undeprecated now that libuv supports it. [#21498](#21498)
* lib:
  * `Atomics.wake` is being renamed to `Atomics.notify` in the ECMAScript specification ([reference](tc39/ecma262#1220)). Since Node.js now has experimental support for worker threads, we are being proactive and added a `notify` alias, while emitting a warning if `wake` is used. [#21413](#21413) [#21518](#21518)
* n-api:
  * Add API for asynchronous functions. [#17887](#17887)
* util:
  * `util.inspect` is now able to return a result instead of throwing when the maximum call stack size is exceeded during inspection. [#20725](#20725)
* vm:
  * Add `script.createCachedData()`. This API replaces the `produceCachedData` option of the `Script` constructor that is now deprecated. [#20300](#20300)
* worker:
  * Support for relative paths has been added to the `Worker` constructor. Paths are interpreted relative to the current working directory. [#21407](#21407)

PR-URL: #21629
Bundle a `uv_async_t`, a `uv_idle_t`, a `uv_mutex_t`, a `uv_cond_t`, and a `v8::Persistent<v8::Function>` to make it possible to call into JS from another thread. The API accepts a void data pointer and a callback which will be invoked on the loop thread and which will receive the `napi_value` representing the JavaScript function to call so as to perform the call into JS. The callback is run inside a `node::CallbackScope`. A `std::queue<void*>` is used to store calls from the secondary threads, and an idle loop is started by the `uv_async_t` callback on the loop thread to drain the queue, calling into JS with each item. Items can be added to the queue blockingly or non-blockingly. The thread-safe function can be referenced or unreferenced, with the same semantics as libuv handles. Re: nodejs/help#1035 Re: #20964 Fixes: #13512 Backport-PR-URL: #25002 PR-URL: #17887 Reviewed-By: Matteo Collina <[email protected]> Reviewed-By: Michael Dawson <[email protected]>
private field 'async_context' is not used [-Wunused-private-field] PR-URL: #21597 Refs: #17887 Backport-PR-URL: #25002 Reviewed-By: Gabriel Schulhof <[email protected]> Reviewed-By: Daniel Bevenius <[email protected]>
PR-URL: #55248 Refs: #17887 Refs: #32997 Reviewed-By: Luigi Pinca <[email protected]> Reviewed-By: Jake Yuesong Li <[email protected]>
Bundle a `uv_async_t` and a `napi_ref` to make it possible to call into JS from another thread. The API accepts a void data pointer and a callback which will be invoked on the loop thread and which will receive the `napi_value` resulting from the `napi_ref` so as to perform the call into JS. The callback is run inside a `node::CallbackScope`.

Fixes: #13512
Checklist
- `make -j4 test` (UNIX) or `vcbuild test` (Windows) passes

Affected core subsystem(s)
n-api
This is an alternative, and perhaps simpler, implementation of `napi_threadsafe_function` to the one in #17809.