Conversation
The doxbee and parallel benchmarks were using a generic promisify implementation to wrap dummy callback-taking functions into promisified functions that can be used with Promise/async code. This promisify implementation doesn't match the implementation in Node.js, which is itself not portable across engines (and somewhat coupled to Node.js internals). However, with the recent performance improvements the doxbee benchmark is now dominated by this generic promisify, which means it doesn't really measure Promise performance improvements anymore. So to make sure the benchmarks are still meaningful, we change them and manually promisify the dummy functions instead. This will create a new baseline for the performance numbers, but it'll make sure that the benchmarks stay relevant.
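For context on the diff below: a generic promisify along these lines is what the benchmarks were using. This is a sketch of the usual callback-last pattern, not the exact code that was removed:

// Generic promisify: wraps any callback-taking function at runtime.
// Note the spread call on every invocation of the wrapper.
function promisify(fn) {
  return function (...args) {
    return new Promise((resolve, reject) => {
      fn(...args, (err, value) => {
        if (err) reject(err);
        else resolve(value);
      });
    });
  };
}

The diff below replaces this with dummies that return a promise directly, so no wrapper or spread call sits on the hot path.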
function dummy_1() { return Promise.resolve(undefined); }
function dummy_2(a) { return Promise.resolve(undefined); }

// a queryish object with all kinds of functions
Can you describe what "queryish" means or what it's supposed to mock?
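The thread never answers this, but presumably the benchmark mocks a database-query-like object whose methods the async code chains. A hypothetical sketch (all names invented for illustration):

// Hypothetical "queryish" mock: an object shaped like a DB query,
// with manually promisified methods the benchmark can exercise.
const queryish = {
  byId(id) { return Promise.resolve({ id }); },
  all() { return Promise.resolve([]); },
};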
Wait, why? Node.js changed promisify to use the same implementation a few months ago.
@benjamingr Meh, I was staring at the wrong checkout 🙁 I was already puzzled that it was still using the V8-internal APIs, since we had a discussion about not doing that anymore. Was there any visible performance impact when using this generic promisify?
@benjamingr Didn't we conclude in Berlin that Node.js should use the [...]
There was not - which was surprising at the time. You reviewed the changes 😅
We did, but I don't think anyone actually ended up doing it. I'd like to call a meeting of the promises-debugging group after all the people who are at JSI and the Vancouver summit are back, to get a better understanding of what works, what doesn't, where we are, and how to move forward.
@benjamingr Yeah, just went through that part of history. And I think it was the performance numbers that convinced me. Ok, so that makes a lot of sense... As far as these benchmarks here are concerned, we need to keep them useful, and right now they are turning into spread-call benchmarks, with 20-25% of the execution time just going into the promisify-wrapped function, which makes the benchmarks less relevant to actual async/promise performance. We'll have to optimize spread calls, for sure, but that'll have its own proper benchmark.
If the spread call is taking 20%-25% of the execution time, maybe we need to remove that spread call from Node? Is there a faster alternative (other than v8-extras, which means other baggage in Node core)?
It's not only the spread call, but also the fact that, due to some other limitations, the [...]. I'm not sure what the impact is on bigger applications. The only alternative I see is to use v8-extras, which should probably be faster.
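For reference, one classic way to avoid the generic spread call in userland is arity specialization, roughly the trick bluebird popularized. A sketch (not what Node core does), limited here to two leading arguments:

// Dispatch on argument count so the hot path uses plain fixed-argument
// calls instead of fn(...args, cb).
function promisifySpecialized(fn) {
  return function (a, b) {
    switch (arguments.length) {
      case 0:
        return new Promise((res, rej) => fn((e, v) => (e ? rej(e) : res(v))));
      case 1:
        return new Promise((res, rej) => fn(a, (e, v) => (e ? rej(e) : res(v))));
      default:
        return new Promise((res, rej) => fn(a, b, (e, v) => (e ? rej(e) : res(v))));
    }
  };
}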
Interesting, we've found [...]. These tests were done in ~2013 and I'm not sure if it's the case anymore.
Interesting. So the [...]
Right, and I believe the overhead of calling promisified functions is still significant in real-world apps, though that assertion needs to be re-validated.
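One quick way to re-validate that assertion on a current Node build (a rough microbenchmark sketch; absolute numbers will vary by engine and version):

const { promisify } = require('util');

// Callback-style dummy vs. a function that returns a promise directly.
function cbStyle(cb) { process.nextTick(cb, null, 42); }
const wrapped = promisify(cbStyle);
function direct() { return new Promise((resolve) => process.nextTick(resolve, 42)); }

// Time n sequential awaited calls, in milliseconds.
async function bench(fn, n) {
  const start = process.hrtime.bigint();
  for (let i = 0; i < n; i++) await fn();
  return Number(process.hrtime.bigint() - start) / 1e6;
}

(async () => {
  console.log('promisified:', await bench(wrapped, 1e5), 'ms');
  console.log('direct     :', await bench(direct, 1e5), 'ms');
})();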
What would be the best way to approach this? It seems like there is a lot of background I'm not familiar with around adding v8-extras to Node.
I don't know what's the best way to add it. The code for [...]
V8 extras functionality is exposed via the extras bindings object. @jasnell probably knows more about how Node accesses that.
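For anyone unfamiliar: per the V8 extras design doc at the time (details may have changed since), an extra is a script compiled into the snapshot, written as a single function expression that V8 invokes with the global object, a binding object, and a v8 helper object. A sketch of the shape:

// Rough shape of a V8 extra per the design doc; the embedder (e.g. Node)
// reads exports off the binding object mentioned above.
(function (global, binding, v8) {
  'use strict';
  binding.promisifiedDummy = function () {
    return Promise.resolve(undefined);
  };
})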