tools: js2c-cache #4777
Conversation
@@ -663,6 +663,7 @@ void TLSWrap::OnReadSelf(ssize_t nread,
   Local<Object> buf_obj;
   if (buf != nullptr)
     buf_obj = Buffer::New(wrap->env(), buf->base, buf->len).ToLocalChecked();
+  fprintf(stderr, "%d\n", nread);
what's going on here? accidental debug or something else? same with the return 0 below.
Yeah, accidental debug.
would we consider shipping the binary for this, with a more node-specific name?
We don't need to ship this anywhere; it is run at build time.
@@ -984,7 +988,8 @@
   NativeModule.prototype.compile = function() {
     var source = NativeModule.getSource(this.id);
-    source = NativeModule.wrap(source);
+    var cached_data = NativeModule.getSourceCachedData(this.id);
+    source = NativeModule.wrap(source, cached_data);
Looking at the definition of NativeModule.wrap 12 lines up - how is cached_data being used?
It isn't, I'm in the process of making it work.
@indutny oh oops - sorry for the noise!
@indutny so you figure that if anyone wants to make use of this kind of functionality outside of lib/*.js, like NW.js, they'll be using source and hacking Node a bit anyway?
@rvagg I haven't thought about it that far yet, and so far I'm not sure if it provides any significant speed boost after all. I need to finish it up to do benchmarks.
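For context on what "code cache" means here: V8 can serialize the result of compiling a script and later reuse that blob to skip recompilation. Below is a minimal sketch of the idea using Node's public vm module rather than the internal NativeModule hooks this PR touches; it assumes a Node version where vm.Script exposes createCachedData() and the cachedData option.

'use strict';
const vm = require('vm');

const code = 'module.exports = function add(a, b) { return a + b; };';

// First run: compile the script and serialize V8's compilation result.
const producer = new vm.Script(code, { filename: 'add.js' });
const blob = producer.createCachedData();  // Buffer containing the code cache

// A later run can hand the blob back to V8, which may then skip full
// compilation. V8 validates the blob and silently falls back to compiling
// from source if it does not match the source text or the V8 version.
const consumer = new vm.Script(code, { filename: 'add.js', cachedData: blob });
console.log('cache rejected:', consumer.cachedDataRejected);  // false when reused

The cache this PR generates at build time works on the same principle, just for the built-in lib/*.js sources embedded in the binary.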
Force-pushed from 4a34cea to 37d76fa.
Alright, I think the PR is ready now. Let's see what CI (especially the ARM bots) think about it: https://ci.nodejs.org/job/node-test-pull-request/1314/
@rvagg regarding NW.js, there is a ...
Data point: when I experimented with the preparser API a while ago (what corresponds to ...), Vyacheslav later told me it only makes a difference when the source code is 100s or 1000s of kilobytes big. Ours is in the low 100s, but it's not loaded all at once.
@bnoordhuis it seems that my small ...
@@ -984,10 +988,12 @@
   NativeModule.prototype.compile = function() {
     var source = NativeModule.getSource(this.id);
+    var cached_data = NativeModule.getSourceCachedData(this.id);
Style nit: we prefer camel case for JavaScript variables, no?
Ack.
@bnoordhuis also, it is not exactly a preparser thing. It is caching compiled code, not parser data.
Yes, I get that. I'm curious to see if it makes a real difference. Full-codegen is pretty quick, but maybe not on ARM.
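A rough way to check whether consuming a code cache "makes a real difference" on a particular machine is a hand-rolled micro-benchmark, again via the public vm module rather than this PR's build-time plumbing: compile a large source once cold, then again while supplying the cache blob, and compare the timings. The source string and sizes below are arbitrary.

'use strict';
const vm = require('vm');

// Synthesize a large-ish script so that compilation time is measurable.
const source = 'function f(i) { return i * 2; }\n'.repeat(20000);

let start = process.hrtime.bigint();
const cold = new vm.Script(source);                        // compile, no cache
const coldMs = Number(process.hrtime.bigint() - start) / 1e6;
const blob = cold.createCachedData();

start = process.hrtime.bigint();
const warm = new vm.Script(source, { cachedData: blob });  // consume the cache
const warmMs = Number(process.hrtime.bigint() - start) / 1e6;

console.log('cold: %sms, warm: %sms, cache rejected: %s',
            coldMs.toFixed(1), warmMs.toFixed(1), warm.cachedDataRejected);

The size of the gap varies with machine and V8 version, which is exactly the ARM question raised above.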
 public:
  virtual void* Allocate(size_t length) {
    void* data = AllocateUninitialized(length);
    return data == NULL ? data : memset(data, 0, length);
Everywhere else nullptr is used. Is it okay to use NULL here?
Ack.
Looks like this ...
New CI: https://ci.nodejs.org/job/node-test-pull-request/1315/ (previous run is mostly green, except Windows).
@@ -387,10 +418,26 @@ def JS2C(source, target):
   })
   output.close()


+def filter_js2c_cache(str):
Unused function?
Ack.
Gosh, one more Windows failure. Should be fixed now.
Sorry for the ignorance, but can this impact the speed of ...?
Pushed fix, new CI: https://ci.nodejs.org/job/node-test-pull-request/1323/
@benjamingr possibly not; fs overhead is very likely to be bigger than compilation overhead. But this PR opens new APIs for using the code cache in ChildProcess.
New CI with Windows support: https://ci.nodejs.org/job/node-test-pull-request/1327/ (hopefully)
One more fix, and it should be good to go: https://ci.nodejs.org/job/node-test-pull-request/1329/
    out += line

  return out
Perhaps use subprocess.Popen.communicate() to simplify? (written without testing)

output, error = p.communicate(input=line)
return output
Appears to be working locally, thank you for the suggestion.
Last CI was finally green. Please let me know if it LGTY.
@hashseed Sounds crazy. Do you mean using V8 as the VM platform for any language that can compile to this "reverse engineered bytecode"? It overlaps a bit with wasm, but I believe JS bytecode has more features than low-level wasm.
Use V8 code caching when host_arch == target_arch. This commit may slightly improve startup time, and definitely improves require times (when doing many of them).

For example, executing the following file:
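(The benchmark file itself is not reproduced in this description; as an assumed stand-in matching the "require times (when doing many of them)" claim, it would simply load a spread of core modules at startup, along these lines:)

// bench-requires.js (hypothetical stand-in, not the original file from this PR).
// Each built-in module below is a lib/*.js script that, with an embedded code
// cache, no longer has to be fully compiled by V8 at load time.
'use strict';
require('assert');
require('child_process');
require('crypto');
require('dns');
require('fs');
require('http');
require('https');
require('net');
require('os');
require('path');
require('stream');
require('tls');
require('url');
require('util');
require('zlib');

Running something like time ./node bench-requires.js against master and against this branch gives the kind of comparison quoted below.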
Takes: ~74ms on master
Takes: ~54ms with this commit

Numbers were taken on OS X with 64-bit node.js binaries.

cc @nodejs/collaborators @nodejs/v8