WIP fix performance regression #8032
Conversation
That issue should be fixed with
Yeah, we should figure out why we don't share a worker pool for a whole watch run
/cc @cpojer @rubennorte @mjesun ideas on how to avoid spinning up new workers between runs in watch mode?
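As an aside, here is a minimal sketch of what sharing one worker pool across watch-mode re-runs could look like (assuming the jest-worker default export of that era and a hypothetical ./worker module exposing a transform method; this is not code from this PR):

// worker-reuse-sketch.js (hypothetical)
const Worker = require('jest-worker').default;

// Create the pool once per watch session instead of once per (re)run, so the
// child processes keep their module registries (micromatch, braces, ...) warm.
const pool = new Worker(require.resolve('./worker'), {numWorkers: 4});

async function runOnce(testFiles) {
  // Every method exported by ./worker becomes an async method on the pool.
  return Promise.all(testFiles.map(file => pool.transform(file)));
}

async function shutdown() {
  await pool.end();
}

module.exports = {runOnce, shutdown};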
Codecov Report
@@            Coverage Diff             @@
##           master    #8032      +/-   ##
==========================================
+ Coverage   63.31%   64.13%   +0.82%
==========================================
  Files         263      263
  Lines       10266    10206      -60
  Branches     2098     1834     -264
==========================================
+ Hits         6500     6546      +46
+ Misses       3273     3253      -20
+ Partials      493      407      -86
Continue to review full report at Codecov.
@SimenB you sure about #7963 (comment) / still want me to bisect? Looking at
We've used
What I meant is: if the worker were reused, it wouldn't have to initialize the whole
But Jest 23 uses micromatch@2. You said you saw a regression between 22 and 23, but neither workers (or rather, the fact we use
Oh right, that comment was from before I had investigated in detail. I think that was just referring to the effect of #6647 that would cause reruns not to be
Ah, OK 👌
It shouldn't take that long though. Increasing
Could we downgrade micromatch now and make a patch release, and then upgrade to v4 once it is released? We are struggling with this at FB as well and have noticed the performance issues. As for not reusing workers in watch mode: I originally did this to avoid state from being shared between re-runs. I'm sure it can be done, but it seemed dangerous at the time, although I unfortunately cannot remember the specifics.
Having micromatch 2 marks the return of a CVE warning. It's also a pretty breaking change to go from 2 to 3. If we're going to hack something in before micromatch 4 comes out, could we require it on the outside and inject it in?
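A rough sketch of that "require it on the outside and inject it in" idea (hypothetical shape, not how jest-message-util is actually structured):

// matcher-injection-sketch.js (hypothetical)
// The caller requires whichever micromatch implementation it prefers (v2, v4,
// a vendored bundle, ...) exactly once and passes a match function down, so
// this module never pays the require('micromatch') cost itself.
function createStackFrameFilter({isMatch}) {
  return function shouldSkipFrame(filePath, ignorePatterns) {
    return ignorePatterns.some(pattern => isMatch(filePath, pattern));
  };
}

// Usage: inject the real implementation at the top level.
const micromatch = require('micromatch');
const shouldSkipFrame = createStackFrameFilter({isMatch: micromatch.isMatch});

module.exports = {createStackFrameFilter, shouldSkipFrame};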
I'm no lawyertician, but since it's MIT licensed, isn't vendoring a bundled version of micromatch an option? I'm not crazy about the idea, but I know styled-components has a few things they do this with. A quick test showed an okay require improvement.
$ time node -e 'require("micromatch")'
real 0m0.185s
user 0m0.177s
sys 0m0.020s
$ time node -e 'require("./bundled-micromatch")'
real 0m0.121s
user 0m0.102s
sys 0m0.020s
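For what it's worth, one way such a bundled copy could be produced (a sketch using rollup and its 2019-era plugin names; not necessarily the approach a vendoring PR would take, and the output path is made up):

// rollup.config.js (hypothetical vendoring config)
const commonjs = require('rollup-plugin-commonjs');
const resolve = require('rollup-plugin-node-resolve');

module.exports = {
  input: require.resolve('micromatch'),
  output: {file: 'bundled-micromatch/index.js', format: 'cjs'},
  // Pull micromatch and its braces dependency tree into a single file, so
  // requiring it is one module load instead of ~80.
  plugins: [resolve({preferBuiltins: true}), commonjs()],
};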
I’m fine with publishing it under @jest for now.
Do you want me to make a PR for that?
Sure @wtgtybhertgeghgtwtg, would be lovely!
What would be in this vendored bundle that solves any issues we have? Downgrading and later upgrading is a breaking change (downgrading also reintroduces a CVE warning)
It doesn't really solve anything, it'd just be a temporary measure to shave off require time without having to downgrade.
Are we down at the levels seen by the OP?
Also, it seems like micromatch@4 is super close (see the linked issue)
I can't say for sure, but I doubt it. Bundled or not, the dependency tree of
If it gets released, by all means, the vendored version should be dropped.
Opened #8046.
Closing, because this was just to prove the point about micromatch@3 being the drag
Note: This is more of a PR to discuss how to deal with this problem and not one to merge in the current form, I'd really like to avoid downgrading micromatch again.

Summary

#6783 and #7963 reported severe performance regressions, especially on reruns in watch mode.
I profiled the issue and found that the key difference is the time it takes to require the dependency tree of braces, which we use via jest-{runtime,jasmine2} > jest-message-util > micromatch. @gsteacy noticed this in #6783 (comment) but was testing reruns specifically, which had other factors such as #6647 playing into it, so the effect of micromatch was not further investigated.

micromatch 3 uses braces 2, while micromatch 2 uses braces 1. Here's some data to back up that this is the (primary) culprit for the regression.
Profile subtree for require('braces') on the example repo from #7963:
[email protected]:
[email protected]:
jest linked from this branch:
Exported cpuprofiles: https://gist.github.com/jeysal/6ed666edbb554150310d625ae4c7ee3e
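For anyone reproducing this, a comparable .cpuprofile can be captured with Node's inspector API (a sketch; not necessarily how the linked profiles were generated):

// profile-require-braces.js (sketch, assumes braces is installed locally)
const inspector = require('inspector');
const fs = require('fs');

const session = new inspector.Session();
session.connect();

session.post('Profiler.enable', () => {
  session.post('Profiler.start', () => {
    require('braces'); // the require whose cost we want to see in the profile
    session.post('Profiler.stop', (err, {profile}) => {
      // Writes a file that Chrome DevTools can open.
      fs.writeFileSync('require-braces.cpuprofile', JSON.stringify(profile));
      session.disconnect();
    });
  });
});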
Dependency trees:
micromatch@2: https://npm.anvaka.com/#/view/2d/micromatch/2.3.11 (38 nodes)
micromatch@3: https://npm.anvaka.com/#/view/2d/micromatch/3.1.10 (83 nodes)

More specifically:
braces@1: https://npm.anvaka.com/#/view/2d/braces/1.8.5 (15 nodes)
braces@2: https://npm.anvaka.com/#/view/2d/braces/2.3.2 (74 nodes)

time node -e 'require("micromatch")'
micromatch@2: ~240ms
micromatch@3: ~500ms

Reruns on #7963 / #6783: *
[email protected]: 4.5s / 1.7s
jest linked from this branch: <0.1s / <0.1s

* runInBand because the test is never <1s with the braces initialization time. It could also be fixed by reusing workers to avoid the reloading of all the runtime modules.

Test plan