
WIP fix performance regression #8032

Closed · wants to merge 1 commit
Conversation

@jeysal (Contributor) commented Mar 3, 2019

Note: this PR is meant more to discuss how to deal with this problem than to be merged in its current form; I'd really like to avoid downgrading micromatch again.

Summary

#6783 and #7963 reported severe performance regressions, especially on reruns in watch mode.
I profiled the issue and found that the key difference is the time it takes to require the dependency tree of braces, which we use via jest-{runtime,jasmine2} > jest-message-util > micromatch. @gsteacy noticed this in #6783 (comment), but was testing reruns specifically, which had other factors such as #6647 playing into them, so the effect of micromatch was not investigated further.
micromatch 3 uses braces 2, while micromatch 2 uses braces 1. Here's some data to back up that this is the (primary) culprit for the regression.

Profile subtree for require('braces') on the example repo from #7963 (profile screenshots omitted, one per setup):
[email protected]
[email protected]
jest linked from this branch
Exported cpuprofiles: https://gist.github.com/jeysal/6ed666edbb554150310d625ae4c7ee3e

Dependency trees:
micromatch@2: https://npm.anvaka.com/#/view/2d/micromatch/2.3.11 (38 nodes)
micromatch@3: https://npm.anvaka.com/#/view/2d/micromatch/3.1.10 (83 nodes)

More specifically:
braces@1: https://npm.anvaka.com/#/view/2d/braces/1.8.5 (15 nodes)
braces@2: https://npm.anvaka.com/#/view/2d/braces/2.3.2 (74 nodes)

time node -e 'require("micromatch")'
micromatch@2: ~240ms
micromatch@3: ~500ms
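(For reproducibility, a minimal sketch of how timings like the above can be taken programmatically — the helper and file name are illustrative, not part of this PR:)

```ts
// measure-require.ts — illustrative helper, not part of this change.
// Run in a fresh Node process so the require cache is cold.
import {performance} from 'perf_hooks';

function timeRequire(id: string): number {
  const start = performance.now();
  require(id); // pulls in the whole dependency tree of `id`
  return performance.now() - start;
}

// Compare the two installed versions side by side, e.g.:
console.log(`micromatch took ${timeRequire('micromatch').toFixed(1)}ms to require`);
```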

Reruns on #7963 / #6783: *
[email protected]: 4.5s / 1.7s
jest linked from this branch: <0.1s / <0.1s

* This is an indirect consequence of Jest deciding not to runInBand, because the test never finishes in under 1s once the braces initialization time is included. It could also be fixed by reusing workers to avoid reloading all the runtime modules.

Test plan

@SimenB (Member) commented Mar 3, 2019

That issue should be fixed with micromatch@4: micromatch/micromatch#137. I'll avoid nagging the poor guy, but hopefully its release is really close.

@SimenB (Member) commented Mar 3, 2019

> It could also be fixed by avoiding the reloading of all the runtime modules if we find out why a new worker is spawned on rerun.

Yeah, we should figure out why we don't share a worker pool for a whole watch run

@SimenB (Member) commented Mar 3, 2019

/cc @cpojer @rubennorte @mjesun ideas on how to avoid spinning up new workers between runs in watch mode?

@codecov-io

Codecov Report

Merging #8032 into master will increase coverage by 0.82%.
The diff coverage is n/a.


@@            Coverage Diff             @@
##           master    #8032      +/-   ##
==========================================
+ Coverage   63.31%   64.13%   +0.82%     
==========================================
  Files         263      263              
  Lines       10266    10206      -60     
  Branches     2098     1834     -264     
==========================================
+ Hits         6500     6546      +46     
+ Misses       3273     3253      -20     
+ Partials      493      407      -86
Impacted Files Coverage Δ
packages/jest-message-util/src/index.ts 67.16% <ø> (ø) ⬆️
packages/expect/src/jasmineUtils.ts 83.92% <0%> (-0.9%) ⬇️
packages/jest-haste-map/src/index.ts 83.28% <0%> (-0.05%) ⬇️
packages/jest-watcher/src/PatternPrompt.ts 0% <0%> (ø) ⬆️
packages/jest-runtime/src/index.ts 68.98% <0%> (ø) ⬆️
packages/jest-changed-files/src/index.ts 0% <0%> (ø) ⬆️
packages/jest-matcher-utils/src/index.ts 91.83% <0%> (ø) ⬆️
packages/jest-circus/src/formatNodeAssertErrors.ts 14.28% <0%> (ø) ⬆️
packages/jest-watcher/src/BaseWatchPlugin.ts 0% <0%> (ø) ⬆️
packages/jest-runner/src/index.ts 65.11% <0%> (ø) ⬆️
... and 69 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 4c25942...af60932.

@jeysal (Contributor, Author) commented Mar 3, 2019

@SimenB are you sure about #7963 (comment) / do you still want me to bisect? Looking at jest-runner, it doesn't seem like reusing workers across watch mode reruns was ever implemented; _createParallelTestRun creates a new farm every time.

@SimenB (Member) commented Mar 3, 2019

We've used jest-worker since Jest 22. So if you see a speed regression between 22 and 23 and think spinning up workers is at least partially to blame, a bisect seems like a good idea, no? Since that overhead shouldn't have changed.

@jeysal (Contributor, Author) commented Mar 3, 2019

What I meant is: if the worker were reused, it wouldn't have to initialize the whole micromatch tree again, so for reruns it wouldn't matter that that takes over a second. But it seems like reusing workers across watch reruns was never a thing, so there's no git bisect good anywhere in the history; it's new functionality we'd have to implement, correct?
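(To make the idea concrete, a rough sketch of caching the farm across watch reruns — my reading of what it could look like, assuming jest-worker's default-exported Worker class and a worker module exposing a 'worker' method; this is not actual jest-runner code:)

```ts
// Illustrative sketch only — not the actual jest-runner implementation.
import Worker from 'jest-worker';

let cachedFarm: any = null; // farm kept alive for the whole watch session

function getFarm(maxWorkers: number) {
  // Reuse the farm instead of creating a new one in every
  // _createParallelTestRun call, so workers keep their already-initialized
  // module registry (micromatch, braces, ...) between reruns.
  if (!cachedFarm) {
    cachedFarm = new Worker(require.resolve('./testWorker'), {
      exposedMethods: ['worker'], // method name assumed
      numWorkers: maxWorkers,
    });
  }
  return cachedFarm;
}

function endFarm() {
  if (cachedFarm) {
    cachedFarm.end(); // only when watch mode exits, not between reruns
    cachedFarm = null;
  }
}
```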

@SimenB (Member) commented Mar 3, 2019

But Jest 23 uses micromatch@2. You said you saw a regression between 22 and 23, but neither workers (or rather, the fact that we use jest-worker) nor micromatch changed between those versions.

@jeysal (Contributor, Author) commented Mar 3, 2019

Oh right, that comment was from before I had investigated in detail. I think it was just referring to the effect of #6647, which would cause reruns not to be runInBand. Given that that PR was in 23.4.0, my claim that this was introduced in 23.0.0 was probably also wrong; I didn't realize that installing [email protected] would still get you [email protected], [email protected], etc.
We could still consider increasing SLOW_TEST_TIME, given that we know reruns of the most trivial tests will take more than 1s on many systems with the current time it takes to set up the runtime.
Or we could of course reuse workers across reruns, which should easily push the initialization time below 1s.
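(To make the SLOW_TEST_TIME trade-off concrete, here is roughly the heuristic being discussed — an illustrative sketch, not the exact code in Jest:)

```ts
// Illustrative sketch of the in-band heuristic, not the exact Jest code.
const SLOW_TEST_TIME = 1000; // ms; raising this is the option discussed above

function shouldRunInBand(previousDurations: Array<number>): boolean {
  // If every known test finished quickly last time, worker startup cost
  // (including requiring micromatch/braces in each fresh worker) outweighs
  // the benefit of parallelism, so stay in the main process.
  return (
    previousDurations.length > 0 &&
    previousDurations.every(duration => duration < SLOW_TEST_TIME)
  );
}
```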

@SimenB (Member) commented Mar 3, 2019

> Oh right, that comment was from before I had investigated in detail.

Ah, OK 👌

> We could still consider increasing SLOW_TEST_TIME, given that we know reruns of the most trivial tests will take more than 1s on many systems with the current time it takes to set up the runtime.

It shouldn't take that long though. Increasing SLOW_TEST_TIME feels like treating the symptom, not the problem. I think waiting for micromatch 4 and looking into reusing the worker farm between runs are better ways forward. Might be wrong, though!

@cpojer (Member) commented Mar 4, 2019

Could we downgrade micromatch now and make a patch release, and then upgrade to v4 once it is released? We are struggling with this at FB as well and have noticed the performance issues.

As for not reusing workers in watch mode: I originally did this to avoid state being shared between reruns. I'm sure it can be done, but it seemed dangerous at the time, although I unfortunately cannot remember the specifics.

@SimenB (Member) commented Mar 4, 2019

Having micromatch 2 marks the return of a CVE warning. It's also pretty breaking to go from 2 to 3.

If we're gonna hack something before micromatch 4 comes out, could we require it on the outside and inject it in?
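(One way to read "require it on the outside and inject it in" — a hypothetical sketch, not something proposed in this PR: let jest-message-util accept a glob-matching function as an option instead of importing micromatch itself, so only callers that actually need glob filtering pay the require cost.)

```ts
// Hypothetical sketch; the option shape and function names are assumptions.
type GlobMatcher = (
  paths: Array<string>,
  patterns: Array<string>,
) => Array<string>;

interface FormatOptions {
  rootDir: string;
  // The caller supplies the matcher, so jest-message-util itself never
  // requires micromatch (and therefore never loads braces) up front.
  matchPaths?: GlobMatcher;
}

function filterStackPaths(
  paths: Array<string>,
  options: FormatOptions,
): Array<string> {
  if (!options.matchPaths) {
    return paths; // no matcher injected: skip glob-based filtering
  }
  return options.matchPaths(paths, ['!**/node_modules/**']);
}
```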

@SimenB mentioned this pull request Mar 4, 2019
@wtgtybhertgeghgtwtg (Contributor)

I'm no lawyertician, but since it's MIT licensed, isn't vendoring a bundled version of micromatch an option? I'm not crazy about the idea, but I know styled-components has a few things they do this with. A quick test showed an okay require improvement.

$ time node -e 'require("micromatch")'

real    0m0.185s
user    0m0.177s
sys     0m0.020s
$ time node -e 'require("./bundled-micromatch")'

real    0m0.121s
user    0m0.102s
sys     0m0.020s
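(For context, pre-bundling could look roughly like this — a sketch using Rollup with the node-resolve and commonjs plugins; the plugin names, output path, and overall approach are assumptions, and the eventual PR may do it differently:)

```ts
// rollup.config.ts — illustrative only.
import commonjs from 'rollup-plugin-commonjs';
import resolve from 'rollup-plugin-node-resolve';

export default {
  // Bundle micromatch and its whole dependency tree into a single CJS file,
  // trading one bigger file for far fewer module resolutions at require time.
  input: 'node_modules/micromatch/index.js',
  output: {
    file: 'bundled-micromatch/index.js', // path assumed from the timing above
    format: 'cjs',
  },
  plugins: [resolve({preferBuiltins: true}), commonjs()],
};
```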

@cpojer (Member) commented Mar 4, 2019 via email

@wtgtybhertgeghgtwtg (Contributor)

Do you want me to make a PR for that?

@thymikee (Collaborator) commented Mar 4, 2019

Sure @wtgtybhertgeghgtwtg, would be lovely!

@SimenB (Member) commented Mar 4, 2019

What would be in this vendored bundle that solves any issues we have? Downgrading and later upgrading is a breaking change (downgrading also reintroduces a CVE warning).

@wtgtybhertgeghgtwtg (Contributor)

It doesn't really solve anything; it'd just be a temporary measure to shave off require time without having to downgrade.

@SimenB (Member) commented Mar 4, 2019

Are we down to the levels seen by OP?

> Reruns on #7963 / #6783: *
> [email protected]: 4.5s / 1.7s
> jest linked from this branch: <0.1s / <0.1s

Also, it seems like micromatch@4 is super close (see the linked issue).

@wtgtybhertgeghgtwtg (Contributor)

> Are we down to the levels seen by OP?

I can't say for sure, but I doubt it. Bundled or not, the dependency tree of micromatch@3 means it's going to load slower. Manual optimization is an option (e.g. yanking out copies of kind-of), but unless it's run against micromatch's test suite, that's probably not a good idea.

> Also, it seems like micromatch@4 is super close (see the linked issue)

If it gets released, by all means, the vendored version should be dropped.

@wtgtybhertgeghgtwtg (Contributor)

Opened #8046.

@thymikee (Collaborator)

Closing, because this was just to prove the point about micromatch@3 being the drag.

@thymikee closed this Mar 18, 2019
@github-actions (bot)

This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Please note this issue tracker is not a help forum. We recommend using StackOverflow or our discord channel for questions.

@github-actions bot locked as resolved and limited conversation to collaborators May 11, 2021