This repository has been archived by the owner on Jul 3, 2019. It is now read-only.
pacote is built for performance. Performance is meaningless without benchmarks and profiling. So. We need benchmarks.
There should be benchmarks for each of the supported types (note: only registry ones are needed for 0.2.0), with small, medium, and large packages (including some variation for number of files vs size of individual files). All of these for both manifest-fetching and extraction.
We should make sure the benchmark runs hit the following cases too, for each of the groups described above:
- no shrinkwrap, tarball extract required
- no shrinkwrap, but with `pkg._hasShrinkwrap === false` (so no extract)
- has shrinkwrap, with alternative fetch (so, an endpoint, git-host, etc)
- has shrinkwrap, tarball extract required
- cached data, no memoization (`lib/cache` exports a `_clearMemoized()` fn for this purpose)
- memoized manifest data (tarballs are not memoized)
- cached data for package needing shrinkwrap fetch
- memoized data for package needing shrinkwrap fetch
- stale cache data (so, 304s)
- concurrency of 50-100 for all of the above, to check for contention and starvation (this is usually what the CLI will set its concurrency to)
https://npm.im/benchmark does support async stuff and seems like a reasonable base to build this suite upon.
Marking this as `starter` because while it's likely to take some time to write, you need relatively little context to be able to write some baseline benchmarks for the above cases. The actual calls are literally all variations of `pacote.manifest()` and `pacote.extract()` calls: that's the level these benchmarks should run at, rather than any internals. At least for now.
I would also say that comparing benchmark results across different versions automatically is just a stretch goal, because the most important bit is to be able to run these benchmarks at all.
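As a starting point, the matrix of cases above can be enumerated mechanically so no combination gets forgotten. A sketch, assuming plain Node: the scenario names are made up for illustration, the `run` callback is a placeholder where the real `pacote.manifest()`/`pacote.extract()` call (and a timing harness such as benchmark) would plug in, and a real suite would filter out combinations that don't apply (e.g. memoization never applies to extract, since tarballs are not memoized).

```javascript
// Enumerate the benchmark matrix: package size x scenario x operation.
// Each entry becomes one named benchmark case in the eventual suite.
const sizes = ['small', 'medium', 'large'];
const scenarios = [
  'no-shrinkwrap-extract',
  'no-shrinkwrap-flagged',          // pkg._hasShrinkwrap === false
  'shrinkwrap-alt-fetch',
  'shrinkwrap-extract',
  'cached-unmemoized',              // after _clearMemoized()
  'memoized-manifest',
  'cached-needs-shrinkwrap-fetch',
  'memoized-needs-shrinkwrap-fetch',
  'stale-cache-304',
];
const operations = ['manifest', 'extract'];

function buildCases() {
  const cases = [];
  for (const size of sizes) {
    for (const scenario of scenarios) {
      for (const op of operations) {
        cases.push({
          name: `${op}: ${size} / ${scenario}`,
          // Placeholder: a real case would set up cache/memoization state
          // and then call pacote.manifest() or pacote.extract().
          run: async () => {},
        });
      }
    }
  }
  return cases;
}

console.log(`${buildCases().length} cases`); // 3 sizes x 9 scenarios x 2 ops = 54
```

Keeping the matrix as data also makes it easy to later diff results across versions, case by case.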