This repository has been archived by the owner on Dec 1, 2024. It is now read-only.

Remove fast-future #638

Merged
vweevers merged 6 commits into master from remove-fast-future on Jun 14, 2019

Conversation

vweevers
Member

We can remove fast-future, because the iterator's cache mechanism already prevents event loop starvation (as mentioned by @peakji in #327 (comment)) - unless you have single-byte keys and values. To handle that rare case we can move the internal counter of fast-future into leveldown's C++: it either stops iterating when it hits the highWaterMark (in bytes) or when the cache holds 1000 entries.
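A minimal sketch of that dual stop condition, assuming illustrative names and a 16 KiB highWaterMark (the real check would live in leveldown's C++ iterator loop):

```js
// Hypothetical sketch of the dual limit described above. The constant
// names and the 16 KiB figure are assumptions, not leveldown's actual code.
const HIGH_WATER_MARK = 16 * 1024 // bytes
const MAX_CACHE_LENGTH = 1000 // entries

function shouldStopIterating (bytesRead, cacheLength) {
  // Yield to the event loop once either limit is hit, so that
  // single-byte keys and values cannot starve it.
  return bytesRead >= HIGH_WATER_MARK || cacheLength >= MAX_CACHE_LENGTH
}
```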

@vweevers vweevers added the cleanup (Housekeeping) and semver-patch (Bug fixes that are backward compatible) labels May 27, 2019
@vweevers vweevers requested review from ralphtheninja and peakji May 27, 2019 22:26
@vweevers vweevers marked this pull request as ready for review May 27, 2019 22:26
@peakji peakji (Member) left a comment


I was thinking about the exact same thing! 😂

@peakji
Member

peakji commented May 28, 2019

After reading the iterator code again, I think we can further optimize the cache mechanism:

  1. Do not reverse the cache array in C++ land (which might produce "holes" and slow things down?).
  2. Instead of calling .pop() on the cache array, we could just use an offset to access elements and leave the cache array immutable (see the sketch after this list).

Maybe this will give us a performance gain?
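A minimal sketch of idea 2, with hypothetical names (the actual cache lives in leveldown's iterator):

```js
// Hypothetical sketch: consume the cache through a moving offset instead
// of .pop(), so the array is never reversed or mutated.
function createCacheReader (cache) {
  let offset = 0
  return function next () {
    if (offset >= cache.length) return undefined // cache exhausted
    return cache[offset++] // read entries in insertion order
  }
}

// Usage:
// const read = createCacheReader(['key1', 'value1', 'key2', 'value2'])
// read() // => 'key1'
```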

@vweevers
Member Author

Lemme try to write a benchmark, so we can try those ideas. I'm thinking this benchmark will first insert a small data set (say 10K records), trigger a manual compaction, read once from start to end as a warmup, and then repeatedly read it from start to end.
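Something along these lines, perhaps (a rough sketch against leveldown's callback API; the location, record count and key format are placeholders):

```js
const leveldown = require('leveldown')
const db = leveldown('/tmp/benchmark-db')

// Insert a small data set of 10K records
function populate (callback) {
  const ops = []
  for (let i = 0; i < 10000; i++) {
    const key = String(i).padStart(5, '0')
    ops.push({ type: 'put', key: key, value: 'value-' + key })
  }
  db.batch(ops, callback)
}

// Read the whole data set from start to end
function readAll (callback) {
  const it = db.iterator()
  function loop () {
    it.next(function (err, key, value) {
      if (err) return callback(err)
      if (key === undefined && value === undefined) return it.end(callback)
      loop()
    })
  }
  loop()
}

db.open(function (err) {
  if (err) throw err
  populate(function (err) {
    if (err) throw err
    // Trigger a manual compaction so all reads hit a stable data set
    db.compactRange('00000', '09999', function (err) {
      if (err) throw err
      readAll(function (err) { // warmup pass
        if (err) throw err
        console.time('full read') // repeat this timed pass as needed
        readAll(function (err) {
          if (err) throw err
          console.timeEnd('full read')
        })
      })
    })
  })
})
```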

@vweevers vweevers added the benchmark (Requires or pertains to benchmarking) label May 30, 2019
@peakji peakji mentioned this pull request May 31, 2019
@peakji
Member

peakji commented Jun 13, 2019

Shall we merge this PR so that I can rebase #640?

@vweevers
Member Author

> Shall we merge this PR so that I can rebase #640?

I wanted to benchmark this first, but seeing as I'm not getting around to that, maybe yeah

@peakji
Member

peakji commented Jun 14, 2019

> I wanted to benchmark this first, but seeing as I'm not getting around to that, maybe yeah

I've benchmarked these branches with an old script: https://gist.github.com/peakji/0fd6c1529951767697480e708806ae33#file-benchmark-js

After running the script multiple times, the results show that the performance of remove-fast-future is almost identical to master (as expected), while immutable-cache-array is noticeably slower.

|     | master | remove-fast-future | immutable-cache-array |
| --- | --- | --- | --- |
| SUM | 36549118185 | 36484380542 | 38067624251 |
| CNT | 10000 | 10000 | 10000 |
| AVG | 3654911.8185 | 3648438.0542 | 3806762.4251 |
| MIN | 932207 | 905891 | 911134 |
| MAX | 9835033 | 9715972 | 12222045 |

@peakji
Member

peakji commented Jun 14, 2019

I think it's safe to merge #638, and reject #640.

@vweevers vweevers merged commit ebf9089 into master Jun 14, 2019
@vweevers vweevers deleted the remove-fast-future branch June 14, 2019 06:39
@vweevers
Member Author

@peakji I expected remove-fast-future to be very slightly faster because it allocates fewer functions.

As a next step, I think Level/community#70 is worth exploring.

@peakji
Member

peakji commented Jun 14, 2019

> I expected remove-fast-future to be very slightly faster because it allocates fewer functions.

Actually it is 0.17% faster! 😄

> As a next step, I think Level/community#70 is worth exploring.

👍 Shall we tag a new release first (maybe including #642)?

@vweevers
Member Author

5.1.1
