
buffer: optimize Buffer.byteLength #1713

Closed

Conversation

brendanashworth
Contributor

Buffer.byteLength is called whenever a new string Buffer is created.
UTF8 is used as the default encoding, and base64 is also popular. These
calculations must be fast, as they make up a relatively significant
part of Buffer instantiation.

This commit moves the Buffer.byteLength calculations for base64 and
UTF8 out of C++ land and entirely into JS-land.

It also adds a benchmark for both encodings; the improvements hover
around 40-60% for UTF8 strings and 170% for base64.

@brendanashworth brendanashworth added the buffer Issues and PRs related to the buffer subsystem. label May 16, 2015

bench.start();
for (var i = 0; i < n; i++) {
  Buffer.byteLength(str, 'utf8');
Contributor Author

utf8 is passed explicitly because byteLength has no default encoding; it's used here since utf8 is the default Buffer encoding, though.

Contributor

This should probably be modified so that str is different for every iteration of the loop. Otherwise the JIT will most likely "overly optimize" execution because it's seeing the exact same input every time, which may not give reliable/accurate benchmark results.

Contributor Author

Good tip - would a few strings (like 4) work? I think that if they were generated newly for each run of the loop it'd take up the majority of the runtime.

Contributor

Just cycling through the 4 strings you already have defined would probably work.
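
A minimal sketch of that cycling, assuming a predefined chars array covering different UTF-8 byte widths and the benchmark harness's bench/n (illustrative, not the exact patch):

var chars = [
  'hello brendan!!!', // 1-byte characters
  'ΰαβγδεζηθικλμνξο', // 2-byte characters
  '挰挱挲挳挴挵挶挷', // 3-byte characters
  '𠜎𠜱𠝹𠱓𠱸𠲖𠳏𠳕'  // 4-byte characters
];

bench.start();
for (var i = 0; i < n; i++) {
  // Cycle through predefined inputs so the JIT sees varying strings
  // without per-iteration string generation dominating the runtime.
  Buffer.byteLength(chars[i % chars.length], 'utf8');
}
bench.end(n);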

Contributor Author

sweet, I'll do that + the mixed one, writing a patch

@brendanashworth
Contributor Author

Pushed up a patch, @mscdex; the benchmark numbers now sit comfortably at 150-330%.

Also, should I default the JS encoding to utf8, since the C++ does it? It looks like it interacts with a v8 API I'm not familiar with, though, so I'm unsure.

@brendanashworth
Contributor Author

New findings: ParseEncoding belongs to node, and the only format it parses that the JS byteLength doesn't is base64. I may remove the ParseEncoding call altogether and set the default to utf8 in JS; that way it'll optimize a base64 length call too.


bench.start();
for (var i = 0; i < n; i++) {
  Buffer.byteLength(chars[0], 'utf8');
Contributor

Can you add guards to these? E.g. if (Buffer.byteLength(chars[0], 'utf8') !== 16) throw ...

Contributor Author

Sure, I'll add a mini-test in the beginning, outside the loop.

Contributor

I mean inside the loop, to avoid unrealistic optimizations; byteLength looks simple enough that v8 could determine it's a pure function, if not now then in the future.

Contributor Author

Alright, I'll do some assert()s. :)

Contributor Author

In this new commit I'm pushing up, I've changed the benchmark a little bit, and it now has 8 different possibilities for each run in the loop. I'm unsure whether asserting in-loop is still the best choice.

Contributor

The point of the guards is not to be a test but to make the benchmark less susceptible to irrelevant optimizations like LICM and DCE. There is no reason to use assert, and the checks must be done inside the loop; otherwise they are ineffective.
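
A sketch of the kind of in-loop guard being described, where expected is a hypothetical array of precomputed lengths matching the chars inputs:

bench.start();
for (var i = 0; i < n; i++) {
  // Checking the result inside the loop keeps V8 from hoisting the
  // call out of the loop (LICM) or dropping it as dead code (DCE).
  var r = Buffer.byteLength(chars[i % 4], encoding);
  if (r !== expected[i % 4])
    throw new Error('unexpected byteLength result');
}
bench.end(n);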

@brendanashworth
Contributor Author

Force pushed a commit:

  • add base64 byte length calculation in JS
  • remove process.binding calls entirely, mimicking the C++ behavior
  • style fixes (thanks @bnoordhuis)
  • benchmark strings more spread out (thanks @mscdex)
  • handle case-insensitive encoding names in JS (ugh backwards compatibility)

I added a note in node_buffer.cc about how ByteLength is no longer called from JS. Can the function be removed, or can it still be called from C++ APIs?

Also, pending @petkaantonov's feedback on ensuring the functionality works correctly in the benchmark.

I think the function is now polymorphic in the benchmark or something, because the UTF8 benchmark numbers have dropped significantly. The base64 numbers are looking great though:

encoding=utf8 len=4: bleed: 4045000 iojs: 2525900 ....... 60.14%
encoding=utf8 len=16: bleed: 4064900 iojs: 2929400 ...... 38.76%
encoding=utf8 len=64: bleed: 4057000 iojs: 2886500 ...... 40.55%
encoding=utf8 len=256: bleed: 4102100 iojs: 2895900 ..... 41.65%
encoding=utf8 len=1024: bleed: 4084700 iojs: 2849100 .... 43.37%
encoding=base64 len=4: bleed: 4074900 iojs: 1537400 .... 165.05%
encoding=base64 len=16: bleed: 4191600 iojs: 1532200 ... 173.57%
encoding=base64 len=64: bleed: 4128100 iojs: 1471200 ... 180.60%
encoding=base64 len=256: bleed: 4119900 iojs: 1535500 .. 168.30%
encoding=base64 len=1024: bleed: 4190700 iojs: 1535700 . 172.89%


bench.start();
for (var i = 0; i < n; i++) {
  Buffer.byteLength(chars[n % 4], encoding);
Member

If you change the utf8 and base64 objects to arrays, you can replace the hard-coded constant with n % inputs.length.

Contributor Author

Done!

@brendanashworth
Contributor Author

Pushed another commit, removing the node_buffer::ByteLength function, optimizing away the toLowerCase() calls, and making the benchmark better (thanks @petkaantonov and @bnoordhuis). The benchmark now shows the new base64 implementation is a lot faster, but I'm struggling to keep larger UTF8 strings (256 characters and above) faster in the JS implementation. Smaller strings are still faster.

encoding=utf8 len=1: bleed: 5711000 iojs: 2265700 ... 152.06%
encoding=utf8 len=2: bleed: 3900500 iojs: 2028200 .... 92.32%
encoding=utf8 len=4: bleed: 2538000 iojs: 1669100 .... 52.05%
encoding=utf8 len=16: bleed: 808850 iojs: 979290 .... -17.40%
encoding=utf8 len=64: bleed: 219440 iojs: 370360 .... -40.75%
encoding=base64 len=1: bleed: 5886000 iojs: 1149100 . 412.24%
encoding=base64 len=2: bleed: 5993400 iojs: 1155800 . 418.54%
encoding=base64 len=4: bleed: 5750000 iojs: 1157100 . 396.94%
encoding=base64 len=16: bleed: 6203100 iojs: 929540 . 567.33%
encoding=base64 len=64: bleed: 5916100 iojs: 955250 . 519.33%

Maybe someone has a tip for the UTF8 function?

brendanashworth added a commit to brendanashworth/io.js that referenced this pull request May 17, 2015
Previously, there were very few direct tests for Buffer.byteLength(). I
realized I had introduced some breaking changes in nodejs#1713 that the
tests didn't catch, so I've now covered almost every possible corner of
the code here.

It also takes any other byteLength tests from test-buffer and moves
them.
@brendanashworth
Contributor Author

Shoot, just realized the new UTF8 parser introduces some breaking changes not covered by the test suite. I'll PR / add a new commit to this PR with the new tests and figure out that darn function 😄


// Handle uppercased encodings
if (encoding !== encoding.toLowerCase())
  return byteLength(string, encoding.toLowerCase());
Member

Can I suggest you cache the value of encoding.toLowerCase() here, to avoid calling it twice?

Member

Aside: I'm not sure if we have guidelines on whether or not recursion is allowed. I made Buffer#write() iterative instead of recursive to avoid a (probably academic) stack overflow exception when it's getting called when the stack is almost full.

Contributor Author

I've cached the output. Would you like me to switch to a loop, like you did?

Member

If nothing else, it'd be consistent with what Buffer#write() does.

Contributor Author

Okay, sounds good, I'll make the change.
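
For reference, a sketch of the iterative shape this might take, using a loweredCase flag instead of recursion; base64ByteLength and binding.byteLengthUtf8 stand in for the helpers discussed elsewhere in this PR, and other encodings are elided:

function byteLength(string, encoding) {
  if (typeof string !== 'string')
    string = '' + string;

  var loweredCase = false;
  for (;;) {
    switch (encoding) {
      case 'ascii':
      case 'binary':
        return string.length;
      case 'utf8':
      case 'utf-8':
        return binding.byteLengthUtf8(string);
      case 'base64':
        return base64ByteLength(string, string.length);
      default:
        if (loweredCase)
          return binding.byteLengthUtf8(string); // treat unknown as utf8
        // Lowercase once, then retry the switch instead of recursing.
        encoding = ('' + encoding).toLowerCase();
        loweredCase = true;
    }
  }
}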

Buffer.byteLength is called whenever a new string Buffer is created.
UTF8 is used as the default encoding, and base64 is also popular. These
calculations must be fast, as they make up a relatively significant
part of Buffer instantiation.

This commit moves the Buffer.byteLength calculations for base64 and
UTF8 out of C++ land and entirely into JS-land. It also removes the
ByteLength function on the C++ Buffer.

It also adds a benchmark for both encodings; the UTF8 improvements vary
a lot, but base64 is about
case 'binary':
// Deprecated
case 'raw':
case 'raws':
Contributor Author

This was supported by the C++ version, but isn't currently supported by Buffer.isEncoding.

Member

FWIW, I think 'raw' and 'raws' have been deprecated since v0.2 or v0.3. I've never seen them being used in the wild either.

Contributor Author

I can follow this PR up with a deprecation commit (to keep this one semver-patch) - technically speaking, only 'raws' was deprecated for this function, not 'raw':

> Buffer.byteLength('abc', 'raw')
3
> Buffer.byteLength('abc', 'raws')
'raws' encoding has been renamed to 'binary'. Please update your code.
3

@brendanashworth
Contributor Author

OKAY! I think I got it this time. I've added a relatively large test suite (because it was pretty much untested before), improved the base64 path a little, and completely ditched the JS UTF8 length function (it is very hard to calculate the length when v8 uses UTF-16 for strings and reports a 4-byte character as two 3-byte characters, all while trying to beat C++). I've instead opted to optimize out some calculations on the C++ side so it only returns the string's ->Utf8Length(), yielding some pretty fair perf increases all around, though they taper off as the size increases.
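
To illustrate the UTF-16 wrinkle (a general fact about V8 strings, not code from this PR): a 4-byte UTF-8 character occupies two UTF-16 code units, so per-code-unit arithmetic overcounts.

var s = '𠜎';
console.log(s.length);              // 2 (UTF-16 code units)
console.log(Buffer.byteLength(s));  // 4 (UTF-8 bytes)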

encoding=utf8 len=1: bleed: 9909300 iojs: 3971800 ..... 149.49%
encoding=utf8 len=2: bleed: 7524900 iojs: 3583800 ..... 109.97%
encoding=utf8 len=4: bleed: 5155400 iojs: 2907300 ...... 77.32%
encoding=utf8 len=16: bleed: 2261600 iojs: 1757700 ..... 28.67%
encoding=utf8 len=64: bleed: 709970 iojs: 668300 ........ 6.23%
encoding=utf8 len=256: bleed: 195430 iojs: 198300 ...... -1.44%
encoding=base64 len=1: bleed: 9585300 iojs: 2248500 ... 326.31%
encoding=base64 len=2: bleed: 8675000 iojs: 2197100 ... 294.83%
encoding=base64 len=4: bleed: 8766100 iojs: 2191400 ... 300.03%
encoding=base64 len=16: bleed: 8791800 iojs: 1772700 .. 395.96%
encoding=base64 len=64: bleed: 9025300 iojs: 1777600 .. 407.73%
encoding=base64 len=256: bleed: 8777600 iojs: 1195800 . 634.05%

Local<String> s = args[0]->ToString(env->isolate());
enum encoding e = ParseEncoding(env->isolate(), args[1], UTF8);
// Fast case: avoid StringBytes on UTF8 string. Jump to v8.
Local<String> str = args[0]->ToString(env->isolate());
Contributor

It's guaranteed to be a string, so args[0].As<String>() should be used.

Contributor Author

Can I drop the Environment::GetCurrent above then, or must that be kept?

Contributor

Yeah, the string check and env can be dropped; this is always called with a string from JS.

Member

I would leave in a CHECK(args[0]->IsString()) to catch bugs.


uint32_t size = StringBytes::Size(env->isolate(), s, e);
args.GetReturnValue().Set(size);
args.GetReturnValue().Set(len);
}
Member

I think you can reduce this function to just:

void ByteLengthUtf8(const FunctionCallbackInfo<Value>& args) {
  CHECK(args[0]->IsString());
  args.GetReturnValue().Set(args[0].As<String>()->Utf8Length());
}

EDIT: s/UTF8/Utf8/ for consistency with other functions/types operating on UTF-8.

@brendanashworth brendanashworth added the c++ Issues and PRs that require attention from people who are familiar with C++. label May 18, 2015
@trevnorris
Contributor

Looks alright to me. Will the new base64ByteLength() possibly over-allocate like the previous implementation does?

@brendanashworth
Contributor Author

@trevnorris I don't think so. I used the old algorithm as reference, and the new one doesn't include the extra bytes like the old one did. Otherwise, they're almost the same algorithm.
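
A sketch of a padding-aware calculation along those lines; it matches the test expectations later in this thread, though it isn't necessarily the exact code that landed:

function base64ByteLength(str, bytes) {
  // Trim up to two trailing '=' padding characters (0x3D).
  if (str.charCodeAt(bytes - 1) === 0x3D)
    bytes--;
  if (bytes > 1 && str.charCodeAt(bytes - 1) === 0x3D)
    bytes--;
  // Every 4 base64 characters encode 3 bytes.
  return (bytes * 3) >>> 2;
}

base64ByteLength('aGVsbG8gd29ybGQ=', 16);  // 11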

if (typeof(string) !== 'string')
  string = String(string);
if (typeof string !== 'string')
  string = '' + string;
Contributor

Is this a change in functionality? ByteLength() would throw if a non-string was passed.

Contributor

Nope, String(val) also does coercion. We seem to prefer '' + val though.
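
Both spellings coerce rather than throw, e.g.:

String(42);    // '42'
'' + 42;       // '42'
String(null);  // 'null'
'' + null;     // 'null'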

Contributor

@bnoordhuis do we care if node aborts if someone does:

process.binding('buffer').byteLengthUtf8(42);

Or do we want to continue throwing?

Member

I don't think so. Fooling around with process.binding() voids the warranty.

@trevnorris
Contributor

Had my last question confirmed by @bnoordhuis. LGTM.

@brendanashworth
Contributor Author

@trevnorris Thanks!

The original CI is mostly out of date. Should I re-run it?

@Fishrock123
Contributor

@brendanashworth yes please.

@brendanashworth
Contributor Author

The failures on Windows / ARM / CentOS 5 look unrelated; the CI is otherwise happy.

@brendanashworth
Contributor Author

As the PR stands, for UTF8 and base64 'hello, world!' Buffer creations respectively:

# this PR
utf8 x 2,208,160 ops/sec ±0.75% (93 runs sampled)
base64 x 1,881,927 ops/sec ±0.73% (93 runs sampled)
# v2.0.1
utf8 x 1,502,424 ops/sec ±1.10% (95 runs sampled)
base64 x 969,291 ops/sec ±0.75% (95 runs sampled)
# v0.10.38
utf8 x 659,515 ops/sec ±0.37% (99 runs sampled)
base64 x 408,662 ops/sec ±0.53% (97 runs sampled)

assert.equal(Buffer.byteLength('aGVsbG8gd29ybGQ=', 'base64'), 11);
assert.equal(Buffer.byteLength('bm9kZS5qcyByb2NrcyE=', 'base64'), 14);
assert.equal(Buffer.byteLength('aGkk', 'base64'), 3);
assert.equal(Buffer.byteLength('bHNrZGZsa3NqZmtsc2xrZmFqc2RsZmtqcw==', 'base64'), 25);
Member

Long line.

@bnoordhuis
Member

LGTM with a style nit.

@brendanashworth
Contributor Author

Thanks @bnoordhuis, I shall merge in the morning.

brendanashworth added a commit that referenced this pull request May 22, 2015
Buffer.byteLength is important for speed because it is called whenever a
new Buffer is created from a string.

This commit optimizes Buffer.byteLength execution by:
- moving the base64 length calculation into JS-land, which is now much
  faster
- removing redundant code and streamlining the UTF8 length calculation

It also adds a benchmark and better tests.

PR-URL: #1713
Reviewed-By: Trevor Norris <[email protected]>
Reviewed-By: Ben Noordhuis <[email protected]>
@brendanashworth
Contributor Author

Thank you reviewers! Landed in 9da168b. (I had to push up a small commit to fix a new failure on HEAD).

@rvagg rvagg mentioned this pull request May 23, 2015
andrewdeandrade pushed a commit to andrewdeandrade/node that referenced this pull request Jun 3, 2015
@ChALkeR ChALkeR added the performance Issues and PRs related to the performance of Node.js. label Feb 16, 2016