So, I'm starting up my very first HTML5 browser-based client-side project. It's going to have to parse very, very large
Have you eliminated the reading of `.length` from your benchmark results?
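Concretely, what I mean is something like this (just a sketch; the names and sizes are arbitrary): keep the `.length` read outside the timed region so the measurement reflects the string operation itself.

```js
// Sketch: time the string operation alone, and read .length only after
// the timed window has closed, so the two costs are not conflated.
const someParts = Array.from({ length: 100000 }, (_, i) => 'chunk' + i);

const t0 = performance.now();
const s = someParts.join('');            // the operation being measured
const t1 = performance.now();

console.log('join took', (t1 - t0).toFixed(2), 'ms; length =', s.length);
```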
I believe V8 has a few representations of a string:
1. a sequence of ASCII bytes,
2. a sequence of UTF-16 code units,
3. a slice of a string (the result of `substring`),
4. a concatenation of two strings.

Number 4 is what makes `string +=` efficient.
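A rough sketch of what that buys you (this is my mental model, not something I've verified against the V8 internals): a loop like the one below can build a tree of form-(4) nodes instead of copying the accumulated buffer on every iteration.

```js
// Sketch: repeated += on an accumulator. Under the cons-string model,
// each += would allocate a small node pointing at the two halves rather
// than copying `out`, keeping the loop roughly linear in the output size.
let out = '';
for (let i = 0; i < 100000; i++) {
  out += 'line ' + i + '\n';
}
// The flat buffer would presumably only be materialized when something
// needs contiguous characters, e.g. an indexOf over the whole string.
console.log(out.length, out.indexOf('line 99999'));
```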
I'm just guessing, but if they're trying to pack two string pointers and a length into a small space, they may not be able to cache large lengths alongside the pointers, so they may end up walking the linked list of joined parts in order to compute the length. This assumes, of course, that `Array.prototype.join` creates strings of form (4) from the array parts.
It does lead to a testable hypothesis which would explain the discrepancy even absent buffer copies.
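If someone wants to poke at that, a rough sketch of the kind of test I have in mind (names and sizes arbitrary): build the same large string once via `Array.prototype.join` and once via `+=`, and time the first `.length` read separately from the construction itself.

```js
// Hypothetical test of the "computing .length walks the parts" guess.
// If the guess holds, the first .length read on the +=-built string
// (a cons tree) should cost noticeably more than on the join-built one.
function buildParts(n) {
  const parts = [];
  for (let i = 0; i < n; i++) parts.push('x'.repeat(32));
  return parts;
}

function timeLengthRead(label, build) {
  const t0 = performance.now();
  const s = build();
  const t1 = performance.now();
  const len = s.length;                  // the read under test
  const t2 = performance.now();
  console.log(label, 'build:', (t1 - t0).toFixed(2), 'ms,',
              '.length:', (t2 - t1).toFixed(3), 'ms,', 'len =', len);
}

const parts = buildParts(200000);
timeLengthRead('join', () => parts.join(''));
timeLengthRead('+=  ', () => { let s = ''; for (const p of parts) s += p; return s; });
```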
EDIT:
I looked through the V8 source code, and `StringBuilderConcat` is where I would start pulling, especially in `runtime.cc`.
In the case of SpiderMonkey (the JS engine in Firefox), a `substring()` call just creates a new "dependent string": a string object that stores a pointer to the thing it's a substring of, plus the start and end offsets. This is precisely to make `substring()` fast, and it is an obvious optimization given immutable strings.
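One indirect way to see this from script (a sketch, nothing authoritative): under a dependent-string scheme, taking a substring of a huge string should cost roughly the same no matter how many characters the result covers, because nothing is copied.

```js
// Sketch: if substring() only records (parent, start, end), its cost should
// be roughly independent of how long the resulting substring is.
const big = 'abcdefghij'.repeat(1000000);   // ~10 million characters
let sink = 0;                               // keep results observable

for (const len of [10, 100000, 5000000]) {
  const t0 = performance.now();
  for (let i = 0; i < 10000; i++) {
    sink += big.substring(0, len).charCodeAt(0);
  }
  const t1 = performance.now();
  console.log('substring of', len, 'chars:', (t1 - t0).toFixed(2), 'ms for 10000 calls');
}
console.log(sink);
```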
As for why V8 does not do that... A possibility is that V8 is trying to save space: in the dependent string setup if you hold on to the substring but forget the original string, the original string can't get GCed because the substring is using part of its string data.
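To make the trade-off concrete (a hypothetical usage pattern, not anything from the question): under a dependent-string scheme, code like the following would keep the entire multi-megabyte source alive just to retain a handful of short tokens.

```js
// Hypothetical retention hazard with dependent strings: each retained token
// would point into the original buffer, so the whole ~10 MB string could
// stay reachable for as long as the tokens do.
function extractTokens(hugeSource) {
  const tokens = [];
  for (let i = 0; i + 8 <= hugeSource.length; i += 100000) {
    tokens.push(hugeSource.substring(i, i + 8));  // tiny slices of a huge string
  }
  return tokens;   // hugeSource itself goes out of scope here...
}

const kept = extractTokens('x'.repeat(10000000));
// ...but if substrings were dependent strings, the parent's character data
// could not be collected while `kept` is alive; forcing a copy of each slice
// would be the usual workaround, at the cost of the copies themselves.
```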
In any case, I just looked at the V8 source, and it looks like they just don't do any sort of dependent strings at all; the comments don't explain why, though.
[Update, 12/2013]: A few months after I gave the above answer, V8 added support for dependent strings, as Paul Draper points out.