I'm writing some JavaScript code which needs to run fast, and it uses a lot of short-lived objects. Am I better off using an object pool, or just creating objects as I need them?
Object pooling may help, especially if you are churning through a lot of objects. I recently wrote an article on this very subject which might be worth a read.
http://buildnewgames.com/garbage-collector-friendly-code/
I think that it depends on the complexity of your objects. I recently optimized a JavaScript word processor that paired a JS object with a DOM object for every element in the document. Before I implemented an object pool, the load time for my test document was about 480ms; pooling reduced that to 220ms.
This is of course anecdotal, but in my case it made the app considerably snappier, and I now use pools often in applications with high object turnover.
Object pools are used to avoid the instantiation cost of creating new objects by re-using existing ones. This is only going to be useful when the cost of instantiating the object is greater than the overhead incurred by using a pool.
What you've demonstrated is that very simple objects gain no benefit from pooling. As your objects become more complex this may change. My suggestion would be to follow the KISS principle and ignore object pooling until object creation has proved to be too slow.
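To make the trade-off concrete, here is a minimal sketch of the pattern being discussed. The `ObjectPool` and `factory` names are illustrative, not a standard API; a real pool would need a reset step for stateful objects.

```javascript
// Minimal object pool sketch: hand out recycled objects instead of
// allocating new ones. Not an optimized implementation.
function ObjectPool(factory) {
  this.factory = factory; // creates a fresh object when the pool is empty
  this.free = [];         // objects available for reuse
}

ObjectPool.prototype.acquire = function () {
  return this.free.length > 0 ? this.free.pop() : this.factory();
};

ObjectPool.prototype.release = function (obj) {
  this.free.push(obj); // caller must not touch `obj` after releasing it
};

// Usage:
var pool = new ObjectPool(function () { return { x: 0, y: 0 }; });
var p = pool.acquire();
p.x = 1; p.y = 2;
pool.release(p);        // return it instead of letting it become garbage
var q = pool.acquire(); // the same object comes back
console.log(p === q);   // true
```

Note that `acquire`/`release` only pay off when object construction is expensive or GC pauses hurt; for trivial objects the bookkeeping can cost more than `new`.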
Generally speaking (in my personal experience), pooling objects is not going to improve speed. Creating objects is typically very cheap. Rather, the purpose of object pooling is to cut down on jank (periodic lag) caused by garbage collections.
As a concrete example (not necessarily for JavaScript, but as a general illustration), think of games with advanced 3D graphics. If one game has an average frame rate of 60fps, that's faster than another game with an average frame rate of 40fps. But if the second game's fps is consistently 40, the graphics look smooth, whereas if the first game's is often well above 60fps but occasionally dips to 10fps, the graphics look choppy.
If you create a benchmark that runs both games for 10 minutes and samples the frame rate every so often, it will tell you that the first game has better performance. But it won't pick up on the choppiness. That's the problem object pools are meant to address.
This isn't a blanket statement that covers all cases, of course. One scenario where pooling can improve not only choppiness but also raw performance is when you are frequently allocating large arrays: by simply setting arr.length = 0 and reusing arr, you avoid the cost of future re-sizings. Similarly, if you're frequently creating very large objects that all share a common schema (i.e., they have a well-defined set of properties, so you don't have to "clean" every object when returning it to the pool), you might see a performance improvement from pooling in that case as well.
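A hedged sketch of the array-reuse idea: the `collectScores` function and its doubling workload are made up for illustration; the point is only that the array's backing storage is kept across calls.

```javascript
// Reuse one array across calls instead of allocating a fresh one each time.
var scratch = [];

function collectScores(items) {
  scratch.length = 0;               // empty the array, keep its capacity
  for (var i = 0; i < items.length; i++) {
    scratch.push(items[i] * 2);
  }
  return scratch;                   // callers must copy if they keep it
}

var result = collectScores([1, 2, 3]);
console.log(result); // [ 2, 4, 6 ]
```

The obvious caveat: every call returns the same array object, so this only works when the result is consumed before the next call.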
As I said, generally speaking though, that is not the primary aim of object pools.
Let me start by saying: I would advise against pools unless you are developing visualizations, games, or other computationally expensive code that actually does a lot of work. Your average web app is I/O-bound, and its CPU and RAM will be idle most of the time. In that case, you gain much more by optimizing I/O speed rather than execution speed; i.e. make sure your files load fast and that you employ client-side rather than server-side rendering and templating. However, if you are toying around with games, scientific computation, or other CPU-bound JavaScript code, the following might be interesting for you.
Short Version:
In performance-critical code, avoid relying on immutable objects (such as String), since those will create new objects during state-changing operations you perform on them.
Long Version:
First, consider that the system heap is essentially one large object pool. That means that whenever you create a new object (using new, [], {}, (), nested functions, string concatenation, etc.), the system will use a (very sophisticated, fast, low-level, performance-tuned) algorithm to give you some unused space (i.e. an object), make sure its bytes are zeroed out, and return it. That is very similar to what an object pool has to do. However, the JavaScript run-time's heap manager uses the GC to retrieve "borrowed" objects, whereas a pool gets its objects back at almost zero cost but requires the developer to take care of tracking all such objects herself.
Modern JavaScript run-time environments, such as V8, have a run-time profiler and run-time optimizer that can ideally (but do not necessarily, yet) optimize aggressively when they identify performance-critical code sections. The run-time can also use that information to determine a good time for garbage collection. If it realizes you are running a game loop, it might just run the GC after every few iterations (maybe even reducing old-generation collection to a minimum), thereby not actually letting you feel the work it is doing (though it will still drain your battery faster if collection is expensive). Sometimes the optimizer can even move an allocation to the stack, and that sort of allocation is basically free and much more cache-friendly. That being said, these kinds of optimization techniques are not perfect (and they actually cannot be, since perfect code optimization is NP-hard, but that's another topic).
Let us take games as an example: this talk on fast vector math in JS explains how repeated vector allocation (and you need A LOT of vector math in most games) slowed down something that should be very fast: vector math with Float32Array. In this case, you can benefit from a pool, if you use the right kind of pool in the right way.
These are my lessons learned from writing games in Javascript:
Instead of
var x = new X(...);
use:
var x = X.create(...);
or even:
// this keeps all your allocation in the control of `Allocator`:
var x = Allocator.createX(...); // or:
var y = Allocator.create('Y', ...);
This way, you can implement X.create
or Allocator.createX
with return new X();
first, and then replace it with a pool later on, to easily compare the speed. Better yet, it allows you to quickly find all allocations in your code, so you can review them one by one, when the time comes. Don't worry about the extra function invocation, as that will be inlined by any decent optimizer tool, and possibly even by the run-time optimizer.
Instead of:
function add(a, b) { return new Vector(a.x + b.x, a.y + b.y); }
// ...
var z = add(x, y);
try:
function add(out, a, b) { out.set(a.x + b.x, a.y + b.y); return out; }
// ...
var z = add(x, x, y); // you can do that here, if you don't need x anymore (Note: z = x)
Avoid allocating temporary objects inside loops; hoist them out instead:
var tmp = new X(...);
for (var x ...) {
tmp.set(x);
use(tmp); // use() will modify tmp instead of x now, and x remains unchanged.
}
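In the same spirit, tight loops can be partially unrolled so that each iteration does several computations instead of one. A sketch (the summing workload is a made-up illustration):

```javascript
// Partially unrolled loop: four additions per iteration to amortize
// loop overhead, plus a cleanup loop for the remaining 0-3 elements.
function sum4(values) {
  var total = 0;
  var i = 0;
  var limit = values.length - (values.length % 4);
  for (; i < limit; i += 4) {
    total += values[i] + values[i + 1] + values[i + 2] + values[i + 3];
  }
  for (; i < values.length; i++) {
    total += values[i];
  }
  return total;
}

console.log(sum4([1, 2, 3, 4, 5, 6])); // 21
```

Whether this beats a plain loop depends on the engine and workload, so measure before committing to it.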
When in doubt, just use new (because the run-time has full control over how to allocate things). In the case of tight computational loops, you might want to consider doing multiple computations per iteration rather than just one (also known as a partially unrolled loop).
Pool Algorithms
Unless you write a very sophisticated pool-querying algorithm, you are generally stuck with two or three options. Each of these options is faster in some scenarios and slower in others. The ones I have seen most often are:
Array-based: keep all pooled objects in an array; when handing one out, set its inUse flag to true. Unset it when the object is no longer needed.
Linked-list-based: keep the free objects in a linked list; allocation takes the head node, and releasing pushes the object back on.
Play around with those options. Unless your linked-list implementation is rather sophisticated, you will probably find that the array-based solution is faster for short-lived objects (which is where pool performance actually matters), provided there are no long-lived objects in the array causing the search for a free object to become unnecessarily long. If you usually need to allocate more than one object at a time (e.g. for your partially unrolled loops), consider a bulk-allocation option that hands out (small) arrays of objects rather than just one, to reduce the lookup overhead for unallocated objects. If you are really hot for a fast pool (and/or just want to try out something new), look at how system heaps are implemented; they are fast and allow allocations of varying sizes.
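A sketch of the array-based variant with an inUse flag (the `FlagPool` name and growth strategy are illustrative choices, not a prescribed design):

```javascript
// Array-based pool: linear scan for the first object whose
// `inUse` flag is false. Fine for small pools of short-lived objects.
function FlagPool(factory, size) {
  this.factory = factory;
  this.items = [];
  for (var i = 0; i < size; i++) {
    var obj = factory();
    obj.inUse = false;
    this.items.push(obj);
  }
}

FlagPool.prototype.acquire = function () {
  for (var i = 0; i < this.items.length; i++) {
    if (!this.items[i].inUse) {
      this.items[i].inUse = true;
      return this.items[i];
    }
  }
  var obj = this.factory();  // grow when exhausted
  obj.inUse = true;
  this.items.push(obj);
  return obj;
};

FlagPool.prototype.release = function (obj) {
  obj.inUse = false;         // no search needed on release
};

var pool = new FlagPool(function () { return { x: 0 }; }, 2);
var a = pool.acquire(), b = pool.acquire(), c = pool.acquire(); // grows to 3
pool.release(a);
console.log(pool.acquire() === a); // true: the freed slot is found first
```

This illustrates the failure mode mentioned above: long-lived objects near the front of the array make every acquire scan past them, which is where a free list starts to win.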
Final Words
Whatever you decide to use, keep profiling, researching, and sharing successful approaches to making our beloved JS code run even faster!