DEV Community

Alexander Nenashev


Pre-allocate arrays to make JavaScript fast

If you've ever implemented a dynamic array in a low-level language like C++, you know the pattern: you allocate a chunk of memory for several elements at once, and when there's no more space for new elements you allocate a new, larger chunk. I suspect the same applies to JavaScript arrays. Allocating memory on the heap is an expensive operation.
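The growth strategy described above can be sketched in JavaScript as a toy model (this is an illustration of the general technique, not how any actual engine implements it):

```javascript
// Toy dynamic array: doubles its backing store when full,
// mimicking what a C++ std::vector (or a JS engine's internal
// elements store) does under the hood.
class DynArray {
  constructor() {
    this.capacity = 4;
    this.length = 0;
    this.store = new Float64Array(this.capacity); // one up-front allocation
  }
  push(value) {
    if (this.length === this.capacity) {
      // Out of space: allocate a bigger chunk and copy everything over.
      // This is the expensive step that pre-allocation avoids.
      this.capacity *= 2;
      const bigger = new Float64Array(this.capacity);
      bigger.set(this.store);
      this.store = bigger;
    }
    this.store[this.length++] = value;
  }
}

const a = new DynArray();
for (let i = 0; i < 10; i++) a.push(i);
console.log(a.length, a.capacity); // 10 16
```

Pushing 10 elements triggers two reallocation-and-copy cycles (4 → 8 → 16); pre-sizing the store would have triggered none.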

Let's try it in Chrome's console (run several times; the output shown is the most frequent):

const start = performance.memory.usedJSHeapSize;
const arr = new Array(1);
setTimeout(() => {
    const diff = performance.memory.usedJSHeapSize - start;
    console.log('heap allocated', diff);
}, 100);

> heap allocated 1688
const start = performance.memory.usedJSHeapSize;
const arr = new Array(1_000_000);
setTimeout(() => {
    const diff = performance.memory.usedJSHeapSize - start;
    console.log('heap allocated', diff);
}, 100);

> heap allocated 4001684

As we see, with an array length of 1,000,000 we have about 3.8 MB allocated.

Of course, the actual reason behind the optimization could be different or composite (I'm not a developer of any current JS engine), but it seems that changing the length of an array is generally an expensive operation.

Regardless of the actual JS implementation, we can follow this rule:

When the final size of an array is known at creation time, pre-allocate it:

const arr = Array(knownSize);

This immediately boosts your JavaScript speed:

let $length = 10;
const arr = Array.from({length: $length}, (_, i) => i);

// @benchmark non pre-allocated
{
  const result = [];
  for(let i = 0; i < arr.length; i++){
    result.push(arr[i]);
  }
  result;
}

// @benchmark pre-allocated
{
  const result = Array(arr.length);
  for(let i = 0; i < arr.length; i++){
    result[i] = arr[i];
  }
  result;
}

// @benchmark Array#map
arr.map(i => i);

` Chrome/125
-----------------------------------------------------------------------------------------------
>                         n=10       |      n=100       |      n=1000       |      n=10000     
pre-allocated       ■ 1.00x x10m  85 | ■ 1.00x x10m 669 | ■ 1.00x   x1m 474 | ■ 1.00x x100k 560
Array#map             1.69x x10m 144 |   1.79x  x1m 120 |   2.15x x100k 102 |   3.20x  x10k 179
non pre-allocated     3.22x x10m 274 |   3.57x  x1m 239 |   4.11x x100k 195 |   3.66x  x10k 205
----------------------------------------------------------------------------------------------- `
` Firefox/126
---------------------------------------------------------------------------------------------
>                         n=10       |      n=100      |      n=1000       |     n=10000     
pre-allocated       ■ 1.00x x10m 227 | ■ 1.00x x1m 162 | ■ 1.00x x100k 192 | ■ 1.00x x10k 208
non pre-allocated     1.44x x10m 327 |   1.45x x1m 235 |   1.16x x100k 222 |   1.53x x10k 319
Array#map             1.46x x10m 332 |   1.06x x1m 172 |   1.73x x100k 333 |   2.17x x10k 452
--------------------------------------------------------------------------------------------- `

Open in the playground

Keep in mind, though, that Array(size) creates a sparse array, so some methods like map() and forEach() won't work as expected; iteration, however, works fine, e.g. [...arr].

Top comments (9)

Volodymyr Yepishev • Edited

Amazing observation, thanks for sharing!

Would it mean that arr.reduce((a, c, i) => { a[i] = c; return a; }, Array(arr.length)) actually beats a plain arr.map(i => i) in terms of performance, due to the mere array pre-allocation? On paper it looks like it should.

Alexander Nenashev • Edited

It beats it in Chrome, but it shouldn't: the array size in Array#map() is known up front, so its result should be pre-allocated too. I've found, however, that many native array methods are slower than custom implementations; probably a topic for a future post with examples.

` Chrome/125
---------------------------------------------------------------------------------
>              n=10       |      n=100       |     n=1000      |     n=10000     
reduce   ■ 1.00x x10m 109 | ■ 1.00x x10m 632 | ■ 1.00x x1m 451 | ■ 1.00x x10k 117
map        1.21x x10m 132 |   1.84x  x1m 116 |   2.09x x1m 941 |   1.43x x10k 167
--------------------------------------------------------------------------------- `

Open in the playground

Nicolas Dellazzeri

That's an interesting topic that I don't see commented on very often. I guess many JS developers don't know about memory allocation, and I really don't blame them: it's just uncommon to deal with issues like that nowadays, and if you don't even know the thing exists, how can you address it? Good job on the benchmark measurements as proof.

What tool are you using for the benchmarks?

Alexander Nenashev

silentmantra.github.io/benchmark
Press help to see a simple default template.
I'm currently refactoring it to make it open-source.

José Pablo Ramírez Vargas

I don't know how to read those benchmark results, but I'll bring up the MDN documentation on the Array constructor. It says, for the parameter of the constructor in use here:

If the only argument passed to the Array constructor is an integer between 0 and 2^32 - 1 (inclusive), this returns a new JavaScript array with its length property set to that number (Note: this implies an array of arrayLength empty slots, not slots with actual undefined values).

It basically says that pre-allocation is not a thing in JavaScript. The constructor merely creates the array object and sets its length property to the value passed in. There is no memory pre-allocation, as in "not slots with actual undefined values".

Alexander Nenashev • Edited

The benchmarks clearly show that if you create an array with a predefined size and don't change its length while filling it, it's faster than working with a dynamic length. Regarding sparse arrays: the spec wording doesn't mean the memory isn't allocated; it COULD mean that the allocated values are undefined. Regardless of the ACTUAL implementation, the so-called "pre-allocation" in my example has shown stable speed benefits over a long time (probably years) in Chrome and Firefox.

José Pablo Ramírez Vargas

That's it: the docs clearly say the slots are not undefined values, which makes it clear no pre-allocation is done. That's the spec, though; in reality, maybe browsers do pre-allocate. Who knows, right? (I don't, BTW.) I was, however, compelled to point out that the documentation is clear about it: pre-allocation is not expected at all.

Alexander Nenashev • Edited

performance.memory and benchmarking tell otherwise 🤷‍♂️

Anton • Edited

If you're pushing around numbers, you should use a typed array that fits the values you operate on (and yes, you'd still have to pre-allocate it), like Uint8Array or Float32Array; it can be more than 2x faster than a vanilla pre-allocated array (measurethat.net/Benchmarks/Show/32...).
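A minimal sketch of the typed-array variant of the article's copy loop (the size and names here are illustrative, not taken from the linked benchmark):

```javascript
const n = 1000;
// Float64Array.from supports the same {length} + mapper idiom as Array.from
const source = Float64Array.from({length: n}, (_, i) => i);

// Typed arrays are always pre-allocated, fixed-length, and densely
// packed with numbers, so the engine can skip the shape/hole checks
// that regular arrays need.
const result = new Float64Array(n);
for (let i = 0; i < n; i++) {
  result[i] = source[i] * 2;
}
console.log(result[n - 1]); // 1998
```

The trade-off: a typed array holds only numbers of one fixed width, and its length can't change after construction.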

If you hold not numbers but arbitrary non-primitive objects in your array, you will gain almost nothing performance-wise from pre-allocation (measurethat.net/Benchmarks/Show/32...). Not worth it.