TL;DR
If you are merging arrays with thousands of elements across them, you can shave seconds off the process by using `arr1.push(...arr2)` instead of `arr1 = arr1.concat(arr2)`. If you really want to go faster, you might even want to write your own implementation to merge arrays.
Wait a minute... how long does it take to merge 15,000 arrays with `.concat`...
Recently, we had a user complaining of a major slowdown in the execution of their UI tests on UI-licious. Each `I.click`, `I.fill`, or `I.see` command, which usually takes ~1 second to complete (including post-processing, e.g. taking screenshots), now took over 40 seconds to complete, so test suites that usually finished under 20 minutes took hours instead, which severely limited their deployment process.
It didn't take long for me to set up timers to narrow down which part of the code was causing the slowdown, but I was pretty surprised when I found the culprit:
arr1 = arr1.concat(arr2)
Array's `.concat` method.
In order to allow tests to be written using simple commands like `I.click("Login")` instead of CSS or XPATH selectors like `I.click("#login-btn")`, UI-licious works using dynamic code analysis to analyse the DOM tree and determine what and how to test your website based on semantics, accessibility attributes, and popular but non-standard patterns. The `.concat` operation was being used to flatten the DOM tree for analysis, but it performed very poorly when the DOM tree was very large and very deep, which happened when our user recently pushed an update to their application that caused their pages to bloat significantly (that's another performance issue on their side, but it's another topic).
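To give a rough picture of the pattern involved, here is a simplified sketch (not the actual UI-licious code; the `node.children` shape is assumed purely for illustration):

```js
// Simplified sketch: flatten a tree of nodes into one list

// Slow version: rebuilds the result array on every visit
function flattenSlow(node, result = []) {
  result = result.concat(node.children)   // creates a brand new array each time
  for (const child of node.children) {
    result = flattenSlow(child, result)
  }
  return result
}

// Fast version: appends into the same array in place
function flattenFast(node, result = []) {
  result.push(...node.children)           // modifies `result` directly
  for (const child of node.children) {
    flattenFast(child, result)
  }
  return result
}
```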
It took 6 seconds to merge 15,000 arrays that each had an average size of 5 elements with `.concat`.
What?
6 seconds...
For 15,000 arrays with an average size of 5 elements?
That's not a lot of data.
Why is it so slow? Are there faster ways to merge arrays?
Benchmark comparisons
.push vs. .concat for 10000 arrays with 10 elements each
So I started researching (by that, I mean googling) benchmarks for .concat
compared to other methods to merge arrays in Javascript.
It turns out the fastest method to merge arrays is to use .push
which accepts n arguments:
// Push contents of arr2 to arr1
arr1.push(arr2[0], arr2[1], arr2[2], ..., arr2[n])
// Since my arrays are not fixed in size, I used `apply` instead
Array.prototype.push.apply(arr1, arr2)
And it is faster by leaps and bounds in comparison.
How fast?
I ran a few performance benchmarks on my own to see for myself. Lo and behold, here's the difference on Chrome:
👉 Link to the test on JsPerf
To merge arrays of size 10 for 10,000 times, `.concat` performs at 0.40 ops/sec, while `.push` performs at 378 ops/sec. `push` is 945x faster than `concat`! This difference might not be linear, but it is already significant at this small scale.
And on Firefox, here are the results:
Firefox's SpiderMonkey Javascript engine is generally slower compared to Chrome's V8 engine, but .push
still comes out top, at 2260x faster.
This one change to our code fixed the entire slowdown problem.
.push vs. .concat for 2 arrays with 50,000 elements each
But ok, what if you are not merging 10,000 size-10 arrays, but 2 giant arrays with 50000 elements each instead?
Here are the results on Chrome:
👉 Link to the test on JsPerf
`.push` is still faster than `.concat`, but by a factor of 9.
Not as dramatic as 945x, but `.concat` is still dang slow.
Prettier syntax with ES6 spread
If you find `Array.prototype.push.apply(arr1, arr2)` verbose, you can use a simpler variant using the ES6 spread syntax:
arr1.push(...arr2)
The performance difference between `Array.prototype.push.apply(arr1, arr2)` and `arr1.push(...arr2)` is negligible.
But why is Array.concat so slow?
A lot of it has to do with the Javascript engine, but I don't know the exact answer, so I asked my buddy @picocreator, the co-creator of GPU.js, as he had spent a fair bit of time digging around the V8 source code before. @picocreator also lent me his sweet gaming PC, which he used to benchmark GPU.js, to run the JsPerf tests, because my MacBook didn't have the memory to even perform `.concat` with two size-50,000 arrays.
Apparently the answer has a lot to do with the fact that .concat
creates a new array while .push
modifies the first array. The additional work .concat
does to add the elements from the first array to the returned array is the main reason for the slowdown.
Me: "What? Really? That's it? But by that much? No way!"
@picocreator: "Serious, just try writing some naive implementations of .concat vs .push then!"
So I tried writing some naive implementations of .concat
and .push
. Several in fact, plus a comparison with lodash's _.concat
:
👉 Link to the test on JsPerf
Naive implementation 1
Let's talk about the first set of naive implementations:
Naive implementation of .concat
// Create result array
var arr3 = []
// Add Array 1
for(var i = 0; i < arr1Length; i++){
arr3[i] = arr1[i]
}
// Add Array 2
for(var i = 0; i < arr2Length; i++){
arr3[arr1Length + i] = arr2[i]
}
Naive implementation of .push
for(var i = 0; i < arr2Length; i++){
arr1[arr1Length + i] = arr2[i]
}
As you can see, the only difference between the two is that the .push
implementation modifies the first array directly.
Results of vanilla methods:
- `.concat`: 75 ops/sec
- `.push`: 793 ops/sec (10x faster)
Results of naive implementation 1:
- `.concat`: 536 ops/sec
- `.push`: 11,104 ops/sec (20x faster)
It turns out that my DIY `concat` and `push` are faster than the vanilla implementations... But here we can see that simply creating a new result array and copying the content of the first array over slows down the process significantly.
Naive implementation 2 (Preallocate size of the final array)
We can further improve the naive implementations by preallocating the size of the array before adding the elements, and this makes a huge difference.
Naive implementation of .concat
with pre-allocation
// Create result array with preallocated size
var arr3 = Array(arr1Length + arr2Length)
// Add Array 1
for(var i = 0; i < arr1Length; i++){
arr3[i] = arr1[i]
}
// Add Array 2
for(var i = 0; i < arr2Length; i++){
arr3[arr1Length + i] = arr2[i]
}
Naive implementation of .push
with pre-allocation
// Pre allocate size
arr1.length = arr1Length + arr2Length
// Add arr2 items to arr1
for(var i = 0; i < arr2Length; i++){
arr1[arr1Length + i] = arr2[i]
}
Results of naive implementation 1:
- `.concat`: 536 ops/sec
- `.push`: 11,104 ops/sec (20x faster)
Results of naive implementation 2:
- `.concat`: 1,578 ops/sec
- `.push`: 18,996 ops/sec (12x faster)
Preallocating the size of the final array improves the performance by 2-3 times for each method.
.push array vs. .push elements individually
Ok, what if we just `.push` elements individually? Is that faster than `Array.prototype.push.apply(arr1, arr2)`?
for(var i = 0; i < arr2Length; i++){
arr1.push(arr2[i])
}
Results:
- `.push` entire array: 793 ops/sec
- `.push` elements individually: 735 ops/sec (slower)
So doing .push
on individual elements is slower than doing .push
on the entire array. Makes sense.
Conclusion: Why .push is faster than .concat
In conclusion, it is true that the main reason why concat
is so much slower than .push
is simply that it creates a new array and does the additional work to copy the first array over.
That said, now there's another mystery to me...
Another mystery
Why are the vanilla implementations so much slower than the naive implementations? 🤔 I asked for @picocreator's help again.
We took a look at lodash's `_.concat` implementation for some hints as to what else vanilla `.concat` might be doing under the hood, as it is comparable in performance (lodash's is slightly faster).

It turns out that according to vanilla `.concat`'s spec, the method is overloaded and supports two signatures:
- Values to append as n number of arguments, e.g. `[1,2].concat(3,4,5)`
- The array to append itself, e.g. `[1,2].concat([3,4,5])`

You can even do both, like this: `[1,2].concat(3,4,[5,6])`
Lodash also handles both overloaded signatures, and to do so, lodash puts all the arguments into an array and flattens it. That makes sense if you are passing in several arrays as arguments. But when passed a single array to append, it doesn't just use the array as it is; it copies it into another array and then flattens it.
... ok...
Definitely could be more optimised. And this is why you might want to DIY your own merge array implementation.
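For example, a DIY merge can skip the overload handling entirely and just copy elements across. A minimal sketch (the `mergeInto` name is made up for illustration, and it assumes mutating the target array is acceptable):

```js
// Append the contents of `source` onto `target` in place,
// with no intermediate arrays and no argument-overload handling
function mergeInto(target, source) {
  var offset = target.length
  target.length = offset + source.length   // pre-allocate the final size
  for (var i = 0; i < source.length; i++) {
    target[offset + i] = source[i]
  }
  return target
}
```

Usage would be `mergeInto(arr1, arr2)`, which behaves like `arr1.push(...arr2)` but without the call-stack limit that spreading a very large `arr2` would hit.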
Also, it's just my and @picocreator's theory of how vanilla .concat
works under the hood based on Lodash's source code and his slightly outdated knowledge of the V8 source code.
You can read lodash's source code at your leisure here.
Additional Notes
The tests are done with Arrays that only contain Integers. Javascript engines are known to perform faster with Typed Arrays. The results are expected to be slower if you have objects in the arrays.
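As a side note on typed arrays: they don't have a `.push` method at all, so merging them means pre-allocating the result and copying with `.set`. A minimal sketch, assuming two `Float64Array`s:

```js
// Merge two typed arrays by pre-allocating the result and copying with set()
const a = new Float64Array([1, 2, 3])
const b = new Float64Array([4, 5, 6])

const merged = new Float64Array(a.length + b.length)
merged.set(a, 0)          // copy a into the start
merged.set(b, a.length)   // copy b right after a
```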
Why are we doing such large array operations during UI-licious tests anyway?
Under the hood, the UI-licious test engine scans the DOM tree of the target application, evaluating the semantics, accessible attributes and other common patterns to determine what is the target element and how to test it.
This is so that we can make sure tests can be written as simple as this:
// Lets go to dev.to
I.goTo("https://dev.to")
// Fill up search
I.fill("Search", "uilicious")
I.pressEnter()
// I should see myself or my co-founder
I.see("Shi Ling")
I.see("Eugene Cheah")
Without the use of CSS or XPATH selectors, so that the tests can be more readable, less sensitive to changes in the UI, and easier to maintain.
ATTENTION: Public service announcement - Please keep your DOM count low!
Unfortunately, there's a trend of DOM trees growing excessively large these days because people are building more and more complex and dynamic applications with modern front-end frameworks. It's a double-edged sword: frameworks allow us to develop faster, but folks often forget how much bloat they add. I sometimes cringe at the number of elements that are just there to wrap other elements when inspecting the source code of various websites.
If you want to find out whether your website has too many DOM nodes, you can run a Lighthouse audit.
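For a quick rough check, you can also count every element on the page from the browser console (a one-liner sketch; it simply counts all elements currently in the document):

```js
// Rough count of DOM nodes on the current page
document.getElementsByTagName('*').length
```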
According to Google, the optimal DOM tree is:
- Less than 1,500 nodes
- A depth of less than 32 levels
- No parent node with more than 60 children
A quick audit on the Dev.to feed shows that the DOM tree size is pretty good:
- Total count of 941 nodes
- Max. depth of 14
- Max number of child elements at 49
Not bad!
Top comments (73)
Nice investigation.
The concat loop is akin to O(n^2) whereas the push loop is O(n). By creating a new array you need to do the copy, as you showed. This happens for every iteration of the loop*. An array push however is amortized constant time (usually), thus you only copy each element once (well probably three times).
Part of the issue with increased copying is the increased memory access. Your array probably exceeds the bounds of the L0 cache on the CPU, possibly the L1 cache. This means that when concat copies, it needs to load new memory from a more distant cache, repeatedly. The push version never accesses the data already in the list, thus avoiding this extra memory loading.
(*Before somebody gets pedantic, it's more like a K*N loop, where K is the number of loop iterations. However, since the size of the resulting array is linearly related to K, i.e. c * K = N, it works out to ~N^2.)

Talking strictly in terms of the performance of `push` versus `concat` (albeit naively as it relates to VM implementation details), this wouldn't really apply. A meaningful comparison of the two would take two large pre-allocated arrays and compare the results of a single `concat` operation and a single `push` operation, where I'd expect `concat` to perform better when dealing with sufficiently large arrays, since it's (theoretically) a single allocation and then two memcpys, whereas `push` copies a single element over one at a time, causing multiple reallocations.

To be pedantic, the context in which we're talking about the performance characteristics of `push` and `concat` is in terms of merging a large number of arrays, where I'd wager garbage collection is likely the dominating factor in the `concat` solution, even after taking runtime characteristics into account.

I'm not following how this means what I said doesn't apply? `push` is highly unlikely to allocate memory one bit at a time. If it's anything like a C++ vector, or a contiguous array in any other language, it has amortized O(1) `push` time by allocating geometrically rather than linearly.

We're also dealing in sizes that are definitely large enough to spill over the CPU buffers, especially in the concat case where we allocate new data.

GC is potentially involved, but the allocation and deallocation of memory, in such tight loops, for an optimized VM (I assume it is), is likely also linear time.
There's no reason a `memcpy` is faster than a loop copying over items one at a time. `memcpy` is not a CPU construct, it gets compiled to a loop that copies a bit of memory at a time.

I should clarify: What you said applies to merging multiple arrays, where the input size is the number of arrays being merged. If we're talking strictly about the performance of `push` vs `concat` for concatenating one array to another (which the headline sort of implies), then the input size is always one if analyzing in terms of merging multiple arrays.

As you said, it's going to re-allocate similarly to a C++ vector, which for the purpose of runtime analysis we just disregard. I choose not to disregard it since I don't believe runtime analysis tells the whole story here. Or perhaps more appropriately, merging multiple arrays is a specific use of `push` and `concat` that doesn't speak to their performance characteristics in a generalized way.

The point is, `concat` may perform better in scenarios where one is likely to use it, and it would be misleading to advocate always using `push` instead.

Also known as a Schlemiel the Painter's Algorithm.
Note that your tests may be returning biased results, as for example Chrome and likely other browsers have special-cased arrays that are known to (not) be sparse.

Doing `Array.from(Array(50000), (x,i) => i + 1)` looks nice but will create a "hole-y" (sparse) Array. That is, the initial array will be 50000 "holes" or empty values. That you then fill them in later will not upgrade the array to be non-sparse later on. These things change quickly, but this is what was featured in a session by a Chrome engineer on Array implementations from last year: youtube.com/watch?v=m9cTaYI95Zc

I work with large arrays as well (3D model data), these are anywhere from 1KB to 100MB in size and I sometimes have to append them all together without wasting memory etc. I use the following method to append them together. It uses `Array.slice`, but browsers (seem to) implement slicing as CopyOnWrite, so the slice creates a view on the array and doesn't make copies constantly, I've found empirically.

Linked here in my repo, which also has quite a few other array methods, check 'em out ;)

github.com/stardazed/stardazed/blo...
Definitely a biased sample from a browser point of view - not fully intentional though. We tried to get Firefox results for extremely large arrays... and, er, let's say it didn't go well at all for the other browsers when put side by side, so we left them out.

The bigger impact though is probably gonna be on the server side (Node.js) of things, which is the case for us.

Despite my involvement in GPU.JS, and all things 3D, which is cool in its own right, for most other sites I do personally feel that it's usually a bad idea to overbloat the UI, especially for those with older devices.
Big thanks for the youtube link =)
Yes, my focus is in-browser (on desktop) high-end 3D scenes, which is very different from web pages and server-side stuff. I've spent a good amount of time benchmarking various approaches to array-based data manipulations, so certain of my functions have separate paths for different browsers. In general, Safari (since v11) and Chrome are very close, with my code usually performing best in STP and Firefox generally coming in 3rd place by about -10% to -30%, unless it is faster by a good margin in particular things… JS engines.
This problem also exists for ArrayBuffers as the changes to JS engines to accommodate ES6 runtime features negatively affected many common functions on those types. New patches have come in since then and they're better now, but certain functions on ArrayBuffer, like `set`, are still very slow on all browsers.

Thanks to Shi Ling for the nice article.
Actually, your time spent on various array benchmarks across browsers would be a cool article on its own - especially the various other browsers: Safari desktop, mobile, Edge, IE11? (haha)

Would actually love to see your browser-specific implementations 🤔 It is admittedly a lot of work though.
Thanks! A lot of this work is in various test files (not in repos) and quite a bit of it was done over the last 3 years so I'd have to retest all permutations etc. I often feel like writing an article here but instead I just write a long comment on someone else's story as that feels less "weighty." But cool that you'd be interested!
As for browsers, given the scope of my project I have the luxury of requiring essentially the latest versions of desktop browsers, so my project outputs ES2018 code and I only need to check a few different engines.
Oh, I get it - you mean those holey arrays will be evaluated as holey arrays forever even if they become packed.
That video is really helpful - I learnt yet another thing about Array implementations in V8 today. Gee, I wish more of these good videos were transcribed into searchable online literature.

Hm... I could change the array initialisation so that it's not holey and redo the experiments - concat will still be slower than push, but the vanilla implementations could perform faster.
Huh, that's interesting! I was wondering what the memory footprints of the different implementations to merge arrays are!

A thing I do wonder though: if you add an extra `.slice(0)` operation after the array is packed, wouldn't that result in a true fast-to-operate non-holey array? The cost would be an extra allocation and copy, which would still be better than continuously calling `.push()` into a packed array.

Of course this only holds true as long as there will be thousands of items to process and you know the final size before you start working on it.
This is Javascript engine dependent, but please have a look at my initial comment, I observed that Array.slice creates a view on an array instead of making a new one until it is modified (CopyOnWrite). This behaviour likely will vary for example based on what the VM thinks is best given the size and contents of the array and will again vary per VM implementation. Also, my observation is just that, I did not verify this in the source of each VM, so consider this an unconfirmed opinion.
The video may also give a real answer to this Q as the presenter goes into detail about these things, but I haven't watched it recently. Good to have a look.
I watched the video and it didn't answer that specific question. However if slice creates a view then it still does need to make a new array once any modification is made on the sliced copy. At that point shouldn't it regard it as a new array and be able to optimize it?
It would be awesome though if holey arrays could become non-holey without this kind of VM specific tricking once there are no holes remaining.
So from what I remember, Array implementations in V8 come in two flavours: Fast element and Dictionary element (hash table) arrays. The former is present when the array is below 10000 elements and doesn't have any "holes" in them. The latter, on the other hand, is present when the opposite is true: array.length > 10000 and array has "holes" (e.g. [1, 2, undefined, 4, 5])
While not shown in the use case (as the above oversimplifies the actual code), you're right that holes have an impact on array mode vs. dictionary mode in V8. Having holes does not guarantee a trigger into dictionary mode, but it is one of the contributing factors in how the internal engine decides the mode.

Probably worth further investigation. Gut feel as of now, from how I understand it, is that the array in our use case would not have any holes 99% of the time. And from our internal code monitoring, it's these 99% that have the problem. >_<

I can't recall the talk reference for this (but definitely know it's real), so if you know where it's from, it would be good to add it here for others to learn.
I replied to this, but I don't see the reply here. Really weird.
Anyways here are two resources that talk about how V8 handles arrays:
So concat might have been slow because of this reason. I'm still not certain about this, mainly because .push was still fast.
I am sorry, I do not understand what is the big fuss about this... concat() and push() have two different applications and they were not made to compete with each other. You do not need a study to conclude concat() is a slower option. That is very clear by reading the two first lines of the documentation of each method.
developer.mozilla.org/en-US/docs/W...
developer.mozilla.org/en-US/docs/W...
It is important to understand that you use concat() when you are after immutability, for example, when you want to preserve states. One example where immutability is important is when using React - reactjs.org/tutorial/tutorial.html....
Otherwise, you use push when you don't care about changing the arrays (one of them at least).
I am very much more surprised immutability was not mentioned in this article, than with the difference between the performance of each method.
While that is true, and although it just so happened that in my use case immutability doesn't matter, it is still worthwhile to consider using other methods than `concat` to merge arrays while still preserving the source arrays.

If you needed to merge thousands of small arrays in a loop without modifying the source arrays, for example, you should create a new result array and `push` the source arrays into it instead, to avoid the penalty of copying the first array on each loop iteration when you use `concat`.

Naive code to merge 1000 arrays with `concat` (basically what I did, which in my case is 15,000 arrays):
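A minimal sketch of that pattern, assuming `arrays` holds the source arrays (the variable names here are just for illustration):

```js
// Merging with concat: a new array is created on every iteration,
// and the entire accumulated result is copied over each time
let result = []
for (let i = 0; i < arrays.length; i++) {
  result = result.concat(arrays[i])
}
```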
Faster code to merge 1000 arrays with `push`, without modification to the source arrays:
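Again a sketch under the same assumption:

```js
// Merging with push: the source arrays are left untouched,
// and their elements are appended onto a single result array in place
let result = []
for (let i = 0; i < arrays.length; i++) {
  Array.prototype.push.apply(result, arrays[i])
}
```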
Also, given the name, it's not very obvious to people that `concat` creates a new array instead of modifying the first array. And even though I frequently read MDN docs, I still sometimes forget that `concat` creates the result as a new array.

I always forget too 😂
FWIW, to me it is very intuitive that concat would create a new array since when I hear the word 'concat' I immediately think of strings. Maybe it's misguided, but that leads me to expect other things to behave the same way strings would.
Very thorough article.
Considering the fame of lodash, I now wonder if there's a place for an equivalent with performance in mind.

I also wonder how many more examples like this there are. Is it only `concat`, or most of `Array.prototype` that can be optimized?

I remember my early struggles with JS performance, replacing `Math.floor` with `<< 0` gaining a considerable amount of op/s. (Not true today.)

Another example is `array.push` vs `array.unshift`, basically for memory reasons. If I remember correctly, a `push` is simple for most computers because the array usually already has "extra" memory allocated at its end, and it only has to recreate the array when that allocated memory runs out. `unshift`, however, will always recreate the whole array, because adding an item to the start of an array would require moving every single item one step. Recreating the whole array is simpler to do than moving each one.

See stackoverflow.com/questions/440315...
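A tiny sketch of the behaviour being described (purely illustrative):

```js
const arr = [1, 2, 3]

arr.push(4)      // append at the end: [1, 2, 3, 4]
                 // existing elements stay where they are

arr.unshift(0)   // prepend at the start: [0, 1, 2, 3, 4]
                 // every existing element has to shift one position over
```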
Huh, interesting!
There's a major problem that should be mentioned: `push(...arr)` will stack overflow if your array is larger than the stack size.
Wait... why? Stack as in execution stack or memory?
Execution stack. It's the same for `apply`:

The problem is that function call arguments all need to go on the call stack, so that they can be bound to the called function's variables. Put another way, `push.apply(a, [1,2,3]) --> a.push(1,2,3)`, where 1, 2, and 3 are assumed to fit in the call stack.

The max stack size is set by the OS & user, but generally small (a few hundred or thousand kb), since it's only meant to hold function call contexts.

So we can distinguish between cases where the array is actually a list of function arguments, or the `push.apply` usage, where the array is "just" data, and we're only able to use it with `push` because that function, for convenience, takes a variable number of arguments. In order to use `apply` in the latter case, it's good to know up front that the array will be small.

Oh I see!
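A common workaround, sketched here, is to push in fixed-size chunks so that no single call passes too many arguments (the chunk size of 10,000 is an arbitrary choice):

```js
// Append `source` onto `target` without blowing the call stack,
// by spreading at most `chunkSize` elements per push() call
function pushInChunks(target, source, chunkSize = 10000) {
  for (let i = 0; i < source.length; i += chunkSize) {
    target.push(...source.slice(i, i + chunkSize))
  }
  return target
}
```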
Might want to mention what the spread operator actually does, which is to spread the one argument in `array.push(...arg)` to multiple arguments, like so: `array.push(arg[0], arg[1], arg[2], ..., arg[n])`.

You're right, I didn't realise that people might not know what the ES6 spread operator does. Updated the article - thanks!
Seems the problem is not in concat, but in = itself.
As far as I can see, the map-of-concat version is even faster than push.
Plz check jsperf.com/javascript-array-concat...
Not comparable: the concat mapper creates 10000 arrays of 20 item arrays, while the other functions create an array of 100010 items.
Ahh, indeed!
Well, with reduce-concat timing became the same as arr1 = arr1.concat :(
Regarding the DOM count: how do you properly handle the situation when you simply have a lot of items to display? Implementing a fake scroller to render only the visible elements and remove the hidden ones from the DOM doesn't work that well IMO, especially when all of them are still more or less interactive (multiple selection etc.).
You mean how should a front-end engineer improve performance on a page with a high DOM count, or how does our test engine handle such huge pages?
I meant how to handle high item counts while keeping good performance/minimizing DOM count
Well, the most common solution is pagination. But if your designer fancies the infinite scrolling pattern, then as you said, rendering only visible content is a solution. The Chrome Dev Team proposed the `<virtual-scroller>` element (youtube.com/watch?v=UtD41bn6kJ0) last year, which may be handy if it becomes standard in the future.

Additionally, we can also check if there are redundant wrapper elements on the page that can be removed.
Thanks for the link.
It's not exactly infinite scrolling or anything like that. It's an app (kind of, a browser extension), an RSS reader to be precise, and just by importing my sources list and downloading the currently available articles it goes to over 3,000 articles. Selecting "all feeds" surely renders quite a big list.

It's a legacy project I picked up since it was abandoned, and I cleaned it up a bit. There was even an attempt at handling long lists better, but it was actually working slower than the naive list-everything approach and was glitching quite often. Currently performance isn't terrible, but if there were some easy way to improve it I'd gladly use it. I'll do some more research in this matter to see what can be done.
Great stuff, I have so much love for jsperf.com that I remember being very upset when it went down for a while. Glad to have it back up. I have recently used it to check the performance of different solutions to the Sock Merchant Challenge, please check the comments section if you have a sec.

My question though: since some have expressed possible bias on certain browsers, how much should we trust or rely on the built-in console.time method, running in a browserless environment such as Node, to provide a base for such benchmarks?

If this is an objective and efficient way of testing speed, it should be easy to put a wrapper around it for reuse. If not, what do folks recommend as tools to look into for this crucial exercise?
Huh, I didn't know there are `console.time` and `console.timeEnd` methods.

Been using new Date() all along :D.
I don't know if JsPerf uses `console.time` and `console.timeEnd` under the hood, but the underlying implementations of the timer methods can't be so sophisticated that they'll make a significant impact on the tests between browsers.
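For reference, a minimal sketch of how `console.time` / `console.timeEnd` are typically used to get a rough timing (the `'merge'` label and the `arrays` variable are arbitrary assumptions):

```js
// Rough timing of a merge with console.time / console.timeEnd
console.time('merge')
const result = []
for (let i = 0; i < arrays.length; i++) {
  result.push(...arrays[i])   // assumes `arrays` holds the arrays to merge
}
console.timeEnd('merge')      // prints something like: merge: 12.3ms
```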