Basti Ortiz · DEV Community
Please don't "overchain" array methods

Before modern JavaScript was a thing, the only conceivable way to iterate over an array was the classic C-style for loop. It was cumbersome, to say the least: verbose and full of boilerplate. With the rising popularity of functional programming concepts came the array methods we love and enjoy today. Thanks to forEach, map, filter, and reduce, iterating over arrays has never been easier; coupled with ES6 arrow functions, it has never been more concise.

In addition to their brevity, these array methods (which are essentially just glorified for loops) also allow us to chain as many array operations as we need without sacrificing readability (depending on your code). It's a real work of art to see a beautiful chain of sequential method calls. Seeing an array manipulated step by step with each method call makes the code all the more natural to read. What once took several lines of code can now be done in a single one.

Although they have virtually eliminated the need for for loops, array methods bring new problems to the table. As elegant as a chain of method calls can be, we have to remember that each method we attach to the chain is a whole new iteration of the array. To write performant code, we must keep in mind that longer chains mean more iterations.

Combine your Math operations

To illustrate the problem of unnecessarily long chains, consider an array of numbers from -2 to 2. It is our objective to find the sum of thrice the squares of these numbers. At first glance, we can go about the problem with a chain of map and reduce calls.

const nums = [ -2, -1, 0, 1, 2 ];

const sum = nums
  .map(x => x * x)
  .map(x => x * 3)
  .reduce((prev, curr) => prev + curr, 0);

This will indeed meet our objective. The only problem is that it chains three methods, and three chained methods mean three whole new iterations of the array. We could prove this by adding an intermediary console.log inside each callback and counting the calls. If that sounds time-inefficient, especially at scale, you'd be correct. To make this code more performant, we simply have to combine the method calls in a way that minimizes the number of passes the CPU makes over the same array of data.
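To see those three passes concretely, here is a minimal sketch that counts how many times the callbacks run (the counter is my own addition, purely for illustration):

```javascript
const nums = [-2, -1, 0, 1, 2];

// Count every callback invocation across the whole chain.
let iterations = 0;

const sum = nums
  .map(x => { iterations++; return x * x; })
  .map(x => { iterations++; return x * 3; })
  .reduce((prev, curr) => { iterations++; return prev + curr; }, 0);

console.log(sum);        // 30
console.log(iterations); // 15: three full passes over five elements
```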

const nums = [ -2, -1, 0, 1, 2 ];

// Level 1: Combine the `map` calls
const level1Sum = nums
  .map(x => 3 * x ** 2)
  .reduce((prev, curr) => prev + curr, 0);

// Level 2: Combine _everything_
const level2Sum = nums
  .reduce((prev, curr) => prev + 3 * curr ** 2, 0);

Use compound Boolean expressions

The same rule can be applied to Boolean expressions and the filter method. Let's say we have an array of User objects. We want to find the User objects that currently have premium accounts. Then, from those accounts, we look for administrators whose ages are over 18.

class User {
  constructor(isAdmin, hasPremium, age) {
    this.isAdmin = isAdmin;
    this.hasPremium = hasPremium;
    this.age = age;
  }
}

// Array of `User` accounts
const users = [
  new User(false, false, 9),
  new User(false, true, 30),
  new User(true, true, 15),
  new User(true, true, 19),
  new User(false, true, 3)
];

Instead of combining Math operations, we can use compound Boolean expressions to combine each condition. This way, we can minimize the number of array iterations.

// Level 0: Chain _everything_
const level0 = users
  .filter(user => user.isAdmin)
  .filter(user => user.hasPremium)
  .filter(user => user.age > 18);

// Level 1: Combine _everything_
const level1 = users
  .filter(user => (
    user.isAdmin
    && user.hasPremium
    && user.age > 18
  ));

Take advantage of operand omission

It is also worth noting that similar code can be optimized even further. By arranging the Boolean conditions cleverly, the code can run slightly faster. This is because the ECMAScript specification states that the logical AND operator (&&) must stop evaluating its remaining operands as soon as it encounters one that evaluates to a falsy value.

function willRun() {
  console.log('I just stopped the `&&` operator from evaluating the next operand.');
  return false;
}
function neverRuns() { console.log('This function will never run.'); }

// 'I just stopped the `&&` operator from evaluating the next operand.'
true && willRun() && neverRuns();

To write (slightly) more performant code, place the Boolean expressions that are most likely to evaluate to false at the beginning of the compound condition, preventing the unnecessary execution and evaluation of the operands that follow.

// Arranging conditions properly will
// make your code run slightly faster.
arr.filter(x => (
  x.mostLikelyToBeFalse
  && x.moreLikelyToBeFalse
  && x.likelyToBeFalse
  && x.leastLikelyToBeFalse
));
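As a rough sanity check, you can count how many predicate evaluations each ordering performs. The data below is made up purely for illustration (1% admins, 50% premium accounts):

```javascript
// Hypothetical accounts: `isAdmin` is rarely true, `hasPremium` often is.
const accounts = Array.from({ length: 1000 }, (_, i) => ({
  isAdmin: i % 100 === 0,  // true for 10 of 1000
  hasPremium: i % 2 === 0, // true for 500 of 1000
}));

let checks = 0;
const isAdmin = a => { checks++; return a.isAdmin; };
const hasPremium = a => { checks++; return a.hasPremium; };

// Unlikely condition first: the second check runs only 10 times.
checks = 0;
accounts.filter(a => isAdmin(a) && hasPremium(a));
const rareFirst = checks; // 1000 + 10 = 1010 evaluations

// Likely condition first: the second check runs 500 times.
checks = 0;
accounts.filter(a => hasPremium(a) && isAdmin(a));
const commonFirst = checks; // 1000 + 500 = 1500 evaluations
```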

Conclusion

Of course, the examples I presented are trivial. Running them won't reveal a huge performance difference, if any. The performance impact of an unnecessarily long chain of iterations only becomes apparent at scale, with more computationally expensive calculations. In most cases, we don't need to worry about it. Besides, most chains don't even exceed a length of four.

The point of this article is to serve as a reminder that just because we can chain method calls, it doesn't mean we should overdo it. It is our responsibility as developers not to abuse this power. No matter how negligible, there really is a performance cost for each method we attach to a chain. If there is one thing you should take away from this article, it is that longer chains mean more iterations.

Unless you want to face the wrath of unnecessary iterations, please don't "overchain" array methods.

Top comments (61)

Abdullah Albanyan:

While I agree with the content of this article, I think any good article with optimization advice needs some basic profiling to prove its point. Something as nebulous as "loop iterations" isn't really a convincing metric.

Here are the timings of each of the array operations performed in this article as performed on a randomly generated array of 50000 numbers on my machine. This gives an idea of the relative performance of each optimization.

sum:       6.189208984375ms
level1Sum: 4.746337890625ms
level2Sum: 1.995849609375ms
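A rough way to reproduce timings like these in Node.js (the array size matches the comment, but this harness is a sketch of mine, not the commenter's original):

```javascript
// Build a random array of 50,000 numbers and time each variant once.
const nums = Array.from({ length: 50000 }, () => Math.random() * 4 - 2);

console.time('sum (three passes)');
const sum = nums
  .map(x => x * x)
  .map(x => x * 3)
  .reduce((prev, curr) => prev + curr, 0);
console.timeEnd('sum (three passes)');

console.time('level1Sum (two passes)');
const level1Sum = nums
  .map(x => 3 * x ** 2)
  .reduce((prev, curr) => prev + curr, 0);
console.timeEnd('level1Sum (two passes)');

console.time('level2Sum (one pass)');
const level2Sum = nums.reduce((prev, curr) => prev + 3 * curr ** 2, 0);
console.timeEnd('level2Sum (one pass)');
```

All three variants compute the same sum; only the number of passes (and intermediate arrays) differs.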

Mihail Malo:

That is an insane improvement.

Charles Wood:

Percentage-wise, sure. In practice, is a human being going to notice a 4.2ms improvement?

Mihail Malo:

Your phone's battery will notice.

Tinus Smit:

I can't upvote this enough. It is such a simple, yet effective way of demonstrating where inefficiencies of code have a real-world impact. I'm going to use this in future discussions. People often see such metrics in isolation, without understanding the larger impact of why it's important to write more efficient code.

I work with server-side code mostly, and the same principle applies. If I can save the server even 1 second of processing on a given request, then it means that every instance of this request has time saved. Which means that many concurrent requests will not use as much of the server's resources.

^ applies to client side code as well, which means if you replace "server" with "client" (desktop / mobile), then you still have the same gains.

Tari R. Alfaro:

I always thought the whole "humans won't notice the performance improvements" thing was bogus. Sure, they probably won't see a difference.

However, their machines do. Sure, we are building applications for humans, but that doesn't mean we should overlook performance that only machines will notice.

There is a downside: typically, performance-tuned code has greatly reduced human readability, like the solution provided in this article.

Basti Ortiz:

Sometimes it's a price we have to pay when it's completely necessary. Indeed, it is frustrating that we can't have both readability and performance. Perhaps transpilers could convert less performant code into more performant counterparts, but that just adds a whole new layer of complexity to the project. It isn't really a "solution" per se.

Mihail Malo:

Transpilation seems like the cheapest of the three possible options to me.
Though, I don't see a lot of transpilation optimizing for performance.
I feel like we have the community for such an undertaking now, but those that care about raw performance of their web scripts have now focused on WASM instead. Which is a smart move, and has a lot more potential.
Yay WASM!

Daniel:

Except that, if you are using JS for web development, as most of us are, 99% of the time you will be working with arrays containing tens of objects, and these optimizations will be irrelevant.

Orian de Wit:

I've often heard "there will never be more than 20 items on this page" followed by "let's add this complex metadata to each item" followed by "shit, due to unforeseen circumstances, a big client needs 20000 items on that page, and now it feels sluggish".

I'd say don't spend time on difficult optimizations with negligible real world improvements... But when the optimization is extremely easy, why would you NOT do it?

Mihail Malo:

I wish JS runtimes did better inlining and monomorphization.
Then you could have your own utilities like

export const map = (arr, fn) => {
  const len = arr.length
  const out = new Array(len)
  for (let i = 0; i < len; i++) {
    out[i] = fn(arr[i])
  }
  return out
}

Which beats out what the spec method does:

export function map(fn, thisArg) {
  const O = Object(this)
  const len = O.length
  if (!(fn instanceof Function)) throw new TypeError(`${fn} is not a function`)
  const A = new Array(len)
  for (let k = 0; k < len; k++) {
    const Pk = k.toString()
    const kPresent = O.hasOwnProperty(Pk)
    if (kPresent) {
      const kValue = O[Pk]
      const mappedValue = fn.call(thisArg, kValue, k, O)
      Object.defineProperty(A, Pk, { value: mappedValue, writable: true, enumerable: true, configurable: true })
    }
  }
  return A
}

But most of that is lost unless you actually write the upper loop inline, because unlike the built-in methods, it gets deoptimized if you call the utility with many different fn functions, especially when some of them throw.

Basti Ortiz:

Thank you so much for your advice and for taking the time to investigate further! I will definitely add metrics next time I write performance-related articles.

Szymon Pilkowski:

I disagree with this article.
We shouldn't sacrifice readability and composability for performance gains, and we don't have to.

Our goal here is to iterate over things in a way that is both readable and performant. Let's have a look at different methods of achieving this.

We want to multiply stuff by 2, and then add 4. Let's define these goals as functions:

const mulByTwo = (n => n * 2);
const addFour = (n => n + 4);

This, as mentioned in the article, is readable, but may become a performance issue:

[1, 2, 3].map(mulByTwo)
         .map(addFour);

The proposed solution, however, is not very readable, and doesn't scale/compose well:

[1, 2, 3].map(n => n * 2 + 4);

This uses composition, but is far from readable because of the reversed, "inside-out" order of execution:

[1, 2, 3].map(n => addFour(mulByTwo(n)));

Solution? Best of both worlds. Let's create a higher-order function which would allow us to pipe
other function calls in a nice, clean, left-to-right order:

const pipe = (fn, ...fns) => 
  (...args) => fns.reduce((acc, f) => f(acc), fn(...args));

We can now do stuff!

[1,2,3].map(
  pipe(mulByTwo, addFour)
);

In fact, multiple functional/semi-functional libraries have a pipe function in their toolbelt; it's also a part of my very minimal FP library. We can likewise define all, some, or none functions and use them to compose our filter logic.

Mihail Malo:

Funny, I usually define my pipe as follows:

const pipe = (...fns) => arg => {
  let val = arg
  for (const fn of fns) {
    val = fn(val)
  }
  return val
}

I also make args unary, so that I don't have to think about methods like map injecting spurious parameters.
Next comes giving it a TS type signature.
Which involves overloads.
It's horrible, but the results are great.

Basti Ortiz:

Now this is an elegant solution. The only problem with it is that it isn't exactly the most beginner-friendly code. You must have a significant amount of experience with functional programming to be able to fathom the notion of functions inside functions inside functions for composition.

Besides that, it really is an elegant solution that combines the best of both worlds. Pipes truly are a marvel of (software) engineering.

Orian de Wit:

isn't exactly the most beginner-friendly code. (...) Pipes truly are a marvel of (software) engineering.

Maybe it's because I started out learning programming from the FP side, but composition feels like pretty intuitive beginner stuff, and I've always had trouble reading ugly loops which store all kinds of stuff in variables.

I think "beginner friendly" highly depends on what you've been exposed to so far.

Basti Ortiz:

Yuppp, loops are always ugly. There is no question about that. You have no idea how many times I've gotten myself into infinite loops; it was a horrible experience as a beginner.

And I think you're right about that. It does depend on what you've been exposed to as a beginner.

I, Pavlov:

This is a bit like saying that nails are always ugly, you should use screws. Beginners should learn when to use which, in addition to how not to hit their fingers.

FP is cool and all but FP in JS has become an unbearable infomercial lately: "Have you suffered from infinite loops? Ever feel like your code doesn't have enough dots or arrows? Tired of using plus signs for addition? Try FP"

Julius Häger:

What is the performance of this solution? It will almost certainly be slower than the solution in the article, because the JavaScript engine almost certainly doesn't do functional optimization. (Plus, you create a capture, which is slow and needs more optimization, which adds to potential JIT time.)

It is also less simple, as you have to understand more concepts to understand the code; because JS is an imperative language, the functional style is a bad fit. In imperative languages, a for loop with the proper if statements and calculations will always be the better option for both performance and simplicity.

Please don't bully my phone's battery more than JS already does. :)

Richard Pringle:

This is what I call unnecessary optimization. If you follow time complexity optimization, O(3n) is still just O(n). Now, I'm not sure what the space complexity is on this, but unless your arrays are massive, I'm sure it's negligible for integers.

.map and .filter do create new arrays so that definitely is something to be aware of, but doing 3 operations in 1 loop is effectively the same thing as 3 loops with 1 operation.

I think you're misleading beginner JS developers with this article.

Basti Ortiz:

Yes, that is right. But even though they ultimately boil down to an O(n) time complexity, it can still have a significant performance impact if your arrays are massive, as you said. In a way, I'm not really misleading beginners here because it's the truth of the matter.

But for most cases, one does not need to worry about these performance impacts. It really is just an unnecessary optimization.

Richard Pringle:

I don't believe that you make any comments about .map using more space in your article do you? You also don't comment and say this is only worth doing in one step if your arrays are massive. That's why I suggest that you are misleading beginners. Your example is contrived. For this article to be really useful, you should use a really big array as an example and demonstrate the performance benefits of moving all your operations into the reducer. At the same time, you should speak about the code using significantly less space and that's why it's faster.

Basti Ortiz:

Ah, I see now. There's a reason why I didn't add the #beginners tag for this article. It is exactly for the reason you stated. I wasn't targeting beginners for this article. I was assuming that those who read it will most likely know how to optimize their algorithms themselves with what they learned from this article as a guide on how to do so. I suppose that was wrong of me to assume.

Also, I was actually thinking about discussing the space inefficiency of long chains of array methods, but I felt that doing so would have caused me to stray away from the actual message I was trying to convey throughout the whole article: longer chains mean more iterations.

Orian de Wit:

Waiting 1m is still better than waiting 3m. Disregarding the less significant factors in O(3n) is not always a good idea in practice.

I agree all of this is a pretty useless exercise when you have 3 consecutive maps over 10 or even 1000 items, but this optimization is the reason many FP libraries offer super easy to use pipe/compose functions.

I'd say, if the optimization is that easy, do it, even if the gains are small. Without pipe/compose available, I'd be in favor of readability over performance, though.

Richard Pringle:

What are you talking about? The time complexity doesn't change by moving everything into the reducer; it's the space complexity that changes, because .map creates a new array. It's not 1ms vs 3ms...

Orian de Wit:

You are right that it doesn't take three times as long, my mistake.

But it's never just "1+1+1=3" either.

Especially if the arrays are large, and the operations are simple, then the overhead of copying the array doesn't just affect memory use, it also takes longer.

Dario Griffo:

The only comment based on the O notation in both time and space complexity. Looks like not all folks know about that.

Mihail Malo:

Do I like functional programming? Yes.
Is this particularly readable and self-explanatory, after all the optimizations? Meh.
Are there still pointless function calls and variable allocations? Basically.

const level2Sum = nums
  .reduce((prev, curr) => prev + 3 * curr ** 2, 0);

So if I will at all concern myself with optimizing this case I'll just go to the following, which doesn't at all lack readability:

let level3Sum = 0
for (const num of nums) {
  level3Sum += 3 * num ** 2
}

But since I am in fact obsessed with using const so I know I'm not mutating this number later in the use site, I will extract this loop and assign to const:

/* IN THE ROOT SCOPE, NOT IN THE FUNCTION!!! */
const l3s = (nums: number[]) => {
  let sum = 0
  for (const num of nums) {
    sum += 3 * num ** 2
  }
  return sum
}
/* elsewhere... */
const sum = l3s(nums)

Jamal Al:

While using map, filter, and reduce is great for readability, a for loop is the fastest of all. So if you're looking to optimize as much as possible, I would go with a for loop.

I, Pavlov:

In Chrome 71 level2Sum is still 88% slower than a C-style for loop.

jsperf.com/chained-maps-vs-for-loop

In Edge 42 the level3Sum is just as slow as the original chained maps.

Basti Ortiz:

Oh, man. Indeed, that does look pretty messy. 😬

Mykola Bilochub:

Great article; the advice you provided can truly help with optimizing operations on arrays.
As for the first piece of advice: in our company, we've created a custom iterator to optimize chaining without making it difficult to read.
It uses "lazy" computation, meaning it doesn't compute anything until you ask for it, and thus it only does one iteration over the array no matter how many calls there are in the chain.
Due to the additional overhead, though, it still won't beat your last example in performance, where you combined everything into one call, but whenever there is more than one operation in the chain, it works somewhat faster.
It also helps when performing operations on other containers like Map and Set, since they also support the iterator protocol but unfortunately do not yet support all of the operations available for arrays, like map, reduce, and others (it's coming soon, though).
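For readers curious what such a lazy chain might look like, here is a generator-based sketch of the idea (my own illustration, not the commenter's actual implementation). Each intermediate operation wraps the source iterable, and nothing is computed until a terminal call like reduce:

```javascript
// Minimal lazy sequence: intermediate ops build iterators, no intermediate arrays.
class Lazy {
  constructor(iterable) { this.iterable = iterable; }
  map(fn) {
    const src = this.iterable;
    return new Lazy((function* () {
      for (const x of src) yield fn(x);
    })());
  }
  filter(pred) {
    const src = this.iterable;
    return new Lazy((function* () {
      for (const x of src) if (pred(x)) yield x;
    })());
  }
  // Terminal operation: the single pass over the source happens here.
  reduce(fn, init) {
    let acc = init;
    for (const x of this.iterable) acc = fn(acc, x);
    return acc;
  }
}

// Works for any iterable source (arrays, Map, Set, ...).
const result = new Lazy([-2, -1, 0, 1, 2])
  .map(x => x * x)
  .map(x => x * 3)
  .reduce((prev, curr) => prev + curr, 0);

console.log(result); // 30, computed in a single pass
```

The per-element generator calls are exactly the overhead the commenter mentions: this still loses to a single hand-fused reduce, but beats chains of eager methods once more than one operation is involved.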

Jonathan Boudreau:

I don't think chaining array methods is a big deal, since the vast majority of the time performance doesn't matter. If it's hot code, then I can see it being a concern, but the advice given in this article is not generally applicable, in my honest opinion.

Basti Ortiz:

Yes, that's right. You definitely shouldn't be bothered too much by these nuances because they're too negligible to even notice.

I could say that the point of this article is pretty much just to serve as a heads-up.

Mihail Malo:

We wouldn't have to worry about it at all, ever, if they introduced them properly, as iterator/stream adapters, and not one-off map(this:Array, fn): Array functions.

Jesse Phillips:

Exactly, D lazily operates over the elements and only needs to be eager for certain algorithms like sort.

Mihail Malo:

sort isn't a streaming/iterating operation, it's a collection operation. So if you have a stream going, it would make sense to sort on insertion into collection. Something like iter.sort(): Array or iter.sortInto(Array): void

Jesse Phillips:

D does an in place sort, so the range needs swap and random access. But yes, it isn't a stream.

Mihail Malo:

Which means it needs to allocate the whole range, as an internal implementation.
I feel like whatever optimizations you can get out of hiding that implementation (Are there any, even?) are outweighed by the better decisions the developer will make when you shove this information in their face.

Jesse Phillips:

It isn't hidden; it won't compile if the range doesn't provide the needed operations. The simplest solution is to call .array() before the call to sort.

Mihail Malo:

That sounds awesome...
Are we talking about Dlang? :v

Jesse Phillips:

Mihail Malo:

Aw yiss, get that D!
Homegirl on a mission.

Also: Oh no, this competes with Rust in a number of usecases, better say bad things about it all around just in case.

Also: Vector operations look sweet.

DrBearhands:

Good article, but this shows JS is not functional. By the isomorphism of FP with CT we know that (fmap g) ∘ (fmap f) equals fmap (g ∘ f), so a compiler would be able to optimize this away easily. This is not the case in JS because g and f might have order-dependent side-effects, i.e. it is not functional.

Basti Ortiz:

Precisely! That is a good point. I believe this is exactly why JavaScript is considered to be a "multi-paradigm" programming language, at least according to Wikipedia. It may not exactly have the purest functional programming practices, but it sure does blend them pretty well with object-oriented programming, considering that the two are kind of like polar opposites of each other.

DrBearhands:

I know this is a popular stance, but I disagree with it.
Just being able to pass functions around as arguments doesn't make something functional. Claiming otherwise seems disingenuous to me, as you'd be claiming the benefits of FP without delivering (barring some syntactic sugar). You might call JS "functional style" or "closure-centric procedural". I just made the latter up, but I think it's rather descriptive.

Basti Ortiz:

Now that you mention it, I think I agree with you in that regard. It does seem more accurate to consider JavaScript a "functional-style" programming language rather than a "functional programming language mixed in with some object-oriented features".

Brian Bohe:

This is something that should look the same to the developer and be optimized away in the compiler or interpreter. I disagree with the idea of pushing developers away from writing nice code.

Mihail Malo:

Yeah, but they already implemented it the way they did.
Now we must sit and suffer.
Or use a transducer library which sometimes beats this horrible decision in terms of performance in userspace without calling out to native code... and profile when we should use that. Once again putting more strain on the developer.

Basti Ortiz:

Yeah, I definitely agree with you there. At least this "feature" forces us to think more carefully about our code. In the end, we become better programmers because of it.

dallgoot:

Very useful to remind people of that.
This is basic algorithmic time complexity: with an array of N elements,
one loop = O(N)
2 loops = O(N+N)
X loops = O(N*X)
So for an array of 10 elements and 4 .map calls, the execution time is 4 times what is needed.
And you need to add to that the time to push new values into the new arrays.
I think a general rule can be "NEVER USE THE SAME ARRAY METHOD TWICE".
Also, maybe always use "filter" before "map"?
And I would also like to point out: if for some people a 3-term operation is a readability problem, god, are people becoming this lazy?

I too like readable code, but our primary goal is efficiency.
Efficiency = performance, and it should ALWAYS take precedence over whatever idea of "beauty" someone may have.
I'm not talking about micro-optimizations but about avoiding unnecessary operations.
First do the job in an efficient manner, then refactor to make it as human-readable as possible.
Sometimes complex things are complex to read.
We are not making art.
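The "filter before map" rule of thumb above can be sanity-checked by counting calls. Note that swapping the order is only safe when the predicate selects the same elements either way (it does here, since tripling preserves parity); this example is my own illustration:

```javascript
const nums = [1, 2, 3, 4, 5, 6];

let mapCalls = 0;
const triple = x => { mapCalls++; return x * 3; };

// map first: the callback also runs for elements filter will discard.
const mapFirst = nums.map(triple).filter(x => x % 2 === 0);
const callsMapFirst = mapCalls; // 6 calls

// filter first: the callback only runs for surviving elements.
mapCalls = 0;
const filterFirst = nums.filter(x => x % 2 === 0).map(triple);
const callsFilterFirst = mapCalls; // 3 calls

console.log(mapFirst, filterFirst); // [ 6, 12, 18 ] [ 6, 12, 18 ]
```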

Basti Ortiz:

I mean, one can't really say that those who disagree with this article are completely wrong. Readability really is an important aspect of writing code. Performant code means nothing if it is unreadable.

Also, yup, I definitely agree with using filter before map. That should honestly be a default for all JavaScript programmers.

Charles Wood:

I like the perspective that we have to decide what we're optimizing: the writing, the execution, or the reading. Each has its place and tradeoffs.

Cees Timmerman:

JSPerf is down atm, but I'm pretty sure this is faster and easier to read:

const nums = [ -2, -1, 0, 1, 2 ]
var sum = 0; for(var i = 0; i < nums.length; ++i) sum += nums[i]**2 * 3

Basti Ortiz:

I mean... yes, it definitely is, but the point of my article is not to promote the most performant code; rather, it is to remind the reader to be wary of "overchaining" array methods because of its implications for the number of array iterations. 😅
