DEV Community

Stephan Meijer

Posted on • Updated on • Originally published at meijer.ws

Stop mutating in map, reduce and forEach

There are plenty of articles that will try to convince you to use the map, filter and reduce methods. Fewer of them mention forEach, and fewer still mention the more traditional for loops as a serious alternative. Or when to use map over reduce, or especially forEach.

Programming is mostly about opinions and (maybe a bit too much) about something we like to call "common sense". In this article, I'm sharing my opinion and writing about these functions and the problem of side effects (mutating). It was triggered by this tweet from Erik Rasmussen today, and by experience from the past.

I still remember a change I requested during a code review. It grew into a discussion among the team, and even came up during the next retrospective. PR #1069, July 18, 2019, author unimportant.



path?.map(id => checkID(id)); // eslint-disable-line no-unused-expressions



My request was to change it to:



path?.forEach(id => checkID(id));



A little background: path is a string[], and checkID runs some validations on each string to see if it's an ID-like value. If not, it throws an error.
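The real checkID isn't shown in the PR, so purely to make the example concrete, a hypothetical sketch could look like this (the hex-string format is my assumption, not from the PR):

```javascript
// Hypothetical sketch — checkID isn't shown in the PR. The validation rule
// here (a 24-character hex string, like a MongoDB ObjectId) is an assumption.
function checkID(id) {
  if (!/^[a-f0-9]{24}$/.test(id)) {
    throw new Error(`Invalid id: ${id}`);
  }
}
```

The important part is only its shape: it returns nothing useful, and throws on invalid input.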

Why did I request the change, and why mention it in the retro? There is no law against calling methods in the map callback, or throwing from within it. It just doesn't match my expectations. And I still believe I'm within my rights there.

Map

My expectation for map is that it "maps" one value to another. Like so:



const input = [1, 2, 3];
const output = input.map(value => value * 2);



There is an input value ([1, 2, 3]), map does something with it, and returns an entirely new value. input !== output, and my expectation is that whenever an array element changed, it doesn't match the previous value either. In other words, I expect that for at least one element, input[n] !== output[n].
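A tiny check (my own addition) confirms both expectations for the doubling example:

```javascript
const input = [1, 2, 3];
const output = input.map(value => value * 2);

console.log(input === output);       // false: map returned a new array
console.log(input[0] === output[0]); // false: 1 !== 2, the element changed too
```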

We're also able to extract the callback function, so that we end up with a pure, testable function. My expectation of a map call is always that it is side-effect free. No exceptions.



function double(value) {
  return value * 2;
}

const input = [1, 2, 3];
const output = input.map(double);



Expectations

Now let's take that example from Erik



return items.map((item) => { 
  item.userId = userId; 
  return item; 
});



And build some code around it, so it gets a bit easier to work with.



function addUserId(userId) {
  return (item) => { 
    item.userId = userId; 
    return item; 
  }
}

const items = [
  { id: 1 },
  { id: 2 },
];

const newItems = items.map(addUserId('abc'));



How do you feel now about mutating the item objects inside that map? When you look at the small snippet from Erik, you might be okay with it. But after extracting that callback function, I hope it starts to feel wrong. If you don't see the problem I'm trying to highlight, try to answer the following questions:

  • what does items[0] look like?
  • what does newItems[0] look like?
  • what does items === newItems return?
  • what does items[0] === newItems[0] return?
  • do these answers match your expectations?
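If you actually run the snippet (repeated here so it's self-contained), the answers are:

```javascript
function addUserId(userId) {
  return (item) => {
    item.userId = userId;
    return item;
  };
}

const items = [{ id: 1 }, { id: 2 }];
const newItems = items.map(addUserId('abc'));

console.log(items[0]);                 // { id: 1, userId: 'abc' } — the original was mutated
console.log(newItems[0]);              // { id: 1, userId: 'abc' }
console.log(items === newItems);       // false — map did return a new array
console.log(items[0] === newItems[0]); // true — but it holds the very same objects
```

So we get a new array, filled with the old, mutated objects. The worst of both worlds.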

forEach

Now let's simply change that map call to a forEach.



const items = [
  { id: 1 },
  { id: 2 },
];

items.forEach(addUserId('abc'));



What does this do with your expectations? Did it change anything?

Whenever I see a forEach, I expect side effects. Something is being done for (or to) each value in the array. The fact that forEach doesn't have a return value strengthens this feeling.

And this is entirely personal, but I've stopped using functional forEach calls to mutate objects as well. I'm still okay with a forEach(sideEffect), but I won't use it to mutate values. I use for…of loops for that, as I find it easier to recognize them as causing mutations.



const items = [{ id: 1 }, { id: 2 }];

for (const item of items) {
  item.userId = userId;
}

return items;



Please compare that to the original, and feel free to share your thoughts in the comments:



const items = [{ id: 1 }, { id: 2 }];

const newItems = items.map((item) => {
  item.userId = userId;
  return item;
});

return newItems;



Reduce

Some would say that reduce is meant for mutating values. In my opinion, they're wrong. Reduce is meant for when the shape of the container changes. Think of conversions between objects and arrays, or even from collections to primitives. Or a change in the length of the array. Reduce is about changing the shape of the entire collection, rather than about changing the shape of individual entries. For that, we have map.
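For example (my own illustration, not one of the article's original snippets), reduce can change the container's shape from an array into a lookup object:

```javascript
// Shape change: array of users in, object keyed by id out.
const users = [
  { id: 'a1', name: 'Ada' },
  { id: 'b2', name: 'Brendan' },
];

const byId = users.reduce((lookup, user) => {
  lookup[user.id] = user;
  return lookup;
}, {});

console.log(byId.a1.name); // 'Ada'
```

Note that mutating the accumulator (the fresh `{}` we passed in) is fine here; it's the input objects we leave alone.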

I've changed this section a bit, so let me quote Sebastian Larrieu from the comments below:

reduce is about transforming a collection into a single value, that's why its param is called accumulator.

Sebastian summarizes the purpose of reduce quite well. Think about computing the sum of an array of numbers: an array of numbers goes in, and a single number comes out.



[1, 2, 3, 4, 5].reduce((sum, value) => sum + value, 0);



But the return value doesn't always have to be a primitive. Grouping, for example, is another perfectly valid use case for reduce:



[1, 2, 3, 4, 5].reduce((groups, value) => {
  const group = value % 2 ? 'odd' : 'even';
  groups[group].push(value);
  return groups;
}, { even: [], odd: [] });



Until very recently (two days ago, basically), I saw one more purpose for reduce. I used it as an alternative for a filter » map chain, because reduce can do the same thing in a single iteration. Think:



[1, 2, 3, 4, 5]
  .filter(value => value > 3)
  .map(value => value * 2);



Or



[1, 2, 3, 4, 5].reduce((values, value) => {
  if (value <= 3) {
    return values;
  }

  values.push(value * 2);
  return values;
}, []);



The difference here is that reduce walks the array only once, whereas the filter and map combo walks it twice. For 5 entries, that's not a big deal. And for larger lists, it turns out it's no big deal either. (I thought it was, but I was wrong.)
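A rough micro-benchmark sketch of that comparison (timings vary wildly by engine, data size and warm-up, so treat any numbers you see as anecdotal rather than conclusive):

```javascript
// One million integers, so the per-pass cost is at least measurable.
const data = Array.from({ length: 1_000_000 }, (_, i) => i);

console.time('filter + map');
const a = data.filter(value => value > 3).map(value => value * 2);
console.timeEnd('filter + map');

console.time('reduce');
const b = data.reduce((values, value) => {
  if (value > 3) values.push(value * 2);
  return values;
}, []);
console.timeEnd('reduce');

// Both produce the same result; only the number of passes differs.
```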

The filter().map() chain is easier to read. I made my code harder to read, for no gain at all. And with that, we're back at the "common sense" issue. Programming isn't all black and white. We can't document, spec, or lint every single rule or choice we have to make. Use what feels best, and take your time to consider the alternatives.


👋 I'm Stephan, and I'm building updrafts.app. If you wish to read more of my unpopular opinions, follow me on Twitter.

Top comments (33)

Peter Hozák

I suspect in a lot of cases, especially for arithmetic on larger dense arrays of numbers, the .filter().map() version will not only be MUCH simpler to read, but also much faster because the simpler unconditional map works better on modern CPUs (not sure if current compilers actually vectorize it, or if it's fast enough already).

Stephan Meijer

It looks like your suspicion is correct. Despite the additional iterations, I'm unable to produce large differences between filter/map and reduce.

Here is a perf comparison. There is no clear winner. Sometimes filter/map is slightly faster, sometimes reduce.

For this simple test, filter»map is faster with 10k records or fewer. With a higher number of records, reduce starts to be slightly faster. I'm not sure what's going on there. But I'm sure I have to update the article. Don't blindly use reduce as a perf optimization; it might perform worse.


Mihai Stancu • Edited

In today's software, the bottleneck usually isn't the CPU. Most modern applications hit many other bottlenecks well before having to consider optimizing for CPU time (RAM, sync-io, async-io pooling, etc.).

And "since the dawn of time man hath prematurely optimized the wrong thing" meaning that a prematurely optimized application will be filled with "clever" optimizations that don't address the real optimization problem and architecturally make it harder to address.

So the question is what would you rather have to do:
a) optimize a readable but unoptimized application
b) find the correct optimization to add (to a pile of optimizations) in an unreadable (partially-)optimized application**
?

** Terms and conditions may apply which may sometimes lead you to deoptimizing existing optimizations either for clarity of what that thing does (by intent) before changing it or for exposing the piece that truly needs to be optimized that the architecture is making very hard to reach.

Marco Moretti • Edited

Both examples are O(n) complexity, so they should perform almost the same. Only on large inputs will you see differences.

Stephan Meijer

True. I can't believe I missed that. Jacob Paris explains it dead simple in a tweet.

Looping once and doing three things on each item performs the same as doing one thing on each item but looping three times.

It's time I return to school.

Matthias Perktold

I also expected the filter+map version to be slower, not because of the extra loop, but because of the intermediate array that needs to be created. But maybe the browser can optimize it away.

Stephan Meijer

I haven't profiled it on that level, so maybe there is a difference in memory usage. I honestly don't know. Browsers have changed quite a bit over the last few years. So I wouldn't be surprised if the intermediate array has been optimized away.

It would be interesting to compare the memory profiles though.

Vasilii Kovalev • Edited

Thank you for the article! I totally agree with the purposes of map and reduce.

I personally find myself avoiding data mutations altogether, and if I want to extend existing data, I do it on a copy of the data item:

function addUserId(userId) {
  return (item) => ({ 
    ...item, 
    userId,
  });
}

const items = [
  { id: 1 },
  { id: 2 },
];

const newItems = items.map(addUserId('abc'));

Thus it frees us from unnecessary side effects and heartaches.

Other cases, like pushing to or changing external data conditionally, are covered by reduce (as in your last example).

It seems the last time I used forEach, it was to run unit tests with different test data but the same expect (like jest-in-case, before I found out about it).

What real use cases for mutating data instead of copying it can you imagine?

Valentin

Allocating memory on every iteration like this would result in horrible performance.

Stephan Meijer

Oh, I'm definitely with you there. I just felt that if I made that function immutable, people might miss the point I was trying to make about map vs forEach.

That being said, I do have a couple of mutating functions on the server side. It does perform better, and there isn't always a need to keep things immutable. Think, for example, of transformations that are applied after fetching from the db and before sending to the client.

Also on the client, now that I think about it. I sometimes add "private properties" to objects that I need to recall later. If I don't use them for display (/to trigger rerenders), I might just as well mutate the object.

It really is a case-by-case thing, though.

Peter Hoffmann

Hi, sorry to have doubled your comment; I only saw that you basically proposed the same change after reading it a second time.

hidden_dude

You are correct. Map is for mapping.

You could hack reduce (or many other functions) to do everything you do with map or forEach, but that would violate the principle of least surprise.

The problem is that many people today learned these concepts as "alternatives" that JavaScript provides, rather than as basic building blocks, as you would in a language like Lisp or Smalltalk (they're named differently in Smalltalk, but are essentially the same concepts).

Stephan Meijer

Ooh, definitely. I have seen so many blog posts with titles like "become a better developer by using map!". People are sensitive to that, and many of us follow written advice blindly.

hidden_dude

So much of software development is just fad driven without really understanding or evaluating the benefits.

I think at this point it's just getting wasteful and will need to stop eventually.

Sebastian Larrieu • Edited

I agree with everything but the reduce part.
In my opinion (and based on what I've learned), reduce is about transforming a collection into a single value; that's why its param is called accumulator.
I think your reduce use case is good for performance, but not very clear.
I believe such complex behaviour (filter AND map) should live in a for loop; using reduce for filter and map is as hacky as mutating data in map.

Stephan Meijer • Edited

I agree. I messed up that one. 😔 I might rewrite that paragraph to not confuse readers and teach them wrong patterns.

*edit, I updated that section. I hope you don't mind I quoted you.

Myzel394

Totally agree with you! Sometimes I'd like to pop out my eyes and throw them at my monitor when I see somebody using map instead of forEach! And if that person even eslint-disables it, there is no hope left.

Thanks for your <article />! :)

Christopher Wray • Edited

After your explanation, using forEach makes a lot more sense.

And yes, when reading the code, forEach gives me the right idea about what is actually happening when a callback function is used...

I think map makes sense when you are displaying that data (like a map) somewhere on the front end.

Magne • Edited

It seems like that last implementation using .reduce also violates the aforementioned purpose of reduce (since it returns an array, albeit a filter-mapped one): "reduce is about transforming a collection into a single value, that's why its param is called accumulator."

Stephan Meijer • Edited

I'm not sure whether you're referring to the grouping example, or to the alternative for filter.map? Anyway, I do mention that the return value doesn't always need to be a primitive.

But the return value doesn't always have to be a primitive. Grouping for example, is another very valid use case for reduce.

In that regard, I still see that snippet as a valid use case.

That being said. It doesn't perform better than the filter+map sequence. And it's also harder to read. So I wouldn't write it like that any more.

Is that section confusing? I rewrote it a bit after feedback and reconsideration, but decided to leave the filter/map vs reduce comparison in, so the comments would still make sense.

Magne • Edited

I was thinking about the filter+map example. But the grouping example is also a candidate.

The official docs say:

The reduce() method executes a reducer function (that you provide) on each element of the array, resulting in a single output value.

developer.mozilla.org/en-US/docs/W...

It seems like an abuse to use .reduce in a way that treats an array or an object (containing two arrays) as "a single output value", especially when the examples in the official docs treat a single output value as either an accumulated sum, or one of the values in the original array.

I find clever tricks almost always more confusing than a more straightforward boring approach (like a for-loop or .forEach). In this case I agree that the filter+map is simplest.

I wouldn't consider the grouping to be a valid use case for .reduce either. Semantically speaking, it is more a case of filtering an array based on two conditions. Maybe this would be clearer to read:

const isEven = val => val % 2 === 0
const isOdd = val => val % 2 === 1
const ar = [1, 2, 3, 4, 5]
return {
 even: ar.filter(isEven),
 odd: ar.filter(isOdd),
}

In general, I find it more helpful to do one thing twice than to do two things at once. It also follows the single-responsibility principle for functions.

I would perhaps keep the sections in, but before every such example just include an "Update: Due to insightful comments, I no longer recommend this approach."

Thanks for your post, and for your thoroughness in following up!

Barry

I agree with what you said. I find these get abused a bit in JS, but in other, more FP-heavy languages with types, you are exactly right about your expectations. forEach and map 'mean' something and have to obey certain laws, and just like you, when I read code using them I expect certain things.

Peter Hoffmann

I understand your concerns and would like to propose something more idiomatic to modern javascript and the map function.

const newItems = items.map(i => { ...i, id: userId })

This way, you don't have the two effects of a) changing items and b) returning the changed array. By switching to forEach you choose to only have effect a). By changing to my proposition you choose to have effect b). Both decisions are quite fine, just wanting both seems indeed somehow redundant.

Stephan Meijer

You have a bug though 😇

const newItems = items.map(i => ({ ...i, id: userId }))

But yeah, I see where you're coming from, and I do agree that that pattern is often the better one. It was just not the message I was trying to give with this article. You might be interested in reading dev.to/smeijer/don-t-stop-mutating... as well.

Thanks for sharing though. Appreciated 🙂

Shon Vella

Your feeling about the first example — that map was used where forEach should have been — is correct, but you seem not to understand why, other than that it seems wrong. The reason map is inferior in this example is that it creates a new array, fills it with the return value of each call to checkID() (probably undefined), and then immediately throws it away. Memory allocation (and the subsequent garbage collection) isn't free, and is in fact probably more expensive by itself than what checkID() is doing.

Stephan Meijer • Edited

I'm well aware that map creates an array, and that the return value is not used here. That's just not the problem I wanted to focus on in this article. I could have made the contrived examples a bit more complex, by creating an example that does use the return value from that map function while also executing a side effect. But that would have made the examples harder to grasp, without contributing to the message I was trying to send.

Shon Vella

Sorry, I didn't mean to offend. You did specifically say "It was just that it doesn't match with my expectations", which led me to believe you were going off a gut feeling rather than a concrete, pragmatic reason for raising it as an issue, which in turn led me to lose confidence in what you had to say, and I stopped reading.

Stephan Meijer • Edited

No, it's not just gut feeling. Or well, maybe the real request for change was. I do understand what you're saying, but the performance impact there is so minimal that I would not use it as an argument to request a change.

I find the readability issue (unexpected side effects) a much bigger problem.