Spread syntax (...) allows an iterable such as an array expression or string to be expanded in places where zero or more arguments (for function calls) or elements (for array literals) are expected, or an object expression to be expanded in places where zero or more key-value pairs (for object literals) are expected.
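For example, all three uses:

const nums = [1, 2, 3];
console.log(Math.max(...nums));     // function call: 3
console.log([...nums, 4]);          // array literal: [1, 2, 3, 4]
console.log({ ...{ a: 1 }, b: 2 }); // object literal: { a: 1, b: 2 }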
I had a serious issue with the { ... } spread operator that made the browser stop working entirely.
Let me explain:
I have an API that returns an array of objects, and I want to reduce these objects to a single object.
My implementation was this:
const features = APIResponse.features.reduce(
  (prev, feature) => {
    return {
      ...prev,
      [feature.id]: feature.value
    };
  },
  {},
);
The problem with this approach is that I am copying the previous object all over again and creating a new object in each iteration, which is - yes, you guessed it right - an extra O(N) per iteration, making the reduce O(N^2) instead of O(N).
The correct solution should be:
const features = APIResponse.features.reduce(
  (prev, feature) => {
    prev[feature.id] = feature.value;
    return prev;
  },
  {},
);
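For a sense of the gap, here is a quick, unscientific timing sketch using 20k synthetic features (a made-up dataset for illustration):

const features = Array.from({ length: 20000 }, (_, i) => ({
  id: `f${i}`,
  value: i,
}));

console.time("spread (O(N^2))");
features.reduce((prev, f) => ({ ...prev, [f.id]: f.value }), {});
console.timeEnd("spread (O(N^2))");

console.time("mutation (O(N))");
features.reduce((prev, f) => {
  prev[f.id] = f.value;
  return prev;
}, {});
console.timeEnd("mutation (O(N))");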
What to learn from this:
- Easy does not mean efficient.
- You should always understand how these cool operators work.
Top comments (16)
Another (more elegant imo) solution to this could be to use Object.fromEntries and an Array.map:
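Something like this sketch (using the APIResponse object from the post):

const features = Object.fromEntries(
  APIResponse.features.map((feature) => [feature.id, feature.value])
);

This is also a single O(N) pass: map builds the [key, value] pairs and Object.fromEntries assembles the object once.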
and although you are right in that the first example is inefficient, slower, and may consume more memory than the other one, I have a hard time believing that it would cause a browser to stop working. For that to be an actual problem you'd need the feature array to have had millions of entries, or have the function that invokes the .reduce run in a loop or something.

about 200Mil features to be exact >_<
Wait what, are you serious? A list this big would weigh multiple gigabytes as JSON, there's no way you transfer that via API requests.
Sorry, I meant it was 20k features that I loop N^2 on, which is 400mil iterations.
I will edit the answer.
In most cases, mutation bugs are a bigger threat than inefficiency. Plus, if you are careful to use immutable operations, you are more able to safely parallelize your code, which will most often give you much greater efficiency benefits than small inline optimizations like mutation.
Use spread by default and, if you must, refactor into mutation where you know you have bottlenecks that affect your users.
Can you please elaborate? Thx
Which part?
How can mutations be more dangerous than inefficiency? Can you give an example?

Or if you have another post that talks about it, I would be grateful.
Sure. Mutation is a kind of side-effect, which is a common source of bugs in code.
For example, let's say I have a function called name() which is there to set the name of the object in question.
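A sketch of how that might look, assuming name() mutates the object passed to it:

function name(obj, newName) {
  // hidden side-effect: mutates the object that was passed in
  obj.name = newName;
}

const obj = { name: "old" };
name(obj, "new"); // nothing at the call site says obj just changed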
Later on, your future self or a collaborator is skimming the code but misses that name() has a mutation side-effect. They see that obj is assigned and that it is not (or can't be) reassigned, and so they assume obj has not changed. So a bug could easily be introduced the next time the code is worked on, because the mutation side-effect obscured the purpose of the code and made it a little more difficult to read.

A better way to write the code would be to use immutable operations like spread to return a new object, so the calling code reads as a plain reassignment.
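Something like this sketch, with withName as a hypothetical immutable counterpart to name():

function withName(obj, newName) {
  // returns a new object; the original is left untouched
  return { ...obj, name: newName };
}

let obj = { name: "old" };
obj = withName(obj, "new"); // the reassignment makes the change explicit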
Now it is very clear to the reader that obj is being changed, so it is less likely that the next programmer will introduce a bug.
It may be tempting to say that programmers should just be more careful readers of code, but when you introduce parallelism (which is very important for efficiency) it becomes clear that bugs are unavoidable without immutability.
Consider:
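Something like this sketch, where a shared object is mutated from a promise callback:

const obj = { done: false };

Promise.resolve().then(() => {
  obj.done = true; // mutation of shared state from asynchronous code
});

console.log(obj.done); // which finishes first, the mutation or this log?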
I believe it might be impossible to predict what this code will do. Will the promise finish first, or will the console.log? That depends on how the particular browser or runtime is implemented and on how many cores are available on the user's machine and what else is running at the time... and so on. (Actually, I don't think this particular example runs in JavaScript because it sheds the outer context when it creates the promise, but this is the simplest example I could think of).
This is why I prefer languages like Elm which don't support mutations at all. One might think that constantly creating new objects would be costly to performance, but Elm benchmarks better than most popular frameworks because it is very parallel by default.
For better examples of mutability bugs than my lazy ones, have a look here: elmprogramming.com/immutability.html
For further reading, look into immutability as a functional programming paradigm.
thank you so much <3 this was an amazing explanation
I enjoyed the read, just want to put in a word in favour of mutation.

Using proper names makes mutation pretty understandable. Even so, sure, it's better to create new instances to avoid bugs.

A case that usually comes up is an object being used in different places and a dev adding a mutation in the middle just for a new use case, breaking things in other data paths (use cases). If you instantiate a new object, array, or whatever, and work with your own instance, this will hardly happen.

There are other methods to avoid this, of course: interfaces in OOP, and good practices when using FP. Maybe the most important thing is to clearly differentiate where you are writing imperative statements and where and how you use declarative ones.
I agree... but that's largely because I intentionally made the example trivial so that the issue would be easy to understand. Real-world mutation bugs can be extremely subtle.
Also agree with that
It's not a question of dangerousness, but rather common sense. Imagine you have a pack of sticky notes and you want to create a dictionary of features. Would you, with every new entry, open a completely new pack and copy the previous dictionary, adding just that one entry? It would be immensely stupid behaviour, I hope there's no question about it.
It's not necessarily dangerous. It's only harmful if used incorrectly...