Should Frontend Devs Care About Performance??

Adam Nathaniel Davis

I was recently talking to an architect at Amazon and he made a very interesting comment to me. We were talking about the complexity of a given algorithm (discussed in Big-O notation), and before we even got too far into the explanation, he said:

I mean, it's not like we need to worry too much about this. After all, we're frontend devs!


I found this admission to be extremely refreshing, and it was entirely unexpected coming from someone in the Ivory Tower that is Amazon. It's something that I've always known. But it was still really nice to hear it coming from someone working for the likes of a FAANG company.

You see, performance is one of those subjects that programmers love to obsess about. They use it as a Badge of Honor. They see that you've used JavaScript's native .sort() method, then they turn up their nose and say something like, "Well, you know... That uses O(n log(n)) complexity." Then they walk away with a smug smirk on their face, as though they've banished your code to the dustbin of Failed Algorithms.



Smart Clients vs. Dumb Terminals

The terms "smart client" and "dumb terminal" have fallen somewhat by the wayside in recent decades. But they're still valid definitions, even in our modern computing environments.

Mainframe Computing

Way back in the Dark Ages, nearly all computing was done on massive computers (e.g., mainframes). And you interacted with those computers by using a "terminal". Those terminals were often called "dumb terminals" because the terminal itself had almost no computing power of its own. It only served as a way for you to send commands to the mainframe and then view whatever results were returned from... the mainframe. That's why it was called "dumb". Because the terminal itself couldn't really do much of anything on its own. It only served as a portal that gave you access to the mainframe.

For those who wrote mainframe code, they had to worry greatly about the efficiency of their algorithms. Because even the mainframe had comparatively-little computing power (by today's standards). More importantly, the mainframe's resources were shared by anyone with access to one of the dumb terminals. So if 100 people, sitting at 100 dumb terminals, all sent resource-intensive commands at the same time, it was pretty easy to crash the mainframe. (This is also why the allocation of terminals was very strict, and even those who had access to mainframe terminals often had to reserve time on them.)

PC Computing

With the PC explosion in the 80s, suddenly you had a lot of people with a lot of computing power (relatively speaking) sitting on their desktop. And most of the time, that computing power was underutilized. Thus spawned the age of "smart clients".

In a smart client model, every effort is made to allow the client to do its own computing. It only communicates back to the server when existing data must be retrieved from the source, or when new/updated data must be sent back to that source. This shifted a great deal of work off of the mainframe, down to the clients, and allowed for the creation of much more robust applications.

A Return To Mainframe Computing (Sorta...)

But when the web came around, it knocked many applications back into a server/terminal kinda relationship. That's because those apps appeared to be running in the browser, but the simple fact is that early browser technology was incapable of really doing much on its own. Early browsers were quite analogous to dumb terminals. They could see data that was sent from the server (in the form of HTML/CSS). But if they wanted to interact with that data in any meaningful way, they needed to constantly send their commands back to the server.

This also meant that early web developers needed to be hyper-vigilant about efficiency. Because even a seemingly-innocuous snippet of code could drag your server to its knees if your site suddenly went viral and that code was being run by hundreds (or thousands) of web surfers concurrently.

This could be somewhat alleviated by deploying more robust backend technologies. For example, you could deploy a web farm that shared the load of requests for a single site. Or you could write your code in a compiled language (like Java or C#), which helped (somewhat) because compiled code typically runs faster than interpreted code. But you were still bound by the limits that came from having all of your public users hitting a finite set of server/computing resources.



The Browser AS Smart Client

I'm not going to delve into the many arguments for-or-against Chrome. But one of its greatest contributions to web development is that it was one of the first browsers that was continually optimized specifically for JavaScript performance. When this optimization was combined with powerful new libraries and frameworks like jQuery (then Angular, then React, then...), it fostered the rise of the frontend developer.

This didn't just give us new capabilities for frontend functionality, it also meant that we could start thinking, again, in terms of the desktop (browser) being a smart client. In other words, we didn't necessarily have to stay up at night wondering if that one aberrant line of code was going to crash the server. At worst, it might crash someone's browser. (And don't get me wrong, writing code that crashes browsers is still a very bad thing to do. But it's farrrrr less likely to occur when the desktop/browser typically has all those unused CPU cycles just waiting to be harnessed.)

So when you're writing, say, The Next Great React App, how much, exactly, do you even need to care about performance?? After all, the bulk of your app will be running in someone's browser. And even if that browser is running on a mobile device, it probably has loads of unleveraged processing power available for you to use. So how much do you need to be concerned about the nitty-gritty details of your code's performance? IMHO, the answer is simple - yet nuanced.

Care... But Not That Much

Years ago, I was listening to a keynote address from the CEO of a public company. Public companies must always (understandably) have one eye trained on the stock market. During his talk, he posed the question: How much do I care about our company's stock price? And his answer was that he cared... but not that much. In other words, he was always aware of the stock price. And of course, he was cognizant of the things his company could do (or avoid doing) that would potentially influence their stock price. But he was adamant that he could not make every internal corporate decision based upon one simple factor - whether or not it would juice the stock price. He had to care about the stock price, because a tanking stock price can cause all sorts of problems for a public company. But if he allowed himself to focus, with tunnel vision, on that stock price, he could end up making decisions that bump the price by a few pennies - but end up hurting the company in the long run.

Frontend app development is very similar in my eyes. You should always be aware of your code's performance. You certainly don't want to write code that will cause your app to run noticeably badly. But you also don't want to spend half of every sprint trying to micro-optimize every minute detail of your code.

If this all sounds terribly abstract, I'll try to give you some guidance on when you need to care about application performance - and when you shouldn't allow it to bog down your development.



Developer Trials

The first thing you need to keep in mind is that your code will (hopefully) be reviewed by other devs. This happens when you submit new code, or even when someone comes by months later and looks at what you've written. And many devs LOVE to nitpick your code for performance.

You can't avoid these "trials". They happen all the time. The key is not to get sucked into theoretical debates about the benchmark performance of a for loop versus Array.prototype.forEach(). Instead, you should try, whenever possible, to steer the conversation back into the realm of reality.

Benchmarking Based Upon Reality

What do I mean by "reality"? Well, first of all, we now have many tools that allow us to benchmark our apps in the browser. So if someone can point out that I can shave a few seconds of load time off my app by making one-or-two minor changes, I'm all ears. But if their proposed optimization only "saves" me a few microseconds, I'm probably gonna ignore their suggestions.
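
To make that concrete, here's a minimal (and admittedly contrived) sketch of the kind of real-world measurement I'm talking about, using performance.now() to time a plain for loop against .forEach() over a large array. The array size and the work being timed are just placeholders - the point is that the output is wall-clock milliseconds, which is the only unit an end user can actually perceive.

// A rough, hypothetical micro-benchmark. The array size is arbitrary.
const data = Array.from({ length: 1_000_000 }, (_, i) => i);

let start = performance.now();
let sumWithFor = 0;
for (let i = 0; i < data.length; i++) sumWithFor += data[i];
console.log(`for loop: ${(performance.now() - start).toFixed(2)} ms`);

start = performance.now();
let sumWithForEach = 0;
data.forEach(value => { sumWithForEach += value; });
console.log(`.forEach(): ${(performance.now() - start).toFixed(2)} ms`);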

You should also be cognizant of the fact that a language's built-in functions will almost always outperform any custom code. So if someone claims that they have a bit of custom code that is more performant than, say, Array.prototype.find(), I'm immediately skeptical. But if they can show me how I can achieve the desired result without even using Array.prototype.find() at all, I'm happy to hear the suggestion. However, if they simply believe that their method of doing a .find() is more performant than using Array.prototype.find(), then I'm going to be incredibly skeptical.
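
For illustration, here's a hedged sketch of what I mean by "not using .find() at all" - the data is entirely hypothetical, but the idea is that a Map built in one pass replaces thousands of repeated array scans:

// Hypothetical data: orders that each reference a customer id.
const customers = Array.from({ length: 5_000 }, (_, i) => ({ id: i, name: `Customer ${i}` }));
const orders = Array.from({ length: 20_000 }, (_, i) => ({ id: i, customerId: i % 5_000 }));

// Calling .find() inside the loop re-scans the customers array on every order.
const withFind = orders.map(order =>
  customers.find(customer => customer.id === order.customerId)
);

// Building a Map once removes .find() from the picture entirely:
// one pass to index the customers, then O(1) lookups per order.
const customersById = new Map(customers.map(customer => [customer.id, customer]));
const withMap = orders.map(order => customersById.get(order.customerId));

console.log(withFind.length === withMap.length); // true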

Your Code's Runtime Environment

"Reality" is also driven by one simple question: Where does the code RUN??? If the code-in-question runs in, say, Node (meaning that it runs on the server), performance tweaks take on a heightened sense of urgency, because that code is shared and is being hit by everyone who uses the app. But if the code runs in the browser, you're not a crappy dev just because the tweak is not forefront in your mind.

Sometimes, the code we're examining isn't even running in an app at all. This happens whenever we decide to do purely academic exercises that are meant to gauge our overall awareness of performance metrics. Code like this may be running in a JSPerf panel, or in a demo app written on StackBlitz. In those scenarios, people are much more likely to be focused on the finer details of performance, simply because that's the whole point of the exercise. As you might imagine, these types of discussions tend to crop up most frequently during... job interviews. So it's dangerous to be downright flippant about performance when the audience really cares about almost nothing but the performance.

The "Weight" Of Data Types

"Reality" should also encompass a thorough understanding of what types of data that you're manipulating. For example, if you need to do a wholesale transformation on an array, it's perfectly acceptable to ask yourself: How BIG can this array reasonably become? Or... What TYPES of data can the array typically hold?

If you have an array that only holds integers, and we know that the array will never hold more than, say, a dozen values, then I really don't care much about the exact method(s) you've chosen to transform that data. You can use .reduce() nested inside a .find(), nested inside a .sort(), which is ultimately returned from a .map(). And you know what?? That code will run just fine, in any environment where you choose to run it. But if your array could hold any type of data (e.g., objects that contain nested arrays, that contain more objects, that contain functions), and if that data could conceivably be of nearly any size, then you need to think much more carefully about the deeply-nested logic you're using to transform it.
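
As a hedged illustration (the data and the "transformation" are entirely made up), this kind of freely-nested chain is perfectly fine on a dozen integers - and would deserve a second look if the array could hold thousands of arbitrarily-shaped objects:

// A dozen integers - nest away, it simply doesn't matter at this size.
const scores = [12, 7, 99, 42, 3, 58, 21, 88, 5, 63, 17, 34];

const closestNeighbors = scores.map(score => {
  // For each score, sort the other values by distance and grab the nearest one.
  const closest = [...scores]
    .sort((a, b) => Math.abs(a - score) - Math.abs(b - score))
    .find(other => other !== score);
  return { score, closest };
});

console.log(closestNeighbors);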



Big-O Notation

One particular sore point (for me) about performance is with Big-O Notation. If you earned a computer science degree, you probably had to become very familiar with Big-O. If you're self-taught (like me), you probably find it to be... onerous. Because it's abstract and it typically provides no value in your day-to-day coding tasks. But if you're trying to get through coding interviews with Big Tech companies, it'll probably come up at some point. So what do you do?

Well, if you're intent upon impressing those interviewers who are obsessed with Big-O Notation, then you may have little choice but to hunker down and force yourself to learn it. But there are some shortcuts you can take to simply make yourself familiar with the concepts.

First, understand the dead-simple basics:

  1. O(1) is constant time - the fastest complexity you can have. If you simply set a variable, and then at some later point, you access the value in that same variable, this is O(1). It basically means that you have immediate access to the value stored in memory.

  2. O(n) is a loop. n represents the number of items the loop has to run over. So if you're just creating a single loop, you are writing something of O(n) complexity. Also, if you have a loop nested inside another loop, and both loops are dependent upon the same variable, your algorithm will typically be O(n²) (n-squared).

  3. Most of the "built-in" sorting mechanisms we use are of O(n log(n)) complexity. There are many different ways to do sorts. But typically, when you're using a language's "native" sort functions, you're employing O(n log(n)) complexity.

You can go deeeeeep down a rabbit hole trying to master all of the "edge cases" in Big-O Notation. But if you understand these dead-simple concepts, you're already on your way to at least being able to hold your own in a Big-O conversation.
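
In code, those dead-simple cases look roughly like this (a hedged sketch - the data is made up and the exact values don't matter):

// Made-up data, purely for illustration.
const users = [
  { id: 1, name: 'Ada' },
  { id: 2, name: 'Grace' },
  { id: 3, name: 'Linus' },
];
const namesById = { 1: 'Ada', 2: 'Grace', 3: 'Linus' };

// O(1): direct key access - one "hop", no matter how big the object gets.
const name = namesById[2];

// O(n): a single loop - one pass over the array.
const upperNames = users.map(user => user.name.toUpperCase());

// O(n²): a loop nested inside a loop over the same data.
const allPairs = users.map(a => users.map(b => [a.name, b.name]));

// O(n log(n)): the language's native sort.
const sorted = [...users].sort((a, b) => a.name.localeCompare(b.name));

console.log(name, upperNames, allPairs.length, sorted[0].name);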

Second, you don't necessarily need to "know" Big-O Notation in order to understand the concepts. That's because Big-O is basically a shorthand way of explaining "how many hoops will my code need to jump through before it can finish its calculation."

For example:

const myBigHairyArray = [... thousandsUponThousandsOfValues];
const newArray = myBigHairyArray.map(item => {
  // transformation logic here
});

This kinda logic is rarely problematic. Because even if myBigHairyArray is incredibly large, you're only looping through the values once. And modern browsers can loop through an array - even a large array - very fast.

But you should immediately start thinking about your approach if you're tempted to write something like this:

const myBigHairyArray = [... thousandsUponThousandsOfValues];
const newArray = myBigHairyArray.map(outerItem => {
  return myBigHairyArray.map(innerItem => {
    // do inner transformation logic
    // comparing outerItem to innerItem
  });
});

This is a nested loop. And to be clear, sometimes nested loops are absolutely necessary, but your time complexity grows quadratically when you choose this approach. In the example above, if myBigHairyArray contains "only" 1,000 values, the logic will need to iterate through them one million times (1,000 x 1,000).

Generally speaking, even if you haven't the faintest clue about even the simplest aspects of Big-O Notation, you should always strive to avoid nesting anything. Sure, sometimes it can't be avoided. But you should always be thinking very carefully about whether there's any way to avoid it.
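
One common way to sidestep the nesting - assuming the inner loop only exists to answer "does this value exist in the other list?" - is to trade a little memory for a lookup structure. A hedged sketch with made-up data:

// Hypothetical example: which ids appear in both lists?
const listA = Array.from({ length: 1_000 }, (_, i) => i * 2);
const listB = Array.from({ length: 1_000 }, (_, i) => i * 3);

// Nested approach: roughly 1,000 x 1,000 = 1,000,000 comparisons.
const sharedNested = listA.filter(a => listB.some(b => b === a));

// Flattened approach: one pass to build a Set, one pass to filter - about 2,000 steps.
const idsInB = new Set(listB);
const sharedFlat = listA.filter(a => idsInB.has(a));

console.log(sharedNested.length === sharedFlat.length); // true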

Hidden Loops

You should also be aware of the "gotchas" that can arise when using native functions. Yes, native functions are generally a "good" thing. But when you use a native function, it can be easy to forget that many of those functions are doing their magic with loops under the covers.

For example: imagine in the examples above that you are then utilizing .reduce(). There's nothing inherently "wrong" with using .reduce(). But .reduce() is also a loop. So if your code only appears to use one top-level loop, but you have a .reduce() happening inside every iteration of that loop, you are, in fact, writing logic with a nested loop.
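
A hedged example of that trap (the data is invented): the first version looks like a single .map(), but the .reduce() inside it re-walks the whole array on every iteration; hoisting it out restores a genuinely single pass.

// Hypothetical line items, each getting the grand total attached to it.
const lineItems = Array.from({ length: 2_000 }, (_, i) => ({ id: i, amount: i % 50 }));

// Hidden nested loop: .reduce() runs over ALL items for EVERY item in the .map().
const slow = lineItems.map(item => ({
  ...item,
  grandTotal: lineItems.reduce((sum, other) => sum + other.amount, 0),
}));

// Hoisted: one .reduce(), then one .map().
const grandTotal = lineItems.reduce((sum, other) => sum + other.amount, 0);
const fast = lineItems.map(item => ({ ...item, grandTotal }));

console.log(slow[0].grandTotal === fast[0].grandTotal); // true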



Readability / Maintainability

The problem with performance discussions is that they often focus on micro-optimization at the expense of readability / maintainability. And I'm a firm believer that maintainability almost always trumps performance.

I was working for a large health insurance provider in town and I wrote a function that had to do some complex transformations of large data sets. When I finished the first pass of the code, it worked. But it was rather... obtuse. So before committing the code, I refactored it so that, during the interim steps, I was saving the data set into different temp variables. The purpose of this approach was to illustrate, to anyone reading the code, what had happened to the data at that point. In other words, I was writing self-documenting code. By assigning self-explanatory names to each of the temp variables, I was making it painfully clear to all future coders exactly what was happening after each step.
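
To be clear, the actual code belonged to that employer, so this is only a hypothetical, much-simplified sketch of the style I'm describing - every interim variable names the step that produced it:

// Invented data and names, purely for illustration.
const rawClaims = [
  { id: 1, status: 'OPEN', amount: 120.5, memberId: 'A1' },
  { id: 2, status: 'CLOSED', amount: 80.0, memberId: 'A1' },
  { id: 3, status: 'OPEN', amount: 230.0, memberId: 'B7' },
];

const openClaims = rawClaims.filter(claim => claim.status === 'OPEN');

const openClaimsByMember = openClaims.reduce((groups, claim) => {
  (groups[claim.memberId] = groups[claim.memberId] || []).push(claim);
  return groups;
}, {});

const totalOpenAmountByMember = Object.fromEntries(
  Object.entries(openClaimsByMember).map(([memberId, claims]) => [
    memberId,
    claims.reduce((sum, claim) => sum + claim.amount, 0),
  ])
);

console.log(totalOpenAmountByMember); // { A1: 120.5, B7: 230 }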

When I submitted the pull request, the dev manager (who, BTW, was a complete idiot) told me to yank out all the temp variables. His "logic" was that those temp variables each represented an unnecessary allocation of memory. And you know what?? He wasn't "wrong". But his approach was ignorant. Because the temp variables were going to make absolutely no discernible difference to the user, but they were going to make future maintenance on that code sooooo much easier. You may have already guessed that I didn't stick around that gig for too long.

If your micro-optimization actually makes the code more difficult for other coders to understand, it's almost always a poor choice.



What To Do?

I can confidently tell you that performance is something that you should be thinking about. Almost constantly. Even on frontend apps. But you also need to be realistic about the fact that your code is almost always running in an environment where there are tons of unused resources. You should also remember that the most "efficient" algorithm isn't always the "best" algorithm, especially if it looks like gobbledygook to all future coders.

Thinking about code performance is a valuable exercise - one that any serious programmer should probably keep, almost always, in the back of their mind. It's incredibly healthy to continually challenge yourself (and others) about the relative performance of code. In doing so, you can vastly improve your own skills. But performance alone should never be the end-all/be-all of your work. And this is especially true if you're a "frontend developer".

Top comments (50)

peerreynders • Edited

TL;DR: Often it's less about being performance conscious and more about being explicit about what tradeoffs are being made: for whose benefit and to whose detriment.

The article largely focuses on code produced by the frontend developer but the third party code selected for use on the client side (and thus affecting the client side architecture) imposes overhead even before a single line of code is written (The Cost of Javascript Frameworks, Benchmarking JavaScript Memory Usage).

So perhaps "caring about performance" should be practised by honestly understanding the impact our tools have on end user performance.

These days React is pretty much a bandwagon choice; reportedly popular DX, large ecosystem, ready supply of developers - but is the (performance) cost of adoption fully understood? If React Native isn't needed perhaps Preact is "good enough" (Etsy). And if it's mostly about JSX maybe Solid is an option?

Similarly Next.js is popular right now but are the end user performance tradeoffs well understood by those who develop with it? There is room for improvement which is why Remix exists. Astro right now supports multiple frameworks making it possible to gradually migrate towards more lightweight solutions once Astro becomes SSR capable (currently just in the SSG phase). Meanwhile Qwik aims to accomplish things that are impossible with the mainstream frameworks.

it was entirely unexpected coming from someone in the Ivory Tower that is Amazon.

Amazon is a large company with numerous teams.

In A/B tests, we tried delaying the page in increments of 100 milliseconds and found that even very small delays would result in substantial and costly drops in revenue.

Marissa Mayer at Web 2.0 (2006)

So given their business volume a 1% difference can establish a tolerance for a lot of effort, expense, and "a certain lack of maintainability" in the right place.

And even if that browser is running on a mobile device, it probably has loads of unleveraged processing power available for you to use.

That's largely a desktop web perspective that doesn't transfer well to the (mass) mobile web.

It seems everybody is adopting a stance that serves their particular needs best - example: "on a mobile device this can take seconds".

So the truth is likely somewhere in between and "good enough" is highly context sensitive.

But if the code runs in the browser, you're not a crappy dev just because the tweak is not forefront in your mind.

That comes across as "if it doesn't happen in my backyard, I don't care".


Frontend Devs should care about web performance; JavaScript micro-optimizations play only a minor role in that (unless we're dealing with the implementation of frameworks/libraries).

Adam Nathaniel Davis

I pretty much agree with everything you've written. But I may not have made it clear when I said you should "Care... but not too much" that what you should care about are discernible differences in performance. I totally agree that even a 100 millisecond "delay" may be enough to negatively affect conversions. What I'm railing against are those who are fretting over a nested loop, when the array being looped over can only ever hold, say, 10 values. In scenarios like those, fretting about "performance" is rather silly.

peerreynders

But I may not have made it clear when I said you should "Care... but not too much" that what you should care about are discernible differences in performance.

"Care... but not too much" would resonate strongly with the crowd that likes to invoke the "premature optimization" clause to shut down any discussion relating to any kind of performance - typically to justify or even promote "performance ignorance" because "that's the responsibility of the framework/libraries that we're using - so we don't have to care". So it's kind of "in vogue" to downplay performance.

My sense was that you were singling out "pointless JavaScript micro-optimizations" but there was never a counterpoint "what aspects of performance should a front end developer care about?"

when the array being looped over can only ever hold, say, 10 values.

Understood but there has to be the conscious decision "it's OK for 10 values, for 100_000_000 I'd have to do better", i.e. there should be knowledge of potential performance consequences should the code find itself on the hot path.

"… but the takeaway I want you to get is that more so than in other systems, you need to measure measure measure measure, and make sure your measurements are as near as possible to the real thing you're trying to build."

That said most code isn't on the hot path but it's easy for people to fixate on JavaScript micro-optimizations because those are relatively easy to spot in code - whether or not they are actually relevant. By extension the real performance issues are: knowing how to measure whether code is performant enough, knowing how to find the code that needs improvement, identifying early decisions that limit performance, and exploiting opportunities that aren't directly related to JavaScript.

The Three Unattractive Pillars of Web Dev: accessibility, security and performance;

  • "They’re only a problem when they’re missing."
  • "Try and retrofit any of them to your project and you’re going to have a bad time."

Even in React there is a fair amount of judgement involved when deciding to use features like React.memo, useMemo or to "just let things go".

A front end development performance mindset isn't about micro-optimizing every piece of JavaScript but caring about end user performance from the beginning of the first request up to the point when the browser page tab finishes closing.


Henry Petroski:

The most amazing achievement of the computer software industry is its continuing cancellation of the steady and staggering gains made by the computer hardware industry.

cubiclesocial

If you run Javascript anywhere, then you already don't care about system performance. Neither your own nor anyone else's.

You probably care more about whether or not the code runs the same in all major web browsers on all OSes. And if you use NodeJS, then you probably care that there is one language that you can use everywhere: You've got a hammer and everything looks like a nail.

If you want to measure performance, then you need to measure clock cycles. A clock cycle is the amount of time it takes to execute a common instruction on the CPU. Most modern CPUs are clocked at around 3-4GHz or roughly 3-4 billion instructions per second. Clock cycle information is not available to Javascript nor any current web browser tools. Measuring how much wall clock time an instruction takes to execute in a loop in Javascript is not actually all that helpful because many CPUs have pipelining and predictive branching thus allowing them to intelligently determine what the next instruction is likely to be and precalculate the result. If the next instruction is actually what was predicted, then it has already obtained the answer and can skip ahead (if not, the pipeline will probably stall). So doing something in a loop is measuring how long a loop is going to take. It might give you a rough idea of any given instruction but clock cycles are a more definitive and accurate measurement. Without line-level clock cycle counts, you'll have a very difficult time measuring performance in Javascript.

You should write some C or C++ code sometime. You'll suddenly see Javascript as the very sluggish, bloated, extremely abstracted away from the metal language that it actually is. Of course, C++ devs also tend to abstract away from the metal. Javascript and DOM are useful for abstracting and normalizing the GUI but it's not fast by any stretch of the imagination. Nor will it ever be.

peerreynders • Edited

If you run Javascript anywhere, then you already don't care about system performance. Neither your own nor anyone else's.

That attitude simply ignores the realities on the web. The browser already has a runtime for JavaScript so you don't have to ship one.

WebAssembly for Web Developers (Google I/O ’19):

Both JavaScript and WebAssembly have the same peak performance. They are equally fast. But it is much easier to stay on the fast path with WebAssembly than it is with JavaScript. Or the other way around. It is way too easy sometimes to unknowingly and unintentionally end up in a slow path in your JavaScript engine than it is in the WebAssembly engine.

Also Replacing a hot path in your app's JavaScript with WebAssembly.

Using the language du jour on the browser will typically require the download of a massive runtime unless something like C/C++/Rust is used and those tend to inflate development time. So using WebAssembly has to be seen as an optimization once things stabilize.

In this case performance is about using the available resources to the best effect - JavaScript on the browser is (for some time to come only) part of the whole picture.

Adam Nathaniel Davis

This is a great point. And I wouldn't disagree with you on any level. I will only point out what may not have been clear in my original post: When you're writing JavaScript for the browser, the preeminent measure of "performance" is time. Now of course, that can vary wildly on a machine-by-machine (or browser-by-browser) basis. But the generic end-user's perception of time is what typically dictates whether my code is seen as "performant".

Of course JavaScript is "sluggish". In fact, all interpreted languages are. Because they are, as you've pointed out, "farther from the metal". But when I'm writing web-based apps, in JavaScript, the "metric" by which my code is typically judged is: Does the end-user actually perceive any type of delay? If the page/app seems to load/function in a near-instant fashion, I'm not going to waste time arguing with someone over the CPU benchmark performance of one function versus another.

But again, I totally agree with your points here.

Jay Jeckel

Interesting article and a lot of good points, but I disagree greatly with one aspect:

"But you also need to be realistic about the fact that your code is almost always running in an environment where there are tons of unused resources."

My "unused resources" aren't an excuse for web devs to write less performant and efficient code. You should be no less concerned about using my client resources that cost me money than you are concerned about using your server resources that cost you money.

Trent Haynes

I find it interesting that you did not mention what is probably the biggest predictor of performance in the browser - the size of the download. In general, the less code you send to the browser, the better.

Adam Nathaniel Davis

With regard to initial page load time, yes. After the initial page load, the size of the download has almost nothing to do with performance.

Trent Haynes

That's mostly true when your audience has relatively recent hardware and a good connection to the internet. Something that about 4 billion people don't have.

 
Adam Nathaniel Davis • Edited

No. I'm sorry. But it doesn't matter whether you have gigabit fiber or a 56k dial-up modem. Once the code has been downloaded, the amount of code makes no difference to performance. I'm not saying - in any way - that you shouldn't care at all about bundle size. But if you're implying that more code leads to lower performance once the package has been downloaded, then that's simply not accurate.

Trent Haynes • Edited

I'm referring to the fact that a lot of people run on old hardware and/or out of date browsers and more code does affect performance for them.

 
Adam Nathaniel Davis

I guess you're referring to the performance of the code in memory. Because more code can take up more space in RAM. But even on a relatively-ancient system, the "performance" hit needed to process 10,000 lines of JS code versus 1,000 lines of JS code is extremely minimal. If you think that you can improve the runtime performance of your code, on anyone's system, merely by writing fewer lines of code, then your target audience probably can't effectively run ANY React / Angular / jQuery / whatever app.

Trent Haynes

It sounds like you've never encountered an app that will not run well on your old system, but runs fine on your new system.

 
Adam Nathaniel Davis

When an app runs poorly on your old system, but it runs fine on your new system, it's not based on the number of lines of code.

Trent Haynes

I didn't mention lines of code. There is a correlation between the size of the app and the complexity of its function and the demand it places on its running environment.

The size isn't the actual cause (usually). It's just indicative of the likelihood that the app will be more demanding of its execution environment.

 
Adam Nathaniel Davis

I'm sorry, but this is a bit disingenuous. You say that you didn't mention lines of code. But your initial comment was about the size of the download. What do you think makes the download large???

Trent Haynes • Edited

I'm sorry, but your original use of the phrase was disingenuous. It comes across as an attempt to belittle the point. Lines of code is purely a function of formatting (unless you can point me to an accepted standard of how to measure it).

I get that you don't think the number of bytes you send to the browser matters. You've made that perfectly clear. I understand that point of view. The company I work for takes the exact same stance. It's still the wrong stance. Size is not usually the actual cause, but it is certainly a reasonable proxy for judging potential performance requirements. And that is exactly what I pointed out.

The more code you send to the client, the more potential for execution errors, logic errors, or errors indirectly related to the code itself. More code usually means more complexity, which is another vector for more demand placed on the client system.

The code you didn't have to write will never cause a problem. I'm a firm believer in the best code is no code. If you've never heard the phrase before, you might look it up. The idea has been around for quite a while.

 
Comment deleted

Trent Haynes

Wow. Your sarcasm skills are epic. I hope you can teach me as well.

 
Adam Nathaniel Davis

I could. But you'd have to download the instructions. And I'm sure that your bandwidth/device couldn't handle the bundle size.

Trent Haynes

Now that you've given up refuting my point, you're going to stick to ad hominem attacks instead. I'll keep that in mind.

 
Adam Nathaniel Davis

It sounds really impressive to use Latin words like "ad hominem" - until you use them in a way that doesn't make any sense in the current context.

Trent Haynes

Definition of ad hominem (Entry 1 of 2)
1: appealing to feelings or prejudices rather than intellect

It's appropriate.

Ingo Steinke, web developer

Funniest aspect of "performance" is how many different meanings the word can have to different people. I used to be responsible for "web performance optimization" in a company and co-headed a meetup series about the same topic (formerly known as "meet for speed"), and I still care a lot about speeding up websites and avoiding unnecessary loading times, but I also care about a lot of other aspects like usability, accessibility and environmental energy optimization.

So far, so good, but to a business person, "performance optimization" might mean adding more videos to help them sell more products in their online shop, or it might mean making their developer teams more efficient to increase their programming performance.

Care... But Not That Much

That's probably the most important thing that we can learn from business people. Don't strive for perfection! Don't overengineer! Don't micro-optimize! Care for readability, maintenance and a pragmatic effort-to-outcome ratio!

Nitzan Hen

Great article!

I feel that generally, many developers approach web development with the mindset of developing algorithms, and it's critical to understand that coding in different environments and/or for different purposes means that your top priorities as a developer should also be different.
It's similar, in a sense, to different types of writing - when writing a technical document, for example, you put your focus on completely different qualities than when writing a novel or a poem, even though they're both essentially writing!

As you've said, and it can't be stressed enough - in the case of web development the big-O efficiency of your code is usually a secondary priority. It's important to keep an eye out for it, but unless we're talking about really bad code, it typically makes no noticeable difference. Code brevity, maintainability and other similar qualities have a far greater impact on your product.

However, there is a nuance I'd like to shed light on - big-O time (and memory) efficiency are the two most popular aspects of efficiency, but they're by no means the only ones. We web developers can afford to pay less attention to those, but other types of inefficiency can make a huge difference: concurrency & async operations, for example, are cardinal to virtually any modern app, and bad performance in that aspect could lead to terrible results. A similar point goes for network operations, bundle sizes, and more.
Once again, in most cases writing clear and maintainable code is a top priority, and can be achieved without sacrificing any of those, but it's important to keep in mind that inefficiency in those aspects of your logic could significantly harm the overall result.

And again - great article, well done!

Adam Nathaniel Davis

TOTALLY agree. One of my biggest pet peeves is when someone stresses over tiny details of algorithmic "performance", but when you open the inspector, you can see that their app is making three identical GET calls to the exact same endpoint to retrieve the exact same data.
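
(For what it's worth, a minimal sketch of fixing exactly that - identical in-flight GETs sharing one promise. The endpoint and helper name are hypothetical.)

// Identical in-flight GETs share a single request instead of firing three times.
// '/api/user/42' is a hypothetical endpoint.
const inFlight = new Map();

function fetchJsonOnce(url) {
  if (!inFlight.has(url)) {
    const request = fetch(url)
      .then(response => response.json())
      .finally(() => inFlight.delete(url));
    inFlight.set(url, request);
  }
  return inFlight.get(url);
}

// All three of these resolve from one network call.
fetchJsonOnce('/api/user/42');
fetchJsonOnce('/api/user/42');
fetchJsonOnce('/api/user/42');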

Nitzan Hen

Exactly 😂

ecyrbe • Edited

Hi Adam

Nice article again. I'll summarize for the Lazy:

  • Do not optimize early - or at all, if there's no issue
  • Focus on maintainability over optimisation

I'll add that, if you start having front-end timing issues, you should measure (or add tooling to measure easily - automate Lighthouse reporting, activate flamegraphs) and optimize only the problematic parts of your reports.

Nowadays, the biggest perf issues I face are not related to algorithms, but to a front-end monolith being so big that webpack can take something like 20 minutes to package all the bundles (working on a really big app). Vite is not an option, as we have so much legacy that Vite can't even compile the project.
So nowadays, I'm doing micro front-ends to slim the monster down. Module federation is a really nice piece of technology.
I wrote a small article about it yesterday if you are interested.

Adam Nathaniel Davis

Agreed. And module federation is indeed a wonderful feature.

Alex Lohr

Premature optimization is the root of all evil, they say. I think not caring about performance means we're more interested in pushing MVPs onto the customer than actually solving problems.

One of those problems is that a lack of performance will needlessly burn CPU cycles and waste energy, while also ensuring that whatever system it runs on needs to be replaced faster.

So keep in the back of your mind that you don't want to kill the planet with bad front end performance. Thanks for coding considerately.

Andrea Giammarchi • Edited

Imho, in every part of the stack you need to care about performance when performance is your bottleneck. Yet knowing better algorithms, or better libraries or solutions (assuming similar DX), to obtain the same result is a plus that removes the idea that "performance is not great" from the equation and reduces the long-term need for refactoring and/or maintenance.

In a few words: if it takes the same time to implement the same solution, but because you care about performance it's faster by design, you'll be a better developer in the long term than one that "didn't care about performance because FE, yolo!".

There are also a lot of people that mistake FE for business logic, PWA or SPA or MTA needs in terms of architecture, and so on and so forth ... saying FE shouldn't care about performance is as short-sighted as one could be in this industry, if by FE you consider how much responsibility JS has these days to make literally anything work on the Web.

Mike Talbot ⭐

A few thoughts on performance: one critical performance indicator these days is the amount of battery that functionality uses, and while it's often hard to determine this for a website, hybrid apps and heavily used web apps that burn through a user's battery have a directly negative impact on that user's day. Not that this is an argument for micro optimisation, but I suggest it should be a consideration around critical functionality.

Imagine a web app that has some type-ahead functionality: too frequent use of a device's radio to contact the server for suggestions will have a negative impact if this is a commonly used function. Poorly written search functionality in the browser could make the experience of the search functionality poor and burn battery. Over-eager caching of entire data sets to allow client-side searching could negatively impact both energy usage and startup performance. This simple example shows that we should give proper consideration to the user's objectives and the architecture of solutions where there is some chance that solution will be a core part of the user's journey.
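
A minimal sketch of that first point - debouncing the type-ahead so the radio only wakes up when the user pauses. The delay, endpoint and element id are hypothetical:

function debounce(fn, delayMs) {
  let timerId;
  return (...args) => {
    clearTimeout(timerId);
    timerId = setTimeout(() => fn(...args), delayMs);
  };
}

// One request per pause in typing, rather than one per keystroke.
// '/api/suggest' and '#search' are hypothetical placeholders.
const fetchSuggestions = debounce(query => {
  fetch(`/api/suggest?q=${encodeURIComponent(query)}`)
    .then(response => response.json())
    .then(suggestions => console.log(suggestions));
}, 300);

document.querySelector('#search')?.addEventListener('input', event => {
  fetchSuggestions(event.target.value);
});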

The data structures we use frequently dictate performance too, choosing when to trade memory for computation (e.g. building O(1) lookup tables) or utilising our own or 3rd party APIs to request data in the right shape to reduce data transfer, round trips or client side processing are also worth considering at the solution architecture stage too.

I am totally with you on the pragmatism side, I'd use find over a fancy lookup table for arrays expected to be small too, because there is another cost here, the cost to our business or employer in terms of the amount of time it takes to build and deliver solutions to our customers. This is another practical optimisation, because if we run out of money before the solution is released (perhaps due to one of those 3 month long Linting wars?) we have also failed at our task!

A great article, so good to be back reading your thoughts and the debate that they produce after the hiatus.

Some comments have been hidden by the post's author.