I don't see enough people talking about practical ways to improve at JavaScript. Here are some of the top methods I use to write better JS.
Use TypeScript
The number one thing you can do to improve your JS is to not write JS. For the uninitiated, TypeScript (TS) is a "compiled" superset of JS (anything that runs in JS runs in TS). TS adds a comprehensive, optional typing system on top of the vanilla JS experience. For a long time, TS support across the ecosystem was inconsistent enough that I felt uncomfortable recommending it. Thankfully, those days are long behind us and most frameworks support TS out of the box. Now that we're all on the same page about what TS is, let's talk about why you would want to use it.
TypeScript enforces "type safety".
Type safety describes a process where a compiler verifies that all types are being used in a "legal" way throughout a piece of code. In other words, if you create a function foo that takes a number:
function foo(someNum: number): number {
return someNum + 5;
}
That foo function should only ever be called with a number:
// good
console.log(foo(2)); // prints "7"
// no good
console.log(foo("two")); // invalid TS code
Aside from the overhead of adding types to your code, there are zero downsides to type-safety enforcement. The benefit, on the other hand, is too large to ignore. Type safety provides an extra level of protection against common errors and bugs, which is a blessing for a lawless language like JS.
TypeScript types make refactoring larger applications possible.
Refactoring a large JS application can be a true nightmare. Most of the pain of refactoring JS is due to the fact that it doesn't enforce function signatures. This means a JS function can never really be "misused". For example, if I have a function myAPI that is used by 1000 different services:
function myAPI(someNum, someString) {
if (someNum > 0) {
leakCredentials();
} else {
console.log(someString);
}
}
and I change the call signature a bit:
function myAPI(someString, someNum) {
if (someNum > 0) {
leakCredentials();
} else {
console.log(someString);
}
}
I have to be 100% certain that every place this function is used (thousands of places), I correctly update the usage. If I miss even one, my credentials could leak. Here's the same scenario with TS:
// before
function myAPITS(someNum: number, someString: string) { ... }
// after
function myAPITS(someString: string, someNum: number) { ... }
As you can see, the myAPITS function went through the same change as its JavaScript counterpart. But instead of resulting in valid JavaScript, this code results in invalid TypeScript, as the thousands of places it's used are now providing the wrong types. And because of the "type safety" we discussed earlier, those thousands of cases will block compilation, and your credentials don't get leaked (that's always nice).
TypeScript makes team architecture communication easier.
When TS is set up correctly, it will be difficult to write code without first defining your interfaces and classes. This also provides a way to share concise, communicative architecture proposals. Before TS, other solutions to this problem existed, but none solved it natively and without making you do extra work. For example, if I want to propose a new Request type for my backend, I can send the following to a teammate using TS.
interface BasicRequest {
body: Buffer;
headers: { [header: string]: string | string[] | undefined; };
secret: Shhh;
}
I already had to write the code, but now I can share my incremental progress and get feedback without investing more time. I don't know if TS is inherently less "buggy" than JS, but I do strongly believe that forcing developers to define interfaces and APIs first results in better code.
Overall, TS has evolved into a mature and more predictable alternative to vanilla JS. There is definitely still a need to be comfortable with vanilla JS, but most new projects I start these days are TS from the outset.
Use Modern Features
JavaScript is one of the most popular (if not the most popular) programming languages in the world. You might expect that a 20+ year old language used by hundreds of millions of people would be mostly "figured out" by now, but the opposite is actually true. In recent times, many changes and additions have been made to JS (yes, I know, technically ECMAScript), fundamentally morphing the developer experience. As someone who only started writing JS in the last two years, I had the advantage of coming in without bias or expectations. This resulted in much more pragmatic, non-religious choices about which features of the language to utilize and which to avoid.
async and await
For a long time, asynchronous, event-driven callbacks were an unavoidable part of JS development:
// traditional callback
makeHttpRequest('google.com', function (err, result) {
if (err) {
console.log('Oh boy, an error');
} else {
console.log(result);
}
});
I'm not going to spend time explaining why the above is problematic (but I have before). To solve the issue with callbacks, a new concept, "Promises", was added to JS. Promises allow you to write asynchronous logic while avoiding the nesting issues that previously plagued callback-based code.
// Promises
makeHttpRequest('google.com').then(function (result) {
console.log(result);
}).catch(function (err) {
console.log('Oh boy, an error');
});
The biggest advantage of Promises over callbacks is readability and chainability.
While Promises are great, they still left something to be desired. At the end of the day, writing Promises still didn't feel "native". To remedy this, the ECMAScript committee decided to add a new method of utilizing promises: async and await:
// async and await
try {
const result = await makeHttpRequest('google.com');
console.log(result);
} catch (err) {
console.log('Oh boy, an error');
}
The one caveat being, anything you await must have been declared async:
// required definition of makeHttpRequest in prev example
async function makeHttpRequest(url) {
// ...
}
It's also possible to await a Promise directly, since an async function is really just a fancy Promise wrapper. This also means the async/await code and the Promise code are functionally equivalent. So feel free to use async/await without feeling guilty.
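To make that equivalence concrete, here's a minimal sketch. makeHttpRequest here is a stand-in implementation (the article never defines its body), used only to show the two styles side by side:

```javascript
// An async function is really just a fancy Promise wrapper.
// This stand-in simulates the makeHttpRequest from the examples above.
async function makeHttpRequest(url) {
  return `response from ${url}`; // return values are wrapped in a Promise
}

// Promise style:
makeHttpRequest('google.com').then((result) => console.log(result));

// async/await style -- functionally equivalent:
(async () => {
  const result = await makeHttpRequest('google.com');
  console.log(result);
})();
```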
let and const
For most of JS's existence, there was only one variable scope qualifier: var. var has some pretty unique/interesting rules in regards to how it handles scope. The scoping behavior of var is inconsistent and confusing, and has resulted in unexpected behavior, and therefore bugs, throughout the lifetime of JS. But as of ES6, there are alternatives to var: const and let. There is practically zero need to use var anymore, so don't. Any logic that uses var can always be converted to equivalent const and let based code.
As for when to use const vs let, I always start by declaring everything const. const is far more restrictive and "immutablish", which usually results in better code. There aren't a ton of "real scenarios" where using let is necessary; I would say maybe 1 in 20 variables I declare ends up being let. The rest are all const.
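One of the rare cases where let genuinely earns its keep is reassignment, for example an accumulator in a loop — a small sketch:

```javascript
// let is required here because attempts is reassigned;
// const would throw a TypeError on the second assignment.
let attempts = 0;
while (attempts < 3) {
  attempts += 1;
}
console.log(attempts); // prints 3
```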
I said const is "immutablish" because it does not work in the same way as const in C/C++. What const means to the JavaScript runtime is that the reference to that const variable will never change. This does not mean the contents stored at that reference will never change. For primitive types (number, boolean, etc.), const does translate to immutability (because it's a single memory address). But for all objects (classes, arrays, dicts), const does not guarantee immutability.
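A quick sketch of what "immutablish" means in practice:

```javascript
// const blocks reassignment, not mutation of the referenced object.
const nums = [1, 2];
nums.push(3); // fine: the array's contents change, the reference doesn't
console.log(nums); // prints [ 1, 2, 3 ]

const config = { debug: false };
config.debug = true; // also fine: a property changes

// nums = []; // TypeError: Assignment to constant variable.
```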
Arrow => Functions
Arrow functions are a concise method of declaring anonymous functions in JS. Anonymous functions are functions that aren't explicitly named. Usually, anonymous functions are passed as a callback or event hook.
// vanilla anonymous function
someMethod(1, function () { // has no name
console.log('called');
});
For the most part, there isn't anything "wrong" with this style. But vanilla anonymous functions behave "interestingly" in regards to scope, which can (and has) resulted in many unexpected bugs. We don't have to worry about that anymore, thanks to arrow functions. Here is the same code, implemented with an arrow function:
// anonymous arrow function
someMethod(1, () => { // has no name
console.log('called');
});
Aside from being far more concise, arrow functions also have much more practical scoping behavior: arrow functions inherit this from the scope they were defined in.
In some cases, arrow functions can be even more concise:
const added = [0, 1, 2, 3, 4].map((item) => item + 1);
console.log(added) // prints "[1, 2, 3, 4, 5]"
Arrow functions that reside on a single line include an implicit return statement. There is no need for brackets or semicolons with single-line arrow functions.
I want to make it clear: this isn't a var situation; there are still valid use cases for vanilla anonymous functions (specifically class methods). That being said, I've found that if you always default to an arrow function, you end up doing a lot less debugging than when defaulting to vanilla anonymous functions.
As usual, the Mozilla docs are the best resource.
Spread Operator ...
Extracting key/value pairs from one object and adding them as children of another object is a very common scenario. Historically, there have been a few ways to accomplish this, but all of those methods are pretty clunky:
const obj1 = { dog: 'woof' };
const obj2 = { cat: 'meow' };
const merged = Object.assign({}, obj1, obj2);
console.log(merged) // prints { dog: 'woof', cat: 'meow' }
This pattern is incredibly common, so the above approach quickly becomes tedious. Thanks to the "spread operator" there's never a need to use it again:
const obj1 = { dog: 'woof' };
const obj2 = { cat: 'meow' };
console.log({ ...obj1, ...obj2 }); // prints { dog: 'woof', cat: 'meow' }
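One detail worth knowing, shown in a small sketch: when keys collide, later spreads win, which makes the pattern handy for "defaults plus overrides":

```javascript
// Later spreads overwrite earlier keys on conflict.
const defaults = { retries: 3, verbose: false };
const options = { ...defaults, verbose: true };
console.log(options); // prints { retries: 3, verbose: true }
```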
The great part is, this also works seamlessly with arrays:
const arr1 = [1, 2];
const arr2 = [3, 4];
console.log([ ...arr1, ...arr2 ]); // prints [1, 2, 3, 4]
It's probably not the most important recent JS feature, but it's one of my favorites.
Template Literals (Template Strings)
Strings are one of the most common programming constructs, which is why it's so embarrassing that natively declaring strings is still poorly supported in many languages. For a long time, JS was in the "crappy string" family. But the addition of template literals put JS in a category of its own. Template literals natively and conveniently solve the two biggest problems with writing strings: adding dynamic content and writing strings that bridge multiple lines:
const name = 'Ryland';
const helloString =
`Hello
${name}`;
I think the code speaks for itself. What an amazing implementation.
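Template literals also interpolate arbitrary expressions, not just variables — a small sketch:

```javascript
// Any expression is allowed inside ${}:
const price = 2;
const receipt = `total: $${price * 3}.00`;
console.log(receipt); // prints "total: $6.00"
```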
Object Destructuring
Object destructuring is a way to extract values from a data collection (object, array, etc.) without having to iterate over the data or access its keys explicitly:
// old way
function animalParty(dogSound, catSound) {}
const myDict = {
dog: 'woof',
cat: 'meow',
};
animalParty(myDict.dog, myDict.cat);
// destructuring
function animalParty(dogSound, catSound) {}
const myDict = {
dog: 'woof',
cat: 'meow',
};
const { dog, cat } = myDict;
animalParty(dog, cat);
But wait, there's more. You can also define destructuring in the signature of a function:
// destructuring 2
function animalParty({ dog, cat }) {}
const myDict = {
dog: 'woof',
cat: 'meow',
};
animalParty(myDict);
It also works with arrays:
// destructuring 3
const [a, b] = [10, 20];
console.log(a); // prints 10
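Destructuring has a couple of other tricks worth knowing — swapping values without a temp variable, and defaults for missing keys. A small sketch:

```javascript
// Swap two values without a temporary variable:
let a = 1;
let b = 2;
[a, b] = [b, a];
console.log(a, b); // prints 2 1

// Provide a default when a key is missing:
const { cow = 'moo' } = {};
console.log(cow); // prints "moo"
```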
There are a ton of other modern features you should be utilizing. Here are a handful of others that stand out to me:
Always Assume Your System is Distributed
When writing parallelized applications, your goal is to optimize the amount of work you're doing at one time. If you have 4 available cores and your code can only utilize a single core, 75% of your potential is being wasted. This means blocking, synchronous operations are the ultimate enemy of parallel computing. But considering that JS is a single-threaded language, things don't run on multiple cores. So what's the point?
JS is single-threaded, but not single-file (as in lines at school). Even though it isn't parallel, it's still concurrent. Sending an HTTP request may take seconds or even minutes; if JS stopped executing code until a response came back, the language would be unusable.
JavaScript solves this with an event loop. The event loop loops through registered events and executes them based on internal scheduling/prioritization logic. This is what enables sending thousands of "simultaneous" HTTP requests or reading multiple files from disk at the "same time". Here's the catch: JavaScript can only utilize this capability if you use the correct features. The simplest example is the for-loop:
let sum = 0;
const myArray = [1, 2, 3, 4, 5, ... 99, 100];
for (let i = 0; i < myArray.length; i += 1) {
sum += myArray[i];
}
A vanilla for-loop is one of the least parallel constructs that exists in programming. At my last job, I led a team that spent months attempting to convert traditional R-lang for-loops into automagically parallel code. It's basically an impossible problem, only solvable by waiting for deep learning to improve. The difficulty of parallelizing a for-loop comes from a few problematic patterns. Sequential for-loops are very rare, but they alone make it impossible to guarantee a for-loop's separability:
let runningTotal = 0;
for (let i = 0; i < myArray.length; i += 1) {
if (i === 50 && runningTotal > 50) {
runningTotal = 0;
}
runningTotal += Math.random() + runningTotal;
}
This code only produces the intended result if it is executed in order, iteration by iteration. If you tried to execute multiple iterations at once, the processor might incorrectly branch based on inaccurate values, which would invalidate the result. We would be having a different conversation if this were C code, as the usage is different and there are quite a few tricks the compiler can do with loops. In JavaScript, traditional for-loops should only be used if absolutely necessary. Otherwise, utilize the following constructs:
// map
// in decreasing relevancy :0
const urls = ['google.com', 'yahoo.com', 'aol.com', 'netscape.com'];
const resultingPromises = urls.map((url) => makeHttpRequest(url));
const results = await Promise.all(resultingPromises);
// map with index
// in decreasing relevancy :0
const urls = ['google.com', 'yahoo.com', 'aol.com', 'netscape.com'];
const resultingPromises = urls.map((url, index) => makeHttpRequest(url, index));
const results = await Promise.all(resultingPromises);
for-each
const urls = ['google.com', 'yahoo.com', 'aol.com', 'netscape.com'];
// note this is non blocking
urls.forEach(async (url) => {
try {
await makHttpRequest(url);
} catch (err) {
console.log(`${err} bad practice`);
}
});
I'll explain why these are an improvement over traditional for-loops. Instead of executing each "iteration" in order (sequentially), constructs such as map take all of the elements and submit them as individual events to the user-defined map function. This directly communicates to the runtime that the individual "iterations" have no connection or dependence on each other, allowing them to run concurrently. There are many cases where a for-loop would be just as performant (or maybe more so) than a map or forEach. I would still argue that losing a few cycles now is worth the advantage of using a well-defined API. That way, any future improvements to that data access pattern's implementation will benefit your code. The for-loop is too generic to have meaningful optimizations for that same pattern.
There are other valid async options outside of map and forEach, such as for-await-of.
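A minimal sketch of for-await-of; fetchTitle is a hypothetical stand-in for any real async call:

```javascript
// for-await-of consumes a collection of promises one result at a time, in order.
async function fetchTitle(url) {
  return `title of ${url}`; // hypothetical async work
}

(async () => {
  const pending = ['google.com', 'yahoo.com'].map((url) => fetchTitle(url));
  for await (const title of pending) {
    console.log(title); // results arrive in array order
  }
})();
```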
Lint Your Code and Enforce a Style
Code without a consistent style (look and feel) is incredibly difficult to read and understand. Therefore, a critical aspect of writing high-end code in any language is having a consistent and sensible style. Due to the breadth of the JS ecosystem, there are a LOT of options for linters and style specifics. What I can't stress enough is that it's far more important that you use a linter and enforce a style (any of them) than which linter/style you specifically choose. At the end of the day, no one is going to write code exactly how I would, so optimizing for that is an unrealistic goal.
I see a lot of people ask whether they should use ESLint or Prettier. To me, they serve very different purposes and therefore should be used in conjunction. ESLint is a traditional "linter": most of the time, it's going to identify issues with your code that have less to do with style and more to do with correctness. For example, I use ESLint with the Airbnb rules. With that configuration, the following code would force the linter to fail:
var fooVar = 3; // airbnb rules forbid "var"
It should be pretty obvious how ESLint adds value to your development cycle. In essence, it makes sure you follow the rules about what "is" and "isn't" good practice. Due to this, linters are inherently opinionated. As with all opinions, take them with a grain of salt; the linter can be wrong.
Prettier is a code formatter. It is less concerned with "correctness" and far more worried about uniformity and consistency. Prettier isn't going to complain about using var, but it will automatically align all the brackets in your code. In my personal development process, I always run Prettier as the last step before pushing code to Git. In many cases, it even makes sense to have Prettier run automatically on each commit to a repo. This ensures that all code coming into source control has a consistent style and structure.
Test Your Code
Writing tests is an indirect but incredibly effective method of improving the JS code you write. I recommend becoming comfortable with a wide array of testing tools. Your testing needs will vary, and there's no single tool that can handle everything. There are tons of well-established testing tools in the JS ecosystem, so choosing tools mostly comes down to personal taste. As always, think for yourself.
Test Driver - Ava
Test drivers are simply frameworks that give structure and utilities at a very high level. They are often used in conjunction with other, specific testing tools, which vary based on your testing needs.
Ava is the right balance of expressiveness and conciseness. Ava's parallel, isolated architecture is the source of most of my love for it. Tests that run faster save developers time and companies money. Ava boasts a ton of nice features, such as built-in assertions, while managing to stay very minimal.
Alternatives: Jest, Mocha, Jasmine
Spies and Stubs - Sinon
Spies give us "function analytics", such as how many times a function was called, what called it, and other insightful data.
Sinon is a library that does a lot of things, but only a few of them super well. Specifically, Sinon excels when it comes to spies and stubs. The feature set is rich, but the syntax is concise. This is especially important for stubs, considering they partially exist to save space.
Alternatives: testdouble
Mocks - Nock
HTTP mocking is the process of faking some part of the HTTP request process so the tester can inject custom logic to simulate server behavior.
HTTP mocking can be a real pain; nock makes it less painful. Nock directly overrides the request builtin of Node.js and intercepts outgoing HTTP requests, which in turn gives you complete control of the response.
Alternatives: I don't really know of any :(
Web Automation - Selenium
Selenium is one I have mixed emotions about recommending. As the most popular option for web automation, it has a massive community and set of online resources. Unfortunately, the learning curve is pretty steep, and it depends on a lot of external libraries for real use. That being said, it's the only real free option, so unless you're doing some enterprise-grade web automation, Selenium will do the job.
Two Other Random JS Things
- Very rarely should you use null, poor null
- Numbers in JavaScript just suck; always use a radix parameter with parseInt
Conclusion
Draw your own.
Top comments (82)
Great read! While I only agree with 90% of what you said, you've explained everything quite clearly!
My pet peeve though is asking JS developers to switch to TypeScript. Don't you think TS tries to force an OOP paradigm into JS, which is not necessarily OOP?
Wouldn't you agree that instead of forcing people out of JS's paradigm so they can write better code, it would be better to get them to actually understand JS's paradigm instead?
Just asking to spark conversation, I would love your opinion on it, I was never able to get on board TS or CoffeeScript back then either.
You don't have to write object oriented TypeScript. Also, JS is just as much OOP as TS is.
TS doesn't change the basic paradigm of JS, it just makes it type safe. Types !== Objects. The only real reason to use JS over TS is that it's slightly (I really do mean slightly) faster in terms of development speed. But that is definitely not worth the loss of confidence and consistency you get with TS.
Check out fp-ts, which brings functional semantics to TypeScript.
I always appreciate your comments Fernando, thanks for sparking a great conversation.
Ryland, do you consider writing TypeScript for the sake of "type safety" more advantageous than testing?
If I had to choose between the two I would choose testing every time. Nothing replaces good tests.
I see; I've never used TypeScript before. However, all the arguments presented by people recommending it have never convinced me. I think it's kinda useless to switch to TS only to get that compile-time error hinting.
My point is: why switch, if you can use the current JS ecosystem to write tests that ensure the outcome (the business logic) is valid, and check the types if you want to, rather than adding the semantic analysis provided by TS, which gives no extra magic, just a hint at the source of a type mismatch (the same as testing)? Really, writing better JS using JS itself, alongside testing, is more appropriate IMO.
JS ecosystem is complicated enough, TS is fragmenting the community.
Typescript and tests cover different failure scenarios. This article was very informative about the benefit of each: css-tricks.com/types-or-tests-why-...
Thanks, it sounds interesting. I'll give it a read
The whole "only being a compile-time safety net" is something I told myself before I forced myself to give it a proper go.
The power of being able to refactor a large app without breaking something is invaluable for me.
A good example is the ability to define the type of a return from an API. If this shape changes in the future with a simple modification of your type you can now ensure that not a single part of your code base references a node which no longer exists without erroring before building.
It has solved more errors on a refactor than I can count - On any large application if you want to have any confidence modifying code it's worth the overhead :)
I'm not arguing against your use case, I'm sure TS helped there, but that can also be done using pure JS. With the proper test cases and JSON schemas put in place (I'm making the assumption you're talking about a JSON-based API), the same thing can be achieved.
Again, not arguing against your use case, I'm just trying to find one that is clearly easier to implement in TS than in vanilla JS.
My 2c:
TypeScript makes your IDE smarter, though you have to do the type definition work. Ever worked with objects with many properties (especially nested properties) and forgotten something? Or have to keep on referring back to some file where it was first defined? Or accidentally slapped on extra properties that should have been somewhere else? TypeScript won't fix the world, but it does help you to avoid these errors and I find that the intellisense improvements speed me up greatly. Functions don't just accept or return arbitrary objects: I can easily refer to structure definitions in code I haven't seen before and get good autocompletion.
Yes, you could also achieve some of this with jsdoc, but if you're writing jsdoc, you might as well define types. It will actually be quicker 🙂
TypeScript is also, imo, the easiest way to get modern syntax like async/await and import syntax. I've found it easier to set up than Babel.
All JavaScript is TypeScript, so you don't have to go "full-on" - adopt what works as you decide to. You can introduce it to an existing JavaScript codebase without having to change your existing code - it can deal with .js files and if you decide to convert some existing code, change to .ts and deal with any warnings/errors. You can also relax the compiler, but I've found most usefulness with stricter settings (like not allowing 'any') and projects where things get the murkiest have been where people avoid the typing system. My advice is to rather try to find / define the correct types than telling the compiler not to bother.
Well, I guess I've been neglecting my TS. I'll have to give it a try and see if types and me agree with each other :) Thanks for the nice reply and explanation!
It has nothing to do with OOP. Type systems exist in functional languages too, like Haskell, Elm, or OCaml. I can only agree that type definitions look similar to those in OOP languages. But it doesn't mean you need to do OOP; even such a functional lib as Ramda has type definitions.
Agreed. If I were to say that classes (singletons) could be used in functional programming, I would be branded a heretic, but a class is just a data structure; it's just so tied to the OOP identity, that is all. FP in my eyes is not about functions at all; it's about expressions over statements and immutability more than anything. Optional typing is a language feature too and has nothing to do with FP or OOP. I wish FP and OOP would just get a room and make a little FPOOP 🤯
No you wouldn’t, OOP and FP are orthogonal and a language can exist as both. F# is a good example of this, it has classes and even inheritance but I don’t think anyone would argue that it isn’t a functional language. FP isn’t even about immutability, OCaml, for example, has mutability (and also OO-like objects).
The term you’re looking for is object-functional programming; although it is slightly fringe.
😆 the more you know, thanks Andy I have learned something today.
It's an incredibly common misconception; I thought the two were incompatible for years too.
It does get discussed often as if the two things were oil and water. F# is one of the mid-sized languages I am still to research, but you have given me a reason.
I think this is mostly a consequence of two things:
1) Almost universally, you learn only OOP in school / bootcamps / self-learning. You have to actively search out education on FP, which means that when you inevitably do come across it after years of writing OOP it can seem so alien as to be completely incompatible with what you know.
2) When most people think of FP they think of Haskell. Haskell being the poster child of FP has led to a somewhat inaccurate representation of the field. Many people assume all FP is pure (immutability, no side effects, referential transparency), when the reality is many functional languages describe themselves as pragmatic, allowing for controlled mutability, side effects, etc.
To bring this discussion back round to the original conversation. Modern javascript is getting very functional: we've always had first-class functions, but now with methods like map, reduce, and arrow functions we can write our code in a very functional style.
Typescript, on the other hand, undoubtedly favours an OO style. You can still write functional code, but it's a bit more hassle, and you'll often find yourself reaching for features found in other typed functional languages that just don't exist in ts / js.
In Elm, for example, there is a Result type that models a computation that can fail. It looks like you can achieve this in typescript all the same. But if you know ts, you know that this is invalid. In the Elm version, Ok and Err are constructors for the type Result. In typescript, Ok and Err are types themselves, and so we need to go ahead and actually define them. I could continue, but this reply is getting too long and I'm sure you (and others) get my point. Even fully typing a curried function becomes a jumbled mess.
My opinion stems from a video about functional C++ of all things. Anyway, take this reply and make a post; this is interesting!
I agree. Typescript is unnecessary, and it speaks more to the inexperience of the developer using JavaScript than to enforcing good practice. I am a huge proponent of TDD (Test-Driven Development), which innately forces the developer to gain a more in-depth understanding of JavaScript and functional programming.
I have been programming for decades, and most of that time has been 100% in JavaScript. I prefer TypeScript and have used it exclusively on the backend and frontend for 3 years now.
So if I am an experienced developer and tech lead... maybe... just maybe there is a reason why I’ve chosen TypeScript. Use the best tool for the job.
Additionally, I only write in a functional style. TypeScript has never impaired my ability to write FP. Map reduce for life. Btw, ImmutableJS and Ramda have great type definitions. And as someone above said: Haskell has types. So... what’s your response to that?
The biggest problem of TypeScript is that it is not sound. It forces you to always write complex type calculations, which for FP is very essential. When I used ReasonML, I was focused on writing the code. Flow also fits much better for the same reason. Try to add types to a reduce function or to transducers. TypeScript is not strict even in its strictest mode: type casting to unknown and then to whatever, or adding exclamation marks to the code. React works much better with Flow than with TypeScript. After switching to TypeScript, I feel more like a types developer.
For me, you can't say it's a better way to write JavaScript code and then tell people to use TypeScript. You're not talking about JavaScript anymore.
As Fernando Doglio says, it would be better to get them to actually understand JS's paradigm instead.
Recent TS release notes have acknowledged the OOP paradigm and are looking at changing the docs to fit FP, which TypeScript is certainly capable of. It's just poor marketing.
Great list, Ryland 👍
I would like to add json-server; it's a freaking awesome tool that lets the frontend developer work on his own.
The big plus is that it auto-magically lets the developer write cleaner code (knowing that he will switch the backend API service later).
Hey, that looks like a very cool tool. It's like nock but for everything that isn't testing.
That's always a great plus. Will have to try it out later. Thanks for recommendation. Glad you enjoyed the post!
Just checked out JSON server it looks really cool, Thanks
While a lot of your advice is nice, about map and friends: no JS engine does this. JS doesn't magically run in parallel; it's a single-threaded language.
edit—I was a bit mean before. I blame lack of sleep.
From the article:
I think you might have missed a couple paragraphs. In case this doesn't make sense, read my article about async, concurrency and parallelism.
Your article writes that map is a construct that JS provides us that runs tasks in parallel. But map doesn't care if you're passing it an async function or not: it runs a function on everything you pass it, in order. Notably, even this is possible, because async functions don't yield unless you actually call await.
:At some level you're right that
map
isn't an inherently parallel construct. But I still stand by what I said,map
has the potential of being parallelized on a level that a traditionalfor-loop
does not. Afor-loop
explicitly surfaces a mechanism to enforce ordering, amap
(andforEach
) do not.In your example, the code is not guaranteed to have a consistent result. The only way it could be consistent is if V8 guaranteed in-order execution of asynchronous tasks, which it does not.
Another differentiator in my mind is state. Anyone who has worked with distributed systems knows that shared state is incredibly expensive. A traditional for-loop inherently provides shared state: the iterator/bounds-check variable `i`. This inherently orders the loop. While `map` may be implemented as ordered, that's an implementation detail; the original MapReduce wasn't ordered.
I would say the moment you slap `await` in there, the code is no longer asynchronous. It's as blocking as any other line.
That's not true. If I await a web request in some random function, it will still be asynchronous as long as the random function is invoked asynchronously.
Try doing two awaits in a row and check if they run concurrently. This is the definition of synchronous.
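To make the disagreement concrete, here is a minimal sketch (the `delay` and `work` helpers are invented stand-ins for real async tasks such as web requests): a for-loop with `await` runs the tasks strictly one after another, while `map` starts them all before any result is awaited.

```typescript
// A stand-in async task: waits, then returns a result.
const delay = (ms: number) => new Promise<void>(res => setTimeout(res, ms));

async function work(id: number): Promise<number> {
  await delay(10);
  return id * 2;
}

// Sequential: each await blocks the next iteration (total ≈ n * 10ms).
async function sequential(ids: number[]): Promise<number[]> {
  const out: number[] = [];
  for (const id of ids) {
    out.push(await work(id)); // one task at a time, in order
  }
  return out;
}

// Concurrent: map starts every task before any is awaited (total ≈ 10ms).
// Note Promise.all preserves the input order of results regardless of
// which task finishes first.
async function concurrent(ids: number[]): Promise<number[]> {
  return Promise.all(ids.map(work));
}
```

So both commenters have a point: `map` itself is a plain ordered loop, but combined with async functions and `Promise.all` it is the natural shape for running tasks concurrently, which a for-loop with inline `await` is not.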
Au contraire, the downsides of type safety are too many compared to any benefits TS may have:
The Buddha's way is to face your maladies directly instead of creating abstractions around them. I'd rather write my code in ES6 than write TS and then convert to ES6!
Incredibly subjective.
I'm actually not sure how a typing system could make code "less expressive". Can you provide an example?
The language is open source, the spec of the language is open web. This statement entirely misrepresents TypeScript. Would you tell people not to use Java because it was created by Sun (now Oracle)? What about C#? What about JavaScript, a current trademark of Oracle?
Au contraire, you should be writing V8 bytecode. Or maybe even just skip all abstractions and send 1s and 0s via electrical current.
:)
I disagree on this point.
The Buddha would want you to “communicate mindfully” and to speak clearly. Types clarify reality, which is what Buddhism teaches us to respect. See “Communicating your needs” / TypeScript’s value from a Buddhist perspective (part 1), by Cubicle Buddha.
Great article and a nice overview that can inspire a lot of people. I'm also a big fan of using TS, it's still JS just a little safer.
I do want to correct you on the topic of web automation: Selenium is not the only free option.
I think Cypress is a very good free alternative. They do have paid (hosted) options, but those aren't mandatory to use.
Besides that, it has a low learning curve and excellent documentation.
Not to mention the very good tooling to write, debug and run tests fast.
I do want to note that because of how Cypress and Selenium approach a web application, one or the other might not be suited for every situation.
But, Cypress is certainly worth mentioning ;)
It's funny, because my engineering team at work is trying to pitch me Cypress right now too. For a long time I didn't like Cypress because it only worked with Chrome. I've heard that they've changed this, which would definitely shift my perspective on it.
Cypress also is nice because of the way it integrates with CircleCI. Thanks for the insightful addition, I probably should have mentioned Cypress.
Try TestCafe then. AFAIK Cypress is not free. TestCafe is free and works in multiple browsers, even remote, mobile, headless or not.
One of the best things I've done for writing better JS was to really understand the native array methods like `map`, `reduce`, `filter`, etc. So much of what we do as developers is the manipulation and processing of data, and if you can learn how to do that in a declarative rather than imperative way, your life is going to be so much better.
Couldn’t agree more. One path scales; the other doesn’t.
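A small sketch of the imperative-versus-declarative contrast being discussed (the `orders` data is made up):

```typescript
const orders = [
  { product: "book", price: 12, shipped: true },
  { product: "pen", price: 2, shipped: false },
  { product: "lamp", price: 30, shipped: true },
];

// Imperative: one loop mixes iteration, filtering, and accumulation.
let shippedTotalImperative = 0;
for (let i = 0; i < orders.length; i++) {
  if (orders[i].shipped) {
    shippedTotalImperative += orders[i].price;
  }
}

// Declarative: each step names its intent.
const shippedTotal = orders
  .filter(o => o.shipped)          // keep shipped orders
  .map(o => o.price)               // extract prices
  .reduce((sum, p) => sum + p, 0); // sum them

console.log(shippedTotal); // 42
```

Both compute the same number; the declarative chain just makes each transformation a separate, nameable step.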
Solid tools and advice. Out of curiosity, what is your opposition/alternative to using `null`? Sometimes it is a bit unavoidable depending on the backend/backend team you are working with, and it can also help to show intent that something is purposefully without a value.
I'm curious as to when you would want to explicitly pass/accept an argument that has no value within JS... Most programmatic behavior happens within arrays (hence the large drive for people to grasp `map`, `filter`, and `reduce`) and on objects/hashes/maps/whatever your language calls them. In the case of the array, a `null` or `undefined` is most likely something you're only going to care about skipping over so your program doesn't crash. And in the case of the object, why look to operate on a parameter that you don't expect to be set?
Especially when it comes to `forEach`, `map`, and `reduce`, the better option than `null` is usually a default, blank-ish value: `0` for addition/subtraction, `1` for multiplication/division, `""` when you expect to be working with strings, `[]` when you expect to be processing a list, and `{}` when you're expecting an object. As an added bonus, while they aren't technically able to prevent type bugs, defaults in function signatures can hint to other developers what the types of the arguments should be.
This is a very, very good reply.
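A sketch of the defaults-over-`null` idea described above (the function names are hypothetical):

```typescript
// Blank-ish defaults in the signature both remove the need for null
// checks and hint at the expected argument types.

function totalPrice(prices: number[] = [], taxRate: number = 0): number {
  return prices.reduce((sum, p) => sum + p * (1 + taxRate), 0);
}

function fullName(first: string = "", last: string = ""): string {
  return `${first} ${last}`.trim();
}

console.log(totalPrice());               // 0 — nothing to guard against
console.log(totalPrice([10, 20], 0.5));  // 45
console.log(fullName("Ada"));            // "Ada"
```

Calling either function with missing arguments degrades gracefully instead of forcing every caller (and every callee) to reason about `null`.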
Is especially accurate. It's not that I think `null` can't be used well; I just don't understand why it fits in an incredibly high-level language like JS. Great comment.
I'm with you on most of this, but I've found code that borders on illegible because of overuse of the spread operator. Especially when dealing with React state, doing an `Object.assign` is sometimes a lot easier to read than many lines of `...nextObj,`.
As always, tend toward readability. The computer does not care what your code looks like, but other devs will.
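A minimal sketch of the two styles side by side (the `state` shape is invented); both produce the same nested update:

```typescript
const state = {
  user: { name: "Ada", theme: "dark" },
  items: [1, 2],
};

// Spread version: every nesting level repeats the "...previous" dance.
const nextWithSpread = {
  ...state,
  user: { ...state.user, theme: "light" },
};

// Object.assign version: same result; some find it easier to scan.
const nextWithAssign = Object.assign({}, state, {
  user: Object.assign({}, state.user, { theme: "light" }),
});

console.log(nextWithSpread.user.theme); // "light"
console.log(nextWithAssign.user.theme); // "light"
```

Neither mutates the original `state`, so both are safe for React-style immutable updates; which reads better is largely a team-convention call.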
Don't even get me started on the spread operator's visual design. I think the fact that they didn't come up with a specific syntax and reused the existing rest style is atrocious. They actually do opposite things, yet are somehow spelled with the same token. This is just off the top of my head. But why not:
FPOOP is a new design pattern I just invented, and it looks like TypeScript can already help me write some FPOOP.
FPOOP consists of two paradigms, FP and OOP; the main rule is to choose the appropriate paradigm for the job. FPOOP.
Also check out the fp-ts library.
I’ve been practicing “functional core, imperative shell” for a while (and in TypeScript, might I add!).
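For readers unfamiliar with the pattern, here is a tiny sketch of “functional core, imperative shell” (the functions and the discount code are made up):

```typescript
// Functional core: pure and deterministic, so it is trivial to test.
function applyDiscount(total: number, code: string): number {
  return code === "SAVE10" ? total - 10 : total;
}

// Imperative shell: gathers input, calls the core, performs the I/O.
function checkout(total: number, code: string): void {
  const finalTotal = applyDiscount(total, code); // all logic lives here
  console.log(`Charge: ${finalTotal}`);          // side effect at the edge
}

checkout(100, "SAVE10"); // prints "Charge: 90"
```

The payoff is that the business logic (`applyDiscount`) can be unit-tested without mocking any I/O, while the side effects stay confined to a thin outer layer.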
"The number one thing you can do to improve your JS, is by not writing JS"
Kind of a joy-killing way to start a JS best-practices article... :/
I like TS, but it definitely does not belong in all JS contexts.