[NOTE: The library that I reference throughout this post - allow - is now available as an NPM package. You can find it here: https://www.npmjs.com/package/@toolz/allow]
I hear you on TypeScript. I have a love/hate relationship with it. I will say that Deno has made it a lot more friendly, just because I don't have to think about setting it up. It does not solve the compiler gripe, or the lack of runtime type checks.
What is kind of interesting about your approach is that this is very similar to what we might do in a Node server. If you build an API, you have to treat all incoming data as hostile. It could have the wrong information, it could leave out fields, or it could be malicious.
It is both good DX and good security practice to do this. The best APIs can tell the user what was wrong. The best security prevents that API from being used as an exploit to pass bad data to some other system.
That being said, I am nervous about shipping runtime type checks to the client for code that does not accept outside input. Mostly because it is more JavaScript. Larger bundle. Longer time to parse.
I see this being great in an "app" experience like Figma. But less ideal for marketing pages and experiences that have to be blazing fast on mobile devices with targeted load times.
Agreed. In theory, you must do this for any "outside" information. In practice, I believe it makes much tighter code to do this even for "inside" information. IMHO, this is a lot of what TS is trying to do. It's trying to get you to validate that information - even when it's all "your" information.
I hear what you're saying, but I can't claim to agree with the concept. My allow.js file is 6.47 KB in its raw, unminified state. To put this in perspective, my entire bundle size for my Spotify Toolz app, which contains a lot more code than simply allow.js, is 11 KB. Once it's loaded, it doesn't get loaded again every time it's called. It's a one-time "hit" to bundle size - and then the only performance consideration is the overhead needed to run those functions. And those functions are... tiny. It should take a handful of milliseconds to run each check.
Not that I expect many (or even, any) people to follow my example. But, IMHO, the place where this is most useful and meaningful is in frontend applications that rely heavily on outside data. Because, ultimately, you can never fully trust outside data.
That being said, I don't want the cognitive overhead of trying to think of when to use it. I just use it - everywhere.
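Just to make that concrete, here's a rough sketch of what "using it everywhere" looks like - one validation line at the top of each function. (The check names, signatures, and import shape here are illustrative rather than the exact allow API.)

```javascript
import { allow } from '@toolz/allow'; // import shape assumed

// Every function validates its own inputs on entry - even "inside" data.
const getDiscountedPrice = (price, discountPercent) => {
  // Illustrative chained checks: a non-negative number, and an integer from 0-100.
  allow.aNumber(price, 0).anInteger(discountPercent, 0, 100);
  return price * (1 - discountPercent / 100);
};

getDiscountedPrice(19.99, 25);   // 14.9925
getDiscountedPrice('19.99', 25); // fails immediately, at the top of the function
```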
This also makes my unit testing farrrrrr easier. I don't write 50 different unit tests for each function, trying to account for all the jacked up ways that it could be called.
Thank you so much for the thoughtful feedback!
Sorry, replying again. The cognitive piece I totally get. It's also valuable if a team aligns on this, as it makes defensive coding a first-class citizen because you've operationalized it, instead of it being a testing strategy or a fix when you get a bug.
Agreed. When I'm working outside my own personal code - on larger teams - I have used these kinds of approaches. But I also try to be judicious about it. Because if this little library isn't used anywhere else in the app, and no one else on the team likes it or wants to use it, and my contribution is not in some kinda standalone/modularized portion of the app, then it can be borderline-irresponsible to just start chucking it into the middle of a much larger preexisting codebase.
Yeah, you are spot on with the bundle size. But this will vary based on the use case. The larger the app, the more calls for validation, the more setup, and most importantly the more JS evaluation time. That is actually my bigger concern.
We have a fairly small bundle, but we found that it was taking almost 2 seconds on budget phones to evaluate.
Sorry I don’t remember the number, this was a year ago. But it actually caused us to replatform our whole app. A sizable amount of our users live with bad devices and poor bandwidth. So for us it made sense.
Yeah - good points. I don't mean to imply that bundle size or runtime aren't valid concerns. It's just that in the vast majority of "modern" apps, assumed to be running on "modern" devices, the overhead to bundle this one (small) code file, and then to run it on entry into each function, is wafer thin.
But obviously, in some scenarios, with some teams, and on some apps, those are absolutely valid concerns. I just laugh sometimes because I've seen too many cases where a JS dev is fretting over whether he should add another 5 KB to a bundle - on an app running in a heavily-marketed site, in which all the other corporate influences have already chucked in many megabytes' worth of graphics, trackers, video, iframes, etc.
Kinda late to the party here, but I can see the utility of this approach. Although it is still inherently "defensive programming", at least it does so in an elegantly friendly and readable fashion.
Since this library provides runtime checks for user-facing code, I'd say this is a better solution than just assumptions based on type annotations.
However, I may still opt to use TypeScript internally. As I mentioned in my comment from your previous post, TypeScript lives in a "perfect" world. If I can be sure that your library sanitizes all incoming input, then I would at least find solace in the fact that the internal TypeScript application layer indeed operates in that "perfect" world.
Otherwise, if I didn't use TypeScript internally, then I'd have to write validators everywhere, which is not exactly... elegant, per se. The inherent verbosity is an immediate deal breaker in internal layers (where I could assume a "perfect" world in order to keep my sanity).
So in summary, I believe your approach is ideal for setting up the "perfect" world. Validators are necessary for front-facing applications such as clients, user interfaces, and APIs.
But once the "perfect" world is set up (by the aforementioned front end), I believe TypeScript is enough for writing secure applications without the runtime overhead that comes with repeatedly and defensively validating every single "interface" throughout the codebase.
TL;DR: Validators are ideal for setting up the "perfect" world. But once everything is set up, TypeScript can finally take over and enforce the contracts between internal interfaces via type annotations.
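A minimal sketch of that split, in plain JavaScript (the names here are made up for illustration): validate once at the front-facing boundary, and everything behind it gets to assume the "perfect" world - which is exactly where type annotations alone would be enough.

```javascript
// Boundary layer: the only place that touches untrusted data.
const parseUserProfile = (rawJson) => {
  const data = JSON.parse(rawJson);
  if (typeof data.name !== 'string' || data.name === '') throw new Error('invalid name');
  if (!Number.isInteger(data.age) || data.age < 0) throw new Error('invalid age');
  return { name: data.name, age: data.age }; // now a known-good shape
};

// Internal layer: trusts its input. In this setup it would be TypeScript,
// with the contract enforced by annotations instead of re-validation.
const greet = (profile) => `Hello, ${profile.name} (${profile.age})`;

greet(parseUserProfile('{"name":"Ada","age":36}'));
```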
Totally agree. Although I don't personally care for TS, I'm not trying to claim that my little validation library is truly a replacement for it. As you point out, I definitely feel there's a "time and place" for TS. I just think that, in many places where people are using it, it's not the best tool for the job.
As for verbosity, that's largely a subjective judgment that everyone makes for themselves. My approach does add one additional LoC to every single function declaration. In my experience, that's still far less than the extra code I end up writing to appease TS. Of course, your mileage may vary.
I appreciate the feedback!
The pleasure is mine. Your recent articles really provoked me to think about the true value of TypeScript in my projects.
Before, I would slap TypeScript in everywhere and call it a day. Now, I am very aware of the fact that TypeScript alone is terribly unsafe—and sometimes foolish—in user-facing environments. And for that, I have much to thank you for.
Now that I research this, there seems to be a Babel plugin (tcomb) that can do $Refinement. You can use it side-by-side with Flow, and only use tcomb when you want to emit runtime type checking (e.g. at function entry points).
In order to enable this feature, add the tcomb definition file to the [libs] section of your .flowconfig.
gcanti/babel-plugin-tcomb - Babel plugin for static and runtime type checking using Flow and tcomb
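Roughly what that looks like, as a minimal sketch (my own illustration, not taken from the plugin's docs): a Flow-annotated function that babel-plugin-tcomb also enforces at runtime.

```javascript
// @flow
// With babel-plugin-tcomb enabled, these Flow annotations are asserted at
// runtime too: the compiled output checks the argument types on entry.
function sum(a: number, b: number): number {
  return a + b;
}

sum(1, 2);      // 3
// sum('1', 2); // Flow flags this statically; with the plugin it also throws a TypeError at runtime
```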
It would probably throw an error if I tried to use tcomb alongside TypeScript.
Nice work Adam.
It's eerily similar to my runtime check library. :)
Well, maybe not so eerie, because these types of checks just make sense for validating input.
What's different is that I take a functional approach; so no classes, no methods, no optional params, no this-keyword, no method chaining.
I also have a separate set of funcs in another module specifically for throwing. My typechecker module itself only contains checker funcs that only return bools.
The chaining is a cool idea, but I couldn't use it with oneliner funcs smoothly.
Here's an example usage using the isNotNil checker func...
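(The original snippet didn't survive here, so this is a reconstruction of the idea - isNotNil comes from the comment above, the surrounding code is illustrative.)

```javascript
// isNotNil: a pure checker func that only returns a bool.
const isNotNil = (value) => value !== null && value !== undefined;

// One-liner funcs can guard themselves without classes or chaining.
const getLabel = (item) => isNotNil(item) ? String(item.label) : '';

getLabel({ label: 'Widgets' }); // 'Widgets'
getLabel(null);                 // ''
```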
Here's another example usage using a "throwIf" func from my throwIf module...
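(Again a reconstruction - the separate throwing module is described above, but the exact throwIf signature is assumed.)

```javascript
// throwIf: throws when the supplied condition is true (signature assumed).
const throwIf = (condition, message) => {
  if (condition) throw new TypeError(message);
};

// Pairs with the boolean checkers: the checker stays pure, throwIf does the throwing.
const isNotNil = (v) => v !== null && v !== undefined;

const getLabel = (item) => {
  throwIf(!isNotNil(item), 'getLabel: "item" must not be null or undefined');
  return String(item.label);
};
```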
P.S.
I'll mention also that taking a functional approach using free funcs—each with their own "exports"—allows one to take advantage of tree-shaking. It's usually not common for a module to use more than one or two of the checker or throwIf funcs.
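For instance (illustrative module, not the actual library), consumers import only the one or two funcs they need and the bundler drops the rest:

```javascript
// checkers.js - named, free-function exports that a bundler can tree-shake.
export const isNotNil = (v) => v !== null && v !== undefined;
export const isString = (v) => typeof v === 'string';
export const isPopulatedString = (v) => isString(v) && v.length > 0;

// elsewhere: only isNotNil ends up in the bundle
// import { isNotNil } from './checkers.js';
```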
Very shocking, coming from the guy whose username is Functional Javascript. :-)
Seriously, though. I rarely use classes for much of anything anymore. I do find them to be practical and useful when creating little libraries of utility functions - which is what this is. Especially when some of those functions need to call each other. But you could definitely do this without the class.
I only just recently added them in my latest iteration. I've been using a previous homegrown library for several years with no params. However, one of the biggest things that I like to check for is making sure that the data types aren't empty. Cuz if you're expecting a string/object/array, it's quite common that an empty string/object/array isn't valid.
To get around this before, I'd have functions like aString() and aPopulatedString(). Adding the optional param was just a way for me to collapse those into a single validation.
The chaining was also something that was only added just recently. I don't think I'd ever written something designed for chaining, but one of my coworkers suggested it because I often have a function with two-or-three arguments, and I want to provide validation on each one. So the chaining is just a way to conveniently and logically collapse it all into a single LoC - in the same way that the function signature itself is usually a single LoC.
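Roughly what that collapse looks like in practice (the call signatures below are illustrative - the published allow API may name or order its params differently):

```javascript
import { allow } from '@toolz/allow'; // import shape assumed

const updateUser = (name, age, tags) => {
  // One line validates all three args. The second arg to aString() is an
  // assumed "minimum length" param, so an empty string fails the check.
  allow.aString(name, 1).anInteger(age, 0, 120).anArray(tags);
  // ...the rest of the function can trust its inputs
};
```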
It's definitely interesting to see your approach. Great minds, and all that...
I especially like how you've logically concatenated them in front of the eventual function call.
Thanks for the feedback!
Hi Adam,
I skipped this article of yours last year. I just wanted to point out one library that might also do the job and that I use everywhere. It's called Joi.
Joi is a really powerful validation library that covers everything you listed and a lot more.
Here are some of your article examples converted:
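(The original snippet is missing here, so this is a reconstruction of the idea using Joi - the specific schema is illustrative, not the article's original example.)

```javascript
const Joi = require('joi');

// Validate a function's inputs against a single schema instead of individual checks.
const schema = Joi.object({
  name: Joi.string().min(1).required(),
  age: Joi.number().integer().min(0).max(120).required(),
  email: Joi.string().email().required(),
});

const { error, value } = schema.validate({ name: 'Ada', age: 36, email: 'ada@example.com' });
if (error) throw error; // error.message explains exactly which field failed and why
```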
It's really powerful - it can check anything with any complex schema. You can check emails, patterns, etc.
Might be worth giving it a shot.
Also, FWIW, the NPM package for my version of this is now published here:
npmjs.com/package/@toolz/allow
Very cool - thanks for pointing this out!
Yeah, that's right. I used to have a simpler version that you can see here:
github.com/bytebodger/type-checkin...
But I don't really use that anymore. In fact, I'm thinking about making allow an NPM library, if only because I've never actually done that before and it'd be kinda cool to say that I finally created my own NPM package.
Exactly. If you look at my own code, I definitely don't keep defensive programming "to a minimum". In fact, I use it all over the place. But I only do so because the raw LoC are so scant. If it's verbose, and if you have to think too much about it, then defensive programming quickly becomes a burden. And when something's a burden, we jump through all sorts of mental hoops to justify why we shouldn't do it at all.
What I don't quite understand about this concept is what your code is supposed to do at runtime when it finds a runtime type error. I see that the allow library throws an Error by default and can be configured with any callback. Isn't the only advantage here that you guarantee your program throws the error at the top of the function, instead of wherever else it would eventually throw an error in the body of the function?
That might make your error monitoring dashboard a little cleaner to look at, but it doesn't seem like it helps the people using the software. I suppose you could also display a slightly better error message, but it's not going to be significantly more useful to any visitor who has no way of resolving the error.
I dunno. That's a good question that I was already thinking of.
I have the React dependency in there for only one of the checks - allow.aReactElement() - which is important to me because I'm mostly a React developer these days. But yeah... I understand that this is package bloat if you're not specifically working on a React project.
I'd probably do it as two packages. The stripped-down one for general JS dev, and the "bulkier" one for React dev. Of course, even React devs could still use the slimmer one if they didn't feel the need to use allow.aReactElement().
I do NOT see a "real-world" problem being solved by using TypeScript.
All I see are silly examples of a "function that adds or divides 2 numbers" to justify "type checking". E.g., who writes a function just to do basic math????
Next, developers are testing (i.e., running) their code LINE-BY-LINE anyway, so "compile-time" checking isn't all that beneficial in saving a developer time while coding.
In other words, TypeScript adds a layer of complexity that doesn't have that much benefit in the JavaScript world.
NEWS FLASH:
Boom!
npmjs.com/package/@toolz/allow
Yuck...
Hahaha. OK.