This post originally appeared on my blog in 2014, but remains relevant, which is why I am sharing it here.
Back in 2014, Scott Hanselman gave a fantastically entertaining keynote at BlendConf entitled “JavaScript, The Cloud, and the rise of the New Virtual Machine.” In it, he chronicled all of the ways Web development and deployment have changed—for the better—over the years. He also boldly declared that JavaScript is now, effectively, a virtual machine in the browser.
This is a topic that has been weighing on my mind for quite some time now. I’ll start by saying that I’m a big fan of JavaScript. I write a lot of it and I find it incredibly useful, both as a programming language and as a way to improve the usability and accessibility of content on the Web. That said, I know its limitations. But I’ll get to that in a minute.
In the early days of the Web, “proper” software developers shied away from JavaScript. Many viewed it as a “toy” language (and felt similarly about HTML and CSS). It wasn’t as powerful as Java or Perl or C in their minds, so it wasn’t really worth learning. In the intervening years, however, JavaScript has changed a lot.
Most of these developers first began taking JavaScript seriously in the mid ’00s when Ajax became popular. And with the rise of JavaScript MVC frameworks and their ilk—Angular, Ember, etc.—many of these developers made their way onto the Web. I would argue that this, overall, is a good thing: We need more people working on the Web to make it better.
The one problem I’ve seen, however, is the fundamental disconnect many of these developers seem to have with the way deploying code on the Web works. In traditional software development, we have some say in the execution environment. On the Web, we don’t.
I’ll explain.
If we’re writing server-side software in Python or Rails or even PHP, one of two things is true:
- We control the server environment: operating system, language versions, packages, etc.; or
- We don’t control the server environment, but we have knowledge of it and can author our programs accordingly so they will execute as anticipated.
In the more traditional installed-software world, we can similarly control the environment by placing certain restrictions on which operating systems our code can run on and by stating its dependencies up front in terms of hard drive space and RAM required. We provide that information, and users can choose to use our software or a competing product based on what will work for them.
On the Web, however, all bets are off. The Web is ubiquitous. The Web is messy. And, as much as we might like to control a user’s experience down to the very pixel, those of us who have been working on the Web for a while understand that it’s a fool’s errand and have adjusted our expectations accordingly. Unfortunately, this new crop of Web developers doesn’t seem to have gotten that memo.
We do not control the environment executing our JavaScript code, interpreting our HTML, or applying our CSS. Our users control the device (and, thereby, its processor speed, RAM, etc.). Our users choose the operating system. Our users pick the browser and which version they use. Our users can decide which add-ons they put in the browser. Our users can shrink or enlarge the fonts used to display our Web pages and apps. And the Internet providers that sit between us and our users dictate the network speed and latency, ultimately controlling how—and what part of—our content makes it to our users.
All we can do is author a compelling, adaptive experience, cross our fingers, and hope for the best.
The fundamental problem with viewing JavaScript as the new VM is that it creates the illusion of control. Sure, if we are building an internal Web app, we might be able to dictate the OS/browser combination for all of our users and lock down their machines to prevent them from modifying those settings, but that’s not the reality on the open Web.
The fact is that we can’t absolutely rely on the availability of any specific technology when it comes to delivering a Web experience. Instead, we must look at how we construct that experience and make smarter decisions about how we use specific technologies in order to take advantage of their benefits while simultaneously understanding that their availability is not guaranteed. This is why progressive enhancement is such a useful philosophy.
The history of the Web is littered with JavaScript disaster stories. That doesn’t mean we shouldn’t use JavaScript or that it’s inherently bad. It simply means we need to be smarter about our approach to JavaScript and build robust experiences that allow users to do what they need to do quickly and easily even if our carefully crafted, incredibly well-designed JavaScript-driven interface won’t run.
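To make that concrete, here’s a minimal sketch of the progressive-enhancement approach described above: a plain HTML form that works with a full-page request on its own, plus a script that only upgrades it to an inline, no-reload search when the features it relies on are actually available. The `#search-form`, `#search-results`, and `/search` names are illustrative assumptions, not anything from Scott’s talk.

```javascript
// Progressive enhancement sketch (names are hypothetical).
// Assumed baseline markup that works without any JavaScript at all:
//   <form id="search-form" action="/search" method="get"> ... </form>
//   <div id="search-results"></div>
// Submitting the form performs an ordinary full-page request to /search.
// This script, if it runs at all, upgrades that baseline to an inline search.
var form = document.querySelector('#search-form');
var results = document.querySelector('#search-results');

// Enhance only when every feature we rely on actually exists.
if (form && results && 'fetch' in window && 'URLSearchParams' in window) {
  form.addEventListener('submit', function (event) {
    event.preventDefault();

    var query = new URLSearchParams(new FormData(form)).toString();

    fetch(form.action + '?' + query, { headers: { Accept: 'text/html' } })
      .then(function (response) {
        if (!response.ok) { throw new Error('Search request failed'); }
        return response.text();
      })
      .then(function (html) {
        results.innerHTML = html;
      })
      .catch(function () {
        // If the enhanced path fails for any reason, fall back to the
        // baseline behavior: a normal form submission.
        form.submit();
      });
  });
}
```

The specifics of `fetch` aren’t the point; the point is that the interface still works when the script never arrives, never runs, or runs on a browser that can’t support it.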
Top comments (7)
When I started to work with JavaScript, the language was still in its infancy, `document.write` was used on most pages, and the browser wars were just starting.

Most developers who came after us don't realize that we live in a golden age, where browsers are performant and capable, and where you can run JavaScript on the server, too. They've gotten so used to it that they can't even see how things could be otherwise.
I think I’ve been fortunate in some way to have had a lot of broken experiences since my earliest days on the web. In fact, my first experience was in a command line browser in ’95, trying to use sony.com, which was all image maps with no alternative text. Major fail. It gave me perspective on how things can go wrong.
I also have a knack for breaking things. I think I’m just a lumbering edge case.
Great read!
Is it actually that bad, though, if you compare an installed client to a web client? In both cases you can't rely on certain things, like the connection.
Totally agree. I’ve installed a number of apps that are network-dependent and seen them fail abysmally without a connection even when it shouldn’t be necessary (e.g., Diablo 3 in 1-player mode). That’s not a universal thing though; there are lots of binary apps that carry everything they need with them.
Reflecting on this eight years later (wow, time flies), are there any particular trends to point to which have worsened, or improved on, the state of things?
I think the one thing I’ve learned over the last ~25 years on the web is that things are cyclical. I think a number of things influence this, most notably new folks joining the effort with their own perspectives and talents. The pendulum of prioritization swings back and forth between developers and end users; sometimes several times a year!
I think what’s been most promising to me is that we are pushing both forward over time. I mean we’ve made giant leaps forward when it comes to developing websites. It’s still hard work, but we’ve found ways to simplify very complex things (e.g., login, credit card processing) and make them more universally available, which improves the quality of the products we build. Those leaps are often accompanied by massive improvements in developer ergonomics, but frequently come at the expense of accessibility and resilience. As these improvements mature, however, they do tend to get backfilled in a more user-considerate way. For example: client-side frameworks leap ahead, followed (eventually) by server-side rendering of those very frameworks.
I think my one wish is that we had a more diverse group of people (and perspectives) at the table when these frameworks and techniques are being developed so that we could end up with more inclusive products from the get-go, with less of the cyclical push and pull that tends to create headaches for the people our products are meant to serve.