DEV Community

Andrea Chiarelli


World Wide Web Wars

When I was a kid, my grandfather used to tell me about his wartime adventures. He served in both World War I and World War II; he was born in 1899.
I was interested in the entertaining side of the stories without understanding the suffering behind them. One thing that stuck with me was that he often said that the two wars were different. They were different in the way they were fought, in the way they involved the population, in their motivations.

Years later, I find this thought about the two different wars applicable to a context that is fortunately less tragic than the one my grandfather experienced: the World Wide Web.

In my career, I've been privileged to see the birth of the Web and its evolution from a platform for sharing documents to a platform for running distributed applications. And along the way, I've witnessed a few wars.

I'll try to tell you about them... as if I were your grandfather.
Make yourself comfortable: it won't be short.

The HTML-Centric Era

In the beginning, the Web was based only on HTML and HTTP. Nothing could be simpler: documents were described using a simple markup language and transferred from one computer to another using a simple protocol. Berners-Lee's idea was simple and revolutionary at the same time. And the key principle was interoperability. Let's remember that word.

I started exploring the Web in the early 90s, when Mosaic was the most advanced browser. In those days, you had to plan your Internet tour very well to avoid bleeding yourself dry with the time-based tariff (at least in Italy). For a developer, creating static HTML pages wasn't that exciting. It didn't require any true programming skills. The interesting part was to generate HTML dynamically on the server via the glorious CGI scripts.

Everything was quite clear and defined. It didn't matter what kind of script you used on the server side: Perl scripts, shell scripts, C programs. What mattered was that its output was HTML and that it was transmitted over HTTP.
And the future looked bright.
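To give a flavor of the model, here is a sketch of the CGI idea, written in JavaScript for consistency with the rest of this article (the era's scripts were typically Perl, shell, or C, and the page content here is invented):

```javascript
// A CGI-style program: build an HTML document on the server and write it
// to standard output, one response per request. The web server relays
// everything after the headers to the browser.
function renderPage(title, items) {
  const list = items.map((item) => `<li>${item}</li>`).join("\n");
  return [
    "Content-Type: text/html", // a CGI script prints headers first...
    "",                        // ...then a blank line...
    "<html><body>",            // ...then the document itself
    `<h1>${title}</h1>`,
    "<ul>",
    list,
    "</ul>",
    "</body></html>",
  ].join("\n");
}

// The web server would capture this output and send it to the client.
process.stdout.write(renderPage("Guestbook", ["Alice", "Bob"]) + "\n");
```

The key point, as noted above, is that the server-side language didn't matter: only the HTML output over HTTP did.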

The First Web War

Then, in 1995, JavaScript came along and the scenario began to change. The client side of the Web also needed development skills. New browsers came out: Netscape Navigator and Internet Explorer. The first war began: the browser war.

Browser vendors, essentially Netscape and Microsoft, competed to offer more and more attractive features: support for animated GIFs, new HTML tags (e.g., marquee, blink, font), scripting languages with little or no cross-browser compatibility (JavaScript, JScript, VBScript), and support for applets, Flash objects, ActiveX components, and so on.

It was a continuous explosion of new features, but each was supported by only one browser, and only from a specific version onward. There was a deliberate strategy, later called EEE (Embrace, Extend, Extinguish), to defeat competitors through incompatible features.

Creating an interactive HTML page was a nightmare. If you wanted the page to be usable by as many users as possible, you had to take into account which features were supported by each browser and by the specific version your users were running. Browser sniffing techniques emerged, which had to be kept up to date so as not to cut users off from your website. To get a sense of what I mean, take a look at some sample code from that era.
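For illustration, here is a toy reconstruction of the browser sniffing of that era (the function name, user-agent patterns, and version handling are my own simplification, not actual production code from the 90s):

```javascript
// Toy reconstruction of 90s-style browser sniffing based on the
// user-agent string (normally read from navigator.userAgent).
function sniffBrowser(userAgent) {
  // IE spoofed Netscape's "Mozilla/<version>" token and appended
  // "MSIE <version>", so IE has to be checked first.
  const msie = userAgent.match(/MSIE (\d+)/);
  if (msie) return { name: "Internet Explorer", version: Number(msie[1]) };
  const moz = userAgent.match(/^Mozilla\/(\d+)/);
  if (moz) return { name: "Netscape", version: Number(moz[1]) };
  return { name: "unknown", version: 0 };
}

// Pages then branched on the result, e.g. to decide whether a
// browser-specific tag or API could be used at all.
const ua = typeof navigator !== "undefined"
  ? navigator.userAgent
  : "Mozilla/4.0 (compatible; MSIE 5.5; Windows 98)"; // sample UA of the era
const browser = sniffBrowser(ua);
```

Every new browser release could break tables like this, which is why the sniffing code had to be constantly maintained.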

The alternative approach was to create web pages optimized for one specific browser: an easier solution for the developer, but a frustrating solution for the user. All over the Web you could find sites with more or less elegant stickers inviting the user to use the browser for which the site was optimized:

Best viewed with Internet Explorer

Best viewed with Netscape

So, in that first Web war, we had fierce competition among browser vendors to offer an increasingly advanced Web experience. Developers had to deal with differences between browsers, and between different versions of the same browser. Many chose to optimize their pages for a particular browser, much to the chagrin of users who experienced accessibility problems and were forced to use multiple browsers.

At the time, the Web was not a good place for developers and users. Nor was it for the interoperability that had been dreamed of. The overall satisfaction with the Web could be summarized as follows:

| Users | Developers | Browser vendors |
|-------|------------|-----------------|
| 🙁    | 🙁         | 🙂              |

The Role of Standards

At this point, you may be wondering about the role of standards in all this confusion. In particular, what about the role of the W3C? In fact, the consortium was responsible for defining a reference standard for web technologies, primarily HTTP and HTML. But at the time, its influence (and responsiveness) was insufficient for a Web as dynamic as that of the late 1990s.

Often the role of the W3C was limited to standardizing HTML features that were already accepted de facto. For example, iframe, object, and XMLHttpRequest were actually introduced by Internet Explorer before they became standards. It was an a posteriori standardization that didn't bring much benefit to either developers or users.

Things began to change when the ECMAScript specification was released in 1997. JavaScript and the other minor dialects had to conform to it to ensure compatibility across browsers. But standardizing the scripting language alone was not enough. In 1998, the W3C published DOM Level 1, which moved browser interoperability in the right direction. But we had to wait until 2000 for the DOM Level 2 standard, which finally brought some order to the chaos of web development.

That year, the W3C went further. It defined XHTML, a version of HTML 4 based on the stricter formal rules of the XML standard, but with the advantage of extensibility: anyone could extend the language by providing an appropriate DTD or XML Schema. The solution was technically sound, but it was not widely understood and was considered too complex. In addition, XHTML's lack of backward compatibility with the old HTML raised many concerns.

This led to the formation of a new working group outside the W3C, the WHATWG, whose goal was a reorganization of HTML that would eventually result in HTML 5. XHTML was doomed to die within a few years.

The Invasion of External Runtimes

As in all wars, there are those who lose and those who profit.
During the browser war, with its lack of interoperability, many developers turned to alternative technologies: Flash, Java, ActiveX, and others that offered a smoother development process and a consistent user experience. All they needed was to install a runtime in the user's preferred browser.

At the time, developing interactive and engaging web pages using HTML and JavaScript was a daunting task. Alternative technologies, such as Flash, allowed an application to be downloaded locally to the browser and have only minor interactions with the server to retrieve data and update the user interface without having to reload the entire page. A real revolution!

Unlike ActiveX, which was tied to Internet Explorer and Windows, Flash and Java applets guaranteed a consistent user experience regardless of the user’s browser.

The price to pay was reliance on proprietary technologies and some security risks that occasionally materialized.

The vision of the Web as a standard and interoperable environment began to crack.
Browser vendors simply supported external plugins, delegating app execution to the Flash & Co. runtimes. Developers who embraced this programming model no longer had to worry about which browser the user was running. However, they were no longer web developers in the proper sense: they abandoned actual web technologies and specialized in developing for one or more of these runtimes. Users could use the browser of their choice, but they had to install the various runtimes and keep them updated. In short, websites optimized for a particular browser had become websites optimized for a particular runtime: not much of a gain, after all.

While developers were quite comfortable once they chose their reference runtime, users continued to suffer from the need to install different runtimes while keeping their browser of choice. In summary, this was the overall level of satisfaction:

| Users | Developers | Browser vendors |
|-------|------------|-----------------|
| 🙁    | 😐         | 😐              |

The JavaScript-Centric Era

With the introduction of Ajax and dynamic DOM, JavaScript and standard web technologies had their redemption. Partial updates of a web page became a reality using standard technologies as well, thanks to what was then known as Dynamic HTML (DHTML).
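A sketch of what such a partial update looked like (the endpoint, element id, and JSON payload are hypothetical; in the early days the payload was more often XML than JSON):

```javascript
// Pure helper: turn fetched data into an HTML fragment.
function renderUserList(users) {
  return users.map((user) => `<li>${user}</li>`).join("");
}

// Classic Ajax plumbing: fetch data in the background and swap only a
// fragment of the page, with no full reload. Guarded so the sketch can
// also be read/run outside a browser.
if (typeof XMLHttpRequest !== "undefined" && typeof document !== "undefined") {
  const xhr = new XMLHttpRequest();
  xhr.open("GET", "/api/users"); // hypothetical endpoint
  xhr.onload = function () {
    // Replace just one element's content, leaving the rest of the page intact.
    document.getElementById("user-list").innerHTML =
      renderUserList(JSON.parse(xhr.responseText));
  };
  xhr.send();
}
```

Compared to the full page reload of the CGI era, this was indeed a revolution.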

JavaScript, HTML, and CSS were the standard triad of the Web. By 2005, JavaScript was the real driver of sophisticated front-end application development. It was the new competitor to proprietary solutions like Flash, Java, and the others.

A few years later, there was a historic changing of the guard on the Web: Netscape Communicator, successor to the glorious Netscape Navigator, left the scene. From its ashes the Mozilla project was born, which in turn gave birth to Firefox in 2004. Chrome followed in 2008.

It was the dawn of a new Web era.

The Web Is JavaScript

The resurgence of JavaScript and the desire to make the web ecosystem competitive against proprietary technologies gave a strong impetus to the proliferation of libraries aimed at simplifying DOM manipulation, Ajax interaction with the server, and integration with CSS.

In 2006, jQuery was born, destined to become the most widely used library on the Web. Soon, MooTools, Backbone, Ember, and others were there to compete with it.

Now developers had great allies to create dynamic and attractive web interfaces without reinventing the wheel. The number of JavaScript libraries grew at an unprecedented rate. Not a day went by without a new JavaScript library being born.

Ecosystems grew up around some of these libraries: collections of libraries that each specialize in a particular task but are built on a common core. For example, jQuery UI is a library for building web interfaces that relies on jQuery for DOM manipulation and other low-level tasks. Bootstrap, too, was born as a jQuery-based UI library.

These libraries relieved developers of the burden of working directly with HTML and CSS to create UI elements. In addition, they did an admirable job of handling the residual differences in standards support between browsers. The developer's life was greatly simplified compared to the previous Web era. jQuery became something of a de facto standard, and you could find it everywhere. So much so that many junior developers had a hard time distinguishing jQuery from JavaScript and the DOM. Many didn't even know what the DOM was. To many, $() was a JavaScript feature!
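The confusion was understandable. Here is a toy sketch showing that $ is nothing more than an ordinary function that a library defines over the standard DOM (the stub document is just there so the sketch runs outside a browser too):

```javascript
// Fall back to a stub so the sketch is self-contained outside a browser.
const doc =
  typeof document !== "undefined"
    ? document
    : { querySelectorAll: () => [] };

// $ is plain JavaScript: a function, not language syntax. The real
// jQuery wraps the matches in a richer object with chainable methods;
// this toy version just returns them as a plain array.
function $(selector) {
  return Array.from(doc.querySelectorAll(selector));
}
```

In a real page, `$(".menu")` would return the elements matching the CSS selector, exactly as `document.querySelectorAll` does underneath.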

Users’ lives also became easier in those years. In general, the experience of browsing and interacting with web pages was pretty consistent. And users could use any browser they wanted, except Internet Explorer 6 and some later versions.

By now, a web front-end application was primarily a JavaScript application. Developers simply had to choose a target library ecosystem and use compatible UI libraries. Building a web UI became a matter of JavaScript code: HTML was just a hook to pull the code into the browser, and rendering was performed dynamically through the base library's DOM manipulation. This was the era of Single-Page Applications.

Soon AngularJS came and later React. Shortly thereafter, Vue arrived to complete the new triad of web front-end development.

Yet Another Web War?

Today, we can say that most front-end development for the Web is based on these three ecosystems: Angular, React, and Vue. Can we say that there is a war going on?
If we compare the current situation to that of the first Web war, we cannot say that there is the same kind of conflict. In general, browser vendors are ensuring some level of compliance with standards, which are finally proactive. The user's browsing experience does not suffer from the same problems as it did then.
But have we achieved that interoperability we dreamed of as a fundamental principle of the Web?

Looking more closely, it is front-end developers who suffer from the lack of interoperability in the current situation. In the first war, they chose a browser and built pages optimized for it to make development easier. Now they are forced to choose a JavaScript ecosystem and build applications optimized for it. On the user side, of course, there is a significant win: users don't care which JavaScript framework developers use. And that's fair.

However, developers cannot reuse the result of their work in another JavaScript framework. A React component can only be used in a React application. If you need the same component in an Angular or Vue application, you have to recreate it.
In short, the war between browsers is now a war between JavaScript frameworks. Maybe a little quieter, but not exempt from the occasional religious war.
The current situation can be summarized as follows:

| Users | Developers | Browser vendors |
|-------|------------|-----------------|
| 🙂    | 😐         | 🙂              |

Back to the Web Platform (Peace and Love)

The current war is not as loud as the browser war (believe me, it is not at all). But that does not mean that it is not a problem. Over the past few years, I have witnessed a split in front-end developer skills between the three major JavaScript frameworks. I have also seen a loss of basic web platform standards skills.

It seems that there is a general loss of basic knowledge, similar to what happened after the success of jQuery. Several libraries simplify the developer's life and make up for the shortcomings of HTML, JavaScript, the DOM, etc. But in the meantime HTML has evolved, JavaScript has evolved, the DOM has evolved, CSS has evolved.

The Web Components standard gives us enough infrastructure to make the Web an interoperable platform for front-end development. Some people criticize this technology for being too low-level, but there are libraries that make the developer's life easier.

You might say that this is the same vicious cycle as JavaScript frameworks. That's wrong, because Web Components are interoperable by design. Choosing Stencil or Lit or any other library is a development convenience that has little to do with the interoperability of the resulting components.
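A minimal sketch of a custom element shows how little is needed (the tag name and markup here are illustrative):

```javascript
// Pure render logic: a plain function, independent of any framework.
function renderGreeting(name) {
  return `<p>Hello, ${name}!</p>`;
}

// The Web Components standard turns it into a reusable tag that any
// page (React, Angular, Vue, or plain HTML) can consume. Guarded so
// the sketch can be read/run outside a browser.
if (typeof HTMLElement !== "undefined" && typeof customElements !== "undefined") {
  class HelloGreeting extends HTMLElement {
    connectedCallback() {
      // Render when the element is attached to the document.
      this.innerHTML = renderGreeting(this.getAttribute("name") ?? "world");
    }
  }
  customElements.define("hello-greeting", HelloGreeting);
  // Usage in any HTML page: <hello-greeting name="Web"></hello-greeting>
}
```

Whatever tool produces such a component, the result is consumed through the same standard DOM interface: that is the interoperability argument in a nutshell.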

Do we really want to continue this silent war? Wouldn't it be better to channel those energies into a standard ecosystem where everyone is free to use the tool of their choice to create components that can be used on the Web as a development platform? Let's transform the Web from a platform for running applications to a platform for composing applications. Let's add that missing piece to restore the original dream of an interoperable Web.

As a developer, the next time you start a new project ask yourself this simple question: do I really need a JavaScript framework?
Maybe it's time to bring some peace to the Web. My grandfather (and not just him) would be happy.

Top comments (17)

Tracy Gilmore

Hi Andrea, Thank you for an excellent article.
Back in year 2000 we came across the forerunner to the XMLHttpRequest component in the form of MSXML2.XMLHTTP.

var xmlHttpReq = new ActiveXObject("MSXML2.XMLHTTP.6.0"); // IE-only ActiveX predecessor of XMLHttpRequest
xmlHttpReq.open("GET", "https://localhost/books.xml", false); // third argument false = synchronous request
xmlHttpReq.send();

MSXML2.XMLHTTP was introduced to the Windows environment courtesy of Microsoft Outlook but could also be used in MS Internet Explorer (5/5.5) to enable 2-way communication between the server and browser. With an XSL-T transform into HTML and some JS-based DOM manipulation (the hard way), the screen could be updated without the need for a page refresh, improving the user experience. Thus the SPA was born, and a few years later Jesse James Garrett would dub the technique AJAX. It would be some years later, with the release of MS IE 7 (and other browsers), that we would get XMLHttpRequest integrated into the browser, and IMO Web 2.0 was born.
I have been designing and developing web-based business applications ever since.
Regards, Tracy

Andrea Chiarelli

Hey Tracy, thank you for your reply.
You may be right. There was a lot of excitement in those years, and the idea of an application that could run in the browser was in the air. It was not called SPA, but very often it was called RIA (Rich Internet Applications).
The point is that there is no scientific definition of SPA, although it is commonly understood to mean an application that runs in the browser without the aid of an external runtime, such as Flash, for example. In the case you mention, there was a dependency on MSXML2.XMLHTTP.

However, at the time, IE introduced several innovations that went in the direction of today's SPAs. I also remember the XML Data Islands, the ancestor of JSON.

By the way, I think the page address you mentioned is this one.

Ian Bradbury

Oh blimey. You've rekindled some painful memories. There was a time when I wrote a search interface to a large Domino backend that used the MSXML2.XMLHTTP component. MSXML2.XMLHTTP to fetch the search results. XSLT to draw the interface and results. And.... my memory tells me that it worked pretty well.

Eckehard

Web components are basically JavaScript modules that are invoked through HTML tags. They can provide some fancy, well-encapsulated building blocks for your page. But the backbone of your page is still limited to what HTML provides. Is this really enough? Are we still only "composing" our content?

Every day new properties are added to CSS to meet growing demands. But it is really hard work to create a fully responsive app using CSS only. What if your page needs to be completely different on a smaller device? Or you want a different appearance at night? Should we add new CSS properties for every possible situation?

Modern applications act more like desktop apps: they need a lot of reactivity and responsiveness, and a lot of internal communication. This can be provided by modern JavaScript without adding new extensions every day. It is easy to create a different page if you build your UI programmatically. So, possibly, frameworks like Svelte or even the more radical VanJS will do a better job.

Ski

If you need a completely different page on a smaller device, consider building two pages instead of trying to smush everything into one. Apply concepts of composability: design two pages but identify the reusable pieces. HTML components really are perfectly scalable. At the end of the day, all frameworks build on top of the same DOM API, so the DOM is always the limit, and typically the more complex an app gets, the more direct-DOM hacks are needed to overcome a framework's limitations. HTML components don't really have more limitations than any framework; overall they are more scalable than any framework. The main issue is developer ergonomics: the API is a little more cumbersome to use, you need more knowledge of possible architectures to use them effectively, and the majority of developers simply don't have working knowledge of them.

Koas

What a great post, thanks! So many memories... I remember back in the day when the only way to update a page without reloading was submitting a form to an iframe and returning a script tag that called a function in the parent window.

Our life as developers is so much easier now :)

Chris Cook

Well written and very informative article! 👍🏻 I actually remember a lot of those things as they became popular.

I think PHP deserves a mention here as well. I remember it as the de facto way to build dynamic websites if you didn't want to get started with Flash or Java applets. And then came Node.js…

Andrea Chiarelli

Hey @chris, thank you for your feedback!
I deliberately wanted to focus on front-end technologies. These are the ones that have undergone the most disruption and caused the most problems for developers, users, and browser vendors.

The evolution of server-side technology is a different story. It has been quieter, and users have generally been unaware of it (as it should be).

Chris Cook

That’s true and makes sense!

Ben Halpern

Great post

Andrej Friesen

What a great article!

Andrea Chiarelli

Awesome! By the way, I wrote a book about Web Components a few years ago :-)

SeongKuk Han

Great post! It was fun to read and an opportunity to rethink the goal of web development, not just as a 'job'. Thank you!

fruntend

Congratulations 🥳! Your article hit the top posts for the week: dev.to/fruntend/top-10-posts-for-f...
Keep it up 👍

Smitter

Python is envious of JavaScript's kingdom on the front-end.

Who knows what we will see of JavaScript vs PyScript in the future.

Carmine Zonno

Isn't everyone already free to use the tool of their choice to create components that can be used on the Web?

Andrea Chiarelli

Sure, everyone is free to use the tool they prefer to create components. But the point is: are components created with tool A interoperable with components created with tool B?
This is the missing piece needed to restore the original interoperability of the World Wide Web project, IMO.