Components are Pure Overhead

Ryan Carniato for This is Learning

A couple of years ago, in The Real Cost of UI Components, I explored the cost of components in JavaScript frameworks and asked whether components were just overhead.

And the answer was: it depends. The Virtual DOM library I tested, ivi, had no issue handling more components. But Lit and Svelte fared dramatically worse. They scaled back to almost React levels of performance as I broke the benchmark down into more components. All their non-VDOM performance benefits basically disappeared.

[Benchmark chart: ivi, Lit, and Svelte results at increasing component granularity]

The versions scale from "0", having the fewest components, through "1", which has a component per row, to "2", where each <td> is wrapped in a component.

Luckily for both of those frameworks, almost all benchmarks can be written as a single component.

But when was the last time you wrote an app in a single component?

In their defense, 50,000 components on a page is a bit much. But this still illuminates an inevitable shortcoming we need to overcome. Two years later, I still stand by the conclusion.

So I'm going to make a bold statement here for the non-Virtual-DOM crowd. I think Components should vanish in the same way as Frameworks. If the new world is compilers, we can do better. We can optimize along bundle chunk lines instead of ES module lines. If Components are throwaway, think about how much overhead we could reduce by inlining them.

But I've come to realize there is much more to this than performance.


Your Framework is Pure Overhead

This is not an appeal to the Vanilla JavaScript purists that lurk in the comments section of every site. Instead, this is an honest look at JavaScript frameworks from someone who builds them.

When one says the Virtual DOM is pure overhead they are often referring to unnecessary object creation and diffing. And Rich Harris, creator of Svelte, covers this topic well.

Of course, as shown above, there are Virtual DOM libraries faster than Svelte, so what gives?

Consider this example from the article:



import { useState } from 'react';

function MoreRealisticComponent(props) {
  const [selected, setSelected] = useState(null);

  return (
    <div>
      <p>Selected {selected ? selected.name : 'nothing'}</p>

      <ul>
        {props.items.map(item =>
          // a key keeps React's list reconciliation stable
          <li key={item.name}>
            <button onClick={() => setSelected(item)}>
              {item.name}
            </button>
          </li>
        )}
      </ul>
    </div>
  );
}



The criticism is that on any state update a VDOM is forced to re-render everything. You only change your selection, yet you still recreate the whole list. However, most performant VDOM libraries can recognize that most of these VDOM nodes never change and cache them rather than recreate them each render.

But more importantly, there is a solution to isolate updates that every React developer knows. No, it's not useMemo. Create a child component.

For the cost of almost nothing, a VDOM library can stop update propagation by wrapping this logic in a separate component. A simple referential check of its props tells it when it needs to re-render. Unsurprisingly, the VDOM can be pretty performant.
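
For illustration, here is a minimal sketch of that fix (the ItemList split, the memo wrapper, and the key choice are mine, not from the original article):

import { useState, memo } from 'react';

// Splitting the list into a memoized child stops update propagation:
// React skips re-rendering it when its props are referentially equal.
const ItemList = memo(function ItemList({ items, onSelect }) {
  return (
    <ul>
      {items.map(item => (
        <li key={item.name}>
          <button onClick={() => onSelect(item)}>{item.name}</button>
        </li>
      ))}
    </ul>
  );
});

function MoreRealisticComponent(props) {
  const [selected, setSelected] = useState(null);
  return (
    <div>
      <p>Selected {selected ? selected.name : 'nothing'}</p>
      {/* setSelected is referentially stable, so selecting an item
          re-renders only this component, not the list */}
      <ItemList items={props.items} onSelect={setSelected} />
    </div>
  );
}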

Speaking of useMemo, some recent attention has been drawn to the fact that it probably shouldn't be the first thing you reach for. Reactive libraries, however, tend to memoize by default.

In React or any other VDOM library when you want to break out of the update cycle structurally, you split out components and lift state. To improve initial render performance with a library like Svelte, you do the opposite and remove as many intermediate components as possible.

Why? Because each component is a separate reactive scope. Often this means more than just creating the reactive scope: there is overhead to synchronizing updates between scopes. This is all corroborated by the benchmark at the beginning of the article.

While we were busy focusing on how VDOM libraries do all this potentially unnecessary work, we weren't paying attention to our reactive libraries doing all this unnecessary memoization.

So yes, your Reactive library is pure overhead too.


Component DX > Performance

When I look at both approaches, I see the same issue: the way we structure our components has too much say in how our applications perform. That is a problem.

A component's purpose is more than just performance. The way our components are structured directly impacts the maintainability of our code.

When you have too few components, you end up duplicating logic. The typical component has state and a view. The more complicated your control flow and the more nested your state, the more you will find the need to duplicate that logic in both. When a new requirement arises, something as simple as, say, toggling visibility, you find yourself creating the same conditional in multiple places.



import { useEffect, useRef } from 'react';

// `Chart` is assumed to be a charting library's constructor; the
// component is named ChartPanel so it doesn't shadow that import.
export function ChartPanel({ data, enabled, headerText }) {
  const el = useRef();
  useEffect(() => {
    let chart;
    if (enabled) chart = new Chart(el.current, data);
    return () => chart?.release();
  }, [enabled]);

  return (
    <>
      <h1>{headerText}</h1>
      {enabled && <div ref={el} />}
    </>
  );
}



In how many different places are we doing additional checks because of props.enabled? Can you find all 4? This isn't React-specific. Equivalent code in Svelte (and most frameworks) touches 3 locations.

Conversely, breaking things up into too many components leads to heavy coupling. Too many props to pass; this is often referred to as prop drilling. The indirection can make changing the shape of that state surprisingly complicated. There is potential to keep passing down props that are no longer used, to pass down too few that get swallowed by default props, and for tracing to be further obscured by renaming.



import { useState } from 'react'

function Toggle() {
  const [on, setOn] = useState(false)
  const toggle = () => setOn(o => !o)
  return <Switch on={on} onToggle={toggle} />
}
function Switch({on, onToggle}) {
  return (
    <div>
      <SwitchMessage on={on} />
      <SwitchButton onToggle={onToggle} />
    </div>
  )
}
function SwitchMessage({on}) {
  return <div>The button is {on ? 'on' : 'off'}</div>
}
function SwitchButton({onToggle}) {
  return <button onClick={onToggle}>Toggle</button>
}



Vanishing Components


The future is in primitives. Primitives smaller than components. Things like those you find today in reactive systems. Things that might look like what you see in React Hooks and Svelte. With one exception: they are not tied to the component that creates them.

The power of fine-grained reactivity, and the reason for Solid's unmatched performance, is not the fine-grained updates themselves. Those are too expensive at creation time. The real potential is that our updates are not tied to our components. And that goes beyond one implementation of the idea.

Between reactive models and these hooks, we have converged on a language for change:

State -> Memo -> Effect

or, if you prefer, Signal -> Derivation -> Reaction. We don't need components anymore to describe our updates. This is the mismatch React developers intuitively feel with Hooks: why do we need to keep track of both our components re-rendering and the closures over our Hooks?
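
As a concrete sketch of that vocabulary (using Solid's primitives here; any fine-grained reactive library has equivalents), note that nothing below needs a component:

import { createSignal, createMemo, createEffect, createRoot } from 'solid-js';

createRoot(() => {
  const [count, setCount] = createSignal(1);                 // State / Signal
  const doubled = createMemo(() => count() * 2);             // Memo / Derivation
  createEffect(() => console.log('doubled is', doubled()));  // Effect / Reaction

  // Updating the signal re-runs only what depends on it.
  setTimeout(() => setCount(5), 1000); // logs "doubled is 10"
});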

And typical Single File Components (SFCs) are just the opposite extreme, where we are still imposing (unnecessary) boundaries by technology. Ever wonder why there is friction between JavaScript frameworks and Web Components? Too much is conflated on a single concept.

Every time we write a component, there is mental overhead in deciding how we should structure our code. The choice doesn't feel like our own. But it doesn't have to be that way.


The Future is Component-less

Not that we won't write re-usable components or templates. It's just that components will vanish, removing their impact from the output. That doesn't require a compiler to start. We can move to make components no heavier than a simple function call. That is essentially Solid, but it is only one way to attack this.
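
As a rough sketch of what "no heavier than a simple function call" means (simplified; Solid's actual compiled output differs):

import { createSignal } from 'solid-js';

// In Solid a component is an ordinary function that runs once to build
// its DOM and subscriptions.
function Counter() {
  const [count, setCount] = createSignal(0);
  return <button onClick={() => setCount(count() + 1)}>{count()}</button>;
}

// <Counter /> compiles to roughly a plain call like Counter(). After
// setup, no component instance lingers at runtime to re-render.
const view = <Counter />;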

We don't need separation to accomplish this either. It is unnecessary to hoist all our state into a state-management tool playing puppeteer to our renderer. I'm proposing aggressive co-location. Modern frameworks have that right: whether JSX or SFCs, we've been pulling it all together and should continue to.

Ultimately, if a compiler could look beyond the current file it is processing and use the language itself to understand your whole app, think of the doors that would open. Our logic and control flow could solely define the boundaries. That's not just unlocking levels of performance; it's freeing ourselves of the mental burden of ever worrying about this again.

Wouldn't it be amazing to recapture the pure declarative nature of something like HTML in authoring our sites and applications? The unadulterated cut and paste bliss? I'm not certain where this goes, but it starts here.

Top comments (49)

brucou • Edited

The thing is, most abstractions come with some overhead... And we still use them happily because they are also useful. So I don't see the focus on overhead or performance as particularly interesting in general. But modularity, separation of concerns, cohesion/coupling, declarativeness -- those are the kinds of things I think we should think about much more. They would be worth a series of articles in and of themselves.

Long story short, components are not going anywhere, because modularity remains a necessity for any codebase of reasonable size. What exactly a component is may vary, but the idea that you split a big thing into small things because the big thing is too big for certain purposes, that is not going away.

Modules being separate, standalone units, they facilitate reusability in miscellaneous contexts, which then positively impacts maintainability, as you mention.

Modularity also relates to composition because the small things must be factored back into the big thing. A good modularity story must go hand in hand with a good composition story.

So the interesting question for me is how to modularize programs or applications.

Talking about UI frameworks, I noticed that:

  • components are not often separate, standalone units (because they rely on some context or external information to perform computations)
  • components are often not reused

In other words, modularization of web applications is more often than not suboptimal, and instead of the spaghetti of imperative programming, we have the spaghetti of many components that interact with each other in obscure ways through undeclared dependencies on external data. I discuss that at length in my Framework-free at last post.

To be real modules, components should be as independent and interchangeable as ESM modules are. That in particular means that they should have an interface that allows predicting the entirety of their computation, so the program that uses the component need not depend on its implementation details, and reciprocally the component need not know a thing about the program that uses it.

So the future is not component-less in the least. In the same way, the fact that ESM modules can be bundled into a single file does not mean that ESM modules are unnecessary overhead. But we may indeed be interested in better ways to modularize our code, that is, a better componentization story than we have as of now, because a lot of what we call components are not actual modules, which seriously complicates, as we know, the composition story.

So I am thinking: let's see how the story continues and how you will address modularity in whatever it is that you propose.

For those interested in modularity, coupling and cohesion: http://cv.znu.ac.ir/afsharchim/T&M/coupling.pdf

Beep LIN

There is always an interesting opinion coming from the functional-programming-based community, that is, a stubborn ignorance of performance.

The thing is, most abstractions come with some overhead... And we still use them happily because they are also useful. So I don't see the focus on overhead or performance as particularly interesting in general. But modularity, separation of concerns, cohesion/coupling, declarativeness -- those are the kinds of things I think we should think about much more. They would be worth a series of articles in and of themselves.

Yes, abstractions always bring overhead, but there is a thing called "zero-cost abstraction" in the C++ and Rust communities. These costs should be paid at compile time rather than at run time.

What Ryan is trying to say here is simply this:

  1. For React with its VDOM, components are cheap at runtime, so we can keep components at runtime for those VDOM-based frameworks;

  2. For non-VDOM frameworks like Solid and Svelte, a runtime component interface comes with a detectable cost, so we keep components only at authoring time and eliminate them during compilation, so they vanish at runtime.

This is surely a legitimate argument: taking a little longer to compile, achieving better runtime performance, and doing no harm to modularity, decoupling, etc. Very close to "zero-cost abstraction".

brucou • Edited

Quoting from a previous reply:

Costs have to be put in front of benefits. For instance, not paying the cost of reassembly (when your small modules become a big one) through compilation may have other costs that are not discussed or obvious; and/or produce benefits that are not worth the trouble. I can't talk about what you are proposing because I don't know what that is.

The general idea to be efficient or economical is a good one; that is the basis of any proper engineering approach. But my point is that the devil is in the details.

JavaScript is not a zero-cost abstraction either, and you pay most of it at runtime. Should we compile JavaScript to binaries and send that to the browser? Compiling is great, inlining is great, anything to make the code run faster is great, but my point is that it is not free. There are tradeoffs, and I want to look at the full picture before adding yet another layer of complexity in a landscape that is already crowded.

Second line of thought: functional UI also ignores components. In fact, Elm has long recommended staying away from arbitrarily putting things in components a la React, not because of some artificial FP religious values, but simply because they have found better patterns (that is, patterns with better tradeoffs).

Beep LIN

JavaScript is not a zero-cost abstraction either, and you pay most of it at runtime.

Yes, by far JS is the most widely used (partly) FP-flavored language in the practical world, thanks to the hard and dirty work of v8 and the other engine teams, who take performance as a major pursuit rather than ignoring it.

Similarly, the React core team does all the complex and dirty work inside the framework so that we can enjoy the neat f(state) -> ui pattern. And yes, they are trying their best to improve performance.

Should we compile JavaScript to binaries and send that to the browser?

Yes, that is why we have Rust and wasm now, and they may bring great changes in the near future.

they have found better patterns (that is patterns with better tradeoffs).

That is the point. In EE we have a concept called the gain-bandwidth product. For a given circuit pattern, the product remains a constant: increasing gain will harm bandwidth and vice versa. It seems much like the argument that pursuing better performance and less overhead will harm modularity and neatness. When we have a fixed performance-neatness product of, say, 12, do we choose 2 for performance and 6 for neatness, or 4 for performance and 3 for neatness? That is what tradeoff means.

But that is only the beginning of the story. In fact, human beings keep developing new circuit patterns, inventing new designs, and exploring new materials to achieve a better product. The same applies here. We cannot say vanilla JS and React and Vue and Solid share the exact same performance-neatness product, so that the only thing that matters is some kind of tradeoff. Not true. Framework authors are trying to push the product to a higher level. Ryan in this article is trying to point out something that can improve performance without harming neatness. In fact, his work can be seamlessly used with XState or Raj or Kingly, all tools you mentioned in the Functional UI articles. That is pure progress. That is what you called better patterns bringing better tradeoffs.

Application-level engineers like us mostly accept a given performance-neatness product determined by our infrastructure and make tradeoffs within it. But infrastructure-level engineers, like framework authors such as Ryan, have a higher duty to enhance the product for the good of all.

brucou • Edited

I feel like this is slowly drifting off topic. The title of this piece is "components are pure overhead", an assertion that I reject as sorely lacking in nuance. Then "The Future is Component-less" I also reject, because once again we have a framework author busy evangelizing his particular vision of the future through gratuitous, dramatic, click-baity formulas. As much as I like discussing programming topics, and god knows a lot of topics are worth discussing (modularization being a very important one), this kind of gross, ill-founded generalization irks me to no end and takes me away from actually spending my time addressing them.

Regarding performance improvement of libraries, frameworks, compilers, etc., hats off to all those who are bringing this about. I am glad that they found their calling and that their audience can benefit from their efforts. They generate options and enlarge the solution space. I do reiterate, however, that performance is just one variable among others, and that architects and tech leads need to take a holistic view when making decisions.

I do get the point that you can compile away "components" under some circumstances -- that works for any abstraction (Kingly, for instance, compiles away its state machines). I do get the point that removing the necessity to create components for reasons other than the benefits of modularity actually frees the design space for the developer. All of that is good. Whether all of that will actually be worth pursuing in your specific application/team/constraint context is another question. Your mileage will vary.

peerreynders

So I don't see the focus on overhead or performance as particularly interesting in general.

I think that this position is informed by past experience on the back end and on desktop.

  • Personal (i.e. client side) computing has shifted to handheld devices.
  • While some flagship devices are still increasing the fat CPU core single thread performance, average devices are opting for a higher number of low power/small core CPUs with lower single thread performance.
  • Moore's law is done.
  • While improved mobile network protocols promise performance gains under ideal conditions, growth in subscriptions and consumption can quickly erode gains on the average connection.

As a result future average device single thread performance and connection quality could be trending downwards under many circumstances.

Aside: The Mobile Performance Inequality Gap, 2021.

The headroom necessary to accommodate the overhead of microfrontends may exist over corporate backbones - not so on public mobile wide area networks. So in many ways a lean perspective, much like in the embedded space, is beneficial in web application development.

Most of React's optimizations in the last few years were geared towards getting the most out of the client's single (main) thread performance, in order to preserve the "perceived developer productivity" of the Lumpers component model where "React is the Application/Architecture", and to forestall the need to adopt an off-the-main-thread architecture that moves application logic and client state to web workers, significantly increasing development effort. Svelte garnered attention because it is often capable of delivering a much better user experience (than React) to hyper-constrained client devices by keeping the JavaScript payload small and the CPU requirements low, while also maintaining a good developer experience.

components are not going anywhere because modularity remains a necessity for any codebase of reasonable size.

The issue is that in many cases components aren't a zero-cost abstraction at runtime. So while component boundaries may be valuable at design time, their cost shouldn't extend beyond compile time. Frameworks/tools should favour abstractions that only impose runtime cost where there is a runtime benefit - all other (design-time) abstractions should ideally evaporate at compile time.

brucou • Edited

I think that this position is informed by past experience on the back end and on desktop.

Maybe. But also, I think that user experience is the thing we care about. Performance, understood here as CPU-bound, is one of many proxies for that (along with look and feel, network, offline experience, etc.). The idea is that spending time chasing an X% improvement in "performance" that is not noticed by the target user is a waste of engineering resources. Microbenchmarks, being by design not representative of the user experience, are interesting for library makers but not so much for people picking libraries. That is, you would not pick a framework or library based on microbenchmarks. So that is why I never find arguing over a limited definition of performance in unrealistic conditions remotely insightful.

The issue is that in many cases components aren't a zero-cost abstraction at runtime. So while component boundaries may be valuable at design time, their cost shouldn't extend beyond compile time. Frameworks/tools should favour abstractions that only impose runtime cost where there is a runtime benefit - all other (design-time) abstractions should ideally evaporate at compile time.

JavaScript is not a zero-cost abstraction either, and you pay most of it at runtime. Should we compile JavaScript to binaries and send that to the browser? Compiling is great, inlining is great, anything to make the code run faster is great, but my point is that it is not free. There are tradeoffs, and I want to look at the full picture before adding yet another layer of complexity in a landscape that is already crowded.

Ryan Carniato

Yet the article is about removing constraints caused by the current abstraction. The foundations here predate React and this current component-centric view, and are echoes from a simpler time.

That being said, I'm not suggesting going back there. My argument here has been about removing the cognitive overhead of contending with 2 competing languages for change on the React side, VDOM vs Hooks, and liberating the non-VDOM side from unnecessary imposed runtime overhead that hurts its ability to scale.

But if that isn't convincing enough consider the implications on things like partial hydration. This has much larger performance implications.

When I step back, this isn't micro-optimizing but adjusting the architecture to ultimately reduce complexity. Nothing like leaky abstractions to add undue complexity. Every once in a while we need to step back and adjust. But like the pool I bought last week that won't stay inflated, it often starts with finding the leak.

peerreynders

The idea is that spending time chasing an X% improvement in "performance" that is not noticed by the target user is a waste of engineering resources.

How can you be sure that it isn't noticed? Squandered runtime performance is an opportunity cost to user experience.


A Quest to Guarantee Responsiveness: Scheduling On and Off the Main Thread (Chrome Dev Summit 2018)

And there are costs to the business as well:

In A/B tests, we tried delaying the page in increments of 100 milliseconds and found that even very small delays would result in substantial and costly drops in revenue.

Marissa Mayer at Web 2.0 (2006)
Google Marissa Mayer speed research

web.dev: Why does speed matter?

JavaScript is not a zero-cost abstraction either, and you pay most of it at runtime.

JavaScript is the means for browser automation. Ideally most of the heavy lifting should be done by capabilities within the browser itself coordinated by a small set of scripts. Unfortunately many JavaScript frameworks and libraries decide to do their "own thing" in pure JavaScript potentially bypassing features that are already available on the browser.

  • Treeshaking is already used to remove unused JS.
  • Minification which produces just functional but not readable JS is standard practice.

So tooling which emits the minimum amount of code necessary to get the job done sounds like the logical next step.

And at the risk of repeating myself:

Object-oriented development is good at providing a human oriented representation of the problem in the source code, but bad at providing a machine representation of the solution. It is bad at providing a framework for creating an optimal solution.

Data-Oriented Design: Mapping the problem (2018)

More than a decade ago, part of the game industry, constrained by having to deliver optimal user experiences on commodity hardware, abandoned object-orientation as a design-time representation because the consequent runtime inefficiencies were just too great. In that case it led to a different architecture - Entity-Component-System (ECS) - aligned with the "machine" rather than the problem domain.

Similarly, in the case of a (web) client application, the "machine" is the browser. "Components" neither serve the browser nor the user at runtime - so it makes sense to make them purely a design-time artefact that gets erased by compilation - or perhaps "components" need to be replaced with an entirely different concept.

Ryan Carniato • Edited

In the same way, the fact that ESM modules can be bundled into a single file does not mean that ESM modules are unnecessary overhead.

In the same way that the bundler removes the ESM modules, frameworks can remove the components. If you were loading each ESM module independently, I would argue that is unnecessary overhead. And it's the same thing here.

I'm not saying people won't modularize their code and write components. Just that components aren't needed as a mechanical part of the system, and we should explore removing their weight. This started from a performance perspective, but it has DX implications too.

I am familiar with the idea of driving everything from above. I'm just wholly unconvinced. There are similarities to MVC and MVVM, and those are perfectly good models. In fact, I think a good portion of pretty much every app benefits from this. However, at some point the rubber meets the pavement.

And sure, you can write everything in VanillaJS. That's always an option. As is hoisting state. The same can be said for Web Components. But there is that zone where cohesion matters, and that is where I'm focusing. This is the domain of UI frameworks.

The reason that React and other frameworks are looking so hard at solutions here is that they are essentially trying to see if we can hoist state but let the authoring experience be one of co-location. It's a sort of inversion-of-control-like pattern. Solid is like what happens if you decide to write a renderer from a state-management solution, and in doing so we kind of stumbled on a solution that achieves exactly that.

The too-few or too-many component issues still transfer outside of the components themselves. It's true of state management too. Any sort of hierarchical tree where there is ownership/lifecycles and the need to project that onto a different tree. I think it is important to see there are 2 trees here, but just as important not to force things too far in either direction. Pushing things up further than they want to go is bad for different reasons than pushing things too far down.

That's really the whole thing here. It's about removing unnecessary boundaries born of misalignment. Modularity has its place, but you don't need a JavaScript framework to give you that. It comes down to the contract of your components. Some things are naturally coupled, so why introduce the overhead in communication there as well? The problem with common frameworks is that you aren't breaking things apart because they are too large but for some other reason. I want to remove that reason. Breaking stuff apart is perfectly fine, but why pay the cost when it comes back together?

brucou • Edited

If you were loading each ESM module independently, I would argue that is unnecessary overhead.

Overhead, maybe; unnecessary, not sure. There are the costs of things, and then also their benefits. So you need to sum both.

Just that components aren't needed as a mechanical part of the system, and we should explore removing their weight.

Sure. That is the same idea as dead-code elimination, i.e., not bundling library code that is not used. But that does not mean libraries are overhead, right? The dead code sure is.

And sure, you can write everything in VanillaJS

Interestingly, that may be the zero-overhead solution. But in the article I was not advocating using Vanilla JS only. With Functional UI you can still use React, for instance, but you would only use pure components. Pure components are actual modules. The module interface is the parameters of the function. They depend only on their parameters, which makes them independent, so they can be kept separate and reused in many places. They compose easily because they are functions. In fact, we haven't yet found a much simpler way to compose/decompose computations than functions.

Now, a user interface application can be seen as a series of computations (reactions to every incoming event - that's Functional UI), but also as a process that is alive as long as the browser page is open. So how do we modularize long-lived processes? There have been several answers to that question. In microfrontend architectures, for instance, modules are mini-applications that can be deployed entirely independently. They communicate with other modules through message passing (or events) to realize the whole application's behavior. Mini-apps as modules come with their own set of tradeoffs and overhead, but those who adopt that architecture find it worth the independent-deployability advantage that they get. You can have different teams working completely independently on the smaller parts, which gives you development velocity. But that's just one way to modularize; there are others.

Some things are naturally coupled, so why introduce the overhead in communication there as well?

Yes, cohesion/coupling is a discussion worth having. What makes sense to group together? What is the shape of that group? How do the groups communicate? etc.

The problem with common frameworks is that you aren't breaking things apart because they are too large but for some other reason.

Absolutely agree. How do we modularize in ways that preserve properties of interest? That is a discussion worth having.

Breaking stuff apart is perfectly fine, but why pay the cost when it comes back together?

Sure but also why not? Costs have to be put in front of benefits. For instance, not paying the cost of reassembly (when your small modules become a big one) through compilation may have other costs that are not discussed or obvious; and/or produce benefits that are not worth the trouble. I can't talk about what you are proposing because I don't know what that is.

The general idea to be efficient or economical is a good one; that is the basis of any proper engineering approach. But my point is that the devil is in the details. So I am looking forward to seeing how you approach the problem, what benefits your approach will provide, the associated costs, and what the sum of all that looks like.

Aleksandr Hovhannisyan

But more importantly, there is a solution to isolate updates that every React developer knows. No, it's not useMemo. Create a child component.

This part confused me, but I may have misunderstood what you were going for. Please correct me if I'm mistaken, but when a parent component re-renders, so will all of its children, unless those children:

  1. Are pure components (available in class components).
  2. Have a shouldComponentUpdate that always returns false (basically same as #1 but more explicit).
  3. Are wrapped with the React.memo HOC.
Ryan Carniato

I'm over-generalizing, so I apologize for the inaccuracy. That differs between VDOM implementations; some do check the props directly. But you are correct, that is what it means for React specifically.

Aleksandr Hovhannisyan

Ah, all good! I thought I'd misunderstood something.

Kasey Speakman • Edited

Thank you for this thoughtful analysis. I love this point:

While we were busy focusing on how VDOM libraries do all this potentially unnecessary work, we weren't paying attention to our reactive libraries doing all this unnecessary memoization.

IMO the real advancement in most UI libraries/frameworks today is declarative UI. This brought a gigantic leap in developer productivity compared to the jQuery or vanilla DOM-manipulation days. Of course, they all tend to mitigate their success with a component model, which gives a concrete place to start but leads people to abstract prematurely, pay the overhead price, and then pay the price of changing that structure later.

We like the MVU pattern. It lets us create abstractions as we identify them, not before. And we happen to use React for rendering only, not for state or component-orientation, although components may be created automatically/transparently based on our declared HTML.
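
For anyone unfamiliar, here is a minimal sketch of the MVU (Model-View-Update) loop in plain JavaScript (the names are illustrative, not the actual Elmish API):

const init = { on: false };                                       // Model
const update = (msg, model) =>                                    // Update
  msg.type === 'TOGGLE' ? { ...model, on: !model.on } : model;
const view = model => `The switch is ${model.on ? 'on' : 'off'}`; // View

// The runtime owns the loop: every dispatched message produces a new
// model, and the view is re-rendered from that model.
function run(init, update, view, render) {
  let model = init;
  const dispatch = msg => {
    model = update(msg, model);
    render(view(model));
  };
  render(view(model));
  return dispatch;
}

const dispatch = run(init, update, view, console.log);
dispatch({ type: 'TOGGLE' }); // logs "The switch is on"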

Ryan Carniato • Edited

I've been seeing a lot of this, and it is a reasonable solution to the problem. I just immediately wasn't happy with the fact that we were still feeding into React etc. I actually did a cool experiment (super rough) with XState where I granularly applied updates. So I think these ideas could play together nicely:

Kasey Speakman • Edited

Our toolchain (F#/Elmish/Fable) had another option for rendering. But with the preponderance of React developers, I think it didn't make sense for the maintainer to keep it. The important point here is that our developers don't have to know or care what the renderer is. We had to use keyed in some cases to make React do the right thing, but otherwise it just looks like HTML (as F# functions) to us. Our organization/abstraction strategy is at the language level rather than the UI-library level. And it is possible to switch renderers if an alternative presents itself and the need arises.

We could use React bells and whistles (some projects do), but we choose not to. We don't want to get pulled into over-abstracting.

Ryan Carniato

In a sense, you've bought into a different framework. Instead of betting on the renderer, you are betting on your business logic, which is a good bet to make. What is interesting to me is that frameworks keep developing new techniques, like Svelte's animations or React's Concurrent Mode or Server Components, which are unique to them.

Obviously, we can opt not to use these features, but to me this is very much a different type of framework choice. Keyed is one thing, but there is a reason I'm convinced this isn't easily universalizable, as much as I support research in the area, even in things like Marko (the idea being that HTML with some extensions as a language could map to any framework).

I'm very interested in what the renderer is doing, since only through its primitives do we have the knowledge to fully leverage things like compiler analysis. There are benefits to separation, but that always comes with a tradeoff in terms of optimization.

Kasey Speakman

I assume we might have to make some React-specific tweaks if performance becomes an issue. But we're still waiting for that day. I suspect it has to do with following FP, which should only create pure components.

Kasey Speakman • Edited

I wrote about it here. Edit: Well, just the organization part. I suppose the tech is glossed over a bit.

Ryan Carniato

I think the client side is plenty optimal in general. We are getting to the limit of what we can do here. Honestly, I think this is the last overhead to remove in the browser-rendering part of the equation, and we are only getting at those last dozen percentage points or so. I like solutions like what you are proposing, because if you ever do hit the need, your decision will be more empirical. You will go with the smallest/fastest choice, as you aren't that dependent on the renderer's features.

But as an author here, I'm always going to be looking to do better. The real push, I imagine, will come from server-side rendering and isomorphic solutions. I didn't really touch on this, but that same ability to analyze state can improve bundling. Tooling is the next frontier of JS frameworks. Fatigue is ending; now it's time to battle complexity.

Maciej Kwaśniak

I totally feel you! ❤️ Components should die as a concept and not be bound to rendering and reactivity. Though I think you could explain a bit more in the article what you mean by components, as I'm 100% sure people will read it as code modularization, which is not the point.

Anyway, I'm so happy someone is moving in the same direction.

Ryan Carniato

Modularization is still important; it's just that the re-rendering UI component as we know it from React is restrictive. And this seems to be the one thing most libraries share, even if they do it differently. Which is probably why this article can be classified under unpopular opinions.

But creators understand:


Valeria

If you create 50,000 variables instead of using a loop, you'll notice a significant performance drop. Does that mean that variables are doomed to be replaced with separate pointers and values to cover that use case, or simply that they're not supposed to be used this way?

The problem with frameworks is not their limits but the inability to fine-tune them to your needs. This goes hand in hand with how we obfuscate and distribute JS dependencies.

Imagine CRA, but with the "framework" functionality hosted in a lib folder. This way you could suggest a recommended structure yet let coders decide what's better for their project and avoid unnecessary functionality/overhead. That's what I'd love to see in the future.

Best regards, a lurking purist :-)

Ryan Carniato

If you create 50 000 variables instead of using a loop...

If I was writing an optimizing compiler. Maybe.

I'm not quite following the CRA example. I gather you mean something different from tree-shaking. Frameworks like Svelte are playing at only including the code you need by abstracting the underlying JS with nice DSLs. In doing so, they narrow the band and capture intent better. Even things like JSX do this to some degree.

I was saying that the language of reactivity or hooks makes for a powerful DSL for describing application updates without relying on components for that. Svelte is already going down this path. I find it takes a runtime solution to motivate a compile-time one; we do things manually before we automate them. If this unlocks this sort of capability at runtime, compilers will follow (as the tooling becomes sophisticated enough to follow).

Valeria

TL;DR The problem with framework abstractions is not the abstractions themselves, but rather the inability to change them.

I was saying that I'd like to have control over the tech I'm using, control over its code. You would agree that forking React or Svelte to incorporate it into your codebase is a nightmare. But JavaScript frameworks can and should be distributed as code, as a template project with small, editable library functions.

I agree with you, optimization will always be needed. Not ephemeral one-size-fits-all optimization, but custom, project-oriented fine-tuning. And the most efficient way to do that is to simply edit the code.

Ryan Carniato

Ok, gotcha. Hmm... this is the first I've heard this particular argument. React is on one side with a heavy VDOM abstraction at runtime, and Svelte is on the other side, where the compiler takes care of everything.

I'm going to take note of this, because Solid's everything-is-just-a-reactive-primitive approach lends itself to it. Not sure what to do with that though. Templating is the one place where there is always a lot of code, even in things like Lit. Diffing solutions aren't really end-user tweakable, and non-diffing solutions like Solid are bulkier without leveraging tools like compilers.

Matt Campbell

I wonder if it would be feasible, using a sufficiently advanced compiler, to not only make the divisions between components vanish in the generated code, but also to translate reactivity into optimal imperative updates. That is, while the source code would be written in a declarative, reactive style, the generated code would have no signals, observers, memos, effect functions, etc., just direct imperative updates to the relevant DOM nodes, colocated with the corresponding change in the data model, as if we had written it by hand in the painfully hard-to-maintain way. It would be better yet if we could eliminate all list diffing, though that might require us to change the way we fetch data from the server. Would this level of compile-time magic be too much to hope for?
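
As a purely hypothetical sketch (no existing compiler emits exactly this), a declarative binding like <h1>Hello {name}</h1> might compile down to a setter that touches the one DOM node depending on the value, with no signals, observers, or memos left at runtime:

const h1 = document.createElement('h1');
const text = document.createTextNode('world');
h1.append('Hello ', text);
document.body.append(h1);

function setName(value) {
  text.data = value; // the imperative update, colocated with the data change
}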

Ryan Carniato

Bingo. Svelte actually does this at a component level. There are no actual subscriptions etc... The only reactive unit is the component itself. But if you could take this further, you could make all the code compile this way. I'm not sure we can get rid of list diffing, but for MPA-style frameworks you could probably avoid it in most places.

And it turns out this knowledge of what is reactive actually lends itself to partial hydration, because you can instantly understand what could change at a sub-component level. You could literally ship the least amount of JavaScript to the browser.

I'd be lying if I was to say I wasn't working on a project already that is on its way to doing all of the above.

Matt Campbell

I'm guessing you're talking about Marko. If so, I can't wait to see where it's headed. Maybe it's too soon for you to answer this, but if I were to start writing an application now using Marko 5, would I need to do a big rewrite when Marko 6 comes out?

Ryan Carniato

The syntax is pretty locked, at least grammatically; we're open to suggestions on the exact syntax. dev.to/ryansolid/marko-designing-a.... But it's a bit like the move to React Hooks. Old Marko will work (slightly less optimally), but if you were going to rewrite everything with Hooks anyway, I can see the desire to wait.

We have already written and benched the server-side compiler, and we've closed the gap with Solid in raw SSR speed. What is left to do is the browser runtime and sub-component hydration. I'm going to write an article on this in more depth in the future. We started with basically a pre-optimized runtime reactive strategy, but we were limited by the compiler's ability to analyze intention (similar to how Svelte assumes lets are signals, more or less). With Solid, explicit control gave it the performance edge in that regard. But at the same time, the new Marko still had a certain amount of runtime overhead. It was fast, slightly ahead of Svelte in the fast-VDOM range, but we weren't happy that it felt like a compromise in a sense.

But about a month ago Michael Rawlings had an epiphany about how to achieve Svelte's compile-away reactivity with fine-grained, component-independent updates. We're still vetting it, but early benchmarks indicate we've succeeded in removing the majority of the overhead of the framework. I will share more as I have more concrete things to share.

Javier Vélez Reyes

Good article, Ryan. I agree with all of your performance reflections. But IMHO the problem with components is not in those concerns but in how they are being understood and used by the community.

Web components, at least the standard ones, were devised as a reuse solution and not a modularization one, as @brucou greatly argues. From an old-fashioned perspective of Web experience, where it is assumed that users need and want to work with closed applications in an '80s style running in browsers, it could be true that Web Component technologies have no relevant role.

However, the world is rapidly changing, and frequently we as developers don't realize it. While users nowadays demand new interaction models based on an omnichannel, multi-device world where interaction experiences are based on oral dialogues or micro-gestures on tactile watches and screens, developers go on creating fenced solutions based on Web and mobile technologies, far from what users expect.

As developers, we should be concerned with providing new experiential models aligned with user demands. A dentist appointment should be first a mail attachment, then a calendar date, and then a notification on my preferred wearable. In that no-fences world, experience flows liquidly from one channel to another (mail, web, push notification) and from one device to the next. Experiences are immersive. A YouTube video is a small player on my mobile while I'm coming back home on the underground. Then, when I arrive home, the video becomes a full experience simply by means of a gesture pointing at my smart TV while I relax on my sofa with a cup of tea.

In this new world, Web components are the basis for supporting omnichannel, multi-device, liquid experiences where the interaction model with the user traverses more than a single application. Now the closed web experience, and in particular the term "application", is forbidden, because users are not interested in this kind of metaphor.

From the B2C point of view, businesses use Web components as a means to create a corporate dialogue with final users. Components are transactional access points enabling fresh business-spreading strategies around the Web. There is no longer a single corporate website where a corporation centralizes user dialogues. There are a lot of interactions in my Google results, ads across the web, my voice assistant in the car, my watch notifications, etc. All of those realities are Web components.

From a B2B perspective, REST APIs are becoming declarative, HTML-centric dialects allowing business experts to insert access points to other businesses on the Web. Cloud-based companies offer payment, commerce, or whatever other solutions, and I only need to insert HTML snippets from those companies to get easy and straightforward integration with them. Here Web components work collaboratively on foreign webs as DSLs to create a declaratively organic experience based on composition, including both own and external tag families.

The user demand is nowadays a reality. People don't write down text in WhatsApp; they dictate to the microphone. Components are the technology ready to be used in that direction, to abandon silo-based experiences. The only question is when we, as developers, will realize that the world has changed.

Ryan Carniato

I've definitely said for the longest time that framework UI components aren't the same as Web Components. Different goals etc. The interop goals of Web Components are also different from the re-usability goals. I actually wrote a whole article on this that I haven't published yet.

Web components are a wonderful widget platform, but I'm not convinced anything beyond the most basic ones are great application building blocks. I was talking with Justin from Lit a few months back when we were looking at the potential of using the Declarative Shadow DOM at eBay, and the one thing that was clear, at least from his perspective, is that for Web Components to work across environments you will be relying on libraries/frameworks. There are always going to be gaps in the standards. We end up replacing one type of framework with another.

On one hand, I don't think these things need to be at odds with each other, as the framework can live inside the component. On the other hand, when you hear people like Rich Harris talk about Web Components (and I am generally in agreement), compared to the power he has to optimize and orchestrate with Svelte, especially around things like animations, there are clear tradeoffs. Not everyone needs this sort of orchestration, but SPAs exist for a reason.

There are places where this interop is key, and there are others where an optimized single experience trumps it. It's where they meet that is interesting, I think.

Ryan Carniato • Edited

This is so hard, because in terms of production Marko is rock solid, but I mean, we all want to play with the new toys. And the new Marko brings a lot of cool things. We are still aiming for later this year, and hoping to do a beta release this summer.

Maybe we can meet halfway. Projects like Vite + HMR have made the dev experience a lot smoother. At some point in the process we are backporting the new Marko syntax into Marko 5. This will mean that while you won't get to leverage the new technology, you will be able to slowly migrate old projects, or in your case use the new syntax, so the bump to Marko 6 won't require a code change to benefit. Obviously, writing new transformations for the compiler is a time investment taken away from working on Marko 6, but it sounds like there is interest here.

I will look at what it takes to get this out.

૮༼⚆︿⚆༽つ

What approach is the Marko LSP server going to use for type checking?

  • Is it wrapping up a bunch of stuff like tsserver, vscode-css-languageservice, etc., just like the Svelte langserver? Or
  • rolling Marko's own type checker? (meaning it doesn't depend on a 3rd party, which can be more performant)
Ryan Carniato

I was waiting for more information from Dylan, but he's out this week. To my knowledge, probably something similar to Svelte. We saw the limitations and were sort of surprised by them, but if that is enough to satisfy the requirements and it's quicker, it can help perception a lot. Getting TS in the templates would be a huge win as it is, and we want to prioritize the architectural considerations first.

Comment deleted
Ryan Carniato

I spend a decent amount of time pointing out the tradeoffs of framework decisions and benchmarking with vanilla JS as the control. This is constantly met with, more or less, "then stop using a framework" or "see, vanilla wins all the benchmarks". I figured a section header that reads "Your Framework is Pure Overhead" was just inviting more of the same, since it is really beside the point and doesn't lend anything to the discussion.

Danny Engelman • Edited

Great article; I share most of your tech vision, but from a different point of view.

Traditional Frameworks (and Svelte) are only "Components" for the Programmer.

To the User the end result "Product" is one big monolith.

From 1994 onward, I saw the Web grow big because users could easily copy "code" from other websites.
The WWW wasn't the only technology back then,
but it was the only technology with a low barrier to entry.

And we can't help but agree "Web Development" has turned into something for-rocket-scientists-only.

Using Frameworks (and Svelte) is like buying an IKEA Billy bookcase glued together, never to be taken apart again. Unlike in the early Web days, it is impossible to learn how to build/copy/enhance/extend your (own) bookcase.

That is not how Tim Berners-Lee envisioned the Hyper-Web!


Web Components technology is not about technology.

Web Components are about modularizing the whole stack.

Functionally, like how we use CDNs and libraries.
(Alas, Lea rightly complained; there is no technology yet to rate & share good Web Components)

Web Components bring the Web back to its roots.

Web Components are Web Components are Web Components:

Web Components Technology

  • Can the implementation be made better?
    Sure, Apple, Google, Mozilla and Microsoft are actively working together

  • Will the implementation be made better?
    Yes, bright minds like Rich Harris inspire others

  • And don't forget, CPUs still get faster every year.
    "Performance" is becoming a non-argument fast

PS. Most current Web Component developers are developing monoliths.

peerreynders • Edited

And don't forget, CPUs still get faster every year.
"Performance" is becoming a non-argument fast

Largely repeating my earlier comment:

So even if there are faster CPUs every year, trends are conspiring so that a web application will more frequently encounter devices with lower single-thread performance - which is a problem, as most third-party browser technologies are still single-threaded (Why can't we just make everything multithreaded?; meanwhile the browser itself is moving many non-JS tasks off the main thread). At this point in the game, being able to do everything on the main thread simplifies your application development, while leveraging web workers introduces an entirely new set of trade-offs. So getting the most work out of the main thread is still very much an issue.

Also I really wanted to like Web Components.

Having been introduced back in 2011, they seem to be a product of a time when most innovations were entirely client focused (i.e. CSR). By the time they became viable, CSR frameworks were already scrambling to retroactively bolt on SSR, while Web Components seemed to lack a server-side/hydration story. Being able to split the server and client-side aspects of a component, or being able to share the markup template(s) in an implementation-agnostic manner, would seem like a good idea.

Danny Engelman • Edited

The most important aspect all these discussions forget is:

It has NOTHING to do with technology

In August 2019 the W3C and WHATWG agreed that the WHATWG would be in the lead on Web development.
The W3C will only give the final "it's a standard" approval.

The WHATWG is by-invitation-only.
And to date, Apple, Google, Mozilla and Microsoft haven't invited Facebook.

and it is ALL about technology

This means no single company can get away with single-company-dominated technologies.

If you follow the threads, you see the 4 companies working together better and better,
something I have never seen in my 31 active Internet years.
And of course they are slow... they all have to agree. I can't even agree with my wife on everything.

So/but the "V1 Web Components" standard (V0 was a Google party, not a standard) will only get better

And yes, Facebook "owns" 60-70% of the Front-End market, and doesn't even mention Web Component technology in the last React release.

  • Once AltaVista owned the search market

  • Once IE had 90% of the Browser market

  • Once Flash was installed on every device

React is the new Cobol

peerreynders

The WHATWG is By-invitation-only
And to date, Apple, Google, Mozilla and Microsoft haven't invited Facebook yet.

Apple (WebKit), Google (Blink), Mozilla (Gecko), Microsoft (Trident/EdgeHTML), Facebook (?).

  • Given this situation what does Facebook have to contribute?
  • Facebook has no interest in browser-engines. As far as they're concerned the Web could burn down tomorrow and they would happily continue making native clients for every platform under the sun.

So/but the "V1 Web Components" standard ... will only get better.

I think the point you are trying to make is that Web Components are a standard while React is not.

However not all standards are adopted by the industry as a whole.

Example:

Do not use this application cache feature! It is in the process of being removed from the Web platform - Using the application cache

AppCache: Douchebag

Also from the article's author: Maybe Web Components are not the Future?

React merely has a visible, vocal support base - which makes its component model seem popular.

And yes, Facebook "owns" 60-70% of the Front-End market.

With reference to what 100%?

React is used by 2.5% of all the websites whose JavaScript library we know. This is 2.0% of all websites.

Usage statistics and market share of React for websites

React Usage Statistics
  Top 1m: 10.26%
  Top 100k: 22.42%
  Top 10k: 39.25%

jQuery Usage Statistics
  Top 1m: 75.42%
  Top 100k: 80.9%
  Top 10k: 79.62%

PHP Usage Statistics
  Top 1m: 44.06%
  Top 100k: 49.22%
  Top 10k: 52.91%

React is the new Cobol