Sometimes there are problems that have no universally good solutions. There is some tradeoff to be made, some perspective that can't be satisfied. Sometimes it isn't even clear whether any of the options are preferable to the others.
What we ended up with in the log (count, doubleCount, el.textContent) was:
React   0 0 0
Vue     1 2 0
Svelte  1 0 0
Solid   1 2 2
I first posted this a year and a half ago, and it's been haunting me ever since. I keep revisiting it, in my dreams and in my day job. When working on Marko 6 we couldn't make a decision, so we settled on throwing an error if one tried to read a value already updated in that cycle, until we could make up our minds.
So how can all these JavaScript frameworks have different behavior? Well, there is a good argument for each. I had people reply to that tweet about how their framework did the only sensible thing. And they are all right, and perhaps all wrong.
Batched Consistency
Let's start with React. When you update state, it holds off committing those changes until the next render cycle. The benefit here is that React is always consistent: count, doubleCount, and the DOM are always observed to be in sync.
Consistency in frameworks is important. It builds trust. You know that when you interact with the view, what you see is what you get. If the user sees something but the state of the app is different, that can lead to obscure bugs, because user-driven actions can cause unexpected results while appearing intentional. Sometimes with serious consequences (financial or otherwise).
This extends to development. If a developer can be sure everything they are dealing with is in sync they can trust their code will run as expected.
However, what this means is the often painful:
// updating state in React
count === 0; // true
setCount(count + 1);
console.log(count, doubleCount, el.textContent); // 0, 0, 0
Updating state does not take effect right away. If you are doing a sequence of changes and passing values around, you will have the old value. On the positive side, this pushes you to do all your state changes together, which can be better for performance, but you need to be conscious that if you set the same state multiple times, the last set wins.
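To make the "last set wins" point concrete, here is a minimal sketch (a hypothetical click handler, not code from the article) of setting the same state twice with plain values:
// inside a hypothetical click handler, count === 0
setCount(count + 1); // queues an update to 0 + 1
setCount(count + 5); // reads the same stale count, queues 0 + 5
// after the next render, count === 5, not 6: the second set overwrote the first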
React's batched update consistency model is always the safe bet. No one is thrilled about it, but it is a really good default.
Reactive Consistency
Even if "correct", batch consistency often leads to its confusion and bugs because of the expectation of values updating. So doing the opposite is what Solid does and by the next line, everything is updated.
// updating state in Solid
count() === 0; // true
setCount(count() + 1);
console.log(count(), doubleCount(), el.textContent); // 1, 2, 2
This is perfectly consistent and it fits expectations but as you can imagine there must be a tradeoff.
If you make multiple changes you will trigger multiple re-renders and do a bunch of work. Even though this is a sensible default in a framework like Solid, which doesn't re-render components and only updates what changes, sometimes this can still cause unnecessary work. However, independent changes have no performance overhead. But like React, it might push you to apply all your changes at once.
Solid's consistency model also pushes you into being aware that there is a batching mechanism, as it is important for optimization.
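As a rough illustration of why that awareness matters (a sketch of my own, not from the article), Solid's opt-in batch helper groups writes so downstream work runs once:
import { createSignal, createEffect, batch } from "solid-js";

const [first, setFirst] = createSignal("John");
const [last, setLast] = createSignal("Smith");

createEffect(() => console.log(`${first()} ${last()}`)); // "John Smith"

// two independent sets: the effect runs after each one ("Jane Smith", then "Jane Doe")
setFirst("Jane");
setLast("Doe");

// wrapped in batch, both writes apply and the effect runs only once ("Janet Deere")
batch(() => {
  setFirst("Janet");
  setLast("Deere");
});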
Reactive Batching
The author of the $mol framework makes a pretty good argument to defend his framework and Vue's position. In Vue, things update reactively but are scheduled like React. However, they apply the direct state changes immediately.
// updating state in Vue
count.value === 0; // true
count.value++;
console.log(count.value, doubleCount.value, el.textContent) // 1, 2, 0
The trick these libraries use is that they mark values as stale and schedule the work, but don't run the updates immediately unless you read from a derived value. Only then will they eagerly execute it instead of waiting for when it would usually be scheduled. This has the benefit of being only as performant as it needs to be while pushing off the heaviest work, like the rendering side effects.
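A small sketch of that mark-stale-then-evaluate-on-read behavior, assuming Vue 3's ref and computed (my own example, not the article's):
import { ref, computed } from "vue";

const count = ref(0);
const doubleCount = computed(() => count.value * 2);

count.value++;                  // marks doubleCount stale and schedules effects
console.log(doubleCount.value); // 2: reading the stale computed re-runs it eagerly
// template/DOM effects still wait for the scheduled flush to run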
This is the first approach we've talked about that isn't consistent. You have partial consistency for the pure calculations, but it isn't immediately reflected in the DOM. This has the benefit of appearing consistent for most things. However, if downstream side effects ever update state, those changes are also not applied until afterwards, even if read.
Vue's batched reactivity is probably the most effective at making this all a "non-thing", but it might be the least predictable.
Natural Execution
In the company of the others, Svelte's execution might not seem that desirable. It isn't consistent, and it does not attempt to appear to be. It is also sort of perfect for Svelte.
// updating state in Svelte
let count = 0;
count++;
console.log(count, doubleCount, el.textContent); // 1, 0, 0
In Svelte everything looks like normal JavaScript. Why would you ever expect the derived doubleCount or the DOM to be updated on the next line when you set a variable? It makes no sense.
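For reference, a sketch of how doubleCount might be declared in a Svelte 3/4 component (my own example, not the article's):
<script>
  let count = 0;
  $: doubleCount = count * 2; // re-runs during the component's update, not on the next line

  function increment() {
    count++;
    console.log(count, doubleCount); // 1, 0: the reactive statement hasn't re-run yet
  }
</script>

<button on:click={increment}>{count} / {doubleCount}</button>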
Like Vue, people won't think about this much. However, they are much more likely to hit that inconsistency with derived data sooner. Initially this requires no explanation to get up and running, making this model feel the most natural to those with no preconceptions. But is it what we are really looking for?
Svelte doesn't even try to be consistent. This might be a blessing and a curse.
Choosing the Best Model
This is the point of the article where I'm supposed to say the right answer is "it depends" and leave you all with some profound thoughts. But that's not where I'm at.
There is a mutability vs immutability argument behind all of these. Picture grabbing an item at a certain index in an array and moving it to the end of the array.
const array = ["a", "c", "b"];
const index = 1;
// immutable
const newArray = [
...array.slice(0, index),
...array.slice(index + 1),
array[index]
];
// or, mutable
const [item] = array.splice(index, 1);
array.push(item);
In either case, one would expect to end up with ["a", "b", "c"].
As you can see, the immutable change can be applied as a single assignment to newArray. However, with our mutable example, we change the actual array with 2 operations.
If the state did not update in between our operations, like React (maybe picture something like Vue's proxy on top), we'd end up with ["a", "c", "b", "c"], even though we would get "c" as our item from the splice. The second array operation (the push) would effectively overwrite the first, so the item would never get removed from the list.
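To see how the removal gets lost, here is a contrived sketch (not any framework's real implementation; all names are made up) of a store where writes are computed against the same stale snapshot and the last write wins:
// Hypothetical deferred store: reads return the committed value, writes are queued,
// and a later write simply replaces an earlier one until flush() commits it.
function createDeferredStore(initial) {
  let committed = initial;
  let pending = null;
  return {
    read: () => committed,
    write: (next) => { pending = next; },
    flush: () => { if (pending) { committed = pending; pending = null; } },
  };
}

const store = createDeferredStore(["a", "c", "b"]);
const index = 1;

// "splice": computed from the stale snapshot ["a", "c", "b"]
const snap1 = store.read();
store.write([...snap1.slice(0, index), ...snap1.slice(index + 1)]); // ["a", "b"]

// "push": the snapshot hasn't changed, so this write overwrites the removal
const snap2 = store.read(); // still ["a", "c", "b"]
store.write([...snap2, snap2[index]]); // ["a", "c", "b", "c"]

store.flush();
console.log(store.read()); // ["a", "c", "b", "c"]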
In addition, reality is a little bit more complicated than these examples. I intentionally chose an event handler because it is outside of the typical update/render flow; inside it you will find different behavior.
Using React's function setters gives up-to-date values:
// count === 0
setCount(count => count + 1);
setCount(count => count + 1); // results in 2 eventually
console.log(count); // still 0
Vue can mimic Svelte's behavior with effects:
const count = ref(0);
const doubleCount = ref(0);
// deferred until after the current batch flushes
watchEffect(() => doubleCount.value = count.value * 2);
count.value++; // the effect ran once at setup (doubleCount is 0); its re-run is deferred
console.log(count.value, doubleCount.value, el.textContent) // 1, 0, 0
Solid's updates work like Vue's default while also propagating any internal changes from the reactive system. This is necessary to prevent infinite loops. However, its explicit batching and Transitions API leave things in the past like React.
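Based on the description above (Solid 1.4's explicit batching, which leaves reads in the past), a sketch of what that looks like; treat the exact logged values as following from that description:
import { createSignal, createMemo, batch } from "solid-js";

const [count, setCount] = createSignal(0);
const doubleCount = createMemo(() => count() * 2);

batch(() => {
  setCount(count() + 1);
  console.log(count(), doubleCount()); // 0, 0: reads inside the batch stay in the past
});
console.log(count(), doubleCount());   // 1, 2 once the batch has flushed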
So... ?
So honestly, this all sucks. Enough that I feel the need to be aware of batching behavior. And with that awareness, I'm compelled to offer a consistent default, as it feels like the sanest thing to do.
For many of you this is probably unsurprising. I'm the author of SolidJS, so why wouldn't I say that? Solid's eager updates work well with its rendering model and are complemented by an opt-in for batching.
But the real revelation to me was just how much my opinion has changed over the past couple of years. When I first saw this problem designing Marko 6, I was all in on Vue's batched reactivity. For a compiled syntax, an explicit opt-in felt out of place, and mutation not updating is awkward. However, I definitely would have put Svelte's approach as my least favorite.
But now I'm not nearly as certain. Working on Solid, which embraces explicit syntax, I have all the tools at my disposal. If batching is opt-in, and if I'm going to give up consistency for "intuitive behavior" (and supporting mutation), I at least want predictability. And in that light Svelte's too-simple model makes a lot of sense.
So coming into Solid 1.5 we are evaluating a new "natural" batching model to complement our eager consistent defaults (and our in-the-past batching of Transitions). I don't know if there is a lesson here. I can't fault anyone for coming to a different conclusion. These tricky problems are why I love this work so much.
The skeptic might point out that Solid would then have every update model in it, and they'd be kind of right. I don't know. Can't beat them, join them?
If you have opinions on this and want to be part of the discussion, come join the SolidJS Discord where this topic is currently being discussed.
Top comments (41)
Really?
I'm pretty sure all four are internally consistent.
When it comes to solution approaches there is no universal model to consistency ("All models are wrong, but some are useful", remember).
What is important is that once you are equipped with the mental model appropriate for the framework/library, everything is predictable.
The Principle of Least Astonishment (POLA) isn't an unassailable law; it fails to acknowledge that an individual's level of surprise is directly correlated to their past individual experience. So what level of "majority experience" is required for POLA to be valid?
[Douglas Crockford, JavaScript - The Good Parts, 2008; p.2]
For decades JavaScript has aggravated developers worldwide because it didn't meet the expectations set by their first or favourite language. That didn't stop JavaScript from "eating the world". If JavaScript surprises you then your "mental model of JS" is faulty.
Similarly when switching between frameworks/libraries it should be expected that adjustments to one's mental models have to be made. (OK, that part sucks but that's the cost of doing business.) Ideally any tool should adopt the internal consistency model that allows it to operate the most efficiently and effectively as an end product (i.e. "in production").
That said every effort should be made in the documentation for that tool to lay bare the optimal mental model so that developers don't have to waste time "being surprised" during the on boarding process.
There is a reality that all abstractions/models are imperfect, but I dislike being reminded of it in such an obvious way. I never feel I can choose one, yet I think the most important part is choosing. And it's one of those things where certain options feel more correct to me, but I dislike the implications. And so I end up just disliking all the options.
The truth is this example is such a small piece of the overall puzzle that it doesn't matter much. But it matters just enough when you have to explain to someone why you made a specific choice.
The "why" is important and should be clearly documented so that there are no mental gaps in the puzzle.
Also more and more personalities like Miško Hevery and Stefan Judis are expressing concern about the current hyper-focus on DX.
Rust gave us the notion of "Zero Cost Abstraction". Perhaps we need more "Zero Cost DX".
I guess I have some thoughts on that. I think DX is overvalued, but I think the focus on it from tool builders isn't. Honestly it is why all of this stuff exists. Even Qwik. Otherwise go vanilla over some PHP or whatever. I generally say UX over DX, but the truth is there is always some way to get better performance, some lower level you can go; the challenge is making it accessible.
Even if the focus has gone too far, it's a very hard place to step back from. It's almost table stakes to have that DX story. Unless we hit something critical, I don't see that changing. So it won't, for now. More solutions continue to get more complicated. It's concerning to me.
I think there are steps in the right direction, though. I haven't used Solid, but IMO Svelte (despite being a variation on the theme of complex local tool stacks and compiling to rather than authoring browser compatible code) is a few degrees different in a good way, especially if you used Rollup instead of Webpack (or use SvelteKit, now). The design and results seem to make fewer extreme tradeoffs between UX and DX. There's definitely still a big ol' industry wide problem, but this next generation of things slowly gaining prominence give me some hope. 'Course, I've said that before and it's still, um, the way it is, mostly.
I'm not allergic to build steps obviously, and I think they achieve things otherwise basically impossible. That new tooling might hide the complexity but it doesn't change the nature of it.
My concern about complexity is more on the SSR/Hydration/Compilation side. It is exceedingly hard to achieve what we want without more advanced compilers. Solid compiles, but only JSX, just like React and most historical JS frameworks. There is something nice there from a portability standpoint, and in that there is a clear division between JavaScript and your DSL. You can pick it up, use as much or as little as you want, and slice it however you want.
What seems to be coming is that we will need compilation to re-order end user code. This troubles me. Svelte, React Forget, Marko, Qwik.. maybe soon Solid.. I've been trying to see if there is another path without handing the keys over. But where I am right now it looks unlikely. So I'm going to attack it piecewise.
So it isn't about build step existing. But figuring out how we can make sense of the code we write. Is it still JavaScript? Maybe the answer is it shouldn't be and we need all encompassing DSLs but I have to ask the question.
On the positive side, performance is getting some attention again, which is good. But it's two sides of the same coin.
「If JavaScript surprises you then your "mental model of JS" is faulty.」
LOL
I would call Vue's lazy computation approach "invisible inconsistency". What's inside doesn't really matter, as long as everything looks consistent to the outside observer.
When you access the reactive properties programmatically, the desired part of the subgraph is updated on the fly.
Yes, if you access the DOM directly, you get inconsistency in Vue. But that's a Vue-specific problem that Vue can easily solve. For example, in $mol_view each component has two methods: dom_node, which returns the current DOM element as is, and dom_tree, which returns the same DOM element but ensures its whole subtree is up to date. The user will not see an inconsistent state either, because the whole reactive state tree will already have been updated automatically by the time it is rendered. In $mol_wire we go one step further and don't do the full update at the end of the current event handler, but postpone it until the next animation frame. Thus recalculations caused by different events within a short window (about 16ms), whose result the user would never see, are not performed unnecessarily.
The Solid model takes exactly the opposite approach, resulting in low default performance and consequently the need for extra gestures to optimize. But the worst thing is not even that, but that exceptions at minimum break consistency, and at worst break the application (although the second seems to be just a critical bug):
My biggest concern with it is the example where I show Vue imitating Svelte. I don't think things that should be derived should be modelled that way, but when people inevitably do that out of laziness it's awkward, because part of it has updated and other parts haven't. With React or Svelte you at least know that nothing downstream has run.
Yeah that seems like a bug if it never picks up again. I think I know when I broke that. Thanks for the heads up.
Out of curiosity though, what do you expect to happen around the first error case? In Vue or Solid or any reactive library, count() is set by that point and doubleCount couldn't be calculated. Should it throw on read?
For this reason, no separate effects are used with $mol_wire. Instead, the same invariants that describe the usual reactive dependencies are used, except that none of them can change anything on the side besides returning a value. This gives a stable, predictable and manageable moment for applying side effects.
Sandbox
$mol_wire and MobX have similar exception behavior: they are transparent to exceptions, as if there were no intermediate memoization and the computation started from scratch every time it is accessed. This gives a simpler mental model.
Sandbox
Thanks this is good information. I appreciate you taking the time to explain.
Dude, calm down with your $mol, Habr is fed up with your mess already. When you stop imposing your craft on everyone, maybe someone will find interest in it, but for now you are just an annoying hypocrite. I checked your $mol: bad techniques, spaghetti code, and it's deffo not even comparable to the big three libraries/frameworks. Stop it, please.
Could you describe your complaints about $mol in more detail so that I can correct them? Where did you find the spaghetti there? Which practices seem bad to you and why?
I think you are comparing pears to oranges or apples here ... the Solid code does something the React one literally cannot do.
And uland would do the same React does, without needing any tooling around, but that's absolutely expected.
This whole thread looks to me like this question:
In summary, I am not sure this post was super needed, fair, or relevant, as much as I appreciate all the work done behind Solid.js.
P.S. shall we talk about that thread-blocking Solid.js behavior instead?
edit: if anything, I think this is the only inconsistency React (and uhooks in uland) might need to solve:
Within the same execution stack, calling setCount(count + 1) multiple times will result in having count + 1 live as the next state, but using a callback might indeed produce shenanigans because the previous count might have already been updated ... however, to be honest, I've never seen a useState update function used more than once, or in multiple places, and within a callback ... but fair enough, this is a consistency bug, imho, not a feature, if that's the result.
I mean React is very consistent. That wasn't really the point. It's that they are all different. That the comparison can be made because the behavior isn't necessarily a foregone conclusion for all libraries. The thing is I think about this a lot because it really is a choice. Maybe not for React, but it is for others.
And this isn't about tooling or build or anything. As you said you can do this behavior without any of that, and that's the case with Solid's HyperScript or Tagged Template Literal versions.
Someone wise once said once you need to explain something 3 times you should write it down. I've explained this one way more than that. And it's been a critical decision in a number of the frameworks I've worked on/with. So I think it is very relevant. And if you felt it singled out a single solution unfairly that wasn't the intention. This is very much a behavior/design expectation thing. With modern DSLs the expectations of what behavior should be is not very cut and dry and I think the exercise is valuable.
No idea what you are talking about. We aren't doing anything specifically awkward that I've seen, and we have means to schedule things, even concurrent rendering for those into that sort of thing.
besides me editing the previous post to add another (fair) point ... let me answer the rest:
all I meant to say is that if you have const [value] = thingy, it's not a design decision that value will be the same for the entirety of its existence in that scope; it's simply how ECMAScript defines constants in a scope.
Saying that a constant is different from a property access (Vue) or a function invocation where "who knows what it's going to return next" (Solid) is just how JS works ... nothing to do with frameworks, hence my question: what were you trying to achieve here? If you wanted to explain differences, you did, but it should never have been a competition, imho, as these are the basics of JavaScript, not frameworks.
If calling setCount(count() + 1) updates the DOM element's textContent, it means you are side-effecting synchronously in the DOM world, blocking the main thread ... that's what I mean.
The reason uland and React and others return 0 as textContent is that they don't synchronously update the DOM when the state changes; they schedule it, like you said before.
Now, if Solid via tools does some magic vDOM thingy there, OK, I didn't know, but if Solid.js applies state changes synchronously to the DOM it's blocking, and blocking is not cool.
More explicitly, in React:
will result in a single update with value 3 next time ... with your examples instead, it looks like with Solid the same operation via count() + x will result in 3 DOM updates, no scheduling ... that's slower, and a pattern you chose, not the best one for sure, likely the worst out of the 4 libraries or frameworks you decided to compare here.
It is a design decision for me. We have the ability to choose any API we want when designing things. I've hit this over and over. Change it to let if you feel more comfortable, but it is irrelevant. This is an over-simplified example to illustrate change-management behavior.
It isn't a competition because, as the article alludes to at the end, I'm actually back here again trying to see if I should revise my previous design decisions. I may have opinions, but this is not saying which framework is better. There is a tradeoff to all of these, and that's life.
It isn't a VDOM thing either. Inferno works like Solid in this example. It is very much a choice.
On the topic of synchronous blocking I want to challenge that a bit. I mean, all DOM updates happen on the main thread, so this has to be a matter of how much work you do. As I said, it isn't that batching/scheduling behavior doesn't exist in Solid; it's more that there is a question here. Synchronously updating the DOM could be fine if you do basically no work but update the DOM. If someone wrote:
would you get on their case for synchronously updating the DOM? Solid is just a system of events that do stuff. It runs once, wires everything up, and then those events go and fire. You can schedule those events, etc., but that's it. At some point something blocks someone, because the work needs to be done. Solid's fine-grained approach just tries to keep that to a minimum, always.
That's the point: you are allowed to have your opinion. I'm not going to tell you that ___ choice is worse for uland or whatnot. But you just did. Thank you for your input.
that's what every framework and library does at some point in time ... but you do it without scheduling updates, and that's an issue because you might involuntarily trigger repaints and reflows while executing JS ... Solid.js here is as bad and blocking as accessing element.offsetWidth could be, because a state change can cause other DOM parts to move right away, so on a complex stack you'll have 10+ blocking things instead of maybe 1 or 2, because style changes are scheduled for the next paint tick anyway, unless you force a re-calc there, and Solid.js will never be able to batch (the topic of this post indeed) all changes at once.
You made a choice to use callbacks and you have pros and cons, but it'd be great if you could recognize the cons ... and blocking the DOM on each state change is a HUGE con to me, and I didn't know, so I was surprised you wrote this as if Solid.js was cooler (I know you didn't mean it ... but you know, you kinda tried hard to justify that 1 2 2, and I tell you, that 1 2 2 can be a real-world performance issue in scenarios where textContent is not the only thing involved in the state change).
Feel free to ignore my comment, or maybe rethink scheduling in Solid.js, because this thing I've learned today is pretty bad news to me, something I believe, if properly tested in js-frameworks, would push it down the slope with extreme ease.
P.S. custom element attribute changes, as well as connection and disconnection events, are synchronous and blocking too ... if a state changes an attribute and you have a reactive chain in there, then for a single state change that maybe involves the whole stack you'll have a clunky experience due to blocking and undesired things moving around.
Maybe for 95% of sites out there this is not a real concern, but if everyone else went for an approach that doesn't block directly when a state gets updated, there's probably a reason behind it ... right?
Gotta disagree there. You're effectively saying that given a choice between correctness and performance you should choose performance, without even evaluating whether the correct solution's performance is sufficient.
JS is a wildly mutable language. The contexts in which you might be accessing state are uncountable. Solid preserves consistency of the dataflow graph when accessed from all contexts and is consistent with JS mutation semantics. Therefore its approach is the least surprising and most flexible, able to slot in wherever you need it. It is clearly the right default, and should only be bypassed when profiling actually demonstrates a problem.
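A tiny sketch of what "consistent from all contexts" means in practice (my own example, assuming Solid signals):
import { createSignal } from "solid-js";

const [count, setCount] = createSignal(0);

setTimeout(() => console.log("later:", count()), 0); // reads whatever count is by then: 5

setCount(5);
console.log("now:", count()); // 5 immediately; any context reading count() sees the same value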
I chose correctness, like I've already mentioned ... my libraries chose correctness, which is the safest bet, as the OP wrote.
The first version of uhooks was synchronous, and it was profiled, benchmarked and tested to the point where it became like React, so that all the states changed in an entire call stack would side-effect with the final state once (as an effect). That was both correct and more predictable, and it also enables fiber/suspense-like patterns. But again, effecting synchronously plays badly with Custom Elements and some other HTML or SVG attributes. Also, I never raise or expose concerns by accident; too many years on reactive patterns, and yes, I made my choice there.
Not to me, and apparently not for React either, where hooks come from. Maybe Solid should stop comparing its mechanism with hooks, as it's clearly too different there, for better or worse.
Svelte can have consistency similar to Solid's, when (and where) you need it:
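The embedded example isn't reproduced above, but presumably it looks something like this sketch using Svelte's tick (my reconstruction; treat the exact code as an assumption):
<script>
  import { tick } from "svelte";

  let count = 0;
  let el;
  $: doubleCount = count * 2;

  async function increment() {
    count++;
    await tick(); // resolves once pending state changes have been applied to the DOM
    console.log(count, doubleCount, el.textContent); // 1, 2, "2", like Solid's defaults
  }
</script>

<p bind:this={el}>{doubleCount}</p>
<button on:click={increment}>+1</button>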
Yeah. I think the key part here (and if I had added this example it would have reinforced this more) is that all of these do lead to needing to be aware of this sort of behavior. React 18, I believe, also has a way to flush changes early, sort of the opposite of this. Instead of waiting for it, it's like forcing it. But with similar results.
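For reference, a sketch of that early-flush escape hatch, assuming React 18's flushSync from react-dom (my example, reusing setCount and el from the article's running example):
import { flushSync } from "react-dom";

function handleClick() {
  flushSync(() => {
    setCount((c) => c + 1); // the update is committed and re-rendered before flushSync returns
  });
  console.log(el.textContent); // already reflects the new count
}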
I think this detail is pretty important for this article, since the tutorial teaches you what tick does and why it's there.
Leaving it out and calling Svelte inconsistent feels a bit disingenuous to me.
Edit: Otherwise, I love the article! Thanks for writing it, Ryan!
All the frameworks have a way to bypass their default behavior (React has flush, Solid batch, Vue and Svelte tick), but that isn't distinguishing, because ultimately we all end up having to choose a default. I didn't show any favoritism in that sense, as I didn't focus on those as part of their descriptions and instead focused on the behavior.
I don't have any malice in calling Svelte's default approach inconsistent, and I even show some admiration in the conclusion.
Given this comment section, I understand the desire to come to the defense of your frameworks. I wasn't taking potshots, but noting this wasn't a simple question or answer. So I choose to take any comments about the sincerity of my character in that light.
That's totally fair, and I wasn't aware all frameworks had a tick-like util.
I still feel it's consistent in my mental model of Svelte: the variable changes because we assigned to it (like JS), but reactivity (i.e. $: and template bindings) hasn't kicked in since the last round of invalidation. Use tick to force it.
I guess I don't really get why it's inconsistent for you.
This is a bit late, but I'm a bit surprised to not see the words "database" or "transaction" appear in a discussion about state consistency / reactivity. One could imagine implementing reactive components basically as SQL-style (materialized) views, and implementing state management with SQL-like transactions -- e.g. with ACID-style guarantees. I'm not the first person to think of implementing UIs in this way, see for example
cs.columbia.edu/~ewu/files/papers/...
I'm not a web developer, so I apologize if this is all old news!
Is console.log the right metric to compare these frameworks with, though?
The user doesn't see it, and it isn't shown in the UI, so does the entire concept of "which changes first" or "inconsistencies" lean more toward when side effects (like printing to the console) occur rather than consistency in the UI?
This might be a bit too much of a "creating a problem to satisfy a solution" situation.
console.log is the easiest way to show where it can be observed by a developer. There is probably only the smallest window here for things to get out of sync between the state and the UI, but for React, for example, this consistency is very important. Others care to varying degrees.
The interesting part I think is that these all have very similar syntax and they work a little differently. I don't think this trivial example is as important as what it shines a light on in terms of mental model and values etc...
In this case, console.log is merely used to show the behavior. In real-life applications, the output of this console.log example is what would otherwise go into your side-effect calls, which may lead to unpredicted behavior.
I converted a React app (a desktop-like UI with apps and folders) with heavy use of drag and drop to Svelte, and I ended up with corrupted, wrong state everywhere, so I stayed with React. No idea if it's just me or if it's really related to something mentioned here, but the way state behaves in complex situations in Svelte just feels unsafe.
I would point out that a lot of the surprise comes down to the semantics of the function names.
I too have seen bugs come from failure to understand that React's changes are scheduled later. I submit that the reason React's result is so surprising for many devs is that the name setCount (and it being widely documented by React and others as a setter function) strongly suggests that it will, you know, set the value of count, and do it now: not after this render call returns, not a few milliseconds after that, but immediately. Otherwise how can the code have a predictable execution context? And if it doesn't set count now, then why is it referred to as a setter function, named setCount? The naming semantics are bad.
? The naming semantics are bad.Svelte similarly fails on semantics and perhaps false promises. It sets
count
immediately, but thendoubleCount
— which is supposedly is derived fromcount
— is not recomputed immediately and is therefore inconsistent its own name —doubleCount
is no longer double the value ofcount
.Variable and function names are important for understanding code and communication design contracts. Any time a function or variable doesn't have the behavior or value indicated by its name is a sure sign that something is amiss, and bugs are likely to follow.
So I would say that of the above approaches, Solid is doing it right. Yes, that may sometimes involve some redundant calculations and a little less efficiency. But if the second function returned by createSignal is going to be called a setter function, with a convention of being named setSomething, then it had better damn well set that something immediately, each and every time it is called. There are too many decades of precedent in this matter for a setter function to do anything else.
Wanna have a batch mode for maximal efficiency? Fine, but make it a new variant of signal, e.g. createDeferredSignal, and change the docs for that to suggest that the second function returned by it should be called updateSomething, to indicate that such updates will not be immediate. Then it should be sufficiently clear that recomputation of any memos derived from that deferred signal will not be immediate either.
Imo the most expected flow is the one SolidJS has. You write sync code, you expect sync results. But considering the harsh reality of the slow DOM, and the fact that a model-first concept is the most comfy one, one can sacrifice DOM update consistency (as no one really cares about what the DOM displays at all, it's data-only, remember), and then Vue's approach seems the most predictable: data-first, DOM performance-oriented.
I'm proposing an alternative: what if you just ban eager read