Haskell has a particularly elegant implementation of the quicksort algorithm:
qs :: (Ord a) => [a] -> [a]
qs [] = []
qs (x:xs) =
  let smallerSorted = qs [a | a <- xs, a <= x]
      biggerSorted  = qs [a | a <- xs, a > x]
  in  smallerSorted ++ [x] ++ biggerSorted
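For readers following along in JavaScript, a rough translation in the same spirit (a sketch only, not necessarily the exact code from the post) might look like this:

// Take the first element as the pivot, recursively sort the rest.
// Readable, but allocates heavily and breaks down if the array itself
// contains undefined values (a point discussed in the comments below).
const qs = ([x, ...xs]) =>
  x === undefined
    ? []
    : [...qs(xs.filter(a => a <= x)), x, ...qs(xs.filter(a => a > x))];

console.log(qs([3, 1, 2])); // [1, 2, 3]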
JS and current devices are slow enough.
Even methods like Array.filter should be used sparingly, since they eagerly allocate a new array on every single call. Unfortunately, iterators and generators aren't optimized well enough (yet?) in JS runtimes either.

The Haskell definition is indeed elegant, but it just doesn't scale to JS, at least for the time being.
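To illustrate the allocation point (an illustrative snippet, not from the article): each method call in a chain builds a new array, while a single loop only allocates the result.

const data = [1, 2, 3, 4, 5];

// filter allocates an intermediate array, map allocates another for the result:
const chained = data.filter(x => x % 2).map(x => x * 10);

// One pass, one result array:
const manual = [];
for (const x of data) {
  if (x % 2) manual.push(x * 10);
}
// Both produce [10, 30, 50]; the difference is the allocations along the way.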
I agree with your sentiment, and would not recommend actually using this code in production. It definitely won't be quick! JavaScript performance is less of a concern than it used to be. As for Array.filter, the tradeoff has to be made between saving machine performance (both speed and space) and human performance (clarity and maintainability).

I appreciate your readability concerns, but that article looks rather insane to me. He starts by benchmarking absolutely bare, asm.js-level code:

Actually, I'm astonished that it didn't optimize to O(1)/constant time/no-op in both languages, since the output is unobservable. Was the C compiled without optimization? 🤔
He then proceeds to rewrite

(again, spartan asm.js-level code, though both of the things mentioned in the comments actually optimize well in the JIT)

into

instead of

which in the future can be written as follows to avoid the intermediate array:

introducing a whole-ass mutable sorting step instead of a min reduce, fundamentally transforming the algorithm from O(n) to O(n log n), on TOP of the temporary memory allocation and a slower constant factor.
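To make the complexity point concrete, a sketch of the kind of rewrite being described (illustrative code, not the benchmarked article's actual snippets):

const arr = [5, 3, 8, 1];

// O(n), no extra allocation: a single pass over the array.
const minByReduce = arr.reduce((m, x) => (x < m ? x : m), Infinity);

// O(n log n) plus a temporary copy: sort the whole array just to take the first element.
const minBySort = [...arr].sort((a, b) => a - b)[0];

console.log(minByReduce, minBySort); // 1 1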
Up next, he has delete user.password vs user.password = undefined. Oh, it takes 1 billion iterations to show a difference between the two? Try benchmarking the whole application that contains this line. Look at memory use and battery power consumption.
How about the fact that the deletion affects the speed of all of your code that handles user-type objects, by turning them from template objects into dynamic dictionaries, or at best duplicating the monomorphised JITted code in memory, meaning less of it fits into the CPU cache?
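A sketch of the two patterns being compared (names are illustrative; the exact behaviour varies by engine, but the effect on object shapes is the mechanism in question):

const user = { name: "Ada", password: "hunter2" };

// Keeps the object's shape (hidden class): the property still exists, its value is undefined.
user.password = undefined;

const other = { name: "Bob", password: "hunter2" };

// Removes the property entirely: the object's shape changes, which can push it
// into slower dictionary mode and deoptimize call sites that only ever saw one shape.
delete other.password;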
Would love to see a better real-world benchmark! If you can find a codebase where overuse of Array.filter has resulted in an application that is unusable, I would love to see it!

And last but not least, see how competitive WASM is with JS, even when supposedly having to run through a JS glue layer, in JS's historical home of the browser.
And this is not about fancy, ergonomic, end-user-written business logic; this is about web frameworks, worked on for years to improve performance at any cost. You know they aren't using delete or forEach. And yet just look at those memory allocation numbers.

How fast JS runtimes are, given the language specification, is a huge achievement and nothing short of a miracle. But when you regard JS as the delivery target, because that's what runs in all browsers, I don't think it's ever right to completely forget how to write code that makes the best use of those efforts.
I'd love to see that too. I'd tongue-in-cheek say facebook.com, but that's rendered unusable (performance-wise) by quite a bit more than Array methods :) (They are used there, and not transpiled, though, which I found rather surprising.)
Haha, truthfully, I've been thinking the same thing about Facebook. There are times I can barely get it to fully render. I think their problem (and the problem with React in general) is that literally everything is replicated in the virtual DOM. I would love to see a JavaScript-to-WebAssembly compiler that turns things like Array.filter into faster solutions. Paired with a UI library that does the virtual DOM in web workers, it could give us the best of both worlds (declarative code that compiles to optimized low-level constructs).
I'm just assmad that Array.filter et al. got added to the language specification with all their awkwardness, like creating an array and passing so, so many arguments to the callable. In addition to thwarting attempts at .reduce(Math.max) and .forEach(console.log), it causes an arity mismatch, which once again causes a minuscule deoptimization. Because, honestly, who writes .reduce((a, x, _, _) => ...? Not many :v
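A quick illustration of the arity problem (illustrative snippet):

// reduce calls the callback with (accumulator, value, index, array),
// so Math.max receives the extra arguments too:
[1, 2, 3].reduce(Math.max);                 // NaN – the array argument coerces to NaN
[1, 2, 3].reduce((a, b) => Math.max(a, b)); // 3

// forEach calls the callback with (value, index, array):
["a", "b"].forEach(console.log);            // logs "a 0 [ 'a', 'b' ]" then "b 1 [ 'a', 'b' ]"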
Virtual DOM seems to be fundamentally too expensive for the performance people want, so a different model, like reactive signals, that bypasses it entirely is needed. I shitposted about it here yesterday.

Writing the business logic (including bringing in performant libraries that you use in the backend or native application) in the same language as the UI helpers and compiling it to WASM makes sense to me. The speed of WASM DOM modification is sure to increase somewhat in the future, but currently the critical downside for me is first-time startup performance. Sure, caching compiled modules is fast and efficient, but is shipping a raw HTML loading/landing/login page while the WASM downloads and compiles really enough?

Perhaps the situation will improve as people make sense of dynamically linking multiple pieces of WASM together, with the granularity and enthusiasm they show for JS bundle code splitting?
Just a question, I may be missing something.
You said
Yet your code is:
This will return an empty array when x is not undefined, and do more sorting if it is undefined.
Indeed, I ran the following code, and it logged []. To fix, simply change the !== undefined to === undefined.

You could also swap the clauses in the ternary expression instead.
I tested this and it works.
That will filter out any undefineds, or return an empty array if the first element is undefined. null gets sorted to the beginning of the array.

Update:
I played around with it and came up with this:
This allows for having undefined at the beginning of the array. Not as elegant, but required due to x being undefined if an empty array is passed in (which is when the quicksort recursion ends and starts "undoing").

I did a few runs in JSBen.ch using my Pi 4 4GB.
JSBen.ch test.
JSBench.me was less favorable.
I get this:
JSBench.me test
Note: see updated benchmark code in my later comments.
Or use nested ternary operators:
I tried to figure out something like that, but got stuck.
Much better, thanks!
(Now to rerun the benchmarks...)
New benchmark results:
Run 1:
Run 2:
New JSBen.ch test
P.S. I dunno why I've gotten so stuck on this benchmarking :).
Cool! Thanks for adding those benchmarks! I wonder how it compares to this version:
I did it with just .sort() (labeled as native), but I'm not sure how that compares. It does affect the sort order though: plain .sort() orders it [numbers, nulls, undefineds], while this orders it like the other implementations, with undefineds being put at the end instead of being filtered, i.e. [nulls, numbers, undefineds].

Feel free to write some more benchmarks yourself, if you want :).
I did those on my Pi 4 4GB, which is slowish.
It gets 24k ops/s for native sort while my phone (Galaxy A51 4GB) gets 30k ops/s.
Update

I remembered something "quirky" about JavaScript's default sorting algorithm: it coerces values to strings and sorts them lexicographically. The reason the plain sort worked was that all the numbers had the same number of digits. If they had varying numbers of digits, it would have sorted them like this: [2, 20, 3, 30].
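For example (the values are arbitrary):

const nums = [30, 3, 20, 2];

// Default comparator compares string representations:
[...nums].sort();                // [2, 20, 3, 30]

// Numeric comparator compares the values themselves:
[...nums].sort((a, b) => a - b); // [2, 3, 20, 30]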
I have made updated JSBen.ch benchmarks using arr.sort((a, b) => a - b) (I can't manage to log in to JSBench.me):

Thanks for the catch! I'll update!
It's a cool refactor example, and it's an example of a common error...

The ternary operator is a nice replacement for the overload, but the correct way would be, surprise, surprise, x !== undefined ? etc., 'cause this way qs filters out all falsy elements from the array and drops their tails too.
Good catch! Without explicitly comparing against undefined, my version would fail for arrays containing zeros. Since this is a sorting algorithm, I'll assume that an array of numbers is being passed in. I'll update to correct it!
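A minimal sketch of that failure mode (not the post's exact code): with a bare truthiness check, a falsy pivot like 0 ends the recursion early and drops its whole partition.

// Buggy: 0, false, "" etc. all fail the truthiness test and stop the recursion.
const qsTruthy = ([x, ...xs]) =>
  x
    ? [...qsTruthy(xs.filter(a => a <= x)), x, ...qsTruthy(xs.filter(a => a > x))]
    : [];

// Fixed: only an actually-empty array ends the recursion.
const qsExplicit = ([x, ...xs]) =>
  x === undefined
    ? []
    : [...qsExplicit(xs.filter(a => a <= x)), x, ...qsExplicit(xs.filter(a => a > x))];

console.log(qsTruthy([3, 0, 2]));   // [3] – the 0 and everything partitioned with it is lost
console.log(qsExplicit([3, 0, 2])); // [0, 2, 3]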
Sorry, I just replaced your original bug with a bigger one. And maybe I didn't emphasize enough that those zeros are not just the zeros. Boolean false, the empty string, and even document.all (in the browser it's an array or object with a typeof of "undefined" – what?) are falsy...

Yeah, that's true. But I guess what should you expect if you try to put a boolean, empty string, or an object into a number sorting algorithm... There is a reason why Array.sort coerces all items in the array to strings.

Beautiful. For all its faults, JavaScript does offer a level of versatility uncommon to most languages. That means we can always learn and apply effective and efficient techniques found in other OO and/or FP languages.
Cool stuff. Maybe not the most practical, but certainly interesting.
I learned more about Haskell syntax (as, alas, I still haven't 'learned me a Haskell for great good') than anything else, but that's never a bad thing.
Glad you found it interesting! To be honest, this post is mostly about explaining Haskell in JavaScript terms, and isn't meant to be all that practical.
Quick sort done inefficiently = Bubble Sort