Have you ever wondered why Quicksort is called quick? Because it's one of the fastest sorting algorithms out there. By the way, that's not actually why it's called quick. Still, it is faster than bubble, insertion, and selection sort, and it performs well in almost every case. In the average case, quicksort has Θ(n log(n)) complexity; in the worst case, Θ(n²). The problem is divided into smaller chunks, those chunks are recursively divided into even smaller chunks, and so on. This is where the Θ(n log(n)) comes from: the array is halved about log(n) times, and each level of halving still touches about n elements in total. Check the diagram below to solidify this information.
                      n
            n/2               n/2
             |                 |
             ˅                 ˅
      n/4        n/4     n/4        n/4
       |          |       |          |
       ˅          ˅       ˅          ˅
      ...        ...     ...        ...
       1          1       1          1
Now that we're past the boring part, we can walk through our first example.
[1, 3, 2] // First, we need a pivot, so we choose the first element. Anything greater than the pivot goes right; anything smaller goes left.
[1], [3, 2] // We now have only 1 on the left, so we need to sort the right part one more time. This time we pick 3 as the pivot. Since 2 is smaller than 3, we push 2 to the left and end up with
[1, 2], [3] // Then we concatenate those arrays into
[1, 2, 3]
Let's see it in action.
Code
function quickSort(arr) {
  if (arr.length < 2) return arr // Base case: an array with fewer than two elements is already sorted
  const pivot = arr.shift() // shift() mutates the original array and returns its first element (the opposite of pop())
  const leftOfPivot = []
  const rightOfPivot = []
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] <= pivot)
      leftOfPivot.push(arr[i])
    else
      rightOfPivot.push(arr[i])
  }
  return [...quickSort(leftOfPivot), pivot, ...quickSort(rightOfPivot)]
}
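Here's a quick usage sketch (mine, not from the original post), assuming you paste the function into a Node.js or browser console:

const numbers = [5, 3, 8, 4, 2, 7, 1, 10]
console.log(quickSort(numbers)) // [1, 2, 3, 4, 5, 7, 8, 10]
// Careful: shift() inside quickSort also removes the first element from `numbers` itself.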
This might seem scary at first, but believe me, it's not. We make use of a recursive function and the spread operator (...). Whenever you write a recursive function, always define a base case first. In our case: if the array has fewer than two elements, it doesn't need sorting, so we just return arr.
If the array has two or more elements, we first pick the pivot using shift(), which removes the first element from the original array and returns it. Then we need two arrays to store the elements that are smaller and bigger than the pivot. Then we iterate over the original array: if an item is smaller than or equal to the pivot, we push it to leftOfPivot[]; otherwise, we push it to rightOfPivot[].
Here comes the crucial part. We use ... to spread the results of calling quickSort() on leftOfPivot[] and rightOfPivot[], repeating the whole process on each half. All these individual recursions keep running until their array size is smaller than 2, and each recursion finally yields its own sorted array.
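If you want to double-check the result, here is a tiny sanity test I added (not part of the original walkthrough), comparing quickSort against the built-in numeric sort on a random array:

// Sanity-check sketch (my addition): both sides get their own copy of the data.
const randomArr = Array.from({ length: 20 }, () => Math.floor(Math.random() * 100))
const expected = [...randomArr].sort((a, b) => a - b) // built-in numeric sort on a copy
const actual = quickSort([...randomArr]) // another copy, since quickSort shift()s its input
console.log(JSON.stringify(actual) === JSON.stringify(expected)) // should print true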
Thanks for reading.
Top comments (3)
Unfortunately, in the real world you also need:
...
Otherwise this can be shown as an example of a binary (divide-and-conquer) algorithm, but in practice it is so slow that it doesn't even make sense. That's because ... (the spread operator) is O(n), where n is the array's .length, and you repeat it again and again, whereas when you mutate an existing array you only occasionally shift elements.
Array push is O(1), but sometimes it costs O(n), because the array is like a rubber band: it has some extra space at the end, and when that space runs out, everything is copied to a new array with more extra space.
When you need to produce a new sorted array, you can just handle that at the very beginning: create a clone of the array and then mutate the clone.
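To make the commenter's point concrete, here is a rough in-place variant I sketched (not code from the comment), using the Lomuto partition scheme with the last element as pivot, so no intermediate arrays are created:

// In-place quicksort sketch (my addition, Lomuto partition, last element as pivot).
// It sorts the array it is given instead of allocating new arrays on every call.
function quickSortInPlace(arr, lo = 0, hi = arr.length - 1) {
  if (lo >= hi) return arr // 0 or 1 elements in this range: nothing to do
  const pivot = arr[hi]
  let boundary = lo // everything left of `boundary` is <= pivot
  for (let j = lo; j < hi; j++) {
    if (arr[j] <= pivot) {
      const tmp = arr[boundary] // swap arr[j] into the "small" region
      arr[boundary] = arr[j]
      arr[j] = tmp
      boundary++
    }
  }
  const tmp = arr[boundary] // move the pivot into its final position
  arr[boundary] = arr[hi]
  arr[hi] = tmp
  quickSortInPlace(arr, lo, boundary - 1)
  quickSortInPlace(arr, boundary + 1, hi)
  return arr
}

Elements are swapped inside the one array, so the only extra memory used is the recursion stack.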
I used this example to teach the idea of quicksort and recursion; it's not a real-world implementation. By the way, you can easily get rid of shift() by using the first item, arr[0], as the pivot and starting the iteration from index 1. And I don't think ... gives you any side effects, but you might be right about push(). If you truly want to improve the performance of your sort, you should really focus on choosing better pivots; I picked the first element for simplicity. Maybe try the median of three (first, middle, last) or the median of three random elements to avoid O(n²). So anyway, I'm open to improvements that ease the learning curve of this algorithm.
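As a sketch of the two ideas in that reply (mine, not code from the thread; the medianOfThree name is made up for illustration): the pivot is read with arr[0] instead of shift(), and median-of-three pivot selection is shown separately, since wiring it in would mean partitioning around a value rather than around index 0.

// 1) Avoid shift(): read the pivot with arr[0] and start the loop at index 1.
function quickSortNoShift(arr) {
  if (arr.length < 2) return arr
  const pivot = arr[0] // the input array is no longer mutated
  const leftOfPivot = []
  const rightOfPivot = []
  for (let i = 1; i < arr.length; i++) { // skip the pivot at index 0
    if (arr[i] <= pivot) leftOfPivot.push(arr[i])
    else rightOfPivot.push(arr[i])
  }
  return [...quickSortNoShift(leftOfPivot), pivot, ...quickSortNoShift(rightOfPivot)]
}

// 2) Median-of-three pivot selection (first, middle, last): sort the three candidates
// and take the middle one, which helps avoid the O(n²) case on already-sorted input.
function medianOfThree(arr) {
  const candidates = [arr[0], arr[Math.floor(arr.length / 2)], arr[arr.length - 1]]
  return candidates.sort((a, b) => a - b)[1]
}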
... doesn't give you any side effects, you're right; it's pure in the FP sense. But it copies the whole array every time :) That's where the problem is. I think you could even pick index 0 as the pivot on every iteration and it would still be faster than the version with ... :)
... ruins this algorithm. Most algorithms have to be impure to be useful. So you can dramatically improve it by getting rid of the excess copies. I think that would be better even from an educational point of view.
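One way to get both worlds, again just my sketch and reusing the quickSortInPlace function sketched under the previous comment: keep the impure in-place sort internally and expose a wrapper that clones the input once.

// Pure-looking facade over an impure core (my addition): one copy up front, then sort in place.
function quickSortPure(arr) {
  return quickSortInPlace([...arr]) // the caller's array is never modified
}

console.log(quickSortPure([3, 1, 2])) // [1, 2, 3]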