| Sorting Algorithm Articles |
| --- |
| Bubble Sort |
| Selection Sort |
| Insertion Sort |
Welcome to the second part of the sorting algorithms series. From this article on, we are going to tackle some intermediate sorting algorithms, starting with Merge Sort.
These algorithms are way faster than the ones we have already seen (Bubble, Selection, and Insertion sort), and the reason is that they scale well; i.e., they still perform well on large amounts of data.
If, for example, we compare the worst-case scenarios of Merge sort and Bubble sort, Merge sort's time complexity is O(n log n), which is better than Bubble sort's quadratic O(n^2).
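To get a feel for that difference, here is a rough back-of-the-envelope comparison (the numbers are only illustrative, not measured timings):

```javascript
// Approximate operation counts for an array of one million elements.
const n = 1_000_000;
console.log(n * Math.log2(n)); // ~2 * 10^7 for an O(n log n) sort
console.log(n * n);            // 10^12 for an O(n^2) sort like bubble sort
```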
Like the others, Merge sort is a comparison-based algorithm, and it exploits the fact that arrays of one element (or empty arrays) are always sorted. It follows the divide and conquer pattern: it first divides the array until only single-element or empty arrays are left, and after that it merges those pieces back together by comparing one array's elements with the other's.
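For example, [38, 27, 43, 3] splits into [38, 27] and [43, 3], then into [38], [27], [43], and [3]; merging back gives [27, 38] and [3, 43], and finally [3, 27, 38, 43].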
Let's quickly look at merge sort's pseudo-code:
- First, break the given array into halves until we have arrays that contain a single element or are completely empty.
- Once we have those smaller sorted arrays, merge them back together until we are back to the full length of the array.
- Once everything has been merged together, return the merged array.
As the pseudo-code says, we need to merge the sorted arrays back together, so we will implement a helper function that does that job for us.
Merging Arrays (helper function)
This function runs in O(n + m) time because we iterate over each item of both arrays exactly once.
```javascript
function merge(arr1, arr2) {
  let result = []; // the array to hold results.
  let i = 0;
  let j = 0;
  // As the pseudo-code implies, we loop through both arrays at the
  // same time, visiting each element once.
  // Note that as soon as one array has been fully consumed, the
  // while loop stops.
  while (i < arr1.length && j < arr2.length) {
    // compare the elements one at a time.
    if (arr1[i] > arr2[j]) {
      result.push(arr2[j]);
      j++;
    } else {
      result.push(arr1[i]);
      i++;
    }
  }
  // These other while loops check whether there are items left over
  // in either array and push their elements onto the result array.
  while (i < arr1.length) {
    result.push(arr1[i]);
    i++;
  }
  while (j < arr2.length) {
    result.push(arr2[j]);
    j++;
  }
  return result;
}
```
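As a quick sanity check (assuming the function above is loaded in a Node.js script or browser console), merging two already-sorted arrays looks like this:

```javascript
// Both inputs must already be sorted for merge() to work correctly.
console.log(merge([1, 10, 50], [2, 14, 99, 100]));
// -> [1, 2, 10, 14, 50, 99, 100]
```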
Below is the merge sort function itself.
It splits the array into two halves, and keeps splitting those halves until it reaches single-element arrays. Remember that principle.
It then merges the arrays back using the helper function we created above.
```javascript
function mergeSort(arr) {
  // Recursion base case:
  // if the array length is less than or equal to 1, it is already
  // sorted, so return it; otherwise keep splitting.
  if (arr.length <= 1) return arr;
  // Remember that we said merge sort uses the divide and conquer
  // algorithm pattern:
  // it first finds the halfway point of the array.
  let halfPoint = Math.ceil(arr.length / 2);
  // Then it slices the array from the beginning up to the halfway point.
  // Because merge sort needs single-element arrays, it keeps splitting
  // that half recursively until the base case is reached.
  let firstHalf = mergeSort(arr.slice(0, halfPoint));
  // The second half goes from the halfway point up to the end of the array.
  // (slice does not mutate the original array, unlike splice.)
  let secondHalf = mergeSort(arr.slice(halfPoint));
  // Merge the two sorted halves back together and return the result,
  // using the helper function we created above.
  return merge(firstHalf, secondHalf);
}
```
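A quick usage example (the numbers array is just an arbitrary illustration):

```javascript
const numbers = [10, 24, 76, 73, 72, 1, 9];
console.log(mergeSort(numbers)); // -> [1, 9, 10, 24, 72, 73, 76]
// Because slice does not mutate its input, the original array is untouched.
console.log(numbers); // -> [10, 24, 76, 73, 72, 1, 9]
```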
That is all for the merge sort implementation, but as always, let's wrap it up with a Big O notation analysis.
To understand it better, let's decompose the steps the algorithm takes in order to sort the array.
Based on the length of the array, the algorithm first needs to split the array until it reaches single-element arrays.
So the first analysis question is: how many times do we have to split this array until we reach single-element arrays? That is O(log n); for example, an array of 8 elements needs 3 levels of splitting, since 2^3 = 8.
Also remember that we have to merge the arrays back and compare their elements so that they end up sorted, and here the work grows proportionally to the length of the array; so merging the arrays on each level is linear, O(n + m), which we simplify to O(n).
Since we do O(n) merging work on each of the O(log n) levels, we multiply the two: O(n) * O(log n) = O(n log n), in every scenario:
Worst-case scenario: O(n log n)
Best-case scenario: O(n log n)
Average-case scenario: O(n log n)
Space complexity: O(n), because it has to keep the chunked sub-arrays in memory while merging.
Yeah, that is all I have on Merge Sort, and I hope you enjoyed it. For any questions, you can slide into my DMs here or on Twitter.
Ciao.