This blog assumes prior knowledge of fundamental data structures and algorithms. Examples are in JavaScript. Material references "Cracking the Coding Interview" by Gayle Laakmann McDowell.
Introduction
Mastering technical interviews demands a diverse skill set, but two abilities stand out as crucial: understanding Big O notation and analyzing algorithm efficiency. These skills are the keys to unlocking your potential in algorithm complexity discussions.
What is Big O? It is the language and metric we use to describe the efficiency of algorithms.
In this blog, we'll review time/space complexity, how to measure them, amortized time, and logarithmic/recursive runtimes.
Time Complexity
Time complexity is our way of describing how an algorithm's runtime changes as the input size grows.
- Best Case: the minimum time an algorithm needs to run.
- Worst Case: the maximum time an algorithm might take.
- Expected Case: the average time an algorithm usually takes.
While it's tempting to focus on the best case, it's often misleading. Any algorithm can perform well with a perfectly crafted input. That's why we typically focus on the worst and expected cases - they give us a more realistic picture of our algorithm's performance.
Interestingly, for many algorithms, the worst and expected cases are identical. But sometimes they can differ, as the sketch below illustrates.
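Quicksort is the classic example of the two diverging: with a reasonable pivot it runs in O(N log N) expected time, but a consistently bad pivot degrades it to O(N²) in the worst case. Here's a minimal sketch (my own example, not from the book), deliberately using a naive last-element pivot:
function quickSort(arr) {
  if (arr.length <= 1) return arr;
  const pivot = arr[arr.length - 1]; // naive pivot choice
  const rest = arr.slice(0, -1);
  const smaller = rest.filter(x => x <= pivot);
  const larger = rest.filter(x => x > pivot);
  // Expected case: the pivot splits the array roughly in half -> O(N log N)
  // Worst case: already-sorted input leaves one side empty every time -> O(N^2)
  return [...quickSort(smaller), pivot, ...quickSort(larger)];
}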
Space Complexity
Space complexity is a parallel concept to time complexity, measuring the memory (space) required by an algorithm.
For instance, if we create an array of size n, we're looking at O(n) space complexity. It grows linearly with our input size.
But here's a twist: space isn't just about data structures. Every time we make a recursive call, we're adding to the call stack. Take this simple recursive sum function:
function sum(n) {
if (n <= 0) {
return 0;
}
return n + sum(n - 1);
}
This takes O(n) time and O(n) space: a call like sum(4) keeps sum(4), sum(3), sum(2), sum(1), and sum(0) on the call stack simultaneously.
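For contrast, here's an iterative version of the same function (my own sketch): it still does O(n) work, but with no recursive calls stacking up, it only needs O(1) space.
function sumIterative(n) {
  let total = 0;
  for (let i = 1; i <= n; i++) {
    total += i; // no new stack frames are created: O(1) space
  }
  return total;
}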
Measuring Complexity
When it comes to measuring the complexity of algorithms, we're not after pinpoint accuracy. Instead, we're looking for a big-picture view that helps us understand how our algorithm scales. Let's dive into some key "rules" that will help you with this.
Multi-Part Algorithms: Add vs. Multiply
For an algorithm that has 2 steps, when do you multiply the runtimes and when do you add them?
- If your algorithm is in the form “do this; when done, do that,” you add the runtimes.
- If your algorithm is in the form “do this for each time you do that,” you multiply the runtimes. Take a look at the examples below:
// O(A + B) - "Do this, then do that"
arrA.forEach(a => console.log(a));
arrB.forEach(b => console.log(b));
// O(A * B) - "Do this for each time you do that"
arrA.forEach(a => {
arrB.forEach(b => {
console.log(`${a},${b}`);
});
});
In the first example, we're doing two separate things sequentially. In the second, we're doing something for each combination of A and B.
Drop the Constants
In the world of Big O, we're more interested in the overall rate of growth than absolute speed. That's why we drop constants. Whether your algorithm runs in O(2N) or O(N) time, we simplify it to O(N). As N grows larger, the constant factor becomes less significant. Consider the two code snippets:
// Version 1
let min = Number.MAX_VALUE;
let max = Number.MIN_VALUE;
array.forEach(x => {
if (x < min) min = x;
if (x > max) max = x;
});
// Version 2
let min = Number.MAX_VALUE;
let max = Number.MIN_VALUE;
array.forEach(x => {
if (x < min) min = x;
});
array.forEach(x => {
if (x > max) max = x;
});
Both have the same O(N) time complexity, despite Version 2 having two separate loops.
Drop the Non-Dominant Terms: Focus on the Heavyweight
When you have multiple terms in your Big O expression, focus on the most significant one.
- O(N² + N) becomes O(N²)
- O(N + log N) simplifies to O(N)
- O(5 * 2^N + 1000N^100) boils down to O(2^N)
However, be cautious. If you have O(B² + A) and A and B are independent variables, you can't simplify further; each term represents a different part of the algorithm's complexity.
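To make the simplification concrete, here's a small sketch (my own example, with a hypothetical printSumThenPairs function): the first loop does O(N) work and the nested loops do O(N²) work, so the total is O(N + N²), which simplifies to O(N²).
function printSumThenPairs(array) {
  // O(N): one pass to compute the sum
  let sum = 0;
  array.forEach(x => { sum += x; });
  console.log(sum);
  // O(N^2): one pass for every pair of elements
  array.forEach(a => {
    array.forEach(b => {
      console.log(`${a},${b}`);
    });
  });
  // Total: O(N + N^2), which simplifies to O(N^2)
}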
Amortized Time: The Secret Behind Efficient Operations
Amortized time is a fascinating concept in time complexity analysis that shines when there's a stark difference between the worst-case and average-case scenarios.
Take Java's ArrayList, for example: a dynamically resizing array. When you add an element to an ArrayList that still has space, the operation takes O(1) time. When the array is full, the ArrayList has to create a new, larger array (typically double the size) and copy over all the existing elements, which takes O(N) time. This operation is a rare occurrence; most of the time, adding an element is a quick O(1) operation. When you look at the average time per operation over a long sequence of insertions, the amortized time remains a highly efficient O(1). This clever balance is what makes data structures like ArrayList incredibly efficient in practice.
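JavaScript arrays handle resizing for us under the hood, but we can sketch the idea ourselves. Below is a simplified, hypothetical DynamicArray class (an illustration of the doubling strategy, not how ArrayList is actually implemented):
class DynamicArray {
  constructor() {
    this.capacity = 2;
    this.length = 0;
    this.data = new Array(this.capacity);
  }
  push(value) {
    if (this.length === this.capacity) {
      // Rare case: array is full, so double the capacity and copy everything over - O(N)
      this.capacity *= 2;
      const bigger = new Array(this.capacity);
      for (let i = 0; i < this.length; i++) bigger[i] = this.data[i];
      this.data = bigger;
    }
    // Common case: just write into the next free slot - O(1)
    this.data[this.length++] = value;
  }
}
Over X insertions, the resizing copies cost roughly 2 + 4 + 8 + ... + X ≈ 2X total work, so each push averages out to O(1) amortized.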
Logarithmic Runtimes
What does O(log N) runtime really mean? An O(log N) runtime typically means that the algorithm divides the problem in half each time, such as with binary search. This logarithmic complexity grows much more slowly than linear or quadratic complexities as the input grows, making it very efficient for large inputs.
In the example below, we perform binary search. Every time we make a "guess", we halve the remaining search space, cutting the region that could contain "target" in half. In the worst case, it takes O(log N) iterations to find "target".
function binarySearch(arr, target) {
let left = 0;
let right = arr.length - 1;
while (left <= right) {
let mid = Math.floor((left + right) / 2);
if (arr[mid] === target) {
return mid;
} else if (arr[mid] < target) {
left = mid + 1;
} else {
right = mid - 1;
}
}
return -1; // target not found
}
Recursive Runtimes
The code below may at first seem like it runs in O(N) time. It actually runs in O(2^N) time, because each call makes two more calls. You can visualize the recursive calls as a tree with a depth (height) of N, where each node has two children; at each level, we have twice as many nodes as the previous level. For a recursive algorithm that makes multiple calls, the runtime will often look like O(branches^depth), where branches is the number of times each recursive call branches.
function f(n) {
if (n <= 1) {
return 1;
}
return f(n - 1) + f(n - 1);
}
The space complexity is O(N). Although the tree contains O(2^N) nodes, only O(N) of them exist on the call stack at any given time.
Ending
Hope this helps with your technical interviews or your understanding of algorithm analysis. Good luck, and look out for more blogs on Big O analysis focusing on important examples.