Here is a cheat sheet you can reference to help determine the runtime of an algorithm.
Notes on Runtimes:
Runtime refers to the performance of an algorithm: how the amount of work it has to do grows as the size of its input grows.
Constant Time (1) - No matter how many elements we are working with, the algorithm/operation will always take the same amount of time. This is the holy grail of algorithms.
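For instance, looking up an element by its index takes the same amount of time whether the array holds ten items or ten million. A minimal sketch (the helper name here is just for illustration):

```typescript
// Index access touches a single slot regardless of the array's size, so it's O(1).
function getFirst(items: number[]): number | undefined {
  return items[0]; // same amount of work no matter how large `items` is
}
```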
Logarithmic Time (log(n)) - You have this if doubling the number of elements you are iterating over only adds a constant amount of extra work, typically because each step cuts the remaining data in half. It's pretty safe to assume that searching a sorted collection is log(n).
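Binary search is the classic example; a rough sketch, assuming the input array is already sorted:

```typescript
// Each comparison discards half of the remaining elements, so doubling
// the array only adds one more step: O(log(n)).
function binarySearch(sorted: number[], target: number): number {
  let low = 0;
  let high = sorted.length - 1;
  while (low <= high) {
    const mid = Math.floor((low + high) / 2);
    if (sorted[mid] === target) return mid;
    if (sorted[mid] < target) low = mid + 1;
    else high = mid - 1;
  }
  return -1; // target not found
}
```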
Linear Time (n) - Iterating through all elements in a collection of data. If you see a for loop spanning from ‘0’ to ‘array.length’, you probably have ‘n’ or linear runtime. Most common for the ‘simpler’ algorithm problems like reversing a string or counting the number of vowels in a string.
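Counting vowels is one of those simpler problems; a quick sketch:

```typescript
// One pass over the string: the work grows in direct proportion
// to its length, so this is O(n).
function countVowels(text: string): number {
  let count = 0;
  for (let i = 0; i < text.length; i++) {
    if ("aeiou".includes(text[i].toLowerCase())) count++;
  }
  return count;
}
```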
Quasilinear Time (n * log(n)) - You have this if doubling the number of elements you are iterating over slightly more than doubles the amount of work. You can often assume that any sorting operation is n*log(n).
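Merge sort is one way to see where the n * log(n) comes from: the collection is halved log(n) times, and each level of merging touches all n elements. A rough sketch:

```typescript
// Split the array in half log(n) times, then merge the sorted halves
// back together, doing O(n) work per level: O(n * log(n)) overall.
function mergeSort(items: number[]): number[] {
  if (items.length <= 1) return items;
  const mid = Math.floor(items.length / 2);
  const left = mergeSort(items.slice(0, mid));
  const right = mergeSort(items.slice(mid));
  // Merge the two sorted halves.
  const merged: number[] = [];
  let i = 0;
  let j = 0;
  while (i < left.length && j < right.length) {
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i)).concat(right.slice(j));
}
```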
Quadratic Time (n ^ 2) - Every element in a collection has to be compared to every other element. A trick to help remember this is to think of ‘The handshake problem’. Imagine there is a group of people in a room and a new person walks in. That person has to be introduced to all the other people in the room and shake each person's hand, and everyone already in the room has shaken every other person's hand. So every element ends up interacting with all the other elements.
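A small sketch of the handshake idea (the function name is just illustrative):

```typescript
// Every person shakes hands with every other person. The number of
// comparisons grows on the order of n^2 as the group grows.
function countHandshakes(people: string[]): number {
  let handshakes = 0;
  for (let i = 0; i < people.length; i++) {
    for (let j = i + 1; j < people.length; j++) {
      handshakes++; // person i shakes hands with person j
    }
  }
  return handshakes;
}
```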
Exponential Time (2 ^ n) - If you add a single element to a collection, the processing power required doubles. Realistically, you never want to propose this as a real solution because it quickly becomes too expensive to compute for anything but tiny inputs.
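Generating every subset of a collection shows this doubling directly; a small sketch (the function name is illustrative):

```typescript
// Every new element doubles the number of subsets, so both the output
// size and the work grow as O(2^n).
function allSubsets(items: number[]): number[][] {
  let subsets: number[][] = [[]];
  for (const item of items) {
    // Each existing subset spawns a copy that also contains `item`,
    // doubling the total every time an element is added.
    subsets = subsets.concat(subsets.map((subset) => [...subset, item]));
  }
  return subsets;
}
```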
Big ‘O’ Notation
As software engineers, we can think of Big 'O' notation as the standard way of expressing the runtime of an algorithm:
O(n) => linear
O(1) => Constant
O(n^2) => Quadratic
Identifying Runtime Complexity - Tips and Tricks
Note: These are not rules, they are generalizations. Unfortunately, there is no list to memorize that will tell you 'these algorithms are always this'; it depends on each individual algorithm. So it really comes down to practicing and writing different algorithms and learning about their corresponding runtimes.
Iterating with a simple for loop through a single collection, for example reversing a string. => Probably O(n)
Iterating through half a collection. => Still O(n). Constants are dropped from runtimes because they become insignificant as data sets grow arbitrarily large, so n/2 simplifies to n.
Iterating through two different collections with separate for loops, for example reversing two strings in the same function using two separate for loops. You want to track both n and m because one collection may be huge and the other much smaller, so a second variable is introduced (see the sketch after this list). => O(n + m)
Two nested for loops iterating over the same collection. => O(n^2)
Two nested for loops iterating over different collections => O(n*m) [very similar to n^2]
Sorting => O(n*log(n))
Searching a sorted array => O(log(n))
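To make a couple of those patterns concrete, here is a rough sketch contrasting separate loops with nested loops over two different collections (the function and parameter names are just for illustration):

```typescript
// Separate loops over two different collections => O(n + m)
function totalLength(listA: string[], listB: string[]): number {
  let total = 0;
  for (const word of listA) total += word.length; // n iterations
  for (const word of listB) total += word.length; // m iterations
  return total;
}

// Nested loops over two different collections => O(n * m)
function allPairs(listA: string[], listB: string[]): string[] {
  const pairs: string[] = [];
  for (const a of listA) {
    for (const b of listB) {
      pairs.push(`${a}-${b}`); // runs n * m times
    }
  }
  return pairs;
}
```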
Interview Cake and LeetCode are great resources for getting practice in with algorithms while learning about their runtimes.
Happy coding!