Intro
To talk about Big O, we have to talk about algorithms. As someone coming from a non-traditional tech background, I found the word algorithm daunting. Like “Wow, the algorithm for this and that was blah blah blah...”
When I learned an algorithm is just a set of instructions for accomplishing a task...I was like...
Here is an example
Make Coffee
- Grind Coffee
- Brew Coffee
- Pour Coffee in a cup
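The same idea works in code. Here's a tiny sketch where each step is a made-up stub function (the names are just for illustration):

```javascript
// Each step of the "Make Coffee" algorithm as a stub function.
const grindCoffee = () => console.log("Grind coffee")
const brewCoffee = () => console.log("Brew coffee")
const pourCoffee = () => console.log("Pour coffee in a cup")

// The algorithm is just the steps, in order.
const makeCoffee = () => {
  grindCoffee()
  brewCoffee()
  pourCoffee()
}

makeCoffee()
```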
The first time I heard about Big O was in coding boot camp. Honestly, I was too focused on solving the algorithm by coming up with the shortest code possible and just making it work. So when the talk about performance arose, it was a Big O(n) moment for me; at the time, that meant Big Oh no LOL.
After a while, I started to make sense of how to come up with solutions. I liked solving algorithms. The part that I liked most was seeing other people’s approaches and how they differ from mine. Eventually, I started to wonder: how can I tell if my solution is more efficient than another solution? I knew deep down that just making it work is not the answer. And here we are, talking about Big O!
By no means am I an expert at this. I am still learning, but I got over the hump of understanding what Big O is.
I wanted to write about this because I want to introduce Big O in an approachable way. (If you find any discrepancies, please let me know! I'm here to learn as well!) I hope that you get an idea of what Big O is by the end of this post.
Big O
I had this misconception that Big O measures how slow or fast an algorithm is. That's not entirely wrong, but it's not entirely accurate either.
When I first googled “What is Big O?”
“Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. ... ” -Wikipedia
Did you get it? I don’t think you need to keep reading this post if you did. 😅
In plain English?
Big O notation is used to identify the trend of how a runtime of an algorithm grows as the input grows. We refer to this input as “n”.
We are not directly measuring an algorithm's runtime because the speed varies depending on the computer processing the algorithm. We use Big O to analyze the relationship between runtime and input. When n grows, what is the runtime gonna look like?
It's about understanding the number of operations it takes for your algorithm to run as the input(n) increases.
When we’re talking about Big O, we are usually talking about the worst-case scenario: what is the most time/space my algorithm would require given the worst-case input (e.g. n = 1 versus n = 1000)?
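To make “worst case” concrete, here's a hypothetical search function (not one of this post's examples) where the best and worst cases differ:

```javascript
// Search for a target in an array, returning its index or -1.
const indexOfTarget = (arr, target) => {
  for (let i = 0; i < arr.length; i++) {
    if (arr[i] === target) return i // best case: found on the first comparison
  }
  return -1 // worst case: compared against every single element
}

// Best case: the target is the first element, 1 operation no matter how big n is.
indexOfTarget([5, 2, 8, 1], 5) // returns 0

// Worst case: the target isn't there at all, n operations.
// Big O describes this case, so this search is O(n).
indexOfTarget([5, 2, 8, 1], 9) // returns -1
```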
Examples
let arr = [1,2,3,4,5,6]

const accessFirstElement = (arr) => {
  console.log(arr[0])
}

accessFirstElement(arr)
// 1
This function runs in O(1), also known as constant. No matter what the size of n is (in this case, the length of the array), it does not affect the runtime. n could be 1 or 1000; accessFirstElement would just require 1 operation every time. Hence, constant.
let arr = [1,2,3,4,5,6]

const printEvenElements = (arr) => {
  for (let ele of arr) {
    if (ele % 2 === 0) {
      console.log(ele)
    }
  }
}

printEvenElements(arr)
// 2
// 4
// 6
This printEvenElements runs in O(n), also known as linear. The runtime increases as n increases; again, we're referring to the length of the array. This is to say the number of loops increases as the size of the array increases, because printEvenElements has to access every single element in the array.
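One way to see that linear growth (a small sketch I've added; countLoops is a made-up helper) is to count the loop iterations instead of printing:

```javascript
// Count how many times the loop body runs for a given array.
const countLoops = (arr) => {
  let count = 0
  for (let ele of arr) {
    count++ // one iteration per element
  }
  return count
}

console.log(countLoops([1, 2, 3]))          // 3
console.log(countLoops([1, 2, 3, 4, 5, 6])) // 6
```

Double the array length and the iteration count doubles; that's the linear relationship.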
Now let's say we only want to print the even numbers among the first 4 elements of the array.
let arr = [1,2,3,4,5,6]

const printEvenElements2 = (arr) => {
  for (let i = 0; i <= 3; i++) {
    let ele = arr[i]
    if (ele % 2 === 0) {
      console.log(ele)
    }
  }
}

printEvenElements2(arr)
// 2
// 4
Do you think this function is O(n) linear or O(1) constant?
If you answered O(1) constant, you are correct. n doesn't matter here. No matter how small or big the array is, the function will run 4 operations (4 loops) every time because of this condition: i <= 3. The loop continues as long as that is true; it is not dependent on n.
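We can check that with the same counting trick (again, a made-up sketch): the iteration count stays at 4 whether the array has 4 elements or 1000.

```javascript
// Count iterations of a loop hard-capped at i <= 3.
const countCappedLoops = (arr) => {
  let count = 0
  for (let i = 0; i <= 3; i++) {
    count++ // runs for i = 0, 1, 2, 3, regardless of arr.length
  }
  return count
}

console.log(countCappedLoops([1, 2, 3, 4]))            // 4
console.log(countCappedLoops(new Array(1000).fill(0))) // 4
```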
Space complexity
We also use Big O to analyze the space the algorithm takes as the size of the input increases. In other words, how much memory is required to run our algorithm?
When analyzing space complexity we don’t care about the space taken by the input. We care about what our function does with that input and how it affects the memory required for the algorithm to run.
That may be a lot to take in. I could probably explain it better, but we have examples below.
Things to keep in mind
Booleans, numbers, undefined, and null take constant space. It doesn’t matter if it's true, false, 1, or 1000.
Strings, arrays, and objects require O(n) or linear space, where n is the length of the string or array, or the number of keys for objects.
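Here's a quick sketch of those two rules (the variable names are made up):

```javascript
// Constant space: one fixed-size "slot" each, regardless of the value.
let done = false
let big = 1000

// Linear space: grows with the length / number of keys.
let word = "hello"                      // O(n), n = 5 characters
let nums = [1, 2, 3, 4, 5, 6]           // O(n), n = 6 elements
let user = { name: "Sam", admin: true } // O(n), n = 2 keys
```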
Examples
let arr = [1,2,3,4,5,6]

const accessFirstElement = (arr) => {
  let first = arr[0]
  return first
}
This function takes O(1) constant space.
The size of arr doesn't have an impact on the space it takes up. Every time this code runs, we just have one number assigned to a variable.
Here is another example
let arr = [1,2,3,4,5,6]

const squareElements = (arr) => {
  let squares = []
  for (let ele of arr) {
    squares.push(ele * ele)
  }
  return squares
}

This function takes O(n) linear space.
First, we made a new array. Ok, so that starts out constant.
But what we care about is how space is affected by the input.
Here, as the input array grows, our squares array gets longer as well. We are squaring every element and pushing the squared element to the new array. The space it takes is in direct proportion to the input (array length).
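For contrast, here's a hypothetical variation that still touches every element but never builds a new array. Its runtime is still O(n), but the extra space stays O(1):

```javascript
// Sum the squares of the elements instead of storing them.
const sumOfSquares = (arr) => {
  let total = 0 // a single number, no matter how long arr is
  for (let ele of arr) {
    total += ele * ele
  }
  return total
}

console.log(sumOfSquares([1, 2, 3])) // 14 (1 + 4 + 9)
```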
Conclusion
When I first started learning how to code, I wanted to write code that was as short as possible. Learning Big O clarified that shorter or cleaner code is not necessarily better than longer code. We use Big O notation to analyze time and space complexity: how are runtime and space affected as the input grows? I think having Big O in mind while writing code allows someone to write more efficient code, as opposed to code that just works. I hope this was helpful!