Aderibigbe Samuel

BIG O Notation: A Quick Introduction

When approaching problems in coding, we often come up with an algorithm: a set of instructions for solving a problem. To see what that means, imagine you have a bunch of unsorted numbers and you want to find the largest number among them. Here's a straightforward algorithm to solve this:

  1. Start with the first number in the list and call it the "current largest number."
  2. Compare the "current largest number" to the next number in the list.
  3. If the next number is larger than the "current largest number," update the "current largest number" to be the next number.
  4. Move to the next number in the list and repeat steps 2 and 3 until you have compared all the numbers.
  5. At the end, the "current largest number" will be the largest number in the list.
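
To make the steps concrete, here is a minimal Python sketch of the algorithm above (the function name find_largest is just an illustrative choice):

def find_largest(numbers):
    largest = numbers[0]        # step 1: call the first number the "current largest"
    for number in numbers[1:]:  # step 4: move through the rest of the list
        if number > largest:    # steps 2 and 3: compare and update
            largest = number
    return largest              # step 5: the "current largest" is the answer

print(find_largest([7, 3, 9, 1, 5]))  # 9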

There you have it: a simple algorithm that solves the problem. Some algorithms are faster and more efficient than others, especially when dealing with large amounts of data. Consider two ways to arrange a stack of papers. One approach is to examine each paper individually, compare them, and rearrange them until they are in the correct order. The alternative is to cut the pile in half, sort each half independently, and then combine the halves. Which approach do you believe would proceed more quickly as the pile gets bigger? We can answer questions like these using Big O notation. It lets us gauge how an algorithm's time or resource requirements change with the size of the input. With a solid understanding of Big O notation, you can select the right algorithm for a given task.
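
As an aside, the "cut the pile in half" approach is essentially merge sort. Here is a minimal, illustrative sketch (not an optimized implementation):

def merge_sort(papers):
    if len(papers) <= 1:              # a pile of one is already sorted
        return papers
    mid = len(papers) // 2
    left = merge_sort(papers[:mid])   # sort each half independently
    right = merge_sort(papers[mid:])
    merged = []                       # combine the two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([4, 1, 3, 2]))  # [1, 2, 3, 4]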

What is Big O?

Big O is a way to measure the efficiency of an algorithm, or the speed at which it runs. It helps us understand how the algorithm's performance changes as the amount of data it handles grows.

Oftentimes, interviewers will ask questions about Big O, or ask you to write code and then evaluate how fast or efficient your code is.

Time Complexity vs Space Complexity

Let's say you have a stopwatch and you have written two pieces of code, Code A and Code B. You run Code A with the stopwatch going and it takes 20 seconds to finish. You reset the stopwatch, run Code B, and it takes 40 seconds. Based on this, you would say Code A runs faster than Code B; in other words, you can measure it. This is called Time Complexity. A very interesting yet confusing thing about time complexity is that it is not measured in time but in the number of operations it takes to complete a task. Imagine you took Code B from the example above and ran it on a computer with a processor three times as fast: it would finish before Code A, but that does not mean Code B is faster or more efficient, it just means the computer is better. So we use the number of operations instead of time to evaluate time complexity.
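
As a toy illustration of counting operations rather than seconds (the operations counter here exists purely for demonstration):

def count_operations(n):
    operations = 0
    total = 0
    for i in range(n):
        total += i       # one unit of work per iteration
        operations += 1  # count it; this number is the same on any machine
    return operations

print(count_operations(10))    # 10
print(count_operations(1000))  # 1000: the count depends only on n, not the CPU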

Space Complexity on the other hand, refers to the amount of memory or storage space an algorithm requires to solve a problem. It measures how the memory usage of an algorithm changes with the size of the input.

Why is space complexity also important? We mostly prioritize speed when writing code, so why do we need to consider space complexity too?

Again, from the comparison between Code A and Code B: let's say that while Code A runs faster, it requires more memory, whereas Code B requires less memory even though it runs slower. There will be occasions when memory is the constraint, so it is important to consider space complexity too while writing code.
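
As a small illustration of the trade-off (the function names are my own), here are two ways to reverse a list: one uses O(n) extra space, the other O(1):

def reverse_copy(items):
    return items[::-1]       # builds a whole new list: O(n) extra space

def reverse_in_place(items):
    left, right = 0, len(items) - 1
    while left < right:      # swaps within the same list: O(1) extra space
        items[left], items[right] = items[right], items[left]
        left += 1
        right -= 1
    return items

print(reverse_copy([1, 2, 3]))      # [3, 2, 1]
print(reverse_in_place([1, 2, 3]))  # [3, 2, 1]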

Big O (O), Omega (Ω) and Theta (Θ)

These are the most commonly used notations in time and space complexity analysis, and they represent the following:
Big O (O): The Big O notation represents the upper bound or worst-case scenario of an algorithm's time or space complexity.
Omega (Ω): The Omega notation represents the lower bound or best-case scenario of an algorithm's time or space complexity.
Theta (Θ): The Theta notation represents both the upper and lower bounds of an algorithm's time or space complexity.

The way I like to think of these notations: imagine I have a list of seven items and I write a for loop to iterate through the items to find a particular one.

[Image: a list of seven items, the numbers 1 through 7]

Let's say you are looking for the number 1: that is your best-case scenario (Omega (Ω)), because the number 1 is the first item in the list and you will find it in just one operation. But if you want the number 7, that is your worst-case scenario (Big O (O)), because 7 is the last item and you have to go through the entire list to reach it. Looking for the number 4 falls between the two; that is your average case, which people often associate with Theta (Θ) (strictly speaking, Θ denotes a tight bound rather than the average case, but this is a handy way to picture it).
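
Here is a small sketch of that mental model: a linear search that also reports how many comparisons it made (the comparison counter is just for illustration):

def linear_search(items, target):
    comparisons = 0
    for index, item in enumerate(items):
        comparisons += 1
        if item == target:
            return index, comparisons
    return -1, comparisons

items = [1, 2, 3, 4, 5, 6, 7]
print(linear_search(items, 1))  # (0, 1) best case: one operation
print(linear_search(items, 4))  # (3, 4) somewhere in the middle
print(linear_search(items, 7))  # (6, 7) worst case: the whole list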

Common Time Complexities

O(n):

This is the simplest to understand. Consider being tasked with counting the oranges in a basket. Say you count the 10 oranges one by one: you need 10 units of time or effort to finish the job. Now, if you had 20 oranges instead of 10, it would take you roughly twice as long, or twice as much work, to count them.

O(n), where "n" stands for the input size, is the mathematical representation of this linear relationship. It shows that the time or effort needed to complete the task rises at the same rate as the input size (the number of oranges): twice as many oranges means about twice as much time or effort.

A simple code example of this:

def print_items(n):
    for i in range(n):  # the loop body runs once per item: n operations in total
        print(i)

print_items(5)

output:

0
1
2
3
4

The code printed five (5) items because we passed the argument n = 5, so it took 5 units of effort to get the task done. Whatever n is, that is the number of operations: the time it takes to complete the algorithm increases linearly with the size of the input.
Let's look at the graph of O(n):
[Graph: O(n), a straight line rising steadily as n grows]

O(n^2):

In simple terms, O(n^2) means that the time or space complexity of an algorithm grows quadratically with the input size. To illustrate this with code, let's write a for loop inside another for loop:

def print_items(n):
    for i in range(n):
        for j in range(n):  # n inner iterations for each of the n outer iterations
            print(i, j)

print_items(5)

output:

0 0
0 1
0 2
0 3
0 4
1 0
1 1
1 2
1 3
1 4
2 0
2 1
2 2
2 3
2 4
3 0
3 1
3 2
3 3
3 4
4 0
4 1
4 2
4 3
4 4

So from 0 0 to 4 4 we have 25 items printed. The input size was n = 5, but we got n * n = n^2 = 25 items printed, illustrating the quadratic relationship. A for loop inside another for loop is a classic example of O(n^2).
The graph:

[Graph: O(n^2), a curve that bends upward steeply as n grows]

O(1):

O(1), also known as "constant time complexity," means that the time or space an algorithm requires stays the same regardless of the input size. Adding a number to itself is an example of this:

def add_num(n):
    return n + n

print(add_num(1))
print(add_num(2000))

output:

2
4000

With the other operations we've seen, as the input size increases, so does the number of operations, but constant time complexity is different. Even if n = 100000000, there is just one operation to perform: n + n = 100000000 + 100000000 = 200000000. That is O(1), constant time.
On the graph:
[Graph: O(1), a flat horizontal line regardless of n]

O(1) is represented by the flat horizontal line on the graph, and it is the most efficient Big O class: any time you can optimize your code to O(1), that is as efficient as it gets.
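
As a small example of such an optimization (the function names are my own): summing the numbers 1 through n with a loop takes O(n) time, but the well-known closed-form formula does it in O(1):

def sum_to_n_loop(n):
    total = 0
    for i in range(1, n + 1):  # n additions: O(n)
        total += i
    return total

def sum_to_n_formula(n):
    return n * (n + 1) // 2    # one calculation no matter how big n is: O(1)

print(sum_to_n_loop(100), sum_to_n_formula(100))  # 5050 5050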

O(log n)

O(log n) indicates that an algorithm's time or space complexity increases logarithmically as the input size increases.

Let's use the example of looking up a word in a dictionary to understand this better. Instead of starting from the first page and flipping through each page one by one, you can use a more efficient strategy. You might open the dictionary roughly in the middle and check if the word you're looking for comes before or after that page. Based on the result, you can eliminate half of the remaining pages and repeat the process. By repeatedly halving the search area, you can quickly zero in on the right page.

O(log n), where "n" is the input size, represents this logarithmic relationship. It means that the time or effort required still rises as the input grows, but not proportionally: it rises logarithmically. Let's write a traditional binary search in Python:

def binary_search(arr, target):
    low = 0
    high = len(arr) - 1

    while low <= high:
        mid = (low + high) // 2  # Find the middle index

        if arr[mid] == target:  # if the middle element is the target
            return mid
        elif arr[mid] < target:  # If the target is greater, search the right half
            low = mid + 1
        else:  # If the target is smaller, search the left half
            high = mid - 1

    return -1  # Target not found

# Example usage
arr = [2, 5, 7, 12, 16, 21, 28, 36]
target = 16
result = binary_search(arr, target)
if result != -1:
    print("Target found at index", result)
else:
    print("Target not found")

The binary_search function in this code performs a binary search on a sorted list in O(log n) time. The search space is halved repeatedly until the target element is found or the search space is exhausted.

The function takes an array (arr) and a target value (target) as input parameters. Two pointers, low and high, are initialized to keep track of the search range. Within the while loop, it computes the middle index (mid) and compares the middle element with the target value, then updates low and high according to the comparison. If the target is found, it returns the index; otherwise it returns -1 to show that the target is not in the array.
The binary search algorithm exhibits O(log n) time complexity: the number of iterations grows logarithmically as the size of the list increases.
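
To see that logarithmic growth in action, here is a small experiment using an instrumented variant of the function above (the step counter is added purely for illustration):

def binary_search_steps(arr, target):
    low, high, steps = 0, len(arr) - 1, 0
    while low <= high:
        steps += 1
        mid = (low + high) // 2
        if arr[mid] == target:
            return steps
        elif arr[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return steps  # steps taken to exhaust the search when the target is absent

for n in (8, 64, 512, 4096):
    print(n, binary_search_steps(list(range(n)), -1))  # search for an absent target
# prints 8 3, 64 6, 512 9, 4096 12: each 8x growth in n adds only 3 steps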

Conclusion

Even if you are not preparing for an interview, learning Big O notation is very helpful if you are interested in writing efficient code. Big O notation enables you to understand algorithm efficiency, make wiser software development decisions, and communicate clearly with other developers. It is a helpful tool for improving performance and building more effective software.

I published an article detailing how I used Python to solve the LeetCode challenge "Two Sum". I came up with two methods; with the first, my initial strategy, I was simply interested in getting the code to work and didn't care about how much space it used or how quickly it ran. Using my understanding of Big O, I was able to create a second, more efficient approach. The article is available here: Two Sum - LeetCode problem 1 solution in python

Here's also a Big O cheatsheet.

Say hi to me on Twitter: @Sammie_savvie
