Tural Suleymani

So, What Exactly Is Big O?

Regardless of what programming language we write in, not every interview question is about the specific technology or language we were invited to interview for.

Based on the interviews that my students and I have been invited to so far, I can say quite confidently that many companies will ask you questions about algorithms and data structures.

Algorithms and data structures are the foundation of programming. This area contains fundamental knowledge that every programmer should have, no matter what language they write in.

In this article, I will talk about measuring the speed of algorithms.

Algorithms, as we know, consist of a sequence of operations performed on initial data to obtain a result. In short, an algorithm is a sequence of specific steps (operations) that leads us to a final result.


A good algorithm must first be correct. If the algorithm does not solve the problem correctly, its other attributes are irrelevant.

The second attribute of a good algorithm is understandability/readability. A well-written algorithm should be easy to follow. That is, it should be written so that you, or those who work with the code after you, can easily understand, follow, and change it.

The topic we cover today relates to the third attribute of an algorithm: its efficiency and scalability. Big O notation is studied as part of algorithm efficiency. By the efficiency of an algorithm, we mean measuring its speed.

The efficiency of an algorithm is studied in complexity theory. The speed of an algorithm depends on many internal and external factors, among them:

- Time
- Space (memory)
- Others:
  - Network
  - Hardware (Sensor, Scanner, Printer, etc.)
  - Graphics, etc.

Among the above factors, the most important ones are the **Time** and **Space** factors.

Reasons affecting space complexity (see the sketch below):

- calling functions
- variables
- data structures
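
As a rough illustration, here is a small C# sketch (the method names and tasks are mine, not from the article) contrasting a method that uses constant extra space with one whose extra memory grows with the input:

```csharp
// A minimal sketch of space complexity, using hypothetical methods.

// Uses only one extra variable regardless of input size -> O(1) space.
static int SumConstantSpace(int[] numbers)
{
    int sum = 0;
    foreach (int n in numbers)
        sum += n;
    return sum;
}

// Allocates a new array proportional to the input -> O(n) space.
static int[] DoubleAll(int[] numbers)
{
    int[] result = new int[numbers.Length];
    for (int i = 0; i < numbers.Length; i++)
        result[i] = numbers[i] * 2;
    return result;
}
```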

Reasons affecting time complexity:

- Operations (+, *, -, /)
- Loops (for, while)
- Comparisons (<, >, ==)

Time and space often trade off against each other: some algorithms gain in time and lose in space, while others gain in space and lose in time. Very few algorithms allow us to win in both at the same time (see the sketch below).
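
A classic illustration of this trade-off is memoization. The sketch below is my own hypothetical example, not from the article: caching previously computed Fibonacci values spends extra memory to save time.

```csharp
using System.Collections.Generic;

// Time/space trade-off sketch: the cache costs O(n) extra memory,
// but turns an exponential-time recursion into a linear-time one.
static class Fib
{
    private static readonly Dictionary<int, long> cache = new Dictionary<int, long>();

    public static long Compute(int n)
    {
        if (n <= 1) return n;
        if (cache.TryGetValue(n, out long known))
            return known;                  // gain in time: skip recomputation

        long result = Compute(n - 1) + Compute(n - 2);
        cache[n] = result;                 // lose in space: one entry per n
        return result;
    }
}
```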

When measuring the efficiency of an algorithm, the following cases come up:

- Every case: the behavior across all executions of the algorithm
- Best case: the most favorable execution
- Average case: the typical execution
- Worst case: the least favorable execution
- Expected case, etc.
Big O notation measures the efficiency of an algorithm in the worst case.

The worst case tells us what speed the algorithm will have under the least favorable conditions.

But why is the speed of algorithms not measured by a timer in the usual way?
If we measure an algorithm with a timer, we get only a relative speed. The same algorithm might execute in 0.5 seconds on computer A and 1.3 seconds on computer B; the measured speed depends on the computer's parameters.

If the algorithm performs CPU-bound calculations, it runs fast on a computer with a high clock frequency and slowly on a weaker machine. We need a unit of measure under which the algorithm's speed does not change, no matter how fast the computer is or what parameters it has. That is, we need a measurement that describes the speed of the algorithm independently of external factors.

As this unit of measure, we take the speed the algorithm will have in the worst case. Big O asymptotic notation, or simply Big O, is intended to describe the worst case.

Big O also allows us to characterize the efficiency of an algorithm regardless of the size of the input (arguments) passed to it!

That is, whether the algorithm is fed an array of 10 elements or an array of 10,000, we can still describe its growth rate, and that characterization stays the same regardless of the input. Big O is written with a capital letter O.

Depending on the variety of algorithms, their internal structure, and the way they solve a problem, several rules emerge.

Those rules are as follows:

1. If the algorithm iterates over the n elements passed to it to solve the problem, then, for f(n) = n, its Big O is O(f(n)), or simply O(n).

```csharp
static bool IsFound(int[] inArray, int value)
{
    bool _isFound = false; // O(1)
    foreach (int item in inArray) // O(n)
    {
        if (item == value)
        {
            _isFound = true;
            break;
        }
    }
    // Constants will be ignored, so Big O is O(n).
    return _isFound;
}
```

Example

Search, insert, and delete operations on arrays, which many programming languages provide as built-in constructs, are accepted as O(n). This means that, in the worst case, n steps are taken to perform these operations. We can see this in the simple example above. Imagine we have an array of numbers and need to find the number 45 in it. In the best case, the number is the first element of the array; in the worst case, it sits at the last index.

Although the loop breaks after finding the element, the Big O of this operation is still O(n), because Big O measures the worst case.

2) If an algorithm takes f(n) steps and then g(n) steps in sequence, its Big O equals O(f(n)) + O(g(n)). For example, if f(n) = g(n) = n, this gives O(n) + O(n) = O(2n) = O(n): in Big O, constants are dropped under addition and multiplication by a constant. We usually encounter this when an algorithm contains two loops that are not nested inside each other (see the sketch below).
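
Here is a minimal sketch of that case (the method name and task are mine, for illustration): two consecutive loops over the same input, giving O(n) + O(n) = O(n).

```csharp
// Two sequential (non-nested) loops: O(n) + O(n) = O(2n) = O(n).
static int SumAndCountEvens(int[] numbers)
{
    int sum = 0;
    foreach (int n in numbers)   // first pass: O(n)
        sum += n;

    int evens = 0;
    foreach (int n in numbers)   // second pass, not nested: O(n)
        if (n % 2 == 0)
            evens++;

    return sum + evens;
}
```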

3) If, for each of the f(n) elements the algorithm processes, g(n) steps are taken, then Big O is O(f(n)) * O(g(n)). For f(n) = g(n) = n this gives n * n = n², i.e. O(n²). You usually get this case when you write a loop inside another loop (see the sketch below).
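
A minimal sketch of the nested case (again, a hypothetical example of mine): a duplicate check whose inner loop runs n times for each of the n outer iterations.

```csharp
// A nested loop: for each of the n elements, the inner loop takes n steps,
// so the total is n * n = O(n^2).
static bool HasDuplicate(int[] numbers)
{
    for (int i = 0; i < numbers.Length; i++)        // O(n)
        for (int j = 0; j < numbers.Length; j++)    // O(n) per outer step
            if (i != j && numbers[i] == numbers[j])
                return true;
    return false;
}
```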

4) If, as in rule 2, an algorithm takes f(n) steps and g(n) steps, and there is an order-of-magnitude difference between f(n) and g(n), then the smaller term is dropped for simplification: for f(n) > g(n), Big O is taken to be O(f(n)). Since we calculate performance in the worst case, as the input grows the gap between f(n) and g(n) widens, and after a certain point g(n) becomes so small relative to f(n) that its steps no longer matter; thus only the f(n) steps are counted (see the sketch below).
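
For instance (a hypothetical sketch of mine), a method that does a nested pass (n² steps) followed by a single pass (n steps) is O(n² + n) = O(n²), because the linear term is dominated as the input grows.

```csharp
// O(n^2) + O(n) = O(n^2): the linear pass is dominated by the nested one.
static void PrintPairsThenItems(int[] numbers)
{
    // nested pass: n * n = O(n^2) steps
    foreach (int a in numbers)
        foreach (int b in numbers)
            System.Console.WriteLine($"{a}, {b}");

    // single pass: O(n) steps, dropped next to O(n^2)
    foreach (int n in numbers)
        System.Console.WriteLine(n);
}
```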

5) Constants are ignored.

O(C * f(n)) = O(f(n))

O(f(C * n)) = O(f(n))

O(C + f(n)) = O(f(n))
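
As a final sketch (another hypothetical example of mine), a loop that does three constant-time operations per element takes 3n steps in total, yet O(3n) = O(n):

```csharp
// Three constant-time operations per element: 3n steps, but O(3n) = O(n).
static long SumOfSquaresPlusOne(int[] numbers)
{
    long total = 0;
    foreach (int n in numbers)
    {
        int square = n * n;        // 1st operation
        int plusOne = square + 1;  // 2nd operation
        total += plusOne;          // 3rd operation
    }
    return total;
}
```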

Want to dive deeper?

Regularly, I share my senior-level expertise on my DecodeBytes YouTube channel, breaking down complex topics like .NET, Microservices, Apache Kafka, Javascript, Software Design, Node.js, and more into easy-to-understand explanations. Join us and level up your skills!
