codewithjohnson

Running Times of Algorithms

  1.1 Introduction
  1.2 Estimating the Running Times of Algorithms
  1.3 Common Running Times
  1.4 Some Notes About Big O Notation
  1.5 Logarithms and Binary Search

1.1 Introduction

In computer science, one valuable skill is the ability to analyze an algorithm and predict its running time. Choosing the right algorithm can significantly impact a program's performance: an algorithm that finishes in a few seconds can be the difference between success and frustration when the alternative takes days or weeks to complete.

Example: Summing Integers from 1 to n

Let's explore three different methods to sum the integers from 1 to n:

  1. Looping Approach: The most straightforward method is to loop through the integers from 1 to ( n ) and maintain a running total. Here's how it looks in JavaScript:
   // Visit each integer once, keeping a running total: n additions.
   let total = 0;
   for (let i = 1; i <= n; i++) {
       total += i;
   }
  2. Mathematical Formula: If you’re familiar with mathematics, you can use the formula

S = \frac{n(n + 1)}{2}

(The formula follows from pairing the terms: 1 with n, 2 with n − 1, and so on, giving n/2 pairs that each sum to n + 1.) This allows you to calculate the sum in a single line:

   let total = (n * (n + 1)) / 2;
  3. Nested Loops: A more convoluted approach involves using nested loops:
   // For each i, increment total i times: 1 + 2 + ... + n increments in all.
   let total = 0;
   for (let i = 1; i <= n; i++) {
       for (let j = 0; j < i; j++) {
           total++;
       }
   }

While any of these algorithms might work well for adding numbers from 1 to 100, the choice of algorithm becomes crucial when scaling up to larger numbers, such as summing from 1 to a billion.

Performance Testing

I tested the performance of these algorithms by summing integers from 1 to one billion. The results were surprising:

  • The looping approach took about 1 second to compute the sum.
  • The mathematical formula returned the answer almost instantly.
  • The nested loops approach would take an estimated 12 years to finish, based on its progress over three minutes.
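
If you want to reproduce the experiment, here is a minimal sketch of a timing harness. It is my own construction (the post does not show its test code) and assumes a runtime where performance.now() is available, such as a browser or a recent version of Node.js:

   // Sum 1..n with a single loop: O(n).
   function sumLoop(n) {
       let total = 0;
       for (let i = 1; i <= n; i++) {
           total += i;
       }
       return total;
   }

   // Sum 1..n with the closed-form formula: O(1).
   function sumFormula(n) {
       return (n * (n + 1)) / 2;
   }

   const n = 1_000_000_000; // one billion, as in the test above

   for (const [label, fn] of [["loop", sumLoop], ["formula", sumFormula]]) {
       const start = performance.now();
       const result = fn(n);
       const ms = performance.now() - start;
       // Note: the true sum (500000000500000000) exceeds Number.MAX_SAFE_INTEGER,
       // so ordinary JavaScript numbers only approximate it; BigInt would be exact.
       console.log(`${label}: ${result} in ${ms.toFixed(1)} ms`);
   }
   // The nested-loop version is deliberately omitted: at n = 1e9 it would run for years.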

Big O Notation

In computer science terminology, we describe the performance of these algorithms using Big O notation:

  • The looping algorithm is O(n).
  • The mathematical approach is O(1).
  • The nested-loops algorithm is O(n²).

Big O notation describes the running time of algorithms, but it provides an order-of-magnitude estimate rather than an exact running time. Calculating the precise running time is often impractical due to varying factors like processor speed, memory, and the current workload on the machine.

Instead, Big O notation focuses on how an algorithm's running time grows as a specific parameter n increases. In this case, n represents the integer we are summing to, but it could also be the size of an array, the number of items in a matrix, or any other relevant measure.

Key Concepts of Big O Notation

  • We typically focus on the dominant term. For instance, O(n² + n + 1) simplifies to O(n²) because n² grows faster than n or 1 as n increases.
  • We often ignore constants. Therefore, O(3.47n) is the same as O(n).
  • A more complex expression like O(2n + 4n³ + 0.6n² - 1) simplifies to O(n³).
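
To make the dominant-term rule concrete, we can count exactly how many times total++ runs in the nested-loop algorithm from the introduction: 1 + 2 + ... + n = 0.5n² + 0.5n times. The short sketch below (my own illustration, not from the original post) shows the ratio of that count to n² settling at the constant 0.5 as n grows, which is why both the constant and the lower-order term are dropped:

   // Count the basic operations performed by the nested-loop algorithm.
   function nestedOps(n) {
       let ops = 0;
       for (let i = 1; i <= n; i++) {
           for (let j = 0; j < i; j++) {
               ops++;
           }
       }
       return ops;
   }

   for (const n of [10, 100, 1000]) {
       // ops / n² approaches 0.5, so O(0.5n² + 0.5n) simplifies to O(n²).
       console.log(n, nestedOps(n), (nestedOps(n) / (n * n)).toFixed(4));
   }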

Remember, Big O notation is a way to estimate the order of growth rather than providing exact details about running time.
