Let’s talk about Big-O notation and time complexity. To measure the performance of a program we use metrics like time and memory: the amount of time it takes for the algorithm to run and the amount of memory it uses.

The less time and memory an algorithm consumes, the better it is considered. However, time and memory usually increase with the size of the input: the same algorithm takes more time and memory on a larger input.

How, then, do we generalize and compare algorithms independent of the input size?

**Enter: Big-O notation.**

The common standard is to express the running time of an algorithm as a function of the input size, written **f(n)**.

Here **n** is taken as the size of the input.


## Different Ways to Analyze the Time Taken by an Algorithm

While analyzing the time taken by an algorithm to run, we have three cases:

- **Worst case**: Considers the input for which the algorithm takes the longest time.
- **Best case**: Considers the input for which the algorithm runs the fastest.
- **Average case**: Considers a random input.

Worst case analysis forms the upper bound for **f(n)** and best case analysis forms the lower bound.

## What is the Big-O Notation?

Big-O notation gives an upper bound on the function f(n). It is represented as:

f(n) = O(g(n))

This means that for large values of n, f(n) grows no faster than a constant multiple of g(n).

It is worth mentioning that Big-O notation asymptotically bounds the growth of the running time f(n). In other words, g(n) gives the maximum rate of growth of f(n) at large values of n.

For example, if:

f(n) = n^4 + n^3 + 4(n^2) + 5

Then:

g(n) = n^4
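To see why the lower-order terms drop out, note that for n ≥ 2 every term in f(n) is bounded by n^4, so a constant multiple of n^4 dominates the whole sum:

```latex
f(n) = n^4 + n^3 + 4n^2 + 5
     \le n^4 + n^4 + n^4 + n^4
     = 4n^4 \qquad (n \ge 2)
```

This is exactly the Big-O condition f(n) ≤ c·g(n) for n ≥ n₀, here with c = 4, n₀ = 2, and g(n) = n^4.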

## Commonly seen Big-O Notations

Here are some commonly seen Big-O notations in decreasing order of growth rate. That means the lower a notation is on the list, the better the performance of its corresponding algorithms.

- n!
- 4^n
- 2^n
- n^2
- n log(n)
- log(n!)
- n
- 2^log(n)
- (log(n))^(1/2)
- log(log(n))
- 1

One way to compare different complexities is to substitute a large value for n, say 10^6. You can also visualize the functions graphically to compare them.

## Some common examples

Here are some common examples and their time complexities in Big-O notation.

### for loop

The correct way to find the complexity of a loop is to count the number of times its innermost statement runs. This is especially helpful with multiple nested loops whose conditions depend on each other.

**Simple for loop**

```c
for (i = 1; i <= n; i++)
    m = m + 2; // constant time
```

Here the statement inside the loop runs n times, so the time complexity is:

O(n)

**Nested for loop**

```c
for (i = 1; i <= n; i++) {
    for (j = 1; j <= n; j++) {
        k = k + 1; // constant time
    }
}
```

Here we have two loops. The inner loop runs n times for every single iteration of the outer loop, and the outer loop itself runs n times. Therefore the complexity is n × n = n^2:

O(n^2)

**Loop with logarithmic complexity**

```c
for (i = 1; i <= n;)
    i = i * 2;
```

Here the value of **i** doubles at each iteration, so the loop runs about log2(n) times. Therefore the complexity is:

O(log n)

### Time Complexity analysis of common sorting algorithms

Analyzing and studying the time complexities of different sorting algorithms is very popular. A lot of research has been done in the field of sorting, since more efficient sorting algorithms reduce bottlenecks in many different areas of work.

Here are some algorithms and their best-case and worst-case complexities:

| Type of sorting | Best case | Worst case |
| --- | --- | --- |
| Bubble | O(n^2) | O(n^2) |
| Insertion | O(n) | O(n^2) |
| Selection | O(n^2) | O(n^2) |
| Quick | O(n log n) | O(n^2) |
| Merge | O(n log n) | O(n log n) |
| Tree | O(n log n) | O(n^2) |

An interesting point to note here is that the worst-case complexity of quicksort, O(n^2), is the same as that of bubble sort or insertion sort.

## Conclusion

This tutorial was about Big-O notation and how it is used to measure the time complexities of algorithms. A good practice for a programmer is to estimate the time complexity of an algorithm before implementing it.