Big-O Notation

(These are my quick notes for understanding big-O notation.)



Big-O notation is used to describe the performance or complexity of an algorithm. Specifically, it describes the worst-case scenario in terms of time or space complexity.


Let's understand these with real life examples.

O(1): Constant Time. O(1) means that an algorithm takes a constant amount of time to run, regardless of the size of the input.

Example: Imagine using bookmarks in a book. When you want to find the last page you read, you can quickly flip to the bookmarked page. Regardless of how thick the book is, finding the bookmarked page always takes the same amount of time—constant time!
    So, in summary, O(1) represents an algorithm with consistent performance, making it efficient for specific tasks.
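The bookmark idea can be sketched in a few lines of Python; the names (`book`, `bookmark`, `open_bookmark`) are illustrative, not from any library:

```python
# A minimal sketch of O(1): the cost of the operation does not depend
# on how "thick" the book is.

book = ["page %d" % i for i in range(1, 1001)]  # a 1000-page book
bookmark = 499                                  # hypothetical bookmark position

def open_bookmark(pages, index):
    """Array indexing is constant time regardless of list length."""
    return pages[index]

print(open_bookmark(book, bookmark))  # -> 'page 500'
```

Whether the book has 10 pages or 10 million, `open_bookmark` does the same single step.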



Imagine a classroom of 100 students in which you gave your pen to one person. You have to find that pen without knowing to whom you gave it. 

O(log n): Divide the class into two groups, then ask: “Is the pen on the left side or the right side of the classroom?” Take the group that has it, divide it into two, and ask again; repeat until you are left with the one student who has your pen. This is what we mean by O(log n).
O(n): Going and asking each student individually is O(n).
O(n^2): You ask the first person in the class whether they have the pen, and also ask that person about each of the other 99 people, then repeat for the next person, and so on. This is what we call O(n^2).


O(n log n): It represents an algorithm with log-linear (or quasilinear) time complexity.
Example: Merge Sort is an efficient sorting algorithm that divides the input array into smaller halves, sorts them individually, and then merges them back together. Its time complexity is O(n log n). Imagine you have a deck of cards to sort. You split the deck in half, sort each half, and then merge them. This process continues recursively until the entire deck is sorted.
    In summary, O(n log n) algorithms strike a balance between efficiency and scalability, making them useful for various tasks.
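The deck-of-cards description maps directly onto a short Merge Sort sketch:

```python
def merge_sort(cards):
    """O(n log n): split the deck, sort each half, merge the halves."""
    if len(cards) <= 1:
        return cards
    mid = len(cards) // 2
    left = merge_sort(cards[:mid])
    right = merge_sort(cards[mid:])
    # Merge two sorted halves back together in linear time.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 7]))  # -> [1, 2, 5, 7, 9]
```

The splitting gives the log n levels; the merging at each level costs n, hence n log n.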

O(2^n): It represents an algorithm with exponential time complexity.
Example: When an algorithm’s runtime doubles with each addition to the input, it exhibits exponential time complexity. For example, the Fibonacci sequence implemented recursively has a time complexity of O(2^n). Each recursive call branches into two more calls, leading to exponential growth.
    In summary, O(2^n) algorithms become significantly slower as the input size increases.
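The naive recursive Fibonacci mentioned above is short enough to show in full; each call fans out into two more, which is where the exponential growth comes from:

```python
def fib(n):
    """Naive recursion: each call spawns two more calls -> O(2^n) growth."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # -> 55
```

Try `fib(35)` and you can feel the exponential blow-up; memoizing the results brings it back down to O(n).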

O(n!): It represents an algorithm with factorial time complexity.
Example: An algorithm with O(n!) complexity performs a number of operations equal to the factorial of the input size. For example, if you have n items, the algorithm would take n! operations. As n increases, the number of operations grows rapidly.
    In summary, O(n!) algorithms become extremely inefficient for handling large input sizes due to their factorial growth.
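A classic O(n!) task is generating every ordering of n items; a minimal hand-rolled sketch (the standard library's `itertools.permutations` does the same job):

```python
def permutations(items):
    """O(n!): produce every ordering of the items -> n! results."""
    if len(items) <= 1:
        return [items]
    result = []
    for i in range(len(items)):
        rest = items[:i] + items[i + 1:]
        for p in permutations(rest):
            result.append([items[i]] + p)
    return result

print(len(permutations([1, 2, 3, 4])))  # -> 24, i.e. 4!
```

Going from 4 items (24 orderings) to 10 items (3,628,800 orderings) shows how fast n! explodes.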



If you want examples of algorithms/groups of statements with these time complexities, here is a small list -

O(1) time

  • Accessing Array Index (int a = ARR[5];)
  • Inserting a node at the head (or at a known position) of a Linked List
  • Pushing and Popping on Stack
  • Insertion and Removal from Queue
  • Finding out the parent or left/right child of a node in a tree stored in Array
  • Jumping to Next/Previous element in Doubly Linked List
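A few of the list items above, sketched with plain Python built-ins (using a list as both array and stack; the tree-index helpers are the standard formulas for a binary tree stored in an array):

```python
# Stack push/pop at the end of a list: amortized O(1).
stack = []
stack.append(10)   # push
stack.append(20)
top = stack.pop()  # pop -> 20

# Array indexing: O(1).
arr = [3, 1, 4, 1, 5, 9]
a = arr[5]         # -> 9

# Parent / children of node i in an array-stored binary tree: O(1) arithmetic.
def parent(i):      return (i - 1) // 2
def left_child(i):  return 2 * i + 1
def right_child(i): return 2 * i + 2
```

Each operation is a fixed number of steps, no matter how large the structure grows.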

O(n) time

In a nutshell, most brute-force algorithms, and simple ones that scan the input linearly, run in O(n) time:

  • Traversing an array
  • Traversing a linked list
  • Linear Search
  • Deletion of a specific element in a Linked List (Not sorted)
  • Comparing two strings
  • Checking for Palindrome
  • Counting/Bucket Sort (and you can find many more such examples)
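Two entries from the list, Linear Search and the palindrome check, as quick sketches; both make at most one pass over the input:

```python
def linear_search(arr, target):
    """O(n): may have to look at every element once."""
    for i, x in enumerate(arr):
        if x == target:
            return i
    return -1

def is_palindrome(s):
    """O(n): compare characters from both ends moving inward."""
    i, j = 0, len(s) - 1
    while i < j:
        if s[i] != s[j]:
            return False
        i += 1; j -= 1
    return True

print(linear_search([4, 8, 15, 16], 15))  # -> 2
print(is_palindrome("racecar"))           # -> True
```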

O(log n) time

  • Binary Search
  • Finding the largest/smallest number in a balanced binary search tree
  • Certain Divide and Conquer algorithms that discard half of the input at each step
  • Calculating Fibonacci numbers by the fastest exact methods (e.g., matrix exponentiation)

The basic premise here is NOT touching the complete input, but reducing the problem size with every iteration.
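Binary Search, the first item in the list, is the canonical O(log n) algorithm; it is the pen-finding halving strategy applied to a sorted array:

```python
def binary_search(sorted_arr, target):
    """O(log n): halve the search range on every comparison."""
    lo, hi = 0, len(sorted_arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_arr[mid] == target:
            return mid
        if sorted_arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))  # -> 3
```

Note the precondition: the array must already be sorted, otherwise discarding half the data is not valid.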

O(n log n) time

The 'log n' factor is typically introduced by divide and conquer. Some of these algorithms are among the best optimized and most frequently used.

  • Merge Sort
  • Heap Sort
  • Quick Sort (average case; its worst case is O(n^2))
  • Certain Divide and Conquer Algorithms based on optimizing O(n^2) algorithms
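Heap Sort from the list can be sketched with Python's standard `heapq` module: building the heap is O(n), and each of the n pops costs O(log n), giving O(n log n) overall.

```python
import heapq

def heap_sort(arr):
    """O(n log n): heapify once, then pop the minimum n times."""
    heap = list(arr)       # copy so the input is not modified
    heapq.heapify(heap)    # O(n)
    return [heapq.heappop(heap) for _ in range(len(heap))]  # n pops, O(log n) each

print(heap_sort([5, 2, 9, 1, 7]))  # -> [1, 2, 5, 7, 9]
```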

O(n^2) time

These are the less efficient choices when O(n log n) counterparts exist; they generally arise from brute-force approaches.

  • Bubble Sort
  • Insertion Sort
  • Selection Sort
  • Traversing a simple 2D array
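Bubble Sort, the first item in the list, shows where the n^2 comes from: two nested passes over the data.

```python
def bubble_sort(arr):
    """O(n^2): nested passes, swapping adjacent out-of-order pairs."""
    a = list(arr)          # copy so the input is not modified
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):   # each pass bubbles the max to the end
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

print(bubble_sort([5, 2, 9, 1, 7]))  # -> [1, 2, 5, 7, 9]
```

Compare this with the merge-sort sketch earlier: same result, but roughly n/log n times more comparisons for large inputs.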

O(n!) time

  • Solving the Travelling Salesman Problem via brute-force search
  • Generating all unrestricted permutations of a partially ordered set
  • Finding the determinant with Laplace expansion
  • Enumerating all partitions of a set
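The brute-force Travelling Salesman approach is a one-function sketch: try all (n-1)! tours and keep the cheapest. The distance matrix `D` below is a made-up 4-city example, not real data.

```python
from itertools import permutations

def tsp_brute_force(dist):
    """O(n!): evaluate every possible tour over n cities."""
    n = len(dist)
    best = float("inf")
    for perm in permutations(range(1, n)):   # fix city 0 as the start
        route = (0,) + perm + (0,)           # round trip back to the start
        cost = sum(dist[route[i]][route[i + 1]] for i in range(n))
        best = min(best, cost)
    return best

# Hypothetical symmetric 4-city distance matrix:
D = [[0, 1, 4, 6],
     [1, 0, 2, 5],
     [4, 2, 0, 3],
     [6, 5, 3, 0]]
print(tsp_brute_force(D))  # -> 12
```

With 4 cities this checks only 6 tours; with 20 cities it would be over 10^17 tours, which is why factorial algorithms are only usable for tiny inputs.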
