Big O Notation
In computer science, complexity is a measure of the resources required for an algorithm to solve a problem. The two most commonly analyzed types of complexity are:
Time Complexity: How the runtime of an algorithm increases with the size of the input.
Space Complexity: How the memory usage of an algorithm increases with the size of the input.
Both time and space complexity are often expressed using Big O notation, which describes the upper bound of an algorithm's growth rate.
The general order of growth rates is:
O(1) < O(log n) < O(n) < O(n log n) < O(n^k) < O(n^(log n)) < O(k^n) < O(n!) < O(n^n)
Constant → Logarithmic → Linear → Linearithmic → Polynomial → Super-Polynomial → Exponential → Factorial.
The following table compares the growth of various time complexities at different input sizes n (logarithms shown in base 10):
| n | O(1) | O(log n) | O(n) | O(n log n) | O(n²) | O(n³) | O(2ⁿ) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 1 | 0 | 1 | 0 | 1 | 1 | 2 |
| 10 | 1 | 1 | 10 | 10 | 100 | 1,000 | 1,024 |
| 100 | 1 | 2 | 100 | 200 | 10,000 | 1,000,000 | 1.27e30 |
| 1,000 | 1 | 3 | 1,000 | 3,000 | 1,000,000 | 1.0e9 | 1.07e301 |
| 10,000 | 1 | 4 | 10,000 | 40,000 | 1.0e8 | 1.0e12 | - |
| 100,000 | 1 | 5 | 100,000 | 500,000 | 1.0e10 | 1.0e15 | - |
| 1,000,000 | 1 | 6 | 1,000,000 | 6,000,000 | 1.0e12 | 1.0e18 | - |
Amortized time complexity refers to the average time per operation over a sequence of operations, rather than analyzing the worst case for each individual operation.
It helps when an expensive operation happens occasionally, but most operations are cheap. Instead of considering the worst-case for each operation, we spread the cost across multiple operations to get a more realistic average cost.
Scenario
Suppose we use a dynamic array (like ArrayList in Java).
If an array is full, we double its size (e.g., from 4 to 8 elements).
Copying elements to a new array seems expensive, but it happens infrequently.
| Operation | Complexity |
| --- | --- |
| Insert (when space is available) | O(1) |
| Insert (when resizing) | O(n) (copying n elements) |
Amortized Analysis
Let’s analyze n insertions. A resize that happens when the array holds i elements costs O(i), but resizes occur only when the size reaches a power of two (1, 2, 4, 8, ...). The total copying cost across n insertions is therefore at most 1 + 2 + 4 + ... + n < 2n, which is O(n). Amortized cost per operation = O(n) / n = O(1).
Thus, while a single resize is O(n), the amortized time per insertion remains O(1).
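A minimal sketch of this doubling strategy (the class and method names are illustrative; java.util.ArrayList itself grows by roughly 1.5x and carries more bookkeeping):

```java
import java.util.Arrays;

// Minimal dynamic array illustrating amortized O(1) insertion.
public class DynamicArray {
    private int[] elements = new int[4]; // initial capacity
    private int size = 0;

    public void add(int value) {
        if (size == elements.length) {
            // O(n) resize: copy all elements into an array twice as large.
            elements = Arrays.copyOf(elements, elements.length * 2);
        }
        elements[size++] = value; // O(1) in the common case
    }

    public int get(int index) {
        return elements[index];
    }
}
```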
An algorithm runs in constant time if its runtime does not change with the input size.
Example: Accessing an array element by index.
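For instance (a trivial sketch), one indexed read costs the same no matter how long the array is:

```java
public class ConstantTime {
    public static void main(String[] args) {
        int[] numbers = {5, 3, 8, 1};
        // O(1): a single indexed read, independent of the array's length.
        System.out.println(numbers[2]); // prints 8
    }
}
```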
An algorithm runs in logarithmic time if its runtime grows logarithmically with the input size. These algorithms reduce the problem size by a fraction (typically half) at each step. This means that as the input size increases, the number of steps needed grows logarithmically rather than linearly.
What is the base of the log used here?
All logarithmic functions, regardless of base, can be written as O(log n) in Big O notation: by the change-of-base formula, log_a(n) = log_b(n) / log_b(a), so logarithms of different bases differ only by a constant factor, which Big O ignores.
Example: Binary search.
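A minimal iterative sketch of binary search, assuming the input array is already sorted:

```java
public class BinarySearch {
    // Returns the index of target in the sorted array, or -1 if absent.
    // Each iteration halves the remaining range, so the loop runs O(log n) times.
    public static int search(int[] sorted, int target) {
        int low = 0, high = sorted.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2; // avoids overflow of (low + high)
            if (sorted[mid] == target) {
                return mid;
            } else if (sorted[mid] < target) {
                low = mid + 1;  // discard the left half
            } else {
                high = mid - 1; // discard the right half
            }
        }
        return -1;
    }
}
```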
An algorithm runs in linear time if its runtime grows linearly with the input size.
Example: Finding the maximum element in an array.
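A sketch of the maximum-element scan, one pass over the input:

```java
public class MaxFinder {
    // O(n): visits each element exactly once. Assumes values is non-empty.
    public static int max(int[] values) {
        int best = values[0];
        for (int i = 1; i < values.length; i++) {
            if (values[i] > best) {
                best = values[i];
            }
        }
        return best;
    }
}
```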
An algorithm runs in linearithmic time if its runtime grows in proportion to n log n: the running time grows linearly with the input size n, multiplied by an additional logarithmic factor.
Example: Efficient sorting algorithms like Merge Sort and Quick Sort (on average).
The copy step in a Java Merge Sort is typically done with System.arraycopy, whose five parameters are:
sourceArray → The array to copy from.
sourceStartIndex → The starting index in the source array.
destinationArray → The array to copy into.
destinationStartIndex → The starting index in the destination array.
length → The number of elements to copy.
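A minimal Merge Sort sketch using System.arraycopy for the copy steps (class and method names are illustrative; production sorts add optimizations such as insertion-sort cutoffs for small ranges):

```java
public class MergeSort {
    // Sorts the array in O(n log n): log n levels of splitting,
    // with O(n) work to merge at each level.
    public static void sort(int[] array) {
        if (array.length < 2) return;
        int mid = array.length / 2;
        int[] left = new int[mid];
        int[] right = new int[array.length - mid];
        System.arraycopy(array, 0, left, 0, mid);             // copy first half
        System.arraycopy(array, mid, right, 0, right.length); // copy second half
        sort(left);
        sort(right);
        merge(array, left, right);
    }

    // Merges two sorted halves back into the target array in O(n).
    private static void merge(int[] target, int[] left, int[] right) {
        int i = 0, j = 0, k = 0;
        while (i < left.length && j < right.length) {
            target[k++] = (left[i] <= right[j]) ? left[i++] : right[j++];
        }
        // Copy whatever remains of either half (at most one copy is non-empty).
        System.arraycopy(left, i, target, k, left.length - i);
        System.arraycopy(right, j, target, k + (left.length - i), right.length - j);
    }
}
```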
An algorithm runs in quadratic time if its runtime grows proportionally to the square of the input size.
Example: Simple sorting algorithms like Bubble Sort, Selection Sort.
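A minimal Bubble Sort sketch; the nested loops give the O(n²) behavior:

```java
public class BubbleSort {
    // O(n^2): nested loops compare (and possibly swap) adjacent pairs.
    public static void sort(int[] array) {
        for (int pass = 0; pass < array.length - 1; pass++) {
            // After each pass, the largest remaining element has bubbled
            // to the end, so the inner loop can stop earlier.
            for (int i = 0; i < array.length - 1 - pass; i++) {
                if (array[i] > array[i + 1]) {
                    int tmp = array[i]; // swap out-of-order neighbors
                    array[i] = array[i + 1];
                    array[i + 1] = tmp;
                }
            }
        }
    }
}
```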
An algorithm runs in cubic time if its runtime grows proportionally to the cube of the input size.
Example: Certain dynamic programming algorithms.
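One classic cubic dynamic programming algorithm is Floyd-Warshall for all-pairs shortest paths; a minimal sketch (assumes dist is pre-filled with direct edge weights, using a large sentinel well below Long.MAX_VALUE / 2 for missing edges so the sums below cannot overflow):

```java
public class FloydWarshall {
    // O(n^3): three nested loops over all vertices.
    // On return, dist[i][j] holds the shortest-path distance from i to j.
    public static void shortestPaths(long[][] dist) {
        int n = dist.length;
        for (int k = 0; k < n; k++) {          // intermediate vertex
            for (int i = 0; i < n; i++) {      // source
                for (int j = 0; j < n; j++) {  // destination
                    if (dist[i][k] + dist[k][j] < dist[i][j]) {
                        dist[i][j] = dist[i][k] + dist[k][j];
                    }
                }
            }
        }
    }
}
```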
Super-polynomial growth, such as O(n^(log n)), lies between polynomial and exponential: it grows faster than any polynomial n^k but slower than any exponential k^n. It appears in some combinatorial algorithms and recursion-tree analyses.
An algorithm runs in exponential time if its runtime doubles with each additional input element. Example: Generating every subset of a set; brute-force solutions to the traveling salesman problem grow even faster, at O(n!).
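A sketch that enumerates all 2ⁿ subsets of an array via bitmasks (valid for n below 31 with int masks):

```java
public class Subsets {
    // O(2^n): there are 2^n bitmasks, one per subset.
    public static void printAll(int[] items) {
        int n = items.length;
        for (int mask = 0; mask < (1 << n); mask++) { // each mask encodes one subset
            StringBuilder subset = new StringBuilder("{");
            for (int i = 0; i < n; i++) {
                if ((mask & (1 << i)) != 0) { // bit i set => items[i] is included
                    subset.append(items[i]).append(' ');
                }
            }
            System.out.println(subset.append('}'));
        }
    }
}
```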
An algorithm uses constant space if the amount of memory it requires does not change with the input size. Example: Swapping two variables.
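A trivial sketch of an in-place swap; only one temporary variable is needed regardless of input size:

```java
public class Swap {
    // O(1) extra space: a single temporary variable, no matter how
    // large the surrounding array is.
    public static void swap(int[] array, int i, int j) {
        int tmp = array[i];
        array[i] = array[j];
        array[j] = tmp;
    }
}
```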
An algorithm uses linear space if the amount of memory it requires grows linearly with the input size. Example: Creating a copy of an array.
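A sketch of an array copy, whose extra memory grows with the input:

```java
import java.util.Arrays;

public class LinearSpace {
    // O(n) extra space: the copy holds one slot per input element.
    public static int[] duplicate(int[] original) {
        return Arrays.copyOf(original, original.length);
    }
}
```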
An algorithm uses quadratic space if the amount of memory it requires grows proportionally to the square of the input size. Example: Creating a 2D array.
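A sketch that allocates an n-by-n grid:

```java
public class QuadraticSpace {
    // O(n^2) extra space: one cell for every pair of indices (i, j).
    public static int[][] pairTable(int n) {
        return new int[n][n];
    }
}
```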
An algorithm uses logarithmic space if the amount of memory it requires grows logarithmically with the input size. Example: Recursive algorithms that divide the problem in half at each step.
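A recursive binary search sketch; here it is the call stack, not the heap, that grows as O(log n), since each call halves the range:

```java
public class RecursiveBinarySearch {
    // O(log n) space: the recursion is at most ~log2(n) frames deep.
    // Call initially with low = 0 and high = sorted.length - 1.
    public static int search(int[] sorted, int target, int low, int high) {
        if (low > high) return -1;
        int mid = low + (high - low) / 2;
        if (sorted[mid] == target) return mid;
        if (sorted[mid] < target) return search(sorted, target, mid + 1, high);
        return search(sorted, target, low, mid - 1);
    }
}
```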