Problems
For loop
Simple for loop
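The original code isn't reproduced here; a representative simple loop (an assumed example) does O(1) work per iteration, so the total is O(n):

```cpp
// Assumed example: sum of an array with a single for loop.
int sum(const int arr[], int n) {
    int total = 0;
    for (int i = 0; i < n; i++) {  // n iterations, O(1) work each -> O(n) total
        total += arr[i];
    }
    return total;
}
```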
Nested for loop
Outer Loop (`for (i = 0; i < N; i++)`): The outer loop runs from `i = 0` to `i = N - 1`, which means it executes N times.
Inner Loop (`for (j = N; j > i; j--)`): For a given value of `i`, the inner loop runs from `j = N` down to `j = i + 1`. The number of iterations of the inner loop depends on the current value of `i`:
When `i = 0`, the inner loop runs N times.
When `i = 1`, the inner loop runs N - 1 times.
When `i = 2`, the inner loop runs N - 2 times.
...
When `i = N - 1`, the inner loop runs 1 time.
Thus, the total number of inner-loop iterations is the sum of the first N natural numbers:
Total iterations = N + (N − 1) + (N − 2) + ⋯ + 1 = N(N + 1)/2
So the time complexity is O(N²).
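A minimal sketch of the nested loop under analysis; the counter is added here only to make the iteration count visible:

```cpp
#include <iostream>

int main() {
    int N = 8;                          // example size
    int count = 0;
    for (int i = 0; i < N; i++) {       // outer loop: N iterations
        for (int j = N; j > i; j--) {   // inner loop: N - i iterations
            count++;
        }
    }
    std::cout << count << "\n";         // prints N*(N+1)/2 = 36 for N = 8
}
```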
Outer Loop
The outer loop runs with the variable `i`, starting at N and halving its value in each iteration (`i /= 2`). Thus, the value of `i` takes the following sequence:
N, N/2, N/4, N/8, …, 1
The number of iterations in the outer loop is determined by how many times N can be halved before it becomes 1. This is equivalent to the number of times N can be divided by 2, which is:
Number of iterations=⌊log2(N)⌋+1
So, the outer loop runs O(logN) times.
Inner Loop
The inner loop runs from `j = 0` up to `j < i`.
For each iteration of the outer loop, the number of iterations of the inner loop is equal to the current value of `i`:
When i=N, the inner loop runs N times.
When i=N/2, the inner loop runs N/2 times.
When i=N/4, the inner loop runs N/4 times.
...
When i=1, the inner loop runs 1 time.
This is a geometric series with first term N and common ratio 1/2. The sum of the series is N/(1 − 1/2) = 2N, so the total work across all iterations is O(N).
(Compare with a nested loop whose inner loop runs from j = i + 1 to N: the first time through, j runs for N − 1 steps; the second time, N − 2 steps; then N − 3; and so on. The total number of steps is (N − 1) + (N − 2) + (N − 3) + ⋯ + 2 + 1 = the sum of 1 through N − 1 = N(N − 1)/2, which is O(N²).)
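A sketch of the halving loop analyzed above (the exact code isn't shown, so this structure is an assumption):

```cpp
// Assumed structure: outer loop halves i, inner loop does i units of work.
int work(int N) {
    int count = 0;
    for (int i = N; i >= 1; i /= 2) {   // outer loop: O(log N) iterations
        for (int j = 0; j < i; j++) {   // inner loop: i iterations
            count++;
        }
    }
    return count;                        // N + N/2 + N/4 + ... ≈ 2N -> O(N)
}
```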
Check if Prime number
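The code isn't shown here; a typical version (assumed) tests divisors only up to √n, which is why it runs in O(√n) time:

```cpp
// Assumed implementation: trial division up to sqrt(n).
bool isPrime(int n) {
    if (n < 2) return false;
    for (int i = 2; i * i <= n; i++) {   // only divisors up to sqrt(n)
        if (n % i == 0) return false;
    }
    return true;                          // O(sqrt(n)) time, O(1) space
}
```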
Calculate Square root
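Again the code isn't shown; a common loop-based version (assumed) tries each integer until its square reaches n, giving O(√n) time:

```cpp
// Assumed implementation: linear scan of candidate roots.
int sqrtTrial(int n) {
    for (int guess = 1; guess * guess <= n; guess++) {
        if (guess * guess == n) return guess;  // exact root found
    }
    return -1;                                  // n is not a perfect square
}
```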
Array Copy
For each element `i`, a new array of size `i + 1` is created.
Thus, the total space used for all arrays is:
1 + 2 + 3 + ⋯ + N = N(N + 1)/2
This is O(N²) space complexity.
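A sketch of the pattern described above, assuming each iteration allocates and keeps a fresh array of size i + 1:

```cpp
#include <vector>

// Assumed example: every iteration keeps its own array alive.
std::vector<std::vector<int>> arrayCopies(int N) {
    std::vector<std::vector<int>> copies;
    for (int i = 0; i < N; i++) {
        copies.push_back(std::vector<int>(i + 1));  // new array of size i + 1
    }
    return copies;   // total storage: 1 + 2 + ... + N = N(N+1)/2 -> O(N^2)
}
```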
While loop
Simple while loop
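No code survives here either; a representative while loop (assumed) that decrements n once per iteration runs in O(n) time:

```cpp
// Assumed example: count down from n one step at a time.
int countDown(int n) {
    int steps = 0;
    while (n > 0) {   // one iteration per decrement -> O(n)
        n--;
        steps++;
    }
    return steps;
}
```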
Sum of Digits
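The code isn't shown; the standard version (assumed) peels off one digit per iteration, so it runs in O(d) time where d ≈ log₁₀(n) is the number of digits:

```cpp
// Assumed implementation: strip the last digit each pass.
int sumDigits(int n) {
    int sum = 0;
    while (n > 0) {
        sum += n % 10;   // take the last digit
        n /= 10;         // drop the last digit
    }
    return sum;          // O(log n) iterations, one per digit
}
```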
Recursion
Divide and Conquer Method
The function searches for the occurrences of the element `k` in a sorted vector `V` using a recursive approach.
Base Case: If the `start` index exceeds the `end` index, it returns 0 (no match found).
Divide and Conquer:
If `V[mid] < k`, search the right half of the array (`mid + 1` to `end`).
If `V[mid] > k`, search the left half of the array (`start` to `mid − 1`).
When `V[mid] == k`: count this occurrence of `k` (+1) and recursively search both the left and right halves to count all other occurrences.
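A sketch matching this description (the function name and signature are assumptions):

```cpp
#include <vector>

// Counts occurrences of k in sorted vector V within [start, end].
int countOccurrences(const std::vector<int>& V, int k, int start, int end) {
    if (start > end) return 0;                                      // base case
    int mid = start + (end - start) / 2;
    if (V[mid] < k) return countOccurrences(V, k, mid + 1, end);    // right half
    if (V[mid] > k) return countOccurrences(V, k, start, mid - 1);  // left half
    // V[mid] == k: count it, then search both halves for more occurrences
    return 1 + countOccurrences(V, k, start, mid - 1)
             + countOccurrences(V, k, mid + 1, end);
}
```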
Time Complexity Analysis
The function involves binary search combined with multiple recursive calls when V[mid]==k. Let's analyze:
Best Case
If k is not present in the array, the function acts like a standard binary search:
At each step, it halves the search space.
The recursion depth is O(logN).
In this case, the time complexity is: O(logN)
Worst Case
When all elements in the array are equal to k:
For every element, the function recursively explores both halves of the array.
This results in a full binary tree of recursive calls.
The total number of recursive calls is O(N), since all elements are visited once.
Thus, in the worst case, the time complexity is: O(N)
Space Complexity Analysis
The space complexity is determined by the recursive call stack. Each recursive call operates on at most half of the current range, so the depth of any chain of calls is bounded by the number of times N can be halved. This holds even when all elements equal `k` and both halves are explored: the two subcalls run one after the other, so only one chain of calls is on the stack at a time.
Thus, the space complexity is O(log N); the worst case differs from the best case in the total number of calls (O(N) vs. O(log N)), not in the stack depth.
Minimum Path
The function `findMinPath` computes the minimum path sum from the top-left corner (0, 0) to the bottom-right corner (R − 1, C − 1) of a matrix `V` by recursively exploring the possible paths. At every cell, the function can move:
Down (r + 1, c)
Right (r, c + 1)
The function returns the minimum sum over all such paths.
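A sketch matching this description (the signature and the out-of-bounds handling are assumptions):

```cpp
#include <vector>
#include <algorithm>
#include <climits>

int findMinPath(const std::vector<std::vector<int>>& V, int r, int c) {
    int R = (int)V.size(), C = (int)V[0].size();
    if (r >= R || c >= C) return INT_MAX;           // fell off the grid
    if (r == R - 1 && c == C - 1) return V[r][c];   // reached bottom-right corner
    int down  = findMinPath(V, r + 1, c);           // move down
    int right = findMinPath(V, r, c + 1);           // move right
    return V[r][c] + std::min(down, right);         // two recursive calls per cell
}
```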
Time Complexity Analysis
Recursive Calls
At every cell (r, c), the function makes two recursive calls:
`findMinPath(V, r + 1, c)` — explores the path moving down.
`findMinPath(V, r, c + 1)` — explores the path moving right.
If the matrix has R rows and C columns, the function explores all possible paths from the top-left to the bottom-right corner. In the worst case, the total number of recursive calls is exponential: T(R,C)=2^(R+C−2)
This is because, for a path of length R+C−2 (the total number of moves i.e. R−1 downward moves to go from row 0 to row R−1 and C−1 rightward moves to go from column 0 to column C−1), there are 2^(R+C−2) combinations of recursive calls.
Exponential Growth
The time complexity of the function is: O(2^(R+C))
Space Complexity Analysis
Recursive Call Stack
The space complexity is determined by the maximum depth of the recursion. Since the function makes recursive calls in two directions (down and right), the depth of the recursion is bounded by the longest path in the matrix, which is R+C−2
Thus, the space complexity is: O(R+C)
Minimum Path with Dynamic Programming (DP)
The above implementation recomputes the results for overlapping subproblems, which makes it inefficient. To improve the time complexity, we can use Dynamic Programming (DP) with memoization or tabulation.
Time Complexity with Memoization
With memoization, each cell in the matrix is computed at most once. The time complexity becomes:
O(R×C)
Space Complexity with Memoization
The space complexity with memoization includes the storage for the DP table (O(R×C)) and the recursive call stack (O(R+C)):
O(R×C + R + C) = O(R×C)
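A memoized sketch under the same assumptions (the memo layout, and that cell values are non-negative so −1 can mark "uncomputed", are assumptions):

```cpp
#include <vector>
#include <algorithm>
#include <climits>

// memo must be an R x C table initialized to -1.
int findMinPathMemo(const std::vector<std::vector<int>>& V, int r, int c,
                    std::vector<std::vector<int>>& memo) {
    int R = (int)V.size(), C = (int)V[0].size();
    if (r >= R || c >= C) return INT_MAX;
    if (r == R - 1 && c == C - 1) return V[r][c];
    if (memo[r][c] != -1) return memo[r][c];         // already computed: O(1)
    int down  = findMinPathMemo(V, r + 1, c, memo);
    int right = findMinPathMemo(V, r, c + 1, memo);
    memo[r][c] = V[r][c] + std::min(down, right);    // each cell computed once
    return memo[r][c];
}
```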
Sum of all values of Balanced Binary Search Tree
The function performs a recursive in-order traversal of a binary tree:
It visits each node once (O(N) time complexity).
It uses recursive calls to traverse the left and right subtrees.
The tree is a balanced binary search tree, so if there are N total nodes, the depth is roughly log N. By the O(branches^depth) rule, the number of calls is about 2^(log N) = N, confirming the O(N) runtime.
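A sketch of the traversal (the node layout is an assumption):

```cpp
struct Node {
    int value;
    Node* left;
    Node* right;
};

int sum(Node* node) {
    if (node == nullptr) return 0;   // base case: empty subtree contributes 0
    return sum(node->left) + node->value + sum(node->right);  // each node visited once
}
```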
Factorial of a number
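The code isn't reproduced here; the standard recursion (assumed) makes one call per decrement of n, so it runs in O(n) time with O(n) stack space:

```cpp
// Assumed implementation: straightforward recursive factorial.
long long factorial(int n) {
    if (n < 0) return -1;            // undefined for negatives
    if (n == 0) return 1;            // base case: 0! = 1
    return n * factorial(n - 1);     // one call per level -> O(n) depth
}
```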
Permutations of a string
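The analysis below cites Line 10 and Line 11 of the original listing, which isn't reproduced; a sketch along the same lines (the exact signature is an assumption):

```cpp
#include <iostream>
#include <string>

// Prints every permutation of str; call as permutation(str, "").
void permutation(std::string str, std::string prefix) {
    if (str.length() == 0) {
        std::cout << prefix << "\n";                // base case: one full permutation
    } else {
        for (size_t i = 0; i < str.length(); i++) {
            std::string rem = str.substr(0, i) + str.substr(i + 1);  // O(n) string work
            permutation(rem, prefix + str[i]);      // recurse on remaining characters
        }
    }
}
```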
Time Complexity
Step 1: Counting Base Case Calls
The base case occurs when `str.length() == 0`, meaning we've used all characters and formed a complete permutation.
Each unique permutation is a leaf node in the recursion tree.
Since there are n! permutations of an n-character string, the base case is reached n! times.
Step 2: Counting Total Function Calls
We need to count all calls, not just the base cases.
Consider how the recursion unfolds:
The first call has n branches (one for each starting character).
Each of those branches has n − 1 further branches (one for each remaining character), and so on.
This forms a recursion tree where each node makes (n − depth) recursive calls.
The total number of nodes in the tree (calls to `permutation()`) is approximately n × n!.
Step 3: Work Done per Function Call
Each function call performs:
Printing the permutation (base case only): printing a string of length n takes O(n) time.
Generating `rem` (Line 10): constructing `rem` involves substring operations, which take O(n) time. Since we create a new string in each call, this contributes to the overall complexity.
Recursive call (Line 11): the call itself is O(1), but it triggers further recursive work.
Thus, each function call does O(n) work due to substring creation and concatenation.
Final Complexity Analysis
Total number of calls: ≈ n × n!
Work per call: O(n)
Overall complexity: O(n² × n!)
Fibonacci Number
Time Complexity O(branches^depth)
Step 1: Identifying Branches and Depth
Each call to `fib(n)` makes two recursive calls: `fib(n - 1)` and `fib(n - 2)`.
This means the recursion tree branches into 2 subproblems at each level → branches = 2.
The depth of the recursion tree is n, because we reduce n by 1 or 2 in each step.
Step 2: Complexity Using Recursion Tree Growth
The recursion grows exponentially, with each level roughly doubling the number of calls.
The total number of calls is proportional to O(2ⁿ).
Space Complexity
The recursive call stack depth is at most O(n): each recursive call adds a new frame until we reach the base case (`n = 0` or `n = 1`).
So, space complexity = O(n) (due to recursion depth).
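For reference, the standard naive recursion this analysis assumes:

```cpp
int fib(int n) {
    if (n <= 1) return n;              // base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2);    // two branches per call -> O(2^n) time
}
```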
All Fibonacci numbers from 0 to n with recursion
fib(1) -> 2^1 steps
fib(2) -> 2^2 steps
fib(3) -> 2^3 steps
....
fib(n) -> 2^n steps
Therefore, the total amount of work is: 2^1 + 2^2 + 2^3 + 2^4 + ⋯ + 2^n = 2^(n+1) − 2, which is O(2^n).
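A sketch of the driver this analysis assumes, reusing the naive `fib` above:

```cpp
#include <iostream>

void allFib(int n) {
    for (int i = 0; i <= n; i++) {
        std::cout << i << ": " << fib(i) << "\n";  // each call costs O(2^i)
    }
}
```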
All Fibonacci numbers from 0 to n with memoization
Time Complexity
Step 1: Time Complexity of fib(n, memo)
In a naive recursive Fibonacci, `fib(n)` makes O(2ⁿ) calls due to repeated work.
With memoization, each Fibonacci number is computed only once.
The function still makes O(n) recursive calls in total, since each number is computed once and stored.
Thus, `fib(n, memo)` runs in O(n) time.
Step 2: Time Complexity of allFib(n)
It calls `fib(i, memo)` for each `i` from `0` to `n-1`.
Since memoization prevents redundant calculations, `fib(i, memo)` runs in O(1) for already computed values.
The first call computes Fibonacci numbers in O(n) total; all subsequent calls just fetch values from `memo` in O(1) time.
Overall, O(n) + O(n) = O(n).
Thus, the final time complexity is O(n).
Space Complexity
Step 1: Space Complexity of the memo Array
The `memo` array stores n + 1 Fibonacci values. This contributes O(n) space.
Step 2: Space Complexity of Recursion Stack
Since `fib(n, memo)` is recursive, it uses the call stack.
The maximum recursion depth is O(n) (for the deepest recursive call); memoization avoids redundant calls, so at most O(n) frames are on the stack.
Thus, final space complexity = O(n).
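A sketch of the memoized version this analysis assumes:

```cpp
#include <iostream>
#include <vector>

int fib(int n, std::vector<int>& memo) {
    if (n <= 1) return n;                            // base cases
    if (memo[n] != 0) return memo[n];                // O(1) lookup for cached values
    memo[n] = fib(n - 1, memo) + fib(n - 2, memo);   // each value computed once
    return memo[n];
}

void allFib(int n) {
    std::vector<int> memo(n + 1, 0);                 // O(n) space for cached values
    for (int i = 0; i <= n; i++) {
        std::cout << i << ": " << fib(i, memo) << "\n";
    }
}
```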
Powers of 2 from 1
Space Complexity Analysis
1. Understanding the Recursion
The function repeatedly divides `n` by 2 in each recursive call until `n < 1`.
This means the recursion depth is O(log n), since `n` is halved in each step.
2. Space Used by the Call Stack
Each recursive call adds a new frame to the call stack.
Since the recursion depth is O(log n), the maximum stack size is also O(log n).
3. No Extra Data Structures
Apart from the call stack, no extra space (like arrays or hash tables) is used.
The function only uses a few integer variables (`prev`, `curr`, `n`), which take O(1) space.
Total = O(1) + O(log n) = O(log n)
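A sketch of the function being analyzed; this matches the `prev` and `curr` variables mentioned above, though the exact code is an assumption:

```cpp
#include <iostream>

// Prints the powers of 2 from 1 up to n (assuming n >= 1).
int powersOf2(int n) {
    if (n < 1) return 0;               // nothing to print
    if (n == 1) {
        std::cout << 1 << "\n";
        return 1;
    }
    int prev = powersOf2(n / 2);       // recursion depth: O(log n)
    int curr = prev * 2;
    std::cout << curr << "\n";
    return curr;
}
```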
Compute a^b with recursion
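The code isn't shown; assuming the straightforward recursion below (with b ≥ 0), each call reduces b by 1, giving O(b) time and O(b) stack space:

```cpp
// Assumed implementation: one multiplication per recursive call.
long long power(int a, int b) {
    if (b == 0) return 1;          // base case: a^0 = 1
    return a * power(a, b - 1);    // one call per decrement of b -> O(b)
}
```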
Square root of a number by guessing
This performs a binary search over the range of possible roots, so it runs in O(log n) time.
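A sketch of the guessing approach (the helper signature is an assumption):

```cpp
int sqrtHelper(int n, int low, int high) {
    if (high < low) return -1;                       // n is not a perfect square
    int guess = low + (high - low) / 2;
    if (guess * guess == n) return guess;            // found the root
    if (guess * guess < n) return sqrtHelper(n, guess + 1, high);  // guess higher
    return sqrtHelper(n, low, guess - 1);            // guess lower
}

int sqrtGuess(int n) {
    return sqrtHelper(n, 1, n);   // range halves each call -> O(log n)
}
```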
Miscellaneous Problems
GCD with Euclidean algorithm
This is the Euclidean algorithm, which repeatedly replaces the pair (n, m) with (m, n mod m) until m = 0. The result is the greatest common divisor (GCD) of n and m.
Worst-Case Example
The worst-case input for the Euclidean algorithm occurs when the two numbers are consecutive Fibonacci numbers, because the remainder reduction is minimal in each step. For example:
For n = F(k+1) and m = F(k) (where F(k) is the k-th Fibonacci number), the algorithm performs k steps.
Since Fibonacci numbers grow exponentially (F(k) ≈ φ^k, where φ is the golden ratio), the number of steps is proportional to log N.
E.g., n = 13 and m = 8: 13 % 8 = 5, which is the Fibonacci number just before 8.
Likewise: 8 % 5 = 3, 5 % 3 = 2, 3 % 2 = 1, 2 % 1 = 0.
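The algorithm in code:

```cpp
int gcd(int n, int m) {
    if (m == 0) return n;      // base case: gcd(n, 0) = n
    return gcd(m, n % m);      // replace (n, m) with (m, n mod m)
}
```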
Different For loops
A) for (i = 0; i < n; i++)
Loop Analysis:
The loop starts at i = 0 and increments i by 1 (`i++`) while i < n.
The number of iterations is directly proportional to n.
Time Complexity: O(n)
B) for (i = 0; i < n; i += 2)
Loop Analysis:
The loop starts at i = 0 and increments i by 2 (`i += 2`) while i < n.
Since the increment step is 2, the number of iterations is about half of n: Number of iterations = ⌈n/2⌉
Time Complexity: O(n)
Although the number of iterations is halved compared to Case A, the time complexity remains O(n) because constants are ignored in Big-O notation.
C) for (i = 1; i < n; i *= 2)
Loop Analysis:
The loop starts at i = 1 and multiplies i by 2 (`i *= 2`) while i < n.
The value of i follows the sequence: 1, 2, 4, 8, … The loop runs while i < n.
The number of iterations corresponds to the number of times i can be doubled before reaching n: Number of iterations = ⌈log₂(n)⌉
Time Complexity: O(logn)
This is more efficient than both A and B because the growth rate is logarithmic, much smaller than linear.
D) for (i = n; i > -1; i /= 2)
Loop Analysis:
The loop starts at i = n and divides i by 2 (`i /= 2`) while i > −1.
The value of i follows the sequence n, n/2, n/4, …, 1, 0. The loop never terminates: with integer division, once i reaches 0, `i /= 2` leaves it at 0, which is still greater than −1, so i is stuck at 0 forever.
Check the boundedness
Check the growth rate
1. f1(n)=2^n (Exponential Growth)
This function grows exponentially.
Exponential functions grow faster than polynomial or logarithmic functions.
For large n, 2^n becomes extremely large.
Time Complexity: Exponential
2. f2(n) = n^(3/2) (Polynomial Growth)
This is a polynomial function whose exponent is 3/2 (greater than 1 but less than 2).
Polynomial functions grow slower than exponential functions but faster than logarithmic or linearithmic functions for large n.
Time Complexity: Polynomial, O(n^(3/2)).
3. f3(n) = n log n (Linearithmic Growth)
This is a linearithmic function (a combination of linear and logarithmic growth).
It grows faster than logarithmic functions but slower than any polynomial n^k with k > 1.
Time Complexity: Linearithmic, O(n log n).
4. f4(n) = n^(log n) (Super-Polynomial Growth)
The function n^(log n) grows faster than any polynomial function because the exponent itself is log n, which increases with n.
It is slower than exponential functions like 2^n but faster than standard polynomials.
Time Complexity: Super-Polynomial, O(n^(log n)).
Overall ordering for large n: n log n < n^(3/2) < n^(log n) < 2^n.
For and While loop
Outer Loop
The `for` loop iterates from `i = 0` to `i = n - 1`, so it runs n times.
Inner while Loop
The `while` loop increments `j` until one of its conditions (`j < n` or `arr[i] < arr[j]`) becomes false.
The key observation here is that `j` does not reset to 0 for each iteration of the outer `for` loop. Instead, `j` keeps its value from the previous iteration.
This means `j` only moves forward and never goes backward.
Number of Iterations
`j` starts at 0 and can be incremented at most n times in total (from 0 to n − 1) across all iterations of the outer loop, because `j` is shared across all iterations of the `for` loop and is incremented only in the `while` loop.
Time Complexity
The outer loop runs n times.
The inner `while` loop moves `j` forward at most n times in total across all iterations of the outer loop.
Thus, the total number of operations performed by both loops is O(n).
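A sketch of the loop structure being analyzed; since the original code isn't shown, the exact condition is an assumption of this shape:

```cpp
// j is shared across all iterations of the outer loop and never resets.
int scan(const int arr[], int n) {
    int j = 0;
    for (int i = 0; i < n; i++) {
        while (j < n && arr[i] < arr[j]) {   // stops when either condition fails
            j++;                              // j only moves forward
        }
    }
    return j;   // j was incremented at most n times in total -> O(n) overall
}
```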
Which of the following are equivalent to O(N)?
O(N + P), where P < N/2
O(2N)
O(N + log N)
O(N + M)
All but the last one are equivalent to O(N). If P < N/2, then N dominates P; O(2N) is O(N) because constant factors are dropped; log N is dominated by N. In O(N + M) there is no established relationship between N and M, so we have to keep both variables in there.
String Array Sorting
Suppose we had an algorithm that took in an array of strings, sorted each string, and then sorted the full array. What would the runtime be?
Let s be the length of the longest string. Let a be the length of the array.
Sorting each string takes O(s log s). We have to do this for every string (and there are a strings), so that's O(a · s log s). Now we have to sort the full array. We may be inclined to say this takes O(a log a) time, but we should also account for the string comparisons: each comparison takes O(s) time, and there are O(a log a) of them, so this step takes O(a · s log a) time.
Adding both parts, the runtime is O(a · s (log a + log s)).
Intersection (the number of elements in common) of two arrays
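The code for this one isn't included; a common approach (an assumption, not necessarily the original) sorts one array and binary-searches it for each element of the other, giving O(B log B + A log B) time:

```cpp
#include <algorithm>
#include <vector>

// Assumes elements within each array are distinct.
int intersectionCount(const std::vector<int>& a, std::vector<int> b) {
    std::sort(b.begin(), b.end());                              // O(B log B)
    int count = 0;
    for (int x : a) {                                           // A binary searches
        if (std::binary_search(b.begin(), b.end(), x)) count++; // O(log B) each
    }
    return count;
}
```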