Sorting
1. Bubble Sort
Description: Bubble Sort repeatedly steps through the list, compares adjacent elements, and swaps them if they are in the wrong order. This process is repeated until the list is sorted.
Time Complexity:
Best Case: O(n) (when the array is already sorted)
Average Case: O(n^2)
Worst Case: O(n^2)
Space Complexity: O(1) (in-place sort)
Use Cases:
Simple and easy to implement, but inefficient for large datasets.
Useful for small datasets or for educational purposes to illustrate sorting concepts.
Example
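A minimal Python sketch (the function name and sample input are illustrative). The `swapped` flag gives the O(n) best case: if a full pass makes no swaps, the array is already sorted and the loop exits early.

```python
def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        swapped = False
        # Each pass bubbles the largest remaining element to the end.
        for j in range(n - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                swapped = True
        if not swapped:  # No swaps: already sorted (O(n) best case).
            break
    return arr

print(bubble_sort([5, 1, 4, 2, 8]))  # [1, 2, 4, 5, 8]
```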
2. Selection Sort
Description: Selection sort is a sorting technique that divides the unordered list into two sub-arrays: sorted and unsorted. In each pass, it finds the minimum (or maximum for descending order) element in the unsorted sub-array and swaps it with the first element of the unsorted sub-array. This process continues until the entire list is sorted.
Time Complexity:
Best Case: O(n^2)
Average Case: O(n^2)
Worst Case: O(n^2)
Space Complexity: O(1) (in-place sort)
Use Cases:
As with bubble sort, the number of comparisons grows quadratically with the size of the array, making selection sort inefficient for large datasets.
Useful for small datasets
Example
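A minimal Python sketch of the ascending-order variant described above (function name is illustrative):

```python
def selection_sort(arr):
    n = len(arr)
    for i in range(n - 1):
        # Find the minimum element in the unsorted sub-array arr[i:].
        min_idx = i
        for j in range(i + 1, n):
            if arr[j] < arr[min_idx]:
                min_idx = j
        # Swap it into the first position of the unsorted sub-array.
        arr[i], arr[min_idx] = arr[min_idx], arr[i]
    return arr

print(selection_sort([64, 25, 12, 22, 11]))  # [11, 12, 22, 25, 64]
```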
3. Insertion Sort
Description: Insertion Sort builds the final sorted array one item at a time. It removes an element from the input data, finds the location it belongs to within the sorted list, and inserts it there.
Time Complexity:
Best Case: O(n) (when the array is already sorted)
Average Case: O(n^2)
Worst Case: O(n^2)
Space Complexity: O(1) (in-place sort)
Use Cases:
Efficient for small or partially sorted datasets, since its best-case running time is linear when few elements are out of place.
In-place sorting algorithm, meaning it sorts the data within the original array without requiring additional memory allocation.
Adaptive: efficient for data that is already substantially sorted
Used in hybrid sorting algorithms like Timsort
Becomes inefficient for large datasets due to its worst-case quadratic time complexity.
Not as efficient as Merge Sort or Quick Sort for general sorting purposes.
Example
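A minimal Python sketch (function name is illustrative). Each iteration shifts larger sorted elements one slot right until the current element's position is found; on already-sorted input the inner loop never runs, giving the O(n) best case.

```python
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        # Shift larger sorted elements right to open a slot for key.
        while j >= 0 and arr[j] > key:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
    return arr

print(insertion_sort([12, 11, 13, 5, 6]))  # [5, 6, 11, 12, 13]
```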
4. Merge Sort
Description: Merge Sort is a divide and conquer algorithm that splits the list into halves, recursively sorts each half, and then merges the sorted halves.
Time Complexity:
Best Case: O(n log n)
Average Case: O(n log n)
Worst Case: O(n log n)
Space Complexity: O(n) (not in-place)
Use Cases:
Sorting linked lists
External sorting (e.g., sorting large data sets stored on disk)
Example
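A minimal Python sketch of the top-down variant (function names are illustrative). The `<=` comparison in the merge step keeps the sort stable; the copied sub-lists account for the O(n) extra space.

```python
def merge_sort(arr):
    if len(arr) <= 1:
        return arr
    mid = len(arr) // 2
    left = merge_sort(arr[:mid])   # Recursively sort each half,
    right = merge_sort(arr[mid:])  # then merge the sorted halves.
    return merge(left, right)

def merge(left, right):
    merged = []
    i = j = 0
    # Repeatedly take the smaller head element from either half.
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # <= keeps equal elements stable
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([38, 27, 43, 3, 9, 82, 10]))  # [3, 9, 10, 27, 38, 43, 82]
```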
5. Quick Sort
Description: Quick Sort is a divide and conquer algorithm that picks an element as a pivot, partitions the array around the pivot, and recursively sorts the partitions.
Time Complexity:
Best Case: O(n log n)
Average Case: O(n log n)
Worst Case: O(n^2) (rare; mitigated with good pivot selection strategies such as randomized or median-of-three). Repeatedly partitioning the array and its sub-arrays around an element eventually sorts it, but the pivot is not guaranteed to be the median (or anywhere near it); consistently unbalanced partitions are what produce the O(n^2) worst-case runtime.
Space Complexity: O(log n) (in-place; recursion stack)
Use Cases:
Generally faster in practice than other O(n log n) algorithms
Used in systems and applications where average-case performance matters.
Efficient for large datasets due to its average O(n log n) time complexity.
In-place sorting algorithm, meaning it sorts the data within the original array without requiring significant additional memory allocation.
Pivot selection strategy can significantly impact performance.
Example
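A minimal Python sketch using Lomuto partitioning with a randomized pivot (one of the mitigation strategies mentioned above; function names are illustrative):

```python
import random

def quick_sort(arr, lo=0, hi=None):
    if hi is None:
        hi = len(arr) - 1
    if lo < hi:
        p = partition(arr, lo, hi)
        quick_sort(arr, lo, p - 1)   # Recursively sort both partitions.
        quick_sort(arr, p + 1, hi)
    return arr

def partition(arr, lo, hi):
    # A random pivot mitigates the O(n^2) worst case on sorted input.
    r = random.randint(lo, hi)
    arr[r], arr[hi] = arr[hi], arr[r]
    pivot = arr[hi]
    i = lo - 1
    for j in range(lo, hi):
        if arr[j] <= pivot:  # Move elements <= pivot to the left side.
            i += 1
            arr[i], arr[j] = arr[j], arr[i]
    arr[i + 1], arr[hi] = arr[hi], arr[i + 1]  # Pivot to final position.
    return i + 1

print(quick_sort([10, 7, 8, 9, 1, 5]))  # [1, 5, 7, 8, 9, 10]
```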
6. Heap Sort
Description: Heap Sort is a sorting technique that leverages the properties of a heap data structure to efficiently sort an array. It involves converting the input array into a max-heap (where the root element has the largest value), and then repeatedly removing the largest element (root) and placing it at the end of the sorted sub-array. This process continues until the entire array is sorted.
Time Complexity:
Best Case: O(n log n)
Average Case: O(n log n)
Worst Case: O(n log n)
Space Complexity: O(1) (in-place)
Use Cases:
Situations where space complexity matters
Embedded systems where memory usage is constrained
Example
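A minimal Python sketch (function names are illustrative). It first builds a max-heap bottom-up, then repeatedly swaps the root (the maximum) to the end of the array and restores the heap over the shrinking prefix:

```python
def heap_sort(arr):
    n = len(arr)
    # Build a max-heap bottom-up from the last internal node.
    for i in range(n // 2 - 1, -1, -1):
        sift_down(arr, i, n)
    # Repeatedly move the max (root) to the end and shrink the heap.
    for end in range(n - 1, 0, -1):
        arr[0], arr[end] = arr[end], arr[0]
        sift_down(arr, 0, end)
    return arr

def sift_down(arr, root, size):
    while True:
        largest = root
        left, right = 2 * root + 1, 2 * root + 2
        if left < size and arr[left] > arr[largest]:
            largest = left
        if right < size and arr[right] > arr[largest]:
            largest = right
        if largest == root:  # Heap property restored.
            return
        arr[root], arr[largest] = arr[largest], arr[root]
        root = largest

print(heap_sort([12, 11, 13, 5, 6, 7]))  # [5, 6, 7, 11, 12, 13]
```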
7. Radix Sort
Description: Radix sort is a non-comparative sorting technique that works efficiently for integers by sorting digits individually, from the least significant digit (LSD) to the most significant digit (MSD). It leverages the concept of place value to group elements based on their digits and then sorts them within those groups.
Time Complexity:
Best Case: O(nk) (where k is the number of digits in the longest key)
Average Case: O(nk)
Worst Case: O(nk)
Space Complexity: O(n+k)
Use Cases:
Sorting integers or strings
Useful for large datasets where comparison-based sorting algorithms are inefficient
Efficient for sorting integers with a fixed number of digits or limited range.
Stable sorting algorithm, meaning it preserves the original order of equal elements.
Non-comparative sorting, offering advantages for specific hardware architectures.
Less efficient for data with variable-length keys, such as arbitrary-length strings.
Requires additional space for buckets during sorting.
Example
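A minimal Python sketch of the LSD variant, assuming non-negative base-10 integers (function names are illustrative). Each pass is a stable counting sort on one digit, which is what makes the overall sort correct:

```python
def radix_sort(arr):
    if not arr:
        return arr
    max_val = max(arr)
    exp = 1
    # One stable counting-sort pass per decimal digit, LSD first.
    while max_val // exp > 0:
        arr = counting_pass(arr, exp)
        exp *= 10
    return arr

def counting_pass(arr, exp):
    count = [0] * 10
    for num in arr:
        count[(num // exp) % 10] += 1
    for d in range(1, 10):
        count[d] += count[d - 1]  # Prefix sums give final positions.
    output = [0] * len(arr)
    for num in reversed(arr):  # Reverse iteration keeps the pass stable.
        d = (num // exp) % 10
        count[d] -= 1
        output[count[d]] = num
    return output

print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
# [2, 24, 45, 66, 75, 90, 170, 802]
```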
8. Bucket Sort
Description: Bucket sort is a sorting technique that divides the elements into a fixed number of buckets (or bins) based on a specific range or hash function. Each bucket is then sorted individually using any suitable sorting algorithm like insertion sort. Finally, the sorted buckets are concatenated to form the final sorted array.
Time Complexity:
Best Case: O(n+k) (where k is the number of buckets)
Average Case: O(n+k)
Worst Case: O(n^2) (when most elements fall into a single bucket)
Space Complexity: O(n+k)
Use Cases:
Sorting uniformly distributed data
Useful when input is drawn from a known distribution
Efficient for data with a limited range or when elements can be distributed evenly across buckets.
Works well with other sorting algorithms for the individual buckets (e.g., insertion sort for small bucket sizes).
Performance depends heavily on the chosen number of buckets and the distribution of elements.
Requires additional space for the buckets.
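Example
A minimal Python sketch, assuming numeric input (`num_buckets` is an illustrative parameter; Python's built-in `sorted` stands in for the per-bucket sort, where insertion sort would also work):

```python
def bucket_sort(arr, num_buckets=10):
    if not arr:
        return arr
    lo, hi = min(arr), max(arr)
    width = (hi - lo) / num_buckets or 1  # Avoid zero width when all equal.
    buckets = [[] for _ in range(num_buckets)]
    # Distribute values into buckets by value range.
    for x in arr:
        idx = min(int((x - lo) / width), num_buckets - 1)
        buckets[idx].append(x)
    # Sort each bucket individually, then concatenate.
    result = []
    for b in buckets:
        result.extend(sorted(b))
    return result

print(bucket_sort([0.42, 0.32, 0.23, 0.52, 0.25, 0.47]))
# [0.23, 0.25, 0.32, 0.42, 0.47, 0.52]
```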
9. Counting Sort
Description: Counting sort is a sorting algorithm that efficiently sorts elements when the range of possible values is limited. It works by creating a count array to store the frequency of each unique element in the original array. This count array is then used to place elements in their correct positions in the sorted output array.
Time Complexity:
Best Case: O(n+k) (where k is the range of input values)
Average Case: O(n+k)
Worst Case: O(n+k)
Space Complexity: O(k)
Use Cases:
Sorting integers within a small range
Efficient for datasets where range of the elements is not significantly larger than the number of elements
Not suitable for large ranges of possible values, as the count array size becomes significant.
Example
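A minimal Python sketch of the simple frequency-count variant for plain integers (a stable prefix-sum variant is needed when sorting records by key). Note how the count array has one slot per value in the range, which is why a large range makes this impractical:

```python
def counting_sort(arr):
    if not arr:
        return arr
    lo, hi = min(arr), max(arr)
    count = [0] * (hi - lo + 1)  # One slot per value in the range.
    for num in arr:
        count[num - lo] += 1     # Tally the frequency of each value.
    output = []
    for value, freq in enumerate(count):
        output.extend([value + lo] * freq)  # Emit each value freq times.
    return output

print(counting_sort([4, 2, 2, 8, 3, 3, 1]))  # [1, 2, 2, 3, 3, 4, 8]
```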
Comparison Table
| Algorithm | Best Case | Average Case | Worst Case | Space | Use Cases |
| --- | --- | --- | --- | --- | --- |
| Bubble Sort | O(n) | O(n^2) | O(n^2) | O(1) | Small datasets, educational purposes |
| Selection Sort | O(n^2) | O(n^2) | O(n^2) | O(1) | Small datasets |
| Insertion Sort | O(n) | O(n^2) | O(n^2) | O(1) | Small datasets, adaptive sorting |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) | Linked lists, external sorting |
| Quick Sort | O(n log n) | O(n log n) | O(n^2) | O(log n) | General-purpose, fast average performance |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | O(1) | Space-constrained environments |
| Radix Sort | O(nk) | O(nk) | O(nk) | O(n+k) | Large datasets, integers/strings |
| Bucket Sort | O(n+k) | O(n+k) | O(n^2) | O(n+k) | Uniformly distributed data |
| Counting Sort | O(n+k) | O(n+k) | O(n+k) | O(k) | Small-range integers |