
Slow Sort

Esoteric
Algorithms that are more conceptual, humorous, or illustrative of a theoretical point than practically useful.
Divide and Conquer
Recursively breaks the problem into smaller sub-problems, solves them, and combines the results.
In-Place
Sorts within the original array, using only a constant amount of auxiliary memory (O(1)) beyond the recursion stack.
Unstable
Does not guarantee the relative order of elements with equal values.

Slow Sort is a recursive, in-place sorting algorithm created by Andrei Broder and Jorge Stolfi. It was designed as a "pessimal" algorithm to illustrate concepts of profoundly bad complexity. It is based on the principle of multiply and surrender, a parody of the divide and conquer strategy. The algorithm recursively sorts two halves of an array, places the maximum of the entire array at the end, and then recursively sorts the entire array again, except for the newly placed maximum. This final, redundant recursive call is the source of its abysmal performance.


Did you know?

  • The name comes from its creators' paper "Pessimal Algorithms and Simplexity Analysis", a parody of "Optimal Algorithms and Complexity Analysis".
  • The "multiply and surrender" principle is a direct, humorous opposite of the effective "divide and conquer" strategy used by algorithms like Merge Sort and Quick Sort.

How it Works

  • 1. Base Case: If the subarray has one or zero elements, it is already sorted, so return.

  • 2. Divide: Split the current subarray A[i...j] into two halves at the middle index m.

  • 3. Recursively Sort Halves:

  • First, recursively call Slow Sort on the first half A[i...m]. This will eventually place the maximum of this half at index m.

  • Second, recursively call Slow Sort on the second half A[m+1...j]. This will place the maximum of this half at index j.

  • 4. Find and Place Global Maximum: Compare the maxima of the two halves (A[m] and A[j]). If A[m] is larger, swap them. This guarantees the maximum element of the entire subarray A[i...j] is now at index j.

  • 5. Multiply and Surrender: Recursively call Slow Sort on the entire subarray except for the maximum element we just placed: A[i...j-1]. This final step ensures the rest of the elements are sorted, but at an immense computational cost.

Complexity Analysis

Best Case
O(n^(log₂(n) / (2+ε)))
Average
O(n^(log₂(n) / (2+ε)))
Worst Case
O(n^(log₂(n) / (2+ε)))
Space
O(n) (recursion stack)

Advantages

  • Educational Value: It serves as an excellent and memorable example of a "pessimal" algorithm, highlighting the importance of efficient algorithm design.
  • Conceptual Simplicity: Despite its inefficiency, the recursive structure is relatively straightforward to understand.

Disadvantages

  • Extreme Inefficiency: The running time, given by the recurrence T(n) = 2T(n/2) + T(n−1) + 1, is not bounded by any polynomial and is far worse than even Bubble Sort.
  • Impractical: It is never used in practice and exists purely for academic and humorous purposes.
  • Stack Overflow Risk: The deep recursion, particularly from the T(n−1) call, can lead to a stack overflow error for even moderately sized arrays.
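To see how quickly the recurrence blows up, it can be evaluated numerically. The sketch below (with floor/ceil halves to match the implementation, and a memo table of our own for speed) counts the "unit work" terms of T(n) = 2T(n/2) + T(n−1) + 1:

```javascript
// Evaluate the Slow Sort recurrence T(n) = T(ceil(n/2)) + T(floor(n/2)) + T(n-1) + 1.
// Subarrays of length 0 or 1 do no work (base case of the algorithm).
function slowSortWork(n, memo = new Map()) {
  if (n <= 1) return 0;
  if (memo.has(n)) return memo.get(n);
  const half = Math.floor(n / 2);
  const t = slowSortWork(n - half, memo)   // first half (ceil)
          + slowSortWork(half, memo)       // second half (floor)
          + slowSortWork(n - 1, memo)      // the "surrender" call
          + 1;                             // one comparison/swap step
  memo.set(n, t);
  return t;
}

for (const n of [8, 16, 32, 64]) {
  console.log(n, slowSortWork(n));
}
```

Even for small n the totals grow super-polynomially, which is exactly the behavior the n^(log₂(n)/(2+ε)) bound describes.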

Implementation

JavaScript slow-sort.js
function slowSort(A, i, j) {
  // Base Case
  if (i >= j) {
    return;
  }

  const m = Math.floor((i + j) / 2);

  // Sort the first half
  slowSort(A, i, m);

  // Sort the second half
  slowSort(A, m + 1, j);

  // Find max of whole array and place at the end
  if (A[j] < A[m]) {
    [A[j], A[m]] = [A[m], A[j]]; // Swap
  }

  // Sort all but the maximum
  slowSort(A, i, j - 1);
}
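A minimal usage sketch (the function is repeated so the snippet runs on its own; the `slowSorted` wrapper is our own convenience name, not part of the original API):

```javascript
function slowSort(A, i, j) {
  if (i >= j) return;
  const m = Math.floor((i + j) / 2);
  slowSort(A, i, m);
  slowSort(A, m + 1, j);
  if (A[j] < A[m]) [A[j], A[m]] = [A[m], A[j]];
  slowSort(A, i, j - 1);
}

// Wrapper that sorts a copy, so callers need not pass index bounds.
function slowSorted(arr) {
  const copy = [...arr];
  slowSort(copy, 0, copy.length - 1);
  return copy;
}

console.log(slowSorted([5, 3, 1, 4, 2])); // [1, 2, 3, 4, 5]
```

Keep inputs small: the T(n−1) recursion chain makes both the running time and the stack depth grow rapidly with array length.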