QuickSort & MergeSort
QuickSort
The idea behind the quickSort algorithm is to choose, among the array elements, one which will serve as a pivot element used to split the array into two parts:
[diagram: the array from index 0 to n, split at index i into a left part with elements ≤ pivot and a right part with elements ≥ pivot]
The second phase calls quickSort recursively on the two parts. We must ensure that the index i satisfies 0 < i < n, so that both parts are nonempty; otherwise, quickSort could fall into infinite recursion! The essence of quickSort’s speed is getting the partitioning split as close as possible to the middle.
Quicksort partitioning algorithm
A key step is how the pivot is chosen. We do so by sorting the elements at three positions:
a[from], a[center], a[to-1]   ( center = (from + (to-1))/2 )
The pivot is set to a[center] and the starting indices are as follows:
[diagram: the starting indices i = from and j = to-1 at the two ends of the array segment]
By virtue of the sorting of the three elements, we know before the loop starts that a[i] ≤ pivot and pivot ≤ a[j]. Here is the partition algorithm in pseudocode:
// sort the 3 elements { a[from], a[center], a[to-1] },
// where center = (from + (to-1))/2
pivot = a[center]
i = from
j = to - 1
while (true) {
    while (a[++i] < pivot) { }   // move i up and keep going while the element is < pivot
    while (pivot < a[--j]) { }   // move j down and keep going while the element is > pivot
    if (i < j) {
        swap(a, i, j);
    } else {
        break;
    }
}
In the quickSort splitting code, the indices i and j control the splitting and the i index ends up being the correct marker so that:
- for k < i: a[k] ≤ pivot
- for k ≥ i: a[k] ≥ pivot
Key points are that:
- the while loops always move i up and j down by at least one on every iteration.
- the i and j indices stop whenever the element seen is equal to the pivot. This seems unnecessary and inefficient; however, in reality, it makes it more likely that the split will be close to optimal (equal halves).
After the partitioning is complete, we call the algorithm recursively on both sides:
quickSort(a, from, i);
quickSort(a, i, to);
The key point of the partitioning which ensures that this actually works is that:

from < i < to

Without this guarantee, one of the recursive calls would be empty and the other would be on the entire array (possibly altered), leading to infinite recursion. This is the crucial importance of the seemingly inefficient strict “<” comparison when comparing against the pivot element.
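The partition step above can be fleshed out into a self-contained sketch. The class and the inline three-element sort are illustrative, not the course’s sorts.Algorithms code:

```java
import java.util.Arrays;

public class PartitionDemo {
    static void swap(int[] a, int i, int j) {
        int t = a[i]; a[i] = a[j]; a[j] = t;
    }

    // Partition a[from..to) and return the split index i such that
    // a[k] <= pivot for k < i and a[k] >= pivot for k >= i.
    static int partition(int[] a, int from, int to) {
        int center = (from + (to - 1)) / 2;
        // sort the three elements a[from], a[center], a[to-1]
        if (a[center] < a[from])   swap(a, from, center);
        if (a[to - 1] < a[from])   swap(a, from, to - 1);
        if (a[to - 1] < a[center]) swap(a, center, to - 1);
        int pivot = a[center];
        int i = from, j = to - 1;
        while (true) {
            while (a[++i] < pivot) { }   // stops at an element >= pivot
            while (pivot < a[--j]) { }   // stops at an element <= pivot
            if (i < j) swap(a, i, j); else break;
        }
        return i;
    }

    public static void main(String[] args) {
        // the 11-element example used later in these notes
        int[] a = {1, 4, 6, 8, 5, 7, 10, 4, 1, 6, 6};
        int i = partition(a, 0, a.length);
        System.out.println(i + " " + Arrays.toString(a));
        // prints: 6 [1, 4, 6, 1, 5, 4, 10, 6, 8, 6, 7]
    }
}
```

Note that a[from] ≤ pivot and a[to-1] ≥ pivot act as sentinels, so the inner loops cannot run off either end.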
Median of 3
The usage of the medianOf3 helper function attempts to determine the best possible choice of pivot in constant time. The closer the two parts are to being the same size, the faster the algorithm will be. The medianOf3 function sorts the left, center, and right elements, and the pivot is then chosen to be the value of the center element. In particular, for the common order pattern of being “nearly sorted,” medianOf3 is likely to choose a good pivot element.
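A minimal sketch of such a helper, with illustrative names (the course code may differ in details):

```java
public class MedianOf3Demo {
    // Sort a[from], a[center], a[to-1] in place and return the middle value,
    // which becomes the pivot.
    static int medianOf3(int[] a, int from, int to) {
        int center = (from + (to - 1)) / 2;
        if (a[center] < a[from])   { int t = a[from];   a[from]   = a[center]; a[center] = t; }
        if (a[to - 1] < a[from])   { int t = a[from];   a[from]   = a[to - 1]; a[to - 1] = t; }
        if (a[to - 1] < a[center]) { int t = a[center]; a[center] = a[to - 1]; a[to - 1] = t; }
        return a[center];
    }

    public static void main(String[] args) {
        // on a nearly sorted array the chosen pivot is close to the true median
        int[] nearlySorted = {10, 11, 13, 12, 14, 15, 16, 18, 17, 19};
        System.out.println(medianOf3(nearlySorted, 0, nearlySorted.length)); // prints: 14
    }
}
```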
Smaller subarrays first to minimize stack height
In order to keep the stack height (recursion height) minimal, it is important to make the recursive call on the smaller subarray first. Thus the recursive calls in our final version look like this:

if (i - from <= to - i) {
    quickSort(a, from, i);
    quickSort(a, i, to);
} else {
    quickSort(a, i, to);
    quickSort(a, from, i);
}
With this modification, the extra space usage (i.e., stack depth) is O(log(n)). Without this change, a worst-case scenario (see below) can cause an O(n) stack depth.
For demonstration purposes, we can ignore this code change. It has no effect on the total running time because all recursive calls must be made, regardless of the order in which they are done.
CUTOFF value
The quickSort splitting method, in its usage of the medianOf3 operation, cannot be used for arrays of size 2. In fact, it turns out to be best to avoid recursing all the way down to an array which is “too small.” Optimized quickSort implementations impose a CUTOFF value whereby all subarrays of size less than or equal to this value are sorted by an alternative method, insertionSort being a good choice. The optimal CUTOFF value can be determined empirically to be 9 or 10 by running a main function that counts operations for a range of candidate values.
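Putting the pieces together, here is a sketch of a cutoff-based quickSort. The class name is hypothetical and the partition logic follows the pseudocode above; the course’s sorts.Algorithms version will differ in details:

```java
import java.util.Arrays;
import java.util.Random;

public class CutoffQuickSort {
    static final int CUTOFF = 10;   // values near 9 or 10 test well per the notes

    static void swap(int[] a, int i, int j) { int t = a[i]; a[i] = a[j]; a[j] = t; }

    // subarrays at or below the cutoff are finished off by insertionSort
    static void insertionSort(int[] a, int from, int to) {
        for (int i = from + 1; i < to; ++i) {
            int x = a[i], j = i;
            while (j > from && x < a[j - 1]) { a[j] = a[j - 1]; --j; }
            a[j] = x;
        }
    }

    static void quickSort(int[] a, int from, int to) {
        if (to - from <= CUTOFF) { insertionSort(a, from, to); return; }
        int center = (from + (to - 1)) / 2;
        if (a[center] < a[from])   swap(a, from, center);   // median-of-3 pivot choice
        if (a[to - 1] < a[from])   swap(a, from, to - 1);
        if (a[to - 1] < a[center]) swap(a, center, to - 1);
        int pivot = a[center];
        int i = from, j = to - 1;
        while (true) {
            while (a[++i] < pivot) { }
            while (pivot < a[--j]) { }
            if (i < j) swap(a, i, j); else break;
        }
        quickSort(a, from, i);
        quickSort(a, i, to);
    }

    public static void main(String[] args) {
        int[] a = new Random(42).ints(1000, 0, 100).toArray();
        quickSort(a, 0, a.length);
        System.out.println(a[0] + " .. " + a[a.length - 1]);
    }
}
```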
Quicksort Code
My version represents a slight deviation from the textbook’s version. My tests indicate that my version is slightly less costly than the textbook’s version, which does two extra swaps in order to make the right side have one fewer element.
Here is a test program that can be used to verify the empirically “best” cutoff value.
Demonstrations
The mainShowSorts function is useful for giving demonstrations of quickSort; simply set:
WHICH = ShowAlgorithms.QUICK
The one algorithmic difference to keep in mind is that the quickSort algorithm in sorts.ShowAlgorithms uses a default cutoff value of 5 for demonstration purposes, whereas the sorts.Algorithms version uses the optimal cutoff value of 10.
You can demonstrate the effectiveness of quickSort’s splitting method by altering the generated values in the test array, e.g., try these:
NUM = 20;
for (int i = 0; i < A.length; ++i) {
    A[i] = 10 + r.nextInt(3 * NUM);   // random
    // or: A[i] = 10 + i;             // sorted
    // or: A[i] = 10 + NUM - i;       // reverse sorted
    // or: A[i] = 10;                 // all equal
}
Here is an example with 11 elements where the quickSort cutoff is set at 6, so only one partition occurs:
index:   0  1  2  3  4  5  6   7  8  9  10
A    = [ 1, 4, 6, 8, 5, 7, 10, 4, 1, 6, 6 ]
sort 3 elements at positions 0, 5 = (0+10)/2, 10:
       [ 1, 4, 6, 8, 5, 6, 10, 4, 1, 6, 7 ]   mOf3: pivot = A[5] = 6
i = 2, j = 9: swap 6 & 6
       [ 1, 4, 6, 8, 5, 6, 10, 4, 1, 6, 7 ]
i = 3, j = 8: swap 8 & 1
       [ 1, 4, 6, 1, 5, 6, 10, 4, 8, 6, 7 ]
i = 5, j = 7: swap 6 & 4
       [ 1, 4, 6, 1, 5, 4, 10, 6, 8, 6, 7 ]
i = 6, j = 5: j = 5 ≤ i = 6, partition ends
Now we call quickSort(0, 6) and quickSort(6, 11), both of which fall below the cutoff and call insertionSort.
Best Time
Quicksort’s best case time is O(n*log n). The best case occurs when the split separates the array into two “equal” halves. In this case the timing function would satisfy a recurrence equation like this:
T(n) ≤ 2 * T(n/2) + C * n // C * n = time to split
We can make a quick argument about this result following the book’s lead:
T(n)/n ≤ 2 * T(n/2)/n + C
       = T(n/2) / (n/2) + C       ( make the 2 factor divide the divisor )
       ≤ T(n/4) / (n/4) + C + C   ( replace n by n/2 in the previous step )
       ...
       ≤ T(1) / 1 + C + ... + C   ( C repeated log(n) times )

T(n)/n ≤ T(1) + C * log(n)
Thus
T(n) ≤ T(1) * n + C * n * log(n) = O(n*log n)
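Assuming T(1) = 1 and C = 1 (my illustrative constants), the recurrence and the derived bound can be checked numerically for powers of two:

```java
public class RecurrenceCheck {
    // T(n) = 2*T(n/2) + n, with T(1) = 1, for n a power of two.
    static long T(long n) { return n == 1 ? 1 : 2 * T(n / 2) + n; }

    public static void main(String[] args) {
        for (long n = 2; n <= 1 << 20; n <<= 1) {
            long log2 = 63 - Long.numberOfLeadingZeros(n);     // exact for powers of two
            long bound = n + n * log2;                          // T(1)*n + C*n*log2(n)
            System.out.println(n + ": T = " + T(n) + ", bound = " + bound
                               + ", ok = " + (T(n) <= bound));
        }
    }
}
```

For this recurrence the bound is in fact tight: T(2^k) = 2^k * k + 2^k exactly.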
Average Time
The average time of quickSort is also O(n*log n). The math is a bit more involved (see the textbook). One important thing to point out is that the assumption in the book is that the partition index, i, is equally likely to be any position 0 < i < n. This is definitely not true for the quickSort algorithm above, in which the median-of-three pivot makes it far more likely that the split occurs toward the middle. The mathematical analysis becomes very involved if one takes this optimized pivot choice into consideration.
Worst time
QuickSort’s worst-case time is quadratic, i.e., Ω(n²), which occurs when the splitting consistently removes only a bounded number of elements, say K, from one side, leaving n-K in the other. In that case the timing would satisfy something like this:

T(n) ≥ T(n-K) + n   // the "n" term means the split must "look at" every element
These “worstcase” arrays are convoluted, but it is still possible that they occur in practice. Here is one example which you can plug in and test:
sorts.ShowAlgorithms.setQuicksortCutoff(2);
A = new Integer[] {
    0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 2, 5, 1, 6, 3, 7, 0, 8, 4, 9
};
Plug this into the mainShowSorts function after the “typical array generation” section, so that it overrides the definition of A and replaces it by this new one.
Sorting with this array consistently leaves two equal elements in the rightside split all the way down to the cutoff.
Generalizable example which has quadratic time
Another example which is simpler to generalize to a family of arrays of varying sizes (n = 32 in this case) is this:
A = new Integer[] {
    10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25,
    10, 18, 10, 19, 10, 20, 10, 21, 10, 22, 10, 23, 10, 24, 10, 25
};
Again, test this out in the mainShowSorts function, like the previous example. Once again, the splitting consistently leaves two equal elements in the right side, but, in this case, only until the array is ½ its original size. Doing the math, we can say:
T(m) ≥ T(m-2) + m,   for m ≥ n/2,
and we prove that the run time for this family of examples is Ω(n²):

T(n) ≥ T(n-2) + n
     ≥ T(n-4) + (n-2) + n
     ...
     ≥ T(n/2 - 2) + n/2 + ... + (n-2) + n
     ≥ n/2 + ... + (n-2) + n        (at least n/4 terms, all ≥ n/2)
     ≥ n/2 + n/2 + ... + n/2        (n/4 times)
     = n²/8
The point to make is that these worst-case arrays are rather convoluted and not representative of common patterns (nearly sorted, almost all equal, etc.). Nevertheless, some users will argue that quadratic-time behavior can actually occur in practice.
MergeSort
The idea behind mergeSort is a simple application of recursion:
mergeSort(A) {
    if (A.length == 1) return;   // nothing to do
    split A into two equal halves: A = (A1, A2)
    mergeSort(A1)
    mergeSort(A2)
    // then, the merge step, where we actually do something:
    // merge the two sorted halves back into the full array
    A = merge(A1, A2)
}
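A direct, unoptimized rendering of this outline allocates fresh arrays at every level (the later sections of these notes show how to avoid that); the class name is illustrative:

```java
import java.util.Arrays;

public class SimpleMergeSort {
    // Return a sorted copy of a (naive version: copies at every level).
    static int[] mergeSort(int[] a) {
        if (a.length <= 1) return a;                       // nothing to do
        int mid = a.length / 2;
        int[] a1 = mergeSort(Arrays.copyOfRange(a, 0, mid));
        int[] a2 = mergeSort(Arrays.copyOfRange(a, mid, a.length));
        // merge the two sorted halves back into a full array
        int[] out = new int[a.length];
        int i = 0, j = 0, k = 0;
        while (i < a1.length || j < a2.length) {
            if (j == a2.length || (i < a1.length && a1[i] <= a2[j])) out[k++] = a1[i++];
            else out[k++] = a2[j++];
        }
        return out;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(
            mergeSort(new int[]{17, 16, 15, 14, 13, 12, 11, 10})));
        // prints: [10, 11, 12, 13, 14, 15, 16, 17]
    }
}
```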
MergeSort vs QuickSort recursion
MergeSort exhibits end order or bottom up recursion in that all the significant actions take place after the recursive calls. This means that the array is split all the way down to singletons (of size 1) and that all actions take place on the way out of recursion.
In contrast, quickSort is exactly the opposite, namely top down recursion in the sense that the main action of partitioning is done before the recursive calls.
Analysis
Let T(n) be the worst-case number of comparisons and data movements for an array of size n. For n > 1, T(n) is made up of these counts:
- recursive calls on the two halves of A: 2 * T(n/2)
- merging the halves of A (in a stable manner): O(n)
and conclude that:
T(n) = 2 * T(n/2) + O(n)
We’ve seen this recurrence many times before and know the solution is O(n*log n). Thus the significance of mergeSort is that:
- its worst-case time is O(n*log n).
- it is stable.
The one detraction of mergesort is that it requires a spare array in order to do the merge operations; for a huge array, this may be considered too costly.
Our first, inefficient approach is to think of B as always being the merge target:
- If the size of A is 1, copy A into B.
- If the size is greater than 1:
  (a) call mergeSort recursively to sort the left half of A, A1, into the left half of B, B1, and likewise the right half A2 into B2;
  (b) copy B into A, so that A now consists of two sorted halves;
  (c) merge the sorted halves (A1, A2) into B.
At the completion of the algorithm, we would have to copy B back to A.
Efficient usage of merge target
A practical implementation of this algorithm does fewer data movements than indicated above by having A and B switch “roles” where one is the merge source and the other the merge target. In this way we completely avoid the copy step (b).
In any case, we want the final merge target to be A. As we go into the recursion, the merge target alternates:
A ← B ← A ← B ← ... ← (base of recursion: A or B)
This requirement means that when we get to the base case with singleton arrays:
- if B is the merge target: copy the single element from A to B.
- if A is the merge target: do nothing; the element is already in A.
A =  17  16  15  14  13  12  11  10  (base of recursion)  
↓ (copy elements)  
B =  17  16  15  14  13  12  11  10  
↓ (merge 1’s to 2’s)  
A =  16  17  14  15  12  13  10  11  (sorted 2’s)  
↓ (merge 2’s to 4’s)  
B =  14  15  16  17  10  11  12  13  (sorted 4’s)  
↓ (merge 4’s to 8’s)  
A =  10  11  12  13  14  15  16  17  THE GOAL (sorted 8’s) 
A =  25  24  23  22  21  20  19  18  17  16  15  14  13  12  11  10  (base of recursion — do nothing)  
↓ (merge 1’s to 2’s)  
B =  24  25  22  23  20  21  18  19  16  17  14  15  12  13  10  11  
↓ (merge 2’s to 4’s)  
A =  22  23  24  25  18  19  20  21  14  15  16  17  10  11  12  13  
↓ (merge 4’s to 8’s)  
B =  18  19  20  21  22  23  24  25  10  11  12  13  14  15  16  17  
↓ (merge 8’s to 16’s)  
A =  10  11  12  13  14  15  16  17  18  19  20  21  22  23  24  25  THE GOAL 
The MergeSort Code
In order to manage the merge target array, a boolean variable target_is_a is passed into the recursive calls. This boolean indicates what to do at the merge step:
target_is_a = true:  A is the merge target, B is the source
target_is_a = false: B is the merge target, A is the source
The initial call assigns target_is_a to true, indicating that we want the final merge to go into the A array. For each recursive step the boolean is negated, toggling between true and false all the way down to the base case of a single element.
When we get to the base case, a singleton array, if target_is_a is true, i.e., the element must end up in A, then there is nothing to do because it is already there; otherwise the element must be copied from A into B.
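The whole scheme can be sketched end to end as follows. This uses int arrays for brevity (the course version is generic with a Comparator), and the class name is hypothetical:

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Sort a[from..to) using b as scratch space; target_is_a says where the
    // sorted result of this call must end up.
    static void mergeSortRec(int[] a, int[] b, int from, int to, boolean target_is_a) {
        if (to - from == 1) {                      // base case: singleton
            if (!target_is_a) b[from] = a[from];   // copy only if B is the target
            return;
        }
        int middle = (from + to) / 2;
        // the children must deliver their halves into the *source* of our merge
        mergeSortRec(a, b, from, middle, !target_is_a);
        mergeSortRec(a, b, middle, to, !target_is_a);
        if (target_is_a) merge(b, a, from, middle, to);
        else             merge(a, b, from, middle, to);
    }

    static void merge(int[] source, int[] target, int fromIndex, int middle, int toIndex) {
        int i = fromIndex, j = middle, k = fromIndex;
        while (i < middle || j < toIndex) {
            if (j == toIndex || (i < middle && source[i] <= source[j]))
                target[k++] = source[i++];
            else
                target[k++] = source[j++];
        }
    }

    public static void mergeSort(int[] a) {
        if (a.length <= 1) return;
        mergeSortRec(a, new int[a.length], 0, a.length, true);
    }

    public static void main(String[] args) {
        int[] a = {25, 24, 23, 22, 21, 20, 19, 18, 17, 16, 15, 14, 13, 12, 11, 10};
        mergeSort(a);
        System.out.println(Arrays.toString(a));
    }
}
```

Note that the toggled flag, not the recursion depth, decides where each half lives, so odd-length subarrays also work out correctly.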
Stability
In order to achieve stability, care must be taken in the merge operation:
private static <E> void merge(
        E[] source, E[] target,
        int fromIndex, int middle, int toIndex,
        Comparator<? super E> c) {
    int i = fromIndex, j = middle,   // source indices in the source array
        k = fromIndex;               // target index in the target array
    while (i < middle || j < toIndex) {
        if (i == middle) {
            target[k++] = source[j++];
        } else if (j == toIndex) {
            target[k++] = source[i++];
        } else {
            if (c.compare(source[i], source[j]) <= 0) {   // "<=" gives stability
                target[k++] = source[i++];
            } else {
                target[k++] = source[j++];
            }
        }
    }
}
The left side of the merge is indexed by i and the right side by j. Any equal elements in the sorted halves will be grouped together in correct order (this is an inductive assumption). If i and j are pointing to equal elements, we ensure that the leftside element, i.e., the one with index i, goes into the target array first:
if (c.compare(source[i], source[j]) <= 0) {   // NOT: < 0
    target[k++] = source[i++];
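A small demonstration of why the “<=” matters: under a comparator that looks only at the leading digit, “3a” and “3b” compare equal, and the stable merge keeps the left-side element first. This is an illustrative example, not course code:

```java
import java.util.Arrays;
import java.util.Comparator;

public class StabilityDemo {
    static <E> void merge(E[] source, E[] target, int fromIndex, int middle,
                          int toIndex, Comparator<? super E> c) {
        int i = fromIndex, j = middle, k = fromIndex;
        while (i < middle || j < toIndex) {
            if (i == middle)                               target[k++] = source[j++];
            else if (j == toIndex)                         target[k++] = source[i++];
            else if (c.compare(source[i], source[j]) <= 0) target[k++] = source[i++]; // "<=" gives stability
            else                                           target[k++] = source[j++];
        }
    }

    public static void main(String[] args) {
        // two sorted halves; "3a" (left) and "3b" (right) have equal keys
        String[] src = {"1a", "3a", "2b", "3b"};
        String[] dst = new String[4];
        Comparator<String> byKey = Comparator.comparing((String s) -> s.charAt(0));
        merge(src, dst, 0, 2, 4, byKey);
        System.out.println(Arrays.toString(dst));
        // prints: [1a, 2b, 3a, 3b]  -- the left-side "3a" precedes "3b"
    }
}
```

With a strict "<" instead, the right-side "3b" would be taken first and stability would be lost.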
Best/worst-case number of operations
MergeSort will always have the same number of data moves for arrays of equal sizes because the data is simply being moved between source and target arrays for each of the “steps” of merge. In terms of comparisons, the best case for a single merge is when the largest in one half is less than or equal to the smallest in the other half, like this array of size 16:
18  19  20  21  22  23  24  25  10  11  12  13  14  15  16  17  
↓  
10  11  12  13  14  15  16  17  18  19  20  21  22  23  24  25 
With 8 comparisons, the “smaller” half (of size 8) goes first and then the “larger” half fills out the array with no additional comparisons necessary. Count roughly n/2 comparisons for merging the halves of an array of size n.
The worst case is when the array elements are shuffled like this:
10  12  14  16  18  20  22  24  11  13  15  17  19  21  23  25 
The shuffled structure will force comparisons between halves for each of the 15 pairs. Count roughly n-1 comparisons to merge the halves of an array of size n, approximately twice as many comparisons as the best case.
Working our way down with this shuffling strategy, a worst case array of size 16 would be:
10  18  14  22  12  20  16  24  11  19  15  23  13  21  17  25  
— then proceeding with the sort —  
10  18  14  22  12  20  16  24  11  19  15  23  13  21  17  25  8 * 1 = 8 comparisons (but no changes)  
10  14  18  22  12  16  20  24  11  15  19  23  13  17  21  25  4 * 3 = 12 comparisons  
10  12  14  16  18  20  22  24  11  13  15  17  19  21  23  25  2 * 7 = 14 comparisons  
10  11  12  13  14  15  16  17  18  19  20  21  22  23  24  25  15 comparisons 
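This shuffling pattern can be generated for any power-of-two size by recursively “unshuffling” a sorted array: the even-indexed elements form the left half and the odd-indexed elements the right half. This construction is my own sketch, but it reproduces the size-16 example above:

```java
import java.util.Arrays;

public class MergeWorstCase {
    // Build a comparison-heavy input for mergeSort from a sorted array:
    // even-indexed elements become the left half, odd-indexed the right half,
    // each half worst-cased recursively.
    static int[] worstCase(int[] sorted) {
        int n = sorted.length;
        if (n <= 1) return sorted.clone();
        int[] evens = new int[(n + 1) / 2], odds = new int[n / 2];
        for (int i = 0; i < n; ++i) {
            if (i % 2 == 0) evens[i / 2] = sorted[i];
            else            odds[i / 2]  = sorted[i];
        }
        int[] left = worstCase(evens), right = worstCase(odds);
        int[] out = new int[n];
        System.arraycopy(left, 0, out, 0, left.length);
        System.arraycopy(right, 0, out, left.length, right.length);
        return out;
    }

    public static void main(String[] args) {
        int[] sorted = new int[16];
        for (int i = 0; i < 16; ++i) sorted[i] = 10 + i;
        System.out.println(Arrays.toString(worstCase(sorted)));
        // prints: [10, 18, 14, 22, 12, 20, 16, 24, 11, 19, 15, 23, 13, 21, 17, 25]
    }
}
```

Every merge level then has to interleave its two halves, forcing the roughly n-1 comparisons per merge described above.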
© Robert M. Kline