Benchmarking quicksort and mergesort yields that mergesort is faster

Submitted by 醉酒当歌 on 2020-01-24 13:57:05

Question


I've benchmarked both of them on an array of 1M elements, and for some reason mergesort sorted it in 0.3 s while quicksort took 1.3 s.

I've heard that quicksort is generally faster because of its memory behaviour, so how would one explain these results?

I am running this on a MacBook Pro, if that makes any difference. The input is a set of randomly generated integers from 0 to 127.
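Roughly how I timed it (a simplified sketch of the harness; a single System.nanoTime measurement without JVM warm-up is only indicative):

import java.util.Random;

public class SortBench {
    public static void main(String[] args) {
        Random rnd = new Random();
        int[] data = new int[1_000_000];
        for (int i = 0; i < data.length; i++)
            data[i] = rnd.nextInt(128);          // values in 0..127

        int[] a = data.clone();                  // same input for both sorts
        long t0 = System.nanoTime();
        mergesort(a);
        System.out.printf("mergesort: %.3f s%n", (System.nanoTime() - t0) / 1e9);

        int[] b = data.clone();
        t0 = System.nanoTime();
        quickSort(b, 0, b.length - 1);
        System.out.printf("quickSort: %.3f s%n", (System.nanoTime() - t0) / 1e9);
    }

    // mergesort and quickSort are the methods listed below
}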

The code is in Java:

MergeSort:

static void mergesort(int arr[]) {
    int n = arr.length;
    if (n < 2)
        return;
    int mid = n / 2;
    int left[] = new int[mid];
    int right[] = new int[n - mid];
    for (int i = 0; i < mid; i++)
        left[i] = arr[i];
    for (int i = mid; i < n; i++)
        right[i - mid] = arr[i];
    mergesort(left);
    mergesort(right);
    merge(arr, left, right);
}

public static void merge(int arr[], int left[], int right[]) {
    int nL = left.length;
    int nR = right.length;
    int i = 0, j = 0, k = 0;
    while (i < nL && j < nR) {
        if (left[i] <= right[j]) {
            arr[k] = left[i];
            i++;
        } else {
            arr[k] = right[j];
            j++;
        }
        k++;
    }
    while (i < nL) {
        arr[k] = left[i];
        i++;
        k++;
    }
    while (j < nR) {
        arr[k] = right[j];
        j++;
        k++;
    }
}

Quicksort:

public static void quickSort(int[] arr, int start, int end) {
    int partition = partition(arr, start, end);

    if (partition - 1 > start) {
        quickSort(arr, start, partition - 1);
    }
    if (partition + 1 < end) {
        quickSort(arr, partition + 1, end);
    }
}

public static int partition(int[] arr, int start, int end) {
    int pivot = arr[end];

    for (int i = start; i < end; i++) {
        if (arr[i] < pivot) {
            int temp = arr[start];
            arr[start] = arr[i];
            arr[i] = temp;
            start++;
        }
    }

    int temp = arr[start];
    arr[start] = pivot;
    arr[end] = temp;

    return start;
}

Answer 1:


Your implementations are a bit simplistic:

  • mergesort allocates 2 new arrays at each recursive call, which is expensive, yet some JVMs are surprisingly efficient at optimising such coding patterns (an allocation-free sketch follows this list).
  • quickSort uses a poor choice of pivot, the last element of the subarray, which gives quadratic time for subarrays that are already sorted, including subarrays of identical elements.
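To illustrate the first point, a merge sort can allocate a single scratch buffer up front and reuse it at every level, instead of creating two new arrays per call. A minimal sketch of that variant (illustrative, not the OP's code):

/* Merge sort with one reusable scratch buffer: avoids the
   two-allocations-per-call pattern of the OP's version. */
static void mergesortBuffered(int[] arr) {
    int[] tmp = new int[arr.length];      // single scratch buffer, allocated once
    sort(arr, tmp, 0, arr.length);
}

private static void sort(int[] arr, int[] tmp, int lo, int hi) {  // sorts arr[lo..hi)
    if (hi - lo < 2)
        return;
    int mid = (lo + hi) >>> 1;
    sort(arr, tmp, lo, mid);
    sort(arr, tmp, mid, hi);
    int i = lo, j = mid, k = lo;
    while (i < mid && j < hi)             // merge the two halves into tmp
        tmp[k++] = arr[i] <= arr[j] ? arr[i++] : arr[j++];
    while (i < mid)
        tmp[k++] = arr[i++];
    while (j < hi)
        tmp[k++] = arr[j++];
    System.arraycopy(tmp, lo, arr, lo, hi - lo);   // copy the merged run back
}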

The data set, an array of pseudo-random numbers drawn from the small range 0..127, makes the shortcoming of the quickSort implementation cost far more than the inefficiency of the mergesort version: with only 128 distinct values, each value occurs roughly 8,000 times in a 1M-element array, and with a last-element pivot every element equal to the pivot lands in the right partition, driving the recursion toward quadratic depth. Increasing the dataset size should make this even more obvious and might even cause a stack overflow because of too many recursive calls. Data sets with common patterns such as identical values, increasing or decreasing runs, and combinations of such sequences would cause catastrophic performance for this quickSort implementation.

Here is a slightly modified version with a less pathological choice of pivot (the element at 3/4 of the array) and a loop that skips duplicates of the pivot value, improving efficiency on datasets with repeated values. It performs much better (100x) on my standard sorting benchmark with arrays of just 40k elements, but is still much slower (8x) than radixsort:

public static void quickSort(int[] arr, int start, int end) {
    int p1 = partition(arr, start, end);
    int p2 = p1;

    /* skip elements identical to the pivot */
    while (++p2 <= end && arr[p2] == arr[p1])
        continue;

    if (p1 - 1 > start) {
        quickSort(arr, start, p1 - 1);
    }
    if (p2 < end) {
        quickSort(arr, p2, end);
    }
}

public static int partition(int[] arr, int start, int end) {
    /* choose pivot at 3/4 of the array */
    int i = end - ((end - start + 1) >> 2);
    int pivot = arr[i];
    arr[i] = arr[end];
    arr[end] = pivot;

    for (i = start; i < end; i++) {
        if (arr[i] < pivot) {
            int temp = arr[start];
            arr[start] = arr[i];
            arr[i] = temp;
            start++;
        }
    }

    int temp = arr[start];
    arr[start] = pivot;
    arr[end] = temp;

    return start;
}
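The entry call is unchanged from the original version; note that end is the inclusive index of the last element:

int[] arr = { 5, 3, 5, 1, 5, 2 };    // duplicates are now skipped in one pass
quickSort(arr, 0, arr.length - 1);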

For the OP's dataset, assuming decent randomness of the distribution, the scan for duplicates is responsible for the performance improvement. Choosing a different pivot, be it first, last, middle, 3/4, 2/3, or even median-of-3, has almost no impact, as expected.

Further testing on other, non-random distributions shows catastrophic performance for this quickSort implementation due to the choice of pivot. On my benchmark, much improved performance is obtained by choosing as pivot the element at 3/4 or 2/3 of the array (a 300x improvement for 50k samples, 40% faster than standard merge sort, and comparable in time to radix_sort).
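For comparison, the median-of-3 strategy mentioned above picks the median of the first, middle, and last elements as the pivot. A sketch of such a helper (illustrative only, not part of the benchmark):

/* Returns the index of the median of arr[start], arr[mid] and arr[end].
   The caller would swap that element to position end before partitioning,
   exactly as the 3/4 pivot is swapped in partition() above. */
static int medianOfThree(int[] arr, int start, int end) {
    int mid = start + ((end - start) >> 1);
    int a = arr[start], b = arr[mid], c = arr[end];
    if ((a <= b && b <= c) || (c <= b && b <= a)) return mid;
    if ((b <= a && a <= c) || (c <= a && a <= b)) return start;
    return end;
}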

  • Mergesort has the distinct advantage of being stable and predictable for all distributions, but it requires extra memory between 50% and 100% of the size of the dataset.
  • A carefully implemented Quicksort is somewhat faster in many cases and works in place, requiring only log(N) stack space for recursion. Yet it is not stable, and tailor-made distributions will exhibit catastrophic performance, possibly crashing the program.
  • Radixsort is only appropriate for specific kinds of data such as integers and fixed-length strings. It also requires extra memory.
  • Countingsort would be the most efficient for the OP's dataset, as it only needs an array of 128 integers to count the occurrences of the different values, known to be in the range 0..127. It executes in linear time for any distribution; a sketch follows this list.
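A sketch of such a counting sort, assuming all values are known to lie in 0..127:

/* Counting sort for values in 0..127: one pass to tally the occurrences,
   then rewrite the array in ascending order. O(n) time, with only a
   fixed 128-slot count array as extra space. */
static void countingSort(int[] arr) {
    int[] counts = new int[128];
    for (int v : arr)
        counts[v]++;                     // tally each value
    int k = 0;
    for (int v = 0; v < 128; v++)        // rewrite in ascending order
        for (int c = counts[v]; c > 0; c--)
            arr[k++] = v;
}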


Source: https://stackoverflow.com/questions/59585662/benchmarking-quicksort-and-mergesort-yields-that-mergesort-is-faster
