Quicksort slower than Mergesort?

名媛妹妹 · 2021-02-07 02:01

I was working on implementing a quicksort yesterday, and then I ran it, expecting a faster runtime than the Mergesort (which I had also implemented). I ran the two, and while the quicksort was faster on smaller data sets, the mergesort pulled ahead and stayed ahead as the input grew.

15 answers
  • 2021-02-07 02:47

    Previously discussed on SO: "Why is quicksort better than mergesort?"


  • 2021-02-07 02:48

    I could imagine that by accessing memory directly, in C for example, you can improve the performance of quicksort more than is possible with mergesort.

    Another reason is that mergesort needs more memory, because it's hard to implement as an in-place sort.

    And specifically for your implementation, you could improve the choice of pivot; there are many different algorithms for finding a good pivot.

    As can be seen on Wikipedia, quicksort can be implemented in several different ways.
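
    To make the memory point concrete, here is a minimal sketch of my own (not code from the question): even a straightforward array mergesort needs an O(n) scratch buffer for the merge step, which is exactly the allocation an in-place quicksort avoids.

    #include <stdlib.h>
    #include <string.h>

    /* Merge the sorted halves a[lo..mid) and a[mid..hi) through a
     * scratch buffer; `tmp` (as large as the array) is the extra
     * memory mergesort pays for. */
    static void merge_step(int *a, int *tmp, size_t lo, size_t mid, size_t hi)
    {
        size_t i = lo, j = mid, k = lo;
        while (i < mid && j < hi)
            tmp[k++] = (a[j] < a[i]) ? a[j++] : a[i++];
        while (i < mid) tmp[k++] = a[i++];
        while (j < hi)  tmp[k++] = a[j++];
        memcpy(a + lo, tmp + lo, (hi - lo) * sizeof *a);
    }

    static void msort(int *a, int *tmp, size_t lo, size_t hi)
    {
        if (hi - lo < 2) return;
        size_t mid = lo + (hi - lo) / 2;
        msort(a, tmp, lo, mid);
        msort(a, tmp, mid, hi);
        merge_step(a, tmp, lo, mid, hi);
    }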

  • 2021-02-07 02:50

    One of the advantages of quicksort for relatively small array sizes is just an artifact of hardware implementation.

    On arrays, quicksort can be done in-place, meaning that you're reading from and writing to the same area of memory. Mergesort, on the other hand, typically requires allocating new buffers, meaning your memory access is more spread out. You can see both of these behaviors in your example implementations.

    As a result, for relatively small datasets, quicksort is more likely to get cache hits and therefore just tends to run faster on most hardware.

    Mergesort is still a pretty good solution for large data sets or other data structures, like linked lists, as your experiments confirm.
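
    To make the locality argument concrete, here is a rough sketch (my own, not the asker's implementation) of an in-place quicksort using a Hoare-style partition: every read and write lands inside the slice being sorted, so small partitions live entirely in cache.

    /* In-place quicksort, Hoare partition scheme: all accesses stay
     * within a[lo..hi], which is what keeps the working set cached. */
    static void swap_int(int *x, int *y) { int t = *x; *x = *y; *y = t; }

    static void quicksort(int *a, int lo, int hi)
    {
        if (lo >= hi) return;
        int pivot = a[lo + (hi - lo) / 2];
        int i = lo - 1, j = hi + 1;
        for (;;) {
            do { i++; } while (a[i] < pivot);
            do { j--; } while (a[j] > pivot);
            if (i >= j) break;
            swap_int(&a[i], &a[j]);
        }
        quicksort(a, lo, j);        /* Hoare: pivot may end up in either half */
        quicksort(a, j + 1, hi);
    }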

  • 2021-02-07 02:50

    Merge sort's worst case is quicksort's average case, so if you don't have a good implementation, merge sort is going to be faster overall. Getting quicksort to work fast is about avoiding sub-average cases. Choose a better pivot (median-of-3 helps) and you'll see a difference.
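
    A hedged sketch of that pivot choice (`median_of_three` is an illustrative helper name, not anything from the question): pick the median of the first, middle, and last elements, so that already-sorted and reverse-sorted inputs no longer trigger the worst case.

    /* Return the index of the median of a[lo], a[mid], a[hi]; using it
     * as the pivot defuses the worst case on already-sorted input. */
    static int median_of_three(const int *a, int lo, int hi)
    {
        int mid = lo + (hi - lo) / 2;
        if ((a[lo] < a[mid]) != (a[lo] < a[hi]))
            return lo;              /* a[lo] sits between the other two */
        if ((a[mid] < a[lo]) != (a[mid] < a[hi]))
            return mid;
        return hi;
    }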

  • 2021-02-07 02:51

    Mergesort is a lot slower for random array-based data, as long as it fits in RAM. This is the first time I've seen it debated.

    • qsort the shortest subarray first.
    • switch to insertion sort below 5-25 elements
    • do a normal pivot selection

    Your qsort is very slow because it tries to partition and qsort arrays of length 2 and 3; see the sketch below.
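
    Putting those three tweaks together, a tuned qsort might look like this (a sketch under my own naming, not the asker's code; the 16-element cutoff is just one reasonable value in the 5-25 range):

    #define CUTOFF 16

    static void insertion_sort(int *a, int lo, int hi)
    {
        for (int i = lo + 1; i <= hi; i++) {
            int v = a[i], j = i - 1;
            while (j >= lo && a[j] > v) { a[j + 1] = a[j]; j--; }
            a[j + 1] = v;
        }
    }

    static int partition(int *a, int lo, int hi)   /* normal middle pivot (Hoare) */
    {
        int p = a[lo + (hi - lo) / 2], i = lo - 1, j = hi + 1;
        for (;;) {
            do { i++; } while (a[i] < p);
            do { j--; } while (a[j] > p);
            if (i >= j) return j;
            int t = a[i]; a[i] = a[j]; a[j] = t;
        }
    }

    /* Recurse only into the shorter side; the longer side is handled by
     * the loop, so stack depth stays O(log n). Slices at or below the
     * cutoff are finished by insertion sort instead of more partitioning. */
    static void qsort_tuned(int *a, int lo, int hi)
    {
        while (hi - lo > CUTOFF) {
            int j = partition(a, lo, hi);
            if (j - lo < hi - j) {          /* left side is shorter */
                qsort_tuned(a, lo, j);
                lo = j + 1;
            } else {
                qsort_tuned(a, j + 1, hi);
                hi = j;
            }
        }
        insertion_sort(a, lo, hi);
    }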

  • 2021-02-07 02:53

    I actually just wrote a "linked-list comparative sort demo program" in C and arrived at a similar conclusion (that mergesort will beat quicksort for most uses), although I have been told that quicksort is generally not used for linked lists anyway. I would note that the choice of pivot values is a monster factor -- my initial version used a random node as the pivot, and when I refined it a bit to take a mean of two (random) nodes, the execution time for 1000000 records went from over 4 minutes to less than 10 seconds, putting it right on par with mergesort.

    Mergesort and quicksort have the same big-O best case (n*log(n)), and despite what people may try to claim, big O is really about iteration count and not comparison count. The biggest difference that can be produced between the two of them will always be to quicksort's detriment, and it involves lists that are already largely sorted or contain a large number of ties (when quicksort does better than mergesort, the difference will not be nearly so great).

    This is because ties and already-sorted segments streamline straight through mergesort: when two split lists come back to be merged, if one list already contains all the smaller values, all of the values on the left are compared one at a time to the first element of the right, and then (since the returned lists have an internal order) no further comparisons need be done and the right is simply iterated onto the end. That is to say, the number of iterations stays constant, but the number of comparisons is cut in half. If you are talking about actual time and are sorting strings, it's the comparisons that are expensive.
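
    That merge behavior is easy to see in code. Here is a sketch with a minimal node type of my own (not the demo program's actual structures):

    struct node { int key; struct node *next; };

    /* Merge two sorted lists. The moment either side runs out, the rest
     * of the other side is spliced on with zero further comparisons --
     * which is why ties and pre-sorted runs roughly halve the comparison
     * count without changing the iteration count. */
    static struct node *merge_lists(struct node *l, struct node *r)
    {
        struct node head = { 0, NULL }, *tail = &head;
        while (l && r) {
            if (r->key < l->key) { tail->next = r; r = r->next; }
            else                 { tail->next = l; l = l->next; }
            tail = tail->next;
        }
        tail->next = l ? l : r;    /* splice the remainder, no comparisons */
        return head.next;
    }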

    Ties and already-sorted segments in quicksort can easily lead to unbalanced lists if the pivot value is not carefully determined, and those unbalanced lists (e.g., one on the right, ten on the left) are what causes the slowdown. So, if you can get your quicksort to perform as well on an already-sorted list as it does on a randomized list, you've got a good method for finding the pivot.

    If you're interested, the demo program produces output like this:

    [root~/C] ./a.out -1 3 
    Using "", 0 records
    Primary Criteria offset=128
    
    Command (h for help, Q to quit): N
    How many records? 4000000
    New list is 562500.00 kb
    
    Command (h for help, Q to quit): m
    
    Mergesorting..............3999999 function calls
    123539969 Iterations     Comparison calls: 82696100
    Elapsed time: 0 min 9 sec
    
    
    Command (h for help, Q to quit): S
    Shuffled.
    
    Command (h for help, Q to quit): q
    
    Quicksorting..............4000000 function calls
    190179315 Iterations     Comparison calls: 100817020
    Elapsed time: 0 min 23 sec
    

    Although without the krazy kolors. There's some more stuff about it by me about halfway down this page.

    P.S. Neither sort requires extra memory with the linked list.
