Worse is better. Is there an example?

予麋鹿 2020-12-12 19:03

Is there a widely-used algorithm that has time complexity worse than that of another known algorithm, but that is a better choice in all practical situations?

24 answers
  • 2020-12-12 19:33

    Not quite on the mark, but backtracking-based regular expressions have an exponential worst case versus O(N) for DFA-based regular expressions, yet backtracking-based regular expressions are almost always used rather than DFA-based ones.

    EDIT: (JFS)

    Regular Expression Matching Can Be Simple And Fast (but is slow in Java, Perl, PHP, Python, Ruby, ...):

    The power that backreferences add comes at great cost: in the worst case, the best known implementations require exponential search algorithms.

    Regular Expression Engines:

    This method (DFA) is really more efficient, and can even be adapted to allow capturing and non-greedy matching, but it also has important drawbacks:

    • Lookarounds are impossible
    • Back-references are also impossible
    • Regex pre-compilation is longer and takes more memory

    On the bright side, as well as avoiding worst-case exponential running times, DFA approaches avoid worst-case stack usage that is linear in the size of the input data.
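    To see this concretely, here is a minimal Python sketch (the pattern and input sizes are illustrative, not from the articles above). Python's re module is a backtracking engine, so a pattern with nested quantifiers blows up on a failing match:

        import re
        import time

        # Nested quantifiers force a backtracking engine to try exponentially
        # many ways of partitioning the 'a's before it can report failure.
        pattern = re.compile(r"(a+)+$")

        for n in (10, 15, 20, 22):
            text = "a" * n + "b"  # the trailing 'b' guarantees the match fails
            start = time.perf_counter()
            pattern.match(text)
            print(f"n={n}: {time.perf_counter() - start:.4f}s")

        # Runtime roughly doubles per extra 'a'; a DFA-based engine such as
        # RE2 rejects the same input in time linear in n.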


  • 2020-12-12 19:34

    Simplex is an algorithm which has exponential time complexity in the worst case, but on practically any real instance it runs in polynomial time. Polynomial-time algorithms for linear programming do exist (e.g. the ellipsoid method and interior-point methods), but they are complicated and usually carry large constant factors.
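    For a feel of how simplex gets used in practice, here is a tiny sketch assuming SciPy is installed (the solver detail is an assumption: linprog's "highs" backend typically runs a dual-simplex method on a small dense LP like this):

        from scipy.optimize import linprog

        # Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
        # linprog minimizes, so the objective is negated.
        c = [-3, -2]
        A_ub = [[1, 1], [1, 3]]
        b_ub = [4, 6]

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
        print(res.x, -res.fun)  # optimal vertex (4, 0), objective value 12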

  • 2020-12-12 19:34

    Mergesort versus Quicksort

    Quicksort has an average time complexity of O(n log n). It can sort arrays in place, needing only O(log n) auxiliary space for the recursion stack.

    Merge sort also has an average time complexity of O(n log n), but its space complexity is much worse: Θ(n). (There is a special case for linked lists, which can be merge-sorted in place.)

    Because the worst-case time complexity of quicksort is Θ(n^2) (i.e. when all elements fall on the same side of every pivot), while mergesort's worst case is O(n log n), mergesort is the default choice for library implementers.

    In this case, I think that the predictability of mergesort's worst-case time complexity trumps quicksort's much lower memory requirements.

    Given that it is possible to vastly reduce the likelihood of quicksort's worst case (via random selection of the pivot, for example, as sketched below), I think one could argue that mergesort is the worse choice in all but quicksort's pathological cases.
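    As a sketch of that mitigation (a minimal toy implementation, not library-grade code), picking the pivot at random makes the Θ(n^2) case astronomically unlikely on any fixed input:

        import random

        def quicksort(a):
            # In-place quicksort; the random pivot defends against adversarial
            # inputs that would drive a fixed pivot choice to Theta(n^2).
            def sort(lo, hi):
                if lo >= hi:
                    return
                pivot = a[random.randint(lo, hi)]
                i, j = lo, hi
                while i <= j:
                    while a[i] < pivot:
                        i += 1
                    while a[j] > pivot:
                        j -= 1
                    if i <= j:
                        a[i], a[j] = a[j], a[i]
                        i, j = i + 1, j - 1
                sort(lo, j)
                sort(i, hi)
            sort(0, len(a) - 1)

        data = [5, 3, 8, 1, 9, 2]
        quicksort(data)
        print(data)  # [1, 2, 3, 5, 8, 9]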

  • 2020-12-12 19:34

    Insertion sort, despite having O(n^2) complexity, is faster for small collections (n < 10) than any other sorting algorithm. That's because the nested loop is small and executes fast. Many libraries (including the STL) that provide a sort method actually use it on small subsets of the data to speed things up.
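    A sketch of that trick (the cutoff value is illustrative; real libraries tune it empirically): a merge sort that hands small subarrays to insertion sort:

        CUTOFF = 10  # illustrative threshold, not a tuned constant

        def insertion_sort(a, lo, hi):
            # O(n^2) in general, but the tight inner loop wins on tiny ranges.
            for i in range(lo + 1, hi + 1):
                key, j = a[i], i - 1
                while j >= lo and a[j] > key:
                    a[j + 1] = a[j]
                    j -= 1
                a[j + 1] = key

        def hybrid_mergesort(a, lo=0, hi=None):
            if hi is None:
                hi = len(a) - 1
            if hi - lo + 1 <= CUTOFF:
                insertion_sort(a, lo, hi)  # small subarray: fall back
                return
            mid = (lo + hi) // 2
            hybrid_mergesort(a, lo, mid)
            hybrid_mergesort(a, mid + 1, hi)
            left = a[lo:mid + 1]  # merge back using a temporary copy
            i, j, k = 0, mid + 1, lo
            while i < len(left) and j <= hi:
                if left[i] <= a[j]:
                    a[k] = left[i]
                    i += 1
                else:
                    a[k] = a[j]
                    j += 1
                k += 1
            a[k:k + len(left) - i] = left[i:]

        data = [9, 4, 7, 1, 3, 8, 2, 6, 5, 0, 11, 10]
        hybrid_mergesort(data)
        print(data)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]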

  • 2020-12-12 19:35

    Radix sort has time complexity O(n) for fixed-length inputs, but quicksort is more often used despite its worse asymptotic runtime, because the per-element overhead of radix sort is typically much higher.
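    A minimal LSD radix sort sketch for fixed-width non-negative integers (byte-at-a-time; the details are illustrative) shows where that overhead comes from: every pass moves every element, so the constant hidden in the O(n) is large:

        def radix_sort(nums, key_bytes=4):
            # One stable bucketing pass per byte: O(n * key_bytes) overall,
            # but each pass rebuilds the whole list, hence the big constant.
            for shift in range(0, key_bytes * 8, 8):
                buckets = [[] for _ in range(256)]
                for x in nums:
                    buckets[(x >> shift) & 0xFF].append(x)
                nums = [x for bucket in buckets for x in bucket]
            return nums

        print(radix_sort([170, 45, 75, 90, 802, 24, 2, 66]))
        # [2, 24, 45, 66, 75, 90, 170, 802]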

  • 2020-12-12 19:36

    This statement can be applied to nearly any parallel algorithm. The reason they were not heavily researched in the early days of computing is that, for a single thread of execution (think uniprocessor), they are indeed slower than their well-known sequential counterparts in terms of asymptotic complexity, constant factors for small n, or both. However, in the context of current and future computing platforms, an algorithm which can make use of a few (think multicore), a few hundred (think GPU), or a few thousand (think supercomputer) processing elements will beat the pants off the sequential version in wall-clock time, even if the total time/energy spent by all processors is much greater for the parallel version.

    Sorts, graph algorithms, and linear algebra techniques alike can be accelerated in terms of wall-clock time by bearing the cost of a little extra bookkeeping, communication, and runtime overhead in order to parallelize.
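    A small Python sketch of the trade-off (the task and chunk sizes are made up for illustration): the parallel version does strictly more total work, spawning worker processes and shipping results around, yet finishes sooner on a multicore machine:

        from multiprocessing import Pool

        def count_primes(bounds):
            # Deliberately CPU-bound: trial division over a half-open range.
            lo, hi = bounds
            return sum(
                1 for n in range(max(lo, 2), hi)
                if all(n % d for d in range(2, int(n ** 0.5) + 1))
            )

        if __name__ == "__main__":
            chunks = [(i, i + 25_000) for i in range(0, 100_000, 25_000)]
            with Pool() as pool:  # one worker process per core by default
                total = sum(pool.map(count_primes, chunks))
            print(total)  # same answer as a sequential loop, less wall time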
