Worse is better. Is there an example?

予麋鹿 2020-12-12 19:03

Is there a widely-used algorithm that has time complexity worse than that of another known algorithm, but is a better choice in all practical situations?

24 answers
  • 2020-12-12 19:47

    Often an algorithm (like quicksort) that can be easily parallelized or randomized will be chosen over competing algorithms that lack these qualities. Furthermore, it is often the case that an approximate solution to a problem is acceptable when an exact algorithm would yield exponential runtimes, as in the Travelling Salesman Problem.
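    As a concrete illustration of the second half of this point, here is a minimal sketch (not from the answer) of the nearest-neighbour heuristic for the Travelling Salesman Problem: an approximate tour in roughly O(n^2) time, where an exact search would take exponential time. The coordinates are made up purely for illustration.

        import math

        def nearest_neighbour_tour(cities):
            """Greedy approximate TSP tour: always visit the closest unvisited city."""
            unvisited = set(range(1, len(cities)))
            tour = [0]                     # start (arbitrarily) at city 0
            while unvisited:
                last = cities[tour[-1]]
                nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
                unvisited.remove(nxt)
                tour.append(nxt)
            return tour

        cities = [(0, 0), (2, 1), (5, 0), (3, 4), (1, 3)]
        print(nearest_neighbour_tour(cities))   # -> [0, 1, 4, 3, 2], decent but not necessarily optimal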

  • 2020-12-12 19:48

    One example is from computational geometry. Polygon triangulation has a worst-case O(n) algorithm due to Chazelle, but it is almost never implemented in practice because of the difficulty of implementation and the huge constant factor.
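    To make the contrast concrete, here is a minimal sketch (not from the answer) of the kind of simple, roughly O(n^2) ear-clipping triangulation that people actually implement instead of Chazelle's linear-time algorithm. It assumes a simple polygon with vertices listed in counter-clockwise order.

        def cross(o, a, b):
            """2D cross product of vectors o->a and o->b."""
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def point_in_triangle(p, a, b, c):
            """True if p lies inside (or on the edge of) the CCW triangle abc."""
            return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

        def ear_clip(polygon):
            """Triangulate a simple CCW polygon; returns vertex-index triples."""
            idx = list(range(len(polygon)))
            triangles = []
            while len(idx) > 3:
                for i in range(len(idx)):
                    prev, cur, nxt = idx[i - 1], idx[i], idx[(i + 1) % len(idx)]
                    a, b, c = polygon[prev], polygon[cur], polygon[nxt]
                    if cross(a, b, c) <= 0:
                        continue            # reflex vertex, cannot be an ear
                    others = (polygon[j] for j in idx if j not in (prev, cur, nxt))
                    if any(point_in_triangle(p, a, b, c) for p in others):
                        continue            # another vertex inside: not an ear
                    triangles.append((prev, cur, nxt))
                    idx.pop(i)              # clip the ear and start over
                    break
            triangles.append(tuple(idx))
            return triangles

        notched_square = [(0, 0), (4, 0), (4, 4), (2, 2), (0, 4)]
        print(ear_clip(notched_square))     # three triangles covering the polygon

    The whole thing fits comfortably in a page of code, which is precisely why the asymptotically worse approach wins in practice.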

  • 2020-12-12 19:48

    I've always understood the term 'worse is better' to refer to problems whose correct solutions are very complex, but for which an approximate (or "good enough") solution exists that is relatively easier to comprehend.

    This makes for easier design, production, and maintenance.

  • 2020-12-12 19:49

    Iterative Deepening

    When compared to a trivial depth-first search augmented with alpha-beta pruning, an iterative deepening search used in conjunction with a poor (or non-existent) branch-ordering heuristic would result in many more nodes being scanned. However, when a good branch-ordering heuristic is used, a significant portion of the tree is eliminated due to the enhanced effect of the alpha-beta pruning. A second advantage, unrelated to time or space complexity, is that a guess of the solution over the problem domain is established early, and that guess is refined as the search progresses. It is this second advantage that makes iterative deepening so appealing in many problem domains.
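    As a sketch (the GameState interface with moves(), play(), evaluate() and is_terminal() is an assumption, not part of the answer), iterative deepening is just a loop of progressively deeper depth-limited alpha-beta searches, reusing the previous iteration's best move to improve move ordering:

        import math

        def alphabeta(state, depth, alpha, beta):
            """Depth-limited negamax with alpha-beta pruning; returns a score."""
            if depth == 0 or state.is_terminal():
                return state.evaluate()
            best = -math.inf
            for move in state.moves():
                score = -alphabeta(state.play(move), depth - 1, -beta, -alpha)
                best = max(best, score)
                alpha = max(alpha, score)
                if alpha >= beta:           # cutoff: the opponent avoids this line
                    break
            return best

        def iterative_deepening(state, max_depth):
            """Search depth 1, 2, ..., max_depth, keeping the best root move so far."""
            best_move = None
            for depth in range(1, max_depth + 1):
                moves = list(state.moves())
                if best_move in moves:      # try the previous best move first
                    moves.remove(best_move)
                    moves.insert(0, best_move)
                alpha = best_score = -math.inf
                for move in moves:
                    score = -alphabeta(state.play(move), depth - 1, -math.inf, -alpha)
                    if score > best_score:
                        best_score, best_move = score, move
                    alpha = max(alpha, score)
                # best_move is already a usable answer here if we stop early
            return best_move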

  • 2020-12-12 19:54

    The Spaghetti sort is better than any other sorting algorithm in that it is O(n) to set up, O(1) to execute, and O(n) to extract the sorted data. It accomplishes all of this in O(n) space. (Overall performance: O(n) in both time and space.) Yet, for some strange (obvious) reason, nobody uses it for anything at all, preferring the far inferior O(n log n) algorithms and their ilk.

  • 2020-12-12 19:56
    1. A y-fast trie has O(log log u) time complexity for successor/predecessor queries, but it has relatively big constants, so a BST (which is O(log n)) is probably better; log n is very small anyway in any practical use, so the constants matter most.

    2. Fusion trees have O(log n / log log u) query complexity, but with very big constants, and a BST achieves O(log n), which again is better in practice (also, log log u is extremely small, so O(log n / log log u) = O(log n) for any practical purpose).

    3. The deterministic median (median-of-medians) selection algorithm is very slow even though it is O(n), so sorting (O(n log n)) or the randomized version (whose worst case is quadratic, but which takes O(n) with very high probability, and the probability that it needs T·n time drops off roughly exponentially as T grows) is much better in practice, as sketched below.
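    A minimal sketch of the randomized version mentioned in point 3, randomized selection (quickselect): expected O(n) time with small constants, versus the deterministic median-of-medians algorithm, which is also O(n) but far slower in practice.

        import random

        def quickselect(items, k):
            """Return the k-th smallest element (0-indexed) of items."""
            items = list(items)
            while True:
                if len(items) == 1:
                    return items[0]
                pivot = random.choice(items)
                lows   = [x for x in items if x < pivot]
                pivots = [x for x in items if x == pivot]
                highs  = [x for x in items if x > pivot]
                if k < len(lows):
                    items = lows
                elif k < len(lows) + len(pivots):
                    return pivot            # the pivot itself is the answer
                else:
                    k -= len(lows) + len(pivots)
                    items = highs

        data = [9, 1, 8, 2, 7, 3, 6, 4, 5]
        print(quickselect(data, len(data) // 2))   # median -> 5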
