Worse is better. Is there an example?

予麋鹿 2020-12-12 19:03

Is there a widely-used algorithm that has time complexity worse than that of another known algorithm, but is a better choice in all practical situations?

24 answers
  • "Worse is Better" can be seen in languages too, for example the ideas behind Perl, Python, Ruby, Php even C# or Java, or whatever language that isn't assembler or C (C++ might fit here or not).

    Basically there is always a "perfect" solution, but many times its better to use a "worse" tool/algorithm/language to get results faster, and with less pain. Thats why people use these higher level languages, although they are "worse" from the ideal computer-language point of view, and instead are more human oriented.

  • 2020-12-12 19:38

    If I understand the question, you are asking for algorithms that are theoretically better but practically worse in all situations. Therefore, one would not expect them to actually be used unless by mistake.

    One possible example is universal memoization. Theoretically, all deterministic function calls should be memoized for all possible inputs. That way complex calculations could be replaced by simple table lookups. For a wide range of problems, this technique productively trades time for storage space. But suppose there were a central repository of the results of all possible inputs for all possible functions used by all of humanity's computers. The first time anyone anywhere did a calculation it would be the last time. All subsequent tries would result in a table lookup.
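
    For the ordinary (non-universal) case, here is a minimal memoization sketch in Python, assuming a pure function; the unbounded cache plays the role of the repository, trading memory for repeated computation:

        import functools

        @functools.lru_cache(maxsize=None)   # unbounded cache: every distinct input is stored
        def slow_square(n: int) -> int:
            # stand-in for an expensive deterministic calculation
            total = 0
            for _ in range(n):
                total += n
            return total

        slow_square(10_000)   # computed once
        slow_square(10_000)   # answered from the cache: a table lookup, no recomputation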

    But there are several reasons I can think of for not doing this:

    1. The memory space required to store all results would likely be impossibly large. It seems likely the number of needed bits would exceed the number of particles in the universe. (But even the task of estimating that number is daunting.)

    2. It would be difficult to construct an efficient algorithm for doing the memoization of such a huge problem space.

    3. The cost of communication with the central repository would likely exceed the benefit as the number of clients increases.

    I'm sure you can think of other problems.

    In fact, this sort of time/space trade-off is incredibly common in practice. Ideally, all data would be stored in L1 cache, but because of size limitations you always need to put some data on disk or (horrors!) tape. Advancing technology reduces some of the pain of these trade-offs, but as I suggested above there are limits.


    In response to J.F. Sebastian's comment:

    Suppose that instead of a universal memoization repository, we consider a factorial repository. And it won't hold the results for all possible inputs; rather, it will be limited to the results for 1! through N!. Now it's easy to see that any computer that did factorials would benefit from looking up the result rather than doing the calculation. Even calculating (N+1)! would be a huge win, since it reduces to a single multiplication: (N+1)! = N! × (N+1).
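
    A toy version of that factorial repository, with the table kept in memory and indexed by n (purely illustrative); extending it from N! to (N+1)! costs one multiplication:

        # table[n] holds n!; the list is the "repository" of precomputed results
        table = [1]   # 0! = 1

        def factorial(n: int) -> int:
            # extend the table only as far as needed, reusing the largest stored result
            while len(table) <= n:
                table.append(table[-1] * len(table))
            return table[n]   # later calls for any m <= n are pure lookups

        print(factorial(20))   # filled in incrementally
        print(factorial(21))   # one extra multiplication: 21! = 20! * 21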

    Now to make this "better" algorithm worse, we could either increase N or increase the number of computers using the repository.

    But I'm probably not understanding some subtlety of the question. The way I'm thinking of it, I keep coming up with examples that scale well until they don't.

  • 2020-12-12 19:40

    Monte Carlo integration was already suggested, but a more specific example is Monte Carlo pricing in finance. The method is much easier to code and can handle more products than some alternatives, but it is much slower than, say, finite differences.

    It's not practical to run a 20-dimensional finite-difference algorithm, but a 20-dimensional Monte Carlo pricing run is easy to set up.
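
    A minimal sketch of Monte Carlo pricing of a European call under geometric Brownian motion (the function name and parameters are illustrative, and NumPy is assumed to be available); pricing a 20-dimensional basket would only change the payoff line, whereas a 20-dimensional finite-difference grid is hopeless:

        import numpy as np

        def mc_call_price(s0, strike, rate, vol, maturity, n_paths=100_000, seed=0):
            # simulate terminal prices under geometric Brownian motion
            rng = np.random.default_rng(seed)
            z = rng.standard_normal(n_paths)
            st = s0 * np.exp((rate - 0.5 * vol**2) * maturity
                             + vol * np.sqrt(maturity) * z)
            # the price estimate is the discounted average payoff
            payoff = np.maximum(st - strike, 0.0)
            return np.exp(-rate * maturity) * payoff.mean()

        print(mc_call_price(s0=100, strike=100, rate=0.05, vol=0.2, maturity=1.0))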

  • 2020-12-12 19:41

    Quicksort has a worst-case time complexity of O(N^2), but it is usually considered better in practice than other sorting algorithms that have O(N log N) worst-case time complexity.

  • 2020-12-12 19:44
    Quicksort has a worst-case time complexity of O(N^2)! Yet it is considered better than other sorting algorithms like mergesort and heapsort, which have O(N log N) worst-case time complexity. Likely reasons:

    1. it sorts in place,
    2. it has good cache locality and small constant factors,
    3. very little code is involved (see the sketch below).
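
    A minimal in-place quicksort sketch (Lomuto partition with the last element as pivot, purely illustrative):

        def quicksort(a, lo=0, hi=None):
            # sorts the list a in place between indices lo and hi (inclusive)
            if hi is None:
                hi = len(a) - 1
            if lo >= hi:
                return
            pivot = a[hi]                  # partition around the last element
            i = lo
            for j in range(lo, hi):
                if a[j] <= pivot:
                    a[i], a[j] = a[j], a[i]
                    i += 1
            a[i], a[hi] = a[hi], a[i]      # the pivot lands in its final position
            quicksort(a, lo, i - 1)        # recurse on the two partitions
            quicksort(a, i + 1, hi)

        data = [5, 2, 9, 1, 5, 6]
        quicksort(data)
        print(data)                        # [1, 2, 5, 5, 6, 9]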
    
  • 2020-12-12 19:45

    There's an O(n) algorithm for selecting the k-th largest element from an unsorted set, but it is rarely used instead of sorting, which is of course O(n log n).
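
    A sketch of the expected-O(n) approach, quickselect (the guaranteed-linear median-of-medians variant is more involved); shown here for the k-th smallest, with the k-th largest being symmetric:

        import random

        def quickselect(items, k):
            # k-th smallest (1-based) in expected O(n) time; copies lists for clarity
            pivot = random.choice(items)
            lows   = [x for x in items if x < pivot]
            pivots = [x for x in items if x == pivot]
            highs  = [x for x in items if x > pivot]
            if k <= len(lows):
                return quickselect(lows, k)
            if k <= len(lows) + len(pivots):
                return pivot
            return quickselect(highs, k - len(lows) - len(pivots))

        print(quickselect([7, 1, 5, 3, 9], 2))   # 3, the 2nd-smallest element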
