Are there any cases where you would prefer a higher big-O time complexity algorithm over the lower one?

说谎 2020-11-28 01:06

Are there are any cases where you would prefer O(log n) time complexity to O(1) time complexity? Or O(n) to O(log n)?

22 Answers
  • 2020-11-28 01:46

    Simply put: because the coefficient - the costs associated with setup, storage, and the execution time of that step - can be much, much larger for an algorithm with a smaller big-O than for one with a larger big-O. Big-O is only a measure of an algorithm's scalability. (The sketch at the end of this answer illustrates this with two pure-Python sorts.)

    Consider the following example from the Hacker's Dictionary, proposing a sorting algorithm relying on the Multiple Worlds Interpretation of Quantum Mechanics:

    1. Permute the array randomly using a quantum process,
    2. If the array is not sorted, destroy the universe.
    3. All remaining universes are now sorted [including the one you are in].

    (Source: http://catb.org/~esr/jargon/html/B/bogo-sort.html)

    Notice that the big-O of this algorithm is O(n), which beats any known sorting algorithm to date on generic items. The coefficient of the linear step is also very low (since it's only a comparison, not a swap, that is done linearly). A similar algorithm could, in fact, be used to solve any problem in both NP and co-NP in polynomial time, since each possible solution (or possible proof that there is no solution) can be generated using the quantum process, then verified in polynomial time.

    However, in most cases, we probably don't want to take the risk that Multiple Worlds might not be correct, not to mention that the act of implementing step 2 is still "left as an exercise for the reader".
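
    Below is a minimal sketch of the coefficient point (mine, not the answerer's): two pure-Python sorts on a tiny input, where the asymptotically worse O(n^2) insertion sort typically beats O(n log n) merge sort because its constant factor is far smaller. The function names and the input size of 8 are arbitrary illustrative choices.

    ```python
    import random
    import timeit

    def insertion_sort(a):
        """O(n^2) worst case, but tiny constants: no recursion, no extra allocation."""
        a = list(a)
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:
                a[j + 1] = a[j]
                j -= 1
            a[j + 1] = key
        return a

    def merge_sort(a):
        """O(n log n), but pays for recursion and list slicing/allocation on every call."""
        if len(a) <= 1:
            return list(a)
        mid = len(a) // 2
        left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i])
                i += 1
            else:
                out.append(right[j])
                j += 1
        out.extend(left[i:])
        out.extend(right[j:])
        return out

    small = [random.random() for _ in range(8)]

    # For tiny n the "worse" algorithm usually wins on wall-clock time, which is
    # why production sorts switch to insertion sort for short runs.
    print("insertion:", timeit.timeit(lambda: insertion_sort(small), number=100_000))
    print("merge:    ", timeit.timeit(lambda: merge_sort(small), number=100_000))
    ```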

  • 2020-11-28 01:47

    To put my 2 cents in:

    Sometimes a worse complexity algorithm is selected in place of a better one, when the algorithm runs on a certain hardware environment. Suppose our O(1) algorithm non-sequentially accesses every element of a very big, fixed-size array to solve our problem. Then put that array on a mechanical hard drive, or a magnetic tape.

    In that case, the O(log n) algorithm (suppose it accesses the disk sequentially) becomes more favourable.
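
    A rough in-memory analogy of the same effect (my sketch, not the answerer's): random access into a large list versus a sequential pass over it. In RAM the gap comes from cache locality and is much smaller than the seek penalty of a mechanical disk or tape, but the direction of the effect is the same. The size of two million elements is an arbitrary choice.

    ```python
    import random
    import timeit

    data = list(range(2_000_000))
    sequential_order = list(range(len(data)))
    random_order = sequential_order[:]
    random.shuffle(random_order)

    def visit(order):
        """Sum every element of `data`, visiting indices in the given order."""
        total = 0
        for i in order:
            total += data[i]
        return total

    # Sequential access benefits from locality; random access pays for it on every step.
    # On spinning disks or tape the same difference is orders of magnitude larger.
    print("sequential:", timeit.timeit(lambda: visit(sequential_order), number=1))
    print("random:    ", timeit.timeit(lambda: visit(random_order), number=1))
    ```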

  • 2020-11-28 01:48

    Adding to the already good answers: a practical example would be hash indexes vs. B-tree indexes in the PostgreSQL database.

    A hash index builds a hash table to locate the data on disk, while a B-tree index, as the name suggests, uses a B-tree data structure.

    In big-O terms these are O(1) vs. O(log N).

    Hash indexes are presently discouraged in Postgres because, in real-life situations and particularly in database systems, collision-free hashing is very hard to achieve (the worst case can degrade to O(N)), and that in turn makes hash indexes even harder to make crash-safe (via write-ahead logging - WAL - in Postgres).

    This tradeoff is acceptable in this situation because O(log N) is good enough for indexes, implementing a robust O(1) is quite hard, and the time difference does not really matter in practice.
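
    A hedged in-memory analogy (mine, not the answerer's, and not how Postgres actually implements its indexes): a Python dict stands in for the hash index and a sorted list with bisect stands in for the B-tree. With a million keys, log2(n) is only about 20, so both lookups are fast and the asymptotic gap is rarely what decides the choice.

    ```python
    import bisect
    import timeit

    n = 1_000_000
    keys = list(range(n))                       # already sorted, standing in for a B-tree
    hash_index = {k: f"row-{k}" for k in keys}  # standing in for a hash index

    def hash_lookup(k):
        return hash_index[k]                    # expected O(1)

    def tree_lookup(k):
        i = bisect.bisect_left(keys, k)         # O(log n): ~20 comparisons for n = 1,000,000
        return f"row-{keys[i]}"

    # Both are sub-microsecond per lookup; "O(log N) is good enough" in practice.
    print("hash:", timeit.timeit(lambda: hash_lookup(123_456), number=100_000))
    print("tree:", timeit.timeit(lambda: tree_lookup(123_456), number=100_000))
    ```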

  • 2020-11-28 01:49

    Whenever n is bounded and the constant multiplier of the O(1) algorithm is higher than the bound on log(n). For example, storing values in a hashset is O(1), but may require an expensive computation of a hash function. If the data items can be trivially compared (with respect to some order) and the bound on n is such that log n is significantly less than the hash computation on any one item, then storing them in a balanced binary tree may be faster than storing them in a hashset.
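
    A sketch of that scenario under stated assumptions (the Item class, its deliberately expensive hash, and the sizes are all mine): every lookup in the hash-based set must re-hash the key, while the sorted list (standing in for a balanced binary tree) only needs about log2(64) = 6 cheap comparisons, so the O(log n) structure can win despite the worse big-O.

    ```python
    import bisect
    import hashlib
    import timeit

    class Item:
        """Items that are expensive to hash but trivially cheap to compare."""

        def __init__(self, payload: bytes):
            self.payload = payload

        def __hash__(self):
            # Stand-in for an expensive hash (hypothetical, e.g. hashing a large blob).
            digest = hashlib.sha256(self.payload * 1000).digest()
            return int.from_bytes(digest[:8], "big")

        def __eq__(self, other):
            return self.payload == other.payload

        def __lt__(self, other):
            # Cheap comparison, usually decided by the first differing byte.
            return self.payload < other.payload

    items = [Item(bytes([i])) for i in range(64)]   # n is small and bounded
    target = Item(bytes([32]))

    hash_set = set(items)          # O(1) membership, but every lookup re-hashes the key
    sorted_items = sorted(items)   # stands in for a balanced binary tree: O(log n) lookups

    def in_hash_set():
        return target in hash_set

    def in_sorted_list():
        i = bisect.bisect_left(sorted_items, target)
        return i < len(sorted_items) and sorted_items[i] == target

    # Roughly 6 cheap comparisons vs one expensive hash per lookup.
    print("hash set:   ", timeit.timeit(in_hash_set, number=100_000))
    print("sorted list:", timeit.timeit(in_sorted_list, number=100_000))
    ```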
