Are there any cases where you would prefer O(log n) time complexity to O(1) time complexity? Or O(n) to O(log n)?
The possibility of executing an algorithm in parallel.

I don't know whether there is an example for the classes O(log n) and O(1), but for some problems you would choose an algorithm with a higher complexity class because it is easier to execute in parallel.
Some algorithms cannot be parallelized but have a very low complexity class. Consider another algorithm that achieves the same result and can be parallelized easily, but has a higher complexity class. When executed on one machine, the second algorithm is slower, but when executed on multiple machines, the real execution time gets lower and lower, while the first algorithm cannot be sped up.
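One commonly cited illustration of this trade-off (my own addition, not an example the answer gives) is bitonic sort: it performs O(n log² n) comparisons, asymptotically more than mergesort's O(n log n), but every stage of its fixed comparison network consists of independent compare-and-swap operations, so with enough processors it finishes in O(log² n) parallel steps. A minimal sequential sketch of the network, assuming the input length is a power of two:

```python
def bitonic_sort(a, ascending=True):
    # Illustrative sketch only; assumes len(a) is a power of two.
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    # Sort one half ascending and the other descending to form a bitonic sequence.
    first = bitonic_sort(a[:mid], True)
    second = bitonic_sort(a[mid:], False)
    return bitonic_merge(first + second, ascending)

def bitonic_merge(a, ascending):
    if len(a) <= 1:
        return list(a)
    a = list(a)
    mid = len(a) // 2
    # All of these compare-and-swaps are independent of one another,
    # which is what makes the network easy to run in parallel.
    for i in range(mid):
        if (a[i] > a[i + mid]) == ascending:
            a[i], a[i + mid] = a[i + mid], a[i]
    return bitonic_merge(a[:mid], ascending) + bitonic_merge(a[mid:], ascending)

print(bitonic_sort([7, 3, 6, 1, 8, 2, 5, 4]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```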
My answer here, Fast random weighted selection across all rows of a stochastic matrix, is an example where an algorithm with complexity O(m) is faster than one with complexity O(log m) when m is not too big.
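A rough sketch of the idea (my own, not the code from the linked answer), assuming the weights of one row have already been turned into a cumulative-sum array: for small m the linear scan's tiny constant factor often beats the binary search, despite the worse asymptotic bound.

```python
import bisect
import random

def pick_linear(cum_weights):
    # O(m): scan the cumulative weights until the random threshold is passed.
    r = random.random() * cum_weights[-1]
    for i, c in enumerate(cum_weights):
        if r < c:
            return i
    return len(cum_weights) - 1

def pick_bisect(cum_weights):
    # O(log m): binary search for the same threshold; wins once m is large.
    r = random.random() * cum_weights[-1]
    return bisect.bisect_right(cum_weights, r)

cum = [0.2, 0.5, 0.9, 1.0]   # cumulative weights of one row
print(pick_linear(cum), pick_bisect(cum))
```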
A more general question is whether there are situations where one would prefer an O(f(n)) algorithm to an O(g(n)) algorithm even though g(n) << f(n) as n tends to infinity. As others have already mentioned, the answer is clearly "yes" in the case where f(n) = log(n) and g(n) = 1. It is sometimes yes even in the case that f(n) is exponential but g(n) is polynomial. A famous and important example is that of the simplex algorithm for solving linear programming problems. In the 1970s it was shown to be O(2^n) in the worst case. Thus, its worst-case behavior is infeasible. But its average-case behavior is extremely good, even for practical problems with tens of thousands of variables and constraints. In the 1980s, polynomial-time algorithms for linear programming (such as Karmarkar's interior-point algorithm) were discovered, but 30 years later the simplex algorithm still seems to be the algorithm of choice (except for certain very large problems). This is for the obvious reason that average-case behavior is often more important than worst-case behavior, but also for a more subtle reason: the simplex algorithm is in some sense more informative (e.g. sensitivity information is easier to extract).
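A minimal sketch of what that choice looks like in practice (my own, assuming a recent SciPy, version 1.6 or later): the same tiny linear program is solved once with a dual-simplex solver and once with an interior-point solver; the method strings "highs-ds" and "highs-ipm" are SciPy's names for the two families, and the decision between them rests on the practical grounds described above rather than on worst-case bounds.

```python
from scipy.optimize import linprog

c = [-1, -2]                 # maximize x + 2y  ->  minimize -x - 2y
A_ub = [[1, 1], [1, 3]]      # x + y <= 4,  x + 3y <= 6, with x, y >= 0 by default
b_ub = [4, 6]

simplex_res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ds")   # dual simplex
ipm_res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs-ipm")      # interior point

print("simplex  :", simplex_res.x, simplex_res.fun)
print("interior :", ipm_res.x, ipm_res.fun)
```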
Or, in a realtime situation where you need a firm upper bound, you would select e.g. heapsort as opposed to quicksort, because heapsort's average behaviour is also its worst-case behaviour.
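A minimal heapsort sketch (mine, using Python's standard heapq rather than an in-place sift-down) just to show where the guaranteed O(n log n) bound comes from:

```python
import heapq

def heapsort(items):
    # Worst case and average case are both O(n log n):
    # heapify is O(n), and each of the n pops is O(log n).
    heap = list(items)
    heapq.heapify(heap)
    return [heapq.heappop(heap) for _ in range(len(heap))]

print(heapsort([5, 1, 4, 2, 3]))  # [1, 2, 3, 4, 5]
```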
It is often the case in security applications that we want to design problems whose algorithms are slow on purpose, in order to stop someone from obtaining an answer to the problem too quickly.
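A minimal sketch of that "slow on purpose" idea, assuming only Python's standard hashlib (the iteration count is illustrative, not a recommendation): PBKDF2 repeats a cheap hash many times, so each brute-force password guess becomes proportionally more expensive while a single legitimate login check stays affordable.

```python
import hashlib
import os

salt = os.urandom(16)
password = b"correct horse battery staple"

# 600_000 iterations is an illustrative work factor, not a recommendation;
# raising it makes every brute-force guess proportionally more expensive.
digest = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
print(digest.hex())
```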
Here are a couple of examples off the top of my head.
Cracking an encryption key takes O(2^n) time, where n is the bit-length of the key (this is brute force).

Elsewhere in CS, quicksort is O(n^2) in the worst case but O(n log n) in the typical case. For this reason, "Big O" analysis sometimes isn't the only thing you care about when analyzing algorithm efficiency.
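A quick illustration of that quicksort point (a deliberately naive sketch of mine, not production code): with a first-element pivot, already-sorted input produces maximally unbalanced partitions, so the work degrades toward n^2/2 comparisons even though random input averages n log n.

```python
def naive_quicksort(a):
    # Deliberately naive: the first element is always the pivot.
    # On already-sorted input, `left` is empty at every level, so the
    # recursion depth is ~n and the total comparisons are ~n^2/2.
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]
    right = [x for x in rest if x >= pivot]
    return naive_quicksort(left) + [pivot] + naive_quicksort(right)

print(naive_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```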