Question
I was trying to work out the complexity of an algorithm via different approaches. Mathematically, I came up with one O(m+n) approach and another O(mn) approach. However, I am unable to grasp or visualize this. It's not like I look at them and get the "Ahh! That's what's going on" feeling! Can someone explain this using their own examples or any other tool?
Answer 1:
My recommendation for finding intuition is thought experiments as follows:
First, realize that m and n are two different measurements of the input. They might be the lengths of two input streams, the lengths of sides of a matrix, or the counts of two different attributes of the same data structure, such as edge and node count of the same graph, or any similar measures.
The intuition is that big-O expresses a bound on the true run time (or some other aspect such as comparison count or space needed) of an algorithm in terms of a simple function - call that R(m, n) - multiplied by some arbitrary constant. We ignore the constant factors and think of all algorithms bounded by the same R as a family by calling their run times O( R(m, n) ).
Consequently, big O(m + n) says that the true run time is bounded by some function R(m,n) = C(m + n) for suitably big m and n. For the graph example, this says that the actual run time of the algorithm will be bounded by a multiple of the sum of the number of vertices and edges.
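For a concrete member of the O(m + n) family, here is a minimal sketch of my own (not part of the original answer): breadth-first search over an adjacency-list graph with n vertices and m edges. Every vertex is enqueued at most once and every edge is scanned a constant number of times, so the run time is bounded by some C(m + n):

#include <cstdio>
#include <queue>
#include <vector>

// Sketch: BFS over an adjacency-list graph. Each vertex is enqueued and
// dequeued at most once (n steps total), and each adjacency entry is
// scanned a constant number of times (m steps total), giving O(n + m).
void bfs(const std::vector<std::vector<int>>& adj, int start) {
    std::vector<bool> visited(adj.size(), false);
    std::queue<int> q;
    visited[start] = true;
    q.push(start);
    while (!q.empty()) {                 // each vertex leaves the queue once
        int u = q.front();
        q.pop();
        std::printf("visiting %d\n", u);
        for (int v : adj[u]) {           // each edge examined O(1) times
            if (!visited[v]) {
                visited[v] = true;
                q.push(v);
            }
        }
    }
}

int main() {
    // 4 vertices; each undirected edge is stored as two directed entries
    std::vector<std::vector<int>> adj = {{1, 2}, {0, 3}, {0, 3}, {1, 2}};
    bfs(adj, 0);
}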
You can think of the bounding function as a graph in 3d with axes m, n, and R(m,n). Or you can think of charts:
R(m,n) = m + n
--------------
m=   1  2  3  4
n=1  2  3  4  5
  2  3  4  5  6
  3  4  5  6  7
  4  5  6  7  8
For R(m,n) = mn, you have
R(m,n) = mn
-----------
m=   1  2  3  4
n=1  1  2  3  4
  2  2  4  6  8
  3  3  6  9 12
  4  4  8 12 16
As a 3d graph, the first function is a plane and the second is a much faster-growing function at almost all points. This means that if m and n grow large enough, an O(mn) bound will ultimately be larger (corresponding to a potentially slower program) than an O(m+n) because the constants become insignificant.
For an example of the cost of rapid growth, suppose an O(m+n) algorithm has an extra constant factor of 3 in its runtime bound (making it potentially very slow on small inputs compared to both algorithms above):
R(m,n) = 3(m + n)
-----------------
m=    1   2   3   4
n=1   6   9  12  15
  2   9  12  15  18
  3  12  15  18  21
  4  15  18  21  24
So in the chart above, the bound on the O(m + n) algorithm looks less constrained than the O(mn) one. But consider the case m = n = 100. There the bound on the O(m + n) algorithm is 3(m + n) = 600, while the O(mn) algorithm with the small constant has bound mn = 10000. Clearly you want the first if m and n are large.
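To see where the crossover happens, here is a tiny sketch of my own (not from the original answer) that prints both bounds for m = n. With these constants the two bounds are equal at m = n = 6, and mn dominates after that:

#include <cstdio>

int main() {
    // Evaluate both bounds side by side for m = n. They cross at
    // m = n = 6; past that point mn outgrows 3(m + n).
    for (long n : {1L, 2L, 5L, 6L, 10L, 100L, 1000L}) {
        long m = n;
        std::printf("m = n = %4ld:  3(m+n) = %7ld   mn = %7ld\n",
                    n, 3 * (m + n), m * n);
    }
}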
@Anonymous raised a fine point on the initial version of this article, which confused big-O and big-Theta. Big-O only deals with bounds or upper limits on the quantity being measured. For example, this means that every O(n) algorithm is also O(n log n) and O(n^2). If the real run time is bounded by the slower-growing function, it is also bounded by all faster-growing ones.
Yet it is quite common for people to say "this algorithm is O(n)" while meaning that the bound is tight: that Cn is an upper bound on the run time for some constant C, and Dn is also a lower bound for some other constant D (and suitably large n). Such a tight bound is properly stated as Theta(n), not O(n). The run time of a Theta(R(m, n)) algorithm is (roughly speaking) proportional to R(m, n).
I'll add finally that there are many cases where you can't ignore constants. There exist lots of algorithms in the literature that are asymptotically "faster" than those in common use, but have constants so large that for practical problem sizes they are always too slow. Computational geometry has many examples. Radix 2 sort is another. It's Theta(n), but in practice a good quicksort (Theta(n log n) average case) will beat it on arrays of size up to at least 10^8 integers.
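To make the constant-factor point concrete, here is a minimal sketch of a binary (radix-2) LSD sort on 32-bit keys, my own illustration rather than anything from the answer: each pass is Theta(n), but there is one stable pass per bit, so the hidden constant is roughly 32 full sweeps of the array.

#include <cstdint>
#include <cstdio>
#include <vector>

// Sketch: LSD radix sort with radix 2. Each of the 32 passes partitions
// the array stably by one bit, which is Theta(n) work, so the total is
// Theta(n) -- but with a large constant (about 32 sweeps plus copying).
void radix2_sort(std::vector<std::uint32_t>& a) {
    std::vector<std::uint32_t> zeros, ones;
    zeros.reserve(a.size());
    ones.reserve(a.size());
    for (int bit = 0; bit < 32; ++bit) {        // one stable pass per bit
        zeros.clear();
        ones.clear();
        for (std::uint32_t x : a) {             // Theta(n) work per pass
            if ((x >> bit) & 1u) ones.push_back(x);
            else                 zeros.push_back(x);
        }
        a.assign(zeros.begin(), zeros.end());   // stable: zeros before ones
        a.insert(a.end(), ones.begin(), ones.end());
    }
}

int main() {
    std::vector<std::uint32_t> a = {170, 45, 75, 90, 2, 802, 24, 66};
    radix2_sort(a);
    for (std::uint32_t x : a) std::printf("%u ", (unsigned)x);
    std::printf("\n");
}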
Answer 2:
O(m+n) example:
for (int i = 0; i < m; i++)
    // code
for (int j = 0; j < n; j++)
    // code
First, m iterations of code happen. Then n iterations of code happen.
O(mn) example:
for (int i = 0; i < m; i++)
    for (int j = 0; j < n; j++)
        // code
For each of the m outer iterations, we have n inner iterations of code. Imagine iterating over a non-square 2D array.
m and n do not necessarily have the same value. If they did, then for O(m+n):
O(m+n) => O(m+m) => O(2m) => O(m)
I'd recommend looking at this question/answer in order to understand that last transition.
And for O(mn):
O(mn) => O(mm) => O(m^2)
Answer 3:
O(m+n) is asymptotically much faster than O(mn); the gap between the two grows without bound rather than by a constant factor.
The O(m+n) algorithm could be one that iterates over two sets and does a constant-time (O(1)) operation on each element.
The O(mn) algorithm could be one that iterates over the first set and does a linear search (O(n)) for the matching element in the second set.
The O(mn) algorithm is probably what professors would call The Naive Approach. A sketch of both approaches follows.
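Here is a minimal sketch of both of those algorithms (my own illustration; the function names are made up): counting how many elements of one vector appear in another, first with the naive nested-loop scan, which is O(mn), then with a hash set, which is O(m + n) on average:

#include <cstdio>
#include <unordered_set>
#include <vector>

// Naive approach: for each of the m elements of `a`, linearly scan the
// n elements of `b` -- O(mn) comparisons in the worst case.
int count_common_naive(const std::vector<int>& a, const std::vector<int>& b) {
    int count = 0;
    for (int x : a)
        for (int y : b)
            if (x == y) { ++count; break; }
    return count;
}

// Hash-based approach: one pass to build a set from `b` (expected O(n)),
// one pass to probe it with `a` (expected O(m)) -- O(m + n) overall.
int count_common_hashed(const std::vector<int>& a, const std::vector<int>& b) {
    std::unordered_set<int> lookup(b.begin(), b.end());
    int count = 0;
    for (int x : a)
        if (lookup.count(x)) ++count;
    return count;
}

int main() {
    std::vector<int> a = {1, 2, 3, 4}, b = {3, 4, 5};
    std::printf("naive: %d, hashed: %d\n",
                count_common_naive(a, b), count_common_hashed(a, b));
}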
Source: https://stackoverflow.com/questions/23896399/difference-between-omn-and-omn