Still sort of confused about Big O notation

Backend · Unresolved · 3 answers · 727 views
感情败类 · 2021-02-09 03:17

So I've been trying to understand Big O notation as well as I can, but there are still some things I'm confused about. So I keep reading that if something is O(n), it usua

3 Answers
  •  闹比i (OP) · 2021-02-09 03:54

    Big-O, Big-Θ, Big-Ω are independent from worst-case, average-case, and best-case.

    The notation f(n) = O(g(n)) means f(n) grows no more quickly than some constant multiple of g(n).
    The notation f(n) = Ω(g(n)) means f(n) grows no more slowly than some constant multiple of g(n).
    The notation f(n) = Θ(g(n)) means both of the above are true.

    Note that f(n) here may represent the best-case, worst-case, or "average"-case running time of a program with input size n.
    Furthermore, "average" can have many meanings: it can mean the average input or the average input size ("expected" time), or it can mean in the long run (amortized time), or both, or something else.

    Often, people are interested in the worst-case running time of a program, amortized over the running time of the entire program (so if something costs n initially but only costs 1 time for the next n elements, it averages out to a cost of 2 per element). The most useful thing to measure here is the least upper bound on the worst-case time; so, typically, when you see someone asking for the Big-O of a program, this is what they're looking for.
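
    A minimal sketch of that amortized bookkeeping (the doubling dynamic array here is a standard example rather than anything from the question, and the costs are illustrative unit counts, not measured times):

        # Sketch: total "unit cost" of n appends to a doubling dynamic array.
        # A normal append costs 1; when the array is full we also pay `size`
        # units to copy the existing elements before doubling the capacity.
        def total_append_cost(n):
            capacity, size, cost = 1, 0, 0
            for _ in range(n):
                if size == capacity:    # full: copy everything, then double
                    cost += size
                    capacity *= 2
                cost += 1               # the append itself
                size += 1
            return cost

        for n in (10, 1_000, 100_000):
            print(n, total_append_cost(n) / n)   # stays below 3 units per element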

    Similarly, to prove that a problem is inherently difficult, people might try to show that its worst-case (or perhaps average-case) running time is at least a certain amount (for example, exponential).
    You'd use Big-Ω notation for such claims, because you're looking for lower bounds.
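
    The textbook example of such a lower bound is comparison sorting; a sketch of the standard decision-tree argument, written in the notation above:

        % Any comparison sort must distinguish all n! input orderings, so its
        % decision tree has at least n! leaves and hence height at least log2(n!).
        \[
          T_{\text{worst}}(n) \;\ge\; \log_2 (n!)
            \;\ge\; \log_2\!\Big( \big(\tfrac{n}{2}\big)^{n/2} \Big)
            \;=\; \tfrac{n}{2} \log_2 \tfrac{n}{2}
            \;=\; \Omega(n \log n).
        \]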

    However, there is no special relationship between worst-case and Big-O, or best-case and Big-Ω.
    Either notation can be used for either case; it's just that one pairing is more typical than the other.

    So, upper-bounding the best case isn't terribly useful. Yes, if the algorithm always takes O(n) time, then you can say it's O(n) in the best case, as well as on average, as well as in the worst case. That's a perfectly fine statement, except that the best case is usually very trivial and hence not interesting in itself.

    Furthermore, note that f(n) = n = O(n²) -- this is technically correct, because f grows more slowly than n², but it is not useful because it is not the least upper bound -- there's a very obvious upper bound that's more useful than this one, namely O(n). So yes, you're perfectly welcome to say the best/worst/average-case running time of a program is O(n!). That's mathematically perfectly correct. It's just useless, because when people ask for Big-O they're interested in the least upper bound, not just a random upper bound.
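
    Spelling that out against the definition above: both of the bounds below are mathematically true, but only the second one is tight (a least upper bound):

        % f(n) = n satisfies the O-definition for both g(n) = n^2 and g(n) = n.
        \[
          n \le 1 \cdot n^2 \ \text{for } n \ge 1 \;\Rightarrow\; n = O(n^2)
          \quad \text{(true, but loose)},
        \]
        \[
          n \le 1 \cdot n \ \text{for } n \ge 1 \;\Rightarrow\; n = O(n)
          \quad \text{(true and tight, since also } n = \Omega(n) \text{)}.
        \]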

    It's also worth noting that it may simply be insufficient to describe the running-time of a program as f(n). The running time often depends on the input itself, not just its size. For example, it may be that even queries are trivially easy to answer, whereas odd queries take a long time to answer.
    In that case, you can't just give f as a function of n -- it would depend on other variables as well. In the end, remember that this is just a set of mathematical tools; it's your job to figure out how to apply it to your program and to figure out what's an interesting thing to measure. Using tools in a useful manner needs some creativity, and math is no exception.
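
    To make the even/odd query example above concrete, here is a toy sketch (the routine and its costs are invented purely for illustration):

        # Sketch: the running time depends on the query value, not only on n.
        def answer_query(table, q):
            if q % 2 == 0:
                return table[0]      # even query: constant time, Θ(1)
            return sum(table)        # odd query: scans all n entries, Θ(n)

        # No single function f(n) describes this routine: the cost is Θ(1) for
        # even q and Θ(n) for odd q, so any honest bound must mention q as well.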
