Big O is for worst-case running time and Ω is for the best case, so why is Ω sometimes used for the worst case?


Question


I'm confused; I thought you use Big O for the worst-case running time and Ω for the best case? Can someone please explain?

And isn't lg n the best case, and n lg n the worst case? Or am I misunderstanding something?

Show that the worst-case running time of Max-Heapify on a heap of size n is Ω(lg n). (Hint: For a heap with n nodes, give node values that cause Max-Heapify to be called recursively at every node on a path from the root down to a leaf.)

Edit: No, this is not homework. I'm practicing, and this has an answer key, but I'm confused: http://www-scf.usc.edu/~csci303/cs303hw4solutions.pdf, Problem 4 (6.2-6).

Edit 2: So did I misunderstand the question? Is it not about Big O vs. Ω?


Answer 1:


It is important to distinguish between the case and the bound.

Best, average, and worst are common cases of interest when analyzing algorithms.

Upper (O, o) and lower (Ω, ω), along with Θ, are common bounds on functions.

When we say "Algorithm X's worst-case time complexity is O(n)", we're saying that the function which represents Algorithm X's performance, when we restrict inputs to worst-case inputs, is asymptotically bounded from above by some linear function. You could speak of a lower bound on the worst-case input; or an upper or lower bound on the average, or best, case behavior.

Case != Bound. That said, "upper on the worst" and "lower on the best" are pretty sensible sorts of metrics... they provide absolute bounds on the performance of an algorithm. It doesn't mean we can't talk about other metrics.

Edit to respond to your updated question:

The question asks you to show that Omega(lg n) is a lower bound on the worst case behavior. In other words, when this algorithm does as much work as it can do for a class of inputs, the amount of work it does grows at least as fast as (lg n), asymptotically. So your steps are the following: (1) identify the worst case for the algorithm; (2) find a lower bound for the runtime of the algorithm on inputs belonging to the worst case.
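As a concrete check of those two steps for Max-Heapify, here is a rough Python sketch. The 0-indexed max_heapify below is a rendering of the usual sink-down procedure with a call counter added; the function and variable names are mine. Following the exercise's hint, the worst-case input takes a valid max-heap and puts a value at the root that is smaller than everything else, so the recursion is forced down an entire root-to-leaf path:

```python
# Sketch: Max-Heapify (0-indexed) with a call counter, plus the hint's
# worst-case input: a too-small value at the root of an otherwise valid
# max-heap, which sinks along a whole root-to-leaf path.
import math

def max_heapify(a, i, heap_size, calls):
    """Sink a[i] until the max-heap property holds in the subtree rooted at i."""
    calls[0] += 1
    left, right = 2 * i + 1, 2 * i + 2
    largest = i
    if left < heap_size and a[left] > a[largest]:
        largest = left
    if right < heap_size and a[right] > a[largest]:
        largest = right
    if largest != i:
        a[i], a[largest] = a[largest], a[i]
        max_heapify(a, largest, heap_size, calls)

n = 15                                  # perfect heap: height = floor(lg n) = 3
heap = [0] + list(range(n - 1, 0, -1))  # [0, 14, 13, ..., 1]: root smaller than everything
calls = [0]
max_heapify(heap, 0, n, calls)
print(calls[0], "calls vs. floor(lg n) + 1 =", math.floor(math.log2(n)) + 1)
```

For n = 15 this prints 4 calls, matching ⌊lg n⌋ + 1: one call per node on a root-to-leaf path, which is where the Ω(lg n) lower bound on the worst case comes from.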

Here's an illustration of the way this would look for linear search:

In the worst case of linear search, the target item is not in the list, and all items in the list must be examined to determine this. Therefore, a lower bound on the worst-case complexity of this algorithm is Ω(n).
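A minimal sketch of that worst case (names are mine, purely for illustration):

```python
# Linear search: when the target is absent, all n items are examined,
# so the worst case takes at least n comparisons -- Omega(n).
def linear_search(items, target):
    for index, value in enumerate(items):
        if value == target:
            return index
    return -1

print(linear_search(list(range(10)), 42))  # absent target: -1 after 10 comparisons
```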

Important to note: for lots of algorithms, the complexity for most cases will be bounded from above and below by a common set of functions. It's very common for the Θ bound to apply. So it might very well be the case that you won't get a different answer for Ω than you do for O, in any event.




Answer 2:


Actually, you use Big O with a function that grows at least as fast as your worst-case complexity (an upper bound), and Ω with a function that grows no faster than it (a lower bound).

So here you are asked to prove that your worst-case complexity grows at least as fast as lg(n).
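Spelled out with the usual definition (T_worst is my notation for the worst-case running time of Max-Heapify, not the question's), that is the claim:

```latex
% What "the worst-case running time is Omega(lg n)" unpacks to.
T_{\text{worst}}(n) = \Omega(\lg n)
  \iff \exists\, c > 0,\ n_0 : T_{\text{worst}}(n) \ge c \lg n \ \text{for all } n \ge n_0
```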




Answer 3:


O is the upper limit (i.e., the worst case); Ω is the lower limit (i.e., the best case).

The example is saying that on the worst input for Max-Heapify (I guess the worst input is a reverse-ordered one), the running time must be at least lg n. Hence the Ω(lg n), since it is the lower limit on the execution complexity.
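For completeness, here is the counting behind that bound (standard facts about binary heaps, not part of the original answer): a heap with n nodes, stored as a nearly complete binary tree, has height ⌊lg n⌋, and the adversarial input from the hint forces one recursive call per node on a root-to-leaf path, each doing at least constant work, so:

```latex
% Height of an n-node heap and the resulting lower bound.
h = \lfloor \lg n \rfloor, \qquad
\text{calls} \ \ge\ h + 1 \ =\ \lfloor \lg n \rfloor + 1 \ \ge\ \lg n
\quad\Longrightarrow\quad T_{\text{worst}}(n) = \Omega(\lg n)
```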



Source: https://stackoverflow.com/questions/15420848/big-o-for-worst-case-running-time-and-%ce%a9-is-for-the-best-case-but-why-is-%ce%a9-used
