Why do we prefer not to specify the constant factor in Big-O notation?

夕颜 2021-01-28 22:27

Let's consider the classic big-O notation definition:

O(f(n)) is the set of all functions g(n) for which there exist positive constants c and n₀ such that 0 ≤ g(n) ≤ c · f(n) for all n ≥ n₀.

Since the definition lets us pick any constant c, g_1(n) = 9999 · n² and g_2(n) = n² define the same set: O(9999 · n²) = O(n²). Why, then, do we prefer to write O(n²) rather than the seemingly more precise O(9999 · n²)?
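To make that premise concrete, here is a worked instance of the definition (a sketch added for illustration, using the constants from this question) showing the two classes coincide:

```latex
% Mutual containment, with explicit witnesses for the constants:
\[
  9999\,n^2 \le c \cdot n^2 \ \mbox{ for all } n \ge n_0
  \quad (c = 9999,\; n_0 = 1)
  \;\Longrightarrow\; 9999\,n^2 \in O(n^2)
\]
\[
  n^2 \le c' \cdot 9999\,n^2 \ \mbox{ for all } n \ge n_0
  \quad (c' = 1,\; n_0 = 1)
  \;\Longrightarrow\; n^2 \in O(9999\,n^2)
\]
```

Together the two memberships give O(9999 · n²) = O(n²); the constant 9999 is simply absorbed into the choice of c.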

1 Answer
  • 2021-01-28 22:47

    The use of O() notation is, from the get-go, the opposite of stating something "precisely". The very idea is to mask "precise" differences between algorithms, and to let us ignore the effects of specific hardware and of the choice of compiler or programming language. Indeed, g_1(n) and g_2(n) are both in the same class (or set) of functions of n: the class O(n^2). They differ in specifics, but they are similar enough.
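    For intuition, here is a minimal sketch in Python (my addition; `witnesses_big_o` is a hypothetical helper, not a standard function) that checks the definition's inequality on a finite range for the two functions above:

```python
def witnesses_big_o(g, f, c, n0, n_max=10_000):
    """Empirically check the big-O inequality g(n) <= c * f(n)
    for every n in [n0, n_max]. Passing a finite check is only
    evidence, not a proof, but for these polynomials the bound
    holds exactly for every n >= 1."""
    return all(g(n) <= c * f(n) for n in range(n0, n_max + 1))

g1 = lambda n: 9999 * n * n   # the large-constant function from the question
g2 = lambda n: n * n

# 9999*n^2 is in O(n^2): take c = 9999, n0 = 1.
print(witnesses_big_o(g1, g2, c=9999, n0=1))   # True

# n^2 is in O(9999*n^2): take c = 1, n0 = 1.
print(witnesses_big_o(g2, g1, c=1, n0=1))      # True
```

    Because each function bounds the other up to a constant, no finite constant written inside the O() adds information; that freedom to choose c is precisely what makes the notation hardware- and language-independent.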

    The fact that it's a class is why I edited your question and corrected the notation from = O(9999 * N^2) to ∈ O(9999 * N^2).
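    Rendered in math notation, the corrected form is set membership; the "=" form is a common abuse of notation:

```latex
\[
  g_1(N) \in O(9999 \cdot N^2)
  \qquad \mbox{rather than} \qquad
  g_1(N) = O(9999 \cdot N^2)
\]
```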

    By the way, I believe your question would have been a better fit on cs.stackexchange.com.
