What is order of complexity in Big O notation?

挽巷 2021-01-01 07:41

Question

Hi, I am trying to understand what order of complexity in terms of Big O notation is. I have read many articles and have yet to find anything…

6 Answers
  • 2021-01-01 08:11

    Be careful here, there are some subtleties. You stated "we are measuring the time and space complexity of an algorithm in terms of the growth of input size n," and that's how people often treat it, but it's not actually correct.

    Rather, with O(g(n)) we are determining that g(n), scaled suitably, is an upper bound for the time and space complexity of an algorithm for all input of size n bigger than some particular n'. Similarly, with Omega(h(n)) we are determining that h(n), scaled suitably, is a lower bound for the time and space complexity of an algorithm for all input of size n bigger than some particular n'.

    Finally, if both the lower and upper bound are the same complexity g(n), the complexity is Theta(g(n)). In other words, Theta represents the degree of complexity of the algorithm, while big-O and big-Omega bound it from above and below.
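    A small worked example of these bounds (the function and constants here are my own illustration, not from the answer): take f(n) = 3n² + 5n.

        3n² + 5n ≤ 4n²  for all n ≥ 5,  so f(n) = O(n²)       (C = 4, n' = 5)
        3n² + 5n ≥ 3n²  for all n ≥ 1,  so f(n) = Omega(n²)   (C = 3, n' = 1)

    Since the upper and lower bounds agree on n², f(n) = Theta(n²).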

  • 2021-01-01 08:21

    Big-O analysis is a form of runtime analysis that measures the efficiency of an algorithm in terms of the time it takes to run as a function of the input size. It's not a formal benchmark, just a simple way to classify algorithms by relative efficiency when dealing with very large input sizes.

    Update: The fastest possible running time for any runtime analysis is O(1), commonly referred to as constant running time. An algorithm with constant running time always takes the same amount of time to execute, regardless of the input size. This is the ideal run time for an algorithm, but it's rarely achievable. The performance of most algorithms depends on n, the size of the input. The algorithms can be classified as follows, from best to worst performance:

    O(log n) — An algorithm is said to be logarithmic if its running time increases logarithmically in proportion to the input size.

    O(n) — A linear algorithm’s running time increases in direct proportion to the input size.

    O(n log n) — A superlinear algorithm is midway between a linear algorithm and a polynomial algorithm.

    O(n^c) — A polynomial algorithm grows quickly based on the size of the input.

    O(c^n) — An exponential algorithm grows even faster than a polynomial algorithm.

    O(n!) — A factorial algorithm grows the fastest and becomes quickly unusable for even small values of n.

    The run times of different orders of algorithms separate rapidly as n gets larger. Consider the run time for each of these algorithm classes with

       n = 10:
       log 10    = 1
       10        = 10
       10 log 10 = 10
       10^2      = 100
       2^10      = 1,024
       10!       = 3,628,800

       Now double it to n = 20:
       log 20    = 1.30
       20        = 20
       20 log 20 = 26.02
       20^2      = 400
       2^20      = 1,048,576
       20!       = 2.43×10^18
    

    Finding an algorithm that works in superlinear time or better can make a huge difference in how well an application performs.
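    A quick way to reproduce the figures above is a short script like this (my own sketch, not part of the original answer; it uses base-10 logarithms to match the numbers quoted):

        # Print the growth of each complexity class for n = 10 and n = 20.
        # Illustrative sketch; log is base 10 to match the figures above.
        import math

        for n in (10, 20):
            print(f"n = {n}:")
            print(f"  log n   = {math.log10(n):.2f}")
            print(f"  n       = {n}")
            print(f"  n log n = {n * math.log10(n):.2f}")
            print(f"  n^2     = {n ** 2}")
            print(f"  2^n     = {2 ** n:,}")
            print(f"  n!      = {math.factorial(n):.3e}")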

  • 2021-01-01 08:24

    Big O is about finding an upper limit for the growth of some function. See the formal definition on Wikipedia http://en.wikipedia.org/wiki/Big_O_notation

    So if you've got an algorithm that sorts an array of size n, requires only a constant amount of extra space, and takes (for example) 2n² + n steps to complete, then you would say its space complexity is O(n) or O(1) (depending on whether you count the size of the input array or not) and its time complexity is O(n²).

    Knowing only those O numbers, you could roughly determine how much more space and time is needed to go from n to n + 100 or 2n or whatever you are interested in. That is how well an algorithm "scales".
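    As a concrete sketch of such an algorithm (my own illustration; the answer does not name a specific sorting algorithm), here is an in-place selection sort. It uses O(1) extra space and performs roughly n²/2 comparisons, so its time complexity is O(n²):

        # In-place selection sort: O(n^2) comparisons, O(1) extra space
        # (not counting the input list itself).
        def selection_sort(a):
            n = len(a)
            for i in range(n - 1):
                smallest = i
                for j in range(i + 1, n):
                    if a[j] < a[smallest]:
                        smallest = j
                a[i], a[smallest] = a[smallest], a[i]
            return a

        print(selection_sort([5, 2, 4, 1, 3]))  # [1, 2, 3, 4, 5]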

    Update

    Big O and complexity are really just two terms for the same thing. You can say "linear complexity" instead of O(n), "quadratic complexity" instead of O(n²), and so on.

  • 2021-01-01 08:26

    We say f(n) is in O(g(n)) if and only if there exist constants C > 0 and n0 such that f(n) ≤ C·g(n) for all n greater than n0.

    Now that's a rather mathematical approach, so I'll give some examples. The simplest case is O(1). This means "constant": no matter how large the input (n) of a program, it will take the same time to finish. An example of a constant-time program is one that takes a list of integers and returns the first one. No matter how long the list is, you can just take the first element and return it right away.

    The next is linear, O(n). This means that if the input size of your program doubles, so will your execution time. An example of a linear program is the sum of a list of integers: you'll have to look at each integer once, so if the input is a list of size n, you'll have to look at n integers.
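    Both examples in code (a minimal sketch; the function names are mine, not from the answer):

        # O(1): returns immediately, regardless of how long xs is.
        def first(xs):
            return xs[0]

        # O(n): looks at each of the n integers exactly once.
        def total(xs):
            s = 0
            for x in xs:
                s += x
            return s

        print(first([7, 8, 9]))  # 7
        print(total([7, 8, 9]))  # 24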

    An intuitive definition could define the order of your program as the relation between the input size and the execution time.

  • 2021-01-01 08:26

    I see that you are commenting on several answers wanting to know the specific term of order as it relates to Big-O.

    Suppose f(n) = O(n^2); then we say that the order is n^2.

  • 2021-01-01 08:27

    Others have explained big O notation well here. I would like to point out that sometimes too much emphasis is given to big O notation.

    Consider matrix multiplication: the naïve algorithm is O(n^3). Using the Strassen algorithm it can be done in O(n^2.807), and there are now even algorithms that achieve O(n^2.3727).

    One might be tempted to choose the algorithm with the lowest big O, but it turns out that for all practical purposes the naïve O(n^3) method wins out. This is because the constant factor on the dominating term is much larger for the other methods.

    Therefore just looking at the dominating term in the complexity can be misleading. Sometimes one has to consider all terms.
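    For reference, the naïve method is just three nested loops (a sketch, assuming square n×n matrices given as lists of lists):

        # Naive O(n^3) matrix multiplication: n^2 output cells,
        # each computed with an n-term dot product.
        def matmul(A, B):
            n = len(A)
            C = [[0] * n for _ in range(n)]
            for i in range(n):
                for j in range(n):
                    for k in range(n):
                        C[i][j] += A[i][k] * B[k][j]
            return C

        print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
        # [[19, 22], [43, 50]]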
