Big O, what is the complexity of summing a series of n numbers?

Backend · unresolved · 10 answers · 718 views
伪装坚强ぢ · 2020-12-14 01:16

I always thought the complexity of:

1 + 2 + 3 + ... + n is O(n), and summing two n by n matrices would be O(n^2).

But today I read from a textbook…

10 answers
  • 2020-12-14 02:05

    There's a difference between summing N arbitrary integers and summing N that are all in a row. For 1+2+3+4+...+N, you can take advantage of the fact that they can be divided into pairs with a common sum, e.g. 1+N = 2+(N-1) = 3+(N-2) = ... = N + 1. So that's N+1, N/2 times. (If there's an odd number, one of them will be unpaired, but with a little effort you can see that the same formula holds in that case.)

    That is not O(N^2), though. It's just a formula whose *value* involves N²; evaluating it is actually O(1). O(N^2) would mean (roughly) that the number of steps to calculate it grows like N² for large N. In this case, the number of steps is the same regardless of N.
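To make the distinction concrete, here is a minimal Python sketch (not from the question itself) contrasting the two approaches: the loop performs n additions, while the pairing formula does a fixed amount of arithmetic no matter how large n is.

```python
def sum_by_loop(n):
    """Add 1..n one at a time: n iterations, so O(n)."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

def sum_by_formula(n):
    """Gauss pairing: (n+1) taken n/2 times, so O(1) arithmetic."""
    return n * (n + 1) // 2

# Both compute the same value; only the step count differs.
assert sum_by_loop(100) == sum_by_formula(100) == 5050
```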

  • 2020-12-14 02:07

    So my guess is that this is actually a reference to Cracking the Coding Interview, which has this paragraph on a StringBuffer implementation:

    On each concatenation, a new copy of the string is created, and the two strings are copied over, character by character. The first iteration requires us to copy x characters. The second iteration requires copying 2x characters. The third iteration requires 3x, and so on. The total time therefore is O(x + 2x + ... + nx). This reduces to O(xn²). (Why isn't it O(xnⁿ)? Because 1 + 2 + ... n equals n(n+1)/2 or, O(n²).)

    For whatever reason I found this a little confusing on my first read-through, too. The important bit to see is that n is multiplying n; in other words, an n² is happening, and that term dominates. This is why O(xn²) is ultimately just O(n²) -- the x is sort of a red herring.
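The copying cost the book describes can be sketched in a few lines of Python (a hypothetical cost model, not the book's code): each concatenation copies everything built so far, so the total characters copied is x + 2x + ... + nx = x · n(n+1)/2, which is O(x·n²).

```python
def naive_join(words):
    """Concatenate with + repeatedly, counting characters copied."""
    result = ""
    copied = 0                          # cost model: chars moved per step
    for w in words:
        copied += len(result) + len(w)  # a new buffer copies both operands
        result = result + w
    return result, copied

# 10 words of length x = 3: total copies = 3 * 10 * 11 / 2 = 165
text, cost = naive_join(["abc"] * 10)
assert cost == 3 * 10 * 11 // 2
```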

  • 2020-12-14 02:09

    You have a formula that doesn't depend on the number of numbers being added, so it's a constant-time algorithm, or O(1).

    If you add each number one at a time, then it's indeed O(n). The formula is a shortcut; it's a different, more efficient algorithm. The shortcut works when the numbers being added are all 1..n. If you have a non-contiguous sequence of numbers, then the shortcut formula doesn't work and you'll have to go back to the one-by-one algorithm.

    None of this applies to the matrix of numbers, though. To add two matrices, it's still O(n^2) because you're adding n^2 distinct pairs of numbers to get a matrix of n^2 results.
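A minimal sketch of that last point: adding two n-by-n matrices must touch every one of the n² entry pairs once, so the work is Θ(n²) in the matrix dimension, and no closed-form shortcut exists.

```python
def add_matrices(a, b):
    """Entrywise sum of two n-by-n matrices: n^2 additions total."""
    n = len(a)
    return [[a[i][j] + b[i][j] for j in range(n)]
            for i in range(n)]

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
assert add_matrices(a, b) == [[6, 8], [10, 12]]
```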

  • 2020-12-14 02:10

    1+2+3+...+n is always less than n+n+n+...+n (n times), and you can rewrite n+n+...+n as n·n = n².

    f(n) = O(g(n)) if there exists a positive integer n0 and a positive constant c, such that f(n) ≤ c * g(n) ∀ n ≥ n0

    Since Big-O represents an upper bound, and here f(n) is the sum of the natural numbers up to n, the bound f(n) = n(n+1)/2 ≤ n² for all n ≥ 1 makes it valid (if loose) to say f(n) = O(n²).

    Now, talking about time complexity: for small numbers, each addition is a constant amount of work. But n could be humongous, and you can't rule that out.

    Adding integers can take a linear amount of time when n is really large. So you can say that each addition is an O(n) operation, and you're adding n items, so that alone would make it O(n²). Of course it will not always take n² time, but that's the worst case when n is really large (upper bound, remember?).


    Now, let's say you compute it directly using n(n+1)/2. Just one multiplication and one division -- surely a constant-time operation? No.

    using a natural size metric of number of digits, the time complexity of multiplying two n-digit numbers using long multiplication is Θ(n^2). When implemented in software, long multiplication algorithms must deal with overflow during additions, which can be expensive. Wikipedia

    That again leaves us at O(n²).
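The point above can be illustrated in Python (a sketch, relying on Python's built-in arbitrary-precision integers): the formula stays exact for an n far beyond any machine word, but the single multiplication's cost then depends on how many digits n has, so "constant time" only holds for fixed-width integers.

```python
# A 1001-digit n, far beyond 64-bit range; Python bignums keep it exact.
n = 10 ** 1000
total = n * (n + 1) // 2

# Sanity check the closed form without summing a googol-sized range:
assert 2 * total == n * (n + 1)
```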
