Squaring n-bit int vs. multiplying two n-bit ints

南旧 2021-01-05 05:25

Disclaimer: Homework question. I'm looking for a hint…

Professor F. Lake tells his class that it is asymptotically faster to square an n-bit integer than to multiply two n-bit integers.

6 Answers
  • 2021-01-05 05:51

    Since you wanted only a hint, the answer comes from this identity: (a + b)^2 = a^2 + b^2 + 2*a*b

    So as not to spoil the puzzle, I've posted the complete solution separately :)

  • 2021-01-05 05:59

    Suppose squaring really were asymptotically faster. Then, given a * b to compute, you could set:

    a = m + n
    b = m - n
    

    Solving this system of equations gives:

    m = (a+b)/2
    n = (a-b)/2
    

    But then we have

    a * b = (m+n)*(m-n) = m² - n²
    

    or without intermediate variables:

    a * b = ((a+b)² - (a-b)²)/4
    

    So you can replace any multiplication with two squarings (plus some additions and a division by 4, which is just a bit shift; these can all be ignored for asymptotic complexity). The complexity of multiplication is therefore at most twice the complexity of squaring. Of course, "twice" is a constant factor, which means both have the same asymptotic complexity.
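    The reduction above can be sketched in a few lines of Python. Here `square` is just a stand-in for whatever fast squaring routine is assumed to exist; the names are mine, not from the original answer:

    ```python
    def square(x):
        # Stand-in for the hypothetical fast squaring routine.
        return x * x

    def multiply_via_squares(a, b):
        # a * b = ((a + b)^2 - (a - b)^2) / 4
        # Two squarings, one subtraction, and a division by 4 (a bit shift).
        return (square(a + b) - square(a - b)) >> 2
    ```

    Note that (a+b)^2 - (a-b)^2 is always exactly 4*a*b, so the shift loses nothing.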

  • 2021-01-05 06:06

    Here's a hint.

    And here's my solution in SECRET CODE: Fdhnevat zrnaf lbh bayl unir gb qb bar vavgvny SG, abg gjb, fb vg'f snfgre.

  • 2021-01-05 06:06

    This is the only improvement that I can see in squaring an n-bit number over multiplying two n-bit numbers together. It may not be asymptotically better in the usual computer-science sense of, say, O(n^2) vs. O(n). However, if we take "asymptotically" literally to mean the complexity that is approached (including the multiplicative constants), then this fits that definition. Anyway, it's all that I can see to do, so take it or leave it.

    Let's say that we have two N-bit numbers, x and y. We can multiply them together (x*y) by the shift-and-add method in A*N^2 + O(N) operations, where A is a constant. The second term, the O(N) term, can be disregarded for large enough N, so the number of operations is essentially A*N^2.
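    For concreteness, here is a minimal Python sketch of shift-and-add multiplication; up to N additions of numbers up to 2N bits long is what gives the A*N^2 bit-operation count:

    ```python
    def shift_and_add(x, y):
        # Grade-school binary multiplication: for each set bit of y,
        # add a shifted copy of x into the result.
        result = 0
        while y:
            if y & 1:
                result += x
            x <<= 1
            y >>= 1
        return result
    ```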

    Now we calculate x^2. If we define a to have only the upper N/2 bits of x set in it and b to have only the lower N/2 bits of x set in it, then

    x = a + b
    
    x^2 = (a + b)^2 = a^2 + b^2 + 2*a*b
    

    However, remember that we can multiply two N-bit numbers in A*N^2 operations. To multiply a*a we only have to do A*(N/2)^2 = A*N^2/4 operations. The same goes for b*b and a*b. If we ignore the O(N) operations, then x^2 = (a + b)^2 is calculated in

    A*N^2/4 + A*N^2/4 + A*N^2/4 = (3/4)*A*N^2
    

    operations, which is of course better than the standard A*N^2 for multiplying two arbitrary N-bit numbers, by a saving of A*N^2/4. We can improve further by repeating the same operation on a^2 and b^2, although at some point it stops being beneficial to keep splitting. This is not an enormous improvement, but it's all that I can find. You can decide for yourselves whether it counts.
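    One level of the split above can be sketched in Python. This is my illustration, not the answerer's code: `h` and `l` are the upper and lower halves of x, plain `*` stands in for the three half-size multiplications h*h, l*l, and h*l, and n is assumed even:

    ```python
    def square_by_splitting(x, n):
        # Write x = a + b, with a holding the upper n/2 bits of x
        # (in place) and b the lower n/2 bits, so that
        # x^2 = a^2 + 2*a*b + b^2.
        half = n // 2
        h = x >> half                # upper half, shifted down
        l = x & ((1 << half) - 1)    # lower half
        # Since a = h << half: a^2 = (h*h) << n and 2*a*b = (h*l) << (half+1).
        return ((h * h) << n) + ((h * l) << (half + 1)) + l * l
    ```

    The shifts cost only O(N), so the dominant work really is the three half-size products counted above.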

  • 2021-01-05 06:09

    My thought is that to multiply two n-bit integers your algorithm needs to cater for any two n-bit integers. That's (2^n)^2 possible inputs.

    A squaring algorithm only needs to handle 2^n possible inputs, although it can be modelled as a multiply algorithm with two inputs the same.

    My guess is that there would be some way to optimise the generic multiply algorithm when you know that both inputs will be the same, but I'd have to think about it. That's the line I'd be investigating, anyway...

  • 2021-01-05 06:11

    Consider the steps the computer needs to take in order to accomplish these tasks. Remember that computers work drastically differently from people.
