Equilibrium index of an array of large numbers, how to prevent overflow?

独厮守ぢ 2021-01-29 13:23

Problem statement:
An equilibrium index of an array is an index into the array such that the sum of elements at lower indices is equal to the sum of elements at higher indices. When the element values are large, how can the intermediate sums be prevented from overflowing?

2 Answers
  • 2021-01-29 13:39

    Yes, it is possible. Notice that if data[0] < data[len-1], then data[1] must belong to the "left" part; similarly, if data[0] > data[len-1], then data[len-2] must belong to the "right" part (assuming non-negative elements). This observation allows an inductive proof of correctness of the following algorithm:

    left_weight = 0; right_weight = 0
    left_index = 0; right_index = len

    while left_index < right_index
        if left_weight < right_weight
            left_weight += data[left_index++]
        else
            right_weight += data[--right_index]
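
    A direct C++ rendering of this pseudocode might look like the sketch below. The function name, the use of unsigned long long, and the final balance check are my additions, and I assume the usual convention that the element at the equilibrium index itself is excluded from both sums. Note that left_weight and right_weight still accumulate full sums, so they can overflow for large inputs:

    #include <cstddef>
    #include <vector>

    // Two-pointer search for an equilibrium index (overflow-prone version).
    // Returns the meeting-point index if it is an equilibrium index, otherwise -1.
    long long equilibrium_index_naive(const std::vector<unsigned long long>& data) {
        unsigned long long left_weight = 0, right_weight = 0;
        std::size_t left_index = 0, right_index = data.size();

        while (left_index < right_index) {
            if (left_weight < right_weight)
                left_weight += data[left_index++];   // may overflow for large inputs
            else
                right_weight += data[--right_index]; // may overflow for large inputs
        }
        if (left_index >= data.size()) return -1;    // empty array

        // At the meeting point i, left_weight = sum(data[0..i-1]) and
        // right_weight = sum(data[i..n-1]), i.e. data[i] is still counted on the right.
        return (right_weight - data[left_index] == left_weight)
                   ? static_cast<long long>(left_index) : -1;
    }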
    

    There is still an accumulation, but it is easy to deal with: keep track of only the imbalance (the absolute difference between the two partial sums) together with a boolean indicator of which side is currently heavier:

    heavier_side = left; imbalance = 0
    left_index = 0; right_index = len

    while left_index < right_index
        if heavier_side == right
            weight = data[left_index++]
        else
            weight = data[--right_index]

        if weight < imbalance
            imbalance = imbalance - weight
        else
            heavier_side = !heavier_side
            imbalance = weight - imbalance
    

    The imbalance never exceeds the value of a single element (each update either shrinks it or replaces it with weight - imbalance, which is at most weight), so at least for unsigned data there is no possibility of overflow. Some tinkering might be required for signed values.
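
    A C++ sketch of this imbalance-based version is below. It is my own rendering, not the poster's code: I break the tie weight == imbalance explicitly so that the flag always means "the right side is strictly heavier", which makes the traversal order match the sum-based loop above exactly, and I add the same final balance check as before:

    #include <cstddef>
    #include <vector>

    // Overflow-free two-pointer search: track only |left_sum - right_sum|
    // and which side is heavier. The imbalance never exceeds the largest
    // single element, so unsigned arithmetic cannot overflow.
    long long equilibrium_index(const std::vector<unsigned long long>& data) {
        std::size_t left_index = 0, right_index = data.size();
        unsigned long long imbalance = 0;   // |left_sum - right_sum|
        bool right_is_heavier = false;      // true iff right_sum > left_sum

        while (left_index < right_index) {
            if (right_is_heavier) {
                // Right side is strictly heavier: move the next element into the left sum.
                unsigned long long w = data[left_index++];
                right_is_heavier = (w < imbalance);
                imbalance = right_is_heavier ? imbalance - w : w - imbalance;
            } else {
                // Left side is heavier or the sides are tied: move an element into the right sum.
                unsigned long long w = data[--right_index];
                right_is_heavier = (w > imbalance);
                imbalance = right_is_heavier ? w - imbalance : imbalance - w;
            }
        }
        if (left_index >= data.size()) return -1;   // empty array

        // As before, data[left_index] is still counted on the right side, so the
        // meeting point balances exactly when right_sum - left_sum == data[left_index].
        unsigned long long mid = data[left_index];
        bool balanced = (imbalance == mid) && (right_is_heavier || mid == 0);
        return balanced ? static_cast<long long>(left_index) : -1;
    }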

  • The short answer is that ultimately it can't be completely solved unless you limit the number and magnitude of the inputs, not even with something like Java's BigInteger (or equivalents for C++ such as GMP, NTL, etc.).

    The problem is pretty simple: the memory in any computer is finite, so there will always be some finite limit on the numbers we can represent. An arbitrary-precision integer type can raise that limit far beyond the numbers most of us work with on a regular basis, but whatever the limit is, there will always be dramatically larger numbers that can't be represented (at least without changing to some other representation; and if we want precision down to the units place for arbitrary numbers, there are distinct limits on how clever we can get in representing gargantuan numbers).

    For the conditions given in the linked problem, the long long type in C and C++ is adequate. If we want to raise the limit to some ridiculous size, that's also pretty simple in C++: although they're not a required part of a C++ implementation, there are many arbitrary-precision integer libraries available.
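
    For illustration, here is a minimal sketch of the usual prefix-sum approach using long long. The function name and the example bounds are my assumptions, not taken from the problem statement; the point is only that the running totals stay within 64 bits under typical constraints:

    #include <cstddef>
    #include <vector>

    // Classic prefix-sum approach: adequate as long as the total of all
    // elements fits in a 64-bit integer (e.g. |A[i]| up to ~10^9 and
    // n up to ~10^6 keeps the total around 10^15).
    long long equilibrium_index(const std::vector<int>& a) {
        long long total = 0;
        for (int x : a) total += x;

        long long left = 0;                         // sum of elements before index i
        for (std::size_t i = 0; i < a.size(); ++i) {
            long long right = total - left - a[i];  // sum of elements after index i
            if (left == right) return static_cast<long long>(i);
            left += a[i];
        }
        return -1;                                  // no equilibrium index
    }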

    I suppose there could be some way to compute an answer to this problem that doesn't involve actually summing the numbers--but at least at first glance, this idea doesn't seem very promising to me. The statement of the problem is specifically about computing sums. While you could certainly carry out various machinations to keep the summing from looking like summing, the fact is that the basic statement of the problem involves sums, which tends to suggest that solutions that don't involve sums may well be difficult to find.
