Number of 1s in the two's complement binary representations of integers in a range

感动是毒 2020-12-05 01:00

This problem is from the 2011 Codesprint (http://csfall11.interviewstreet.com/):

One of the basics of Computer Science is knowing how numbers are represented in 2's complement. Given two integers a and b (which may be negative), count the total number of 1 bits in the 32-bit two's complement representations of all integers between a and b inclusive.

4 Answers
  • 2020-12-05 01:34

    When a is positive, a good explanation has already been posted (see the answer that walks through solve() below).

    If a is negative, note that on a 32-bit system each of the numbers from a up to -1 is written with 32 bits, and the number of 1 bits in a negative number x is 32 minus the number of 1 bits in -x - 1 (its bitwise complement). Summing over the whole range therefore gives 32 times the count of negative numbers, minus the number of 1 bits in the integers from 0 to -a - 1.

    So, putting both cases together:

    long long solve(int a) {
        // Total 1 bits in the 32-bit two's complement representations of all
        // integers between 0 and a inclusive (a may be negative).
        // noOfSetBits(a) is a popcount helper, e.g. __builtin_popcount.
        if (a >= 0) {
            if (a == 0) return 0;
            else if (a % 2 == 0) return solve(a - 1) + noOfSetBits(a);
            else return 2 * solve(a / 2) + ((long long)a + 1) / 2;
        } else {
            // Negative case: the numbers a..-1 use 32*(-a) bits in total;
            // the zeros among them equal the 1 bits of 0..(-a)-1.
            a++;
            return ((long long)(-a) + 1) * 32 - solve(-a);
        }
    }
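
    A quick check of the negative branch (my own numbers, with noOfSetBits taken to be an ordinary popcount such as __builtin_popcount):

    solve(-2): a becomes -1, so the result is (1 + 1) * 32 - solve(1) = 64 - 1 = 63,
    which matches counting directly: -2 has 31 one bits, -1 has 32 and 0 has none.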
    
  • 2020-12-05 01:35

    Treat the range as a series of integers. Then for each integer do:

    // SWAR ("parallel bit count") popcount of a 32-bit value.
    int NumberOfSetBits(int i)
    {
        i = i - ((i >> 1) & 0x55555555);                 // count bits in each 2-bit pair
        i = (i & 0x33333333) + ((i >> 2) & 0x33333333);  // sum pairs into 4-bit fields
        return (((i + (i >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24;  // sum bytes; top byte holds the total
    }
    

    Also, this is portable, unlike __builtin_popcount.

    See here: How to count the number of set bits in a 32-bit integer?
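
    For this question that just means summing NumberOfSetBits over the range. A minimal sketch (the name CountBitsInRange and the bounds lo/hi are mine; this is O(hi - lo), fine for small ranges but too slow for very wide ones, where the O(log N) answers here apply):

    // Sum of set bits over every integer in [lo, hi], using NumberOfSetBits above.
    // For negative values this relies on the usual two's complement int representation.
    long long CountBitsInRange(int lo, int hi)
    {
        long long total = 0;
        for (long long v = lo; v <= hi; ++v)   // long long counter avoids overflow at INT_MAX
            total += NumberOfSetBits((int)v);
        return total;
    }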

  • 2020-12-05 01:49

    Well, it's not that complicated...

    The single-argument solve(int a) function is the key. It is short, so I will cut&paste it here:

    long long solve(int a)
    {
        if (a == 0) return 0;
        if (a % 2 == 0) return solve(a - 1) + __builtin_popcount(a);
        return ((long long)a + 1) / 2 + 2 * solve(a / 2);
    }
    

    It only works for non-negative a, and it counts the number of 1 bits in all integers from 0 to a inclusive.

    The function has three cases:

    a == 0 -> returns 0. Obviously.

    a even -> returns the number of 1 bits in a plus solve(a-1). Also pretty obvious.

    The final case is the interesting one. So, how do we count the number of 1 bits from 0 to an odd number a?

    Consider all of the integers between 0 and a, and split them into two groups: The evens, and the odds. For example, if a is 5, you have two groups (in binary):

    000  (aka. 0)
    010  (aka. 2)
    100  (aka. 4)
    

    and

    001  (aka 1)
    011  (aka 3)
    101  (aka 5)
    

    Observe that these two groups must have the same size (because a is odd and the range is inclusive). To count how many 1 bits there are in each group, first count all but the last bits, then count the last bits.

    All but the last bits looks like this:

    00
    01
    10
    

    ...and it looks like this for both groups. The number of 1 bits here is just solve(a/2). (In this example, it is the number of 1 bits from 0 to 2. Also, recall that integer division in C/C++ rounds down.)

    The last bit is zero for every number in the first group and one for every number in the second group, so those last bits contribute (a+1)/2 one bits to the total.

    So the third case of the recursion is (a+1)/2 + 2*solve(a/2), with appropriate casts to long long to handle the case where a is INT_MAX (and thus a+1 overflows).
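
    For example, tracing the recursion by hand (my own worked check):

    solve(5) = (5+1)/2 + 2*solve(2)                  -- odd case
             = 3 + 2*(solve(1) + popcount(2))        -- even case
             = 3 + 2*((1+1)/2 + 2*solve(0) + 1)      -- odd case again; popcount(2) = 1
             = 3 + 2*(1 + 0 + 1)
             = 7

    and indeed the numbers 0..5 contain 0+1+1+2+1+2 = 7 one bits.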

    This is an O(log N) solution. To generalize it to solve(a,b), you compute solve(b) - solve(a-1) (so that a itself is included), plus the appropriate logic for handling negative numbers. That is what the two-argument solve(int a, int b) does; a sketch of one way to write it follows.
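
    The contest's two-argument solve is not quoted here, so the following is only a sketch built from the one-argument solve() above plus the identity ~n == -n - 1 for the negative part (the names solveRange, upTo and negPart are mine, not the contest's):

    #include <algorithm>

    long long solve(int a);   // the one-argument function quoted above

    // Count the 1 bits in the 32-bit two's complement representations of every
    // integer in [a, b], assuming a <= b.
    long long solveRange(int a, int b)
    {
        // 1 bits in 0..x (0 if x < 0).
        auto upTo = [](long long x) -> long long {
            return x < 0 ? 0 : solve((int)x);
        };
        // 1 bits in x..-1 for negative x: 32 bits per number, minus the zeros,
        // which equal the 1 bits of 0..(-x)-1 because ~n == -n - 1.
        auto negPart = [&](long long x) -> long long {
            if (x >= 0) return 0;
            long long m = -x;                // how many negatives lie in [x, -1]
            return 32 * m - upTo(m - 1);
        };
        long long total = 0;
        if (b >= 0) total += upTo(b) - upTo((long long)a - 1);
        if (a < 0)  total += negPart(a) - negPart(std::min((long long)b + 1, 0LL));
        return total;
    }

    As a sanity check, solveRange(INT_MIN, INT_MAX) comes out to 32 * 2^31, which is right because each of the 32 bit positions is set in exactly half of all 2^32 values.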

  • 2020-12-05 01:51

    In the following code, bitsum(x) is defined as the total count of 1 bits in the two's complement representations of all the numbers between 0 and x (inclusive), where Integer.MIN_VALUE <= x <= Integer.MAX_VALUE.

    For example:

    bitsum(0) is 0   
    bitsum(1) is 1   
    bitsum(2) is 2
    bitsum(3) is 4
    

    ..etc

    10987654321098765432109876543210 i % 10 for 0 <= i <= 31
    00000000000000000000000000000000 0
    00000000000000000000000000000001 1
    00000000000000000000000000000010 2
    00000000000000000000000000000011 3
    00000000000000000000000000000100 4
    00000000000000000000000000000101 ...
    00000000000000000000000000000110
    00000000000000000000000000000111 (2^i)-1
    00000000000000000000000000001000  2^i
    00000000000000000000000000001001 (2^i)+1 
    00000000000000000000000000001010 ...
    00000000000000000000000000001011 x, 011 = x & (2^i)-1 = 3
    00000000000000000000000000001100
    00000000000000000000000000001101
    00000000000000000000000000001110
    00000000000000000000000000001111
    00000000000000000000000000010000
    00000000000000000000000000010001
    00000000000000000000000000010010 18
    ...
    01111111111111111111111111111111 Integer.MAX_VALUE
    

    The formula of the bitsum is:

    bitsum(x) = bitsum((2^i)-1) + 1 + x - 2^i + bitsum(x & (2^i)-1 )
    

    Note that x - 2^i = x & ((2^i)-1) when 2^i is the highest power of two not exceeding x, as in the listing above.
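
    Using the row marked x (x = 11, so i = 3, 2^i = 8 and x & (2^i)-1 = 3) as a worked example (my own arithmetic):

    bitsum(11) = bitsum(7) + 1 + (11 - 8) + bitsum(3)
               = 12 + 1 + 3 + 4
               = 20

    which agrees with adding up the popcounts of 0..11 directly.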

    Negative numbers are handled slightly differently than positive numbers. In this case the number of zeros is subtracted from the total number of bits:

    Integer.MIN_VALUE <= x < -1
    Total number of bits: 32 * -x.
    

    The number of zeros in a negative number x is equal to the number of ones in -x - 1.
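
    For example (my own check): bitsum(-3) = 32*3 - bitsum(2) = 96 - 2 = 94, which matches the direct count 31 + 31 + 32 + 0 for -3, -2, -1 and 0.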

    public class TwosComplement {
        //t[i] is the bitsum of (2^i)-1 for i in 0 to 31.
        private static long[] t = new long[32];
        static {
            t[0] = 0;
            t[1] = 1;
            int p = 2;
            for (int i = 2; i < 32; i++) {
                t[i] = 2*t[i-1] + p;
                p = p << 1;
            }
        }
    
        //count the bits between x and y inclusive
        public static long bitsum(int x, int y) {
            if (y > x && x > 0) {
                return bitsum(y) - bitsum(x-1);
            }
            else if (y >= 0 && x == 0) {
                return bitsum(y);
            }
            else if (y == x) {
                return Integer.bitCount(y);
            }
            else if (x < 0 && y == 0) {
                return bitsum(x);
            } else if (x < 0 && x < y && y < 0 ) {
                return bitsum(x) - bitsum(y+1);
            } else if (x < 0 && x < y && 0 < y) {
                return bitsum(x) + bitsum(y);
            }
            throw new RuntimeException(x + " " + y);
        }
    
        //count the bits between 0 and x
        public static long bitsum(int x) {
            if (x == 0) return 0;
            if (x < 0) {
                if (x == -1) {
                    return 32;
                } else {
                    long y = -(long)x;
                    return 32 * y - bitsum((int)(y - 1));
                }
            } else {
                int n = x;
                int sum = 0;     //x & (2^i)-1
                int j = 0;
                int i = 1;       //i = 2^j
                int lsb = n & 1; //least significant bit
                n = n >>> 1;
                while (n != 0) {
                    sum += lsb * i;
                    lsb = n & 1;
                    n = n >>> 1;
                    i = i << 1;
                    j++;
                }
                long tot = t[j] + 1 + sum + bitsum(sum);
                return tot;
            }
        }
    }
    