Question:
In my application I'm trying to display the bit representation of double variables. It works for smaller double values, but not at the 10^30 level.
Code:
#include <iostream>
#include <bitset>
#include <limits>
#include <string.h>

using namespace std;

void Display(double doubleValue)
{
    bitset<sizeof(double) * 8> b(doubleValue);
    cout << "Value : " << doubleValue << endl;
    cout << "BitSet : " << b.to_string() << endl;
}

int main()
{
    Display(1000000000.0);
    Display(2000000000.0);
    Display(3000000000.0);
    Display(1000000000000000000000000000000.0);
    Display(2000000000000000000000000000000.0);
    Display(3000000000000000000000000000000.0);
    return 0;
}
Output:
/home/sujith% ./a.out
Value : 1e+09
BitSet : 0000000000000000000000000000000000111011100110101100101000000000
Value : 2e+09
BitSet : 0000000000000000000000000000000001110111001101011001010000000000
Value : 3e+09
BitSet : 0000000000000000000000000000000010110010110100000101111000000000
Value : 1e+30
BitSet : 0000000000000000000000000000000000000000000000000000000000000000
Value : 2e+30
BitSet : 0000000000000000000000000000000000000000000000000000000000000000
Value : 3e+30
BitSet : 0000000000000000000000000000000000000000000000000000000000000000
My worry is why bitset always gives 64 zeros for the latter three values. Interestingly, cout prints the actual values as expected.
Answer 1:
If you look at the std::bitset constructors you will see that they take either a string or an integer as argument. That means your double value is converted to an integer, and no standard integer type can hold values as large as 1e30; converting a double to an integer type that cannot represent the value is undefined behavior.
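To connect this to the output above: for the first three calls the conversion succeeds, so the bitset ends up holding the integer value itself, not the IEEE-754 encoding of the double. A minimal sketch illustrating the difference:

#include <bitset>
#include <iostream>

int main()
{
    // 3000000000.0 converts exactly to the integer 3000000000, so the
    // bitset stores that integer value -- not the double's bit pattern.
    std::bitset<64> b(3000000000.0);    // implicit double -> unsigned long long
    std::cout << b.to_ullong() << '\n'; // prints 3000000000

    // 1e30 is outside the range of unsigned long long, so the same
    // conversion is undefined behavior (in the question it produced 0).
}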
If you want to get the actual bits of the double you need some casting tricks to make it work:

unsigned long long bits = *reinterpret_cast<unsigned long long*>(&doubleValue);

Note that type-punning like this is not defined by the C++ standard, but as long as sizeof(double) == sizeof(unsigned long long) it will work in practice. If you want the behavior to be well-defined you have to copy the bytes through arrays of char and char*.
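A minimal sketch of that well-defined route, using std::memcpy (which copies the object representation byte by byte, exactly as a manual char-array copy would), assuming double and unsigned long long have the same size:

#include <bitset>
#include <cstring>
#include <iostream>

void Display(double doubleValue)
{
    unsigned long long bits = 0;
    static_assert(sizeof bits == sizeof doubleValue, "size mismatch");
    std::memcpy(&bits, &doubleValue, sizeof doubleValue); // well-defined byte copy
    std::bitset<sizeof(double) * 8> b(bits);
    std::cout << "Value : " << doubleValue << std::endl;
    std::cout << "BitSet : " << b.to_string() << std::endl;
}

Mainstream compilers typically optimize a fixed-size memcpy like this down to a single register move, so there is no runtime cost over the reinterpret_cast version.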
Answer 2:
Since C++11, std::bitset has a constructor taking unsigned long long, so this might work:
union udouble {
    double d;
    unsigned long long u;
};

void Display(double doubleValue)
{
    udouble ud;
    ud.d = doubleValue;                 // store the double...
    bitset<sizeof(double) * 8> b(ud.u); // ...and read its bits back as an integer
    cout << "Value : " << doubleValue << endl;
    cout << "BitSet : " << b.to_string() << endl;
}
This should give you the internal representation of a double. (Strictly speaking, reading the inactive member of a union is also undefined behavior in C++, though major compilers support it as an extension.) See the working sample code on IdeOne.
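If C++20 is available, std::bit_cast from the <bit> header expresses the same conversion with fully defined behavior and no union; a minimal sketch:

#include <bit>     // std::bit_cast (C++20)
#include <bitset>
#include <cstdint>
#include <iostream>

void Display(double doubleValue)
{
    // bit_cast copies the object representation, like memcpy, but as an expression
    std::bitset<64> b(std::bit_cast<std::uint64_t>(doubleValue));
    std::cout << "Value : " << doubleValue << std::endl;
    std::cout << "BitSet : " << b.to_string() << std::endl;
}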
Source: https://stackoverflow.com/questions/40737116/using-stdbitset-for-double-representation