The C++ standard does not specify the underlying layout of the float and double types, only the range of values they must represent. (The same is true for signed integer types: is it two's complement or something else?)
My question is: what are the techniques used to serialize/deserialize POD types such as double and float in a portable manner? At the moment it seems the only way to do this is to represent the value literally (as in "123.456"), since the IEEE 754 layout for double is not standard on all architectures.
Brian "Beej Jorgensen" Hall gives, in his Guide to Network Programming, some code to pack a float (resp. double) into a uint32_t (resp. uint64_t), to be able to safely transmit it over the network between two machines that may not agree on the representation. It has some limitations, mainly that it does not support NaN and infinity.
Here is his packing function:
#define pack754_32(f) (pack754((f), 32, 8))
#define pack754_64(f) (pack754((f), 64, 11))
uint64_t pack754(long double f, unsigned bits, unsigned expbits)
{
    long double fnorm;
    int shift;
    long long sign, exp, significand;
    unsigned significandbits = bits - expbits - 1; // -1 for sign bit

    if (f == 0.0) return 0; // get this special case out of the way

    // check sign and begin normalization
    if (f < 0) { sign = 1; fnorm = -f; }
    else { sign = 0; fnorm = f; }

    // get the normalized form of f and track the exponent
    shift = 0;
    while (fnorm >= 2.0) { fnorm /= 2.0; shift++; }
    while (fnorm < 1.0) { fnorm *= 2.0; shift--; }
    fnorm = fnorm - 1.0;

    // calculate the binary form (non-float) of the significand data
    significand = fnorm * ((1LL<<significandbits) + 0.5f);

    // get the biased exponent
    exp = shift + ((1<<(expbits-1)) - 1); // shift + bias

    // return the final answer
    return (sign<<(bits-1)) | (exp<<(bits-expbits-1)) | significand;
}
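The guide also provides a matching unpack function. A sketch along the same lines (same limitations: no NaN or infinity support), which reverses each step of the packing above:

```cpp
#include <cstdint>

#define unpack754_32(i) (unpack754((i), 32, 8))
#define unpack754_64(i) (unpack754((i), 64, 11))

long double unpack754(uint64_t i, unsigned bits, unsigned expbits)
{
    long double result;
    long long shift;
    unsigned bias;
    unsigned significandbits = bits - expbits - 1; // -1 for sign bit

    if (i == 0) return 0.0; // the special case again

    // pull the significand: mask it off, scale back to [0,1), re-add the implicit 1
    result = (long double)(i & ((1LL << significandbits) - 1));
    result /= (1LL << significandbits);
    result += 1.0;

    // recover the unbiased exponent and apply it by repeated doubling/halving
    bias = (1 << (expbits - 1)) - 1;
    shift = ((i >> significandbits) & ((1LL << expbits) - 1)) - bias;
    while (shift > 0) { result *= 2.0; shift--; }
    while (shift < 0) { result /= 2.0; shift++; }

    // apply the sign bit
    result *= ((i >> (bits - 1)) & 1) ? -1.0 : 1.0;

    return result;
}
```

For example, `unpack754_32(0x40490FDB)` (the IEEE 754 binary32 pattern for pi) yields approximately 3.1415927.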
What's wrong with a human-readable format?

It has a couple of advantages over binary:

- It's readable
- It's portable
- It makes support really easy (you can ask the user to look at it in their favorite editor, even Word)
- It's easy to fix (files can be adjusted manually in error situations)

Disadvantages:

- It's not compact. If this is a real problem you can always zip it.
- It may be slightly slower to extract/generate.

Note that a binary format probably needs to be normalized as well (see htonl()).
To output a double at full precision:

double v = 2.20;
std::cout << std::setprecision(std::numeric_limits<double>::max_digits10) << v;

Note: don't use std::numeric_limits<double>::digits here - that is the number of binary digits in the mantissa. max_digits10 (C++11) is the number of decimal digits needed to distinguish every distinct double value, so it is what guarantees an exact round trip.
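A minimal round-trip check (a sketch; the helper name is illustrative) demonstrates that text serialization at max_digits10 precision recovers the original value exactly:

```cpp
#include <iomanip>
#include <limits>
#include <sstream>

// Serialize a double to text and parse it back. With max_digits10
// significant digits, the decimal string identifies the double uniquely,
// so a correctly-rounded parse returns the identical value.
double text_round_trip(double v)
{
    std::ostringstream out;
    out << std::setprecision(std::numeric_limits<double>::max_digits10) << v;
    std::istringstream in(out.str());
    double back = 0.0;
    in >> back;
    return back;
}
```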
Take a look at the (old) gtypes.h file implementation in glib 2 - it includes the following:
#if G_BYTE_ORDER == G_LITTLE_ENDIAN
union _GFloatIEEE754
{
gfloat v_float;
struct {
guint mantissa : 23;
guint biased_exponent : 8;
guint sign : 1;
} mpn;
};
union _GDoubleIEEE754
{
gdouble v_double;
struct {
guint mantissa_low : 32;
guint mantissa_high : 20;
guint biased_exponent : 11;
guint sign : 1;
} mpn;
};
#elif G_BYTE_ORDER == G_BIG_ENDIAN
union _GFloatIEEE754
{
gfloat v_float;
struct {
guint sign : 1;
guint biased_exponent : 8;
guint mantissa : 23;
} mpn;
};
union _GDoubleIEEE754
{
gdouble v_double;
struct {
guint sign : 1;
guint biased_exponent : 11;
guint mantissa_high : 20;
guint mantissa_low : 32;
} mpn;
};
#else /* !G_LITTLE_ENDIAN && !G_BIG_ENDIAN */
#error unknown ENDIAN type
#endif /* !G_LITTLE_ENDIAN && !G_BIG_ENDIAN */
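A plain-C++ equivalent of the little-endian float variant might look like the sketch below. Two caveats: bit-field layout is implementation-defined (which is exactly why glib guards it with the endianness check), and reading a non-active union member is technically undefined behavior in C++, though gcc and clang support it as an extension.

```cpp
#include <cstdint>

// Little-endian layout only: on typical gcc/clang targets, bit-fields are
// allocated from the least significant bit, matching the G_LITTLE_ENDIAN
// branch above (mantissa in bits 0-22, exponent in 23-30, sign in bit 31).
union FloatIEEE754 {
    float v_float;
    struct {
        std::uint32_t mantissa        : 23;
        std::uint32_t biased_exponent : 8;
        std::uint32_t sign            : 1;
    } mpn;
};
```

For example, storing -2.0f and reading the fields back gives sign 1, biased exponent 128 (exponent 1 plus bias 127), and mantissa 0.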
Just write the binary IEEE 754 representation to disk, and document this as your storage format (along with its endianness). Then it's up to the implementation to convert this to its internal representation if necessary.
Create an appropriate serializer/de-serializer interface for writing/reading this.
The interface can then have several implementations and you can test your options.
As said before, obvious options would be:

- IEEE 754: writes/reads the binary chunk directly if the architecture supports the format, or parses it if not
- Text: always needs to parse
- Whatever else you can think of
Just remember - once you have this layer, you can always start with IEEE754 if you only support platforms that use this format internally. This way you'll have the additional effort only when you need to support a different platform! Don't do work you don't have to.
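A minimal sketch of such a layer (all names here are illustrative, not from any library), with a text implementation that could later be swapped for a raw IEEE 754 one:

```cpp
#include <iomanip>
#include <limits>
#include <sstream>
#include <string>

// Illustrative interface: each implementation defines one wire format.
struct DoubleSerializer {
    virtual ~DoubleSerializer() = default;
    virtual std::string serialize(double v) const = 0;
    virtual double deserialize(const std::string& s) const = 0;
};

// Text implementation: always parses, works on any platform.
struct TextDoubleSerializer : DoubleSerializer {
    std::string serialize(double v) const override {
        std::ostringstream out;
        out << std::setprecision(std::numeric_limits<double>::max_digits10) << v;
        return out.str();
    }
    double deserialize(const std::string& s) const override {
        std::istringstream in(s);
        double v = 0.0;
        in >> v;
        return v;
    }
};
```

Callers only see the DoubleSerializer interface, so switching formats later touches no serialization call sites.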
You should convert them to a format you will always be able to use in order to recreate your floats/doubles.
This could use a string representation or, if you need something that takes less space, represent your number in IEEE 754 (or any other format you choose) and then parse it as you would a string.
I think the answer "depends" on what your particular application and its performance profile are.

Let's say you have a low-latency market-data environment; then using strings is frankly daft. If the information you are conveying is prices, then doubles (and binary representations of them) really are tricky to work with. Whereas if you don't really care about performance and what you want is visibility (storage, transmission), then strings are an ideal candidate.

I would actually opt for an integral mantissa/exponent representation of floats/doubles - i.e. at the earliest opportunity, convert the float/double to a pair of integers and then transmit that. You then only have to worry about the portability of integers, and various routines (such as the hton() family) will handle the conversions for you. Also, store everything in your most prevalent platform's endianness (for example, if you're only using Linux, what's the point of storing stuff in big endian?).
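One way to sketch that mantissa/exponent split with standard calls is std::frexp/std::ldexp (the function names below are illustrative). A double's significand fits in 53 bits, so the integer pair round-trips finite values exactly; NaN and infinity are not handled:

```cpp
#include <cmath>
#include <cstdint>
#include <utility>

// Split a finite double into (integer mantissa, exponent). Both halves are
// plain integers, so only integer byte order has to be handled on the wire.
std::pair<std::int64_t, std::int32_t> to_int_pair(double v)
{
    int exp = 0;
    double m = std::frexp(v, &exp);        // v == m * 2^exp, with 0.5 <= |m| < 1
    // m is s / 2^53 for an integer s, so scaling by 2^53 is exact.
    std::int64_t mantissa = static_cast<std::int64_t>(std::ldexp(m, 53));
    return { mantissa, static_cast<std::int32_t>(exp - 53) };
}

double from_int_pair(std::int64_t mantissa, std::int32_t exp)
{
    return std::ldexp(static_cast<double>(mantissa), exp);
}
```

The sign travels inside the mantissa, and zero maps to the pair (0, -53), so no special cases are needed for finite values.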
SQLite4 uses a new format to store doubles and floats:
- It works reliably and consistently even on platforms that lack support for IEEE 754 binary64 floating point numbers.
- Currency computations can normally be done exactly and without rounding.
- Any signed or unsigned 64-bit integer can be represented exactly.
- The floating point range and accuracy exceed that of IEEE 754 binary64 floating point numbers.
- Positive and negative infinity and NaN (Not-a-Number) have well-defined representations.
Found this old thread. One solution which covers a fair number of cases is missing: using fixed point, i.e. passing integers with a known scaling factor, using built-in casts at either end. Thus, you don't have to bother with the underlying floating-point representation at all.
There are of course drawbacks. This solution assumes you can pick a fixed scaling factor and still get both the range and the resolution needed for the particular application. Furthermore, you convert from floating point to fixed point at the serialization end and back at deserialization, introducing two rounding errors. However, over the years I have found that fixed point is enough for my needs in almost all cases, and it is reasonably fast too.
A typical case for fixed point would be communication protocols for embedded systems or other devices.
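A sketch of that fixed-point approach, with an illustrative scaling factor of 1000 (i.e. three decimal places of resolution; the factor is an application choice):

```cpp
#include <cmath>
#include <cstdint>

// Application-chosen scaling factor: 1/1000 resolution.
constexpr std::int32_t kScale = 1000;

// Encode: round to the nearest multiple of 1/kScale. The resulting int32_t
// can be transmitted with the usual integer byte-order handling (htonl() etc.).
std::int32_t to_fixed(double v)
{
    return static_cast<std::int32_t>(std::lround(v * kScale));
}

double from_fixed(std::int32_t raw)
{
    return static_cast<double>(raw) / kScale;
}
```

Note the range limit this implies: with a factor of 1000, an int32_t covers roughly ±2.1 million, which is the trade-off between range and resolution mentioned above.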
Source: https://stackoverflow.com/questions/4733147/portability-of-binary-serialization-of-double-float-type-in-c