I need to represent an IEEE 754-1985 double (64-bit) floating point number in a human-readable textual form, with the condition that the textual form can be parsed back into exactly the same number.
Best option: Use the C99 hexadecimal floating point format:
printf("%a", someDouble);
Strings produced this way can be converted back into double with the C99 strtod() function, and also with the scanf() functions. Several other languages also support this format. Some examples:
decimal number    %a format    meaning
---------------------------------------
2.0               0x1.0p1      1.0 * 2^1
0.75              0x1.8p-1     1.5 * 2^-1
The hexadecimal format has the advantage that all representations are exact. Thus, converting the string back into floating-point will always give the original number, even if someone changes the rounding mode in which the conversion is performed. This is not true for inexact formats.
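For illustration, a minimal C99 sketch of that round trip (the value and buffer size here are just examples):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double original = 0.1;          /* not exactly representable in binary */
    char buffer[64];

    /* Format as hexadecimal floating point -- the representation is exact. */
    snprintf(buffer, sizeof buffer, "%a", original);
    printf("formatted: %s\n", buffer);

    /* Parse it back; strtod accepts the %a form in C99. */
    double restored = strtod(buffer, NULL);
    printf("round trip %s\n", restored == original ? "exact" : "lost precision");
    return 0;
}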
If you don't want to use the hexadecimal format for whatever reason, and are willing to assume that the rounding mode will always be round to nearest (the default), then you can get away with formatting your data as decimals with at least 17 significant digits. If you have a correctly rounded conversion routine (most -- not all -- platforms do), this will guarantee that you can do a round trip from double to string and back without any loss of accuracy.
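A corresponding sketch of the decimal approach, assuming the platform's printf and strtod are correctly rounded:

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    double original = 1.0 / 3.0;
    char buffer[32];

    /* 17 significant digits are enough for any IEEE 754 double. */
    snprintf(buffer, sizeof buffer, "%.17g", original);

    double restored = strtod(buffer, NULL);
    printf("%s -> %s\n", buffer, restored == original ? "exact" : "inexact");
    return 0;
}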
The .NET framework has a round-trip format for this:
string formatted = myDouble.ToString("r");
From the documentation:
The round-trip specifier guarantees that a numeric value converted to a string will be parsed back into the same numeric value. When a numeric value is formatted using this specifier, it is first tested using the general format, with 15 spaces of precision for a Double and 7 spaces of precision for a Single. If the value is successfully parsed back to the same numeric value, it is formatted using the general format specifier. However, if the value is not successfully parsed back to the same numeric value, then the value is formatted using 17 digits of precision for a Double and 9 digits of precision for a Single.
This method could of course be recreated in most any language.
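For example, a rough C sketch of the same fallback idea (print with 15 digits, and only use 17 when the short form does not survive the round trip); the helper name format_roundtrip is mine, not part of any library:

#include <stdio.h>
#include <stdlib.h>

/* Use %.15g if it parses back to the same double, otherwise fall back to %.17g. */
static void format_roundtrip(double value, char *out, size_t size)
{
    snprintf(out, size, "%.15g", value);
    if (strtod(out, NULL) != value)
        snprintf(out, size, "%.17g", value);
}

int main(void)
{
    char buffer[32];
    format_roundtrip(0.1, buffer, sizeof buffer);
    printf("%s\n", buffer);      /* prints 0.1 rather than 0.10000000000000001 */
    return 0;
}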
Yes, it can be done, though the implementation depends on the language. The basic idea is simply to print it out with sufficient precision.
Note that the reverse is not true though: some numbers that can be represented precisely in decimal simply cannot be represented in binary.
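A quick way to see this is to print a simple decimal constant with more digits than usual; 0.1, for instance, has no exact binary representation:

#include <stdio.h>

int main(void)
{
    /* 0.1 cannot be stored exactly as a binary double. */
    printf("%.20f\n", 0.1);   /* prints 0.10000000000000000555 or similar */
    return 0;
}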
Sounds like you want Burger's algorithm (PDF):
In free-format mode the algorithm generates the shortest correctly rounded output string that converts to the same number when read back in regardless of how the reader breaks ties when rounding.
Sample source code (in C and Scheme) is available as well.
This is the algorithm used in Python 3.x to ensure floats can be converted to strings and back without any loss of accuracy. In Python 2.x, floats were always represented with 17 significant digits because:

repr(float) produces 17 significant digits because it turns out that’s enough (on most machines) so that eval(repr(x)) == x exactly for all finite floats x, but rounding to 16 digits is not enough to make that true. (Source: http://docs.python.org/tutorial/floatingpoint.html)
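In the same spirit, here is a small C test that samples random bit patterns and counts round-trip failures at 16 versus 17 significant digits; the sampling scheme is purely illustrative and assumes a correctly rounded printf/strtod:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <math.h>
#include <stdint.h>

/* Does value survive a round trip through %.*g with the given precision? */
static int round_trips(double value, int digits)
{
    char buffer[32];
    snprintf(buffer, sizeof buffer, "%.*g", digits, value);
    return strtod(buffer, NULL) == value;
}

int main(void)
{
    int fail16 = 0, fail17 = 0, samples = 0;
    srand(12345);

    while (samples < 100000) {
        /* Assemble a random 64-bit pattern and reinterpret it as a double
           (assumes doubles are 8 bytes, as on IEEE 754 platforms). */
        uint64_t bits = 0;
        for (int i = 0; i < 8; i++)
            bits = (bits << 8) | (unsigned)(rand() & 0xFF);

        double value;
        memcpy(&value, &bits, sizeof value);
        if (!isfinite(value))
            continue;               /* skip NaN and infinity */

        samples++;
        fail16 += !round_trips(value, 16);
        fail17 += !round_trips(value, 17);
    }

    printf("16 digits: %d failures out of %d\n", fail16, samples);
    printf("17 digits: %d failures out of %d\n", fail17, samples);
    return 0;
}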