I have some C code performing high-precision arithmetic, compiled by gcc (gcc (GCC) 4.4.4 20100726 (Red Hat 4.4.4-13)). The final result of the calculation is a double with a value of 622.07999995861189. When that result is stored in a float, the value I see for it in GDB does not match what the program itself produces.
Floating-point values are usually stored in the machine in a binary format, not a decimal one. For this reason, when you truncate or round the number, it is unreasonable to expect that it will be truncated or rounded in terms of its decimal digits, i.e. it is unreasonable to expect that 622.07999995861189 will turn into something that necessarily begins with 622.07999... Some leading decimal digits do "survive" unchanged, but since the truncation/rounding is performed on the binary representation (and what you see on the screen is the result of converting that binary representation into an ASCII string), the number of decimal digits affected by the process can be much greater than one might expect. In your case the change propagated all the way to the second digit after the decimal point, which is why you got 622.08... instead of 622.07...
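To make this concrete, here is a minimal C sketch of the conversion (assuming IEEE 754 double and float and the default round-to-nearest mode; the exact digits printed may vary by platform):

    #include <stdio.h>

    int main(void)
    {
        double d = 622.07999995861189;   /* the double from the question */
        float  f = (float)d;             /* nearest representable float to d */

        printf("double: %.17g\n", d);    /* prints 622.07999995861189 (or very close) */
        printf("float : %.9g\n",  f);    /* typically 622.080017 on IEEE 754 systems */
        return 0;
    }

The float keeps only about 24 bits of significand, and the nearest representable float happens to lie slightly above 622.08, so the change is already visible in the second digit after the decimal point.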
As for the different result from GDB... It is quite possible that GDB's floating-point computation model is different from that of the compiler. For example, GDB might be truncating the result, while the compiler is rounding it. Or the compiler might be optimizing the computations, which often leads to a slightly different result.
A float has much less precision than a double; you lose about half the digits. So at best you'd be seeing the 622.0799 portion (rounded up to 622.0800). The difference you see is probably caused by the rounding mode in use.
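If you want to see the effect of the rounding mode on the double-to-float conversion, here is a rough sketch using <fenv.h> (assuming an IEEE 754 target; the volatile qualifiers keep the conversions from being constant-folded at compile time, and some compilers additionally need options such as -frounding-math for this to behave as expected):

    #include <fenv.h>
    #include <stdio.h>

    int main(void)
    {
        volatile double d = 622.07999995861189;

        fesetround(FE_TONEAREST);          /* default IEEE 754 rounding mode */
        volatile float nearest = (float)d;

        fesetround(FE_DOWNWARD);           /* round toward minus infinity */
        volatile float down = (float)d;

        fesetround(FE_TONEAREST);          /* restore the default mode */

        printf("to nearest: %.9g\n", (double)nearest);  /* typically 622.080017 */
        printf("downward  : %.9g\n", (double)down);     /* typically 622.079956 */
        return 0;
    }

With round-to-nearest the float ends up just above 622.08; with downward rounding it stays just below, which matches the 622.0799/622.0800 split described above.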
Here are the actual numbers:
The internal representations are values generated using Java's Float.floatToIntBits. You can also use Float.intBitsToFloat to get back a floating-point number.
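Since the question is about C, here is a sketch of the same bit inspection in C, using memcpy to reinterpret the bytes, which is the portable analogue of Java's Float.floatToIntBits / Float.intBitsToFloat (the exact hex value shown assumes an IEEE 754 float):

    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    /* Analogue of Java's Float.floatToIntBits: copy the float's bytes
     * into a 32-bit integer so the raw IEEE 754 encoding can be printed. */
    static uint32_t float_to_bits(float f)
    {
        uint32_t bits;
        memcpy(&bits, &f, sizeof bits);
        return bits;
    }

    /* Analogue of Float.intBitsToFloat: reinterpret the bit pattern as a float. */
    static float bits_to_float(uint32_t bits)
    {
        float f;
        memcpy(&f, &bits, sizeof f);
        return f;
    }

    int main(void)
    {
        float f = 622.08f;
        uint32_t bits = float_to_bits(f);

        /* typically prints: 622.080017 -> 0x441b851f -> 622.080017 */
        printf("%.9g -> 0x%08x -> %.9g\n", f, bits, bits_to_float(bits));
        return 0;
    }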