Why does GDB evaluate floating-point arithmetic differently from C++?

情深已故 · 2021-02-13 01:46

I've encountered something a little confusing while trying to deal with a floating-point arithmetic problem.

First, the code. I've distilled the essence of my problem:

3 Answers
  • 2021-02-13 02:08

    Could be because the x86 FPU works in its registers to 80-bit accuracy, but rounds to 64 bits whenever a value is stored to memory. GDB will be storing to memory on every step of its (interpreted) computation, whereas the compiled code can keep intermediates in registers. (A small sketch of the effect follows below.)

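    A minimal sketch of that effect, under the assumption that the compiler actually emits x87 instructions (for example g++ -m32 -mfpmath=387 -O0; with SSE math, the default on x86-64, every operation is already rounded to 64 bits and both paths agree). The inputs are hypothetical, not taken from the question:

        #include <cstdio>

        int main() {
            double z   = 0.1;   // hypothetical inputs
            double tau = 0.3;

            // Whole expression in one statement: with x87 code generation the
            // subtraction and multiplication can stay in 80-bit registers, and
            // the result is rounded to 64 bits only when it is finally stored.
            double in_registers = z * (1 - tau);

            // volatile forces the intermediate through a 64-bit memory slot,
            // mimicking an evaluator that spills every step to memory.
            volatile double t = 1 - tau;
            double via_memory = z * t;

            // The two may or may not differ in the last bit, depending on the
            // inputs and on the code the compiler actually generates.
            std::printf("in registers: %.17g\n", in_registers);
            std::printf("via memory  : %.17g\n", via_memory);
            return 0;
        }
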
  • 2021-02-13 02:25

    GDB's runtime expression evaluator is not guaranteed to execute the same effective machine code for your floating-point operations as the optimized, possibly reordered machine code your compiler generated to compute the same symbolic expression. Indeed, for the given expression z.d * (1 - tau.d) it is effectively guaranteed not to: GDB treats it as an isolated sub-expression of your program and evaluates it at runtime in some arbitrary, "symbolically correct" way of its own.

    Floating-point code generation, and the CPU's execution of it, is particularly prone to symbolic inconsistency with other implementations (such as a runtime expression evaluator) because of optimization (substitution, reordering, common-subexpression elimination, etc.), instruction selection, register allocation, and the floating-point environment. When a snippet contains many automatic variables and temporary expressions (as yours does), code generation has an especially large amount of freedom even with zero optimization passes, and with that freedom comes the chance, as in this case, of losing precision in the least-significant bit in a way that looks inconsistent.

    You won't get much insight into why GDB's runtime evaluator executed whatever instructions it did without deep insight into the GDB source code, its build settings, and its own compile-time generated code.

    You could peek at the generated assembly for your procedure to get an idea of how the final stores into z, tau, and (in contrast) xiden work. The data flow of the floating-point operations leading to those stores is probably not what it seems.

    Much easier: try making the code generation more deterministic by disabling all compiler optimization (e.g., -O0 on GCC) and rewriting the floating-point expressions to use no temporaries / automatic variables. Then break on every line in GDB and compare. (A sketch of such a step-by-step rewrite follows this answer.)

    I wish I could tell you exactly why that least-significant bit of the mantissa is flipped, but the truth is that, without a complete instruction and data trace of both your code and GDB itself, not even the processor "knows" why one computation carried a bit (due to, e.g., order of evaluation) and another did not.

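    To make a flipped low-order mantissa bit concrete, here is a small self-contained sketch (not from the answer above; z and tau are hypothetical stand-ins for the question's z.d and tau.d) that prints a double's raw 64-bit pattern next to its decimal value:

        #include <cstdint>
        #include <cstdio>
        #include <cstring>

        // Print a double together with its raw bit pattern so a one-ulp
        // difference shows up as a change in the last hex digit.
        static void dump_bits(const char* label, double d) {
            std::uint64_t bits;
            std::memcpy(&bits, &d, sizeof bits);  // well-defined bit inspection
            std::printf("%-9s %.17g  0x%016llx\n",
                        label, d, static_cast<unsigned long long>(bits));
        }

        int main() {
            double z   = 0.1;                  // hypothetical inputs
            double tau = 0.3;

            double one_shot = z * (1 - tau);   // whole expression in one statement
            double t        = 1 - tau;         // every intermediate stored in a double
            double stepwise = z * t;

            dump_bits("one_shot", one_shot);
            dump_bits("stepwise", stepwise);   // may or may not match bit-for-bit
            return 0;
        }

    Inside GDB you can then compare print z * (1 - tau) (evaluated by GDB's own interpreter) against print one_shot (whatever the compiled code actually stored); together with this dump, a single flipped mantissa bit is easy to spot.
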
  • 2021-02-13 02:25

    It's not GDB vs. the processor, it's memory vs. the processor. The x87 FPU on x86/x64 processors holds more bits of accuracy than a double in memory does (80 bits vs. 64). As long as a value stays in the CPU's registers it keeps that extended precision; when it gets sent out to memory determines when, and therefore how, it gets rounded. If GDB spills every intermediate result of its calculation out of the CPU (I have no idea whether that is the case, or anywhere close), it will round at every step, which leads to slightly different results. (The sketch below illustrates the difference.)

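    A rough sketch of the "rounded once at the end vs. rounded at every step" idea, assuming a platform where long double is the x87 80-bit extended format (true for GCC on x86 Linux, but not universally); the inputs are again hypothetical:

        #include <cstdio>

        int main() {
            double z   = 0.1;   // hypothetical inputs
            double tau = 0.3;

            // With SSE code generation (the default on x86-64), each double
            // operation rounds its result to 64 bits, as if every intermediate
            // were written back to memory.
            double step_rounded = z * (1 - tau);

            // Intermediates kept in 80-bit extended precision and rounded to
            // 64 bits only once at the end, as if the whole chain of operations
            // had stayed in the FPU registers.
            long double wide = static_cast<long double>(z)
                               * (1.0L - static_cast<long double>(tau));
            double end_rounded = static_cast<double>(wide);

            // For some inputs the two differ in the least-significant bit.
            std::printf("rounded at each step: %.17g\n", step_rounded);
            std::printf("rounded once at end : %.17g\n", end_rounded);
            return 0;
        }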