Why does GDB evaluate floating-point arithmetic differently from C++?

Asked 2021-02-13 01:46 by 情深已故

I've encountered something a little confusing while trying to deal with a floating-point arithmetic problem.

First, the code. I've distilled the essence of my problem

3 Answers
  •  别跟我提以往
    2021-02-13 02:25

    GDB's runtime expression evaluator is certainly not guaranteed to execute the same effective machine code for your floating-point operations as the optimized and reordered machine code your compiler generated to compute the same symbolic expression. In fact, it is effectively guaranteed not to: when GDB evaluates z.d * (1 - tau.d), it treats the expression as an isolated fragment of your program and computes it at runtime in its own arbitrary, "symbolically correct" way, independent of the instructions the compiler emitted.

    Floating-point code generation, and the CPU's realization of its output, is particularly prone to symbolic inconsistency with other implementations (such as a runtime expression evaluator) because of optimization (substitution, reordering, common-subexpression elimination, etc.), the choice of instructions, the choice of register allocation, and the floating-point environment. If your snippet involves many automatic variables and temporary expressions (as yours does), code generation has an especially large amount of freedom even with zero optimization passes, and with that freedom comes the chance of, in this case, losing precision in the least-significant bit in a way that looks inconsistent.
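
    As a concrete illustration (a minimal sketch only; the values and names below are hypothetical, since your actual snippet isn't shown), the program evaluates the same product twice: once with every intermediate rounded to double, and once with the intermediate held at extended precision, which is the kind of liberty a code generator or a separate expression evaluator may take. On some platforms and inputs the two bit patterns differ in exactly the last mantissa bit:

        #include <cstdint>
        #include <cstdio>
        #include <cstring>

        // Return the raw bit pattern of a double so results can be compared exactly.
        static std::uint64_t bits(double d) {
            std::uint64_t u;
            std::memcpy(&u, &d, sizeof u);
            return u;
        }

        int main() {
            double z   = 0.9357735867519143;   // illustrative values only
            double tau = 0.0063728054140113;

            // Every intermediate rounded to double precision.
            double strict = z * (1.0 - tau);

            // Intermediate kept at extended precision, rounded once at the end.
            double extended = static_cast<double>(
                static_cast<long double>(z) * (1.0L - static_cast<long double>(tau)));

            std::printf("strict   : %.17g (0x%016llx)\n", strict,
                        (unsigned long long) bits(strict));
            std::printf("extended : %.17g (0x%016llx)\n", extended,
                        (unsigned long long) bits(extended));
            std::printf("bit patterns %s\n",
                        bits(strict) == bits(extended) ? "agree" : "differ");
        }

    Whether the last line reports "differ" depends on your target and the inputs, which is exactly the point: the result is sensitive to how the intermediate rounding is performed.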

    You won't get much insight into why GDB's runtime evaluator executed whatever instructions it did without deep insight into the GDB source code, its build settings, and its own compile-time generated code.

    You could peek at the generated assembly for your procedure to get an idea of how the final stores into z, tau, and [in contrast] xiden work; see the commands sketched below. The data flow of the floating-point operations leading to those stores is probably not what it seems.
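
    For example (the file name distilled.cpp and the function name compute are placeholders for your own), something along these lines shows what the compiler actually emitted and what GDB will step over:

        g++ -O0 -g -S distilled.cpp -o distilled.s   # the compiler's assembly for the translation unit
        g++ -O0 -g distilled.cpp -o distilled
        objdump -d --demangle distilled              # disassembly of the final binary
        gdb ./distilled
        (gdb) disassemble /m compute                 # instructions interleaved with source lines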

    Much easier: try making the code generation more deterministic by disabling all compiler optimization (e.g., -O0 on GCC) and rewriting the floating-point expressions to use no temporaries / automatic variables. Then break on every line in GDB and compare.
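
    A minimal version of that compare step might look like this (the line number and the variable name result are placeholders; adapt them to your rewritten code):

        g++ -O0 -g distilled.cpp -o distilled
        gdb ./distilled
        (gdb) break distilled.cpp:12        # the line that computes the product
        (gdb) run
        (gdb) next                          # let the compiled store complete
        (gdb) print result                  # value the compiled code produced
        (gdb) print z.d * (1 - tau.d)       # value GDB's own evaluator produces
        (gdb) x/gx &result                  # raw bit pattern, so a 1-ulp difference is visible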

    I wish I could tell you exactly why that least-significant bit of the mantissa is flipped, but the truth is that, without a complete instruction and data trace of both your code and GDB itself, not even the processor "knows" why one evaluation carried a bit (because of, e.g., order of evaluation) and the other did not.
