Question
I am trying to come up with an efficient method to determine when rounding will/did occur for IEEE-754 operations. Unfortunately I am not able to simply check hardware flags. It would have to run on a few different platforms.
One of the approaches I thought of was to perform the operation in different rounding modes and compare the results.
Example for addition:
#include <fenv.h>
#pragma STDC FENV_ACCESS ON

/* wrapped in a function here so the snippet is self-contained; the name is illustrative */
int add_was_rounded(double operand1, double operand2)
{
    double result = operand1 + operand2;
    // save the current rounding mode
    int savedMode = fegetround();
    fesetround(FE_UPWARD);
    double upResult = operand1 + operand2;
    fesetround(FE_DOWNWARD);
    double downResult = operand1 + operand2;
    // restore the saved rounding mode
    fesetround(savedMode);
    // any difference between the three results means the default-mode result was rounded
    return (result != upResult) || (result != downResult);
}
but this is inefficient because it performs the operation three times (plus the rounding-mode changes).
Answer 1:
Your example does not necessarily give the right results at optimization level -O1 or higher. See this Godbolt link: only one addition (vaddsd) is generated by the compiler. At optimization level -O0 the assembly looks ok, but that leads to inefficient code. Moreover, calling fegetround and fesetround is relatively expensive compared to the cost of a few floating-point operations.
The (self-explanatory) code below is probably an interesting alternative. It uses the well-known algorithms 2Sum and 2ProdFMA. On systems without hardware fma, or fma emulation, you can use the 2Prod algorithm instead of 2ProdFMA (a sketch of 2Prod follows the example output below); see, for example, Accurate Floating Point Product and Exponentiation, by Stef Graillat.
/*
  gcc -m64 -Wall -O3 -march=haswell round_ex.c -lm
  or with fma emulation on systems without hardware fma support, for example:
  gcc -m64 -Wall -O3 -march=nehalem round_ex.c -lm
*/
#include <math.h>
#include <float.h>
#include <stdio.h>

int add_is_not_exact(double operand1, double operand2){
    double a = operand1;
    double b = operand2;
    double s, t, a_1, b_1, d_a, d_b;
    /* Algorithm 2Sum computes s and t such that a + b = s + t, exactly.       */
    /* Here t is the error of the floating-point addition s = a + b.           */
    /* See, for example, On the robustness of the 2Sum and Fast2Sum algorithms */
    /* by Boldo, Graillat, and Muller.                                          */
    s   = a + b;
    a_1 = s - b;
    b_1 = s - a_1;
    d_a = a - a_1;
    d_b = b - b_1;
    t   = d_a + d_b;
    return (t != 0.0);
}
int sub_is_not_exact(double operand1, double operand2){
    return add_is_not_exact(operand1, -operand2);
}
int mul_is_not_exact(double operand1, double operand2){
    double a = operand1;
    double b = operand2;
    double s, t;
    /* Algorithm 2ProdFMA computes s and t such that a * b = s + t, exactly. */
    /* Here t is the error of the floating-point multiplication s = a * b.   */
    /* See, for example, Accurate Floating Point Product and Exponentiation  */
    /* by Graillat.                                                           */
    s = a * b;
    t = fma(a, b, -s);
    if (s != 0.0) return (t != 0.0);       /* No underflow of a*b */
    else return (a != 0.0) && (b != 0.0);  /* Underflow: s = 0 is inexact if both a and b are nonzero */
}
int div_is_not_exact(double operand1, double operand2){
    double a = operand1;
    double b = operand2;
    double s, t;
    s = a / b;
    t = fma(s, b, -a);   /* fma(x,y,z) computes x*y+z with infinite intermediate precision */
    return (t != 0.0);
}
int main(){
    printf("add_is_not_exact(10.0, 1.0) = %i\n", add_is_not_exact(10.0, 1.0));
    printf("sub_is_not_exact(10.0, 1.0) = %i\n", sub_is_not_exact(10.0, 1.0));
    printf("mul_is_not_exact( 2.5, 2.5) = %i\n", mul_is_not_exact( 2.5, 2.5));
    printf("div_is_not_exact( 10, 2.5) = %i\n", div_is_not_exact( 10, 2.5));
    printf("add_is_not_exact(10.0, 0.1) = %i\n", add_is_not_exact(10.0, 0.1));
    printf("sub_is_not_exact(10.0, 0.1) = %i\n", sub_is_not_exact(10.0, 0.1));
    printf("mul_is_not_exact( 2.6, 2.6) = %i\n", mul_is_not_exact( 2.6, 2.6));
    printf("div_is_not_exact( 10, 2.6) = %i\n", div_is_not_exact( 10, 2.6));
    printf("\n0x1.0p-300 = %20e, 0x1.0p-600 = %20e \n", 0x1.0p-300, 0x1.0p-600);
    printf("mul_is_not_exact( 0x1.0p-300, 0x1.0p-300) = %i\n", mul_is_not_exact(0x1.0p-300, 0x1.0p-300));
    printf("mul_is_not_exact( 0x1.0p-600, 0x1.0p-600) = %i\n", mul_is_not_exact(0x1.0p-600, 0x1.0p-600));
}
The output is:
$ ./a.out
add_is_not_exact(10.0, 1.0) = 0
sub_is_not_exact(10.0, 1.0) = 0
mul_is_not_exact( 2.5, 2.5) = 0
div_is_not_exact( 10, 2.5) = 0
add_is_not_exact(10.0, 0.1) = 1
sub_is_not_exact(10.0, 0.1) = 1
mul_is_not_exact( 2.6, 2.6) = 1
div_is_not_exact( 10, 2.6) = 1
0x1.0p-300 = 4.909093e-91, 0x1.0p-600 = 2.409920e-181
mul_is_not_exact( 0x1.0p-300, 0x1.0p-300) = 0
mul_is_not_exact( 0x1.0p-600, 0x1.0p-600) = 1
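For completeness, a minimal sketch of the FMA-free 2Prod variant mentioned above could look as follows. This is my illustration rather than part of the original answer: the helper split and the function name mul_is_not_exact_no_fma are made up, and the computed error term is only exact as long as no overflow or intermediate underflow occurs, as discussed in the Graillat paper.

#include <math.h>

/* Veltkamp splitting: write x = hi + lo exactly, with hi and lo each
   fitting in about 26 significant bits (barring overflow). */
static void split(double x, double *hi, double *lo) {
    double c = 134217729.0 * x;   /* splitting constant 2^27 + 1 */
    *hi = c - (c - x);
    *lo = x - *hi;
}

int mul_is_not_exact_no_fma(double a, double b) {
    double a_hi, a_lo, b_hi, b_lo;
    double s = a * b;
    split(a, &a_hi, &a_lo);
    split(b, &b_hi, &b_lo);
    /* Dekker's product: t is the rounding error of s = a * b,
       provided the intermediate products neither overflow nor underflow. */
    double t = ((a_hi * b_hi - s) + a_hi * b_lo + a_lo * b_hi) + a_lo * b_lo;
    if (s != 0.0) return (t != 0.0);
    else return (a != 0.0) && (b != 0.0);  /* same underflow handling as mul_is_not_exact above */
}

On hardware with a fast fma instruction the fma-based version above is both shorter and cheaper, so this variant is only interesting on targets without FMA support.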
As noted in the comments, it is also possible to directly read the control and status register:
#include <fenv.h>
#pragma STDC FENV_ACCESS ON

int add_is_not_exact_v2(double a, double b)
{
    feclearexcept(FE_ALL_EXCEPT);
    double c = a + b;
    int tst = fetestexcept(FE_INEXACT);
    return (tst != 0);
}
Note, however, that this may not work at compiler optimization level -O1 or higher: the addsd double-add instruction is sometimes optimized away completely, leading to wrong results (one possible workaround is sketched at the end of this answer). For example, with gcc 8.2 and gcc -m64 -O1 -march=nehalem:
add_is_not_exact_v2:
        sub     rsp, 8
        mov     edi, 61
        call    feclearexcept
        mov     edi, 32
        call    fetestexcept
        test    eax, eax
        setne   al
        movzx   eax, al
        add     rsp, 8
        ret
At optimization level -O0, with two function calls and with relatively expensive instructions to modify the control and status register, this is not necessarily the most efficient solution.
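One possible workaround for the dead-code problem described above, which is my own suggestion and not part of the original answer, is to force the sum into a volatile object so the compiler cannot discard the addition; the function name add_is_not_exact_v3 is illustrative. This is a common idiom rather than a guarantee, so inspecting the generated assembly is still advisable.

#include <fenv.h>
#pragma STDC FENV_ACCESS ON

int add_is_not_exact_v3(double a, double b)
{
    feclearexcept(FE_INEXACT);
    volatile double c = a + b;   /* the volatile store keeps the addition from being optimized away */
    (void)c;                     /* silence the unused-variable warning */
    return fetestexcept(FE_INEXACT) != 0;
}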
Source: https://stackoverflow.com/questions/56498773/determine-if-rounding-occurred-for-a-floating-point-operation-in-c-c