How is fma() implemented?


The actual implementation varies from platform to platform, but speaking very broadly:

  • If you tell your compiler to target a machine with hardware FMA instructions (PowerPC, ARM with VFPv4 or AArch64, Intel Haswell or AMD Bulldozer and onwards), the compiler may replace calls to fma() by just dropping the appropriate instruction into your code. This is not guaranteed, but is generally good practice. Otherwise you will get a call to the math library, and:

  • When running on a processor that has hardware FMA, those instructions should be used to implement the function. However, if you have an older version of your operating system, or an older version of the math library, it may not take advantage of those instructions.

  • If you are running on a processor that does not have hardware FMA, or you are using an older (or just not very good) math library, then a software implementation of FMA will be used instead. This might be implemented using clever extended-precision floating-point tricks, or with integer arithmetic.

  • The result of the fma() function should always be correctly rounded (i.e. a "real fma"). If it is not, that's a bug in your system's math library. Unfortunately, fma() is one of the more difficult math library functions to implement correctly, so many implementations have bugs. Please report them to your library vendor so they get fixed! (A quick way to test for a real FMA is shown just after this list.)
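As a quick check of the correct-rounding property from the last bullet, here is a minimal sketch (my addition, not from the answer above): choose a so that the low bits of a*a are lost when the product is rounded on its own, and see whether fma() recovers them. Compile with -ffp-contract=off so the compiler does not contract the first expression into an FMA by itself.

#include <math.h>
#include <stdio.h>

int main(void) {
    double a = 1.0 + 0x1p-52;        /* smallest double greater than 1 */
    double c = -(1.0 + 0x1p-51);     /* the negated, rounded value of a*a */
    printf("a*a + c      = %g\n", a * a + c);    /* 0: product rounded first */
    printf("fma(a, a, c) = %g\n", fma(a, a, c)); /* 2^-104 with a real FMA */
    return 0;
}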

Is there an intrinsic to make sure a real FMA is used when the precision is relied upon?

Given a good compiler, this shouldn't be necessary; it should suffice to use the fma() function and tell the compiler what architecture you are targeting. However, compilers are not perfect, so you may need to use _mm_fmadd_sd() and related intrinsics on x86 (but report the bug to your compiler vendor!).
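For example, a minimal x86 sketch using the FMA3 intrinsic, compiled with -mfma (the wrapper name here is made up for illustration):

#include <immintrin.h>

/* Sketch: pin down the vfmadd instruction via FMA3 intrinsics rather
   than trusting the compiler to lower fma() itself. */
static double fma_via_intrinsic(double a, double b, double c) {
    __m128d r = _mm_fmadd_sd(_mm_set_sd(a), _mm_set_sd(b), _mm_set_sd(c));
    return _mm_cvtsd_f64(r);  /* extract the low lane: a*b + c, rounded once */
}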

One way to implement FMA in software is to split the significand into high and low bits. I use Dekker's splitting algorithm:

typedef struct { float hi; float lo; } doublefloat;

/* Dekker's splitting: break the 24-bit significand of a float into two
   halves such that hi + lo == a exactly and products of halves are exact. */
doublefloat split(float a) {
    float t = ((1 << 12) + 1) * a;  /* 2^12 + 1 is the splitter for floats */
    float hi = t - (t - a);         /* hi keeps the top 12 significand bits */
    float lo = a - hi;              /* lo is the exact remainder */
    return (doublefloat){hi, lo};
}
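As a quick sanity check (my addition, reusing the split above): the two halves recombine exactly, and each half carries at most 12 significand bits, which is what makes the partial products below exact.

#include <stdio.h>

int main(void) {
    float a = 3.14159265f;
    doublefloat d = split(a);
    printf("hi = %a, lo = %a, hi + lo == a: %d\n",
           d.hi, d.lo, d.hi + d.lo == a);
    return 0;
}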

Once you have split the floats, you can calculate a*b - c with a single rounding like this:

float fmsub(float a, float b, float c) {
    doublefloat as = split(a), bs = split(b);
    /* Each of the four partial products is exact; c is subtracted from
       the largest term first, then the corrections are added back in. */
    return ((as.hi*bs.hi - c) + as.hi*bs.lo + as.lo*bs.hi) + as.lo*bs.lo;
}

This basically subtracts c from (ahi,alo)*(bhi,blo) = (ahi*bhi + ahi*blo + alo*bhi + alo*blo).

I got this idea from the twoProd function in the paper Extended-Precision Floating-Point Numbers for GPU Computation, and from the mul_sub_x function in Agner Fog's Vector Class Library. He uses a different method for splitting vectors of floats. I tried to reproduce a scalar version of it here:

typedef union { float f; int i; } u;

/* Alternative split via bit masking: clear the low 12 bits of the
   representation to get hi, then recover lo exactly by subtraction. */
doublefloat split2(float a) {
    u hi = {a};
    u lo;
    hi.i &= -(1 << 12);    /* zero the low 12 bits of the significand */
    lo.f = a - hi.f;       /* exact: hi.f matches a in its high bits */
    return (doublefloat){hi.f, lo.f};
}

In any case, using split or split2 in fmsub agrees well with fma(a,b,-c) from glibc's math library. For whatever reason, my version is significantly faster than fma, except on a machine that has hardware FMA (in which case I use _mm_fmsub_ss anyway).
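A brute-force comparison along these lines (my sketch, reusing the fmsub above and taking glibc's fmaf as the correctly rounded reference) makes that agreement easy to measure:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    long mismatches = 0;
    srand(1);
    for (long i = 0; i < 10000000; i++) {
        /* random operands in [-0.5, 0.5] */
        float a = (float)rand() / RAND_MAX - 0.5f;
        float b = (float)rand() / RAND_MAX - 0.5f;
        float c = (float)rand() / RAND_MAX - 0.5f;
        if (fmsub(a, b, c) != fmaf(a, b, -c))
            mismatches++;
    }
    printf("%ld mismatches out of 10000000\n", mismatches);
    return 0;
}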

Z boson's FMA suggestion based on Dekker's algorithm is unfortunately incorrect. Unlike in Dekker's twoProduct, in the more general FMA case the magnitude of c is not known relative to the product terms, and hence wrong cancellations can occur.

So, while Dekker's twoProduct can be greatly accelerated with a hardware FMA, its error-term computation does not amount to a robust FMA implementation.

A correct implementation would need to either use a summation algorithm with higher than double precision, or add the terms in decreasing order of magnitude.
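For illustration only (my sketch, not a complete FMA): the standard error-free building block such an implementation would combine with the exact partial products is Knuth's twoSum, which captures the rounding error of each addition so that the terms can be accumulated without silently losing cancelled bits.

/* Knuth's twoSum: s is the rounded sum and e its exact error, so
   s + e == a + b holds exactly for any finite a and b. */
typedef struct { float s; float e; } floatsum;

floatsum two_sum(float a, float b) {
    float s  = a + b;
    float bv = s - a;                    /* the portion of b that landed in s */
    float e  = (a - (s - bv)) + (b - bv);
    return (floatsum){s, e};
}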
