Double precision floating point in CUDA

盖世英雄少女心 2020-12-10 02:56

Does CUDA support double precision floating point numbers?

If not, what are the reasons?

4 Answers
  • 2020-12-10 03:25

    As mentioned by others, older CUDA cards don't support the double type. But if you want more precision than your old GPU natively provides, you can use the float-float approach, which is analogous to the double-double technique (see the sketch at the end of this answer). For more information about the technique, read:

    • Emulate "double" using 2 "float"s
    • Emulating FP64 with 2 FP32 on a GPU

    Of course, on modern GPUs you can also use double-double to achieve accuracy greater than plain double. The double-double representation is also used to implement long double on PowerPC.
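
    A minimal sketch of the float-float idea for addition, assuming IEEE-compliant single-precision adds (which CUDA provides by default); the struct and function names are made up, and this is not a vetted library routine:

    ```cuda
    // Hypothetical float-float ("df64") value: the represented number is hi + lo.
    struct ff { float hi; float lo; };

    // Error-free addition based on Knuth's two-sum, as used in the
    // float-float / double-double literature linked above.
    __device__ ff ff_add(ff a, ff b) {
        float s = a.hi + b.hi;                    // approximate sum
        float v = s - a.hi;
        float e = (a.hi - (s - v)) + (b.hi - v);  // exact rounding error of s
        e += a.lo + b.lo;                         // fold in the low-order parts
        ff r;
        r.hi = s + e;                             // renormalize (fast two-sum)
        r.lo = e - (r.hi - s);
        return r;
    }
    ```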

  • As a tip:

    If you want to use double precision, you have to compile for GPU architecture sm_13 (if your GPU supports it).

    Otherwise the compiler will silently demote all doubles to floats and emit only a warning (as seen in faya's post). (Very annoying if you get an error because of this :-) )

    The flag is: -arch=sm_13
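
    For example, a minimal sketch (the file name ddemo.cu is made up, and targeting sm_13 requires a CUDA toolkit old enough to still support it):

    ```cuda
    // ddemo.cu -- compile with: nvcc -arch=sm_13 ddemo.cu -o ddemo
    // Without -arch=sm_13 (or newer), old toolchains demote double to float.
    __global__ void scale(double *x, double s, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) x[i] *= s;  // a genuine double-precision multiply on sm_13+
    }
    ```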

  • 2020-12-10 03:45

    Following on from Paul R's comments, Compute Capability 2.0 devices (aka Fermi) have much improved double-precision support, with performance only half that of single-precision.

    This Fermi whitepaper has more details about the double-precision performance of the new devices.

  • 2020-12-10 03:46

    If your GPU has compute capability 1.3 then it can do double precision. You should be aware, though, that 1.3 hardware has only one double-precision FP unit per multiprocessor, which has to be shared by all the threads on that multiprocessor, whereas there are 8 single-precision FPUs, so each active thread has its own single-precision FPU. In other words, you may well see 8x worse performance with double precision than with single precision. You can check the compute capability at runtime, as in the sketch below.
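
    A minimal host-side sketch of that check (device index 0 is assumed):

    ```cuda
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, 0);  // query device 0
        // Native double precision arrived with compute capability 1.3
        bool hasDouble = (prop.major > 1) || (prop.major == 1 && prop.minor >= 3);
        printf("%s (CC %d.%d): double precision %s\n",
               prop.name, prop.major, prop.minor,
               hasDouble ? "supported" : "not supported");
        return 0;
    }
    ```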
