Does CUDA support double-precision floating-point numbers?
If not, what are the reasons?
As mentioned by others, older CUDA cards don't support the double
type. But if you want more precision than the float your old GPU provides, you can use the float-float approach, which is similar to the double-double technique.
Of course, on modern GPUs you can also use double-double to achieve accuracy beyond native double. Double-double arithmetic
is also how long double is implemented on PowerPC.
As a tip: if you want to use double precision, you have to set the GPU architecture to sm_13 (if your GPU supports it).
Otherwise nvcc will still demote all doubles to floats and give only a warning (as seen in faya's post). (Very annoying if you get an error because of this :-) )
The flag is: -arch=sm_13
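As a quick sanity check you can compile a trivial kernel both with and without the flag and compare results (the file name and values here are made up for illustration); without -arch=sm_13 the demoted-to-float version loses the low-order digits:

```
// double_test.cu — hypothetical example; compile with:
//   nvcc -arch=sm_13 double_test.cu -o double_test
// Without -arch=sm_13 the toolchain demotes double to float
// and only prints a warning, so the result silently loses precision.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scale(double *x, double factor) {
    // Executes in true double precision only on sm_13 or newer targets.
    x[0] *= factor;
}

int main() {
    double h = 1.0 / 3.0;
    double *d;
    cudaMalloc(&d, sizeof(double));
    cudaMemcpy(d, &h, sizeof(double), cudaMemcpyHostToDevice);
    scale<<<1, 1>>>(d, 3.0);
    cudaMemcpy(&h, d, sizeof(double), cudaMemcpyDeviceToHost);
    cudaFree(d);
    // Should be 1 to ~16 significant digits if real doubles were used;
    // only ~7 digits survive if everything was demoted to float.
    printf("%.17g\n", h);
    return 0;
}
```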
Following on from Paul R's comments, Compute Capability 2.0 devices (aka Fermi) have much improved double-precision support, with performance only half that of single-precision.
This Fermi whitepaper has more details about the double performance of the new devices.
If your GPU has compute capability 1.3 then you can do double precision. You should be aware though that 1.3 hardware has only one double precision FP unit per MP, which has to be shared by all the threads on that MP, whereas there are 8 single precision FPUs, so each active thread has its own single precision FPU. In other words you may well see 8x worse performance with double precision than with single precision.