I've been trying to find info on the performance of float vs double on graphics hardware. I've found plenty of info on float vs double on CPUs, but similar info for GPUs is much harder to come by.
Modern graphics cards perform many optimizations; for example, some can even operate on 24-bit floats. As far as I know, graphics cards don't use doubles internally, since they're built for speed rather than precision.
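To make the speed question concrete, this is roughly the kind of microbenchmark I have in mind: the same multiply-add chain run once in float and once in double, so any difference in timing comes from the precision alone. This is just a minimal CUDA sketch; the kernel and helper names (fma_chain, time_kernel) are mine, and the grid/iteration sizes are arbitrary.

    // Minimal float-vs-double throughput probe (hypothetical test, not a definitive benchmark).
    #include <cstdio>
    #include <cuda_runtime.h>

    template <typename T>
    __global__ void fma_chain(T* out, int iters)
    {
        int idx = blockIdx.x * blockDim.x + threadIdx.x;
        T x = static_cast<T>(idx) * static_cast<T>(1e-3);
        T a = static_cast<T>(1.0001);
        T b = static_cast<T>(0.9999);
        for (int i = 0; i < iters; ++i)
            x = x * a + b;          // one multiply-add per iteration
        out[idx] = x;               // write result so the loop is not optimized away
    }

    template <typename T>
    float time_kernel(int n, int iters)
    {
        T* d_out;
        cudaMalloc(&d_out, n * sizeof(T));

        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);

        cudaEventRecord(start);
        fma_chain<T><<<n / 256, 256>>>(d_out, iters);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);

        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);

        cudaEventDestroy(start);
        cudaEventDestroy(stop);
        cudaFree(d_out);
        return ms;
    }

    int main()
    {
        const int n = 1 << 20;      // number of threads
        const int iters = 4096;     // arithmetic per thread

        printf("float : %.2f ms\n", time_kernel<float>(n, iters));
        printf("double: %.2f ms\n", time_kernel<double>(n, iters));
        return 0;
    }

On hardware with reduced double-precision throughput I'd expect the second number to be noticeably larger, but I don't know by how much on current cards.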
From the Wikipedia entry on GPGPU:
The implementations of floating point on Nvidia GPUs are mostly IEEE compliant; however, this is not true across all vendors. This has implications for correctness which are considered important to some scientific applications. While 64-bit floating point values (double precision float) are commonly available on CPUs, these are not universally supported on GPUs; some GPU architectures sacrifice IEEE-compliance while others lack double-precision altogether. There have been efforts to emulate double precision floating point values on GPUs; however, the speed tradeoff negates any benefit to offloading the computation onto the GPU in the first place.
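For what it's worth, the emulation the quote mentions is usually built on "double-float" tricks, where a value is carried as an unevaluated sum of two floats. Below is a sketch of the basic building block (Knuth's TwoSum, which recovers the rounding error of a float addition exactly); the struct and function names are my own, and real double-float libraries do considerably more (renormalization, multiplication, division, etc.).

    #include <cstdio>

    struct df { float hi, lo; };   // "double-float": value = hi + lo

    // Exact sum of two floats: s is the rounded sum, err the rounding error.
    __host__ __device__ inline df two_sum(float a, float b)
    {
        float s   = a + b;
        float bb  = s - a;
        float err = (a - (s - bb)) + (b - bb);
        return { s, err };
    }

    // Add a plain float to a double-float value (simplified).
    __host__ __device__ inline df df_add(df a, float b)
    {
        df t = two_sum(a.hi, b);
        return two_sum(t.hi, t.lo + a.lo);
    }

    int main()
    {
        // Summing many tiny terms: plain float drops them, the
        // double-float accumulator keeps their contribution.
        float plain = 1.0f;
        df    wide  = { 1.0f, 0.0f };
        for (int i = 0; i < 10000000; ++i) {
            plain += 1e-8f;
            wide   = df_add(wide, 1e-8f);
        }
        printf("float        : %.9f\n", plain);
        printf("double-float : %.9f\n", (double)wide.hi + wide.lo);
        printf("reference    : %.9f\n", 1.0 + 1e7 * 1e-8);
        return 0;
    }

Every emulated operation costs several float instructions, which presumably is why the quote says the speed tradeoff can negate the benefit of offloading to the GPU at all.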