Float vs Double Performance

鱼传尺愫 2020-11-28 04:32

I did some timing tests and also read some articles like this one (last comment), and it looks like in a Release build, float and double values take the same amount of processing time.

4 answers
  • 2020-11-28 05:01

    There are still some cases where floats are preferred, however. With OpenGL coding, for example, it's far more common to use the GLfloat datatype (generally mapped directly to a 32-bit float), because it is more efficient on most GPUs than GLdouble.
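
    A rough way to see why that matters for GPU data is just to count bytes. The sketch below is plain C# arithmetic (not OpenGL code); the per-vertex layout and mesh size are assumed purely for illustration:

        using System;

        class VertexBufferSize
        {
            static void Main()
            {
                const int vertexCount = 1_000_000;            // assumed mesh size
                const int componentsPerVertex = 3 + 3 + 2;    // position + normal + UV (assumed layout)

                long floatBytes  = (long)vertexCount * componentsPerVertex * sizeof(float);  // 4 bytes each
                long doubleBytes = (long)vertexCount * componentsPerVertex * sizeof(double); // 8 bytes each

                Console.WriteLine($"float vertex buffer:  {floatBytes / (1024.0 * 1024.0):F1} MB");
                Console.WriteLine($"double vertex buffer: {doubleBytes / (1024.0 * 1024.0):F1} MB");
            }
        }

    Halving the buffer size also halves the bandwidth needed to move the vertex data to the GPU, which is where most of the practical benefit comes from.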

  • 2020-11-28 05:06

    On x86 processors, at least, float and double will each be converted to a 10-byte (80-bit extended-precision) real by the x87 FPU for processing. The FPU doesn't have separate processing units for the different floating-point types it supports. (That describes the classic x87 unit; the SSE2 instructions that 64-bit code typically uses do have separate single- and double-precision operations.)

    The age-old advice that float is faster than double applied 100 years ago, when most CPUs didn't have built-in FPUs (and few people had separate FPU chips), so most floating-point manipulation was done in software. On those machines (which were powered by steam generated by the lava pits), it was faster to use floats. Now the only real benefit of float is that it takes up less space, which only matters if you have millions of them; a quick sketch of that space difference follows.
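
    For example (a minimal sketch; the array length is arbitrary, and GC.GetTotalMemory only gives a rough figure):

        using System;

        class FloatVsDoubleFootprint
        {
            static void Main()
            {
                const int n = 10_000_000;                     // "millions of them" -- arbitrary count

                long before = GC.GetTotalMemory(true);
                float[] floats = new float[n];                // 4 bytes per element
                long afterFloats = GC.GetTotalMemory(true);
                double[] doubles = new double[n];             // 8 bytes per element
                long afterDoubles = GC.GetTotalMemory(true);

                Console.WriteLine($"float[{n}]:  ~{(afterFloats - before) / (1024 * 1024)} MB");
                Console.WriteLine($"double[{n}]: ~{(afterDoubles - afterFloats) / (1024 * 1024)} MB");

                GC.KeepAlive(floats);                         // keep both arrays reachable until measured
                GC.KeepAlive(doubles);
            }
        }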

  • 2020-11-28 05:14

    I had a small project where I used CUDA, and I can remember that float was faster than double there, too. For one, the traffic between host and device is lower (the host being the CPU and its "normal" RAM, the device being the GPU and its own RAM). But even when the data resides on the device the whole time, double is slower. I think I read somewhere that this has changed recently, or is supposed to change with the next generation, but I'm not sure.

    So it seems that the GPU simply can't handle double precision natively in those cases, which would also explain why GLfloat is usually used rather than GLdouble.

    (As I said, this is only as far as I can remember; I just stumbled upon this while searching for float vs. double on a CPU.)
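
    To put rough numbers on the host-to-device traffic point, here is a back-of-envelope sketch; the element count and the PCIe bandwidth figure are assumptions for illustration, not measurements:

        using System;

        class HostToDeviceTransferEstimate
        {
            static void Main()
            {
                const long elements = 100_000_000;        // assumed array size
                const double pcieGBPerSec = 12.0;         // assumed effective PCIe bandwidth, GB/s

                double floatGB  = elements * 4.0 / 1e9;   // float  = 4 bytes per element
                double doubleGB = elements * 8.0 / 1e9;   // double = 8 bytes per element

                Console.WriteLine($"float copy:  {floatGB:F2} GB, ~{floatGB / pcieGBPerSec * 1000:F0} ms");
                Console.WriteLine($"double copy: {doubleGB:F2} GB, ~{doubleGB / pcieGBPerSec * 1000:F0} ms");
            }
        }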

  • 2020-11-28 05:24

    It depends on whether you target a 32-bit or a 64-bit build. Compiled for 64-bit, double will be faster. Compiled for 32-bit (on a 64-bit machine and OS), float was around 30% faster:

        using System;
        using System.Threading;

        public class FloatVsDoubleBenchmark
        {
            public static void doubleTest(int loop)
            {
                Console.Write("double: ");
                for (int i = 0; i < loop; i++)
                {
                    double a = 1000, b = 45, c = 12000, d = 2, e = 7, f = 1024;
                    a = Math.Sin(a);
                    b = Math.Asin(b);   // |b| > 1, so this yields NaN, but the call is still timed
                    c = Math.Sqrt(c);
                    d = d + d - d + d;
                    e = e * e + e * e;
                    f = f / f / f / f / f;
                }
            }

            public static void floatTest(int loop)
            {
                Console.Write("float: ");
                for (int i = 0; i < loop; i++)
                {
                    float a = 1000, b = 45, c = 12000, d = 2, e = 7, f = 1024;
                    // Math.Sin/Asin/Sqrt operate on double, so each result is cast back to float
                    a = (float) Math.Sin(a);
                    b = (float) Math.Asin(b);
                    c = (float) Math.Sqrt(c);
                    d = d + d - d + d;
                    e = e * e + e * e;
                    f = f / f / f / f / f;
                }
            }

            static void Main(string[] args)
            {
                DateTime time = DateTime.Now;
                doubleTest(5 * 1000000);
                Console.WriteLine("milliseconds: " + (DateTime.Now - time).TotalMilliseconds);

                time = DateTime.Now;
                floatTest(5 * 1000000);
                Console.WriteLine("milliseconds: " + (DateTime.Now - time).TotalMilliseconds);

                Thread.Sleep(5000);   // keep the console window open for a moment
            }
        }
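
    If you rerun this yourself, System.Diagnostics.Stopwatch usually gives more precise timings than DateTime.Now. A minimal sketch of a drop-in replacement for the Main method above (same loop counts) might look like this:

        using System.Diagnostics;   // in addition to the usings above

        static void Main(string[] args)
        {
            var sw = Stopwatch.StartNew();
            doubleTest(5 * 1000000);
            Console.WriteLine("double milliseconds: " + sw.ElapsedMilliseconds);

            sw.Restart();
            floatTest(5 * 1000000);
            Console.WriteLine("float milliseconds: " + sw.ElapsedMilliseconds);

            Thread.Sleep(5000);     // keep the console window open, as in the original
        }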
    