Are doubles faster than floats in C#?

[愿得一人] 2020-12-02 20:06

I'm writing an application which reads large arrays of floats and performs some simple operations on them. I'm using floats because I thought they'd be faster than doubles.

10 Answers
  • 2020-12-02 20:16

    If load & store operations are the bottleneck, then floats will be faster, because they're smaller. If you're doing a significant number of calculations between loads and stores, it should be about equal.

    Someone else mentioned avoiding conversions between float & double, and calculations that use operands of both types. That's good advice, and if you use any math library functions that return doubles (for example), then keeping everything as doubles will be faster.
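    A minimal sketch of that advice (the names here are illustrative): `Math.Sqrt` takes and returns `double`, so feeding it floats forces a widening conversion on every call and a narrowing cast on the way back, while an all-double pipeline has no conversions at all.

    ```csharp
    using System;

    class ConversionDemo
    {
        // Mixed types: Math.Sqrt works on double, so the float inputs
        // are widened and the result narrowed on every call.
        public static float NormMixed(float x, float y)
            => (float)Math.Sqrt(x * x + y * y);

        // Uniform types: everything stays double, no conversions.
        public static double NormDouble(double x, double y)
            => Math.Sqrt(x * x + y * y);

        static void Main()
        {
            Console.WriteLine(NormMixed(3f, 4f));    // 5
            Console.WriteLine(NormDouble(3.0, 4.0)); // 5
        }
    }
    ```

    (On newer runtimes `MathF.Sqrt` offers a float-native alternative, which is the other way to keep the pipeline in one type.)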

  • 2020-12-02 20:19

    I just read the "Microsoft .NET Framework-Application Development Foundation 2nd" for the MCTS exam 70-536 and there is a note on page 4 (chapter 1):

    NOTE Optimizing performance with built-in types
    The runtime optimizes the performance of 32-bit integer types (Int32 and UInt32), so use those types for counters and other frequently accessed integral variables. For floating-point operations, Double is the most efficient type because those operations are optimized by hardware.

    It's written by Tony Northrup. I don't know if he's an authority or not, but I would expect that the official book for the .NET exam should carry some weight. It is of course not a guarantee. I just thought I'd add it to this discussion.

  • I profiled a similar question a few weeks ago. The bottom line is that for x86 hardware, there is no significant difference in the performance of floats versus doubles unless you become memory bound, or you start running into cache issues. In that case floats will generally have the advantage because they are smaller.

    Intel CPUs perform x87 floating-point operations in 80-bit-wide registers, so the actual speed of the computation shouldn't vary between floats and doubles.
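    A rough sketch of the memory-bound case (the array size and the timings are illustrative): a large `float[]` is half the size of the equivalent `double[]`, so when memory bandwidth is the limit, the float version tends to win.

    ```csharp
    using System;
    using System.Diagnostics;

    class MemoryBoundDemo
    {
        public static float SumFloat(float[] a)
        {
            float s = 0f;
            for (int i = 0; i < a.Length; i++) s += a[i];
            return s;
        }

        public static double SumDouble(double[] a)
        {
            double s = 0.0;
            for (int i = 0; i < a.Length; i++) s += a[i];
            return s;
        }

        static void Main()
        {
            const int n = 10_000_000; // 40 MB of floats vs 80 MB of doubles
            var f = new float[n];
            var d = new double[n];
            for (int i = 0; i < n; i++) { f[i] = 1f; d[i] = 1.0; }

            var sw = Stopwatch.StartNew();
            float sf = SumFloat(f);
            long tf = sw.ElapsedMilliseconds;

            sw.Restart();
            double sd = SumDouble(d);
            long td = sw.ElapsedMilliseconds;

            Console.WriteLine($"float:  sum={(long)sf} in {tf} ms");
            Console.WriteLine($"double: sum={(long)sd} in {td} ms");
        }
    }
    ```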

  • 2020-12-02 20:20

    Matthijs,

    You are wrong. 32-bit is far more efficient than 16-bit in modern processors. Perhaps not memory-wise, but in raw speed 32-bit is the way to go.

    You really should update your professor to something more "up-to-date". ;)

    Anyway, to answer the question: float and double have exactly the same performance, at least on my Intel i7 870 (as theory predicts).

    Here are my measurements:

    (I made an "algorithm" that I repeated 10,000,000 times, then repeated that 300 times, and out of that I made an average.)

    double
    -----------------------------
    1 core  = 990 ms
    4 cores = 340 ms
    6 cores = 282 ms
    8 cores = 250 ms
    
    float
    -----------------------------
    1 core  = 992 ms
    4 cores = 340 ms
    6 cores = 282 ms
    8 cores = 250 ms
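    The original "algorithm" isn't shown, so the harness below is only a hedged reconstruction of the methodology described: a fixed amount of floating-point work split across a given number of cores, timed, and averaged over repeats. The kernel and the iteration counts are stand-ins.

    ```csharp
    using System;
    using System.Diagnostics;
    using System.Threading.Tasks;

    class ParallelBench
    {
        const int TotalIterations = 10_000_000;
        const int Repeats = 30; // the answer used 300; lowered here for illustration

        // Stand-in kernel: a dependent chain of multiply-adds.
        public static double Kernel(int iterations)
        {
            double acc = 1.0;
            for (int i = 0; i < iterations; i++) acc = acc * 1.0000001 + 0.5;
            return acc;
        }

        // Splits the fixed workload evenly across the requested cores.
        public static double MeasureMs(int cores)
        {
            var sw = Stopwatch.StartNew();
            Parallel.For(0, cores,
                         new ParallelOptions { MaxDegreeOfParallelism = cores },
                         _ => Kernel(TotalIterations / cores));
            return sw.Elapsed.TotalMilliseconds;
        }

        static void Main()
        {
            foreach (int cores in new[] { 1, 2, 4 })
            {
                double total = 0;
                for (int r = 0; r < Repeats; r++) total += MeasureMs(cores);
                Console.WriteLine($"{cores} core(s): {total / Repeats:F0} ms average");
            }
        }
    }
    ```

    Running the same harness twice, once with `double` and once with `float` in the kernel, is what would produce tables like the ones above.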
    
  • 2020-12-02 20:21

    I had always thought that processors were optimized to run at the same speed regardless of float or double. Searching for optimizations in my intensive computations (lots of reads from a matrix, comparisons of two values), I found out that floats run about 13% faster.

    This surprised me, but I guess it is due to the nature of my problem. I don't do casts between float and double in the core of the operations, and my computations are mainly adding, multiplying and subtracting.

    This is on my i7 920, running a 64-bit operating system.

  • 2020-12-02 20:22

    This indicates that floats are slightly faster than doubles: http://www.herongyang.com/cs_b/performance.html

    In general, any time you compare performance, you should take into account any special cases, such as whether using one type requires additional conversions or data massaging. Those costs add up and can skew generic benchmarks like this one.
