When to use a Float

南笙 · 2021-02-03 18:40

Years ago I learned the hard way about precision problems with floats, so I quit using them. However, I still run into code using floats, and it makes me cringe because I know som…

7 Answers
  • 2021-02-03 19:02

    The most common reason I can think of is to save space. Not that this is often worth worrying about, but in some instances it matters. A float takes up half as much memory as a double, so you can fit twice as many in the same space. For example, I've had an array of numbers that was too big to fit into RAM as doubles but fit as an array of floats.
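
    A rough C# sketch of that trade-off (the array size is hypothetical):

        using System;

        class SizeDemo
        {
            static void Main()
            {
                Console.WriteLine(sizeof(float));  // 4 bytes
                Console.WriteLine(sizeof(double)); // 8 bytes

                // 100 million readings: ~0.4 GB as floats; as doubles it would be ~0.8 GB.
                float[] readings = new float[100_000_000];
                Console.WriteLine(readings.Length * sizeof(float) / (1024 * 1024) + " MB");
            }
        }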

  • 2021-02-03 19:05

    There are many cases where you would want to use a float. What I don't understand, however, is what you would use instead. If you mean using double instead of float, then yes, in most cases you want to do that. However, double also has precision issues. You should use decimal whenever accuracy is important.

    float and double are very useful in many applications. decimal is an expensive data type, and its range (the magnitude of the largest number it can represent) is smaller than double's. Computers usually have hardware-level support for float and double, and they are used heavily in scientific computing; basically, they are the primary fractional data types you want to use. However, in monetary calculations, where exactness is extremely important, decimal is the way to go.
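
    A small C# sketch of that trade-off: double is fast and hardware-backed but slightly off in base 10, while decimal is exact for decimal fractions:

        using System;

        class DecimalVsDouble
        {
            static void Main()
            {
                double d = 0.1 + 0.2;
                Console.WriteLine(d == 0.3);        // False: 0.1 and 0.2 have no exact base-2 form
                Console.WriteLine(d.ToString("R")); // 0.30000000000000004

                decimal m = 0.1m + 0.2m;
                Console.WriteLine(m == 0.3m);       // True: decimal stores base-10 digits exactly
            }
        }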

  • 2021-02-03 19:12

    All floating-point calculations are inexact in the general case; floats are just more so than doubles. If you want more information, have a read of What Every Computer Scientist Should Know About Floating-Point Arithmetic.

    As for when to use floats: they are often used when precision matters less than saving memory, for example in simple particle simulations in video games.
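
    As an illustration (the struct and numbers here are hypothetical), packing particle state into floats halves the memory and cache traffic per particle:

        using System;

        // 24 bytes per particle as floats; the same fields as doubles would take 48.
        struct Particle
        {
            public float X, Y, Z;    // position
            public float Vx, Vy, Vz; // velocity
        }

        class ParticleDemo
        {
            static void Main()
            {
                var particles = new Particle[1_000_000]; // ~24 MB instead of ~48 MB
                float dt = 1f / 60f;                     // one 60 Hz frame
                for (int i = 0; i < particles.Length; i++)
                {
                    // The tiny per-step rounding error is invisible on screen.
                    particles[i].X += particles[i].Vx * dt;
                }
                Console.WriteLine(particles[0].X);
            }
        }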

  • 2021-02-03 19:13

    First, never use floats or doubles if you want to represent decimal values exactly: use either integer types (int, long, etc.) or decimal (which is just an integer type with a scaling factor). Floats and doubles are stored internally in an exponential representation in base 2, and numbers that are exact in an exponential representation in base 10 cannot, in general, be represented exactly in base 2. (E.g., the number 0.1 is only represented approximately by floats or doubles.)
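
    For example, one common way to follow this advice is to keep money as an integer count of cents, or as decimal (which scales an integer by a power of ten):

        using System;

        class ExactMoney
        {
            static void Main()
            {
                long priceInCents = 1999;               // $19.99, stored exactly
                long totalInCents = priceInCents * 3;   // pure integer math, no rounding
                Console.WriteLine(totalInCents / 100m); // 59.97

                decimal price = 19.99m;                 // also exact: 1999 scaled by 10^-2
                Console.WriteLine(price * 3);           // 59.97
            }
        }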

    Second, in terms of precision, it depends on what you need. I don't agree with the sentiment that precision always matters. You normally have a specific requirement, say that the final result be accurate to 3 digits. It makes no sense to chase the highest possible precision when your input has only limited accuracy: say you weigh about 5 g of flour and your scale is only accurate to 0.5 g. That said, intermediate calculations usually benefit from higher precision, but quite often speed is more important than high precision.

    Third, when performing a series of calculations, say within a loop, you need to know what you are doing when dealing with inexact arithmetic: you will incur round-off errors, and some algorithms may not converge to an answer at any useful precision. Understanding these issues in detail may require a course in numerical analysis, and none of it depends on whether you choose floats or doubles.
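
    A quick sketch of how round-off compounds in a loop; the exact digits depend on the runtime and hardware, but the pattern holds:

        using System;

        class DriftDemo
        {
            static void Main()
            {
                float fSum = 0f;
                double dSum = 0.0;
                for (int i = 0; i < 10_000_000; i++)
                {
                    fSum += 0.1f; // error grows once the running sum dwarfs the addend
                    dSum += 0.1;
                }
                Console.WriteLine(fSum); // noticeably off from the ideal 1,000,000
                Console.WriteLine(dSum); // off only in the last few digits
            }
        }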

    For floating-point calculations I would usually go with doubles, since they are more general and faster than floats. However, floats are smaller, and if you need to store a lot of them they are the choice to prevent performance issues due to cache misses.

    To my knowledge, traditional FPUs (like the x87) perform arithmetic at double or extended precision internally, so using floats incurs a conversion to double. However, some routines can stop sooner when calculating a value iteratively if you pass a float, since that implies you only want about 8 digits of precision vs. about 16 for doubles.

  • 2021-02-03 19:15

    Short answer: only use a float when you know exactly what you're doing and why.

    Long answer: floats (as opposed to doubles) aren't really used anymore outside 3D APIs, as far as I know. Floats and doubles have the same performance characteristics on modern CPUs; doubles are somewhat bigger, and that's all. If in doubt, just use double.

    Oh yes, and use decimal for financial calculations, of course.

  • 2021-02-03 19:20

    There is in fact one area where it is still common to use floats, a.k.a. "single precision" with 32 bits: graphics applications and printing.

    The other reason is graphics cards and their GPUs: the smaller the data type, the faster the operation, because fewer bits must be moved. Integer data types also have problems with high-dynamic-range images. The eye functions over a luminosity range of about 1:10^13 and discerns roughly 4,000 levels, so while an integer type can store the number of levels, it cannot at the same time store the background brightness; floats have no problem with that.

    In fact, IEEE 754R adds a new "half precision" float with 16 bits and a 10-bit mantissa, which loses some precision but allows even greater speed. OpenGL and DirectX, for example, use floats extensively. The eye is very forgiving of artifacts, so that is no problem.
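
    For the 16-bit format, .NET 5+ exposes System.Half; a small sketch (assuming a .NET 5+ runtime) of what its 10-bit mantissa does to a value:

        using System;

        class HalfDemo
        {
            static void Main()
            {
                Half h = (Half)0.3f;         // rounded to the nearest 10-bit-mantissa value
                Console.WriteLine((float)h); // ~0.30005: coarse, but fine for pixel intensities
                // The largest finite Half is 65504: a small range, yet the exponent still
                // covers a dynamic range that a 16-bit integer cannot.
            }
        }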

    All other media built on graphics inherit floats as a convenient measure. The mantissa has 24 bits, therefore allowing 2^24 ≈ 16.7 million consecutive steps. With a printer at 2000 dpi resolution, you could still print sheets of roughly 213 m × 213 m. More than enough precision.
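
    The 24-bit limit is easy to demonstrate: above 2^24, consecutive integers can no longer be told apart in a float:

        using System;

        class MantissaDemo
        {
            static void Main()
            {
                float f = 16_777_216f;          // 2^24
                Console.WriteLine(f + 1f == f); // True: the next float up is 16,777,218
                Console.WriteLine(16_777_215f + 1f == 16_777_216f); // True: exact below 2^24
            }
        }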
