Float or Double?

北海茫月 2020-12-29 20:36

Which is faster, double or float, when performing arithmetic (+ - * / %), and is it worth just using float for memory reasons? Precision is not much of an issue.

8 Answers
  • 2020-12-29 21:18

    The processing speed of both types should be approximately the same on modern CPUs.

    "use whichever precision is required for acceptable results."

    Related questions have been asked a couple of times here on SO, here is one.

    Edit:

    In speed terms, there's no difference between float and double on the more modern hardware.

    Please check out this article from developer.android.com.

  • 2020-12-29 21:27

    a float is 32 bits or 4 bytes

    a double is 64 bits or 8 bytes

    so yes, floats are half the size, according to the Sun Java certification book.
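    Those sizes can be checked from code; the JDK wrapper classes expose them as constants (a quick sketch, using the standard `Float`/`Double` classes — `BYTES` requires Java 8+):

```java
// Print the sizes of Java's floating-point primitives using the
// constants defined on the standard wrapper classes.
public class SizeDemo {
    public static void main(String[] args) {
        System.out.println("float:  " + Float.BYTES + " bytes, " + Float.SIZE + " bits");
        System.out.println("double: " + Double.BYTES + " bytes, " + Double.SIZE + " bits");
        // prints:
        // float:  4 bytes, 32 bits
        // double: 8 bytes, 64 bits
    }
}
```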

  • 2020-12-29 21:30

    I wouldn't advise either for fast operations, but I would expect operations on floats to be faster, since they are 32-bit versus the 64-bit of doubles.

  • 2020-12-29 21:31

    Double rather than float was advised by an ADT v21 lint message, due to the JIT (Just In Time) optimizations in Dalvik from Froyo onwards (API 8 and later).

    I was using FloatMath.sin, and it suggested Math.sin instead, with the following text under the "explain issue" context menu. It reads to me like a general message about double vs. float, not just something trig-related.

    "In older versions of Android, using android.util.FloatMath was recommended for performance reasons when operating on floats. However, on modern hardware doubles are just as fast as float (though they take more memory), and in recent versions of Android, FloatMath is actually slower than using java.lang.Math due to the way the JIT optimizes java.lang.Math. Therefore, you should use Math instead of FloatMath if you are only targeting Froyo and above."
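    In code, the suggested change is a one-liner (a sketch; the cast is needed because java.lang.Math operates on doubles, while android.util.FloatMath operated on floats):

```java
// Migrating from the old Android-only FloatMath API to java.lang.Math.
// FloatMath was deprecated and later removed from Android, so only the
// java.lang.Math form is shown as runnable here.
public class SinDemo {
    public static void main(String[] args) {
        float angle = 0.5f;
        // float s = FloatMath.sin(angle);  // old: android.util.FloatMath
        float s = (float) Math.sin(angle);  // new: standard Math, JIT-friendly
        System.out.println(s);
    }
}
```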

    Hope this helps.

  • 2020-12-29 21:31

    The Android documentation quoted indicates that integers are preferable for fast operations. This seems a little strange on the face of it, but the speed of an algorithm using ints vs. floats vs. doubles depends on several layers:

    1. The JIT or VM: these will convert the mathematical operations to the host machine's native instruction set and that translation can have a large impact on performance. Since the underlying hardware can vary dramatically from platform to platform, it can be very difficult to write a VM or JIT that will emit optimal code in all cases. It is probably still best to use the JIT/VM's recommended fast type (in this case, integers) because, as the JITs and VMs get better at emitting more efficient native instructions, your high-level code should get the associated performance boosts without any modification.

    2. The native hardware (why the first level isn't perfect): most processors nowadays have hardware floating-point units (which support floats and doubles). If such a unit is present, floats/doubles can be faster than integers, unless there is also hardware integer support. Compounding the issue, most CPUs have some form of SIMD (Single Instruction Multiple Data) support that allows operations to be vectorized when the data types are small enough (e.g. adding four floats in one instruction by packing two into each register, instead of using one whole register for each of four doubles). This can allow data types that use fewer bits to be processed much faster than doubles, at the expense of precision.

    Optimizing for speed requires detailed knowledge of both of these levels and how they interact. Even optimizing for memory use can be tricky because the VM can choose to represent your data in a larger footprint for other reasons: a float may occupy 8 bytes in the VM's code, though that is less likely. All of this makes optimization almost the antithesis of portability. So here again, it is better to use the VM's recommended "fast" data type because that should result in the best performance averaged across supported devices.

    This is not a bad question at all, even on desktops. Yes they are very fast today, but if you are implementing a complicated algorithm (for example, the fast Fourier transform), even small optimizations can have an enormous impact on the algorithm's run time. In any case, the answer to your question "which is faster: floats or doubles" is "it depends" :)
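    A crude way to see that "it depends" in practice is a naive timing sketch (illustrative only — the class name and array size here are made up, and honest measurement needs a harness such as JMH to control JIT warm-up and dead-code elimination):

```java
// Naive timing sketch: sum a large array once as floats and once as doubles.
// Results vary by JIT, CPU, and warm-up order; treat the numbers as rough.
public class FloatVsDouble {
    public static void main(String[] args) {
        int n = 10_000_000;
        float[] f = new float[n];
        double[] d = new double[n];
        for (int i = 0; i < n; i++) { f[i] = i * 0.5f; d[i] = i * 0.5; }

        long t0 = System.nanoTime();
        float fs = 0f;
        for (float v : f) fs += v;          // float accumulation
        long t1 = System.nanoTime();
        double ds = 0.0;
        for (double v : d) ds += v;         // double accumulation
        long t2 = System.nanoTime();

        System.out.printf("float sum:  %g in %d ms%n", fs, (t1 - t0) / 1_000_000);
        System.out.printf("double sum: %g in %d ms%n", ds, (t2 - t1) / 1_000_000);
    }
}
```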

  • 2020-12-29 21:32

    http://developer.android.com/training/articles/perf-tips.html#AvoidFloat

    Avoid Using Floating-Point

    As a rule of thumb, floating-point is about 2x slower than integer on Android-powered devices.

    In speed terms, there's no difference between float and double on the more modern hardware. Space-wise, double is 2x larger. As with desktop machines, assuming space isn't an issue, you should prefer double to float.

    Also, even for integers, some processors have hardware multiply but lack hardware divide. In such cases, integer division and modulus operations are performed in software—something to think about if you're designing a hash table or doing lots of math.
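    One common way around a missing hardware divide, when you control the divisor, is to use a power-of-two table size so the modulus becomes a bitwise AND (a sketch — the class name and size here are made up for illustration):

```java
// Sketch: avoiding integer division/modulus in a hash-table bucket lookup.
// When the table size is a power of two, (hash mod size) equals
// (hash & (size - 1)), which needs no divide instruction at all.
public class BucketDemo {
    public static void main(String[] args) {
        int size = 16;                        // must be a power of two
        int hash = "example".hashCode();
        int slow = Math.floorMod(hash, size); // general modulus (handles negative hashes)
        int fast = hash & (size - 1);         // same result for power-of-two sizes
        System.out.println(slow + " == " + fast);
    }
}
```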
