When dealing with double data types, is multiplying by the inverse better or worse than dividing? Which way is faster? Which way uses fewer resources?
The benefit will be very small or zero, depending on both the compiler and the hardware. But it could still matter (in a tight loop), and then for readability you should write

SquareInches = MMSquared * (1 / 645.16)

Any decent compiler will fold (1 / 645.16) into a single constant at compile time, so you get the multiply while keeping the conversion factor recognizable. And preferably use a named constant for 645.16.
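For instance, as a minimal C++ sketch of that advice (the constant and function names here are mine, not from the question):

// Illustrative names; 645.16 mm^2 per in^2 is exact, since 25.4 mm per inch is exact.
const double MM2_PER_SQ_INCH = 645.16;
const double SQ_INCH_PER_MM2 = 1.0 / MM2_PER_SQ_INCH;  // reciprocal computed once

double toSquareInches(double mmSquared) {
    return mmSquared * SQ_INCH_PER_MM2;  // multiply in the hot path, no division
}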
Multiplies and adds are the fastest operations the processor supports. Some processors don't even have hardware implementations of things like division, square root, etc.
If you're dividing by a literal value like 645.16, then it's very likely that there is no difference, because the compiler can easily determine which version is faster and use that. If you're dividing or multiplying by a variable, then multiplication is likely to be slightly faster, because the logic behind it is simpler.
As with anything, to be sure, use a profiler.
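If you want a quick sanity check before (or alongside) a real profiler, a rough C++ timing sketch along these lines can do; the loop count and names are my own:

#include <chrono>
#include <cstdio>

int main() {
    const int N = 100000000;
    double divSum = 0.0, mulSum = 0.0;

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 1; i <= N; ++i) divSum += i / 645.16;        // division in the loop
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 1; i <= N; ++i) mulSum += i * (1.0 / 645.16); // constant-folded multiply
    auto t2 = std::chrono::steady_clock::now();

    // Printing the sums keeps the optimizer from deleting the loops.
    std::printf("divide:   %lld ms (sum %g)\n",
        (long long)std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count(), divSum);
    std::printf("multiply: %lld ms (sum %g)\n",
        (long long)std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count(), mulSum);
}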
Multiplying by the inverse is faster. Compilers don't optimize this automatically because it can result in a small loss of precision. (This actually came up on a D newsgroup Walter Bright frequents, and he made it clear that compilers do not do this automatically.) You should normally divide because it is more readable and accurate.
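To make the precision point concrete, here is a small C++ illustration of my own; on an IEEE-754 system the two results can differ in the last bit, because the inverse route rounds twice (once for 1/y, once for the multiply), while the division rounds once:

#include <cstdio>

int main() {
    double x = 10.0, y = 3.0;
    double quotient   = x / y;          // one correctly rounded division
    double viaInverse = x * (1.0 / y);  // two roundings
    std::printf("%.17g\n%.17g\n", quotient, viaInverse);
}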
If you are executing a piece of floating point code billions of times in a loop, you don't care about a small loss of precision, and you will be dividing by the same number several times, then multiplying by the inverse can be a good optimization. I have actually gotten significant real-world speedups in a few cases like this by multiplying by the inverse, but they were extreme edge cases: loops executed several billion times that did almost nothing but multiply floats.
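The shape of that optimization, as a C++ sketch (the function and names are illustrative):

#include <cstddef>

// Hoist the reciprocal out of the loop: one division total instead of one per element.
void scaleAll(double* data, std::size_t n, double divisor) {
    const double inv = 1.0 / divisor;  // pay for the division once
    for (std::size_t i = 0; i < n; ++i)
        data[i] *= inv;                // cheap multiply in the hot loop
}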
With most processors, it's faster to multiply than divide. But it's really insignificant for most applications; in my opinion, you're better off with whichever is more readable, unless profiling shows it's on a critical path.
If it's an interpreted language, the time spent reading the source and converting it to numbers will overwhelm the time taken to actually do the math, especially with that many significant digits in the multiplier. (Are you sure you really need that many significant digits?)
Division algorithms are slower than multiplication algorithms in most cases.
It's a tradeoff: you can choose either the more readable way or the faster way.
// Using division operation
SquareInches = MMSquared / 645.16
This is easy to read and maintain, but it runs slower than its multiplicative counterpart:
// Using a multiplication
SquareInches = MMSquared * 0.0015500031000062000124000248000496
If you go this way, the long run of digits hurts readability, and everything beyond the first 17 or so significant digits is thrown away anyway, since that's all a double can hold; but the code runs substantially faster. A user tested it on a VS2005 project and reported 8X faster performance for the multiplicative version.
The reason is that a multiplication can be converted directly into shift and add operations, which are the most optimized operations on a CPU. A good algorithm for signed multiplication is Booth's algorithm (the processor does this for you). Performing a division, on the other hand, requires more control overhead, which renders division algorithms slower.
If performance is what you need, use additions, subtractions (nothing more than adding the two's complement), multiplications, and shifts, but never divisions. In a division-intensive program you'd get a non-negligible improvement by computing all your inverses in advance and multiplying by them instead.
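A C++ sketch of that last suggestion, assuming the divisors are known up front (the helper name is mine):

#include <vector>

// Pay for each division once, ahead of time.
std::vector<double> precomputeInverses(const std::vector<double>& divisors) {
    std::vector<double> inv;
    inv.reserve(divisors.size());
    for (double d : divisors)
        inv.push_back(1.0 / d);
    return inv;
}

// Later, every "x / divisors[i]" in the hot loop becomes "x * inv[i]".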