ieee-754

Double precision - decimal places

拈花ヽ惹草 submitted on 2019-11-26 13:29:51
From what I have read, a value of data type double has an approximate precision of 15 decimal places. However, when I use a number whose decimal representation repeats, such as 1.0/7.0, I find that the variable holds the value 0.14285714285714285 - which is 17 places (via the debugger). I would like to know why it is represented as 17 places internally, and why the precision is always quoted as ~15 places. Answer 1: An IEEE double has 53 significant bits (that's the value of DBL_MANT_DIG in <cfloat>). That's approximately 15.95 decimal digits (log10(2^53)); the implementation sets DBL_DIG to 15, not
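A quick way to see these constants and the 15-vs-17-digit behaviour in practice - a sketch in an interactive Python session, on the assumption that the platform's double (and Python's float) is IEEE-754 binary64, where sys.float_info.mant_dig and sys.float_info.dig mirror DBL_MANT_DIG and DBL_DIG:

>>> import sys
>>> sys.float_info.mant_dig     # 53 significant bits, the same quantity as DBL_MANT_DIG
53
>>> sys.float_info.dig          # decimal digits guaranteed to survive a text round trip (DBL_DIG)
15
>>> '%.17g' % (1.0 / 7.0)       # 17 digits are needed to pin down the exact stored double
'0.14285714285714285'
>>> '%.15g' % (1.0 / 7.0)       # 15 digits is what the type guarantees to preserve
'0.142857142857143'

The debugger shows 17 places because 17 significant digits are always enough to uniquely identify a binary64 value, while only 15 decimal digits are guaranteed to round-trip from text into a double and back.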

Is SSE floating-point arithmetic reproducible?

十年热恋 submitted on 2019-11-26 12:44:59
Question: The x87 FPU is notable for using an internal 80-bit precision mode, which often leads to unexpected and unreproducible results across compilers and machines. In my search for reproducible floating-point math on .NET, I discovered that both major implementations of .NET (Microsoft's and Mono) emit SSE instructions rather than x87 in 64-bit mode. SSE(2) performs 32-bit float operations strictly at 32-bit precision and 64-bit float operations strictly at 64-bit precision. Denormals can optionally be flushed to zero

How computer does floating point arithmetic?

牧云@^-^@ submitted on 2019-11-26 12:43:57
Question: I have seen long articles explaining how floating point numbers are stored and how arithmetic on them is done, but please briefly explain why, when I write cout << 1.0 / 3.0 << endl; I see 0.333333, but when I write cout << 1.0 / 3.0 + 1.0 / 3.0 + 1.0 / 3.0 << endl; I see 1. How does the computer do this? Please explain just this simple example; it is enough for me. Answer 1: The problem is that the floating point format represents fractions in base 2. The first fraction bit
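To see what is going on numerically - a sketch in Python, on the assumption that its float is the same IEEE-754 double a typical C++ double uses - print the intermediate values with more digits than the default. 1.0/3.0 rounds to a value slightly below one third, but the additions happen to round the running sum up to exactly 1.0, and cout's default 6-significant-digit formatting would hide the tiny error in any case:

>>> x = 1.0 / 3.0
>>> '%.17g' % x        # nearest double to 1/3, slightly too small
'0.33333333333333331'
>>> s = x + x + x
>>> '%.17g' % s        # the final addition rounds the sum to exactly 1
'1'
>>> '%.6g' % x         # what cout prints by default (6 significant digits)
'0.333333'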

Why does the floating-point value of 4*0.1 look nice in Python 3 but 3*0.1 doesn't?

时光怂恿深爱的人放手 submitted on 2019-11-26 11:55:37
Question: I know that most decimals don't have an exact floating point representation (Is floating point math broken?). But I don't see why 4*0.1 is printed nicely as 0.4, but 3*0.1 isn't, when both values actually have ugly decimal representations:

>>> 3*0.1
0.30000000000000004
>>> 4*0.1
0.4
>>> from decimal import Decimal
>>> Decimal(3*0.1)
Decimal('0.3000000000000000444089209850062616169452667236328125')
>>> Decimal(4*0.1)
Decimal('0.40000000000000002220446049250313080847263336181640625')
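A compact way to see the difference - a sketch, assuming CPython on an IEEE-754 platform: 4*0.1 happens to round to exactly the same double as the literal 0.4, so the shortest repr that round-trips is the friendly '0.4', whereas 3*0.1 lands one unit in the last place away from the double nearest 0.3, so repr must show enough digits to distinguish the two:

>>> (0.4).hex(), (4*0.1).hex()     # identical bit patterns
('0x1.999999999999ap-2', '0x1.999999999999ap-2')
>>> (0.3).hex(), (3*0.1).hex()     # differ by one unit in the last place
('0x1.3333333333333p-2', '0x1.3333333333334p-2')
>>> 4*0.1 == 0.4, 3*0.1 == 0.3
(True, False)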

Python float - str - float weirdness

喜夏-厌秋 submitted on 2019-11-26 11:53:49
>>> float(str(0.65000000000000002))
0.65000000000000002
>>> float(str(0.47000000000000003))
0.46999999999999997
??? What is going on here? How do I convert 0.47000000000000003 to a string and the resulting value back to a float? I am using Python 2.5.4 on Windows. Answer 1: str(0.47000000000000003) gives '0.47' and float('0.47') can be 0.46999999999999997. This is due to the way floating point numbers are represented (see this Wikipedia article). Note: float(repr(0.47000000000000003)) or eval(repr(0.47000000000000003)) will give you the expected result, but you should use Decimal if you need precision. float
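A minimal sketch of the repr-based round trip that the answer recommends, written against Python 2.x behaviour, where str() keeps only about 12 significant digits while repr() keeps 17 (in Python 3 both already produce the shortest string that round-trips):

>>> str(0.47000000000000003)          # str() rounds to ~12 significant digits, losing information
'0.47'
>>> repr(0.47000000000000003)         # repr() keeps enough digits to identify the exact double
'0.47000000000000003'
>>> float(repr(0.47000000000000003))  # so repr -> float recovers the original value
0.47000000000000003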

Read/Write bytes of float in JS

与世无争的帅哥 submitted on 2019-11-26 11:08:19
Question: Is there any way I can read the bytes of a float value in JS? What I need is to write a raw FLOAT or DOUBLE value into some binary format I have to produce, so is there any way to get a byte-by-byte IEEE 754 representation? And the same question for writing, of course. Answer 1: Would this snippet help? @Kevin Gadd:

var parser = new BinaryParser
   ,forty = parser.encodeFloat(40.0,2,8)
   ,twenty = parser.encodeFloat(20.0,2,8);
console.log(parser.decodeFloat(forty,2,8).toFixed(1)); //=> 40.0
console.log(parser
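The underlying task is just producing the 4- or 8-byte IEEE-754 encoding of a number (in current JavaScript this is usually done with a DataView or a Float64Array over an ArrayBuffer rather than a helper library). As a sketch of what those bytes look like, shown here in Python with the standard struct module purely to illustrate the target layout:

>>> import struct
>>> struct.pack('<d', 40.0)         # 8-byte little-endian IEEE-754 double
b'\x00\x00\x00\x00\x00\x00D@'
>>> struct.pack('<f', 40.0)         # 4-byte single-precision encoding of the same number
b'\x00\x00 B'
>>> struct.unpack('<d', struct.pack('<d', 40.0))[0]   # and decoding goes the other way
40.0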

Ranges of floating point datatype in C?

荒凉一梦 submitted on 2019-11-26 11:03:00
Question: I am reading a C book, talking about ranges of floating point, and the author gives this table:

Type     Smallest Positive Value    Largest Value        Precision
====     =======================    =============        =========
float    1.17549 x 10^-38           3.40282 x 10^38      6 digits
double   2.22507 x 10^-308          1.79769 x 10^308     15 digits

I don't know where the numbers in the Smallest Positive Value and Largest Value columns come from. Answer 1: These numbers come from the IEEE-754 standard, which defines the standard representation of floating
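Those limits follow from the IEEE-754 single and double formats: the largest finite value is (2 - 2^-(p-1)) x 2^Emax and the smallest positive normal value is 2^Emin, with p = 24, Emax = 127, Emin = -126 for float and p = 53, Emax = 1023, Emin = -1022 for double. A quick sanity check (a sketch in Python, assuming the usual IEEE-754 mapping of float/double):

>>> (2 - 2**-23) * 2.0**127      # largest finite single:  3.40282 x 10^38
3.4028234663852886e+38
>>> 2.0**-126                    # smallest positive normal single: 1.17549 x 10^-38
1.1754943508222875e-38
>>> (2 - 2**-52) * 2.0**1023     # largest finite double:  1.79769 x 10^308
1.7976931348623157e+308
>>> 2.0**-1022                   # smallest positive normal double: 2.22507 x 10^-308
2.2250738585072014e-308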

Double vs float on the iPhone

不想你离开。 submitted on 2019-11-26 10:17:59
Question: I have just heard that the iPhone cannot do doubles natively, thereby making them much slower than regular floats. Is this true? Evidence? I am very interested in the issue because my program needs high-precision calculations, and I will have to compromise on speed. Answer 1: The iPhone can do both single- and double-precision arithmetic in hardware. On the 1176 (original iPhone and iPhone 3G), they operate at approximately the same speed, though you can fit more single-precision data in the caches. On

Why does converting from float to double change the value?

☆樱花仙子☆ submitted on 2019-11-26 08:53:05
Question: I've been trying to find out the reason, but I couldn't. Can anybody help me? Look at the following example.

float f = 125.32f;
System.out.println("value of f = " + f);
double d = (double) 125.32f;
System.out.println("value of d = " + d);

This is the output:
value of f = 125.32
value of d = 125.31999969482422
Answer 1: The value of a float does not change when converted to a double. There is a difference in the displayed numerals because more digits are required to distinguish a double
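The same effect can be reproduced outside Java (a sketch in Python using the struct module, assuming standard IEEE-754 float/double): rounding 125.32 to single precision and then widening it back to a double does not change the stored value; the longer decimal appears only because more digits are needed to pin the value down among doubles than among floats:

>>> import struct
>>> f = struct.unpack('<f', struct.pack('<f', 125.32))[0]   # 125.32 rounded to single precision
>>> f                          # the same value, viewed as (widened to) a double
125.31999969482422
>>> '%.17g' % 125.32           # for comparison, the double nearest the literal 125.32
'125.31999999999999'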

Why are floating point numbers printed so differently?

家住魔仙堡 submitted on 2019-11-26 07:48:08
Question: It's kind of common knowledge that (most) floating point numbers are not stored precisely (when the IEEE-754 format is used). So one shouldn't do this: 0.3 - 0.2 === 0.1; // very wrong ... as it will result in false, unless some specific arbitrary-precision type/class is used instead (BigDecimal in Java/Ruby, BCMath in PHP, Math::BigInt/Math::BigFloat in Perl, to name a few). Yet I wonder why, when one tries to print the result of this expression, 0.3 - 0.2, scripting languages (Perl and
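One standard explanation, sketched here in Python (assuming IEEE-754 doubles): the stored result of 0.3 - 0.2 is genuinely not equal to the stored 0.1, but default output in many scripting languages rounds to roughly 15 significant digits (Perl's default numeric stringification behaves like %.15g), and at that precision both values happen to print as the same string "0.1":

>>> 0.3 - 0.2               # repr shows the shortest decimal that round-trips to this exact double
0.09999999999999998
>>> 0.3 - 0.2 == 0.1        # the comparison uses the exact stored values
False
>>> '%.15g' % (0.3 - 0.2)   # roughly what Perl's default formatting does
'0.1'
>>> '%.15g' % 0.1           # so both values print as the same 15-digit string
'0.1'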