floating-point-precision

.format() raises ValueError when using {0:g} to remove trailing zeros

会有一股神秘感。 submitted on 2019-11-30 13:49:40
I'm trying to generate a string that involves an occasional float with trailing zeros. This is an MWE of the text string and my attempt at removing them with {0:g}:

    xn, cod = 'r', 'abc'
    ccl = [546.3500, 6785.35416]
    ect = [12.350, 13.643241]
    text = '${}_{{t}} = {0:g} \pm {0:g}\;{}$'.format(xn, ccl[0], ect[0], cod)
    print text

Unfortunately this raises:

    ValueError: cannot switch from automatic field numbering to manual field specification

The question "Using .format() to format a list with field width arguments" reports the same issue, but I can't figure out how to apply the answer given there.
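A minimal sketch of one fix, assuming the intent is simply to feed the four arguments in order: Python does not allow mixing automatic {} fields with manually numbered {0:g} fields in one format string, so number every field explicitly (shown here in Python 3 syntax):

    # Number every field; mixing automatic {} with manual {0:g} raises ValueError.
    xn, cod = 'r', 'abc'
    ccl = [546.3500, 6785.35416]
    ect = [12.350, 13.643241]
    text = r'${0}_{{t}} = {1:g} \pm {2:g}\;{3}$'.format(xn, ccl[0], ect[0], cod)
    print(text)  # $r_{t} = 546.35 \pm 12.35\;abc$

The :g specifier then drops the trailing zeros of 546.3500 and 12.350 as intended.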

Is floating point precision mutable or invariant?

喜欢而已 submitted on 2019-11-30 02:43:48
I keep getting mixed answers about whether floating point numbers (i.e. float, double, or long double) have one and only one value of precision, or have a precision value that can vary. One topic, "float vs. double precision", seems to imply that floating point precision is an absolute. However, another topic, "Difference between float and double", says:

    In general a double has 15 to 16 decimal digits of precision

Another source says:

    Variables of type float typically have a precision of about 7 significant digits
    Variables of type double typically have a precision of about 16 significant digits
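Both views can be reconciled: the number of significand bits is fixed for a given type, while the equivalent decimal precision varies with the magnitude of the value. A minimal sketch (Python 3.9+ for math.ulp; Python floats are IEEE 754 doubles) showing that the spacing between adjacent representable values grows with magnitude:

    import math

    # A double always carries 53 significand bits, but the gap (ULP) between
    # adjacent representable values scales with the value itself.
    for x in (1.0, 1000.0, 1e15):
        print(x, math.ulp(x))
    # 1.0     2.220446049250313e-16
    # 1000.0  1.1368683772161603e-13
    # 1e15    0.125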

How to specify floating point decimal precision from a variable?

二次信任 submitted on 2019-11-30 01:58:11
I have the following simple code repeated several times that I would like to turn into a function:

    for i in range(10):
        id = "some id string looked up in dict"
        val = 63.4568900932840928  # some floating point number in dict corresponding to "id"
        tabStr += '%-15s = %6.1f\n' % (id, val)

I want to be able to call this function as:

    def printStr(precision)

where it performs the code above and returns tabStr with val to precision decimal places. For example, printStr(3) would return 63.457 for val.
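A sketch of one way to do it, with the dict lookups stood in by a hypothetical literal: printf-style formatting accepts * for the precision, so it can come straight from the function argument:

    def printStr(precision):
        # Hypothetical stand-in for the real dict lookups in the original loop.
        entries = {'some id': 63.4568900932840928}
        tabStr = ''
        for id, val in entries.items():
            # %.*f reads the precision from the argument tuple at run time.
            tabStr += '%-15s = %6.*f\n' % (id, precision, val)
        return tabStr

    print(printStr(3))  # some id         = 63.457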

epsilon for various float values

一世执手 submitted on 2019-11-29 18:13:55
There is the FLT_MIN constant, which is the value nearest to zero. How do I get the representable value nearest to some given number? As an example:

    float nearest_to_1000 = 1000.0f + epsilon;
    // epsilon must be the smallest value satisfying the condition:
    // nearest_to_1000 > 1000.0f

I would prefer a numeric formula without using special functions.

Eric Postpischil: Caution: bugs were found in this code while working on another answer. I hope to update this later. In the meantime, it fails for some values involving subnormals. C provides a function for this in the <math.h> header: nextafterf(x, INFINITY) is the next representable value above x.
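A sketch of the same facility the answer points to, shown in Python 3.9+ rather than C since math.nextafter wraps the corresponding libm routine (note that Python floats are doubles, not single-precision floats):

    import math

    x = 1000.0
    nearest_above = math.nextafter(x, math.inf)  # next representable value above x
    epsilon = nearest_above - x                  # spacing (ULP) just above x
    print(nearest_above > x, epsilon)            # True 1.1368683772161603e-13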

Why does 0.06 + 0.01 = 0.07 in ColdFusion?

拥有回忆 submitted on 2019-11-29 13:07:23
Why don't math operations in ColdFusion seem to be affected by floating-point math issues? Take the code:

    result = 0.06 + 0.01;
    writedump(result);
    writedump(result.getClass().getName());

which outputs:

    0.07
    java.lang.Double

However, the equivalent Java code produces what I'd expect when adding two doubles:

    public static void main(String[] args) {
        double a = 0.01d;
        double b = 0.06d;
        System.out.println(a + b); // 0.06999999999999999
    }

This is what I'd expect to see from ColdFusion because of the realities of floating-point math (http://download.oracle.com/docs/cd/E19957-01/806-3568/ncg_goldberg.html).
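The observation can be reproduced outside ColdFusion: the raw double sum is not exactly 0.07, but rounding to fewer significant digits on output hides the error, which appears to be what ColdFusion's display layer does. A minimal sketch in Python:

    from decimal import Decimal

    s = 0.06 + 0.01
    print(repr(s))      # 0.06999999999999999 -- the Java-style result
    print(Decimal(s))   # the exact stored double, a long binary tail
    print(f'{s:.10g}')  # 0.07 -- rounding on display hides the error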

sizeof long double and precision not matching?

大城市里の小女人 submitted on 2019-11-29 10:23:29
Consider the following C code:

    #include <stdio.h>

    int main(int argc, char* argv[])
    {
        const long double ld = 0.12345678901234567890123456789012345L;
        printf("%lu %.36Lf\n", sizeof(ld), ld);
        return 0;
    }

Compiled with gcc 4.8.1 under Ubuntu x64 13.04, it prints:

    16 0.123456789012345678901321800735590983

which tells me that a long double occupies 16 bytes, but the decimals seem to be correct only to about the 20th place. How is that possible? 16 bytes corresponds to a quad, and a quad would give me between 33 and 36 decimal digits.

The long double format in your C implementation uses an Intel format with a one-bit integer part and a 63-bit fraction (64 significand bits in total); the remaining bytes are padding for alignment.
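The observed ~20 good digits follow directly from the 64-bit significand: n significand bits carry about n * log10(2) decimal digits. A quick arithmetic check, sketched in Python:

    import math

    for name, bits in [('IEEE double', 53), ('x87 extended', 64), ('IEEE quad', 113)]:
        print(f'{name}: ~{bits * math.log10(2):.1f} decimal digits')
    # IEEE double: ~16.0, x87 extended: ~19.3, IEEE quad: ~34.0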

PHP - Getting a float variable's internal value

自闭症网瘾萝莉.ら submitted on 2019-11-29 08:43:29
I am trying to establish the delta I need when doing float comparison in PHP. I want to take a closer look at my variables to see the difference. I have two computed variables, $a and $b:

    $a = some_function();
    $b = some_other_function();

How can I see the exact number which PHP uses? I want to compare them with this formula, where I need to specify the delta:

    $delta = 0.00001;
    if (abs($a - $b) < $delta) {
        echo "identical";
    }

var_dump($a, $b) returns 1.6215; 1.6215, but I know that they are not exactly equal, because var_dump($a === $b) evaluates to false. Why doesn't var_dump() print the internal value?
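One way to see what is actually stored, sketched in Python (Decimal(x) prints the exact double); in PHP the analogous tricks would be printf('%.17g', $a) or raising the serialize_precision ini setting, though treat those as pointers rather than a tested recipe:

    import math
    from decimal import Decimal

    a = 1.6215
    b = math.nextafter(a, 0.0)       # adjacent double just below a

    print(f'{a:.14g}', f'{b:.14g}')  # 1.6215 1.6215 -- at 14 digits (PHP's
                                     # default precision) they look identical
    print(a == b)                    # False
    print(Decimal(a))                # exact stored value of a
    print(Decimal(b))                # exact stored value of b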

Precision of repr(f), str(f), print(f) when f is float

こ雲淡風輕ζ submitted on 2019-11-29 07:41:19
If I run:

    >>> import math
    >>> print(math.pi)
    3.141592653589793

then pi is printed with 16 digits. However, according to:

    >>> import sys
    >>> sys.float_info.dig
    15

my precision is 15 digits. So, should I rely on the last digit of that value (i.e. can I trust that the value of π is indeed 3.141592653589793nnnnnn)?

TL;DR: The last digit of str(float) or repr(float) can be "wrong" in that it seems the decimal representation is not correctly rounded:

    >>> 0.100000000000000040123456
    0.10000000000000003

But that value is still closer to the original than 0.1000000000000000 (with one digit less). In the case of
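For intuition: in Python 3, repr prints the shortest decimal string that round-trips to the same double, which is why the digit count varies and why the last digit need not match the correctly rounded fixed-length decimal. A short check:

    import math
    from decimal import Decimal

    print(repr(math.pi))     # 3.141592653589793 -- shortest round-tripping string
    print(Decimal(math.pi))  # exact stored value: 3.14159265358979311599796...
    assert float(repr(math.pi)) == math.pi  # the round-trip guarantee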

Why are my BigDecimal objects initialized with unexpected rounding errors?

落爺英雄遲暮 submitted on 2019-11-28 23:14:30
In Ruby 2.2.0, why does BigDecimal.new(34.13985572755337, 9) equal 34.0, but BigDecimal.new(34.13985572755338, 9) equal 34.1398557? Note that I am running this on a 64-bit machine.

Initialize with Strings Instead of Floats

In general, you can't get reliable behavior with Floats. You're making the mistake of initializing your BigDecimals with Float values instead of String values, which introduces some imprecision right at the beginning. For example, on my 64-bit system:

    float1 = 34.13985572755337
    float2 = 34.13985572755338
    # You can use string literals here, too, if your Float can't be
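The same pitfall and fix can be seen with Python's decimal module, used here only to illustrate the principle the answer states (construct from strings, not floats):

    from decimal import Decimal

    # Constructing from a float captures the float's binary rounding error:
    print(Decimal(34.13985572755337))    # exact double -- not quite what was typed

    # Constructing from a string represents exactly what was written:
    print(Decimal('34.13985572755337'))  # 34.13985572755337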