floating-accuracy

Why does this happen when rounding floating-point numbers in Python?

梦想的初衷 submitted on 2020-01-14 06:31:09
Question: I am looking into rounding floating-point numbers in Python, and the following behavior seems quite strange.

Code:

    a = 203.25
    print '%.2f' % (a/10.)
    print '%.2f' % (round(a/10., 2))
    print '%.2f' % (0.1*a)

Output:

    20.32
    20.32
    20.33

Why do the first and especially the second case fail?

Answer 1: http://en.wikipedia.org/wiki/Rounding#Round_half_to_even

Round half to even. A tie-breaking rule that is less biased is round half to even, namely: if the fraction of y is 0.5, then q is the even integer nearest to y.
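
Neither rounding step ever sees an exact 20.325: a/10. is a double slightly below 20.325, while 0.1*a lands slightly above it. A minimal Python sketch (using decimal.Decimal only to display the exact stored doubles; Python 3 print syntax, unlike the question's Python 2 code):

    from decimal import Decimal

    a = 203.25
    print(Decimal(a / 10.))   # slightly below 20.325, so '%.2f' shows 20.32
    print(Decimal(0.1 * a))   # slightly above 20.325, so '%.2f' shows 20.33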

Floating point equivalence?

拈花ヽ惹草 submitted on 2020-01-14 03:11:31
Question: I want to be able to compare two doubles while disregarding a possible precision loss. Is there a method that already handles this case? If not, is there a threshold/guideline for how close two doubles must be to count as equivalent?

Answer 1: The threshold is completely dependent on the problem itself. For some problems you might consider 1.001 equal to 1.002, and for other problems you might need a much smaller threshold. The general technique is:

    Math.Abs(a - b) < some_epsilon // `a` is roughly equal to `b`
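
The question's code is .NET, but the same absolute/relative-tolerance idea is available out of the box in Python as math.isclose. A sketch (the tolerances here are arbitrary, chosen only for illustration):

    import math

    a, b = 1.001, 1.002
    print(math.isclose(a, b, rel_tol=1e-2))        # True: within 1% of each other
    print(math.isclose(a, b, rel_tol=1e-9))        # False: a much tighter tolerance
    print(math.isclose(0.0, 1e-12, abs_tol=1e-9))  # True: abs_tol handles comparisons near zero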

Why and how does Python truncate numerical data?

∥☆過路亽.° submitted on 2020-01-13 17:48:58
Question: I am dealing with two variables here, but I am confused because their values seem to change (they lose precision) when I want to send them as URL parameters as they are. Look at this scenario as I reproduce it here in the Python interpreter:

    >>> lat = 0.33245794180134
    >>> long = 32.57355093956
    >>> lat
    0.33245794180133997
    >>> long
    32.57355093956
    >>> nl = str(lat)
    >>> nl
    '0.332457941801'
    >>> nlo = str(long)
    >>> nlo
    '32.5735509396'

So what is happening? And how can I ensure that when I
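
The interpreter output matches older Python 2 behavior (before the shortest-repr change in 2.7/3.1), where str() rounds a float to 12 significant digits while repr() keeps enough digits to round-trip the stored value. One way to avoid the shortened form when building a URL is to format explicitly; a sketch, relying on the fact that 17 significant digits always round-trip an IEEE-754 double:

    lat = 0.33245794180134
    lng = 32.57355093956          # "long" shadows a Python 2 builtin, so renamed here

    print('%.17g' % lat)          # e.g. 0.33245794180133997 -- the exact stored double
    print('%.17g' % lng)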

Error due to limited precision of float and double

做~自己de王妃 submitted on 2020-01-13 17:46:27
Question: In C++, I use the following code to work out the order of magnitude of the error due to the limited precision of float and double:

    float n = 1;
    float dec = 1;
    while (n != (n - dec)) {
        dec = dec / 10;
    }
    cout << dec << endl;

(In the double case, all I do is exchange float with double in lines 1 and 2.)

Now when I compile and run this using g++ on a Unix system, the results are:

    Float  10^-8
    Double 10^-17

However, when I compile and run it using MinGW on Windows 7, the results are:

    Float  10^-20
    Double 10^-20
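
The loop keeps shrinking dec until subtracting it no longer changes n, so it is essentially probing the machine epsilon of the type. The divergent MinGW numbers are most likely an artifact of x87 80-bit extended precision being used for the intermediate n - dec (a common default on 32-bit MinGW); that is an assumption about the asker's toolchain, not something stated in the excerpt. Since Python floats are IEEE-754 doubles, the double case can be cross-checked with a short sketch:

    import sys

    n, dec = 1.0, 1.0
    while n != (n - dec):
        dec /= 10
    print(dec)                      # ~1e-17 with IEEE-754 doubles
    print(sys.float_info.epsilon)   # 2.220446049250313e-16, the conventional machine epsilon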

Java more precision in arithmetic

泄露秘密 submitted on 2020-01-13 03:44:06
Question: I am building a web app in Java that does math and shows the steps to the user. When doing basic arithmetic with decimals I often get messy, inaccurate outputs. Here is my problem:

    double a = 0.15;
    double b = 0.01;
    System.out.println(a - b); // outputs 0.13999999999999999

    float a = 0.15f;
    float b = 0.01f;
    System.out.println(a - b); // outputs 0.14

    float a = 0.16f;
    float b = 0.01f;
    System.out.println(a - b); // outputs 0.14999999

    double a = 0.16;
    double b = 0.01;
    System.out.println(a - b); //
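
None of 0.15, 0.01 or 0.16 has an exact binary representation, so the subtraction operates on slightly different values and printing exposes the error. The usual remedy when exact decimal steps must be shown is decimal arithmetic (java.math.BigDecimal on the Java side); the same idea in Python, as a sketch:

    from decimal import Decimal

    print(0.15 - 0.01)                        # 0.13999999999999999 (binary doubles)
    print(Decimal('0.15') - Decimal('0.01'))  # 0.14 exactly, because the operands
                                              # are constructed from decimal strings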

Is integer division always equal to the floor of regular division?

南笙酒味 submitted on 2020-01-10 20:25:10
Question: For large quotients, integer division (//) doesn't seem to be necessarily equal to the floor of regular division (math.floor(a/b)). According to the Python docs (https://docs.python.org/3/reference/expressions.html - 6.7), floor division of integers results in an integer; the result is that of mathematical division with the ‘floor’ function applied to the result. However:

    math.floor(648705536316023400 / 7) = 92672219473717632
    648705536316023400 // 7            = 92672219473717628
    '{0:.10f}'.format
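
The discrepancy comes from / producing a float: both 648705536316023400 and the true quotient are larger than 2^53, so they cannot be held exactly in a 64-bit double, whereas // is carried out entirely in Python's arbitrary-precision integers. A small sketch:

    import math

    a, b = 648705536316023400, 7
    print(a // b)              # 92672219473717628, computed with exact integers
    print(math.floor(a / b))   # 92672219473717632, because a / b is a rounded double
    print(float(a) == a)       # False: even a by itself does not fit exactly in a double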

Ruby Float#round method behaves incorrectly with round(2)

风流意气都作罢 submitted on 2020-01-10 05:43:28
Question: I learned that it's recommended to use BigDecimal instead of Float, but this one is either a bug or highlights the esoteric nature of Float. It seems that Float#round(2) has a problem with "1.015", "1.025" and "1.035":

    1.015.round(2)  => 1.01  # WRONG .. should be 1.02
    1.025.round(2)  => 1.02  # WRONG .. should be 1.03
    1.035.round(2)  => 1.03  # WRONG .. should be 1.04
    1.045.round(2)  => 1.05  # CORRECT
    1.016.round(2)  => 1.02  # CORRECT

round(3) works fine:

    1.0015.round(3) => 1.002 #
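
This is not specific to Ruby: none of these literals can be stored exactly as a binary double, and several of them end up just below the .5 midpoint, so rounding down is correct for the value actually held. A Python sketch that prints the exact stored doubles (Ruby and Python floats are the same IEEE-754 doubles):

    from decimal import Decimal

    for x in (1.015, 1.025, 1.035, 1.045):
        print(x, Decimal(x))
    # 1.015 is stored as 1.0149999999999999..., so rounding to 2 places gives 1.01;
    # 1.045 is stored slightly above 1.045, so it rounds up to 1.05.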

IEEE-754 floating-point precision: How much error is allowed?

*爱你&永不变心* submitted on 2020-01-09 08:01:30
Question: I'm working on porting the sqrt function (for 64-bit doubles) from fdlibm to a model-checker tool I'm using at the moment (cbmc). As part of this work, I read a lot about the IEEE-754 standard, but I don't think I understood its precision guarantees for the basic operations (including sqrt). Testing my port of fdlibm's sqrt, I got the following calculation with sqrt on a 64-bit double: sqrt
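
For reference, IEEE-754 requires the basic operations (+, -, *, /, sqrt) to be correctly rounded: the returned double must be the representable value closest to the exact mathematical result, i.e. the error is at most 0.5 ulp. A Python sketch (not the fdlibm code) that cross-checks a platform sqrt against a high-precision reference:

    import math
    from decimal import Decimal, getcontext

    getcontext().prec = 60              # far more digits than a double can hold
    x = 2.0
    reference = Decimal(x).sqrt()       # near-exact square root of the exact input
    returned = Decimal(math.sqrt(x))    # the double the platform sqrt produced
    print(returned, abs(returned - reference))   # the gap is below 0.5 ulp, as required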
