precision

How to keep float/double arithmetic deterministic?

Submitted by 落花浮王杯 on 2021-01-27 14:34:55

Question: If we use algorithms with double and float arithmetic, how can we guarantee that the results are the same running them in Python and C, on x86 and x64 Linux and Windows computers, and on ARM microcontrollers? We are using an algorithm that performs:

    double + double
    double + float
    double exp(double)
    float * float

On the same computer, compiling it with MinGW for x86 and for x64 gives different results. The algorithm does a lot of math, so any small error makes a difference in the end. Right now the ARM mcu …
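
The usual diagnosis: the basic IEEE 754 operations (+, -, *, /) are correctly rounded and therefore reproducible across compliant platforms, while library functions such as exp() are not, so they can differ between math libraries and between x86 builds (which may use 80-bit x87 intermediates) and x64 builds. A minimal Python sketch for comparing results bit-for-bit rather than by printed value; the inputs are arbitrary illustrative numbers:

    import math
    import struct

    def bits(x: float) -> str:
        # Exact 64-bit pattern of a double, as hex.
        return struct.pack("<d", x).hex()

    # Correctly rounded basic operation: should match on any
    # IEEE 754 compliant platform.
    print(bits(0.1 + 0.2))

    # exp() comes from the platform math library and is usually not
    # correctly rounded, so this may differ from system to system.
    print(bits(math.exp(1.5)))

Comparing these hex strings across machines pinpoints which operation first diverges.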

R: How to convert long number to string to save precision

Submitted by 别来无恙 on 2021-01-27 11:59:45

Question: I have a problem converting a long number to a string in R. How can I easily convert a number to a string while preserving precision? I have a simple example below.

    a = -8664354335142704128
    toString(a)
    [1] "-8664354335142704128"
    b = -8664354335142703762
    toString(b)
    [1] "-8664354335142704128"
    a == b
    [1] TRUE

I expected toString(a) and toString(b) to differ, but I got the same value. I suppose toString() converts the number to a float or something like that before converting it to a string. Thank you for your help.
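
What the example shows is that R parses these literals as 64-bit doubles, which carry only 53 bits (roughly 15-16 decimal digits) of precision, so both 19-digit literals round to the same double before toString() ever runs. A Python sketch of the same effect; it works because Python integers are arbitrary precision while float is the same 64-bit type:

    a = -8664354335142704128
    b = -8664354335142703762

    print(a == b)                # False: exact integer comparison
    print(float(a) == float(b))  # True: both round to the same double
    print(str(a))                # strings built from the integers keep every digit
    print(str(b))

In R itself the usual fixes are to keep the value as a string from the start, or to hold it in an integer64 via the bit64 package.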

Python rounding error with simple sum

Submitted by 一个人想着一个人 on 2021-01-27 06:43:19

Question:

    >>> sum([0.3, 0.1, 0.2])
    0.6000000000000001
    >>> sum([0.3, 0.1, 0.2]) == 0.6
    False

What can I do to make the result be exactly 0.6? I don't want to round the result to a certain number of decimal digits, because then I could lose precision for other list instances.

Answer 1: Floats behave this way in pretty much every language, because decimal fractions such as 0.3 cannot be represented exactly in binary. If you need exact precision, use the Decimal class:

    from decimal import Decimal
    num1 = Decimal("0.3")
    num2 = …
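
The excerpt is cut off mid-example; a minimal completion of the Decimal approach it starts (my continuation, not necessarily the original answer's exact code):

    from decimal import Decimal

    # Building Decimals from strings avoids binary rounding entirely.
    nums = [Decimal("0.3"), Decimal("0.1"), Decimal("0.2")]
    total = sum(nums)

    print(total)                    # 0.6
    print(total == Decimal("0.6"))  # True

Note that the Decimals must be constructed from strings: Decimal(0.3) would inherit the float's rounding error, while Decimal("0.3") is exact.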

C - Printing out float values

Submitted by 老子叫甜甜 on 2021-01-27 04:54:55

Question: I have a C++ program that takes in values and prints them out like this:

    getline(in, number);
    cout << setw(10) << number << endl;

I have an equivalent C program that takes in values and prints them out like so:

    fscanf(rhs, "%e", &number);
    printf("%lf\n", number);

But while the C++ program prints 0.30951, the C program prints 0.309510. More examples: C++: 0.0956439, C: 0.095644. It seems to print the same results as long as the value is 7 digits long, but if it's shorter than 7 digits, it …
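
The visible difference is printf's default: with no explicit precision, %f/%lf always prints six digits after the decimal point, while the C++ version simply echoes the string it read with getline. Python's printf-style formatting inherits the same C defaults, so the behavior is easy to reproduce (a sketch using the question's sample value):

    x = 0.30951

    print("%f" % x)   # 0.309510 -> six digits after the point, like printf("%lf")
    print("%g" % x)   # 0.30951  -> six significant digits, trailing zeros trimmed
    print(repr(x))    # 0.30951  -> shortest string that round-trips to this double

In C, the equivalent of the %g line is printf("%g\n", number).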

Floating point accuracy with different languages

Submitted by 送分小仙女□ on 2021-01-26 19:15:35

Question: I'm currently doing distance calculations between coordinates and have been getting slightly different results depending on the language used. Part of the calculation is taking the cosine of a given radian value. For cos(0.8941658257446736) I get the following results:

    node:   0.6261694290123146
    rust:   0.6261694290123146
    go:     0.6261694290123148
    python: 0.6261694290123148
    swift:  0.6261694290123148
    c++:    0.6261694290123146
    java:   0.6261694290123146
    c:      0.6261694290123147

I would …
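
These values differ only in the last digit or two, i.e. by an ulp (unit in the last place) or so, which is the slack most math libraries allow themselves for transcendental functions like cos. A Python sketch to measure the spread (math.ulp requires Python 3.9+):

    import math

    results = [0.6261694290123146, 0.6261694290123147, 0.6261694290123148]
    base = results[0]
    ulp = math.ulp(base)  # spacing of adjacent doubles at this magnitude

    for r in results:
        # Subtraction of nearby doubles is exact, so this counts exactly
        # how many representable doubles apart each result is from the first.
        print((r - base) / ulp)

Disagreements of one or two ulps between libm implementations are normal; IEEE 754 guarantees correct rounding only for the basic arithmetic operations, not for cos and friends.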

LocalDateTime.now() has different levels of precision on Windows and Mac machine

Submitted by 一世执手 on 2021-01-19 14:11:32

Question: When creating a new LocalDateTime using LocalDateTime.now() on my Mac and Windows machines, I get six fractional-second digits on my Mac and three on my Windows machine. Both are running jdk-1.8.0-172. Is it possible to limit or increase the precision on one of the machines? And why is the precision actually different?

Answer 1: The precision is different because LocalDateTime.now() uses a system default Clock: "Obtains the current date-time from the system clock in the default time-zone."
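
On the Java side, the standard way to make both machines agree is to truncate to a common unit, e.g. LocalDateTime.now().truncatedTo(ChronoUnit.MILLIS), or to pass Clock.tick(Clock.systemDefaultZone(), Duration.ofMillis(1)) into now(). Since this digest's other examples are in Python, here is the same normalization idea sketched with datetime (an analogue of the Java fix, not the Java API; timespec requires Python 3.6+):

    from datetime import datetime

    now = datetime.now()

    # Raw resolution depends on the OS clock, just like Java's default Clock.
    print(now.isoformat())

    # Truncating to milliseconds yields identical precision on every platform.
    print(now.isoformat(timespec="milliseconds"))

You cannot increase precision beyond what the platform clock supplies; you can only truncate down to a unit both machines can deliver.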
