floating-point-precision

How do you actually avoid floating-point errors when you need to use float?

牧云@^-^@ submitted on 2019-12-19 08:18:17
Question: I am trying to adjust the translation of a 3D model using UI buttons that shift the position by 0.1 or -0.1. My model position is a three-dimensional float, so simply adding 0.1f to one of the values causes obvious rounding errors. While I can use something like BigDecimal to retain precision, I still have to convert it from a float and back to a float at the end, and that always results in silly numbers that make my UI look like a mess. I could just pretty-print the displayed values, but the…
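
One common way around this (a sketch with hypothetical class and field names, not the asker's code) is to never accumulate 0.1f at all: store the position as an exact integer count of UI steps and multiply once when a float is needed, so only a single rounding happens per read instead of one per click.

```java
// Sketch (hypothetical names): keep the state as an exact integer step
// count; derive the float only for display, so rounding never accumulates.
public class AxisPosition {
    private static final float STEP = 0.1f;
    private int steps = 0; // exact integer state

    public void increment() { steps++; }   // "+0.1" button
    public void decrement() { steps--; }   // "-0.1" button

    // One int -> float multiply: a single rounding, no accumulated drift.
    public float value() { return steps * STEP; }

    public static void main(String[] args) {
        AxisPosition p = new AxisPosition();
        for (int i = 0; i < 10; i++) p.increment();
        System.out.println(p.value()); // prints 1.0, not 0.99999994
    }
}
```

Repeatedly adding 0.1f ten times gives 1.0000001f, while 10 * 0.1f rounds cleanly to 1.0f; that difference is the whole trick.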

Exact binary representation of a double [duplicate]

天大地大妈咪最大 submitted on 2019-12-19 07:50:11
Question: This question already has answers here: Closed 8 years ago. Possible Duplicate: Float to binary in C++. I have a very small double var, and when I print it I get -0 (using C++). To get better precision I tried cout.precision(18); // I think 18 is the max precision I can get, followed by cout.setf(ios::fixed, ios::floatfield); cout << var; // var is a double. But it just writes -0.00000000000... I want to see the exact binary representation of var. In other words, I want to see what…

C++ sqrt function precision for full squares

£可爱£侵袭症+ submitted on 2019-12-19 02:20:31
Question: Let x be an integer and y = x * x. Is it then guaranteed that sqrt(y) == x? For example, can I be sure that sqrt(25) or sqrt(25.0) will return 5.0, not 5.0000000003 or 4.999999998? Answer 1: No, it is not guaranteed. For integers and their squares that fit in the dynamic range of the floating-point type's significand (2^53 for a typical C/C++ double), you are likely to be OK, but there is no guarantee. You should avoid equality comparisons between floating-point values and exact values,…

Turn float into string

社会主义新天地 submitted on 2019-12-17 14:57:02
Question: I have reached the point where I need to turn IEEE-754 single- and double-precision numbers into base-10 strings. There is an FXTRACT instruction available, but it provides only the exponent and mantissa for base 2, since the number is computed as: value = (-1)^sign * 1.(mantissa) * 2^(exponent-bias). If I had logarithm instructions for specific bases, I would be able to change the base of the 2^(exponent-bias) part of the expression, but currently I don't know what to do. I was also thinking of…

What is the purpose of max_digits10 and how is it different from digits10?

落爺英雄遲暮 submitted on 2019-12-17 12:17:16
Question: I am confused about what max_digits10 represents. According to its documentation, it is 0 for all integral types. The formula for max_digits10 for floating-point types looks similar to int's digits10. Answer 1: To put it simply, digits10 is the number of decimal digits guaranteed to survive a text → float → text round-trip. max_digits10 is the number of decimal digits needed to guarantee a correct float → text → float round-trip. There will be exceptions to both, but these values give the minimum…

Does parseDouble exist in JavaScript?

我的梦境 submitted on 2019-12-17 10:52:04
Question: In JavaScript, I have a number that is 21 digits long, and I want to parse it. Does a parseDouble method exist in JavaScript? Answer 1: It's not possible to natively handle a 21-digit-precision number in JavaScript. JavaScript has only one kind of number: "number", which is an IEEE-754 double-precision ("double") value. As such, parseFloat in JavaScript is the equivalent of a "parse double" in other languages. However, a number/"double" only provides 16 significant decimal digits of precision, and…

Trouble with floats in Objective-C

家住魔仙堡 submitted on 2019-12-17 02:33:18
Question: I have a small problem and I can't find a solution! My code is (this is only sample code, but my original code does something like this): float x = [@"2.45" floatValue]; for(int i=0; i<100; i++) x += 0.22; NSLog(@"%f", x); The output is 52.450001, not 52.450000! I don't know why this happens! Thanks for any help! ~SOLVED~ Thanks to everybody! Yes, I solved it with the double type! Answer 1: Floats are a number representation with a certain precision. Not every value can be represented in…

Awk: Length of column number

半世苍凉 submitted on 2019-12-14 04:07:06
Question: I have a (maybe) silly question. My data (File 1): 1234.34 a 1235.34 d / 3456.23 b 3457.23 e / 2325.89 c 2327.89 f. I want something like awk '{if($1==$3) print $4}', but of course if I do this it prints nothing. So I want to reduce the "precision" of $3 (in this case), so that when awk reads $3 it finds 124 345 232; there must be a way to do this, but I don't know it: awk '{if($1==(three-digits-precision $3)) print $4}'. Help? Answer 1: You could calculate the difference of the two…
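
Two sketches of how this might look in practice (sample data inlined via printf; field layout assumed from the question): truncate both keys to the same leading digits before comparing, or, as the answer suggests, test the difference against a tolerance.

```shell
# Sketch 1: truncation - compare int($1/10) with int($3/10), i.e. the
# leading digits of each key, instead of the full-precision values.
printf '%s\n' \
  '1234.34 a 1235.34 d' \
  '3456.23 b 3457.23 e' \
  '2325.89 c 2327.89 f' |
awk '{ if (int($1 / 10) == int($3 / 10)) print $4 }'

# Sketch 2: tolerance - usually safer, since 1239.9 vs 1240.1 would fail
# the truncation test above but pass a difference check.
printf '1234.34 a 1235.34 d\n' |
awk '{ d = $3 - $1; if (d < 2.5 && d > -2.5) print $4 }'
```

On the sample data, sketch 1 prints d, e, and f; the tolerance value in sketch 2 is an assumption that must be tuned to how far apart matching keys can legitimately be.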

Strange Heroku float bit-precision error (ruby on rails)

蹲街弑〆低调 submitted on 2019-12-13 18:02:22
Question: Originally a coordinate field on my model used an integer, but when I tried to deploy to Heroku, I was reminded (by a crash) that I needed it to be a float instead (since my coordinate has decimal points). So I generated a change_column migration on my local machine to change the columns to floats, and everything went fine. I tried to deploy to Heroku again, first with a heroku pg:reset and then with a heroku db:setup. During the db:setup, I get the following error:…