precision

What is the precision of the UITouch timestamp in iOS?

Submitted by 给你一囗甜甜゛ on 2021-02-08 16:56:56

Question: How precise is the timestamp property of the UITouch class in iOS? Milliseconds? Tens of milliseconds? I'm comparing an iPad's internal measurements with a custom touch-detection circuit taped to the screen, and there is quite a bit of variability between the two (standard deviation ≈ 15 ms). I've seen it suggested that the timestamp is discretized according to the frame refresh interval, but the distribution I'm getting looks continuous.

Answer 1: Prior to the iPad Air 2, the touch detection…
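One way to probe the discretization question empirically is to look at recorded timestamps modulo the display refresh interval. Below is a minimal Python sketch of that check; `timestamps` is a hypothetical array standing in for UITouch.timestamp values you have already logged:

    import numpy as np

    # Hypothetical placeholder data: UITouch.timestamp values in seconds,
    # exported from the device for offline analysis.
    timestamps = np.array([0.0166, 0.0502, 0.1004, 0.1338])

    frame = 1.0 / 60.0  # 60 Hz refresh interval, about 16.7 ms
    residue = np.mod(np.diff(timestamps), frame)

    # If timestamps were quantized to the frame interval, the residues would
    # cluster near 0 (or near `frame`); a continuous clock spreads them
    # roughly uniformly over [0, frame).
    print(residue)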

scala.math.BigDecimal : 1.2 and 1.20 are equal

Submitted by 左心房为你撑大大i on 2021-02-07 17:32:06

Question: How do I keep the precision and the trailing zero while converting a Double or a String to scala.math.BigDecimal? Use case: in a JSON message, an attribute is of type String and has a value of "1.20". But while reading this attribute in Scala and converting it to a BigDecimal, I am losing the precision and it is converted to 1.2.

Answer 1: @Saurabh What a nice question! It is crucial that you shared the use case! I think my answer shows how to solve it in a safe and efficient way... In a short form, it…
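The essence of the usual fix is to construct the BigDecimal directly from the JSON String (BigDecimal("1.20") keeps scale 2 and prints as 1.20) rather than routing the value through a Double, which discards the scale. The same principle, illustrated with Python's decimal module as an analogous sketch (not the Scala answer itself):

    from decimal import Decimal

    # Constructing from the string preserves the scale (trailing zero kept):
    print(Decimal("1.20"))                              # 1.20
    print(Decimal("1.20") == Decimal("1.2"))            # True: equal in value...
    print(str(Decimal("1.20")) == str(Decimal("1.2")))  # False: ...but the scale differs

    # Routing through a binary float first loses the intended scale:
    print(Decimal(1.20))  # the exact binary double: close to, but not exactly, 1.2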

If two languages follow IEEE 754, will calculations in both languages result in the same answers?

Submitted by 心已入冬 on 2021-02-07 12:01:49

Question: I'm in the process of converting a program from Scilab code to C++. One loop in particular is producing a slightly different result than the original Scilab code (it's a long piece of code, so I'm not going to include it in the question, but I'll try my best to summarise the issue below). The problem is that each step of the loop uses calculations from the previous step. Additionally, the difference between calculations only becomes apparent around the 100,000th iteration (out of approximately 300…
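IEEE 754 fixes how each individual operation rounds, but two toolchains can still evaluate the same source differently (intermediate precision, fused multiply-add contraction, reordering), and in a loop that feeds each step into the next, a one-ULP difference compounds. A minimal Python sketch of that compounding, using single vs. double precision as a stand-in for two environments that round slightly differently at each step (illustrative, not the asker's actual loop):

    import numpy as np

    # Two accumulators running the "same" recurrence; float32 vs float64
    # stands in for two languages whose toolchains round differently per step.
    x32 = np.float32(1.0)
    x64 = np.float64(1.0)
    for _ in range(300_000):
        x32 = x32 * np.float32(1.0000001)
        x64 = x64 * np.float64(1.0000001)

    # The tiny per-step rounding differences have compounded into a
    # clearly visible discrepancy after 300,000 iterations.
    print(x32, x64)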

Multiplication issue: dividing and multiplying by the same decimal does not return original number [duplicate]

Submitted by 喜夏-厌秋 on 2021-01-29 20:42:09

Question: This question already has answers here: Ruby float precision (2 answers). Closed 12 months ago. I am aware of the floating-point precision issues in multiple languages, but I thought I would only encounter them when multiplying very small or very big numbers. This simple math is incorrect:

    (byebug) 30*36.3/36.3
    30.000000000000004

Why is this happening, and what is the suggested way around it? I don't want to have to use the .to_i function, since I will not always be dealing with…
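Ruby and Python both use the platform's IEEE-754 doubles, so the same expression reproduces the effect outside Ruby: 36.3 has no exact binary representation, so the multiply and the divide each round, and the roundings do not cancel. A short sketch of the usual workarounds (assuming the target value is known to be a "clean" decimal):

    import math
    from decimal import Decimal

    result = 30 * 36.3 / 36.3
    print(result)                    # 30.000000000000004

    # Workaround 1: round away the accumulated error.
    print(round(result, 9))          # 30.0

    # Workaround 2: compare with a tolerance instead of strict equality.
    print(math.isclose(result, 30))  # True

    # Workaround 3: do the arithmetic in decimal, where 36.3 is exact.
    print(30 * Decimal("36.3") / Decimal("36.3"))  # 30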

sklearn precision_recall_curve and threshold

Submitted by 走远了吗. on 2021-01-29 17:28:59

Question: I was wondering how sklearn decides how many thresholds to use in precision_recall_curve. There is another post on this here: How does sklearn select threshold steps in precision recall curve?. It mentions the source code, where I found this example:

    import numpy as np
    from sklearn.metrics import precision_recall_curve

    y_true = np.array([0, 0, 1, 1])
    y_scores = np.array([0.1, 0.4, 0.35, 0.8])
    precision, recall, thresholds = precision_recall_curve(y_true, y_scores)

which then gives

    >>> precision
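For reference, this is the classic example from the scikit-learn documentation; with a release from around the time of the question (≈0.24) it prints:

    >>> precision
    array([0.66666667, 0.5       , 1.        , 1.        ])
    >>> recall
    array([1. , 0.5, 0.5, 0. ])
    >>> thresholds
    array([0.35, 0.4 , 0.8 ])

Two things explain the counts: thresholds always has one element fewer than precision and recall, because the final point (precision = 1, recall = 0) is appended without a corresponding threshold; and the lowest score (0.1) contributes no threshold here because recall is already 1 at 0.35, so it would add no new point to the curve (later releases may retain it).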

HP Nonstop Tandem T4SQLMX driver double precision issue

Submitted by 余生长醉 on 2021-01-29 13:30:20

Question: I use the T4SQLMX type 4 JDBC driver to read a double-precision field from a SQL/MX table. The actual value is 29963.26; however, the value read using the JDBC driver seems to be 29963.260000000002. This seems to be an issue even if I read it as resultset.getString() or resultset.getBigDecimal(), because the driver always returns 29963.260000000002. Similarly, the value 99.76 is returned as 99.759999999999. We use CAIL to view the actual value 99.76 from the SQL/MX table, and SQL-Squirrel client…
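Values such as 29963.26 have no exact binary double representation, so any driver that faithfully surfaces a DOUBLE PRECISION column will show the stored approximation; getString()/getBigDecimal() cannot restore digits the column never held. A small Python sketch of the underlying effect and the usual remedies (cast the column to NUMERIC/DECIMAL in SQL, or fix the scale at the application layer):

    from decimal import Decimal

    x = 29963.26       # what a DOUBLE PRECISION column stores: the nearest binary double
    print(Decimal(x))  # the exact stored value: close to, but not exactly, 29963.26

    # Remedies: fix the scale after reading...
    print(round(x, 2))                           # 29963.26
    print(Decimal(x).quantize(Decimal("0.01")))  # 29963.26
    # ...or declare the column NUMERIC/DECIMAL so an exact decimal is stored.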

Odd behavior with SAS numeric comparison; precision issue?

Submitted by 纵然是瞬间 on 2021-01-28 11:36:38

Question: I'm running a simple inequality filter in SAS as follows:

    data my_data;
        set my_data;
        my_var = sum(parent_var1, -parent_var2);
    run;

    proc sql;
        select my_var format=32.32
        from my_data
        where my_var < 0.02;
    quit;

I get the following result:

    my_var
    .0200000000000000000000000000000
    .0200000000000000000000000000000
    .0200000000000000000000000000000
    (etc...)

The problem, in case it's not obvious, is that I want numbers below .02, but it looks very much like my number is .02. According to the properties…
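SAS stores numerics as 8-byte floating point, so the effect can be reproduced with IEEE-754 doubles in any language: a difference that formats as .02 can be strictly less than 0.02. A minimal Python sketch (0.3 - 0.28 is a stand-in for parent_var1 - parent_var2; the analogous SAS fix is to compare ROUND(my_var, 1e-9) instead of my_var):

    diff = 0.3 - 0.28
    print(f"{diff:.2f}")          # 0.02 -- formatted, it looks like the boundary value
    print(repr(diff))             # 0.019999999999999962 -- but it is strictly below 0.02
    print(diff < 0.02)            # True, so the filter keeps it
    print(round(diff, 9) < 0.02)  # False -- rounding before comparing excludes it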

Prevent underflow in floating point division in Python

Submitted by 扶醉桌前 on 2021-01-28 08:23:52

Question: Suppose both x and y are very small numbers, but I know that the true value of x / y is reasonable. What is the best way to compute x / y? In particular, I have been doing np.exp(np.log(x) - np.log(y)) instead, but I'm not sure whether that makes any difference at all.

Answer 1: Python uses the floating-point features of the hardware it runs on, according to the Python documentation. On most common machines today, that is IEEE-754 arithmetic or something near it. That Python documentation is not explicit…
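For context: IEEE-754 division is correctly rounded, so as long as x and y are themselves normal (not subnormal) doubles, computing x / y directly is already as accurate as a single operation can be; the log/exp detour replaces one rounding with three (log, log, exp) and generally makes the result slightly worse. A small sketch contrasting the two routes:

    import numpy as np

    x = 1e-300  # tiny, but still a normal double
    y = 3e-300

    direct = x / y                            # one correctly rounded operation
    via_logs = np.exp(np.log(x) - np.log(y))  # three separately rounded operations

    print(direct)              # about 0.3333333333333333
    print(via_logs)            # close, but typically several ULPs away
    print(direct == via_logs)  # often False: the detour adds rounding error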