floating-point-precision

To what precision does perl print floating-point numbers?

倖福魔咒の submitted on 2019-12-10 14:33:41
Question:

    my $num = log(1_000_000) / log(10);
    print "num: $num\n";
    print "int(num): " . int($num) . "\n";
    print "sprintf(num): " . sprintf("%0.16f", $num) . "\n";

produces:

    num: 6
    int(num): 5
    sprintf(num): 5.9999999999999991

To what precision does Perl print floating-point numbers?

Using: v5.8.8 built for x86_64-linux-thread-multi

Answer 1: When stringifying floating-point numbers, whether to print or otherwise, Perl generally uses the value of DBL_DIG or LDBL_DIG from the float.h or limits.h file where it
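The behavior above comes from plain IEEE 754 doubles, so it can be sketched in Python as well (an illustration under that assumption, not Perl itself): formatting at 15 significant digits (DBL_DIG) rounds the value up to 6, 17 digits are needed to show every double exactly, and int() truncates toward zero.

```python
import math

# The double immediately below 6.0, which is what log(1_000_000)/log(10)
# can produce; math.nextafter needs Python 3.9+.
x = math.nextafter(6.0, 0.0)

print("%.15g" % x)   # "6": 15 significant digits (DBL_DIG) round up
print("%.17g" % x)   # "5.9999999999999991": 17 digits expose the true value
print(int(x))        # 5: truncation toward zero, like Perl's int()
```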

OpenCL speed and floating-point precision

半城伤御伤魂 submitted on 2019-12-10 10:22:58
Question: I have just started working with OpenCL. However, I have found some weird behavior of OpenCL which I can't understand. The source I built and tested was http://www.codeproject.com/Articles/110685/Part-1-OpenCL-Portable-Parallelism . I have an ATI Radeon HD 4770 and an AMD FX-6200 3.8 GHz six-core CPU.

Speed: Firstly, the speed does not scale linearly with the maximum number of work-group items. I ran APP Profiler to analyze the time spent during kernel execution. The result was a bit shocking: my GPU

Why does g++ (4.6 and 4.7) promote the result of this division to a double? Can I stop it?

让人想犯罪 __ submitted on 2019-12-10 05:52:46
Question: I was writing some templated code to benchmark a numeric algorithm using both floats and doubles, in order to compare against a GPU implementation. I discovered that my floating-point code was slower, and after investigating with Intel's VTune Amplifier I found that g++ was generating extra x86 instructions (cvtps2pd/cvtpd2ps and unpcklps/unpcklpd) to convert some intermediate results from float to double and then back again. The performance degradation is almost 10% for this
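The usual culprit for such conversions is an undecorated constant like 2.0, which is a double in C++ and promotes the whole expression; the usual fix is the f suffix (2.0f). The same promotion rule can be illustrated with NumPy dtypes (a hedged Python analogy, not the g++ case itself):

```python
import numpy as np

x = np.ones(4, dtype=np.float32)
two64 = np.full(4, 2.0)                    # float64, like the C literal 2.0
two32 = np.full(4, 2.0, dtype=np.float32)  # float32, like the C literal 2.0f

promoted = x / two64   # mixing in a float64 operand promotes the whole result
kept = x / two32       # all-float32 operands keep the result in float32
print(promoted.dtype, kept.dtype)
```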

Java - maximum loss of precision in one double addition/subtraction

拟墨画扇 submitted on 2019-12-09 17:28:06
Question: Is it possible to establish, even roughly, what the maximum precision loss would be when dealing with two double values in Java (adding/subtracting)? Probably the worst-case scenario is when two numbers cannot be represented exactly, and then an operation is performed on them which results in a value that also cannot be represented exactly.

Answer 1: Have a look at Math.ulp(double). The ulp of a double is the delta to the next highest value. For instance, if you add two numbers and one is smaller
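Python exposes the same primitive as math.ulp (3.9+), so the answer's bound can be checked directly: a single correctly rounded IEEE 754 addition is off by at most half an ulp of its result. A sketch using exact rational arithmetic as the reference:

```python
import math
from fractions import Fraction

a, b = 0.1, 0.2
s = a + b                          # one correctly rounded double addition
exact = Fraction(a) + Fraction(b)  # exact value of the two stored operands
err = abs(Fraction(s) - exact)

# The rounding error of the addition itself is at most half an ulp of s.
print(err <= Fraction(math.ulp(s)) / 2)  # True
```

Note the bound covers only the one operation; it says nothing about the error already baked into a and b by their own decimal-to-binary conversion.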

Can I specify a numpy dtype when generating random values?

£可爱£侵袭症+ submitted on 2019-12-09 00:30:40
Question: I'm creating a NumPy array of random values and adding them to an existing array containing 32-bit floats. I'd like to generate the random values using the same dtype as the target array, so that I don't have to convert the dtypes manually. Currently I do this:

    import numpy as np
    x = np.zeros((10, 10), dtype='f')
    x += np.random.randn(*x.shape).astype('f')

What I'd like to do instead of the last line is something like:

    x += np.random.randn(*x.shape, dtype=x.dtype)

but randn (and actually none
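The legacy np.random.randn indeed has no dtype parameter, but NumPy's newer Generator API accepts one for standard_normal (float32 or float64), which avoids the intermediate float64 array entirely:

```python
import numpy as np

x = np.zeros((10, 10), dtype=np.float32)
rng = np.random.default_rng(seed=0)

# Generator.standard_normal draws directly in the requested dtype,
# so no float64 temporary is allocated and no manual .astype() is needed.
x += rng.standard_normal(x.shape, dtype=x.dtype)
print(x.dtype)
```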

Does C round floating-point constants?

孤街醉人 submitted on 2019-12-08 19:25:34
Question: A question about floating-point precision in Go made me wonder how C handles the problem. With the following code in C:

    float a = 0.1;

will a have the closest IEEE 754 binary representation,

    00111101110011001100110011001101 (decimal: 0.10000000149011612)

or will it just truncate to

    00111101110011001100110011001100 (decimal: 0.09999999403953552)

Or will it differ depending on compiler/platform?

Answer 1: An implementation is allowed to do either (or even be off by one more): For decimal floating
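What a round-to-nearest conversion produces can be checked from Python's struct module (an illustration of the IEEE 754 arithmetic, not of any particular C compiler): packing 0.1 into single precision yields the "closest" bit pattern from the question, not the truncated one.

```python
import struct

# Convert the double 0.1 to IEEE 754 single precision and read the bits back.
bits = struct.unpack('<I', struct.pack('<f', 0.1))[0]
print(f"{bits:032b}")  # 00111101110011001100110011001101 (round-to-nearest)

# Widening that float back to double shows the value actually stored.
print(struct.unpack('<f', struct.pack('<f', 0.1))[0])  # 0.10000000149011612
```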

Precision: why do Matlab and Python (NumPy) give such different outputs?

不打扰是莪最后的温柔 submitted on 2019-12-08 14:40:15
Question: I know about basic data types and that floating-point types (float, double) cannot hold some numbers exactly. In porting some code from Matlab to Python (NumPy), however, I found some significant differences in calculations, and I think it comes down to precision. Take the following code, z-normalizing a 500-dimensional vector in which only the first two elements have a non-zero value.

Matlab:

    Z = repmat(0,500,1); Z(1)=3; Z(2)=1;
    Za = (Z-repmat(mean(Z),500,1)) ./ repmat(std(Z),500,1);
    Za(1)
    >>> 21.1694
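Before blaming floating-point precision, note a likely off-by-default: Matlab's std divides by N-1 (sample standard deviation) while np.std divides by N, a far larger discrepancy than any rounding error. A sketch, assuming the Python port used np.std with its defaults:

```python
import numpy as np

z = np.zeros(500)
z[0], z[1] = 3.0, 1.0

za_numpy = (z - z.mean()) / z.std()         # NumPy default: divide by N
za_matlab = (z - z.mean()) / z.std(ddof=1)  # ddof=1 divides by N-1, as Matlab does

print(round(float(za_matlab[0]), 4))  # 21.1694, reproducing the Matlab output
print(round(float(za_numpy[0]), 4))   # noticeably different
```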

80-bit floating-point arithmetic in C/C++

最后都变了- submitted on 2019-12-08 08:03:48
Question: Assume a and b are __int64 variables. I need to calculate sqrt((long double)a)*sqrt((long double)b) in high-precision 80-bit floating point. For example,

    (__int64)(sqrt((long double)a)*sqrt((long double)a) + 0.5) != a

in many cases, although the two sides should be equal. Which Win32 C/C++ compiler can manage 80-bit floating-point arithmetic?

Answer 1: You probably should not be using floating point to take the square root of an integer, especially long double, which is poorly supported and might have an approximate (not accurate)
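The answer's suggestion, sketched in Python rather than C: compute integer square roots with integer arithmetic (math.isqrt, Python 3.8+), which is exact for operands of any size, instead of rounding a long-double sqrt.

```python
import math

a = (1 << 62) + 12345   # a 63-bit value; a double's 53-bit mantissa cannot hold it
s = math.isqrt(a)       # floor of the exact square root, computed in integers

# The defining property holds exactly, with no floating-point rounding involved.
print(s * s <= a < (s + 1) * (s + 1))  # True
```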

Objective-C Float / Double precision

。_饼干妹妹 submitted on 2019-12-08 04:22:03
Question: I was messing around with storing floats and doubles using NSUserDefaults for use in an iPhone application, and I came across some inconsistencies between how their precision works and how I understood it to work. This works exactly as I figured:

    {
        NSString *key = @"OneLastKey";
        [PPrefs setFloat:235.1f forKey:key];
        GHAssertFalse([PPrefs getFloatForKey:key] == 235.1, @"");
        [PPrefs removeObjectForKey:key];
    }

However, this one doesn't:

    {
        NSString *key = @"SomeDoubleKey";
        [PPrefs setDouble
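The first snippet's behavior is ordinary IEEE 754: 235.1 rounds to different values in single and double precision, so comparing a stored float against a double literal fails. This can be mimicked in Python with struct (the PPrefs names in the question are the asker's own wrapper):

```python
import struct

# Round-trip 235.1 through single precision, as storing it via a float
# setter would, then compare against the double literal.
as_float = struct.unpack('<f', struct.pack('<f', 235.1))[0]

print(as_float)           # the float32 value, widened back to a double
print(as_float == 235.1)  # False: the float and the double literal differ
```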

Print __float128, without using quadmath_snprintf

夙愿已清 submitted on 2019-12-07 17:15:20
Question: In my question about analysis of float/double precision in 32 decimal digits, one answer said to take a look at __float128. I used it and the compiler could find it, but I cannot print it, since the compiler cannot find the header quadmath.h. So my questions are: __float128 is standard, correct? How do I print it? Isn't quadmath.h standard? These answers did not help: "Use extern C", "Precision in C++ Printing". The reference also did not help. Note that I do not want to use any non-standard library.
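For context: __float128 is a GCC extension, not part of ISO C or C++, and quadmath.h ships with GCC's libquadmath rather than the standard library. If the real goal is 32+ decimal digits without non-standard headers, a software decimal type is one standard-library route, sketched here with Python's decimal module (IEEE 754 binary128 carries roughly 33-36 decimal digits):

```python
from decimal import Decimal, getcontext

getcontext().prec = 34            # about the precision of IEEE 754 binary128
third = Decimal(1) / Decimal(3)   # computed to 34 significant digits

print(third)  # 0.3333333333333333333333333333333333
```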