arbitrary-precision

Keeping accuracy when raising a decimal to an integer power

Submitted by 陌路散爱 on 2019-12-11 19:11:11

Question: My code is as follows (I have simplified it for ease of reading, sorry for the lack of functions):

```cpp
#include <stdio.h>
#include <string.h>
#include <math.h>
#include <iostream>
#include <iomanip>
#include <fstream>
#include <time.h>
#include <stdlib.h>
#include <sstream>
#include <gmpxx.h>
using namespace std;

#define PI 3.14159265358979323846

int main() {
    int a,b,c,d,f,i,j,k,m,n,s,t,Success,Fails;
    double p,theta,phi,Time,Averagetime,Energy,energy,Distance,Length,DotProdForce,Forcemagnitude
```
…
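The question is cut off above, but the underlying task, raising a decimal value to an integer power without losing digits, can be sketched with Python's standard decimal module. This is an illustration of the general idea, not the asker's GMP/C++ setup; the precision and the sample value are arbitrary:

```python
from decimal import Decimal, getcontext

# Sketch of the idea: carry enough working digits through the power.
# (Arbitrary choices: 50 digits of precision, a made-up base and exponent.)
getcontext().prec = 50  # 50 significant decimal digits

x = Decimal("1.0000000001")
print(x ** 100000)  # integer exponent, evaluated at 50 digits throughout
```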

How do I declare the precision of a number to be an adjustable parameter?

Submitted by 你离开我真会死。 on 2019-12-11 06:06:09

Question: In 2013 there was a question on converting a big working code from double to quadruple precision: "Converting a working code from double-precision to quadruple-precision: How to read quadruple-precision numbers in FORTRAN from an input file". The consensus was to declare variables using an adjustable parameter "WP" that specifies the "working precision", instead of keeping one version of the program with constants written with D+01 exponents and another written with Q+01. This way we can …
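The question concerns Fortran kind parameters, but the single-knob idea translates directly to other arbitrary-precision settings. Below is a hedged Python/mpmath sketch of the same pattern, where one assignment plays the role of WP (the value 34 is an arbitrary stand-in for quad-like precision):

```python
from mpmath import mp, mpf, sqrt

# Analogue of Fortran's adjustable working-precision parameter WP:
# a single knob controls the precision of every computation below.
mp.dps = 34  # roughly quadruple precision; change this one line to adjust

x = mpf(2)
print(sqrt(x))  # evaluated with mp.dps significant digits
```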

MATLAB - variable precision arithmetic

Submitted by 寵の児 on 2019-12-11 05:19:50

Question: I have a brief question regarding the vpa command one may use to evaluate symbolic expressions in MATLAB. My textbook says the following: "You need to be careful when you use functions such as sqrt on numbers, which by default result in a double-precision floating-point number. You need to pass such input to vpa as a symbolic string for correct evaluation: vpa('sqrt(5)/pi')." I don't quite understand the jargon here. Why is it that for most inputs I get the exact same answer whether I type …
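The textbook's point is that sqrt(5) gets rounded to a 16-digit double *before* vpa ever sees it, so extra digits printed afterwards are meaningless. The same effect can be shown outside MATLAB; here is a Python/mpmath sketch of it (30 digits is an arbitrary choice):

```python
import math
from mpmath import mp, mpf, pi, sqrt

mp.dps = 30

# math.sqrt has already rounded the value to ~16 digits; the extra
# precision requested afterwards cannot recover what was lost.
bad = mpf(math.sqrt(5)) / pi

# Taking the square root at full working precision keeps all 30 digits.
good = sqrt(mpf(5)) / pi

print(bad)
print(good)
```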

Equivalent of float128

Submitted by 半世苍凉 on 2019-12-11 03:55:13

Question: How do I work with an equivalent of __float128 in Python? What precision should I use for decimal.getcontext()? I mean, is the precision specified in decimal digits or in bits?

```python
from decimal import *
getcontext().prec = # 34 or 128 ?
```

Is it possible to set the precision "locally" for a given operation, rather than setting it "globally" with getcontext().prec? Per Simon Byrne's comment, is it even possible to simulate __float128 as defined by IEEE 754 with Decimal? What other options do I have in …
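Decimal's prec is counted in significant decimal digits, not bits: IEEE 754 binary128 carries a 113-bit significand, about 34 decimal digits, so prec=34 is the usual stand-in, although rounding still differs because Decimal is radix-10 while __float128 is radix-2. Local precision is available via localcontext. A minimal sketch:

```python
from decimal import Decimal, localcontext

# Precision set "locally": the context change applies only inside the block.
with localcontext() as ctx:
    ctx.prec = 34  # ~ binary128's 113-bit significand in decimal digits
    print(Decimal(2).sqrt())  # 34 significant digits

# Outside the block the default context (prec=28) applies again.
print(Decimal(2).sqrt())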

Precision loss when solving nonlinear equations with long integer parameters using mpreal.h

Submitted by 这一生的挚爱 on 2019-12-11 02:31:32

Question: I have a numerical computation problem which requires solving nonlinear equations (with long integers) in multiple precision. I tried Pavel's MPFR C++ wrapper: mpfr C++ wrapper by Pavel. The wrapper can be downloaded here: mpfrc++-3.5.6.zip. However, there is precision loss in the solution when handling very long integers (equations with small integers work fine). I tried three options, as in the sample code below: using the code as-is does not work with "constant …
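The question itself uses mpreal.h, but the usual cause of this symptom is library-independent: the floating type's precision in *bits* is smaller than the integers being fed in, so they are rounded the moment they are converted. A Python/mpmath sketch of the effect (the sample integer is made up):

```python
from mpmath import mp, mpf

n = 12345678901234567890123456789  # about 94 bits

mp.prec = 53                 # double-like precision: too few bits for n
print(int(mpf(n)) == n)      # False: n was rounded on conversion

mp.prec = 128                # enough bits to hold n exactly
print(int(mpf(n)) == n)      # True
```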

Big integer addition without carry flag

Submitted by a 夏天 on 2019-12-10 10:17:00

Question: In assembly languages, there is usually an instruction that adds two operands and a carry. If you want to implement big-integer addition, you simply add the lowest limbs without a carry and each subsequent limb with a carry. How would I do that efficiently in C or C++, where I don't have access to the carry flag? It should work on several compilers and architectures, so I cannot simply use inline assembly or the like.

Answer 1: You can use "nails" (a term from GMP): rather than using all 64 bits of a …
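The answer is truncated, but the nails idea can be sketched. The real use case is C/C++ with fixed-width integers; Python integers are unbounded, so the sketch below only simulates the 64-bit limbs. Keeping 63 payload bits per limb means a limb-wise add can never overflow, and the carry is simply bit 63 of the raw sum, with no carry flag needed:

```python
NAIL_BITS = 1
LIMB_BITS = 64 - NAIL_BITS      # 63 payload bits per 64-bit limb
MASK = (1 << LIMB_BITS) - 1

def add_limbs(a, b):
    """Add two equal-length little-endian limb arrays, nails style."""
    out, carry = [], 0
    for x, y in zip(a, b):
        s = x + y + carry       # fits in 64 bits, since x, y < 2**63
        out.append(s & MASK)    # low 63 bits are the result limb
        carry = s >> LIMB_BITS  # bit 63 is the carry
    out.append(carry)
    return out

# (2**63 - 1) + 1, split across limbs: the carry propagates upward.
print(add_limbs([MASK, 0], [1, 0]))  # [0, 1, 0]
```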

Ordering operations to maximize double precision

Submitted by 巧了我就是萌 on 2019-12-10 09:56:25

Question: I'm working on a tool that computes numbers that can get as small as 1e-25 in the worst cases, and compares them together, in Java. I'm obviously using double precision. I have read in another answer that I shouldn't expect more than 1e-15 to 1e-17 precision, and another question deals with getting better precision by ordering operations in a "better" order. Which double-precision operations are more prone to losing precision along the way? Should I try to work with numbers as big as …
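Summation order is the classic case: adding many small terms to one large term loses the small contributions, while summing small-to-large preserves them. A Python stand-in for the question's Java setting (the float format, IEEE double, is the same; the values are made up, and math.fsum serves as the correctly rounded reference):

```python
import math
import random

random.seed(0)
values = [1e16] + [1.0] * 10000 + [random.uniform(0.0, 1e-8) for _ in range(10000)]

naive = sum(values)              # big term first: each 1.0 is absorbed
ascending = sum(sorted(values))  # small-to-large order fares much better
exact = math.fsum(values)        # correctly rounded reference sum

print(naive - exact)      # about -10000: the small terms were lost
print(ascending - exact)  # near 0
```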

Python mpmath not arbitrary precision?

Submitted by 别来无恙 on 2019-12-08 17:50:30

I'm trying to continue on my previous question, in which I'm trying to calculate Fibonacci numbers using Binet's formula. To work with arbitrary precision I found mpmath. However, the implementation seems to fail above a certain value. For instance, the 99th value gives:

218922995834555891712

This should be (ref):

218922995834555169026

Here is my code:

```python
from mpmath import *

def Phi():
    return (1 + sqrt(5)) / 2

def phi():
    return (1 - sqrt(5)) / 2

def F(n):
    return (power(Phi(), n) - power(phi(), n)) / sqrt(5)

start = 99
end = 100
for x in range(start, end):
    print(x, int(F(x)))
```

mpmath provides …
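mpmath does support arbitrary precision, but it defaults to 15 significant digits (mp.dps = 15), which cannot represent a 21-digit result exactly; that is exactly the wrong tail seen above. Raising the working precision fixes it. A minimal sketch (50 digits is an arbitrary comfortable margin):

```python
from mpmath import mp, sqrt, power, nint

mp.dps = 50  # plenty for F(99), which has 21 digits

s5 = sqrt(5)
Phi = (1 + s5) / 2
phi = (1 - s5) / 2

F99 = (power(Phi, 99) - power(phi, 99)) / s5
print(int(nint(F99)))  # 218922995834555169026, matching the reference
```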

Floats vs rationals in arbitrary precision fractional arithmetic (C/C++)

Submitted by ◇◆丶佛笑我妖孽 on 2019-12-08 15:41:35

Question: There are two ways of implementing an AP fractional number: one is to emulate the storage and behavior of the double data type, only with more bytes, and the other is to use an existing integer APA implementation to represent a fractional number as a rational, i.e. as a pair of integers, numerator and denominator. Which of the two is more likely to deliver efficient arithmetic in terms of performance? (Memory usage is really of minor concern.) I'm aware of the existing C/C++ …
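The question asks about C/C++ libraries, but the trade-off itself can be sketched in Python, where both representations ship in the standard library: rationals (Fraction) stay exact but their numerator and denominator grow with every operation, while a fixed-precision float (Decimal here) costs the same per operation. The loop and its terms are made up for illustration:

```python
from fractions import Fraction
from decimal import Decimal, getcontext

getcontext().prec = 50

r = Fraction(1)
d = Decimal(1)
for k in range(1, 30):
    r += Fraction(1, k * k + 1)    # exact, but the denominator keeps growing
    d += Decimal(1) / (k * k + 1)  # rounded to 50 digits, constant-size

print(r.denominator.bit_length())  # hundreds of bits after a few dozen steps
print(d)                           # still just 50 significant digits
```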
