Note: I've used the Matlab tag just in case they maintain the same precision. (From what I can tell, both programs are very similar.)
Every common language (standard C++, Scilab, MATLAB, ...) uses the same format for representing decimal numbers. It's known as IEEE 754, and it is precisely documented here:
https://en.wikipedia.org/wiki/Double-precision_floating-point_format
It implies that the precision remains constant on nearly all commonly used systems. It's a number close to 2^-52 (or, equivalently, 2.2204e-16). It defines the "distance from 1.0 to the next largest double-precision number".
When you're using Scilab, you can confirm it with the %eps constant (https://help.scilab.org/docs/5.5.1/fr_FR/percenteps.html). In MATLAB, it's stored in the eps variable (http://nl.mathworks.com/help/matlab/ref/eps.html). In C++, it's a bit harder to get at (see http://en.cppreference.com/w/cpp/types/numeric_limits/epsilon).
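For C++, a minimal sketch using only standard headers might look like this:

#include <cfloat>
#include <iostream>
#include <limits>

int main() {
    // Machine epsilon for double: the distance from 1.0 to the next
    // representable double, i.e. 2^-52 on IEEE 754 systems.
    std::cout << std::numeric_limits<double>::epsilon() << "\n";  // prints ~2.22045e-16

    // The same constant is also available as the C macro DBL_EPSILON.
    std::cout << DBL_EPSILON << "\n";
    return 0;
}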
So, don't worry about precision unless you are on an atypical architecture, a very old computer, or you deliberately use something other than the standard 64-bit double. The defaults will always follow the same standard (IEEE 754).
But don't forget: even though epsilon looks like a constant, the absolute error is not the same for very large numbers as for very small ones (the format is designed to keep the relative precision roughly constant, so the spacing between representable numbers grows with their magnitude).
This can be seen in the following example (Python):
>>> 1e100 == 1e100+1
True
>>> 1 == 2
False
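The same effect can be reproduced in C++ (a small sketch, assuming a standard IEEE 754 double; std::nextafter makes the spacing explicit):

#include <cmath>
#include <iostream>

int main() {
    std::cout << std::boolalpha;

    // At 1e100 the spacing between consecutive doubles is enormous,
    // so adding 1 does not change the value at all.
    std::cout << (1e100 == 1e100 + 1.0) << "\n";  // true

    // Around 1.0 the spacing is about 2.2e-16, so 1 and 2 are
    // obviously distinguishable.
    std::cout << (1.0 == 2.0) << "\n";            // false

    // std::nextafter shows the actual spacing at a given magnitude.
    std::cout << std::nextafter(1e100, INFINITY) - 1e100 << "\n";  // roughly 1.9e84
    return 0;
}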
To ensure that your code is portable across different languages, I suggest you explicitly refer to the functions that give the machine precision. For instance, with NumPy: print(np.finfo(float).eps). But, generally, well-designed algorithms won't behave very differently on machines with slightly different epsilons.
For instance, if I implement a loop for something that tends to 0 (asymptotically), in MATLAB I would write:
while val > eps
    ...
end
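A C++ version of the same kind of loop could look like the sketch below; next_residual() is just a hypothetical stand-in for whatever quantity your algorithm drives towards 0:

#include <iostream>
#include <limits>

// Hypothetical iteration whose value tends to 0; here it simply halves.
double next_residual(double val) { return val / 2.0; }

int main() {
    const double eps = std::numeric_limits<double>::epsilon();
    // A hard-coded tolerance such as 2e-15 would behave the same way
    // on virtually any IEEE 754 machine:
    // const double eps = 2e-15;

    double val = 1.0;
    while (val > eps) {
        val = next_residual(val);
    }
    std::cout << "converged, val = " << val << "\n";
    return 0;
}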
So, the main advice is: don't build an algorithm that relies on too much machine-specific information. You can either use the real value of epsilon or hard-code something like 2e-15; either way it will work on a lot of different machines.
Re-posting comment as an answer:
IEEE 754 double-precision floating point numbers are the standard representation in most common languages, like MATLAB, C++ and SciLab:
http://forge.scilab.org/index.php/p/docscifloat/downloads/get/floatingpoint_v0.2.pdf
so I don't expect you would need to do anything special to represent the precision, other than using C++ doubles (unless your SciLab code is using high-precision floats).
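If you want to be defensive about it, standard C++ lets you check that double really is an IEEE 754 binary64 type; a small sketch:

#include <iostream>
#include <limits>

int main() {
    // is_iec559 is true when the type conforms to IEC 559 / IEEE 754.
    std::cout << std::boolalpha
              << "IEEE 754 double: " << std::numeric_limits<double>::is_iec559 << "\n"
              << "mantissa bits:   " << std::numeric_limits<double>::digits << "\n"    // 53
              << "decimal digits:  " << std::numeric_limits<double>::digits10 << "\n"; // 15
    return 0;
}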
Note that the decimal output of two different IEEE 754-compliant implementations can differ after 16 significant digits:
MATLAB:
>> fprintf('%1.30f\n',1/2342317.0)
0.000000426927695952341190000000
Python:
>>> "%1.30f" % (1/2342317.0,)
'0.000000426927695952341193713560'
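For completeness, the same value printed from C++ (a quick sketch; the digits should match the Python output, since both print the decimal expansion of the same binary64 value via the C library):

#include <cstdio>

int main() {
    // Same double as above, printed with 30 digits after the decimal point.
    std::printf("%1.30f\n", 1.0 / 2342317.0);
    return 0;
}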