Standard for the sine of very large numbers

Submitted by 醉酒当歌 on 2019-12-18 05:56:19

Question


I am writing an (almost) IEEE 854 compliant floating point implementation in TeX (which only has support for 32-bit integers). This standard only specifies the result of +, -, *, /, comparison, remainder, and sqrt: for those operations, the result should be identical to rounding the exact result to a representable number (according to the rounding mode).

I seem to recall that IEEE specifies that transcendental functions (sin, exp...) should yield faithful results (in the default round-to-nearest mode, they should output one of the two representable numbers surrounding the exact result). Computing the sine of small numbers is rather straightforward: subtract a suitable multiple of 2*pi to obtain a number in the range [0, 2*pi), then do some more work to reduce the range to [0, pi/4], and use a Taylor series.
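The reduce-then-Taylor approach can be sketched as follows. This is a hypothetical double-precision C++ illustration, not the TeX implementation; note that the naive fmod-based reduction it uses is exactly the step that breaks down for huge inputs, as discussed below.

```cpp
#include <cmath>

// Sketch of sin() for modest |x|: naive range reduction to [-pi, pi]
// followed by a truncated Taylor series. Hypothetical illustration only.
double sin_taylor(double x) {
    // Reduce modulo 2*pi. Only trustworthy while |x| is small enough that
    // the error in the double-precision constant 2*M_PI is negligible.
    double r = std::fmod(x, 2.0 * M_PI);
    if (r >  M_PI) r -= 2.0 * M_PI;
    if (r < -M_PI) r += 2.0 * M_PI;

    // Taylor series: sin r = r - r^3/3! + r^5/5! - ...
    // ~15 terms suffice for double precision on [-pi, pi].
    double term = r;  // current term r^(2n+1)/(2n+1)!, with sign
    double sum  = r;
    for (int n = 1; n <= 15; ++n) {
        term *= -r * r / ((2.0 * n) * (2.0 * n + 1.0));
        sum  += term;
    }
    return sum;
}
```

A real implementation would reduce further to [0, pi/4] and use symmetries to keep the polynomial short, but the structure is the same.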

Now assume that I want to compute sin(1e300). For that I would need to find 1e300 modulo 2*pi, which requires knowing 300 (316?) decimals of pi: with only 16 decimals, the result would have no significance whatsoever (in particular, it wouldn't be faithful).

Is there a standard on what the result of sin(1e300) and similar very large numbers should be?

What do other floating point implementations do?


Answer 1:


There is no standard that requires faithful rounding of transcendental functions. IEEE-754 (2008) recommends, but does not require, that these functions be correctly rounded.

Most good math libraries strive to deliver faithfully rounded results over the entire range (yes, even for huge inputs to sin( ) and similarly hard cases). As you note, this requires that the library know somewhat more digits of π than there are digits in the largest representable number. This is called an "infinite-pi" argument reduction.

To the point that @spraff raises, good math libraries adopt the viewpoint that the inputs are infinitely precise (i.e., the function should behave as though the input is always represented accurately). One can debate whether or not this is a reasonable position, but that's the working assumption for essentially all good math libraries.

All that said, there are plenty of libraries that take the easy route and use a "finite-pi" reduction, which basically treats a function like sin( ) as though π were a representable finite number. It turns out that this doesn't really cause any trouble for most uses, and is certainly easier to implement.
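A finite-pi reduction amounts to something like the following hypothetical sketch, which treats the representable double 2*M_PI as if it were the exact period:

```cpp
#include <cmath>

// Hypothetical sketch of a "finite-pi" sine: reduce modulo the
// double-precision value of 2*pi, then evaluate the small reduced
// argument (delegated to the library here for brevity).
double sin_finite_pi(double x) {
    // fmod itself introduces no rounding error; all the error comes
    // from 2*M_PI being only an approximation of the true period.
    double r = std::fmod(x, 2.0 * M_PI);
    return std::sin(r);
}
```

For moderate arguments this tracks the correctly rounded result closely: at x = 100 (about 16 periods) the inexact period contributes a phase error of only ~16 · 2.4e-16, so the result agrees with a faithful sine to nearly full precision. Only for very large x does the accumulated phase error become visible, which is why most uses never notice.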




Answer 2:


If you're doing operations on such large numbers, of course you're going to run out of precision:

#include <cmath>
#include <iostream>

int main() {
    long double i = 1;
    // Small inputs: a 0.1 change gives clearly distinct, meaningful results.
    std::cout << std::sin(i) << "\n" << std::sin(i + 0.1) << "\n";
    // Huge inputs: the same 0.1 offset is far below the spacing of
    // representable values near 1e300.
    i = std::pow(10.0L, 300);
    std::cout << std::sin(i) << "\n" << std::sin(i + 0.1);
}

Output:

    0.841471
    0.891207
    -0.817882
    -0.81788

If you can't represent the inputs accurately, you can't represent the outputs accurately. Subtracting pi*pow(10, int(log10(n/pi))) or whatever is going to make things worse for "small" n, and when n gets suitably large you're just adding noise to noise, so it doesn't matter any more.



Source: https://stackoverflow.com/questions/6665418/standard-for-the-sine-of-very-large-numbers
