How do calculators handle precision?
I wonder how calculators handle precision. For example, the value of sin(M_PI) is not exactly zero when computed in double precision:

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    double x = sin(M_PI);
    printf("%.20f\n", x); // 0.00000000000000012246
    return 0;
}
```

Now I would certainly want to print zero when the user enters sin(π). I could easily round everything below about 1e-15 to zero to make this particular case work, but that's a hack, not a solution. When I start to round like this and the user enters
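(As an aside on why the printed value is exactly 0.00000000000000012246…: M_PI is the double nearest to π, off by roughly 1.22e-16, and since sin(π − ε) = sin(ε) ≈ ε for tiny ε, sin(M_PI) faithfully reproduces that representation gap rather than any error in sin itself. A minimal sketch demonstrating this, assuming long double is wider than double on your platform, e.g. 80-bit extended on x86/glibc; pi_l is just a hypothetical local holding a higher-precision π:)

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* A higher-precision approximation of pi; assumes long double
       carries more significant digits than double. */
    long double pi_l = 3.141592653589793238462643383279502884L;

    /* These two values agree: sin(M_PI) equals the gap between the
       true pi and its nearest-double approximation M_PI. */
    printf("sin(M_PI)  = %.17g\n", sin(M_PI));
    printf("pi - M_PI ~= %.17Lg\n", pi_l - (long double)M_PI);
    return 0;
}
```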