long-double

Define LDBL_MAX/MIN in C

我的梦境 · Submitted on 2019-12-01 13:06:18
I'm working in C and have an exercise where I must print the minimum and maximum values of long double. I used float.h as the header, but these two macros (LDBL_MIN/MAX) give me the same values as if the type were just double. I'm using Visual Studio 2015, and if I hover the mouse over LDBL_MIN it says #define LDBL_MIN DBL_MIN. Is that why it prints dbl_min instead of ldbl_min? How can I fix this problem?

    printf("Type: Long Double Value: %Lf Min: %Le Max: %Le Memory: %zu\n", val10, LDBL_MIN, LDBL_MAX, longd_size);

It is a problem because my assignment requires two different values for the two types.

Cout long double issue

℡╲_俬逩灬. · Submitted on 2019-12-01 08:01:27
So, I'm working on a C++ project. I have a variable of type long double and assigned it a value like 1.2. Then I try to use cout to print it, and the result is: -0. I already tried setprecision and everything I found while googling the problem. What is the solution? Example code:

    #include <cstdlib>
    #include <iostream>
    #include <iomanip>
    using namespace std;

    int main(int argc, char** argv) {
        cout.precision(15);
        long double var = 1.2;
        cout << var << endl;
        return 0;
    }

OS: Windows 8.1 64-bit. Compiler: MinGW. IDE: NetBeans 8.0.2.

It seems to be a problem with the compiler. Take a look here: http://mingw.5.n7.nabble.com

sizeof long double and precision not matching?

大城市里の小女人 · Submitted on 2019-11-29 10:23:29
Consider the following C code:

    #include <stdio.h>

    int main(int argc, char* argv[]) {
        const long double ld = 0.12345678901234567890123456789012345L;
        printf("%zu %.36Lf\n", sizeof(ld), ld);
        return 0;
    }

Compiled with gcc 4.8.1 under Ubuntu x64 13.04, it prints:

    16 0.123456789012345678901321800735590983

which tells me that a long double occupies 16 bytes, yet the decimals seem to be correct only to about the 20th place. How is that possible? 16 bytes corresponds to a quad, and a quad would give me between 33 and 36 decimals.

The long double format in your C implementation uses an Intel format with a one-bit

How can you easily calculate the square root of an unsigned long long in C?

断了今生、忘了曾经 · Submitted on 2019-11-29 07:28:05
I was looking at another question (here) where someone wanted the square root of a 64-bit integer in x86 assembly. That turns out to be very simple: convert to a floating-point number, take the sqrt, and convert back. I need to do something very similar in C, but when I look for equivalents I get a little stuck. I can only find a sqrt function that takes doubles, and doubles do not have the precision to hold large 64-bit integers without introducing significant rounding error. Is there a common math library that I can use which has

long double (GCC specific) and __float128

空扰寡人 · Submitted on 2019-11-27 06:54:31
I'm looking for detailed information on long double and __float128 in GCC/x86 (more out of curiosity than because of an actual problem). Few people will probably ever need these (I've just, for the first time ever, truly needed a double), but I guess it is still worthwhile (and interesting) to know what you have in your toolbox and what it's about. In that light, please excuse my somewhat open questions: Could someone explain the implementation rationale and intended usage of these types, also in comparison to each other? For example, are they "embarrassment implementations" because the

What is the precision of long double in C++?

允我心安 · Submitted on 2019-11-27 01:42:53
Does anyone know how to find out the precision of long double on a specific platform? I appear to be losing precision after 17 decimal digits, which is the same as when I just use double. I would expect more, since double is represented with 8 bytes on my platform while long double is 12 bytes. Before you ask: this is for Project Euler, so yes, I do need more than 17 digits. :)

EDIT: Thanks for the quick replies. I just confirmed that I can only get 18 decimal digits by using long double on my system.

Johannes Schaub - litb: You can find out with std::numeric_limits:

    #include <iostream>

long double vs double

你说的曾经没有我的故事 · Submitted on 2019-11-26 17:33:10
I know that the size of various data types can change depending on which system I am on. I use XP 32-bit, and using the sizeof() operator in C++ it seems that long double is 12 bytes and double is 8. However, most major sources state that long double is 8 bytes and the range is therefore the same as a double. How come I have 12 bytes? If long double is indeed 12 bytes, doesn't this extend the range of values too? Or is the long qualifier only used (the compiler figures) when the value exceeds the range of a double, and thus extends beyond 8 bytes? Thank you.

Borealid: Quoting from Wikipedia: