double val = 0.1;
std::stringstream ss;
ss << val;
std::string strVal = ss.str();
In the Visual Studio debugger, val
has the value 0.
There are two issues you have to consider. The first is the precision
parameter, which defaults to 6 (but which you can set to whatever you
like). The second is what this parameter means, and that depends on the
format option you are using: if you are using fixed or scientific
format, it means the number of digits after the decimal point (which in
turn has a different effect on what is usually meant by precision in the
two formats); if you are using the default format, however
(ss.setf( std::ios_base::fmtflags(), std::ios_base::floatfield )), it
means the total number of digits in the output, regardless of whether
the output ends up looking like fixed or scientific notation. This
explains why your display is 12.1231, for example: you're using both
the default precision and the default formatting.
You might want to try the following with different values (and maybe different precisions):
std::cout.setf( std::ios_base::fmtflags(), std::ios_base::floatfield );
std::cout << "default: " << value[i] << std::endl;
std::cout.setf( std::ios_base::fixed, std::ios_base::floatfield );
std::cout << "fixed: " << value[i] << std::endl;
std::cout.setf( std::ios_base::scientific, std::ios_base::floatfield );
std::cout << "scientific: " << value[i] << std::endl;
Seeing the actual output will probably be clearer than any detailed description:
default: 0.1
fixed: 0.100000
scientific: 1.000000e-01
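To see how the precision parameter interacts with each of these formats, set an explicit precision first and run the same three statements again (3 is purely an illustrative choice):
std::cout.precision( 3 );
std::cout.setf( std::ios_base::fmtflags(), std::ios_base::floatfield );
std::cout << "default: " << value[i] << std::endl;
std::cout.setf( std::ios_base::fixed, std::ios_base::floatfield );
std::cout << "fixed: " << value[i] << std::endl;
std::cout.setf( std::ios_base::scientific, std::ios_base::floatfield );
std::cout << "scientific: " << value[i] << std::endl;
For 0.1 this gives:
default: 0.1
fixed: 0.100
scientific: 1.000e-01
In other words, in the default format the precision counts significant digits, while in fixed and scientific it counts digits after the decimal point.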