What are the default Format Flags (and widths) for double output in a std::stringstream?

Submitted by 浪子不回头ぞ on 2019-12-03 21:35:02

The defaults are set up by std::basic_ios::init and are the same for all streams derived from std::ios_base. The defaults are:

rdbuf()         sb
tie()           0
rdstate()       goodbit if sb is not a null pointer, otherwise badbit.
exceptions()    goodbit
flags()         skipws | dec
width()         0
precision()     6
fill()          widen(' ')
getloc()        a copy of the value returned by locale()
iarray          a null pointer
parray          a null pointer

So the default precision is 6.

Where can I find a list of the default format setup for a stringstream?

The authoritative source for all standard library behaviour is the standard document. In this case, the table labeled basic_ios::init() effects in the section [basic.ios.members].

+--------------+--------------------------------------------------------+
|   Element    |                         Value                          |
+--------------+--------------------------------------------------------+
| rdbuf()      | sb                                                     |
| tie()        | 0                                                      |
| rdstate()    | goodbit if sb is not a null pointer, otherwise badbit. |
| exceptions() | goodbit                                                |
| flags()      | skipws | dec                                           |
| width()      | 0                                                      |
| precision()  | 6                                                      |
| fill()       | widen(' ')                                             |
| getloc()     | a copy of the value returned by locale()               |
| iarray       | a null pointer                                         |
| parray       | a null pointer                                         |
+--------------+--------------------------------------------------------+

Formatting floating point numbers depends on the floatfield flag, which is unset in the default flags(). The behaviour is defined in the table Floating-point conversions in section [facet.num.put.virtuals].

+----------------------------------------------------------------------+------------------+
|                                State                                 | stdio equivalent |
+----------------------------------------------------------------------+------------------+
| floatfield == ios_base::fixed                                        | %f               |
| floatfield == ios_base::scientific && !uppercase                     | %e               |
| floatfield == ios_base::scientific                                   | %E               |
| floatfield == (ios_base::fixed | ios_base::scientific) && !uppercase | %a               |
| floatfield == (ios_base::fixed | ios_base::scientific)               | %A               |
| !uppercase                                                           | %g               |
| otherwise                                                            | %G               |
+----------------------------------------------------------------------+------------------+

So, formatting in the initial state (floatfield and uppercase both unset) should match the stdio conversion specifier %g. The stdio conversion specifiers are specified in the C standard in the section [Formatted input/output functions]:

f,F

A double argument representing a floating-point number is converted to decimal notation in style [−]ddd.ddd, where the number of digits after the decimal-point character is equal to the precision specification. If the precision is missing, it is taken as 6; if the precision is zero and the # flag is not specified, no decimal-point character appears. If a decimal-point character appears, at least one digit appears before it. The value is rounded to the appropriate number of digits.

e,E

A double argument representing a floating-point number is converted in the style [-]d.ddde±dd, where there is one digit (which is nonzero if the argument is nonzero) before the decimal-point character and the number of digits after it is equal to the precision; if the precision is missing, it is taken as 6; if the precision is zero and the # flag is not specified, no decimal-point character appears. The value is rounded to the appropriate number of digits. The E conversion specifier produces a number with E instead of e introducing the exponent. The exponent always contains at least two digits, and only as many more digits as necessary to represent the exponent. If the value is zero, the exponent is zero.

A double argument representing an infinity is converted in one of the styles [-]inf or [-]infinity, which style is implementation-defined. A double argument representing a NaN is converted in one of the styles [-]nan or [-]nan(n-char-sequence), which style, and the meaning of any n-char-sequence, is implementation-defined. The F conversion specifier produces INF, INFINITY, or NAN instead of inf, infinity, or nan, respectively.

g,G

A double argument representing a floating-point number is converted in style f or e (or in style F or E in the case of G conversion specifier), depending on the value converted and the precision. Let P equal the precision if nonzero, 6 if the precision is omitted, or 1 if the precision is zero. Then, if a conversion with style E would have an exponent of X:

  • if P > X ≥ -4, the conversion is with style f (or F) and precision P - (X + 1).
  • otherwise, the conversion is with style e (or E) and precision P - 1.

Is the default format the same for all derived classes of std::istream (within the stdlib)?

Yes, the defaults are the same: every standard stream class calls std::basic_ios::init on construction, so all of them start with the state in the table above.
