Here are some excerpts from my copy of the 2014 draft standard N4140:

22.5 Standard code conversion facets [locale.stdcvt]

3 For each of the three code conversion facets codecvt_utf8, codecvt_utf16, and codecvt_utf8_utf16:
(3.1) — Elem is the wide-character type, such as wchar_t, char16_t, or char32_t.

4 For the facet codecvt_utf8:
(4.1) — The facet shall convert between UTF-8 multibyte sequences and UCS2 or UCS4 (depending on the size of Elem) within the program.
Both your interpretations are incorrect. The standard doesn't require that there be a single wchar_t encoding, just as it doesn't require a single char encoding. The codecvt_utf8 facet must convert between UTF-8 and UCS-2 or UCS-4. This is true even if UTF-8, UCS-2, and UCS-4 are not supported as character sets in any locale.

If Elem is of type wchar_t and isn't big enough to store a UCS-2 value, then the conversion operations of the codecvt_utf8 facet are undefined, because the standard doesn't say what happens in that case. If it is big enough (or if you want to argue that the standard requires that it must be big enough), then it's merely implementation-defined whether the UCS-2 or UCS-4 wchar_t values the facet generates or consumes are in an encoding compatible with any locale-defined wchar_t encoding.
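As a minimal illustration (my own sketch, using std::wstring_convert from <codecvt>, which exists in C++11/14 but was deprecated later), the facet decodes UTF-8 without consulting any locale:

    #include <codecvt>
    #include <locale>
    #include <string>

    int main() {
        // codecvt_utf8<wchar_t> converts UTF-8 <-> UCS-2/UCS-4 (chosen by sizeof(wchar_t)),
        // independently of the global locale or any imbued locale.
        std::wstring_convert<std::codecvt_utf8<wchar_t>> conv;
        std::wstring w = conv.from_bytes(u8"\u00e9");  // two UTF-8 bytes -> one wchar_t 0x00E9
        // Whether 0x00E9 is a meaningful wchar_t value for the rest of the program
        // depends on the implementation's wide encoding, exactly as argued above.
        return w.size() == 1 ? 0 : 1;
    }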
The first interpretation is conditionally true. If the __STDC_ISO_10646__ macro (imported from C) is defined, then wchar_t is a superset of some version of Unicode.

__STDC_ISO_10646__
An integer literal of the form yyyymmL (for example, 199712L). If this symbol is defined, then every character in the Unicode required set, when stored in an object of type wchar_t, has the same value as the short identifier of that character. The Unicode required set consists of all the characters that are defined by ISO/IEC 10646, along with all amendments and technical corrigenda, as of the specified year and month.

It appears that if the macro is defined, some kind of UCS-4 can be assumed (not UCS-2, as ISO 10646 never had a 16-bit version; the first release of ISO 10646 corresponds to Unicode 2.0). So if the macro is defined, then codecvt_utf8<wchar_t> is compatible with this native encoding. None of these things are required to hold if the macro is not defined.
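If you want to know which case you are in, you can simply test the macro; this sketch only prints what the quoted definition already tells us:

    #include <cstdio>

    int main() {
    #if defined(__STDC_ISO_10646__)
        // wchar_t stores ISO 10646 short identifiers (code points) for every
        // character in the Unicode required set as of this year/month.
        std::printf("__STDC_ISO_10646__ = %ld\n", static_cast<long>(__STDC_ISO_10646__));
    #else
        std::printf("__STDC_ISO_10646__ not defined: no guarantee about the wchar_t encoding\n");
    #endif
    }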
There are also __STDC_UTF_16__ and __STDC_UTF_32__, but the C++ standard doesn't say what they mean. The C standard says that they signify UTF-16 and UTF-32 encodings for char16_t and char32_t respectively, but in C++ these encodings are always used.
Incidentally, the functions mbrtoc32 and c32rtomb convert back and forth between char sequences and char32_t sequences. In C they only use UTF-32 if __STDC_UTF_32__ is defined, but in C++ UTF-32 is always used for char32_t. So it would appear that even if __STDC_ISO_10646__ is not defined, it should be possible to convert between UTF-8 and wchar_t by going from UTF-8 to UTF-32-encoded char32_t, then to natively encoded char, then to natively encoded wchar_t, though I'm wary of this complicated chain.
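For what it's worth, a rough sketch of that chain might look like this (utf8_to_wide is my own name; error handling is minimal, and it assumes the caller has installed a locale that can represent the characters involved):

    #include <climits>   // MB_LEN_MAX
    #include <clocale>   // setlocale
    #include <codecvt>   // codecvt_utf8 (C++11/14; deprecated in C++17)
    #include <cuchar>    // c32rtomb
    #include <cwchar>    // mbrtowc
    #include <locale>    // wstring_convert
    #include <string>

    // UTF-8 bytes -> UTF-32 char32_t -> locale-encoded char -> wchar_t.
    std::wstring utf8_to_wide(const std::string& utf8) {
        // Step 1: UTF-8 -> UTF-32, locale-independent.
        std::wstring_convert<std::codecvt_utf8<char32_t>, char32_t> u32conv;
        std::u32string u32 = u32conv.from_bytes(utf8);

        std::wstring result;
        std::mbstate_t cstate{};
        for (char32_t c : u32) {
            // Step 2: UTF-32 -> multibyte sequence in the current C locale.
            char mb[MB_LEN_MAX];
            std::size_t n = std::c32rtomb(mb, c, &cstate);
            if (n == static_cast<std::size_t>(-1)) break;  // not representable in this locale

            // Step 3: multibyte sequence -> wchar_t in the current C locale.
            std::mbstate_t wstate{};
            wchar_t wc;
            std::size_t r = std::mbrtowc(&wc, mb, n, &wstate);
            if (r != static_cast<std::size_t>(-1) && r != static_cast<std::size_t>(-2))
                result.push_back(wc);
        }
        return result;
    }

    int main() {
        std::setlocale(LC_ALL, "");                    // pick up the environment's native locale
        std::wstring w = utf8_to_wide(u8"caf\u00e9");  // "café"
        return w.empty() ? 1 : 0;
    }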
Let us differentiate between wchar_t and string literals built using the L prefix.

wchar_t is just an integer type, which may be larger than char.

String literals using the L prefix will generate strings of wchar_t characters. Exactly what that means is implementation-dependent. There is no requirement that such literals use any particular encoding. They might use UTF-16, UTF-32, or something else that has nothing to do with Unicode at all.
So if you want a string literal which is guaranteed to be encoded in a Unicode format, across all platforms, use the u8, u, or U prefix.
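For example (the pointer types shown are the C++11/14 ones; in C++20 a u8 literal yields const char8_t[] instead):

    int main() {
        const char*     s8  = u8"\u00e9"; // always UTF-8
        const char16_t* s16 = u"\u00e9";  // always UTF-16
        const char32_t* s32 = U"\u00e9";  // always UTF-32
        const wchar_t*  sw  = L"\u00e9";  // implementation-defined wide encoding
        (void)s8; (void)s16; (void)s32; (void)sw;
        return 0;
    }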
One interpretation of these two paragraphs is that wchar_t must be encoded as either UCS2 or UCS4.
No, that is not a valid interpretation. wchar_t has no encoding; it's just a type. It is data which is encoded. A string literal prefixed by L may or may not be encoded in UCS-2 or UCS-4.
If you provide codecvt_utf8 a string of wchar_ts which are encoded in UCS-2 or UCS-4 (as appropriate to sizeof(wchar_t)), then it will work. But not because of wchar_t; it works only because the data you provide it is correctly encoded.
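A sketch of that, assuming a platform where L"..." literals happen to hold ISO 10646 code point values (for instance one that defines __STDC_ISO_10646__):

    #include <codecvt>
    #include <iostream>
    #include <locale>
    #include <string>

    int main() {
        // Works only because the wchar_t data is assumed to already hold
        // UCS-2/UCS-4 code point values; codecvt_utf8 never consults a locale.
        std::wstring_convert<std::codecvt_utf8<wchar_t>> conv;
        std::wstring w = L"\u00e9\u4e2d";        // é and 中 as code point values
        std::string utf8 = conv.to_bytes(w);     // expected bytes: C3 A9 E4 B8 AD
        std::cout << utf8.size() << " UTF-8 bytes\n";
        return 0;
    }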
If 4.1 said "The facet shall convert between UTF-8 multibyte sequences and UCS2 or UCS4 or whatever encoding is imposed on wchar_t by the current global locale" there would be no problem.
The whole point of those codecvt_* facets is to perform locale-independent conversions. If you want locale-dependent conversions, you shouldn't use them; you should use the codecvt facet of the global locale instead.
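A locale-dependent wide-to-narrow conversion through that facet could be sketched like this (wide_to_narrow is my own helper name, and checking the result code of out() is omitted for brevity):

    #include <cwchar>   // std::mbstate_t
    #include <locale>
    #include <string>

    // Locale-dependent wide -> narrow conversion via the codecvt facet of loc.
    std::string wide_to_narrow(const std::wstring& in, const std::locale& loc) {
        using CVT = std::codecvt<wchar_t, char, std::mbstate_t>;
        const CVT& cvt = std::use_facet<CVT>(loc);

        std::mbstate_t state{};
        std::string out(in.size() * cvt.max_length(), '\0');
        const wchar_t* from_next = nullptr;
        char* to_next = nullptr;

        cvt.out(state,
                in.data(), in.data() + in.size(), from_next,
                &out[0], &out[0] + out.size(), to_next);
        out.resize(to_next - &out[0]);
        return out;
    }

    int main() {
        std::string s = wide_to_narrow(L"caf\u00e9", std::locale(""));
        return s.empty() ? 1 : 0;
    }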
It appears your first conclusion is shared by Microsoft, who enumerate the possible options and note that UTF-16, although "widely used as such [sic]", is not a valid encoding.

The same wording is also used by QNX, which points at the source of the wording: both QNX and Microsoft derive their Standard Library implementation from Dinkumware.

Now, as it happens, Dinkumware is also the author of N2401, which introduced these classes. So I'm going to side with them.
No. wchar_t is only required to be able to represent the largest extended character set among the locales supported by the implementation, which could theoretically fit in a char.
Type wchar_t is a distinct type whose values can represent distinct codes for all members of the largest extended character set specified among the supported locales (22.3.1).
— C++ [basic.fundamental] 3.9.1/5
As such, it is not even required to support Unicode:
The width of wchar_t is compiler-specific and can be as small as 8 bits. Consequently, programs that need to be portable across any C or C++ compiler should not use wchar_t for storing Unicode text. The wchar_t type is intended for storing compiler-defined wide characters, which may be Unicode characters in some compilers.
The Unicode Standard, Version 4.0 (ISO/IEC 10646:2003)
As Elem can be wchar_t, char16_t, or char32_t, clause 4.1 says nothing about a required wchar_t encoding. It states something about the conversion performed.

From the wording, it is clear that the conversion is between UTF-8 and either UCS-2 or UCS-4, depending on the size of Elem. So if wchar_t is 16 bits, the conversion will be with UCS-2, and if it is 32 bits, with UCS-4.
Why does the standard mention UCS-2 and UCS-4 and not UTF-16 and UTF-32? Because codecvt_utf8 converts a multi-byte UTF-8 sequence into a single wide character. Although, it is not clear to me what will happen if a UTF-8 text contains a sequence corresponding to a Unicode character that is not available in the UCS-2 used for a receiving char16_t.
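One way to find out is to simply try it. If the conversion is reported as an error, wstring_convert is specified to throw std::range_error; whether your implementation reports an error or produces something else is exactly the open question, so the sketch below just prints whichever outcome you get:

    #include <codecvt>
    #include <iostream>
    #include <locale>
    #include <stdexcept>
    #include <string>

    int main() {
        // U+1F600 needs four UTF-8 bytes and has no representation in UCS-2.
        std::string utf8 = u8"\U0001F600";
        std::wstring_convert<std::codecvt_utf8<char16_t>, char16_t> conv;
        try {
            std::u16string u16 = conv.from_bytes(utf8);
            std::cout << "converted to " << u16.size() << " char16_t unit(s)\n";
        } catch (const std::range_error& e) {
            std::cout << "conversion failed: " << e.what() << '\n';
        }
        return 0;
    }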