char16_t and char32_t endianness

Posted by 大兔子大兔子 on 2019-12-13 16:43:21

Question


In C11, support for the portable character types char16_t and char32_t was added, intended for UTF-16 and UTF-32 respectively.

However, in the technical report, there is no mention of endianness for these two types.

For example, compiling the following program with gcc 4.8.4 (-std=c11) on my x86_64 computer:

#include <stdio.h>
#include <uchar.h>

int main(void) {
    char16_t utf16_str[] = u"十六";  // U+5341 U+516D
    unsigned char *chars = (unsigned char *) utf16_str;
    printf("Bytes: %X %X %X %X\n", chars[0], chars[1], chars[2], chars[3]);
    return 0;
}

will produce

Bytes: 41 53 6D 51

which means the representation is little-endian.

But is this behaviour platform/implementation dependent: does it always adhere to the platform's endianness or may some implementation choose to always implement char16_t and char32_t in big-endian?


Answer 1:


char16_t and char32_t do not guarantee Unicode encoding. (That is a C++ feature.) The macros __STDC_UTF_16__ and __STDC_UTF_32__, respectively, indicate that Unicode code points actually determine the fixed-size character values. See C11 §6.10.8.2 for these macros.

(By the way, __STDC_ISO_10646__ indicates the same thing for wchar_t, and it also reveals which Unicode edition is implemented via wchar_t. Of course, in practice, the compiler simply copies code points from the source file to strings in the object file, so it doesn't need to know much about particular characters.)

Given that Unicode encoding is in effect, code point values stored in char16_t or char32_t must have the same object representation as uint_least16_t and uint_least32_t, because they are defined to be typedef aliases to those types, respectively (C11 §7.28). This is again somewhat in contrast to C++, which makes those types distinct but explicitly requires compatible object representation.

The upshot is that yes, there is nothing special about char16_t and char32_t. They are ordinary integers in the platform's endianness.

Note, however, that printing the char16_t values themselves (e.g. printf("U+%X\n", (unsigned)utf16_str[0])) has nothing to do with endianness: it uses the values of the wide characters without inspecting how they map to bytes in memory. Only the byte-wise read through an unsigned char * exposes the byte order.




Answer 2:


However, in the technical report, there is no mention of endianness for these two types.

Indeed. The C standard doesn't specify much regarding the representation of multibyte characters in source files.

char16_t utf16_str[] = u"十六"; // U+5341 U+516D
printf("U+%X U+%X\n", utf16_str[0], utf16_str[1]);

will produce U+5341 U+516D, which means that it's little-endian.

But is this behaviour platform/implementation dependent: does it always adhere to the platform's endianness or may some implementation choose to always implement char16_t and char32_t in big-endian?

Yes, the behaviour is implementation-dependent, as you call it. See C11 §5.1.1.2:

Physical source file multibyte characters are mapped, in an implementation-defined manner, to the source character set (introducing new-line characters for end-of-line indicators) if necessary.

That is, whether the multibyte characters in your source code are considered big endian or little endian is implementation-defined. I would advise using something like u"\u5341\u516d", if portability is an issue.




Answer 3:


UTF-16 and UTF-32 do not have a defined endianness. They are usually encoded in the host's native byte order. This is why a byte order mark (BOM) can be inserted at the beginning of the text to indicate the endianness of a UTF-16 or UTF-32 stream.



Source: https://stackoverflow.com/questions/31433324/char16-t-and-char32-t-endianness
