Why isn't wchar_t widely used in code for Linux / related platforms?

迷失自我 2020-12-30 04:22

This intrigues me, so I'm going to ask: why is wchar_t not used as widely on Linux and Linux-like systems as it is on Windows? Specifically, the Windows API uses wchar_t internally, whereas Linux generally does not.

4 Answers
  • 2020-12-30 04:43

    wchar_t is a wide character with platform-defined width, which doesn't really help much.

    UTF-8 encodes each character in 1-4 bytes. UCS-2, which uses exactly 2 bytes per character, is now obsolete and cannot represent the full Unicode character set.

    Linux applications that support Unicode tend to do so properly, above the byte-wise storage layer. Windows applications tend to make this silly assumption that only two bytes will do.

    wchar_t's Wikipedia article briefly touches on this.
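
    To make the width issue concrete, here is a minimal sketch (assuming a UTF-8 locale and a UTF-8 source encoding; the sample string is just an illustration) that prints sizeof(wchar_t) and counts the code points in a UTF-8 string with mbrtowc:

    #include <locale.h>
    #include <stdio.h>
    #include <string.h>
    #include <wchar.h>

    int main(void)
    {
        setlocale(LC_CTYPE, "");                 /* pick up the (assumed UTF-8) locale */
        printf("sizeof(wchar_t) = %zu\n", sizeof(wchar_t)); /* 4 with glibc, 2 on Windows */

        const char *s = "na\xc3\xafve";          /* "naïve": 5 code points, 6 bytes in UTF-8 */
        mbstate_t st = {0};
        size_t chars = 0;
        for (const char *p = s, *end = s + strlen(s); p < end; chars++) {
            wchar_t wc;
            size_t n = mbrtowc(&wc, p, end - p, &st);
            if (n == (size_t)-1 || n == (size_t)-2)
                break;                           /* invalid or truncated sequence */
            p += (n == 0 ? 1 : n);               /* n == 0 means an embedded NUL */
        }
        printf("%zu bytes, %zu code points\n", strlen(s), chars);
    }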

  • 2020-12-30 04:46

    The first people to use UTF-8 on a Unix-based platform explained:

    The Unicode Standard [then at version 1.1] defines an adequate character set but an unreasonable representation [UCS-2]. It states that all characters are 16 bits wide [no longer true] and are communicated and stored in 16-bit units. It also reserves a pair of characters (hexadecimal FFFE and FEFF) to detect byte order in transmitted text, requiring state in the byte stream. (The Unicode Consortium was thinking of files, not pipes.) To adopt this encoding, we would have had to convert all text going into and out of Plan 9 between ASCII and Unicode, which cannot be done. Within a single program, in command of all its input and output, it is possible to define characters as 16-bit quantities; in the context of a networked system with hundreds of applications on diverse machines by different manufacturers [italics mine], it is impossible.

    The italicized part is less relevant to Windows systems, which have a preference towards monolithic applications (Microsoft Office), non-diverse machines (everything's an x86 and thus little-endian), and a single OS vendor.

    And the Unix philosophy of having small, single-purpose programs means fewer of them need to do serious character manipulation.

    The source for our tools and applications had already been converted to work with Latin-1, so it was ‘8-bit safe’, but the conversion to the Unicode Standard and UTF[-8] is more involved. Some programs needed no change at all: cat, for instance, interprets its argument strings, delivered in UTF[-8], as file names that it passes uninterpreted to the open system call, and then just copies bytes from its input to its output; it never makes decisions based on the values of the bytes...Most programs, however, needed modest change.

    ...Few tools actually need to operate on runes [Unicode code points] internally; more typically they need only to look for the final slash in a file name and similar trivial tasks. Of the 170 C source programs...only 23 now contain the word Rune.

    The programs that do store runes internally are mostly those whose raison d’être is character manipulation: sam (the text editor), sed, sort, tr, troff, 8½ (the window system and terminal emulator), and so on. To decide whether to compute using runes or UTF-encoded byte strings requires balancing the cost of converting the data when read and written against the cost of converting relevant text on demand. For programs such as editors that run a long time with a relatively constant dataset, runes are the better choice...

    UTF-32, with code points directly accessible, is indeed more convenient if you need character properties like categories and case mappings.

    But widechars are awkward to use on Linux for the same reason that UTF-8 is awkward to use on Windows. GNU libc has no _wfopen or _wstat function.
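
    To see why that rarely hurts in practice, here is a small sketch (the file name is made up for illustration) showing that on Linux a UTF-8 file name goes straight through the ordinary narrow-character calls, with no _wfopen or _wstat required:

    #include <stdio.h>
    #include <sys/stat.h>

    int main(void)
    {
        /* "héllo.txt" as UTF-8 bytes; the kernel treats the name as an opaque byte string */
        const char *name = "h\xc3\xa9llo.txt";

        FILE *f = fopen(name, "w");              /* plain fopen, no wide-character variant */
        if (f) {
            fputs("hi\n", f);
            fclose(f);
        }

        struct stat sb;
        if (stat(name, &sb) == 0)                /* plain stat works the same way */
            printf("%s is %lld bytes\n", name, (long long)sb.st_size);
    }

    The Windows equivalent would have to switch to _wfopen and a wchar_t file name to handle non-ASCII names reliably.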

  • 2020-12-30 04:56

    wchar_t is not the same size on all platforms. On Windows it is a UTF-16 code unit that uses two bytes. On other platforms it typically uses 4 bytes (for UCS-4/UTF-32). It is therefore unlikely that these platforms would standardize on using wchar_t, since it would waste a lot of space.
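
    A quick way to see the space cost, sketched for a platform where wchar_t is 4 bytes (glibc, for example); the string is arbitrary:

    #include <stdio.h>
    #include <string.h>
    #include <wchar.h>

    int main(void)
    {
        const char    *narrow = "hello, world";
        const wchar_t *wide   = L"hello, world";

        /* 12 bytes as ASCII/UTF-8 vs. 48 bytes as 4-byte wchar_t on glibc */
        printf("narrow: %zu bytes\n", strlen(narrow));
        printf("wide:   %zu bytes\n", wcslen(wide) * sizeof(wchar_t));
    }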

  • 2020-12-30 05:03

    UTF-8, being backward-compatible with ASCII, makes it possible to ignore Unicode somewhat.

    Often, programs don't care (and in fact, don't need to care) what the input is, as long as there is no \0 byte that would terminate the string. See:

    #include <stdio.h>
    int main(void)
    {
        char buf[256];                      /* any reasonable size */
        printf("Your favorite pizza topping is which?\n");
        fgets(buf, sizeof(buf), stdin);     /* Jalapeños */
        printf("%s it shall be.\n", buf);
    }
    

    The only times I have found I needed Unicode support were when I had to handle a multibyte character as a single unit (wchar_t), e.g. when counting the number of characters in a string rather than bytes. iconv from UTF-8 to wchar_t will do that quickly. For bigger issues like zero-width spaces and combining diacritics, something heavier such as ICU is needed, but how often do you do that anyway?
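
    For that character-counting case you don't even need iconv: in a UTF-8 locale, mbstowcs with a NULL destination returns the number of wide characters the conversion would produce. A sketch (the sample string is arbitrary):

    #include <locale.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        setlocale(LC_CTYPE, "");                 /* assume a UTF-8 locale */
        const char *s = "Jalape\xc3\xb1os";      /* "Jalapeños": 9 characters, 10 bytes */

        size_t chars = mbstowcs(NULL, s, 0);     /* count only, no output buffer needed */
        if (chars == (size_t)-1) {
            perror("mbstowcs");
            return 1;
        }
        printf("%zu bytes, %zu characters\n", strlen(s), chars);
    }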
