wchar_t

May wchar_t be promoted to wint_t?

Submitted by 孤者浪人 on 2020-01-04 02:18:25
Question: I see a contradiction between the glibc reference and Amendment 1 to C90. The glibc reference says that wchar_t may be promoted to wint_t: "if wchar_t is defined as char the type wint_t must be defined as int due to the parameter promotion". But AMD1 says this: "Currently, an existing implementation could have wchar_t be int and wint_t be long, and default promotions would not change int to long. Basically, this is due to wchar_t and wint_t being typedefs. Hence, we will not now have wchar_t …"
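Not from the thread; a minimal sketch of the scenario the glibc sentence describes (the helper function is invented for illustration). On an implementation where wchar_t is a typedef for a type narrower than int, a wchar_t passed through a variadic "..." undergoes the default argument promotions, so the callee must read it back as the promoted type, and wint_t has to be wide enough to hold that value:

    #include <stdarg.h>
    #include <stdio.h>
    #include <wchar.h>

    /* Hypothetical helper: if wchar_t is narrower than int, the
     * wchar_t argument arrives widened by the default argument
     * promotions, so it must be fetched as the promoted type.    */
    static void show_first(int n, ...)
    {
        va_list ap;
        va_start(ap, n);
        wint_t wc = (wint_t)va_arg(ap, int); /* promoted type, not wchar_t */
        va_end(ap);
        wprintf(L"first: %lc\n", wc);
    }

    int main(void)
    {
        show_first(1, L'A');
        return 0;
    }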

What's the difference between glib gunichar and wchar_t and which is better for cross-platform solutions?

Submitted by 假装没事ソ on 2020-01-02 04:07:21
Question: I'm trying to write some C code which is portable only so far as the user has gcc and has glib installed. From all my research, I've found that with gcc a wchar_t is always defined as 4 bytes, and with glib a gunichar is also 4 bytes. What I haven't figured out is whether, like a gunichar, a wchar_t is encoded as UCS-4 as well. Is this the case? If so, I should be able to simply cast a gunichar* to a wchar_t* and use the stdc wcs* functions, right?

Answer 1: If you use GLib, don't use wchar_t. Use …
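The answer is cut off above; as a sketch of the GLib-native route it points toward, GLib's own conversion function g_utf8_to_ucs4() yields gunichar (always UCS-4) without ever involving wchar_t. The string literal here is just an example:

    /* build: gcc demo.c $(pkg-config --cflags --libs glib-2.0) */
    #include <glib.h>
    #include <stdio.h>

    int main(void)
    {
        GError *err = NULL;
        glong n_chars = 0;

        /* UTF-8 in, UCS-4 (gunichar) out; -1 means NUL-terminated */
        gunichar *ucs4 = g_utf8_to_ucs4("héllo", -1, NULL, &n_chars, &err);
        if (ucs4 == NULL) {
            fprintf(stderr, "conversion failed: %s\n", err->message);
            g_error_free(err);
            return 1;
        }
        printf("%ld code points, second is U+%04X\n",
               n_chars, (unsigned)ucs4[1]);  /* U+00E9 for 'é' */
        g_free(ucs4);
        return 0;
    }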

Is it possible to get a pointer to String^'s internal array in C++/CLI?

Submitted by 試著忘記壹切 on 2020-01-01 11:50:57
Question: The goal is to avoid copying the string data when I need a const wchar_t*. The answer seems to be yes, but the function PtrToStringChars doesn't have its own MSDN entry (it's only mentioned in the KB and in blogs as a trick). That made me suspicious and I want to check with you guys. Is it safe to use that function?

Answer 1: Yes, no problem. It is actually somewhat documented, but hard to find. The MSDN docs for the C++ libraries aren't great. It returns an interior pointer; that's not suitable for …
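Since this question is specifically about C++/CLI, a sketch in that dialect: the usual pattern for <vcclr.h> is to assign the interior pointer returned by PtrToStringChars to a pin_ptr, which keeps the garbage collector from moving the string while the raw pointer is in use.

    // compile with /clr
    #include <vcclr.h>   // PtrToStringChars
    #include <cwchar>
    using namespace System;

    int main()
    {
        String^ managed = "hello";

        // Pin the string: the raw pointer stays valid only while
        // the pin_ptr is in scope, so don't store it anywhere.
        pin_ptr<const wchar_t> p = PtrToStringChars(managed);
        std::wprintf(L"%ls\n", p);
        return 0;
    }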

Print wchar to Linux console?

Submitted by 爱⌒轻易说出口 on 2019-12-31 22:33:07
Question: My C program is pasted below. In bash, the program prints "char is " but Ω is not printed. My locale settings are all en_US.utf8.

    #include <stdio.h>
    #include <wchar.h>
    #include <stdlib.h>

    int main() {
        int r;
        wchar_t myChar1 = L'Ω';
        r = wprintf(L"char is %c\n", myChar1);
    }

Answer 1: This was quite interesting. Apparently the compiler translates the omega from UTF-8 to Unicode, but somehow the libc messes it up. First of all: the %c format specifier expects a char (even in the wprintf version), so you have to …
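A corrected sketch along the lines the answer starts to describe: use the %lc conversion for a wide character, and call setlocale() first so the C library knows the terminal's encoding (without it, the program runs in the "C" locale and cannot emit Ω as UTF-8).

    #include <stdio.h>
    #include <wchar.h>
    #include <locale.h>

    int main(void)
    {
        /* pick up the environment's locale (en_US.utf8 here) */
        setlocale(LC_ALL, "");

        wchar_t myChar1 = L'Ω';
        /* %lc takes a wide character; plain %c treats the argument
         * as a single byte, which is why Ω was dropped             */
        wprintf(L"char is %lc\n", myChar1);
        return 0;
    }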

Printing Unicode characters using write(2) in C

Submitted by 安稳与你 on 2019-12-31 05:29:28
Question: I'm working on a small piece of code that prints characters to the screen; it must support all of Unicode contained in a wchar_t, and I'm limited to only write(2). I managed to print an emoji using:

    write(1, "\U0001f921", 6);

So \U seems to be the way to go. However, I can't convert a wchar_t into the proper escape sequence, i.e. converting wchar_t c = L'🤡'; into \U0001f921. Can I even do that in C? Thanks a lot.

Answer 1: (quoting the question) "I'm working on a small piece of code that prints characters to …"
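The \U escape is resolved at compile time, so there is nothing to convert to at run time; what the program actually needs is the UTF-8 byte sequence for the code point. A sketch that encodes it by hand, assuming wchar_t holds a Unicode code point (true on Linux, where wchar_t is 32-bit):

    #include <unistd.h>
    #include <wchar.h>

    /* Hand-rolled UTF-8 encoder: does no validation (e.g. of
     * surrogate values); illustrative only.                    */
    static int put_utf8(wchar_t c)
    {
        unsigned char buf[4];
        int len;

        if (c < 0x80) {
            buf[0] = (unsigned char)c;
            len = 1;
        } else if (c < 0x800) {
            buf[0] = (unsigned char)(0xC0 | (c >> 6));
            buf[1] = (unsigned char)(0x80 | (c & 0x3F));
            len = 2;
        } else if (c < 0x10000) {
            buf[0] = (unsigned char)(0xE0 | (c >> 12));
            buf[1] = (unsigned char)(0x80 | ((c >> 6) & 0x3F));
            buf[2] = (unsigned char)(0x80 | (c & 0x3F));
            len = 3;
        } else {
            buf[0] = (unsigned char)(0xF0 | (c >> 18));
            buf[1] = (unsigned char)(0x80 | ((c >> 12) & 0x3F));
            buf[2] = (unsigned char)(0x80 | ((c >> 6) & 0x3F));
            buf[3] = (unsigned char)(0x80 | (c & 0x3F));
            len = 4;
        }
        return (int)write(1, buf, (size_t)len);
    }

    int main(void)
    {
        put_utf8(L'🤡');  /* same bytes write(1, "\U0001f921", 4) emits */
        put_utf8(L'\n');
        return 0;
    }

If calling setlocale() first is acceptable, the standard wcrtomb() from <wchar.h> does the same job using the current locale's encoding instead of hard-coding UTF-8.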

Solution for missing std::wstring support in Android NDK?

Submitted by 人走茶凉 on 2019-12-30 09:30:17
Question: I have a game which uses std::wstring as its basic string type in thousands of places, as well as doing operations with wchar_t and its functions: wcsicmp(), wcslen(), vsprintf(), etc. The problem is that wstring is not supported in r5c (the latest NDK at the time of this writing). I can't change the code to use std::string because of internationalization, and I would be breaking the game engine, which is used by many games... Which options do I have? 1 - Replace string and wstring with my own string … (a sketch of this first option follows below)
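A sketch of that first option, under assumptions: the guard macro is hypothetical, and it presumes the toolchain still provides char_traits<wchar_t> even where the std::wstring typedef and the wcs* functions are missing.

    #include <string>
    #include <cstddef>

    // Hypothetical guard; define it when building against an NDK
    // that lacks std::wstring and the wcs* functions.
    #ifdef MY_NDK_LACKS_WSTRING

    // Stand-in for std::wstring, instantiated directly.
    typedef std::basic_string<wchar_t> my_wstring;

    // Minimal fallback for the missing wcslen().
    inline std::size_t my_wcslen(const wchar_t* s)
    {
        const wchar_t* p = s;
        while (*p) ++p;
        return static_cast<std::size_t>(p - s);
    }

    #else

    typedef std::wstring my_wstring;

    #endif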

setlocale() doesn't work in iOS simulator?

Submitted by 你。 on 2019-12-24 15:09:10
Question: Update: Strangely, setlocale() only fails on the iOS Simulator, so I have amended the question title. It works fine on actual devices. I'm working with native (C/C++) code under iOS 6 and I need to format arbitrary wchar_t strings. However, when formatting strings containing codepoints outside the Latin-1 codepage, swprintf fails (return value -1 with errno set to EILSEQ).

    wchar_t buff[256];
    swprintf(buff, 256, L"\u00A9 %ls", L"ascii"); // works
    swprintf(buff, 256, L"\u03A0 %ls", L"ascii"); // will …
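A sketch of the usual remedy: request a UTF-8 locale explicitly before any wide formatting. The question's update suggests the simulator may still refuse, so treat this as the standard approach rather than a guaranteed fix.

    #include <stdio.h>
    #include <wchar.h>
    #include <locale.h>

    int main(void)
    {
        /* Ask for a UTF-8 locale so codepoints outside Latin-1
         * can be converted; NULL means the locale is unavailable. */
        if (setlocale(LC_ALL, "en_US.UTF-8") == NULL)
            fputs("locale not available\n", stderr);

        wchar_t buff[256];
        int n = swprintf(buff, 256, L"\u03A0 %ls", L"ascii");
        wprintf(L"swprintf returned %d: %ls\n",
                n, n < 0 ? L"(failed)" : buff);
        return 0;
    }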

wchar_t is 2 bytes in Visual Studio and stores UTF-16. How do Unicode-aware applications work with characters above U+FFFF?

Submitted by 我怕爱的太早我们不能终老 on 2019-12-23 19:28:46
Question: At our company we are planning to make our application Unicode-aware, and we are analyzing what problems we are going to encounter. In particular, our application will rely heavily on the lengths of strings, and we would like to use wchar_t as the base character type. The problem arises when dealing with characters that must be stored in two 16-bit units in UTF-16, namely characters at or above U+10000. Simple example: I have the UTF-8 string "蟂" (Unicode character U+87C2, in UTF-8: E8 9F 82) …
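The thread is cut off above; a sketch of the usual handling: when wchar_t is a 16-bit UTF-16 unit (as in Visual Studio), wcslen() counts units rather than characters, so a code-point count has to account for surrogate pairs.

    #include <stddef.h>

    /* Code-point count for a UTF-16 wchar_t string: a high surrogate
     * (0xD800-0xDBFF) followed by a low surrogate (0xDC00-0xDFFF)
     * encodes ONE character above U+FFFF, so each such pair adds two
     * units but only one character.                                  */
    static size_t utf16_codepoint_count(const wchar_t *s)
    {
        size_t count = 0;
        while (*s) {
            if (*s >= 0xD800 && *s <= 0xDBFF &&
                s[1] >= 0xDC00 && s[1] <= 0xDFFF)
                s += 2;   /* surrogate pair: one code point, two units */
            else
                s += 1;
            ++count;
        }
        return count;
    }

A lone high surrogate at the end of the string is safe here: s[1] is then the terminating 0, which fails the low-surrogate test.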

Why are there no “unsigned wchar_t” and “signed wchar_t” types?

Submitted by 拥有回忆 on 2019-12-23 12:44:08
Question: The signedness of char is not standardized; hence there are signed char and unsigned char types. Therefore, functions which work with a single character must use an argument type which can hold both signed char and unsigned char (this type was chosen to be int), because if the argument type were char, we would get type-conversion warnings from the compiler (if -Wconversion is used) in code like this:

    char c = 'ÿ';
    if (islower((unsigned char) c)) ...

    warning: conversion to ‘char’ from ‘unsigned …
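For contrast, a sketch of both idioms (the wrapper functions are invented for illustration): the narrow-character cast the question quotes, next to the wide-character case, where wint_t already serves as the single type that holds every wchar_t value plus WEOF, which is why the ctype-style wide APIs need no signed/unsigned pair of wchar_t.

    #include <ctype.h>
    #include <wchar.h>
    #include <wctype.h>

    /* Narrow-character idiom: cast through unsigned char so that
     * negative char values never reach islower().                */
    int is_lower_narrow(char c)
    {
        return islower((unsigned char)c);
    }

    /* Wide-character analogue: iswlower() takes wint_t, which
     * covers every wchar_t value plus WEOF, so no unsigned
     * variant of wchar_t is needed.                              */
    int is_lower_wide(wchar_t c)
    {
        return iswlower((wint_t)c);
    }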
