This is the way I try to do it:
#include <windows.h>
#include <iostream>
using namespace std;
int main() {
    SetConsoleOutputCP(CP_UTF8);
    //ge
By default, the wide-character print functions on Windows do not handle characters outside the ASCII range.
There are a few ways to get Unicode data to the Windows console:
1. Use the console API directly: WriteConsoleW. You'll have to ensure you're actually writing to a console and use other means when the output is going to something else.
2. Set the mode of the standard output file descriptors to one of the 'Unicode' modes, _O_U16TEXT or _O_U8TEXT. This causes the wide-character output functions to correctly output Unicode data to the Windows console. If they're used on file descriptors that don't represent a console, they cause the output stream of bytes to be UTF-16 and UTF-8, respectively. N.B. after setting these modes, the non-wide-character functions on the corresponding stream are unusable and result in a crash. You must use only the wide-character functions.
3. UTF-8 text can be printed directly to the console by setting the console output codepage to CP_UTF8, if you use the right functions. Most of the higher-level functions, such as basic_ostream<char>::operator<<(char*), don't work this way, but you can either use lower-level functions or implement your own ostream that works around the problem the standard functions have.
The problem with the third method is this:
putc('\302', stdout); putc('\260', stdout); // doesn't work with CP_UTF8
puts("\302\260"); // correctly writes UTF-8 data to the Windows console with CP_UTF8
Unlike on most operating systems, the console on Windows is not simply another file that accepts a stream of bytes. It's a special device created and owned by the program and accessed via its own unique Win32 API. The issue is that when the console is written to, the API sees exactly the extent of the data passed in that call, and the conversion from narrow characters to wide characters occurs without considering that the data may be incomplete. When a multibyte character is passed using more than one call to the console API, each separately passed piece is seen as an illegal encoding and is treated as such.
It ought to be easy enough to work around this, but the CRT team at Microsoft views it as not their problem whereas whatever team works on the console probably doesn't care.
You might solve it by implementing your own streambuf subclass which handles the conversion to wchar_t correctly, i.e., accounting for the fact that the bytes of a multibyte character may arrive separately, and maintaining conversion state between writes (e.g., std::mbstate_t).
Another trick, instead of SetConsoleOutputCP, would be using _setmode on stdout:
// Includes needed for _setmode()
#include <io.h>
#include <fcntl.h>
#include <stdio.h>

int main() {
    _setmode(_fileno(stdout), _O_U16TEXT);
    const wchar_t* unicode_text = L"aäbcdefghijklmnoöpqrsßtuüvwxyz";
    wprintf(L"%ls", unicode_text);
    return 0;
}
Don't forget to remove the call to SetConsoleOutputCP(CP_UTF8);
I had similar problems, but none of the existing answers worked for me. Something else I observed: if I stuck UTF-8 characters in a plain string literal, they printed properly, but if I used a UTF-8 literal (u8"text"), the characters got butchered by the compiler (proved by printing their numeric values one byte at a time; the raw literal had the correct UTF-8 bytes, as verified on a Linux machine, but the u8 literal was garbage).
After some poking around, I found the solution: the /utf-8 compiler flag. With that, everything Just Works; my sources are UTF-8, I can use explicit UTF-8 literals, and output works with no other changes needed.
I solved the problem in the following way:
Lucida Console doesn't seem to support umlauts, so changing the console font to Consolas, for example, works.
#include <stdio.h>
#include <Windows.h>
int main()
{
SetConsoleOutputCP(CP_UTF8);
// I'm using Visual Studio, so encoding the source file in UTF-8 won't work
const char* message = "a" "\xC3\xA4" "bcdefghijklmno" "\xC3\xB6" "pqrs" "\xC3\x9F" "tu" "\xC3\xBC" "vwxyz";
// Note the capital S in the first argument, when used with wprintf it
// specifies a single-byte or multi-byte character string (at least on
// Visual C, not sure about the C library MinGW is using)
wprintf(L"%S", message);
}
EDIT: fixed stupid typos and the decoding of the string literal, sorry about those.
// Save as UTF-8 without signature (BOM)
#include <stdio.h>
#include <windows.h>
int main() {
SetConsoleOutputCP(65001);
const char unicode_text[]="aäbcdefghijklmnoöpqrsßtuüvwxyz";
printf("%s\n", unicode_text);
}
Result:
aäbcdefghijklmnoöpqrsßtuüvwxyz
The console can be set to display UTF-8 chars: as @vladasimovic answers, SetConsoleOutputCP(CP_UTF8) can be used for that. Alternatively, you can prepare your console with the DOS command chcp 65001, or with the system call system("chcp 65001 > nul") in the main program. Don't forget to save the source code in UTF-8 as well.
To check the UTF-8 support, run
#include <stdio.h>
#include <windows.h>
BOOL CALLBACK showCPs(LPTSTR cp) {
    puts(cp);
    return TRUE;
}
int main() {
EnumSystemCodePages(showCPs,CP_SUPPORTED);
}
65001
should appear in the list.
The Windows console uses OEM codepages by default, and most default raster fonts support only national characters. Windows XP and newer also support TrueType fonts, which should display the missing chars (@Devenec suggests Lucida Console in his answer).
As @bames53 points out in his answer, the Windows console is not a stream device; you need to write all bytes of a multibyte character at once. Sometimes printf botches the job, putting the bytes into the output buffer one by one. Try using sprintf and then puts on the result, or fflush only the accumulated output buffer.
Note the UTF-8 format: one character is encoded as 1 to 4 bytes. Use this function to shift to the next character in the string:
const char* ucshift(const char* str, int len=1) {
    for(int i=0; i<len; ++i) {
        if(*str==0) return str;       // stop at the terminator
        if(*str<0) {                  // byte >= 0x80: lead of a multibyte sequence
            unsigned char c = *str;
            while((c<<=1)&128) ++str; // skip one continuation byte per extra leading 1-bit
        }
        ++str;
    }
    return str;
}
...and this function to decode the bytes into a Unicode code point:
int ucchar(const char* str) {
    if(!(*str&128)) return *str;                    // plain ASCII byte
    unsigned char c = *str, bytes = 0;
    while((c<<=1)&128) ++bytes;                     // count the continuation bytes
    int result = 0;
    for(int i=bytes; i>0; --i)                      // 6 data bits per continuation byte
        result|= (*(str+i)&127)<<(6*(bytes-i));
    int mask = 1;
    for(int i=bytes; i<6; ++i) mask<<= 1, mask|= 1; // mask for the lead byte's data bits
    result|= (*str&mask)<<(6*bytes);
    return result;
}
Then you can try some Win32 API function like MultiByteToWideChar (note that setlocale() matters for the CRT conversions such as mbstowcs, not for this Win32 call), or you can use your own mapping from the Unicode table to your active working codepage. Example:
int main() {
system("chcp 65001 > nul");
char str[] = "příšerně"; // file saved in UTF-8
for(const char* p=str; *p!=0; p=ucshift(p)) {
int c = ucchar(p);
if(c<128) printf("%c\n",c);
else printf("%d\n",c);
}
}
This should print
p
345
237
353
e
r
n
283
If your codepage doesn't support those Czech accented letters, you could map 345=>r, 237=>i, 353=>s, 283=>e. There are at least 5(!) different charsets just for Czech. Displaying readable characters under different Windows locales is a horror.