Here's a little program:
#!/usr/bin/env python
# -*- encoding: utf-8 -*-
print('abcd kΩ ☠ °C √Hz µF ü ☃ ♥')
print(u'abcd kΩ ☠ °C √Hz µF ü ☃ ♥')
@dan04: You are right that the problem is that the encoding of the file does not match the encoding of stdout. Nevertheless, one way to solve the problem is to change the encoding of the file: on Windows, Notepad++ can be used to save the code with UTF-8 character encoding.
An alternative is GNU recode.
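If you'd rather not install anything, a few lines of Python can re-encode the file for you. This is a minimal sketch, assuming the script is currently saved as Latin-1 (the filename and source encoding are placeholders; adjust them to your case):

# Re-encode a source file to UTF-8 (hypothetical filename and encoding).
source_encoding = 'latin-1'  # assumption: whatever encoding your editor used

with open('script.py', 'rb') as f:
    text = f.read().decode(source_encoding)

with open('script.py', 'wb') as f:
    f.write(text.encode('utf-8'))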
Unicode output from Python to the Windows console just doesn't work. Python can't be persuaded to emit the native Windows encoding, which expects wide (UCS-2) characters.
I/O in Python (and most other languages) is based on bytes. When you write a byte string (`str` in 2.x, `bytes` in 3.x) to a file, the bytes are simply written as-is. When you write a Unicode string (`unicode` in 2.x, `str` in 3.x) to a file, the data needs to be encoded to a byte sequence.
For a further explanation of this distinction see the Dive into Python 3 chapter on strings.
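A short sketch of the distinction, using `io.open` so it behaves the same in 2.x and 3.x (the filenames are made up):

# -*- coding: utf-8 -*-
import io

data = u'kΩ ☃'

# A byte string is written verbatim; we pick the encoding ourselves.
with open('out.bin', 'wb') as f:
    f.write(data.encode('utf-8'))

# A Unicode string has to be encoded on the way out; a text-mode file
# object does it for us, using whatever encoding we hand it.
with io.open('out.txt', 'w', encoding='utf-8') as f:
    f.write(data)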
print('abcd kΩ ☠ °C √Hz µF ü ☃ ♥')
Here, the string is a byte string. Because the encoding of your source file is UTF-8, the bytes are
'abcd k\xce\xa9 \xe2\x98\xa0 \xc2\xb0C \xe2\x88\x9aHz \xc2\xb5F \xc3\xbc \xe2\x98\x83 \xe2\x99\xa5'
The `print` statement writes these bytes to the console as-is. But the Windows console interprets byte strings as being encoded in the "OEM" code page, which in the US is 437. So the string you actually see on your screen is
abcd k╬⌐ Γÿá ┬░C ΓêÜHz ┬╡F ├╝ Γÿâ ΓÖÑ
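You don't need a Windows box to reproduce that mis-rendering; this sketch just re-interprets the UTF-8 bytes as code page 437, which is exactly what the console is doing:

# -*- coding: utf-8 -*-
utf8_bytes = u'abcd kΩ ☠ °C √Hz µF ü ☃ ♥'.encode('utf-8')

# Read the UTF-8 bytes as IBM437, like a US Windows console would.
print(utf8_bytes.decode('cp437'))
# abcd k╬⌐ Γÿá ┬░C ΓêÜHz ┬╡F ├╝ Γÿâ ΓÖÑ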
On your Ubuntu system this doesn't cause a problem, because the default console encoding there is UTF-8, so there is no discrepancy between the source file encoding and the console encoding.
print(u'abcd kΩ ☠ °C √Hz µF ü ☃ ♥')
When printing a Unicode string, the string has to be encoded into bytes. But that only works with an encoding that supports all of the characters, and neither of yours does: the console's IBM437 code page is missing ☠☃♥, and your "ANSI" code page, windows-1252, is missing Ω☠√☃♥. So, in both cases, you get a `UnicodeEncodeError` trying to print the string.
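A quick way to see the failure for yourself (the exact wording of the error varies between Python versions):

# -*- coding: utf-8 -*-
snowman = u'☃'

try:
    snowman.encode('cp437')  # the console's OEM code page
except UnicodeEncodeError as e:
    print(e)  # 'charmap' codec can't encode character ... in position 0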
What gives?
Windows and Linux took vastly different approaches to supporting Unicode.
Originally, they both worked pretty much the same way: each locale has its own language-specific `char`-based encoding (the "ANSI code page" in Windows). Western languages used ISO-8859-1 or windows-1252, Russian used KOI8-R or windows-1251, etc.
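To make that concrete: the very same byte means different things under different locales' code pages. A small sketch:

# One byte, two locales, two different characters.
b = b'\xe9'
print(b.decode('windows-1252'))  # é
print(b.decode('koi8-r'))        # И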
When Windows NT added support for Unicode (in the early days, when it was assumed that Unicode would use 16-bit characters), it did so by creating a parallel version of its API that used `wchar_t` instead of `char`. For example, the MessageBox function was split into two functions:
int MessageBoxA(HWND hWnd, const char* lpText, const char* lpCaption, unsigned int uType);
int MessageBoxW(HWND hWnd, const wchar_t* lpText, const wchar_t* lpCaption, unsigned int uType);
The "W" functions are the "real" ones. The "A" functions exist for backwards compatibility with DOS-based Windows and mostly just convert their string arguments to UTF-16 and then call the corresponding "W" function.
In the Unix world (specifically, Plan 9), writing a whole new version of the POSIX API was seen as impractical, so Unicode support was approached differently: the existing support for multi-byte encodings in CJK locales was reused for a new encoding, now known as UTF-8.
The preference for UTF-8 on Unix-like systems and UTF-16 on Windows is a huge pain in the ass when writing cross-platform code that supports Unicode. Python tries to hide this from the programmer, but printing to the console is one of Joel's "leaky abstractions".
Your problem here is that your program expects and outputs UTF-8, but consoles and various online Python runners use other code pages. There is no way to write special characters that work in all encodings without modification. However, if you choose to use UTF-8 everywhere, you should be safe.
I think any terminal in Windows will do, so don't bother switching out the default one (cmd.exe) just because of this. Instead, change the encoding of the terminal to UTF-8 as well, to match the encoding of your Python script.
Unfortunately, I've never found a way to set UTF-8 as the default code page, so it has to be done every time you open a new command prompt. But it's a simple command, so it's only half-bad... You change the encoding by switching the code page:
>chcp 65001
Active code page: 65001
Note that you have to use a TrueType font for this to work. Most sources on the web seem to suggest Lucida Console.
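To confirm from inside Python that the switch stuck, you can ask the console for its output code page; a small Windows-only sketch using `ctypes`:

import ctypes
import sys

# GetConsoleOutputCP reports the code page the console uses for output.
print(ctypes.windll.kernel32.GetConsoleOutputCP())  # 65001 after chcp 65001
print(sys.stdout.encoding)  # what Python believes the console expects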
There are two possible reasons:

1. `print`. You cannot output raw Unicode, so `print` needs to figure out how to convert it to the byte stream expected by the console (it uses `sys.stdout.encoding`, AFAIK), which brings us to