I recently realized that I don't fully understand Java's string encoding process.
Consider the following code (a class that just prints a non-ASCII string literal):

    public class Main
    {
        public static void main(String[] args)
        {
            // a string literal containing non-ASCII characters
            System.out.println("héllo wörld");
        }
    }
You can specify the encoding of your source files when compiling (javac -encoding ...); otherwise, platform encoding is assumed. At the other end, System.out is a PrintStream, and PrintStream will transform your strings from UTF-16 to bytes in the system encoding prior to writing them to stdout.

Notes:
If you compile with different encodings, those encodings only affect your source files. If you don't have any special characters inside your sources, there will be no difference in the resulting bytecode.
At runtime, the default charset of the operating system is used. This is independent of the charset you used for compiling.
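As a sketch of that last point (the class name and strings here are just illustrative), you can print the platform default and then take it out of the equation by wrapping stdout in a PrintStream with an explicit charset:

    import java.io.PrintStream;
    import java.io.UnsupportedEncodingException;

    public class ExplicitOut
    {
        public static void main(String[] args) throws UnsupportedEncodingException
        {
            // System.out encodes with the platform default charset
            System.out.println(System.getProperty("file.encoding"));

            // A PrintStream with an explicit charset does not depend on it,
            // so the bytes written to stdout are the same on every machine
            PrintStream utf8Out = new PrintStream(System.out, true, "UTF-8");
            utf8Out.println("héllo wörld");
        }
    }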
A summary of "what to know" about string encodings in Java:
A String instance, in memory, is a sequence of 16-bit "code units", which Java handles as char values. Conceptually, those code units encode a sequence of "code points", where a code point is "the number attributed to a given character as per the Unicode standard". Code points range from 0 to a bit more than one million, although only 100 thousand or so have been defined so far. Code points from 0 to 65535 are encoded into a single code unit, while other code points use two code units. This process is called UTF-16 (aka UCS-2). There are a few subtleties (some code points are invalid, e.g. 65535, and there is a range of 2048 code points in the first 65536 reserved precisely for the encoding of the other code points).
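The code unit / code point distinction is easy to observe; a minimal sketch (the clef character is just an arbitrary example of a code point above 65535):

    public class CodeUnits
    {
        public static void main(String[] args)
        {
            // U+1D11E (MUSICAL SYMBOL G CLEF) is above 65535, so UTF-16
            // encodes it as a "surrogate pair" of two char code units
            String clef = "\uD834\uDD1E";

            System.out.println(clef.length());                            // 2 code units
            System.out.println(clef.codePointCount(0, clef.length()));    // 1 code point
            System.out.println(Integer.toHexString(clef.codePointAt(0))); // 1d11e
        }
    }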
When you print a String, e.g. with System.out.println(), the JVM will convert the string into something suitable for wherever those characters go, which often means converting them to bytes using a charset which depends on the current locale (or what the JVM guessed of the current locale).

Source files are themselves read with a charset which, by default, also depends on the current locale; the Java source compiler (javac) accepts a command-line flag (-encoding) which can be used to override that default choice.
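For example, to compile Main.java as UTF-8 regardless of what the platform default happens to be:

    javac -encoding UTF-8 Main.java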
While String instances do not depend on any kind of encoding as long as they remain in RAM, some of the operations you may want to perform on strings are locale-dependent. This is not a question of encoding, but a locale also defines a "language", and it so happens that the notions of uppercase and lowercase depend on the language which is used. The Usual Suspect is calling "unicode".toUpperCase(): this yields "UNICODE", except if the current locale is Turkish, in which case you get "UNİCODE" (the "I" has a dot). The basic assumption here is that if the current locale is Turkish then the data the application is managing is probably Turkish text; personally, I find this assumption at best questionable. But so it is.
In practical terms, you should specify encodings explicitly in your code, at least most of the time. Do not call String.getBytes(); call String.getBytes("UTF-8"). Use of the default, locale-dependent encoding is fine when it is applied to some data exchanged with the user, such as a configuration file or a message to display immediately; but elsewhere, avoid locale-dependent methods whenever possible.
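A small sketch of the difference (the "é" is just a character whose byte representation varies between common charsets):

    import java.io.UnsupportedEncodingException;
    import java.util.Arrays;

    public class ExplicitBytes
    {
        public static void main(String[] args) throws UnsupportedEncodingException
        {
            String s = "é";
            // Platform-dependent: one byte on a Latin-1 machine, two on a UTF-8 one
            System.out.println(Arrays.toString(s.getBytes()));
            // Explicit and reproducible everywhere: [-61, -87] (0xC3 0xA9)
            System.out.println(Arrays.toString(s.getBytes("UTF-8")));
        }
    }

On Java 7 and later you can also write s.getBytes(java.nio.charset.StandardCharsets.UTF_8), which avoids the checked UnsupportedEncodingException.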
Among other locale-dependent parts of Java, there are calendars. There is the whole time zone business, which depends on the "time zone", which should relate to the geographical position of the computer (and this is not part of the "locale" stricto sensu...). Also, countless Java applications mysteriously fail when run in Bangkok, because in a Thai locale, Java defaults to the Buddhist calendar, according to which the current year is 2553.
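You can observe this without flying to Bangkok; on Sun/Oracle JDKs at least, requesting a calendar for the Thai locale yields a Buddhist one (a sketch, class name illustrative):

    import java.util.Calendar;
    import java.util.Locale;

    public class ThaiYear
    {
        public static void main(String[] args)
        {
            // Gregorian year
            System.out.println(Calendar.getInstance(Locale.US).get(Calendar.YEAR));
            // Buddhist era: 543 years ahead of the Gregorian year
            System.out.println(Calendar.getInstance(new Locale("th", "TH")).get(Calendar.YEAR));
        }
    }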
As a rule of thumb, assume that the World is vast (it is!) and keep things generic: do not do anything which depends on a charset until the very last moment, when I/O must actually be performed.
Erm, based on this and this, the ACK control character is exactly the same in both encodings. The difference the link you pointed out is talking about is that DOS/Windows actually has symbols for most of the control characters in Windows-1252 (like the heart/club/spade/diamond characters and smileys), while ISO-8859 does not.