Question
I found the following code on SO. Does this really work?
String xml = new String("áéíóúñ");
byte[] latin1 = xml.getBytes("UTF-8");
byte[] utf8 = new String(latin1, "ISO-8859-1").getBytes("UTF-8");
I mean, latin1 is UTF-8-encoded in the second line, but read as ISO-8859-1-encoded in the third? Can this ever work?
Not that I want to criticize the cited code; I am just confused, since I ran into some very similar legacy code that seems to work, and I cannot explain why.
EDIT: I guess "UTF-8" in the second line of the original post was just a typo, but I am not sure ...
EDIT2: After my initial posting, someone edited the code above and changed the second line to byte[] latin1 = xml.getBytes("ISO-8859-1");. I don't know who did that or why, but it clearly caused a lot of confusion. Sorry to everyone who saw the wrong version of the code; it has been corrected now.
Answer 1:
getBytes(Charset charset) results in a byte array encoded with the given charset, so latin1 is actually UTF-8-encoded.
Put System.out.println(latin1.length); as the third line and it will tell you that the byte array length is 12. Each of the six characters in the string is above U+007F and therefore takes two bytes in UTF-8, so 12 bytes means the array really is UTF-8-encoded (ISO-8859-1 would have produced only 6 bytes).
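You can verify this yourself; here is a minimal sketch (class name made up here, using the StandardCharsets constants to avoid the checked UnsupportedEncodingException of the String-based getBytes overload):

import java.nio.charset.StandardCharsets;

public class LengthCheck {
    public static void main(String[] args) {
        String xml = "áéíóúñ"; // 6 characters, all above U+007F
        // UTF-8 needs two bytes for each of these characters: 6 * 2 = 12.
        System.out.println(xml.getBytes(StandardCharsets.UTF_8).length);      // 12
        // ISO-8859-1 maps each of them to a single byte: 6.
        System.out.println(xml.getBytes(StandardCharsets.ISO_8859_1).length); // 6
    }
}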
new String(latin1, "ISO-8859-1") is incorrect, because latin1 is UTF-8-encoded and you are telling Java to parse it as ISO-8859-1. That is why it produces a String made of 12 garbage characters: Ã¡Ã©Ã­Ã³ÃºÃ± (each byte becomes one character; the byte 0xAD decodes to an invisible soft hyphen, which is why only 11 glyphs are visible).
When you then get the bytes of that garbage String using UTF-8, all 12 characters are again above U+007F and take two bytes each, so the result is a 24-byte array.
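The whole mis-decoding step can be reproduced the same way (again a sketch with a hypothetical class name):

import java.nio.charset.StandardCharsets;

public class MisDecode {
    public static void main(String[] args) {
        byte[] bytes = "áéíóúñ".getBytes(StandardCharsets.UTF_8);        // 12 bytes
        // Decoding UTF-8 bytes as ISO-8859-1 turns every byte into one character.
        String garbage = new String(bytes, StandardCharsets.ISO_8859_1);
        System.out.println(garbage.length());                            // 12
        // Each of those 12 characters is above U+007F, so UTF-8 again
        // needs two bytes per character: 12 * 2 = 24.
        System.out.println(garbage.getBytes(StandardCharsets.UTF_8).length); // 24
    }
}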
I hope everything is clear now.
Answer 2:
Those characters are present in both character encodings; UTF-8 and ISO-8859-1 just use different byte representations for each character beyond the ASCII range.
If you used a character which is present in UTF-8 but not in ISO-8859-1, then it would of course fail.
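For example, the euro sign (U+20AC) exists in Unicode and hence in UTF-8, but has no ISO-8859-1 code point. A small sketch of what happens then; note that String.getBytes does not throw for unmappable characters, it silently replaces them with '?':

import java.nio.charset.StandardCharsets;

public class UnmappableChar {
    public static void main(String[] args) {
        String euro = "€"; // U+20AC, not representable in ISO-8859-1
        byte[] latin1 = euro.getBytes(StandardCharsets.ISO_8859_1);
        System.out.println(latin1.length); // 1
        System.out.println(latin1[0]);     // 63, i.e. '?' -- silently replaced
        System.out.println(euro.getBytes(StandardCharsets.UTF_8).length); // 3 (0xE2 0x82 0xAC)
    }
}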
Source: https://stackoverflow.com/questions/9330793/conversion-between-utf-8-and-iso-8859-1