Why don't DataOutputStream.writeChars(str) and String(byte[]) use the same encoding?

旧城冷巷雨未停 submitted on 2019-12-08 06:30:20

Question


I'm writing some marshaling/unmarshaling routines for a class project and am a bit perplexed about Java's default behavior in this case. Here are my "naive" subroutines for writing and reading strings to and from byte streams:

protected static void write(DataOutputStream dout, String str)
        throws IOException{
    dout.writeInt(str.length());
    dout.writeChars(str);
}

protected static String readString(DataInputStream din)
        throws IOException{
    int strLength = 2*din.readInt(); // b/c there are two bytes per char
    byte[] stringHolder = new byte[strLength];
    din.read(stringHolder);
    return new String(stringHolder);
}

Unfortunately, this simply doesn't work; the characters are written in UTF-16 format by default, but String(byte[]) seems to assume that each byte will contain a character, and since ASCII characters all start with a 0 byte in UTF-16, the constructor appears to just give up and return an empty string. The solution is to change readString to specify that it must use UTF-16 encoding:

protected static String readString(DataInputStream din)
        throws IOException{
    int strLength = 2*din.readInt(); // b/c there are two bytes per char
    byte[] stringHolder = new byte[strLength];
    din.readFully(stringHolder); // readFully: read() may return fewer bytes than requested
    return new String(stringHolder, "UTF-16");
}

My question is, why is this necessary? Since Java uses UTF-16 for strings by default, why wouldn't it assume that UTF-16 is being used when reading chars from bytes? Or, alternatively, why wouldn't it just encode the chars as bytes in the first place by default? In short, why don't the default behaviors of the writeChars() method and the String(byte[]) constructor parallel each other?


Answer 1:


The issue is that you are writing the underlying char[], which is essentially a byte[] holding the UTF-16 representation of the string; see the javadoc.

You are then reading with the String(byte[] bytes) constructor, which decodes using the platform's default encoding — presumably UTF-8 in your case.

You need to be consistent. In fact, the DataOutputStream.writeUTF() and DataInputStream.readUTF() methods are designed specifically for this.

If you do want to work with the underlying byte[] for some reason, you can get the UTF-8 representation of a String easily with String.getBytes("UTF-8"); again, see the javadoc.

To simplify matters, you could instead use an ObjectOutputStream and an ObjectInputStream, which would serialize the actual String to the stream rather than just its char[] representation.
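As a minimal sketch of the writeUTF()/readUTF() round trip the answer recommends (the class name and string value are illustrative, not from the original post):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class UtfRoundTrip {
    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        try (DataOutputStream dout = new DataOutputStream(baos)) {
            // writeUTF prefixes a 2-byte length, then writes modified UTF-8 bytes
            dout.writeUTF("hello");
        }
        try (DataInputStream din = new DataInputStream(
                new ByteArrayInputStream(baos.toByteArray()))) {
            // readUTF reads the same length prefix and decodes symmetrically,
            // so no charset name ever needs to be spelled out by the caller
            System.out.println(din.readUTF()); // prints "hello"
        }
    }
}
```

Because the length prefix and encoding are part of the writeUTF()/readUTF() contract, the writer and reader cannot disagree the way writeChars() and String(byte[]) do.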




Answer 2:


It's better to think of Java as not applying any encoding to its characters: its strings are simply raw 16-bit char values, which is the same as UTF-16. The reason the "other" methods default to the system encoding is that different platforms use different default encodings. For example, it wouldn't make sense to write UTF-8, which embeds plain ASCII characters, to a mainframe that uses EBCDIC.
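To make the platform dependence concrete, a quick check (class name is illustrative) shows both the default charset that String(byte[]) would use and the zero bytes that UTF-16 puts in front of every ASCII character:

```java
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

public class DefaultCharsetDemo {
    public static void main(String[] args) {
        // This is the charset String(byte[]) and String.getBytes() fall back to;
        // it varies by platform (e.g. UTF-8 on most Linux systems)
        System.out.println(Charset.defaultCharset().name());

        // "hi" in UTF-16BE (what writeChars effectively emits):
        // each ASCII char is preceded by a zero byte
        byte[] utf16 = "hi".getBytes(StandardCharsets.UTF_16BE);
        for (byte b : utf16) {
            System.out.printf("%02x ", b); // prints: 00 68 00 69
        }
        System.out.println();
    }
}
```

Those leading zero bytes are exactly what a default-charset decode misinterprets, which is why the naive readString() in the question fails.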



Source: https://stackoverflow.com/questions/14927521/why-dont-dataoutputstream-writecharsstr-and-stringbyte-use-the-same-encod
