Question
I have a decryption routine in VB6, and I now want the same decryption in C#. The strings that need decrypting are Unicode, so I use Encoding.Unicode.GetString to read the input in C#. The input then looks exactly the same as in VB6.
The first few characters in the loop decrypt correctly, but then I hit a difference: C# reads the character '˜' with a different character code than VB6 does.
When debugging I see the following in VB and in .Net:
VB6 ˜ = code 152
C# ˜ = code 732
Needless to say, decryption fails. I need to get 152 for the character mentioned above.
What's wrong here?
Regards,
Michel
Answer 1:
Your VB6 wasn't reading Unicode (I'd guess it was using the Windows-1252 codepage), which is why it came back with a different character code.
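The codes in the question are consistent with that guess: '˜' is U+02DC (732) in a .NET string, but the Windows-1252 codepage maps it to the single byte 0x98 (152), which is what VB6's Asc() reports. A minimal sketch of recovering the VB6 value, assuming the data really is Windows-1252 text:

```csharp
using System;
using System.Text;

class CodepageDemo
{
    static void Main()
    {
        // In a .NET (UTF-16) string, '˜' is U+02DC, i.e. character code 732.
        char c = '˜';
        Console.WriteLine((int)c); // prints 732

        // Re-encoding with the Windows-1252 codepage recovers the single
        // byte VB6 sees: 0x98 = 152.
        // (On .NET Core/5+, the System.Text.Encoding.CodePages package and a
        // call to Encoding.RegisterProvider are needed before this works.)
        Encoding cp1252 = Encoding.GetEncoding(1252);
        byte b = cp1252.GetBytes(new[] { c })[0];
        Console.WriteLine(b); // prints 152
    }
}
```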
Answer 2:
What do you mean by "character 152" exactly? How did you get that number?
Note that being "in Unicode" could mean many different things. Are you sure it's encoded as UTF-16 in the binary data? If you could post more about the source data, that would be very helpful.
Also, encryption and decryption should almost always be done using bytes, not characters. While I understand you need to reproduce legacy behaviour, you should try to migrate away from treating strings as opaque binary data over time.
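The byte-first approach above can be sketched as follows. The actual VB6 routine isn't shown in the question, so a single-byte XOR "cipher" stands in for it here as a hypothetical example; the point is that the loop operates on a byte array and the string is decoded only once, at the very end:

```csharp
using System;
using System.Text;

class ByteWiseDecrypt
{
    // Hypothetical XOR routine standing in for the real (unshown) cipher.
    static byte[] XorDecrypt(byte[] data, byte key)
    {
        var result = new byte[data.Length];
        for (int i = 0; i < data.Length; i++)
            result[i] = (byte)(data[i] ^ key);
        return result;
    }

    static void Main()
    {
        // "hello" XORed byte-by-byte with the key 0x77.
        byte[] cipherBytes = { 0x1F, 0x12, 0x1B, 0x1B, 0x18 };
        byte[] plainBytes = XorDecrypt(cipherBytes, 0x77);

        // Decode bytes to a string exactly once, after all crypto is done.
        string plaintext = Encoding.GetEncoding(1252).GetString(plainBytes);
        Console.WriteLine(plaintext); // prints "hello"
    }
}
```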
Answer 3:
I've done this before. The problem is in your encoding. Where .NET is Unicode, VB6 is Unifail.
On the .NET side, you need to use Encoding.ASCII to convert your strings into byte arrays and vice versa.
Encoding.ASCII.GetString(decrypted);
//and
Encoding.ASCII.GetBytes(cleartext);
So, when you are encrypting something to send to the VB app, you must use ASCII.GetBytes and then encrypt that byte array; and when you get a byte array from the VB side, you must decrypt it and use ASCII.GetString to decode the bytes into a usable string.
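One caveat worth noting: ASCII is a 7-bit encoding, so it cannot represent character code 152 from the question; Encoding.ASCII replaces anything above 127 with '?'. For data containing such characters, a single-byte ANSI codepage like Windows-1252 preserves the byte values, as this small comparison shows:

```csharp
using System;
using System.Text;

class AsciiCaveat
{
    static void Main()
    {
        string s = "˜"; // the problem character from the question

        // ASCII is 7-bit: anything above 127 is replaced with '?' (63),
        // so the original byte value 152 is lost.
        byte[] ascii = Encoding.ASCII.GetBytes(s);
        Console.WriteLine(ascii[0]); // prints 63

        // Windows-1252 keeps the original single-byte value.
        byte[] ansi = Encoding.GetEncoding(1252).GetBytes(s);
        Console.WriteLine(ansi[0]); // prints 152
    }
}
```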
Source: https://stackoverflow.com/questions/1058270/net-unicode-problem-vb6-legacy