For example: \"½\" or ASCII DEC 189. When I read the bytes from a text file the byte[] contains the valid value, in this case 189.
Converting to Unicode results in
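For illustration, something like the following sketch (the file name and the exact decoding call are assumptions, since the original code isn't shown) reproduces the problem:

// Reading the raw bytes works, but decoding them with the wrong encoding loses the character.
// Requires: using System.IO; using System.Text;
byte[] data = File.ReadAllBytes("input.txt");  // contains 189 (0xBD)
string a = Encoding.ASCII.GetString(data);     // "?"      (ASCII replaces bytes above 127)
string b = Encoding.UTF8.GetString(data);      // "\uFFFD" (0xBD on its own is invalid UTF-8)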
Byte 189 represents "½" in iso-8859-1 (aka "Latin-1"), so the following may be what you want:
var e = Encoding.GetEncoding("iso-8859-1");
var s = e.GetString(new byte[] { 189 });
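Here s contains the single character "½". If the whole file is in that encoding, you can pass the same Encoding when reading it (a sketch; the file name is a placeholder):

string text = File.ReadAllText("input.txt", Encoding.GetEncoding("iso-8859-1"));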
All strings and chars in .NET are UTF-16 encoded, so you need an encoder/decoder to convert anything else. Sometimes a default is applied for you (e.g. StreamReader defaults to UTF-8), but good practice is to always specify the encoding explicitly.
You will need some form of implicit or (better) explicit metadata to tell you which encoding the data is in.
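As a sketch of what "always specify" can look like in practice (the file names and the choice of iso-8859-1 are placeholders for whatever your metadata says):

// Reading: state which encoding the bytes on disk use.
using (var reader = new StreamReader("input.txt", Encoding.GetEncoding("iso-8859-1")))
// Writing: choose the output encoding explicitly as well.
using (var writer = new StreamWriter("output.txt", false, Encoding.UTF8))
{
    writer.Write(reader.ReadToEnd());
}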
Plain ASCII only covers values 0 to 127, so characters such as œ ¢ ½ ¾ fall outside it. If you are trying to work with these extended characters, you can convert each one into its decimal and binary equivalent (see the sketch below).
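A minimal sketch of such a method, assuming what's wanted is each character's numeric (Unicode code point) value in decimal and binary:

// Prints the decimal and binary value of each character.
// For ½ this prints 189 (10111101), which matches its Latin-1 byte value;
// œ prints 339, which has no single-byte Latin-1 representation.
// Requires: using System;
void PrintCharValues(string input)
{
    foreach (char c in input)
    {
        int value = c;                              // implicit char-to-int conversion
        string binary = Convert.ToString(value, 2); // base-2 representation
        Console.WriteLine($"{c}: decimal {value}, binary {binary}");
    }
}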
The old PC-8 or Extended ASCII character set was around before IBM and Microsoft introduced the idea of Code Pages to the PC world. This WAS Extended ASCII - in 1982. In fact, it was the ONLY character set available on PCs at the time, up until the EGA card allowed you to load other fonts into VRAM.
This was also the default standard for ANSI terminals, and nearly every BBS I dialed up to in the '80s and early '90s used this character set for displaying menus and boxes.
Here's the code to turn 8-bit Extended ASCII into Unicode text. Note the key bit of code: GetEncoding("437"). That uses Code Page 437 to translate the 8-bit ASCII text to the Unicode equivalent.
string ASCII8ToString(byte[] ASCIIData)
{
    // Code page 437: the original IBM PC "Extended ASCII" character set
    var e = Encoding.GetEncoding("437");
    return e.GetString(ASCIIData);
}
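One caveat (not part of the original answer): on .NET Core and .NET 5+, code page 437 is not available by default; it needs the System.Text.Encoding.CodePages package and a one-time registration, roughly like this:

// One-time setup, e.g. at program start, before GetEncoding("437") will succeed.
Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

string text = ASCII8ToString(new byte[] { 189 }); // decodes byte 189 via code page 437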
It depends on exactly what the encoding is.
There's no such thing as "ASCII 189" - ASCII only goes up to 127. There are many 8-bit encodings that use ASCII for the first 128 values.
You may want Encoding.Default (which is the default encoding for your particular system), but it's hard to know for sure. Where did your data come from?
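If you do go that route it's a one-liner, but note that what Encoding.Default means has changed over time (the machine's ANSI code page on .NET Framework, UTF-8 on .NET Core and later):

string s = Encoding.Default.GetString(new byte[] { 189 });
// Gives "½" only if the default happens to be a Latin-1-style code page such as Windows-1252.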