Ok, I am reading .dat files into a byte array. For some reason, the people who generate these files put about half a meg's worth of useless null bytes at the end of the file.
Assuming 0 = null, that is probably your best bet... as a minor tweak, you might want to use Buffer.BlockCopy when you finally copy the useful data.
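For instance, a minimal sketch of that (assuming the trailing padding is all 0x00 bytes and never real data; the helper name is mine):

// Scan back past the trailing null padding, then block-copy the payload.
// TrimTrailingNulls is a hypothetical helper name.
static byte[] TrimTrailingNulls(byte[] input)
{
    int last = input.Length - 1;
    while (last >= 0 && input[last] == 0x00)
    {
        last--;
    }

    byte[] trimmed = new byte[last + 1];
    // Buffer.BlockCopy moves the raw bytes in one call instead of an element-wise loop.
    Buffer.BlockCopy(input, 0, trimmed, 0, last + 1);
    return trimmed;
}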
Test this:
private byte[] trimByte(byte[] input)
{
    // Walk back from the end of the array past the null padding.
    // The >= 0 guard handles an all-null (or empty) input, which would
    // otherwise run off the front of the array.
    int byteCounter = input.Length - 1;
    while (byteCounter >= 0 && input[byteCounter] == 0x00)
    {
        byteCounter--;
    }

    // Copy the useful bytes into a right-sized array.
    byte[] rv = new byte[byteCounter + 1];
    for (int i = 0; i < rv.Length; i++)
    {
        rv[i] = input[i];
    }
    return rv;
}
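A quick usage sketch (the file name is hypothetical):

// Read the raw .dat file, then strip the padding before parsing.
byte[] raw = File.ReadAllBytes("input.dat"); // hypothetical path
byte[] data = trimByte(raw);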
How about this:
// Requires: using System.IO; using System.Text; using NUnit.Framework;
[Test]
public void Test()
{
    var chars = new[] { 'a', 'b', '\0', 'c', '\0', '\0' };
    File.WriteAllBytes("test.dat", Encoding.ASCII.GetBytes(chars));

    var content = File.ReadAllText("test.dat");
    Assert.AreEqual(6, content.Length); // includes the null bytes at the end

    content = content.Trim('\0');
    Assert.AreEqual(4, content.Length); // no more null bytes at the end,
                                        // but still has the one in the middle
}
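If you need the result back as bytes rather than a string, the same trick works; a sketch, assuming the file really contains ASCII text:

// TrimEnd removes only the trailing nulls, keeping any embedded ones.
var text = File.ReadAllText("test.dat").TrimEnd('\0');
byte[] bytes = Encoding.ASCII.GetBytes(text);

Keep in mind the string round-trip is only safe for text content; for arbitrary binary data, trim the byte array directly instead.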
In my case the LINQ approach never finished ^))) It's too slow for working with byte arrays!
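(The LINQ version in question is presumably something along these lines; it enumerates and buffers the array several times instead of doing one index scan and one block copy:)

// Hypothetical LINQ trim: the two Reverse() calls each buffer the whole
// sequence, plus there is per-element iterator overhead on every byte.
byte[] trimmed = input
    .AsEnumerable()          // force LINQ-to-Objects Reverse, not the in-place span one
    .Reverse()
    .SkipWhile(b => b == 0x00)
    .Reverse()
    .ToArray();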
Guys, why not use the Array.Copy() method?
/// <summary>
/// Gets the array of bytes from a memory stream.
/// </summary>
/// <param name="stream">Memory stream.</param>
public static byte[] GetAllBytes(this MemoryStream stream)
{
    // GetBuffer() exposes the underlying buffer, which can be longer than
    // the actual content, so copy only the first stream.Length bytes.
    byte[] result = new byte[stream.Length];
    Array.Copy(stream.GetBuffer(), result, stream.Length);
    return result;
}
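Usage is straightforward; note that GetBuffer() throws UnauthorizedAccessException if the stream was created over a buffer that isn't publicly visible, and that the built-in MemoryStream.ToArray() already does the same length-limited copy:

using (var stream = new MemoryStream())
{
    stream.Write(raw, 0, raw.Length);    // raw: some byte[] loaded earlier
    byte[] exact = stream.GetAllBytes(); // equivalent to stream.ToArray()
}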