Why does C# memory stream reserve so much memory?


Because this is the algorithm it uses to expand its capacity:

public override void Write(byte[] buffer, int offset, int count) {

    //... Removed Error checking for example

    int i = _position + count;
    // Check for overflow
    if (i < 0)
        throw new IOException(Environment.GetResourceString("IO.IO_StreamTooLong"));

    if (i > _length) {
        bool mustZero = _position > _length;
        if (i > _capacity) {
            bool allocatedNewArray = EnsureCapacity(i);
            if (allocatedNewArray)
                mustZero = false;
        }
        if (mustZero)
            Array.Clear(_buffer, _length, i - _length);
        _length = i;
    }

    //... 
}

private bool EnsureCapacity(int value) {
    // Check for overflow
    if (value < 0)
        throw new IOException(Environment.GetResourceString("IO.IO_StreamTooLong"));
    if (value > _capacity) {
        int newCapacity = value;
        if (newCapacity < 256)
            newCapacity = 256;
        if (newCapacity < _capacity * 2)
            newCapacity = _capacity * 2;
        Capacity = newCapacity;
        return true;
    }
    return false;
}

public virtual int Capacity 
{
    //...

    set {
         //...

        // MemoryStream has this invariant: _origin > 0 => !expandable (see ctors)
        if (_expandable && value != _capacity) {
            if (value > 0) {
                byte[] newBuffer = new byte[value];
                if (_length > 0) Buffer.InternalBlockCopy(_buffer, 0, newBuffer, 0, _length);
                _buffer = newBuffer;
            }
            else {
                _buffer = null;
            }
            _capacity = value;
        }
    }
}

So every time you hit the capacity limit, it doubles the capacity. It does this because the Buffer.InternalBlockCopy operation is slow for large arrays; if the buffer had to be resized on every Write call, performance would drop significantly.
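Just to illustrate (this snippet is mine, not part of the code above): you can watch the doubling happen by writing to a MemoryStream in small pieces and printing Capacity whenever it changes.

using System;
using System.IO;

class CapacityGrowthDemo
{
    static void Main()
    {
        using (var ms = new MemoryStream())
        {
            byte[] chunk = new byte[1024];
            int lastCapacity = -1;

            for (int i = 0; i < 1024; i++) // write 1 MiB total, 1 KiB at a time
            {
                ms.Write(chunk, 0, chunk.Length);
                if (ms.Capacity != lastCapacity) // report each time the buffer was reallocated
                {
                    lastCapacity = ms.Capacity;
                    Console.WriteLine("Length={0,8}  Capacity={1,8}", ms.Length, ms.Capacity);
                }
            }
            // Prints capacities 1024, 2048, 4096, ... doubling on every reallocation.
        }
    }
}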

To improve the performance in your case, you could set the initial capacity to at least the size of your compressed array, and then grow the stream by a factor smaller than 2.0 to reduce the amount of memory you use.

const double ResizeFactor = 1.25;

private byte[] Decompress(byte[] data)
{
    using (MemoryStream compressedStream = new MemoryStream(data))
    using (GZipStream zipStream = new GZipStream(compressedStream, CompressionMode.Decompress))
    using (MemoryStream resultStream = new MemoryStream((int)(data.Length * ResizeFactor))) // Set the initial size to the compressed size + 25%.
    {
        byte[] buffer = new byte[4096];
        int iCount = 0;

        while ((iCount = zipStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            if (resultStream.Capacity < resultStream.Length + iCount)
            {
                // Resize to 125% instead of 200%, but never less than the next write needs.
                resultStream.Capacity = Math.Max((int)(resultStream.Capacity * ResizeFactor),
                                                 (int)resultStream.Length + iCount);
            }

            resultStream.Write(buffer, 0, iCount);
        }
        return resultStream.ToArray();
    }
}

If you wanted to, you could use an even fancier algorithm, such as resizing based on the compression ratio observed so far:

const double MinResizeFactor = 1.05;

private byte[] Decompress(byte[] data)
{
    using (MemoryStream compressedStream = new MemoryStream(data))
    using (GZipStream zipStream = new GZipStream(compressedStream, CompressionMode.Decompress))
    using (MemoryStream resultStream = new MemoryStream((int)(data.Length * MinResizeFactor))) // Set the initial size to the compressed size + the minimum resize factor.
    {
        byte[] buffer = new byte[4096];
        int iCount = 0;

        while ((iCount = zipStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            if (resultStream.Capacity < resultStream.Length + iCount)
            {
                double sizeRatio = ((double)resultStream.Position + iCount) / (compressedStream.Position + 1); // The +1 prevents divide-by-zero; it may not be necessary in practice.

                // Resize to the larger of: the current capacity times the minimum resize
                // factor, or the compressed stream length times the observed compression
                // ratio plus the minimum resize factor.
                resultStream.Capacity = (int)Math.Max(resultStream.Capacity * MinResizeFactor,
                                                      (sizeRatio + (MinResizeFactor - 1)) * compressedStream.Length);
            }

            resultStream.Write(buffer, 0, iCount);
        }
        return resultStream.ToArray();
    }
}

MemoryStream doubles its internal buffer when it runs out of space, which can lead to up to 2x waste. I cannot tell why you are seeing more than that, but this basic doubling behavior is expected.

If you don't like this behavior, write your own stream that stores its data in smaller chunks (e.g. a List<byte[]> of 64 KB blocks), as sketched below. Such an algorithm bounds the wasted space to 64 KB.
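Here is a minimal sketch of that idea (the class name and details are my own, not from this answer): a write-only stream that appends into 64 KB blocks held in a List<byte[]>, so growing never copies existing data.

using System;
using System.Collections.Generic;
using System.IO;

public sealed class ChunkedWriteStream : Stream
{
    private const int ChunkSize = 64 * 1024;
    private readonly List<byte[]> _chunks = new List<byte[]>();
    private long _length;

    public override bool CanRead => false;
    public override bool CanSeek => false;
    public override bool CanWrite => true;
    public override long Length => _length;
    public override long Position { get => _length; set => throw new NotSupportedException(); }

    public override void Write(byte[] buffer, int offset, int count)
    {
        while (count > 0)
        {
            int indexInChunk = (int)(_length % ChunkSize);
            if (indexInChunk == 0)
                _chunks.Add(new byte[ChunkSize]); // grow by one fixed-size block, no copying

            byte[] chunk = _chunks[_chunks.Count - 1];
            int toCopy = Math.Min(count, ChunkSize - indexInChunk);
            Buffer.BlockCopy(buffer, offset, chunk, indexInChunk, toCopy);

            offset += toCopy;
            count -= toCopy;
            _length += toCopy;
        }
    }

    // Assemble a single array only once, when the final size is known.
    public byte[] ToArray()
    {
        byte[] result = new byte[_length];
        int copied = 0;
        foreach (byte[] chunk in _chunks)
        {
            int toCopy = (int)Math.Min(ChunkSize, _length - copied);
            Buffer.BlockCopy(chunk, 0, result, copied, toCopy);
            copied += toCopy;
        }
        return result;
    }

    public override void Flush() { }
    public override int Read(byte[] buffer, int offset, int count) => throw new NotSupportedException();
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
}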

It looks like you are looking at the total amount of allocated memory, not the size of the last allocation. Since MemoryStream doubles its size on reallocation, it grows by about a factor of two each time, so the total allocated memory is approximately a sum of powers of 2:

2^1 + 2^2 + … + 2^k = 2^(k+1) − 2

(where k is the number of reallocations, roughly k ≈ 1 + log2(StreamSize))

Which is about what you see.
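As a rough illustration of that estimate (my own back-of-the-envelope, not part of the answer), you can simply sum the buffer sizes the doubling policy allocates on the way to a given final size:

// Sum the capacities MemoryStream allocates while growing from its 256-byte
// minimum up to a buffer large enough for finalSize bytes.
static long TotalAllocated(long finalSize)
{
    long total = 0;
    long capacity = 256;
    while (capacity < finalSize)
    {
        total += capacity;   // each reallocation leaves the old buffer behind as garbage
        capacity *= 2;
    }
    return total + capacity; // plus the final, live buffer
}

For a ~50 MB stream this gives roughly 128 MiB of cumulative allocations (about twice the final 64 MiB buffer), which is in line with the estimate above.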

Well, increasing the capacity of the stream means creating a whole new array with the new capacity and copying the old data over. That's very expensive, and if you did it for each Write, your performance would suffer a lot. So instead, MemoryStream expands more than necessary. If you want to improve that behaviour and you know the total capacity required, simply use the MemoryStream constructor that takes a capacity parameter :) You can then use MemoryStream.GetBuffer instead of ToArray too, as sketched below.
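A hedged sketch of that suggestion (the method and parameter names are mine; it assumes the decompressed size is known up front, e.g. stored alongside the compressed data):

using System.IO;
using System.IO.Compression;

static byte[] DecompressKnownSize(byte[] data, int expectedSize)
{
    using (var compressedStream = new MemoryStream(data))
    using (var zipStream = new GZipStream(compressedStream, CompressionMode.Decompress))
    using (var resultStream = new MemoryStream(expectedSize)) // single up-front allocation, no regrowth
    {
        zipStream.CopyTo(resultStream);

        // GetBuffer() returns the internal array without copying; only the first
        // resultStream.Length bytes are meaningful. Use ToArray() if the caller
        // needs an exactly-sized copy it owns.
        return resultStream.GetBuffer();
    }
}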

You're also seeing the discarded old buffers in the memory profiler (e.g. the old 8 MiB buffer left behind after growing to 16 MiB, and so on).

Of course, you don't care about having a single contiguous array, so it might be a better idea to write a memory stream of your own that uses multiple arrays created as needed, in as big chunks as necessary, and then copy it all at once into the output byte[] (if you even need the byte[] at all - quite likely, that's a design problem).
