As already discussed, this is a fundamental issue with trying to get contiguous blocks of memory in the gigabyte range.
You will be limited by (in increasing order of difficulty):
- The amount of addressable memory
  - Since you are 64-bit this will be your 12GB of physical memory, less any holes in it required by devices, plus any swap file space.
  - Note that you must be running an app with the relevant PE headers that indicate it can run 64-bit, or you will run under WoW64 and have only 4GB of address space.
  - Also note that the default platform target was changed to x86 in Visual Studio 2010, a contentious change.
- The CLR's limitation that no single object may consume more than 2GB of space.
- Finding a contiguous block within the available memory.
You can find that you run out of space well before the CLR limit of 2GB, because the backing buffer in the stream is expanded in a 'doubling' fashion, which swiftly results in the buffer being allocated in the Large Object Heap (LOH). This heap is not compacted in the same way the other heaps are(1), and as a result the process of building up to the theoretical maximum buffer size under 2GB fragments the LOH so that you fail to find a sufficiently large contiguous block before that limit is hit.
Thus, if you are close to the limit, one mitigation is to set the initial capacity of the stream via one of its constructors so that it definitely has sufficient space from the start.
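For illustration, a minimal sketch (the `expectedSize` figure is hypothetical; you would estimate it from your own data):

```csharp
using System.IO;

// Pre-size the backing buffer so it is allocated once in the Large Object
// Heap rather than doubling its way up and fragmenting it.
int expectedSize = 500 * 1024 * 1024; // hypothetical estimate: ~500MB known up front
var stream = new MemoryStream(expectedSize);
```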
Given that you are writing to the memory stream as part of a serialization process, it would make sense to actually use streams as intended and keep only the data required in memory:
- If you are serializing to some file-based location then stream it into that directly (see the first sketch after this list).
- If this is data going into a SQL Server database, consider using:
  - FILESTREAM (SQL Server 2008 only, I'm afraid).
  - From 2005 onwards you can read/write in chunks, but writing is not well integrated into ADO.NET (a chunked-read sketch follows this list).
  - For versions prior to 2005 there are relatively unpleasant workarounds.
- If you are serializing this in memory for use in, say, a comparison, then consider streaming the data being compared as well and diffing as you go along.
- If you are persisting an object in memory to recreate it later, then this really should be going to a file or a memory-mapped file (see the memory-mapped sketch below). In both cases the operating system is then free to structure it as best it can (in disk caches, or pages being mapped in and out of main memory), and it is likely it will do a better job of this than most people are able to do themselves.
- If you are doing this so that the data can be compressed, then consider using streaming compression. Any block-based compression stream can be fairly easily converted into a streaming mode with the addition of padding. If your compression API doesn't support this natively, consider using one that does or writing a wrapper to do it (a streaming-compression sketch follows this list).
- If you are doing this to write to a byte buffer which is then pinned and passed to an unmanaged function, then use UnmanagedMemoryStream instead; this stands a slightly better chance of being able to allocate a buffer of this sort of size, but it is still not guaranteed to do so (see the last sketch below).
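For the file-based case, a minimal sketch assuming a BinaryFormatter-style serializer and a hypothetical object `graph` (file name is also hypothetical):

```csharp
using System.IO;
using System.Runtime.Serialization.Formatters.Binary;

static void SaveToDisk(object graph)
{
    var formatter = new BinaryFormatter();
    // Serialize straight to disk; no gigabyte-sized managed buffer is ever built.
    using (var file = new FileStream(@"output.bin", FileMode.Create, FileAccess.Write))
    {
        formatter.Serialize(file, graph);
    }
}
```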
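For the SQL Server case, chunked reading can be done with CommandBehavior.SequentialAccess and SqlDataReader.GetBytes; a sketch in which the Blobs table, the Payload column, and the open connection `conn` are all hypothetical:

```csharp
using System.Data;
using System.Data.SqlClient;
using System.IO;

static void StreamBlobToFile(SqlConnection conn, int id)
{
    using (var cmd = new SqlCommand("SELECT Payload FROM Blobs WHERE Id = @id", conn))
    {
        cmd.Parameters.AddWithValue("@id", id);
        // SequentialAccess streams the column instead of materialising it whole.
        using (var reader = cmd.ExecuteReader(CommandBehavior.SequentialAccess))
        using (var output = File.Create(@"payload.bin"))
        {
            if (!reader.Read()) return;
            var buffer = new byte[81920];
            long offset = 0, read;
            while ((read = reader.GetBytes(0, offset, buffer, 0, buffer.Length)) > 0)
            {
                output.Write(buffer, 0, (int)read);
                offset += read;
            }
        }
    }
}
```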
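For the memory-mapped route, a sketch assuming .NET 4's System.IO.MemoryMappedFiles is available (the file name, map name, and 1GB capacity are hypothetical):

```csharp
using System.IO;
using System.IO.MemoryMappedFiles;
using System.Runtime.Serialization.Formatters.Binary;

static void SaveViaMemoryMappedFile(object graph)
{
    var formatter = new BinaryFormatter();
    // The OS pages the data in and out of main memory as it sees fit.
    using (var mmf = MemoryMappedFile.CreateFromFile(
            @"cache.bin", FileMode.Create, "cacheMap", 1L << 30)) // hypothetical 1GB capacity
    using (var view = mmf.CreateViewStream())
    {
        formatter.Serialize(view, graph);
    }
}
```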
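For compression, GZipStream already operates in a streaming mode by wrapping another stream; a sketch using the same hypothetical serializer and object:

```csharp
using System.IO;
using System.IO.Compression;
using System.Runtime.Serialization.Formatters.Binary;

static void SaveCompressed(object graph)
{
    var formatter = new BinaryFormatter();
    // Bytes are compressed as they flow through; nothing is buffered whole in memory.
    using (var file = File.Create(@"data.bin.gz"))
    using (var gzip = new GZipStream(file, CompressionMode.Compress))
    {
        formatter.Serialize(gzip, graph);
    }
}
```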
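And for the unmanaged-buffer case, a sketch (requires compiling with /unsafe; the ~3GB size is hypothetical, and as noted the allocation can still fail):

```csharp
using System;
using System.IO;
using System.Runtime.InteropServices;

static unsafe void WriteToUnmanagedBuffer()
{
    // Allocating outside the managed heap sidesteps the CLR's 2GB object limit.
    long size = 3L * 1024 * 1024 * 1024; // hypothetical ~3GB
    IntPtr native = Marshal.AllocHGlobal(new IntPtr(size)); // can still fail without a contiguous block
    try
    {
        using (var stream = new UnmanagedMemoryStream((byte*)native, 0, size, FileAccess.ReadWrite))
        {
            // write the serialized bytes into 'stream', then pass 'native'
            // to the unmanaged function
        }
    }
    finally
    {
        Marshal.FreeHGlobal(native);
    }
}
```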
Perhaps if you tell us what you are serializing an object of this size for, we might be able to tell you better ways to do it.
(1) This is an implementation detail you should not rely on.