I have written the following code to check for sufficient memory:
while (true)
{
    try
    {
        // Check for available memory.
        memFailPoint = new MemoryFailPoint(250);
    }
    catch (InsufficientMemoryException)
    {
        // Not enough projected memory; handle and back off here.
    }
}
MemoryFailPoint checks for contiguous available memory, as documented here: http://msdn.microsoft.com/fr-fr/library/system.runtime.memoryfailpoint.aspx
You may be consuming very little memory but have fragmented it so much that you are now unable to allocate a contiguous block of the needed size. It is very typical for this problem to occur after a few hours of running. To avoid it, use a pool of objects for the types you keep instantiating; reusing instances keeps the memory layout more stable.
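The pooling idea above can be sketched as a minimal class; `ObjectPool<T>` here is a hypothetical name for illustration, not a framework type:

```csharp
using System;
using System.Collections.Concurrent;

// Minimal object pool sketch: reusing instances instead of repeatedly
// allocating and releasing them keeps the heap layout more stable.
class ObjectPool<T> where T : new()
{
    private readonly ConcurrentBag<T> _items = new ConcurrentBag<T>();

    // Hand out a pooled instance, or create one if the pool is empty.
    public T Rent()
    {
        T item;
        return _items.TryTake(out item) ? item : new T();
    }

    // Put an instance back so a later Rent() can reuse it.
    public void Return(T item)
    {
        _items.Add(item);
    }
}
```

Renting and returning long-lived buffers this way avoids churning the heap with fresh allocations on every iteration.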
Consider using the GC.GetTotalMemory method to measure the amount of managed memory in use before and after calling:
memFailPoint = new MemoryFailPoint(250);
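A hedged sketch of that before/after measurement; note GC.GetTotalMemory reports managed-heap usage only, so the delta is a rough indicator rather than a measure of address-space pressure, and the method name here is made up for illustration:

```csharp
using System;
using System.Runtime;

class MemCheck
{
    // Measures the managed-heap delta across a gated operation.
    public static long MeasureOperation()
    {
        long before = GC.GetTotalMemory(true);   // force a collection for a stable reading

        // Gate the operation on ~250 MB of projected memory, as in the question.
        using (var memFailPoint = new MemoryFailPoint(250))
        {
            // ... the memory-hungry operation would run here ...
        }

        long after = GC.GetTotalMemory(true);
        return after - before;
    }
}
```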
InsufficientMemoryException is thrown before an operation starts: the MemoryFailPoint constructor raises it when you specify a projected memory allocation larger than the amount of memory currently available. Like user7116 commented, that's why you should check first.
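Checking first means letting the constructor throw before the operation begins; a minimal sketch of that pattern (the 250 MB figure is taken from the question, and the method name is hypothetical):

```csharp
using System;
using System.Runtime;

class Gate
{
    // Returns true if the gated operation ran, false if memory was refused.
    public static bool TryRunLargeOperation()
    {
        try
        {
            // Throws InsufficientMemoryException up front if a projected
            // 250 MB allocation is unlikely to be satisfiable.
            using (new MemoryFailPoint(250))
            {
                // ... the actual large allocation / processing ...
                return true;
            }
        }
        catch (InsufficientMemoryException)
        {
            // Back off: free caches, wait, or try a smaller workload.
            return false;
        }
    }
}
```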
The example in this link should give you a solution: MemoryFailPoint Class
You can also check this MSDN blog article: Out of memory? Easy ways to increase the memory available to your program
You can rely on this method working correctly; this exception is very likely to trip in a 32-bit process when you ask for 250 megabytes. A contiguous chunk that large gets difficult to find once the program has been running for a while.
A program never crashes with OOM because you've consumed all available virtual memory address space. It crashes because there isn't a hole left in the address space that's big enough to fit the allocation. Your code requests a hole big enough to hold 250 megabytes in one gulp. When you don't get the exception, you can be sure that this allocation will not fail.
But 250 megabytes is rather a lot; that's a really big array, and it is very likely to fail due to a problem called address space fragmentation. A program typically starts out with several very large holes, the largest around 600 megabytes: the gaps between the allocations made to store code and data used by the .NET runtime and unmanaged Windows DLLs. As the program allocates more memory, those holes get smaller. It may release some memory later, but that doesn't recreate a big hole. You typically get two holes, each roughly half the size of the original, with an allocation somewhere in the middle that cuts the original big hole in two.
This is called fragmentation: a 32-bit process that allocates and releases a lot of memory ends up fragmenting the virtual memory address space, so the biggest hole that's still available shrinks over time; around 90 megabytes is fairly typical after a while. Asking for 250 megabytes is then almost guaranteed to fail. You will need to aim lower.
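"Aim lower" can be automated by probing decreasing gate sizes until the MemoryFailPoint constructor stops throwing. A sketch, where the starting size, step, and floor are arbitrary choices and the method name is made up:

```csharp
using System;
using System.Runtime;

class Probe
{
    // Returns the largest gate (in MB) that MemoryFailPoint will grant,
    // probing downward from 250 MB in 10 MB steps. Returns 0 if even
    // the 10 MB floor was refused.
    public static int LargestGrantableMegabytes()
    {
        for (int mb = 250; mb >= 10; mb -= 10)
        {
            try
            {
                using (new MemoryFailPoint(mb))
                {
                    return mb;
                }
            }
            catch (InsufficientMemoryException)
            {
                // Too big for the current address space; try smaller.
            }
        }
        return 0;
    }
}
```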
You no doubt expected it to work differently, ensuring that a sum of allocations adding up to 250 megabytes is guaranteed to work. That is not how MemoryFailPoint works; it only checks for the largest possible single allocation. Needless to say, perhaps, this makes it less than useful. I do otherwise sympathize with the .NET Framework programmers: making it work the way we'd like would be expensive, and it still couldn't actually provide a guarantee, since the size of a single allocation matters most.
Virtual memory is a plentiful resource that's incredibly cheap, but getting close to consuming it all is very troublesome. Once you consume a gigabyte of it, OOM striking at random starts to get likely. Don't forget the easy fix for this problem: you are running on a 64-bit operating system, so just changing the EXE platform target to AnyCPU gets you gobs and gobs of virtual address space. How much depends on the OS edition, but a terabyte is possible. It still fragments, but you just don't care anymore; the holes are huge.
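For reference, that platform-target change corresponds to something like the following project-file fragment; the exact property layout varies by project format, and you can equally set it under the project's Build tab in Visual Studio:

```xml
<!-- Sketch: target AnyCPU so the process runs 64-bit on a 64-bit OS. -->
<PropertyGroup>
  <PlatformTarget>AnyCPU</PlatformTarget>
  <Prefer32Bit>false</Prefer32Bit>
</PropertyGroup>
```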
Last but not least, as noted in the comments, this problem has nothing to do with RAM. Virtual memory is quite unrelated to how much RAM you have. It is the operating system's job to map virtual memory addresses to physical addresses in RAM, and it does so dynamically. Accessing a memory location may trip a page fault, upon which the OS allocates RAM for the page; the reverse also happens, with the OS unmapping RAM from a page when it is needed elsewhere. You can never run out of RAM; the machine will slow down to a crawl before that can happen. SysInternals' VMMap utility is nice for seeing what your program's virtual address space looks like, albeit that you tend to drown in the info for a large process.