Question
A MemoryFailPoint (MSDN) "checks for sufficient memory resources before executing an operation."
But how is it actually used correctly?
Does the MemoryFailPoint automatically reserve some memory for the next big object I create, or does it simply check whether the memory would be free, without reserving it?
Does it check physical memory, physical memory plus the page file, virtual address space, or something else entirely?
When do I dispose it? Do I need to dispose the MemoryFailPoint before actually creating the memory-hungry object, or must I create the object before disposing the MemoryFailPoint?
e.g.

try
{
    using (MemoryFailPoint mem = new MemoryFailPoint(500))
    {
        // allocate big object here?
    }
}
catch (InsufficientMemoryException e)
{
    // ...
}
// or allocate big object here?
// or allocate big object on another thread?
Can another thread within the same process steal the memory I have reserved with a MemoryFailPoint, or does the MemoryFailPoint reserve the memory exclusively for the current thread?
What happens if the MemoryFailPoint is not disposed? Does an undisposed MemoryFailPoint consume significant amounts of memory itself?
Answer 1:
The source code for MemoryFailPoint is available at .NET Source. The very descriptive comment at the start of the class answers your questions. I am copying that comment here for easier reference:
This class allows an application to fail before starting certain activities. The idea is to fail early instead of failing in the middle of some long-running operation to increase the survivability of the application and ensure you don't have to write tricky code to handle an OOM anywhere in your app's code (which implies state corruption, meaning you should unload the appdomain, if you have a transacted environment to ensure rollback of individual transactions). This is an incomplete tool to attempt hoisting all your OOM failures from anywhere in your worker methods to one particular point where it is easier to handle an OOM failure, and you can optionally choose to not start a workitem if it will likely fail. This does not help the performance of your code directly (other than helping to avoid AD unloads). The point is to avoid starting work if it is likely to fail.
The Enterprise Services team has used these memory gates effectively in the unmanaged world for a decade. In Whidbey, we will simply check to see if there is enough memory available in the OS's page file & attempt to ensure there might be enough space free within the process's address space (checking for address space fragmentation as well). We will not commit or reserve any memory. To avoid races with other threads using MemoryFailPoints, we'll also keep track of a process-wide amount of memory "reserved" via all currently-active MemoryFailPoints. This has two problems:
1. This can account for memory twice. If a thread creates a MemoryFailPoint for 100 MB then allocates 99 MB, we'll see 99 MB less free memory and 100 MB less reserved memory. Yet, subtracting off the 100 MB is necessary because the thread may not have started allocating memory yet. Disposing of this class immediately after front-loaded allocations have completed is a great idea.
2. This is still vulnerable to races with other threads that don't use MemoryFailPoints.
So this class is far from perfect. But it may be good enough to meaningfully reduce the frequency of OutOfMemoryExceptions in managed apps.
In Orcas or later, we might allocate some memory from the OS and add it to an allocation context for this thread. Obviously, at that point we need some way of conveying when we release this block of memory. So, we implemented IDisposable on this type in Whidbey and expect all users to call this from within a using block to provide lexical scope for their memory usage. The call to Dispose (implicit with the using block) will give us an opportunity to release this memory, perhaps. We anticipate this will give us the possibility of a more effective design in a future version.
In Orcas, we may also need to differentiate between allocations that would go into the normal managed heap vs. the large object heap, or we should consider checking for enough free space in both locations (with any appropriate adjustments to ensure the memory is contiguous).
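Putting the comment's advice into practice: the gate only checks available page-file space and address space, reserves nothing, and should be disposed as soon as the front-loaded allocations are done so the process-wide "reserved" counter stops double-counting that memory. Below is a minimal sketch of that pattern; the 500 MB estimate and the buffer workload are hypothetical, not from the original thread:

```csharp
using System;
using System.Runtime;

class FailEarlyExample
{
    static void Main()
    {
        const int estimatedMB = 500;
        byte[] buffer;
        try
        {
            // The gate only *checks* memory availability; it commits and reserves nothing.
            using (new MemoryFailPoint(estimatedMB))
            {
                // Perform the big, front-loaded allocation inside the gate...
                buffer = new byte[estimatedMB * 1024 * 1024];
            } // ...then dispose immediately, so the gate's bookkeeping stops
              // counting this memory twice (once free, once "reserved").
        }
        catch (InsufficientMemoryException)
        {
            // Fail early: decline the work item instead of risking an
            // OutOfMemoryException in the middle of a long-running operation.
            Console.WriteLine("Not enough memory to start the work item.");
            return;
        }
        // The long-running processing of the already-allocated buffer
        // happens outside the gate.
        Console.WriteLine("Allocated {0} bytes.", buffer.Length);
    }
}
```

Note that the catch is around the MemoryFailPoint constructor, which is where InsufficientMemoryException originates; allocating inside the using block can still throw OutOfMemoryException, just with much lower probability.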
Answer 2:
The usage pattern is the following:
const int sizeMB = 500;
using (MemoryFailPoint mem = new MemoryFailPoint(sizeMB))
{
    // Allocate the sizeMB-large object here. The allocation is *likely* going to succeed.
}
When the MemoryFailPoint constructor fails, there is a high probability that the large object allocation would throw an OutOfMemoryException. Even when the allocation succeeds, it can make the process less stable with respect to other (even smaller) memory allocations.
In my 16 GB Windows environment, the following example generated an InsufficientMemoryException one iteration earlier (at step = 5) than the OutOfMemoryException that occurred without the MemoryFailPoint around the array allocation code (at step = 6):
List<byte[]> arrays = new List<byte[]>();
const int size = int.MaxValue / 2;
const int sizeMB = size / 1024 / 1024;
for (int step = 0; step < 10000; step++)
{
    using (new MemoryFailPoint(sizeMB))
    {
        arrays.Add(new byte[size]);
    }
}
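As written, the loop above still terminates with an unhandled exception, only an earlier and more predictable one. A sketch of how the fail-early signal might be consumed, assuming the caller wants to stop gracefully rather than crash (the exact step at which the gate triggers depends on the machine):

```csharp
using System;
using System.Collections.Generic;
using System.Runtime;

class GateLoop
{
    static void Main()
    {
        var arrays = new List<byte[]>();
        const int size = int.MaxValue / 2;          // ~1 GB per array
        const int sizeMB = size / 1024 / 1024;
        for (int step = 0; step < 10000; step++)
        {
            try
            {
                using (new MemoryFailPoint(sizeMB))
                {
                    arrays.Add(new byte[size]);
                }
            }
            catch (InsufficientMemoryException)
            {
                // The gate predicted the next allocation would likely fail,
                // so stop cleanly before an OutOfMemoryException can leave
                // a long-running operation half-finished.
                Console.WriteLine("Stopped at step {0}.", step);
                break;
            }
        }
    }
}
```

The important difference from catching OutOfMemoryException directly is that InsufficientMemoryException is thrown before any work begins, so no partially-completed state needs to be rolled back.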
Source: https://stackoverflow.com/questions/35731975/how-do-i-use-memoryfailpoint