I'm wondering how the allocation and disposal of memory for bitmaps works in .NET.
When I create a lot of bitmaps in loops inside a function and call
Why don't you use the using keyword? Just wrap your Bitmap object in it and the compiler will ensure that the Dispose method is called.
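For example, a minimal sketch (the bitmap dimensions are placeholder values):

using (Bitmap bmp = new Bitmap(640, 480)) // placeholder size
{
    // work with bmp here; Dispose() runs when the block exits,
    // even if an exception is thrown
}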
It's simply a syntactic shortcut for
Bitmap bmp = new Bitmap(...);
try
{
    // ... use bmp ...
}
finally
{
    if (bmp != null)
        ((IDisposable)bmp).Dispose();
}
The Bitmap class is inevitably the one where you have to stop ignoring that IDisposable exists. It is a small wrapper class around a GDI+ object. GDI+ is unmanaged code. The bitmap occupies unmanaged memory. A lot of it when the bitmap is large.
The .NET garbage collector ensures that unmanaged system resources are released via the finalizer thread. The problem is that it only kicks into action when you create enough managed objects to trigger a garbage collection. That won't work well for the Bitmap class: each one is a tiny managed object, so you can create many thousands of them before generation #0 of the garbage-collected heap fills up, and you will run out of unmanaged memory long before that happens.
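A minimal sketch of that failure mode (the size and iteration count are arbitrary; assumes System.Drawing):

// Each Bitmap wraps roughly 36 MB of unmanaged GDI+ memory
// (3000 x 3000 pixels x 4 bytes in the default 32bpp format),
// yet is only a tiny managed object, so generation #0 fills far
// too slowly to trigger a collection.
for (int i = 0; i < 10000; i++)
{
    var bmp = new Bitmap(3000, 3000);
    // ... use bmp ...
    // No Dispose(): the native memory stays allocated until a GC
    // eventually runs the finalizer. A 32-bit process typically
    // fails with an out-of-memory error from GDI+ first.
}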
Managing the lifetime of the bitmaps you use is required: call the Dispose() method when you no longer have a use for one. That's not always the golden solution; you may have to re-think your approach if you simply have too many live bitmaps. Moving to a 64-bit operating system is the next solution.
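The same loop with deterministic cleanup (same placeholder numbers as above), where each iteration releases its unmanaged memory immediately:

for (int i = 0; i < 10000; i++)
{
    using (var bmp = new Bitmap(3000, 3000))
    {
        // ... use bmp ...
    } // Dispose() returns the ~36 MB of GDI+ memory right here
}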
The .NET Bitmap class "encapsulates a GDI+ bitmap", which means you should call Dispose on a Bitmap when you are finished with it. The documentation is explicit:
"Always call Dispose before you release your last reference to the Image. Otherwise, the resources it is using will not be freed until the garbage collector calls the Image object's Finalize method."