Is there a way to globally trap MemoryError exceptions so that a library can clear out caches instead of letting a MemoryError be seen by user code?
I'm developing
I wish I could comment on Glenn's answer... although I agree with the overall idea against using MemoryError as a way to handle cache size, catching one doesn't necessarily mean your system is out of whack. Some people run without swap, and you can also get them when using ulimit to limit maximum process size. Also, when using soft limits you could even raise the soft limit to gracefully handle your process's own death on memory exhaustion (assuming there's a way to raise it without allocating any more memory; I haven't tried that yet).
Catching an uncaught exception means something went wrong, and you don't know what. This means your application may start behaving in unexpected ways, just as if you had started removing random lines of code! I've used a generic exception handler in some applications, but only to display a nice message to the user (especially useful with GUIs) and then die off.
You can hook in your exception handler like this:
sys.excepthook = <your_exceptionhook>
The parameters are the exception class, the exception instance, and a traceback object. You can pass these parameters in the same order to traceback.format_exception() to generate the traceback message Python writes to stderr on uncaught exceptions.
NB: I haven't tested whether this is of any use with MemoryError, but it is the way you catch uncaught exceptions.
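As a sketch of the idea above, here is a hook that clears a (hypothetical) module-level cache when the uncaught exception is a MemoryError, then prints the usual traceback. Note that sys.excepthook only fires for exceptions that are about to terminate the program, so this is damage control, not recovery:

```python
import sys
import traceback

cache = {}  # hypothetical cache your library maintains

def handle_uncaught(exc_type, exc_value, exc_tb):
    if issubclass(exc_type, MemoryError):
        cache.clear()  # free what we can before reporting the error
    # fall back to the default behaviour: print the traceback to stderr
    traceback.print_exception(exc_type, exc_value, exc_tb)

sys.excepthook = handle_uncaught
```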
This is not a good way of handling memory management. By the time you see MemoryError, you're already in a critical state where the kernel is probably close to killing processes to free up memory, and on many systems you'll never see it because it'll go to swap or just OOM-kill your process rather than fail allocations.
The only case where you're likely to see a recoverable MemoryError is after trying to make a very large allocation that doesn't fit in the available address space, which is only common on 32-bit systems.
If you want to have a cache that frees memory as needed for other allocations, it needs to not interface with errors, but with the allocator itself. This way, when you need to release memory for an allocation you'll know how much contiguous memory is needed, or else you'll be guessing blindly. It also means you can track memory allocations as they happen, so you can keep memory usage at a specific level, rather than letting it grow unfettered and then trying to recover when it gets too high.
I'd strongly suggest that for most applications this sort of caching behavior is overcomplicated, though; you're usually better off just dedicating a fixed amount of memory to the cache.
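The fixed-budget approach suggested above can be sketched as a small LRU cache that tracks its own size and evicts before it ever exceeds the limit, rather than reacting to errors. This uses an entry count as a stand-in for a byte budget; the class and its limit are illustrative, not from any particular library:

```python
from collections import OrderedDict

class BoundedCache:
    """LRU cache capped at a fixed number of entries (a stand-in for a byte budget)."""

    def __init__(self, max_entries=1024):
        self.max_entries = max_entries
        self._data = OrderedDict()

    def get(self, key, default=None):
        if key in self._data:
            self._data.move_to_end(key)  # mark as most recently used
            return self._data[key]
        return default

    def put(self, key, value):
        self._data[key] = value
        self._data.move_to_end(key)
        while len(self._data) > self.max_entries:
            self._data.popitem(last=False)  # evict the least recently used entry
```

Because eviction happens at insertion time, memory usage stays at the chosen level instead of growing unchecked and then needing emergency recovery.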
A MemoryError is an exception, so you should be able to catch it in an except block.
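For example, a minimal sketch of catching it around a single large allocation (the helper name and fallback are illustrative):

```python
def allocate(nbytes):
    """Try to allocate a zeroed buffer; return None instead of crashing on MemoryError."""
    try:
        return bytearray(nbytes)
    except MemoryError:
        return None  # caller can clear caches and retry, or degrade gracefully
```

As the other answers note, whether the except block actually runs depends on the platform: the OS may kill the process before Python ever raises the exception.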