Basically, I am getting a MemoryError in Python when trying to perform an algebraic operation on a NumPy matrix. The variable u is a large matrix of doubles.
Your matrix has 288x288x156 = 12,939,264 entries, which for doubles (8 bytes each) comes to roughly 100 MB in memory. NumPy throwing a MemoryError at you just means that the OS could not supply the memory needed to perform the operation you called. Keep in mind that an expression like p*alpha also allocates a temporary array of the same size as its operands, so the peak memory use of an operation can be well above the size of the arrays you keep around.
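The arithmetic above can be checked directly with NumPy, using an array of the shape from the question:

```python
import numpy as np

# Array with the dimensions from the question: 288 x 288 x 156 doubles.
u = np.zeros((288, 288, 156), dtype=np.float64)

entries = u.size            # number of elements
megabytes = u.nbytes / 1e6  # size of the data buffer in MB (8 bytes per double)
print(entries, round(megabytes, 1))   # 12939264 103.5
```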
If you can work with sparse matrices, this might save you a lot of memory.
Another tip I have found to avoid memory errors is to manually control garbage collection. When objects are deleted or go out of scope, the memory used for these variables isn't necessarily freed up until a garbage collection is performed. I have found with some of my code using large NumPy arrays that I get a MemoryError, but that I can avoid this if I insert calls to gc.collect() at appropriate places.
You should only look into this option if using "op=" style operators etc. doesn't solve your problem, as scattering gc.collect() calls everywhere is probably not the best coding practice.
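A minimal sketch of that pattern, with a hypothetical process_chunks helper (in CPython, reference counting usually frees a del'd array immediately; gc.collect() additionally reclaims anything caught in reference cycles before the next large allocation):

```python
import gc
import numpy as np

def process_chunks(n_chunks):
    """Process large temporaries one at a time, forcing collection between them."""
    for _ in range(n_chunks):
        temp = np.ones((1000, 1000))   # ~8 MB temporary per iteration
        result = temp.sum()
        del temp        # drop the reference explicitly
        gc.collect()    # reclaim cyclic garbage before the next allocation
    return result

print(process_chunks(3))   # 1000000.0
```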
Rewrite to

p *= alpha
u += p

and this will use much less memory. Whereas p = p*alpha allocates a whole new matrix for the result of p*alpha and then discards the old p, p *= alpha does the same thing in place.
In general, with big matrices, try to use op= assignment.
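A minimal sketch showing that the in-place form reuses the existing data buffer instead of allocating a new one (small example arrays standing in for the large ones):

```python
import numpy as np

p = np.arange(6, dtype=np.float64).reshape(2, 3)
u = np.ones_like(p)
alpha = 2.0

buf_before = p.__array_interface__['data'][0]  # address of p's data buffer

p *= alpha   # in place: no temporary result array is allocated
u += p       # likewise in place

buf_after = p.__array_interface__['data'][0]
print(buf_before == buf_after)   # True: same buffer was reused
print(u)
```

By contrast, p = p*alpha would print a different buffer address, because the multiplication writes into a freshly allocated array and rebinds the name p to it.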