I want to find out the maximum amount of RAM allocated during a call to a function (in Python). There are other questions on SO related to tracking RAM usage:
This question seemed rather interesting, and it gave me a reason to look into Guppy / Heapy, so thank you for that.
I tried for about 2 hours to get Heapy to monitor a function call / process without modifying its source, with zero luck.
I did find a way to accomplish your task using the built-in Python library resource. Note that the documentation does not indicate what units the ru_maxrss value is reported in. Another SO user noted that it was in kB. Running Mac OS X 10.7.3 and watching my system resources climb during the test code below, I believe the returned values are in bytes, not kilobytes.
A 10,000-foot view of how I used the resource library to monitor the library call: launch the function in a separate (monitorable) thread and track the system resources for that process in the main thread. Below are the two files you'd need to run to test it out.
Library Resource Monitor - whatever_you_want.py
import resource
import time

from stoppable_thread import StoppableThread


class MyLibrarySniffingClass(StoppableThread):
    def __init__(self, target_lib_call, arg1, arg2):
        super(MyLibrarySniffingClass, self).__init__()
        self.target_function = target_lib_call
        self.arg1 = arg1
        self.arg2 = arg2
        self.results = None

    def startup(self):
        # Overload the startup function
        print("Calling the Target Library Function...")

    def cleanup(self):
        # Overload the cleanup function
        print("Library Call Complete")

    def mainloop(self):
        # Start the library call
        self.results = self.target_function(self.arg1, self.arg2)

        # Kill the thread when complete
        self.stop()


def SomeLongRunningLibraryCall(arg1, arg2):
    max_dict_entries = 2500
    delay_per_entry = .005

    some_large_dictionary = {}
    dict_entry_count = 0

    while True:
        time.sleep(delay_per_entry)
        dict_entry_count += 1
        some_large_dictionary[dict_entry_count] = list(range(10000))

        if len(some_large_dictionary) > max_dict_entries:
            break

    print(arg1 + " " + arg2)
    return "Good Bye World"


if __name__ == "__main__":
    # Lib Testing Code
    mythread = MyLibrarySniffingClass(SomeLongRunningLibraryCall, "Hello", "World")
    mythread.start()

    start_mem = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    delta_mem = 0
    max_memory = 0
    memory_usage_refresh = .005  # Seconds

    while True:
        time.sleep(memory_usage_refresh)
        delta_mem = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss - start_mem
        if delta_mem > max_memory:
            max_memory = delta_mem

        # Uncomment this line to see the memory usage during run-time
        # print("Memory Usage During Call: %d MB" % (delta_mem / 1000000.0))

        # Check to see if the library call is complete
        if mythread.isShutdown():
            print(mythread.results)
            break

    print("\nMAX Memory Usage in MB: " + str(round(max_memory / 1000000.0, 3)))
Stoppable Thread - stoppable_thread.py
import threading
import time


class StoppableThread(threading.Thread):
    def __init__(self):
        super(StoppableThread, self).__init__()
        self.daemon = True
        self.__monitor = threading.Event()
        self.__monitor.set()
        self.__has_shutdown = False

    def run(self):
        '''Overloads threading.Thread.run'''
        # Call the user's startup function
        self.startup()

        # Loop until the thread is stopped
        while self.isRunning():
            self.mainloop()

        # Clean up
        self.cleanup()

        # Flag to the outside world that the thread has exited
        # AND that the cleanup is complete
        self.__has_shutdown = True

    def stop(self):
        self.__monitor.clear()

    def isRunning(self):
        return self.__monitor.is_set()

    def isShutdown(self):
        return self.__has_shutdown

    ###############################
    ### User Defined Functions ####
    ###############################

    def mainloop(self):
        '''
        Expected to be overwritten in a subclass!!
        Note that the stoppable while-loop is handled in the built-in "run".
        '''
        pass

    def startup(self):
        '''Expected to be overwritten in a subclass!!'''
        pass

    def cleanup(self):
        '''Expected to be overwritten in a subclass!!'''
        pass
An improvement on @Vader B's answer (as it did not work for me out of the box):
$ /usr/bin/time --verbose ./myscript.py
Command being timed: "./myscript.py"
User time (seconds): 16.78
System time (seconds): 2.74
Percent of CPU this job got: 117%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:16.58
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 616092 # WE NEED THIS!!!
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 432750
Voluntary context switches: 1075
Involuntary context switches: 118503
Swaps: 0
File system inputs: 0
File system outputs: 800
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
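If you want to capture that number programmatically instead of reading it off the terminal, one option is to run the command under GNU time from Python and pick out the "Maximum resident set size" line from its stderr. A minimal sketch, assuming GNU time is installed at /usr/bin/time (the script name is just the example from above):
import subprocess

def max_rss_kb(cmd):
    """Run cmd under GNU time and return its 'Maximum resident set size' in kB."""
    proc = subprocess.run(['/usr/bin/time', '--verbose'] + cmd,
                          stderr=subprocess.PIPE, text=True)
    for line in proc.stderr.splitlines():
        if 'Maximum resident set size' in line:
            return int(line.split(':')[1])
    return None

print(max_rss_kb(['python', './myscript.py']))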
You can use the Python standard library module resource to get memory usage.
import resource
resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
It gives the peak memory usage in kilobytes (on Linux); to convert to MB, divide by 1000.
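For example, a small helper along these lines (note that, as discussed above, ru_maxrss is reported in kilobytes on Linux but in bytes on macOS, so the conversion depends on the platform):
import resource
import sys

def peak_memory_mb():
    """Peak resident set size of the current process, in MB."""
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    # ru_maxrss is kilobytes on Linux, bytes on macOS
    if sys.platform == 'darwin':
        return rss / 1000000.0
    return rss / 1000.0

data = [list(range(1000)) for _ in range(1000)]  # some allocation to measure
print("Peak RSS: %.1f MB" % peak_memory_mb())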
Reading the source of free's information, /proc/meminfo, on a Linux system:
~ head /proc/meminfo
MemTotal: 4039168 kB
MemFree: 2567392 kB
MemAvailable: 3169436 kB
Buffers: 81756 kB
Cached: 712808 kB
SwapCached: 0 kB
Active: 835276 kB
Inactive: 457436 kB
Active(anon): 499080 kB
Inactive(anon): 17968 kB
I have created a decorator class to measure the memory consumption of a function.
class memoryit:
    @staticmethod
    def FreeMemory():
        with open('/proc/meminfo') as file:
            for line in file:
                if 'MemFree' in line:
                    free_memKB = line.split()[1]
                    return float(free_memKB) / (1024 * 1024)  # returns GBytes float

    def __init__(self, function):   # Decorator class to print the memory consumption of a
        self.function = function    # function/method after calling it a number of iterations

    def __call__(self, *args, iterations=1, **kwargs):
        before = memoryit.FreeMemory()
        for i in range(iterations):
            result = self.function(*args, **kwargs)
        after = memoryit.FreeMemory()
        print('%r memory used: %2.3f GB' % (self.function.__name__, (before - after) / iterations))
        return result
Function whose consumption is to be measured:
@memoryit
def MakeMatrix(dim):
    matrix = []
    for i in range(dim):
        matrix.append([j for j in range(dim)])
    return matrix
Usage:
print ("Starting memory:", memoryit.FreeMemory())
m = MakeMatrix(10000)
print ("Ending memory:", memoryit.FreeMemory() )
Printout:
Starting memory: 10.58599853515625
'MakeMatrix' memory used: 3.741 GB
Ending memory: 6.864116668701172
It is possible to do this with memory_profiler. The function memory_usage
returns a list of values; these represent the memory usage over time (by default sampled in chunks of .1 second). If you need the maximum, just take the max of that list. A small example:
from memory_profiler import memory_usage
from time import sleep
def f():
    # a function with growing
    # memory consumption
    a = [0] * 1000
    sleep(.1)
    b = a * 100
    sleep(.1)
    c = b * 100
    return a
mem_usage = memory_usage(f)
print('Memory usage (in chunks of .1 seconds): %s' % mem_usage)
print('Maximum memory usage: %s' % max(mem_usage))
In my case (memory_profiler 0.25) it prints the following output:
Memory usage (in chunks of .1 seconds): [45.65625, 45.734375, 46.41015625, 53.734375]
Maximum memory usage: 53.734375
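If the function you want to profile takes arguments, memory_usage also accepts a (func, args, kwargs) tuple, and max_usage=True asks it for the peak directly (depending on the memory_profiler version this comes back as a float or a one-element list). A small sketch under those assumptions:
from memory_profiler import memory_usage

def g(n):
    return [0] * n

# profile g(10 ** 7); retval=True also hands back g's return value
peak, result = memory_usage((g, (10 ** 7,), {}), max_usage=True, retval=True)
print('Maximum memory usage: %s MiB' % peak)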
The standard Unix utility time tracks the maximum memory usage of the process as well as other useful statistics for your program.
Example output (maxresident is the maximum memory usage, in kilobytes):
> time python ./scalabilty_test.py
45.31user 1.86system 0:47.23elapsed 99%CPU (0avgtext+0avgdata 369824maxresident)k
0inputs+100208outputs (0major+99494minor)pagefaults 0swaps
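Note that in many shells time is a builtin that only reports real/user/sys; to get the detailed line above (or the --verbose report), invoke the external binary explicitly, for example:
> \time python ./scalabilty_test.py
> /usr/bin/time --verbose python ./scalabilty_test.py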