memory-profiling

Interpretation of memory profiling output of `Rprof`

╄→尐↘猪︶ㄣ submitted on 2019-12-02 02:35:41
I am trying to use profiling to see which part of my code is responsible for the maximum usage of 3 GB of memory (as reported by the gc() statistic on maximum memory used; see here how). I am running memory profiling like this:

    Rprof(line.profiling = TRUE, memory.profiling = TRUE)
    graf(...)   # ... here I run the profiled code
    Rprof(NULL)
    summaryRprof(lines = "both", memory = "both")

And the output is the following:

    $by.total
                       total.time total.pct mem.total self.time self.pct
    "graf"                 299.12     99.69   50814.4      0.02     0.01
    #2                     299.12     99.69   50814.4      0.00     0.00
    "graf.fit.laplace"     299.06     99.67   50787.2      0.00     0.00
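
Worth noting: mem.total is built from the profiler's sampled memory figures rather than being a peak, so gc()'s own "max used" counters are a more direct way to bracket the 3 GB number. A minimal sketch, with a dummy allocation standing in for the graf() call above:

    gc(reset = TRUE)                     # zero the "max used" statistics
    x <- matrix(rnorm(1e7), ncol = 100)  # stand-in for the profiled code (e.g. the graf() call)
    gc()                                 # the "max used" column now reflects this run's peak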

Not able to plot graph: matplotlib is needed for plotting

拜拜、爱过 submitted on 2019-12-01 17:45:42
I am able to generate a *.dat file:

    vikas@server:~/memory_profiler-0.36$ ./mprof run --python test_sl.py
    vikas@server:~/memory_profiler-0.36$ ls *.dat
    mprofile_20151001035123.dat

But when I try to plot the graph, it says "matplotlib is needed for plotting":

    vikas@server:~/memory_profiler-0.36$ ./mprof plot --output=plot.png
    matplotlib is needed for plotting.

Did I miss anything? The function I am profiling is run(), which is inside file_to_be_profiled.py:

    #!/usr/bin/python
    import time
    import os, sys, commands
    from memory_profiler import profile
    from guppy import hpy

    @profile
    def run():
        d =
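
The usual cause is simply that matplotlib is not importable by the Python interpreter that mprof runs under; installing it for that interpreter is the first thing to try (the commands below assume pip targets that same interpreter):

    $ pip install matplotlib
    $ python -c "import matplotlib"     # should now succeed silently
    $ ./mprof plot --output=plot.png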

How can I track down memory peaks? (That's peaks with a p, not an l.)

瘦欲@ submitted on 2019-12-01 03:42:05
I've got a kiosk app which essentially shows a bunch of slides with various bits of information on them. I initially began coding this over a year ago, when I was just starting with Objective-C and iOS development. I find that my code style is much cleaner now than it was, and I'm much more experienced, so I've decided to rewrite from scratch. I ran my app with the Allocations instrument to see what the memory usage was. Considering that this is a kiosk app, everything needs to run smoothly, without leaks. (Of course all apps need to run without leaks, but a kiosk app makes this an even

Code profiling for Shiny app?

岁酱吖の submitted on 2019-11-30 13:00:19
Question: For an R Shiny web app, what are some good ways to run code profiling that show which parts of the Shiny code are taking the most processing time? I've got a big, fat, complex Shiny app, and I'd like to figure out where in this labyrinth of code I'm slowing it down the most. I've tried out Rprof and profr but haven't gotten much insight from them.

Answer 1: A few (rough) ideas: profiling the app in the browser might help. I have a largish app that uses navbarPage and the page build
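
One approach worth sketching (an addition here, not from the question): wrap runApp() in profvis, interact with the app, and profvis will show a flame graph of where the time went once the app is closed. The app path below is a placeholder.

    library(profvis)
    library(shiny)
    profvis({
      runApp("path/to/your/app")   # placeholder path: click around the app, then stop it
    })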

Memory profiling in R - tools for summarizing

混江龙づ霸主 submitted on 2019-11-30 06:23:06
R has some tools for memory profiling, like Rprofmem(), Rprof() with the option memory.profiling = TRUE, and tracemem(). The last one can only be used on objects, and hence is useful for following how many times an object is copied, but doesn't give an overview on a function basis. Rprofmem should be able to do that, but the output of even the simplest function call, like lm(), gives over 500 lines of log. I tried to figure out what Rprof("somefile.log", memory.profiling = TRUE) actually does, but I don't think I really get it. The last thing I could find was this message from Thomas Lumley, saying that, and I quote
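
One way to cut Rprofmem's verbosity (a sketch, not from the question, and it assumes an R build with memory profiling enabled, as the CRAN binaries are): the threshold argument suppresses logging of allocations below a given size in bytes, which removes most of the 500-odd lines a call like lm() produces.

    d <- data.frame(x = rnorm(1e6), y = rnorm(1e6))
    Rprofmem("Rprofmem.out", threshold = 1e6)    # only record allocations of roughly 1 MB or more
    fit <- lm(y ~ x, data = d)
    Rprofmem(NULL)                               # stop memory profiling
    length(readLines("Rprofmem.out"))            # far fewer entries than an unfiltered run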

What do the fields of Ruby's GC.stat mean?

无人久伴 submitted on 2019-11-29 02:46:08
Question: I am using GC.stat to profile memory usage in our Rails app. GC.stat returns a hash with the following keys:

    :count
    :heap_used
    :heap_length
    :heap_increment
    :heap_live_num
    :heap_free_num
    :heap_final_num

Does anybody know exactly what these values mean? There's no documentation for them in the Ruby source (gc.c), just a comment: "The contents of the hash are implementation defined and may be changed in the future." Some of these fields make sense from context, e.g. count is the number of heaps
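
For reference, a tiny sketch of reading those fields at runtime; note that the key names are version-specific (the ones below are the 1.9/2.0-era keys listed in the question and have since been renamed in later Rubies):

    # keys below are the 1.9/2.0-era names from the question
    stats = GC.stat
    puts "GC runs so far:      #{stats[:count]}"
    puts "heaps used / length: #{stats[:heap_used]} / #{stats[:heap_length]}"
    puts "live objects:        #{stats[:heap_live_num]}"
    puts "free slots:          #{stats[:heap_free_num]}"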

Memory profiler for numpy

自闭症网瘾萝莉.ら submitted on 2019-11-28 08:18:44
I have a numpy script that, according to top, is using about 5 GB of RAM:

      PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
    16994 aix       25   0 5813m 5.2g 5.1g S  0.0 22.1  52:19.66 ipython

Is there a memory profiler that would enable me to get some idea of the objects that are taking up most of that memory? I've tried heapy, but guppy.hpy().heap() is giving me this:

    Partition of a set of 90956 objects. Total size = 12511160 bytes.
     Index  Count   %     Size   % Cumulative  % Kind (class / dict of class)
         0  42464  47  4853112  39    4853112  39 str
         1  22147  24  1928768  15    6781880  54 tuple
         2    287   0  1093352   9
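
One workaround worth sketching (my assumption about the cause, not from the question): heapy generally does not see the memory held in numpy array buffers, since those are allocated outside Python's own allocator, so a rough per-array picture can be had by walking the objects the garbage collector knows about and summing ndarray.nbytes:

    import gc
    import numpy as np

    # collect every ndarray the GC is tracking and rank them by buffer size
    arrays = [o for o in gc.get_objects() if isinstance(o, np.ndarray)]
    arrays.sort(key=lambda a: a.nbytes, reverse=True)
    for a in arrays[:10]:                             # ten largest arrays
        print(a.shape, a.dtype, "%.1f MB" % (a.nbytes / 1e6))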

Programmatically get memory usage in Chrome

倖福魔咒の submitted on 2019-11-28 07:22:22
How can I programmatically get the memory usage (JS and total) of my website in Google Chrome? I looked at doing it from a Chrome extension using the undocumented HeapProfiler (see here), but I can't find a way to get data from that. I want to measure the memory consumption at every release, so this needs to be programmatic.

EDIT: I figured out how to get the HeapProfiler method to work. Each addHeapSnapshotChunk event has a chunk of a JSON object.

    chrome.browserAction.onClicked.addListener(function(tab) {
        var heapData, debugId = {tabId: tab.id};
        chrome.debugger.attach(debugId, '1.0', function(
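
A lighter-weight alternative for coarse, release-to-release tracking (not part of the question): Chrome's non-standard performance.memory object exposes JS-heap figures directly from page script; start Chrome with --enable-precise-memory-info if you need values that are not quantized.

    // logs the JS-heap figures Chrome exposes to the page itself
    if (window.performance && performance.memory) {
        console.log('usedJSHeapSize:  ' + performance.memory.usedJSHeapSize);
        console.log('totalJSHeapSize: ' + performance.memory.totalJSHeapSize);
        console.log('jsHeapSizeLimit: ' + performance.memory.jsHeapSizeLimit);
    }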
