linux perf: how to interpret and find hotspots

Asked 2020-11-28 21:14 by 说谎

I tried out Linux's perf utility today and am having trouble interpreting its results. I'm used to Valgrind's callgrind, which is of course a totally different approach.

5 Answers
  • 2020-11-28 21:32

    With Linux 3.7 perf is finally able to use DWARF information to generate the callgraph:

    perf record --call-graph dwarf -- yourapp
    perf report -g graph --no-children
    

    Neat, but the curses GUI is horrible compared to VTune, KCacheGrind or similar... I recommend trying out FlameGraphs instead, which are a pretty neat visualization: http://www.brendangregg.com/FlameGraphs/cpuflamegraphs.html

    Note: In the report step, -g graph makes the output show easy-to-understand "relative to total" percentages rather than "relative to parent" numbers. --no-children shows only self cost rather than inclusive cost, a feature I also find invaluable.

    If you have a new perf and Intel CPU, also try out the LBR unwinder, which has much better performance and produces far smaller result files:

    perf record --call-graph lbr -- yourapp
    

    The downside here is that the call stack depth is more limited compared to the default DWARF unwinder configuration.
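    Putting the recording step and the flame graph visualization together, a hypothetical end-to-end workflow looks like this (it assumes the stackcollapse-perf.pl and flamegraph.pl scripts from the FlameGraph repository linked above are on your $PATH, and that yourapp stands in for your own binary):

    ```shell
    # Record with DWARF-based call graphs (swap in "lbr" on a
    # supported Intel CPU for smaller files and faster unwinding).
    perf record --call-graph dwarf -- yourapp

    # Convert the samples into folded stacks, then render an SVG.
    # Frame width in the SVG is proportional to on-stack sample share.
    perf script | stackcollapse-perf.pl | flamegraph.pl > flames.svg
    ```

    Open flames.svg in a browser; it is interactive, so you can click a frame to zoom into that subtree.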

  • 2020-11-28 21:45

    You can get a very detailed, source level report with perf annotate, see Source level analysis with perf annotate. It will look something like this (shamelessly stolen from the website):

    ------------------------------------------------
     Percent |   Source code & Disassembly of noploop
    ------------------------------------------------
             :
             :
             :
             :   Disassembly of section .text:
             :
             :   08048484 <main>:
             :   #include <string.h>
             :   #include <unistd.h>
             :   #include <sys/time.h>
             :
             :   int main(int argc, char **argv)
             :   {
        0.00 :    8048484:       55                      push   %ebp
        0.00 :    8048485:       89 e5                   mov    %esp,%ebp
    [...]
        0.00 :    8048530:       eb 0b                   jmp    804853d <main+0xb9>
             :                           count++;
       14.22 :    8048532:       8b 44 24 2c             mov    0x2c(%esp),%eax
        0.00 :    8048536:       83 c0 01                add    $0x1,%eax
       14.78 :    8048539:       89 44 24 2c             mov    %eax,0x2c(%esp)
             :           memcpy(&tv_end, &tv_now, sizeof(tv_now));
             :           tv_end.tv_sec += strtol(argv[1], NULL, 10);
             :           while (tv_now.tv_sec < tv_end.tv_sec ||
             :                  tv_now.tv_usec < tv_end.tv_usec) {
             :                   count = 0;
             :                   while (count < 100000000UL)
       14.78 :    804853d:       8b 44 24 2c             mov    0x2c(%esp),%eax
       56.23 :    8048541:       3d ff e0 f5 05          cmp    $0x5f5e0ff,%eax
        0.00 :    8048546:       76 ea                   jbe    8048532 <main+0xae>
    [...]
    

    Don't forget to pass the -fno-omit-frame-pointer and the -ggdb flags when you compile your code.
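    If you want something small to experiment on, here is a hypothetical, self-contained reconstruction of a noploop-style busy loop (not the original source shown above); the compile flags in the comment are the ones this answer recommends:

    ```c
    /* noploop.c -- a minimal compute-bound loop to practice perf annotate on.
     * Hypothetical sketch, not the original noploop source.
     *
     * Build with frame pointers and debug info so perf can unwind and annotate:
     *   gcc -O0 -fno-omit-frame-pointer -ggdb -o noploop noploop.c
     * Profile:
     *   perf record ./noploop && perf annotate
     */
    #include <stdio.h>

    unsigned long run_loop(void)
    {
        unsigned long count = 0;
        /* perf annotate will attribute nearly all cycles to the
         * increment and compare instructions of this loop. */
        while (count < 100000000UL)
            count++;
        return count;
    }

    int main(void)
    {
        printf("count = %lu\n", run_loop());
        return 0;
    }
    ```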

  • 2020-11-28 21:52

    OK, these functions might be slow, but how do I find out where they are getting called from? As all these hotspots lie in external libraries, I see no way to optimize my code.

    Are you sure that your application someapp is built with the gcc option -fno-omit-frame-pointer (and possibly its dependent libraries)? Something like this:

    g++ -m64 -fno-omit-frame-pointer -g main.cpp
    
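    A quick (x86-64, AT&T syntax) sanity check for whether an existing binary kept its frame pointers: function prologues that preserve the frame pointer push %rbp, so a near-zero count below suggests the binary was built with -fomit-frame-pointer (the default at -O1 and above on many gcc versions) and its frame-pointer-based call graphs will be broken. This is a heuristic sketch; someapp is a placeholder for your binary:

    ```shell
    # Count frame-pointer-saving prologues; expect roughly one per function.
    objdump -d ./someapp | grep -c 'push.*%rbp'
    ```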
  • 2020-11-28 21:53

    You should give hotspot a try: https://www.kdab.com/hotspot-gui-linux-perf-profiler/

    It's available on github: https://github.com/KDAB/hotspot

    It is for example able to generate flamegraphs for you.

  • 2020-11-28 21:54

    Unless your program has very few functions and hardly ever calls a system function or does I/O, profilers that sample the program counter won't tell you much, as you're discovering. In fact, the well-known profiler gprof was created specifically to try to address the uselessness of self-time-only profiling (not that it succeeded).

    What actually works is something that samples the call stack (thereby finding out where the calls are coming from), on wall-clock time (thereby including I/O time), and reports by line or by instruction (thereby pinpointing the function calls that you should investigate, not just the functions they live in).

    Furthermore, the statistic you should look for is percent of time on stack, not number of calls, not average inclusive function time. Especially not "self time". If a call instruction (or a non-call instruction) is on the stack 38% of the time, then if you could get rid of it, how much would you save? 38%! Pretty simple, no?
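    The arithmetic behind "percent of time on stack" is easy to sketch. The toy example below (hypothetical sample data, not real perf output) counts, for each call site, the fraction of stack samples that contain it anywhere on the stack, which is exactly the inclusive statistic described above:

    ```c
    #include <stdio.h>
    #include <string.h>

    /* Four hypothetical stack samples; each row lists the call sites
     * on the stack at the moment of that sample, NULL-terminated. */
    static const char *samples[][4] = {
        { "main", "parse",  "read_file", NULL },
        { "main", "parse",  NULL,        NULL },
        { "main", "render", "draw",      NULL },
        { "main", "parse",  "read_file", NULL },
    };

    /* Fraction of samples in which `site` appears anywhere on the
     * stack -- the "on-stack" percentage; removing that call site
     * would save roughly this share of total time. */
    double on_stack_pct(const char *site)
    {
        int n = sizeof samples / sizeof samples[0], hits = 0;
        for (int i = 0; i < n; i++)
            for (int j = 0; samples[i][j]; j++)
                if (strcmp(samples[i][j], site) == 0) { hits++; break; }
        return 100.0 * hits / n;
    }

    int main(void)
    {
        printf("parse: %.0f%%\n", on_stack_pct("parse")); /* 3 of 4 samples */
        printf("draw:  %.0f%%\n", on_stack_pct("draw"));  /* 1 of 4 samples */
        return 0;
    }
    ```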

    An example of such a profiler is Zoom.

    There are more issues to be understood on this subject.

    Added: @caf got me hunting for the perf info, and since you included the command-line argument -g it does collect stack samples. Then you can get a call-tree report. If you also make sure you're sampling on wall-clock time (so you get wait time as well as CPU time), you've got almost what you need.
