There is a method in our codebase which used to work fine, but does not any more (without any modification to this method):
void XXX::setCSVFileName()
{
    // get current working directory
    char the_path[1024];
    getcwd(the_path, 1023);
    printf("current dir: %s \n", the_path);

    std::string currentPath(the_path);
    std::string currentPathTmp = currentPath + "/tmp_" + pathSetParam->pathSetTravelTimeTmpTableName;
    std::string cmd = "mkdir -p " + currentPathTmp;
    if (system(cmd.c_str()) == 0) // stops here
    {
        csvFileName = currentPathTmp + "/" + pathSetParam->pathSetTravelTimeTmpTableName + ".csv";
    }
    //...
}
I tried to debug it and found the culprit line to be if (system(cmd.c_str()) == 0). I put a breakpoint on that line and tried to step over it; it just stays there.
The value of cmd, as the debugger shows it, is:
Details:{static npos = , _M_dataplus = {> = {<__gnu_cxx::new_allocator> = {}, }, _M_p = 0x306ae9e78 "mkdir -p /home/fm-simmobility/vahid/simmobility/dev/Basic/tmp_xuyan_pathset_exp_dy_traveltime_tmp"}}
I don't know what system() is doing, but top shows my application at around 100% CPU usage.
Have you ever hit such a situation?
IMPORTANT UPDATE: As usual, I started reverting the changes in my code one by one, back to the state prior to the problem. Surprisingly, I found the cause (but not the solution... yet).
I added -pg to my compilation options to enable gprof, and that is what caused the issue. Maybe you have some knowledge of why gprof doesn't like system() or mkdir?
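In case it helps narrow things down, a workaround I am considering (just an untested sketch on my side, not a confirmed fix for the -pg interaction) is to avoid system() altogether and create the directory with the POSIX mkdir() call, which skips the fork/exec that system() performs. The helper name makeDirectory below is mine, purely for illustration:

#include <sys/stat.h>    // ::mkdir
#include <sys/types.h>
#include <cerrno>
#include <string>

// Create a single directory level; unlike "mkdir -p" this does not create
// missing parents, which is fine here because the parent is the cwd.
static bool makeDirectory(const std::string& path)
{
    if (::mkdir(path.c_str(), 0755) == 0)
        return true;             // newly created
    return errno == EEXIST;      // already existing is also OK
}

and then in setCSVFileName() the check would become if (makeDirectory(currentPathTmp)) { ... }.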
thanks
You said in a comment on your other question that you needed to use gprof to support the results generated by your own profiler.
In other words, you want to write a profiler and compare it to gprof, and you're questioning if the -pg flag is making system() hang.
I'm saying forget about the -pg flag. All that does is put call-counting code for gprof in the functions the compiler sees.
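(To make "call-counting code" concrete: with -pg, GCC inserts a call to a profiling hook, traditionally named mcount, at the entry of every function it compiles, and links in the runtime that maintains the counts. The snippet below is only an illustration of that idea, not the actual emitted code; the real hook is generated in assembly, and its exact name and signature vary by platform, e.g. mcount, _mcount or __fentry__.)

extern "C" void mcount();    // provided by the profiling runtime that -pg links in

void someFunction()
{
    mcount();                // conceptually: record the caller/callee pair for gprof's call counts
    // ... the original body of the function follows unchanged ...
}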
If I were you I would find something better to compare your profiler to.
Remember the typical reason why people use a profiler is to find speedups,
and they may think collecting measurements will help them do that.
It doesn't.
What it does instead is convince them there are no speedups to be found.
(They ask questions like "new is taking 5% of the time, and that's my bottleneck, how can I speed it up?")
That's what gprof has done for us.
Here's a table of profiler features, from poor to better to best:
                             | gprof | perf | zoom | pausing
samples program counter      |   X   |  X   |  X   |   X
show self % by function      |   X   |  X   |  X   |   X
show inclusive % by function |       |  X   |  X   |   X
samples stack                |       |  X   |  X   |   X
detects extra calls          |       |  X   |  X   |   X
show self % by line          |       |  X   |  X   |   X
show inclusive % by line     |       |  ?   |  X   |   X
handles recursion properly   |       |  ?   |  X   |   X
samples on wall-clock time   |       |      |  X   |   X
let you examine samples      |       |      |      |   X
The reason these are important is that speedups are really good at hiding from profilers:
- If % by line is not shown, the speedup may be anywhere inside a large function.
- If inclusive % is not shown, extraneous calls are not seen.
- If samples are not taken on wall-clock time, extraneous I/O or blocking is not seen.
- If a hot path is shown, speedups can hide on either side of it.
- If a call graph is shown, speedups can hide in it by not being localized to "A calls B", such as when a "tunnel" function sits in between.
- If a flame graph is shown, speedups can hide in it by not aggregating the samples that could be removed.
But they can't hide from simply examining stack samples.
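To make "examining stack samples" and "wall-clock time" concrete, here is a minimal, illustrative sketch in C++ for Linux/glibc. It is not production code (backtrace() is not formally async-signal-safe) and it is not a replacement for simply pausing the program under a debugger; it only demonstrates the two properties from the table: whole-stack samples, taken on wall-clock time so that blocking and I/O are counted too.

#include <execinfo.h>    // backtrace, backtrace_symbols_fd (glibc)
#include <sys/time.h>    // setitimer, itimerval
#include <csignal>
#include <unistd.h>

// Signal handler: dump the current call stack to stderr as one sample.
static void dumpStack(int)
{
    void* frames[64];
    int depth = backtrace(frames, 64);
    backtrace_symbols_fd(frames, depth, STDERR_FILENO);
    write(STDERR_FILENO, "----\n", 5);    // separator between samples
}

int main()
{
    std::signal(SIGALRM, dumpStack);

    // ITIMER_REAL counts wall-clock time, so samples also land while the
    // program is blocked in I/O or sleeping, unlike CPU-time-only sampling.
    itimerval every100ms = {};
    every100ms.it_interval.tv_usec = 100000;
    every100ms.it_value.tv_usec    = 100000;
    setitimer(ITIMER_REAL, &every100ms, nullptr);

    // ... run the workload to be profiled here, then read the dumped stacks
    // by eye, looking for calls/lines that recur across many samples.
    for (;;)
        pause();
}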
P.S. Here are some examples of how speedups can hide from profilers.
If the profiler shows a "hot path", it only shows a small subset of the stack samples, so it can only expose small problems. But there could be a large problem that would be evident if the stack samples were compared for similarity, not just equality.
Speedups can also hide in call graphs. For example, the fact that A1 always calls C2 and A2 always calls C1 can be obscured by a "tunnel" function B (which might be multiple layers deep); looking at the raw call stacks, a human recognizes the pattern easily.
Similarly, the fact that A always calls C can be obscured when A calls any of a number of Bi functions (possibly over multiple layers) that then call C. Again, the pattern is easy to recognize in the call stacks. (A small code sketch of the first of these patterns follows below.)
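A purely illustrative sketch of that first "tunnel" pattern (the names A1, A2, B, C1, C2 are the hypothetical ones used above): a call graph only records the edges A1->B, A2->B, B->C1 and B->C2, so the invariant "A1 always ends up in C2 and A2 always ends up in C1" is invisible there, yet it jumps out of a handful of raw stack samples.

#include <functional>

void C1() { /* ... expensive work ... */ }
void C2() { /* ... expensive work ... */ }

// The "tunnel": a generic middle layer that everything funnels through.
void B(const std::function<void()>& work) { work(); }

void A1() { B(C2); }    // A1 always ends up in C2
void A2() { B(C1); }    // A2 always ends up in C1

int main()
{
    A1();
    A2();
}
// Stack samples read:  main > A1 > B > C2   and   main > A2 > B > C1
// which makes the A1-to-C2 / A2-to-C1 pairing obvious, while the call graph
// (edges only) cannot show it.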
Another way is if the stack samples show that a lot of time is spent calling functions that have the same name but belong to different classes (and are therefore different functions), or have different names but are related by a similar purpose.
In a profiler these conspire to divide the time into small amounts, telling you there is nothing big going on. That's a consequence of people "looking for slow functions" which is actually a form of blinders.
Source: https://stackoverflow.com/questions/25660216/systemmkdir-hangs-in-my-application