Benchmarking with googletest?


Question


Background (skip to Question below if not interested)

I have a simulator that runs through three states:

  1. Single threaded startup (I/O ok)
  2. Multi-threaded in-memory CPU-bound simulation stage (I/O not ok)
  3. Post-simulation, post-join single threaded stage (I/O ok)

What the heck! During standard testing, CPU usage dropped from 100% down to 20%, and the total run took about 30 times longer than normal (130 s vs 4.2 s).

When Callgrind revealed nothing suspicious, I was on the verge of rolling back to the last commit and losing all my bug fixes.

Discouraged, I walked into the server room during a run and noticed nasty grinding sounds, later verified to be caused by writes to MySQL sockets listed in /proc/PID/fd! It turned out that MySQL code, several layers deep in Stage 2, was causing the problem.

Lessons Learned

  1. Accidental I/O can be lethal to a real-time application
  2. Unit testing is not enough: I need benchmarking, too

Fix: I will introduce thread-local-storage IOSentinels and assert() on ReadAllowed() and WriteAllowed() to ensure that Stage 2 threads never do any I/O (a sketch follows below).
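
As a minimal sketch of that idea (the names IOSentinel, ReadAllowed(), and WriteAllowed() come from the description above; everything else is an assumed implementation):

#include <cassert>

// Per-thread flags that gate I/O; Stage 2 workers call ForbidIO() on entry.
class IOSentinel {
public:
    static bool ReadAllowed()  { return read_allowed_; }
    static bool WriteAllowed() { return write_allowed_; }
    static void ForbidIO() { read_allowed_ = write_allowed_ = false; }
    static void AllowIO()  { read_allowed_ = write_allowed_ = true; }
private:
    static thread_local bool read_allowed_;
    static thread_local bool write_allowed_;
};

thread_local bool IOSentinel::read_allowed_  = true;
thread_local bool IOSentinel::write_allowed_ = true;

Every I/O wrapper would then assert(IOSentinel::ReadAllowed()) or assert(IOSentinel::WriteAllowed()) before touching a file or socket, so a stray Stage 2 write fails fast in a debug build.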

Question

Has anyone had any luck attaching or writing a benchmarking framework to work with googletest?

Unfortunately, all my googletests passed this time. Had I stepped away for a bit and come back without noticing the runtime, this would have been a disastrous commit, and possibly much harder to fix.

I would like googletest to fail if a run takes more than 2 or 3 times the last runtime. This last part is tricky, because for very quick runs system state can cause something to take twice as long and still be ok. But for a long simulation run/test, I don't expect runtimes to change by a great deal (more than 50% would be unusual). A rough sketch of such a check follows below.

I am open to suggestions here, but it would be nice to have a low-maintenance check that would work with automated testing so it will be obvious if the system suddenly got slow, even if all the outputs appear to be ok.
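
For concreteness, here is a rough sketch of the check described above; the threshold values are illustrative assumptions only:

// Accept very quick runs unconditionally (system noise dominates there);
// otherwise require the run to stay within 2x the last recorded runtime.
bool RuntimeAcceptable(double elapsedSec, double lastRunSec) {
    const double kQuickRunFloorSec = 1.0; // assumed cutoff for "very quick"
    const double kMaxSlowdownRatio = 2.0;
    if (elapsedSec < kQuickRunFloorSec) return true;
    return elapsedSec <= kMaxSlowdownRatio * lastRunSec;
}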


Answer 1:


Some updates on this question (in 2016):

  1. Here is a nice blog post (2012) by Nick Brunn about his Hayai benchmarking framework.

    • It does not provide a way to specify running-time requirements.
    • Its syntax is very similar to Google Test's.
    • It reports the benchmarking results to the user or a Continuous Integration framework. Also have a look at MojaveWastelander's fork for active development and MSVC support.
  2. Google published 'Benchmark' in 2014. It provides similar behaviour to Hayai above; as far as I understand, defining requirements is not possible. Again, the syntax is inspired by GoogleTest.

    • There are even advanced features, such as measuring complexity (big-O); a usage sketch follows this list.
  3. GoogleTest has this as an open feature request on GitHub. There is a rudimentary implementation, but it is not part of GoogleTest yet.
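
As an illustration of item 2, a minimal Google Benchmark case using the complexity feature might look like this (a sketch with a placeholder workload, not tied to the question's simulator; BM_Sort and the size range are arbitrary):

#include <benchmark/benchmark.h>
#include <algorithm>
#include <numeric>
#include <vector>

// Placeholder workload: sort a reverse-ordered vector of state.range(0) ints.
static void BM_Sort(benchmark::State& state) {
    std::vector<int> v(state.range(0));
    for (auto _ : state) {
        std::iota(v.rbegin(), v.rend(), 0); // reset to descending order
        std::sort(v.begin(), v.end());
    }
    state.SetComplexityN(state.range(0)); // feeds the big-O estimate
}
BENCHMARK(BM_Sort)->RangeMultiplier(2)->Range(1 << 10, 1 << 16)->Complexity();
BENCHMARK_MAIN();

Running this prints per-size timings plus a fitted complexity (e.g. NlgN); as noted above, the framework reports results rather than enforcing runtime requirements.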




Answer 2:


Isn't it just as simple as this?

#include <ctime> // clock, CLOCKS_PER_SEC

const clock_t t0 = clock(); // CPU time; use gettimeofday/std::chrono for wall-clock time
const int res = yourFunction();
const clock_t t1 = clock();
const double elapsedSec = (t1 - t0) / static_cast<double>(CLOCKS_PER_SEC);
EXPECT_EQ(EXPECTED, res);
EXPECT_LT(elapsedSec, 10.0); // fail if the call took 10 seconds or more

Here, you need to manually change 10.0 depending on your task.

Of course, you can go further by something like:

#include <fstream>

double prev = -1;
{
  std::ifstream ifs("/var/tmp/time_record.txt"); // last recorded runtime
  ifs >> prev;
}
if (prev < 0) prev = DEFAULT_VALUE; // no previous record: fall back to a default
// ... run and time the test as above ...
EXPECT_LT(elapsedSec, 2 * prev); // fail if more than twice the last runtime
{
  std::ofstream ofs("/var/tmp/time_record.txt"); // record this run's time
  ofs << elapsedSec << std::endl;
}

But I wonder whether this additional complexity can really be justified.




Answer 3:


The Google Test framework measures and prints each test's elapsed time by default. This behaviour is controlled by an environment variable, GTEST_PRINT_TIME, which defaults to 1.

So, why not monitor elapsed time using this feature of the Google Test platform?

Here is a word on the elapsed-time variable in Google Test.
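
If you want to act on those timings rather than just read them, one option is a test-event listener; here is a sketch (my assumption, not from the original answer; the 2000 ms budget is an arbitrary placeholder):

#include <gtest/gtest.h>
#include <cstdio>

// Report any test whose elapsed time exceeds a fixed budget.
class SlowTestListener : public ::testing::EmptyTestEventListener {
public:
    explicit SlowTestListener(long long budgetMs) : budgetMs_(budgetMs) {}
    void OnTestEnd(const ::testing::TestInfo& info) override {
        const long long ms = info.result()->elapsed_time(); // milliseconds
        if (ms > budgetMs_)
            std::printf("[  SLOW  ] %s.%s took %lld ms\n",
                        info.test_case_name(), info.name(), ms);
    }
private:
    long long budgetMs_;
};

int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);
    ::testing::UnitTest::GetInstance()->listeners().Append(
        new SlowTestListener(2000)); // the listener list takes ownership
    return RUN_ALL_TESTS();
}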



Source: https://stackoverflow.com/questions/8565666/benchmarking-with-googletest
