How to do “performance-based” (benchmark) unit testing in Python

Submitted by 匆匆过客 on 2019-12-20 10:13:23

Question


Let's say that I've got my code base to as high a degree of unit test coverage as makes sense. (Beyond a certain point, increasing coverage doesn't have a good ROI.)

Next I want to test performance: benchmark the code to make sure that new commits aren't slowing things down needlessly. I was very intrigued by Safari's zero-tolerance policy for slowdowns from commits. I'm not sure that level of commitment to speed has a good ROI for most projects, but I'd at least like to be alerted when a speed regression happens, and be able to make a judgment call about it.

The environment is Python on Linux, and a suggestion that also works for Bash scripts would make me very happy. (But Python is the main focus.)


Answer 1:


You will want to do performance testing at a system level if possible - test your application as a whole, in context, with data and behaviour as close to production use as possible.

This is not easy, and it will be even harder to automate it and get consistent results.

Moreover, you can't use a VM for performance testing (unless your production environment runs in VMs, and even then you'd need to run the VM on a host with nothing else running on it).

As for performance unit-testing: it may be valuable, but only if it is used to diagnose a problem that really exists at the system level (not just in the developer's head).

Also, the performance of units measured in unit tests sometimes fails to reflect their performance in context, so it may not be useful at all.




Answer 2:


While I agree that testing performance at the system level is ultimately more relevant, if you'd like to do unittest-style load testing for Python, FunkLoad (http://funkload.nuxeo.org/) does exactly that.
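
For illustration, here is a minimal sketch of what such a test case looks like, based on FunkLoad's documented FunkLoadTestCase API (the class name, description text, and the [main] url config entry are placeholders for your own setup):

```python
from funkload.FunkLoadTestCase import FunkLoadTestCase

class HomePage(FunkLoadTestCase):
    """Fetch the home page; FunkLoad records each request's response time."""

    def test_home_page(self):
        # the url option is read from the [main] section of the
        # accompanying HomePage.conf file
        server_url = self.conf_get('main', 'url')
        self.get(server_url, description='Get home page')
```

As far as I recall, you then run this with FunkLoad's fl-run-test command (a single functional pass) or fl-run-bench (concurrent load), which produce response-time reports.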

Micro-benchmarks have their place when you're trying to speed up a specific action in your codebase, and adding a performance unit test afterwards is a useful way to ensure that the action you just optimized doesn't unintentionally regress in future commits.
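
Here is a minimal sketch of such a guard using only the standard library (tokenize stands in for whatever routine you just optimized, and MAX_SECONDS must be calibrated with generous headroom over the measured baseline so machine-to-machine noise doesn't cause false failures):

```python
import timeit
import unittest

def tokenize(line):
    # hypothetical stand-in for the routine that was just optimized
    return line.split(',')

class TokenizePerformanceTest(unittest.TestCase):
    # ceiling chosen with generous headroom over the measured baseline
    MAX_SECONDS = 0.5

    def test_tokenize_has_not_regressed(self):
        sample = 'alpha,beta,gamma,' * 100
        # time 10,000 calls; fail the build if the total exceeds the ceiling
        elapsed = timeit.timeit(lambda: tokenize(sample), number=10000)
        self.assertLess(elapsed, self.MAX_SECONDS)

if __name__ == '__main__':
    unittest.main()
```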




Answer 3:


MarkR is right: doing real-world performance testing is key, and it can be somewhat dodgy in unit tests. Having said that, have a look at the cProfile module in the standard library. It will at least be useful for giving you a relative sense, from commit to commit, of how fast things are running, and you can run it within a unit test, though of course the results will include the overhead of the unit-test framework itself.
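
For example, a small wrapper like this can capture cProfile output from inside a test (profile_call is my own sketch, not a standard-library function; treat the numbers it reports as relative, commit-to-commit indicators only):

```python
import cProfile
import io
import pstats

def profile_call(func, *args, **kwargs):
    """Run func under cProfile and return (result, stats report text)."""
    profiler = cProfile.Profile()
    result = profiler.runcall(func, *args, **kwargs)
    stream = io.StringIO()
    stats = pstats.Stats(profiler, stream=stream)
    stats.sort_stats('cumulative').print_stats(10)  # top 10 by cumulative time
    return result, stream.getvalue()
```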

All in all, though, if your objective is zero tolerance, you'll need something much more robust than this; cProfile in a unit test won't cut it at all, and may be misleading.




Answer 4:


When I do performance testing, I generally have a test suite of data inputs, and measure how long it takes the program to process each one.
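
A minimal sketch of that kind of harness (process and the inputs mapping are placeholders for your own entry point and benchmark data set):

```python
import time

def time_test_suite(process, inputs):
    """Run process(data) once per input and record wall-clock seconds."""
    timings = {}
    for name, data in inputs.items():
        start = time.perf_counter()
        process(data)
        timings[name] = time.perf_counter() - start
    return timings
```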

You can log the performance daily or weekly, but I don't find it particularly useful to worry about performance until all the functionality is implemented.
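
If you do keep a running log, appending each run's timings to a CSV file is one simple way to spot week-over-week trends (log_timings and the column layout are assumptions of this sketch, not an established convention):

```python
import csv
import datetime

def log_timings(csv_path, timings):
    """Append one dated row per input so trends can be compared over time."""
    with open(csv_path, 'a', newline='') as f:
        writer = csv.writer(f)
        today = datetime.date.today().isoformat()
        for name, seconds in sorted(timings.items()):
            writer.writerow([today, name, '%.4f' % seconds])
```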

If performance is too poor, then I break out cProfile, run it with the same data inputs, and try to see where the bottlenecks are.
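
That step might look like the following (process_file and the input path are stand-ins for your own code and data):

```python
import cProfile
import pstats

def process_file(path):
    # hypothetical stand-in for the real processing code
    with open(path) as f:
        return sum(len(line) for line in f)

if __name__ == '__main__':
    # profile one of the same data inputs used in the timing suite
    cProfile.run('process_file("inputs/large_case.dat")', 'bottlenecks.prof')

    # show the 15 functions where the most time is actually spent
    pstats.Stats('bottlenecks.prof').sort_stats('tottime').print_stats(15)
```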



Source: https://stackoverflow.com/questions/671503/how-to-do-performance-based-benchmark-unit-testing-in-python
