Is there hard evidence of the ROI of unit testing?

Asked by 既然无缘 on 2020-12-12 08:54 · 11 answers · 2329 views

Unit testing sounds great to me, but I'm not sure I should spend any time really learning it unless I can convince others that it has significant value. I have to convince the other programmers and, more importantly, the bean-counters in management, that all the extra time spent learning the testing framework, writing tests, keeping them updated, etc., will pay for itself, and then some.

11 Answers
  • 2020-12-12 09:50

There are statistics showing that fixing a bug found in unit/integration testing costs many times less than fixing it once it's on the live system (they are based on monitoring thousands of real-life projects).

    Edit: for example, as pointed out, the book "Code Complete" reports on such studies (paragraph 20.3, "Relative Effectiveness of Quality Techniques"). But there is also private research in the consulting field that proves that as well.

  • 2020-12-12 09:52

"I have to convince the other programmers and, more importantly, the bean-counters in management, that all the extra time spent learning the testing framework, writing tests, keeping them updated, etc., will pay for itself, and then some."

    Why?

Why not just do it, quietly and discreetly? You don't have to do it all at once. You can do it in tiny pieces.

    The framework learning takes very little time.

    Writing one test, just one, takes very little time.

    Without unit testing, all you have is some confidence in your software. With one unit test, you still have your confidence, plus proof that at least one test passes.

    That's all it takes. No one needs to know you're doing it. Just do it.
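To make "one test takes very little time" concrete, here is a minimal sketch using Python's built-in unittest module. The `slugify` function is a hypothetical stand-in for whichever small piece of your own code you pick first:

```python
import unittest


def slugify(title):
    """Hypothetical production function: turn a title into a URL slug."""
    return title.strip().lower().replace(" ", "-")


class TestSlugify(unittest.TestCase):
    def test_basic_title(self):
        # One assertion is all it takes: the suite now proves something.
        self.assertEqual(slugify("  Hello World "), "hello-world")

# Run with: python -m unittest <this file>
```

Nobody else has to know this file exists; it just quietly starts passing on every run.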

  • 2020-12-12 09:52

    I take a different approach to this:

    What assurance do you have that your code is correct? Or that it doesn't break assumption X when someone on your team changes func1()? Without unit tests keeping you 'honest', I'm not sure you have much assurance.

    The notion of keeping tests updated is interesting. The tests themselves don't often have to change. I've got 3x the test code compared to the production code, and the test code has been changed very little. It is, however, what lets me sleep well at night and the thing that allows me to tell the customer that I have confidence that I can implement the Y functionality without breaking the system.

Perhaps in academia there is evidence, but I've never worked anywhere in the commercial world where anyone would pay for such a study. I can tell you, however, that it has worked well for me: it took little time to get accustomed to the testing framework, and writing tests made me think far harder about my requirements and design than I ever did on teams that wrote no tests.

Here's where it pays for itself: 1) you have confidence in your code, and 2) you catch problems earlier than you would otherwise. You don't have the QA guy say, "Hey, you didn't bother bounds-checking the xyz() function, did you?" He doesn't get to find that bug, because you found it a month ago. That is good for him, good for you, good for the company and good for the customer.
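The bounds-checking scenario can be sketched as follows. Both `get_item` and the test names are hypothetical; the point is that the test encodes the check QA would otherwise discover missing:

```python
import unittest


def get_item(items, index):
    """Hypothetical accessor: a bounds-checked lookup into a list."""
    if not 0 <= index < len(items):
        raise IndexError(f"index {index} out of range for {len(items)} items")
    return items[index]


class TestGetItemBounds(unittest.TestCase):
    def test_valid_index(self):
        self.assertEqual(get_item(["a", "b", "c"], 1), "b")

    def test_negative_index_rejected(self):
        # Without the explicit check, items[-1] would silently return "c".
        with self.assertRaises(IndexError):
            get_item(["a", "b", "c"], -1)

    def test_past_end_rejected(self):
        with self.assertRaises(IndexError):
            get_item(["a", "b", "c"], 3)

# Run with: python -m unittest <this file>
```

If someone later removes the range check, these tests fail the same day, not a month later in QA.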

    Clearly this is anecdotal, but it has worked wonders for me. Not sure I can provide you with spreadsheets, but my customer is happy and that is the end goal.

  • 2020-12-12 09:53

    We've demonstrated with hard evidence that it's possible to write crappy software without Unit Testing. I believe there's even evidence for crappy software with Unit Testing. But this is not the point.

    Unit Testing or Test Driven Development (TDD) is a Design technique, not a test technique. Code that's written test driven looks completely different from code that is not.
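One way to illustrate "the test shapes the design": in a TDD workflow the test is written first, so the code's interface is dictated by how the test wants to call it. A hedged sketch with hypothetical names:

```python
import unittest


class TestPriceWithDiscount(unittest.TestCase):
    # Step 1 (red): this test is written first, before the function exists,
    # and it fixes the signature the caller actually wants.
    def test_half_price(self):
        self.assertEqual(price_with_discount(200.0, 0.5), 100.0)

    def test_no_discount(self):
        self.assertEqual(price_with_discount(100.0, 0.0), 100.0)


# Step 2 (green): the simplest implementation that makes the tests pass.
def price_with_discount(price, discount):
    return price * (1.0 - discount)

# Run with: python -m unittest <this file>
```

The function ends up as a small, side-effect-free unit because that is the only shape the test could call easily.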

Even though this is not your question, I wonder whether hunting for evidence is really the best road to go down: you would be answering a question that may itself be asked wrong. Even if you find hard evidence for your case, somebody else might find hard evidence against it.

    Is it the business of the bean counters to determine how the technical people should work? Are they providing the cheapest tools in all cases because they believe you don't need more expensive ones?

    This argument is either won based on trust (one of the fundamental values of agile teams) or lost based on role power of the winning party. Even if the TDD-proponents win based on role power I'd count it as lost.

  • 2020-12-12 09:54

Just to add more information to these answers, there are two meta-analysis resources that may help in figuring out the productivity and quality effects of TDD in both academic and industry settings:

    Guest Editors' Introduction: TDD—The Art of Fearless Programming [link]

    All researchers seem to agree that TDD encourages better task focus and test coverage. The mere fact of more tests doesn't necessarily mean that software quality will be better, but the increased programmer attention to test design is nevertheless encouraging. If we view testing as sampling a very large population of potential behaviors, more tests mean a more thorough sample. To the extent that each test can find an important problem that none of the others can find, the tests are useful, especially if you can run them cheaply.

    Table 1. A summary of selected empirical studies of test-driven development: industry participants*

    Table 2. A summary of selected empirical studies of TDD: academic participants*

    The Effects of Test-Driven Development on External Quality and Productivity: A Meta-Analysis [link]

    Abstract:

    This paper provides a systematic meta-analysis of 27 studies that investigate the impact of Test-Driven Development (TDD) on external code quality and productivity.

    The results indicate that, in general, TDD has a small positive effect on quality but little to no discernible effect on productivity. However, subgroup analysis has found both the quality improvement and the productivity drop to be much larger in industrial studies in comparison with academic studies. A larger drop of productivity was found in studies where the difference in test effort between the TDD and the control group's process was significant. A larger improvement in quality was also found in the academic studies when the difference in test effort is substantial; however, no conclusion could be derived regarding the industrial studies due to the lack of data.

    Finally, the influence of developer experience and task size as moderator variables was investigated, and a statistically significant positive correlation was found between task size and the magnitude of the improvement in quality.
