Unit testing concurrent software - what do you do?


Question


As software gets more and more concurrent, how do you handle testing the core behaviour of the type with your unit tests (not the parallel behaviour, just the core behaviour)?

In the good old days, you had a type, you called it, and you checked what it returned and/or what other things it called.

Nowadays, you call a method and the actual work gets scheduled to run on the next available thread; you don't know when it'll actually start and call the other things - and what's more, those other things could be concurrent too.

How do you deal with this? Do you abstract/inject the concurrent scheduler (e.g. abstract the Task Parallel Library and provide a fake/mock in the unit tests)?
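For concreteness, here is a minimal sketch of what that injection can look like in .NET (the Processor and CurrentThreadTaskScheduler names are illustrative, not from any particular library): the production type takes a TaskScheduler as a dependency, and the unit test supplies one that executes every task inline on the calling thread, so assertions can run immediately after the call.

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    // Production code takes the scheduler as a dependency instead of
    // hard-coding TaskScheduler.Default.
    public class Processor
    {
        private readonly TaskScheduler _scheduler;

        public Processor(TaskScheduler scheduler) => _scheduler = scheduler;

        public Task ProcessAsync(Action work) =>
            Task.Factory.StartNew(work, CancellationToken.None,
                                  TaskCreationOptions.None, _scheduler);
    }

    // Test double: a scheduler that runs every task inline, making the
    // "concurrent" path fully deterministic under test.
    public class CurrentThreadTaskScheduler : TaskScheduler
    {
        protected override void QueueTask(Task task) => TryExecuteTask(task);

        protected override bool TryExecuteTaskInline(Task task, bool taskWasPreviouslyQueued) =>
            TryExecuteTask(task);

        protected override IEnumerable<Task> GetScheduledTasks() => Enumerable.Empty<Task>();
    }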

What resources have you come across that helped you?


Edit

I've edited the question to emphasise testing the normal behaviour of the type (ignoring whatever parallel mechanism is used to take advantage of multi-core, e.g. the TPL).



Answer 1:


The field of unit testing for race conditions and deadlocks is relatively new and lacks good tools.

I know of two such tools, both in early alpha/beta stages:

  • Microsoft's Chess
  • Typemock Racer

Another option is to write a "stress test" that causes deadlocks/race conditions to surface: create multiple instances/threads and run them side by side. The downside of this approach is that if the test fails, it will be very hard to reproduce. I suggest using logs in both the test and the production code so that you can work out what happened.
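A rough sketch of such a stress test (assuming xUnit, and a hypothetical Counter as the system under test):

    using System;
    using System.Linq;
    using System.Threading;
    using Xunit;

    public class CounterStressTests
    {
        [Fact]
        public void ManyThreads_FinishWithoutDeadlock()
        {
            var sut = new Counter();                 // hypothetical shared instance
            var threads = Enumerable.Range(0, 16)
                .Select(_ => new Thread(() =>
                {
                    for (int i = 0; i < 10_000; i++)
                        sut.Increment();             // hammer the SUT concurrently
                }))
                .ToList();

            threads.ForEach(t => t.Start());

            // A join timeout doubles as a crude deadlock detector: if any
            // thread is still stuck after 30 seconds, the test fails.
            Assert.All(threads, t => Assert.True(t.Join(TimeSpan.FromSeconds(30))));
        }
    }

Note that a green run proves little; only a failing run tells you something, which is exactly why logging matters here.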




Answer 2:


Disclaimer: I work for Corensic, a small startup in Seattle. We've got a tool called Jinx that is designed to detect concurrency errors in your code. It's free for now while we're in Beta, so you might want to check it out. ( http://www.corensic.com/ )

In a nutshell, Jinx is a very thin hypervisor that, when activated, slips in between the processor and operating system. Jinx then intelligently takes slices of execution and runs simulations of various thread timings to look for bugs. When we find a particular thread timing that will cause a bug to happen, we make that timing "reality" on your machine (e.g., if you're using Visual Studio, the debugger will stop at that point). We then point out the area in your code where the bug was caused. There are no false positives with Jinx. When it detects a bug, it's definitely a bug.

Jinx works on Linux and Windows, and in both native and managed code. It is language and application platform agnostic and can work with all your existing tools.

If you check it out, please send us feedback on what works and what doesn't. We've been running Jinx on some big open source projects and are already seeing situations where Jinx can find bugs 50-100 times faster than simply stress testing the code.




Answer 3:


I recommend picking up a copy of Growing Object-Oriented Software, Guided by Tests by Freeman and Pryce. The last couple of chapters are very enlightening and deal with this specific topic. The book also introduces some terminology that helps pin down a vocabulary for discussing the problem.

To summarize: their core idea is to separate the functional aspects from the concurrent/synchronization aspects.

  • First, test-drive the functional part on a single synchronous thread, like a normal object.
  • Once you have the functional part pinned down, move on to the concurrent aspect. To do that, you have to think up "observable invariants with respect to concurrency" for your object, e.g. the count should equal the number of times the method was called. Once you have identified the invariants, you can write stress tests that run multiple threads and so on to try to break them; the stress tests assert your invariants (see the sketch after this list).
  • Finally, as an added defence, run race-detection tools or static analysis to find bugs.
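As an illustration of the invariant idea (Counter and its Count property are hypothetical), a stress test asserting "the count equals the number of calls" might look like:

    using System.Linq;
    using System.Threading.Tasks;
    using Xunit;

    public class CounterInvariantTests
    {
        // Invariant: the observable count equals the total number of
        // Increment() calls, however the threads happen to interleave.
        [Fact]
        public void Count_Equals_NumberOfCalls()
        {
            const int Workers = 8, CallsPerWorker = 100_000;
            var sut = new Counter();                 // hypothetical SUT

            var tasks = Enumerable.Range(0, Workers)
                .Select(_ => Task.Run(() =>
                {
                    for (int i = 0; i < CallsPerWorker; i++)
                        sut.Increment();
                }))
                .ToArray();
            Task.WaitAll(tasks);

            Assert.Equal(Workers * CallsPerWorker, sut.Count);
        }
    }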

For passive objects, i.e. code that will be called by clients on different threads, your test needs to mimic those clients by starting its own threads. You then need to choose between a notification-listening approach and a sampling/polling approach to synchronize your tests with the SUT:

  • Either block until you receive an expected notification, or
  • poll certain observable side-effects with a reasonable timeout (a sketch of the polling helper follows).
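A sketch of the polling option (the ProcessedCount property in the usage comment is hypothetical; Assert.True is xUnit's):

    using System;
    using System.Threading;
    using Xunit;

    public static class Poll
    {
        // Sample an observable side-effect until it holds or the timeout
        // expires - the sampling/polling alternative to blocking on a
        // notification.
        public static void AssertEventually(Func<bool> probe, TimeSpan timeout)
        {
            var deadline = DateTime.UtcNow + timeout;
            while (DateTime.UtcNow < deadline)
            {
                if (probe()) return;
                Thread.Sleep(10);   // back off briefly between samples
            }
            Assert.True(probe(), "condition was not observed within the timeout");
        }
    }

    // Usage in a test:
    //   Poll.AssertEventually(() => sut.ProcessedCount == expected,
    //                         TimeSpan.FromSeconds(5));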



Answer 4:


A technique I've found useful is to run tests inside a tool that detects race conditions, such as Intel Parallel Inspector. The test runs much slower than normal, because dependencies on timing have to be checked, but a single run can find bugs that would otherwise require millions of repeated ordinary runs.

I've found this very useful when converting existing systems for fine-grained parallelism via multi-core.




Answer 5:


Unit tests really should not test concurrent/asynchronous behaviour; you should use mocks there and verify that the mocks receive the expected input.
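In C# terms, with Moq as the mocking framework (IMailer and RegistrationService are hypothetical names), the unit-test half might look like:

    using Moq;
    using Xunit;

    public class RegistrationTests
    {
        [Fact]
        public void Register_HandsTheEmailToTheMailer()
        {
            var mailer = new Mock<IMailer>();        // hypothetical collaborator
            var sut = new RegistrationService(mailer.Object);

            sut.Register("user@example.com");

            // Verify the mock received the expected input; the asynchronous
            // sending itself is deliberately not exercised here.
            mailer.Verify(m => m.Send("user@example.com", It.IsAny<string>()),
                          Times.Once());
        }
    }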

For integration tests, I just invoke the background task explicitly and then check the expectations.

In Cucumber it looks like this:

When I press "Register"
And the email sending script is run
Then I should have an email



Answer 6:


Your TPL will have its own separate unit tests, so you don't need to verify it yourself.

Given that, I write two tests for each module:
1) A single-threaded unit test that uses an environment variable or a #define to turn off the TPL, so that I can test the module for functional correctness (a sketch of this switch follows below).
2) A stress test that runs the module in its threaded, deployable mode. This test attempts to find concurrency issues and should use lots of random data.

The second test often involves many modules, so it is probably more of an integration/system test.
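A minimal sketch of the switch behind the first test (SINGLE_THREADED and Module are hypothetical names; the point is only that the TPL path can be compiled out for functional testing):

    using System;
    using System.Threading.Tasks;

    public class Module
    {
        // SINGLE_THREADED would be defined only in the functional-test
        // build configuration.
        public Task ProcessAsync(Action work)
        {
    #if SINGLE_THREADED
            work();                   // run inline: deterministic and assertable
            return Task.CompletedTask;
    #else
            return Task.Run(work);    // normal concurrent path via the TPL
    #endif
        }
    }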



Source: https://stackoverflow.com/questions/3362734/unit-testing-concurrent-software-what-do-you-do
