In writing unit tests, for each object that the unit interacts with, I take these steps (stolen from my understanding of JBrains' Integration Tests Are a Scam):
First, it's definitely harder to get this level of coverage with integration tests, so I think unit tests are still superior. However, I think you have a point. It's hard to keep your objects' behavior in sync.
An answer to this is to have partial integration tests that have real services 1 level deep, but beyond that are mocks. For instance:
var sut = new SubjectUnderTest(new Service1(Mock.Of<Service1A>(), ...), ...);
This solves the problem of keeping behaviors in sync, but compounds the complexity, because you now have to set up many more mocks.
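A minimal sketch of this 1-level-deep arrangement, translated to plain Java with a hand-written stub instead of Moq (Service1, Service1A, and SubjectUnderTest are hypothetical names mirroring the snippet above):

```java
// Hypothetical sketch: the real Service1 is used one level deep,
// while its own dependency (Service1A) is stubbed.
interface Service1A {
    String fetch();
}

class Service1 {
    private final Service1A dep;
    Service1(Service1A dep) { this.dep = dep; }
    String process() { return "processed:" + dep.fetch(); }
}

class SubjectUnderTest {
    private final Service1 service;
    SubjectUnderTest(Service1 service) { this.service = service; }
    String run() { return service.process().toUpperCase(); }
}

public class PartialIntegrationTest {
    public static void main(String[] args) {
        // Only the second-level dependency is faked; Service1 is real,
        // so its behavior cannot drift out of sync with a mock of it.
        Service1A stubA = () -> "data";
        SubjectUnderTest sut = new SubjectUnderTest(new Service1(stubA));
        String result = sut.run();
        if (!result.equals("PROCESSED:DATA")) {
            throw new AssertionError(result);
        }
        System.out.println(result); // prints "PROCESSED:DATA"
    }
}
```

The design choice here is that only leaf dependencies are replaced; anything the subject touches directly is the real implementation.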
You can solve this problem in a functional programming language using discriminated unions. For instance:
// discriminated union
type ResponseType =
    | Success
    | Fail of string // takes an argument of type string

// a function
let saveObject x =
    if x = "" then
        Fail "argument was empty"
    else
        // do something
        Success

let result = saveObject arg

// handle response types
match result with
| Success -> printf "success"
| Fail msg -> printf "Failure: %s" msg
You define a discriminated union called ResponseType that has a number of possible states, some of which can carry arguments and other metadata. Every time you access a return value, you have to deal with the various possible conditions. If you were to add another failure type or success type, the compiler would give you a warning for each place you don't handle the new member.
This concept can go a long way toward handling the evolution of a program. C#, Java, Ruby and other languages use exceptions to communicate failure conditions. But these failure conditions are frequently not "exceptional" circumstances at all, which ends up leading to the situation you are dealing with.
I think functional languages come closest to providing the best answer to your question. Frankly, I don't think there is a perfect answer, or even a good answer in many languages. But compile-time checking can go a long way.
You should not trust human beings (even yourself) to keep mock and real software components in sync.
"Then what is your proposal?" I hear you ask. My proposal is:
You should write mocks.
You should only write mocks for software components that you maintain.
If you maintain a software component with another developer, the two of you should maintain the mock of that component together.
You should not mock someone else's component.
When you write a unit test for your component, you should write a separate unit test for the mock of that component. Let's call that MockSynchTest.
In MockSynchTest you should compare every behavior of the mock with the real component.
When you make changes to your component, you should run the MockSynchTest to see whether your mock and component have fallen out of sync.
If you need the mock of a component that you do not maintain while testing your components, ask the developer of that component for the mock. If she can provide you with a well-tested mock, good for her and lucky for you. If she cannot, kindly ask her to follow this guideline and provide you with a well-tested mock.
This way, if you accidentally let your mock fall out of sync, there will be a failing test case to warn you.
This way, you do not need to know the implementation details of a foreign component in order to mock it.
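The guideline above can be sketched in plain Java (Repository, RealRepository, and MockRepository are hypothetical names; a real MockSynchTest would enumerate every specified behavior, not just one method):

```java
// Hypothetical sketch: a hand-maintained mock and a "MockSynchTest"
// that checks the mock against the real component, behavior by behavior.
interface Repository {
    boolean save(String value); // contract: returns false for null/empty input
}

class RealRepository implements Repository {
    public boolean save(String value) {
        if (value == null || value.isEmpty()) return false;
        // ... real persistence elided ...
        return true;
    }
}

class MockRepository implements Repository {
    public boolean save(String value) {
        // must mirror RealRepository's observable behavior
        return value != null && !value.isEmpty();
    }
}

public class MockSynchTest {
    public static void main(String[] args) {
        Repository real = new RealRepository();
        Repository mock = new MockRepository();
        // Compare every specified behavior of the mock with the real component.
        for (String input : new String[] { null, "", "hello" }) {
            if (real.save(input) != mock.save(input)) {
                throw new AssertionError("mock out of sync for input: " + input);
            }
        }
        System.out.println("mock and real component are in sync");
    }
}
```

The point is that this test lives next to the real component, so any change to RealRepository that the mock does not mirror fails the build immediately.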
See also: How-to-write-good-tests#dont-mock-type-you-dont-own
I do it this way.
Suppose I have to change the responses from interface method foo(). I gather all the collaboration tests that stub foo() in a list. I gather all the contract tests for method foo(), or, if I don't have contract tests, I gather all the tests for all the current implementations of foo() in a list.
Now I create a version control branch, because it'll be messy for a while.
I @Ignore (JUnit speak) or otherwise disable the collaboration tests that stub foo(), then start re-implementing and re-running them one by one. I get them all passing. I can do this without touching any production implementation of foo().
Now I re-implement the objects that implement foo() one by one, with expected results that match the new return values from the stubs. Remember: stubs in collaboration tests correspond to expected results in contract tests.
At this point, all the collaboration tests assume the new responses from foo() and the contract tests/implementation tests expect the new responses from foo(), so It Should All Just Work. (TM)
Now integrate your branch and pour yourself some wine.
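The stub/contract correspondence at the heart of this process might be sketched like this in plain Java (Service, Consumer, and RealService are hypothetical names; the point is that the value stubbed in the collaboration test is the same value the contract test expects):

```java
// Hypothetical sketch: a collaboration test stubs foo() with an agreed
// response, and a contract test pins every implementation to that response.
interface Service {
    int foo();
}

class Consumer {
    private final Service service;
    Consumer(Service service) { this.service = service; }
    int doubled() { return service.foo() * 2; }
}

class RealService implements Service {
    public int foo() { return 21; } // the response being changed in the process above
}

public class CollaborationAndContractTests {
    public static void main(String[] args) {
        // Collaboration test: stub foo() with the (new) agreed response.
        Service stub = () -> 21;
        assertEquals(42, new Consumer(stub).doubled());

        // Contract test: every implementation must actually return that response.
        assertEquals(21, new RealService().foo());

        System.out.println("stubbed response and contract expectation agree");
    }

    static void assertEquals(int expected, int actual) {
        if (expected != actual) throw new AssertionError(expected + " != " + actual);
    }
}
```

Changing foo()'s response then means updating the stubbed value in the collaboration tests and the expected value in the contract tests in lockstep, which is exactly what the step-by-step process above walks through.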
Revised: This is a tradeoff: ease of testing by isolating an object from its environment versus confidence that everything works when all the pieces come together.