Unit testing is good (as I have blogged before); with a test framework, you can test just a little bit of your code without having to set up an entire environment (if one even exists yet!) or run through complicated manual processes before ever getting near the code you want to exercise.
The trouble is that tests can so easily miss the point of what it is you are testing.
I am a great believer in the style of design by contract -- objects interact with the world through an interface, which defines a set of stimuli to which the object can respond and, for each stimulus, what it should do in terms of success or failure (failure possibly being non-deterministic, e.g. out of memory, I/O failure...). Tests should be written to validate the contract, and be oblivious to the implementation. If an implementation change is made -- such as to enhance performance -- then the tests should not fail. Coverage percentage may well change, but if the contract is still followed, and the test tests to the contract, it should still pass.
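To make the idea concrete, here is a minimal Python sketch (the original is .Net-flavoured, and the Stack names are my own illustrative invention, not from the post): a contract expressed as an interface, and a test function that exercises only that contract, so any conforming implementation passes it unchanged.

```python
from typing import Any, Protocol

class Stack(Protocol):
    """The contract: push accepts any item; pop returns items in LIFO
    order; pop on an empty stack raises IndexError."""
    def push(self, item: Any) -> None: ...
    def pop(self) -> Any: ...

class ListStack:
    """One possible implementation; a linked-list version would do
    equally well and must pass the same contract test."""
    def __init__(self) -> None:
        self._items: list = []
    def push(self, item: Any) -> None:
        self._items.append(item)
    def pop(self) -> Any:
        if not self._items:
            raise IndexError("pop from empty stack")
        return self._items.pop()

def check_stack_contract(stack: Stack) -> None:
    # Validates only the contract; says nothing about how the stack
    # stores its items, so implementation changes cannot break it.
    stack.push(1)
    stack.push(2)
    assert stack.pop() == 2   # LIFO order
    assert stack.pop() == 1
    try:
        stack.pop()
        raise AssertionError("pop on an empty stack must fail")
    except IndexError:
        pass

check_stack_contract(ListStack())
```

Swapping `ListStack` for any other conforming implementation leaves `check_stack_contract` untouched, which is exactly the property the post argues for.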
Where an object collaborates with other objects, it does so via their interfaces (in .Net terms, public, or public plus internal, methods according to context). To prise an object free from the whole application context, it needs to be provided with dummy objects which supply the collaborative environment in which it will work.
This is the problem with some well-known mocking frameworks -- they do not facilitate implementation of the collaborator's interface in general form, as a proper test should. This can rapidly get you into a way of thinking about test writing that tests implementation and not interface. The NMock cheat-sheet is an example of what you should not do. To take one particular example:
Possible Method Call Expectations:
    Expect.Once, Expect.Never, Expect.AtLeastOnce
It should not matter to a functional test how many times a collaborating object is invoked or through which methods; the important thing is that the mocked collaborating object should respond according to the contract of its own interface, and that the test validates that the object under test honours its own interface definition. (The RhinoMocks record/replay approach is equally guilty in this respect).
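The alternative to call-count expectations is a hand-rolled fake that simply honours the collaborator's contract. A Python sketch (the PriceSource/total_cost names are hypothetical, chosen for illustration): the fake answers any number of queries, in any order, and the test asserts only the outcome.

```python
from typing import Iterable, Protocol

class PriceSource(Protocol):
    """Contract: price_of returns the price of a known item; an
    unknown item is an error (here, KeyError)."""
    def price_of(self, item: str) -> int: ...

class FakePriceSource:
    """Implements the collaborator's interface in general form: it
    responds per its contract however often it is invoked."""
    def __init__(self, prices: dict) -> None:
        self._prices = prices
    def price_of(self, item: str) -> int:
        return self._prices[item]  # KeyError for unknown items, per contract

def total_cost(items: Iterable[str], source: PriceSource) -> int:
    # Implementation detail: whether this calls price_of once per item,
    # caches results, or batches lookups should not matter to the test.
    return sum(source.price_of(i) for i in items)

# The test validates the result, not how many times price_of was called.
fake = FakePriceSource({"tea": 2, "scone": 3})
assert total_cost(["tea", "scone", "tea"], fake) == 7
```

If `total_cost` were later changed to memoise repeated lookups, an `Expect.Once`-style expectation would break, but this contract-based test would still pass.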
I could do some processing on a block of 20 bytes from a stream of data by reading 2 bytes ten times, by reading 20 bytes in one chunk, or by reading up to 1024 bytes and pushing back any surplus onto the stream (assuming the stream contract supports an un-read/seek type behaviour). The unit test should only provide the stream to be read, charged with an appropriate number of bytes (depending on whether the test is testing a positive case or a case of insufficient or superfluous data), and not constrain the process any further. Indeed, the number 20 may even be an implementation detail; the real issue is that a whole semantic unit on the wire be read, and no more. (See the unit testing for my refurbishment of the .Net/Erlang bridge for a starter.)
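The stream example can be sketched in Python using an in-memory stream (the `read_record` function and the 2-byte-chunk strategy are illustrative assumptions, not the bridge's actual code). The test charges the stream with bytes and checks only the contract: exactly one semantic unit is consumed, surplus data remains on the stream, and insufficient data fails.

```python
import io

RECORD_SIZE = 20  # an implementation detail, not part of the contract

def read_record(stream) -> bytes:
    """Read one whole semantic unit from the stream, and no more."""
    # One of several valid strategies: read in 2-byte chunks. Reading
    # all 20 at once, or over-reading and seeking back, would satisfy
    # the same contract and the same tests.
    chunks = []
    remaining = RECORD_SIZE
    while remaining:
        chunk = stream.read(min(2, remaining))
        if not chunk:
            raise EOFError("insufficient data for a whole record")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

# Positive case: stream charged with a record plus surplus.
data = bytes(range(25))
stream = io.BytesIO(data)
assert read_record(stream) == data[:20]
assert stream.read() == data[20:]  # surplus left untouched on the stream

# Negative case: insufficient data.
try:
    read_record(io.BytesIO(b"short"))
    raise AssertionError("should have failed on short input")
except EOFError:
    pass
```

Nothing in these assertions would change if `read_record` switched to a single 20-byte read, which is the point: the test constrains the contract, not the chunking.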
Where developing -- designing -- for testability does force changes is in the need to write loosely coupled code. Where you don't need to refer to concrete types, only their interfaces, that is what you should do, so that it is easy to replace inputs with appropriate mocks. To this end, taking collaborators through the constructor -- rather than making factory calls internally -- should be considered (people have also suggested that public constructors are evil for reasons other than just testing, too).
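A minimal Python sketch of that loose coupling (the Transport/Notifier names are hypothetical): the collaborator arrives through the constructor as an interface, so a test can hand in a fake that honours the same contract, with no factory call buried inside the class.

```python
from typing import Protocol

class Transport(Protocol):
    """Contract: send delivers a message; it does not return a value."""
    def send(self, message: str) -> None: ...

class Notifier:
    def __init__(self, transport: Transport) -> None:
        # Injected via the constructor, rather than constructed
        # internally (e.g. via a factory call), so tests can substitute
        # any conforming Transport.
        self._transport = transport
    def notify(self, user: str, text: str) -> None:
        self._transport.send(f"{user}: {text}")

class RecordingTransport:
    """A fake honouring the Transport contract, for use in tests."""
    def __init__(self) -> None:
        self.sent: list = []
    def send(self, message: str) -> None:
        self.sent.append(message)

fake = RecordingTransport()
Notifier(fake).notify("alice", "build passed")
assert fake.sent == ["alice: build passed"]
```

The test asserts the observable effect of `notify` -- what was sent -- rather than how or how often `Notifier` touched its collaborator internally.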