ASSERTs ASSERTs ASSERTs.
I have 300,000 loc (not counting comments) of highly factored and reused code in my personal libraries, of which about 15% (a guess) is templates and 50,000 loc is test code.
If an idiom is duplicated, it is made into a function/method. Personally I view the ease of cut-and-paste as the invention of the DEVIL, purposely put there to bloat code and propagate defects.
About 4% of the library is ASSERTs and debug code (very few printfs; almost all of the output is queued to a custom low-priority task that writes to a cout stream, because screen I/O is so expensive that it changes timing). Maybe 50% of the asserts are there to guarantee class invariants and postconditions on method execution.
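To make that concrete, the invariant/postcondition style looks roughly like this (a simplified sketch; Account is a made-up example class, and ASSERT here just stands in for my real macro, described below):

    #include <cassert>

    #define ASSERT(cond) assert(cond)   // stand-in for the real macro

    class Account {
        long balance_;   // cents; invariant: never negative
    public:
        explicit Account(long opening) : balance_(opening) { ASSERT(invariant()); }

        bool invariant() const { return balance_ >= 0; }

        void deposit(long cents) {
            ASSERT(invariant());                 // object is sane on entry
            ASSERT(cents > 0);                   // precondition on the argument
            long before = balance_;
            balance_ += cents;
            ASSERT(balance_ == before + cents);  // postcondition of the method
            ASSERT(invariant());                 // class invariant still holds
        }
    };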
I refactor mercilessly when I revisit a piece of code that I may have rushed, or where I just made a mistake in the interface/object-coupling design: say the subject object of a method really belongs as a parameter object, and the method belongs in one of the original parameter objects. Being liberal with asserts seems to protect me from some dumb mistakes when I make a substantial refactoring. This doesn't happen much, but there are times.
I have a DEBUGGING macro that acts much like ASSERT, so I can have code surrounded by
DEBUGGING(... code....);
and it is not compiled into non-debug builds.
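A minimal sketch of such a macro (my real one differs in detail; the variadic form is there so commas inside the wrapped code don't break it):

    #ifdef NDEBUG
    #define DEBUGGING(...) ((void)0)                      // vanishes in release builds
    #else
    #define DEBUGGING(...) do { __VA_ARGS__; } while (0)
    #endif

    // e.g.  DEBUGGING(std::cout << "queue depth = " << q.size() << '\n');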
I do not use the vendor-supplied assert. My asserts DO NOT abort and core dump; they merely throw up a message box and call the debugger. If it is new code and the method is a const method, being able to return to the method and then re-execute it with the same set of parameters is pretty useful. Sometimes even the fact that some data has changed is irrelevant to the problem, and one can re-invoke with knowledge gained.
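The shape of such an assert, assuming Windows for the message box and debugger hook (MY_ASSERT is a hypothetical name; my actual implementation differs):

    #include <windows.h>
    #include <cstdio>

    // No abort, no core dump: pop a message box; Cancel drops into the
    // debugger via DebugBreak(). On other platforms raise(SIGTRAP) would
    // serve the same purpose.
    #define MY_ASSERT(cond)                                                  \
        do {                                                                 \
            if (!(cond)) {                                                   \
                char msg_[512];                                              \
                std::snprintf(msg_, sizeof msg_, "ASSERT failed: %s\n%s:%d", \
                              #cond, __FILE__, __LINE__);                    \
                if (MessageBoxA(NULL, msg_, "Assertion failed",              \
                                MB_OKCANCEL | MB_ICONERROR) == IDCANCEL)     \
                    DebugBreak();                                            \
            }                                                                \
        } while (0)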
I absolutely HATE command-line debuggers. It is like going back 25 years in time - might as well be using a teletype and a 2400 baud line. I need and want a full-blown IDE where one can right-click on a data structure and open it up, close it, chase pointers, execute methods, etc., etc., etc.
I step through every line of my new code and examine every one of my variables for expected behaviour. The use of an IDE that highlights changes is invaluable here. To do that with GDB one must be a concert pianist with the memory of Carnac the Magnificent ;-).
For new development I also try to capture stream/message data when an abnormal condition is encountered. This is especially useful for UDP servers and frequently gives a head start on reproducibility.
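The capture itself can be as simple as a ring buffer of recent datagrams, dumped to a file when something trips (a sketch with hypothetical names):

    #include <array>
    #include <cstdint>
    #include <fstream>
    #include <vector>

    class MessageCapture {
        static const std::size_t N = 256;             // keep the last N datagrams
        std::array<std::vector<std::uint8_t>, N> ring_;
        std::size_t next_ = 0;
    public:
        void record(const std::uint8_t* data, std::size_t len) {
            ring_[next_ % N].assign(data, data + len);
            ++next_;
        }
        void dump(const char* path) const {           // call from the assert handler
            std::ofstream out(path, std::ios::binary);
            std::size_t start = next_ > N ? next_ - N : 0;
            for (std::size_t i = start; i < next_; ++i) {
                const std::vector<std::uint8_t>& m = ring_[i % N];
                std::uint32_t len = static_cast<std::uint32_t>(m.size());
                out.write(reinterpret_cast<const char*>(&len), sizeof len);  // length prefix
                out.write(reinterpret_cast<const char*>(m.data()), len);
            }
        }
    };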
I also like to have simulators that can "surround" the application: drive it, consume from it, and do verification (conjoined/coupled source/sink simulators). Almost all of my code is headless, or at least the human interaction is irrelevant to functionality, so "surrounding" the app is frequently possible. I find it very important to have good support, and management that understands that creating the test data matters, and that the test data collected builds up into a suite that can evolve into a comprehensive regression/smoke test.
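In skeleton form, a conjoined source/sink driver amounts to feeding canned requests in one side and verifying the replies on the other (process() here is just a stand-in for the real headless component):

    #include <cassert>
    #include <string>
    #include <utility>
    #include <vector>

    // Stand-in for the real headless component under test.
    std::string process(const std::string& request) {
        if (request == "PING") return "PONG";
        if (request.rfind("ECHO ", 0) == 0) return request.substr(5);
        return "?";
    }

    int main() {
        // Source side: request/expected-reply pairs, in practice loaded
        // from captured test data files.
        std::vector<std::pair<std::string, std::string>> cases = {
            {"PING", "PONG"},
            {"ECHO hello", "hello"},
        };
        // Sink side: consume the output and verify it.
        for (const auto& c : cases)
            assert(process(c.first) == c.second);
    }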
I also used to like to set the OS scheduling quantum way down. With multithreaded apps such short quanta can more easily bring out threading bugs. I especially like to drive thread-safe object methods with many threads - tens if not hundreds. In general a thread-safe object cannot be tested in situ if the application is human-driven - just impossible to drive it hard enough. So there is a real need for custom test drivers at a much lower (component-oriented) level. And it is in these tests that the asserts can let you know if something broke. Obviously this doesn't prove the code is right, but it does give some confidence.
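A stripped-down component-level driver of that sort, hammering a trivially thread-safe counter from a hundred threads (std::thread used here just for the sketch):

    #include <cassert>
    #include <mutex>
    #include <thread>
    #include <vector>

    class Counter {                        // the thread-safe object under test
        std::mutex m_;
        long n_ = 0;
    public:
        void bump()  { std::lock_guard<std::mutex> g(m_); ++n_; }
        long value() { std::lock_guard<std::mutex> g(m_); return n_; }
    };

    int main() {
        Counter c;
        const int kThreads = 100, kIters = 10000;
        std::vector<std::thread> pool;
        for (int t = 0; t < kThreads; ++t)
            pool.emplace_back([&] { for (int i = 0; i < kIters; ++i) c.bump(); });
        for (auto& th : pool) th.join();
        assert(c.value() == static_cast<long>(kThreads) * kIters);  // no lost updates
    }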
It is also true that these preferences probably reflect the more class-library/reusability-oriented views and roles that I have had. When you write library code there are usually few "production" problems, as the library is by definition heavily used and heavily tested. Logging and that sort of history seem to be more app-oriented than library-oriented.