In chapter 5 of "The Practice of Programming", Brian Kernighan and Rob Pike write:
As a personal choice, we tend not to use debuggers beyond getting a stack trace or the value of a variable or two.
I prefer manually tracing the code and observing logs to using a debugger. A debugger turns me into a passive observer and makes it harder to see the whole picture.
Program execution has at least one timeline, and usually more than one in a multithreaded environment. Having an idea of what happened a few steps before the current line of execution, and what can happen after it, helps me track down the bug.
Having to go to the debugger is a likely sign of a deeper problem in my view.
Before firing up the debugger, I ask questions that target deeper problems like:
Which test failed? If the answer is, "there is no test," then not having a test for the failure condition is a deeper problem to be fixed first.
What information does the exception carry? (This assumes an environment with exceptions.) If the answer is, "there is no exception," or the exception doesn't have much context, then the swallowed or context-poor exceptions are the deeper problem to be fixed first: scan for places where an exception is swallowed, or add more context when re-raising (see the sketch after this list).
What warnings are produced by the build for that section of the system? If you're not building and analyzing your system with modern tools to find common mistakes, and correcting them before they show up at runtime, then you've got a deeper problem to be fixed first.
Do we understand the problem domain well enough to reason about what might be happening? If the answer is, "no, we're not clear on this," then discussions with subject-matter experts who can clarify the purpose of that piece of the system are in order. Without clearly understood requirements, more bugs are likely to follow.
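To make the second question concrete, here is a minimal sketch, in Python, of re-raising with added context instead of swallowing the error; the function names and config format are hypothetical, chosen only for illustration:

```python
def parse_config(text: str) -> dict:
    # Placeholder parser so the sketch is self-contained.
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)

def load_config(path: str) -> dict:
    try:
        with open(path) as f:
            return parse_config(f.read())
    except OSError as e:
        # Don't swallow the error: re-raise with the context a reader will need,
        # keeping the original exception chained via "from e".
        raise RuntimeError(f"could not load config from {path!r}") from e
```

When this fails, the traceback names the offending path and still carries the original OSError, which is often enough to skip the debugger entirely.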
Doing these sorts of things usually leads to at least one bug fix, if not several. And these approaches have the highly valuable side-effect of, well, forcing programmers to think about the problem, the whole problem, not just the line of code "where the error occurred."
Certainly there are cases where an error occurs on a single line like not checking for a null pointer/reference, etc. but aren't those "simple" errors the very types of errors that modern IDEs and tools help to eliminate? Just run a lint/static-analysis tool and heed the warnings - you won't get those types of errors anymore. Then you're left with things like design errors that require the reasoning of a human mind - how can a debugger figure that out?
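As an illustration of the kind of tooling meant here, a static type checker such as mypy will flag a possible None dereference before it ever shows up at runtime. This is only a sketch; the functions and data are hypothetical:

```python
from typing import Dict, Optional

def find_email(name: str, users: Dict[str, str]) -> Optional[str]:
    # Returns None when the user is missing: the classic source of "null" bugs.
    return users.get(name)

def greeting(name: str, users: Dict[str, str]) -> str:
    email = find_email(name, users)
    if email is None:
        # Without this guard, a checker like mypy warns that email may be None
        # before the code is ever run.
        return f"no such user: {name}"
    return "Mail sent to " + email.upper()
```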
I'd prefer to use TDD and add a test that reproduces the bug. Then it's easy to see where the bug occurs and fix it without a debugger, and now I've got a test that will keep that bug from coming back.
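For example, a report like "a discount over 100% produces a negative price" might be pinned down with a test such as the following sketch in pytest, where apply_discount is a hypothetical stand-in for the real (buggy) code:

```python
def apply_discount(price: float, percent: float) -> float:
    # Hypothetical stand-in for the production code under test.
    return price * (1 - percent / 100)

def test_discount_never_goes_below_zero():
    # Written first to reproduce the reported bug; it fails until the code is fixed.
    assert apply_discount(10.0, 150.0) >= 0.0
```

Once the fix is in, the test stays behind as a regression guard.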
Different tools for different jobs. Just like you wouldn't use Perl for everything, you're not going to use a debugger for every bug. Sometimes using a debugger fits a problem and sometimes it doesn't.
Take, for example, a bug that turned up in one of our products: after a print method, it pulled the last window to have had focus back into focus. It couldn't be repro'd with the debugger attached, but it could without it. The problem was eventually solved with good old-fashioned Console.Write() statements.
Two reasons for not using a debugger include:
I don't know if I'm "that kind of programmer" but I don't see what you'd want of a debugger beyond:
I've heard some people suggest that you step through code with a debugger when you write it, as a kind of code inspection, but I don't do that.
I use a debugger when:
All in all, it is a balance between speed and accuracy. From experience, though, if I end up spending a lot of time around a piece of code, there is a good chance I will have to come back to it. So I add logs and I add tests so that I do not have to come back to it, or, if I do, all the work I have done to understand the code remains and I can build on top of it.
One reason I do not like debuggers is that all the work I do figuring out how the code works is lost once the debugger is off. If I spend time learning about a piece of code, I want that knowledge to be available the next time I (or someone else) come to it. Adding trace code is a very good way to leave "dynamic comments" that are always there and can be summoned at any time.
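A minimal sketch of what that trace code might look like, assuming Python's standard logging module; the function and data are hypothetical:

```python
import logging

log = logging.getLogger(__name__)

def reconcile(orders, payments):
    # These trace lines act as "dynamic comments": silent by default, and
    # summoned by raising the log level whenever someone needs to understand
    # this path again.
    log.debug("reconcile: %d orders, %d payments", len(orders), len(payments))
    unmatched = [o for o in orders if o not in payments]
    log.debug("reconcile: %d unmatched orders", len(unmatched))
    return unmatched
```

Turning on logging.basicConfig(level=logging.DEBUG) is all it takes to replay the same investigation months later.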
More generally, I shy away from pretty much anything that is removed before shipping to the customer. If I put a safety net around my system, there is no reason my customers cannot benefit from it while using it, just as I did while programming it. This is especially true if I am the one who has to support it afterward... I hate supporting, so I want to make it as painless as humanly possible.
I absolutely use my debugger. When a test I wrote is failing, I often step through the lines, checking my expectations against the actual values the debugger shows.
That being said, after years of programming experience, you tend to need the debugger less and less, as you become more able to just "know" why the problem manifested.
Two things really make the debugger worthwhile: conditional breakpoints and the object inspector. They are by far the most useful debugger features for me.
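In Python, for instance, the same effect as a conditional breakpoint can be had by guarding a breakpoint() call, shown in the sketch below; the order ids and handler are hypothetical:

```python
def handle(order):
    # Hypothetical stand-in for the real work.
    print("handled", order["id"])

def process(orders):
    for order in orders:
        # Poor man's conditional breakpoint: drop into the debugger only for
        # the one record that exhibits the bug.
        if order["id"] == 4242:
            breakpoint()
        handle(order)

if __name__ == "__main__":
    process([{"id": 1}, {"id": 4242}, {"id": 7}])
```

Most debuggers (pdb's condition command, or the breakpoint dialog in an IDE) let you attach the same condition to the breakpoint itself, so the source does not have to change.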