I try to design applications to be robust in the face of accidents -- either slips (inadvertent operations, such as clicking in the wrong place) or mistakes (cognitive errors, such as clicking OK when you meant Cancel on a dialog). Some ways to do this are:
- infinite (or at least multi-step) undo/redo (see the undo-stack sketch after this list)
- integrate documentation with the interface via dynamic tooltips and other context-sensitive means of communication (one particularly relevant paper is 'Surprise, Explain, Reward' (direct link: SER), which uses typical psychological responses to unexpected behavior to inform users)
- incorporate the state of the system into that documentation: use the current user's data as examples, so the documentation stays concrete, built from data they can see right now (see the tooltip sketch after this list)
- Expect user error. If there's a chance that someone will try to write to a:\ when there isn't a disk in place, implement a time-out so the system can fail gracefully and prompt for another location, and keep the data in memory until it's secure on disk (see the save sketch after this list).
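Here's a minimal sketch of the multi-step undo/redo idea from the first bullet, using a command pattern with paired undo and redo stacks. The `Rename` command and the dictionary "record" are hypothetical stand-ins for whatever actions and data a real application has:

```python
class Command:
    def do(self): ...
    def undo(self): ...

class Rename(Command):
    """Hypothetical example action: rename a record, remembering the old name."""
    def __init__(self, record, new_name):
        self.record, self.new_name = record, new_name
        self.old_name = record["name"]

    def do(self):
        self.record["name"] = self.new_name

    def undo(self):
        self.record["name"] = self.old_name

class History:
    """Unbounded undo/redo: every executed command is kept."""
    def __init__(self):
        self._undo, self._redo = [], []

    def execute(self, cmd):
        cmd.do()
        self._undo.append(cmd)
        self._redo.clear()   # a new action invalidates the redo branch

    def undo(self):
        if self._undo:
            cmd = self._undo.pop()
            cmd.undo()
            self._redo.append(cmd)

    def redo(self):
        if self._redo:
            cmd = self._redo.pop()
            cmd.do()
            self._undo.append(cmd)

record = {"name": "draft.txt"}
history = History()
history.execute(Rename(record, "report.txt"))
history.undo()    # back to "draft.txt"
history.redo()    # forward to "report.txt" again
```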
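And a sketch of documentation that incorporates the current state of the system: the `sort_tooltip` helper is hypothetical, but it shows a tooltip built from the rows the user has selected right now rather than from canned text.

```python
def sort_tooltip(selected_rows):
    """Build tooltip text from the user's actual selection."""
    if not selected_rows:
        return "Sort: select one or more rows first."
    lo, hi = min(selected_rows), max(selected_rows)
    return (f"Sort will reorder your {len(selected_rows)} selected rows; "
            f"for example, '{lo}' will end up before '{hi}'.")

print(sort_tooltip(["Zhang, W.", "Adams, K."]))
# Sort will reorder your 2 selected rows; for example,
# 'Adams, K.' will end up before 'Zhang, W.'.
```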
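Finally, a sketch of the "expect user error" bullet for saving: the data stays in memory, the write goes to a temporary file first, and a failure (missing disk, full disk, bad permissions) just leads to a prompt for another location. The `input()` call stands in for whatever "choose another location" dialog the real UI would show, and the drive time-out itself is left out.

```python
import os
import tempfile

def save_with_fallback(data: bytes, path: str) -> str:
    """Return the path the data actually ended up at."""
    while True:
        try:
            # Write to a temp file in the target directory, then rename,
            # so a failure never leaves a half-written target file.
            directory = os.path.dirname(path) or "."
            fd, tmp = tempfile.mkstemp(dir=directory)
            with os.fdopen(fd, "wb") as f:
                f.write(data)
            os.replace(tmp, path)
            return path
        except OSError as err:
            # Drive missing, disk full, permissions... fail gracefully:
            # the data is still safe in memory, so just ask again.
            print(f"Couldn't save to {path}: {err}")
            path = input("Save to another location instead: ")
```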
This boils down to two core things: (1) program defensively, and (2) keep the user as well informed as you can. If the system's interface is easy to use and behaves according to their expectations, then they are more likely to know which button to click when an annoying dialog appears.
I also try very, very hard to avoid anything modal, so users can ignore most dialogs I have to use, at least for a while (and when they really do need to pay attention to them, they have enough information to know what to do with them).
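For example, in tkinter a plain `Toplevel` window is already non-modal; the sketch below shows a notice the user can simply ignore while they keep working (a modal version would add `grab_set()` / `wait_window()`). The window titles and message text are just illustrative.

```python
import tkinter as tk

root = tk.Tk()
root.title("Main window - keeps working")
tk.Entry(root, width=40).pack(padx=20, pady=20)

def show_notice(text):
    # Deliberately no grab_set()/wait_window(): the dialog never blocks
    # the main window, so the user can deal with it whenever they like.
    notice = tk.Toplevel(root)
    notice.title("Heads up")
    tk.Label(notice, text=text, padx=20, pady=10).pack()
    tk.Button(notice, text="Dismiss", command=notice.destroy).pack(pady=10)

show_notice("Your file hasn't been saved in 20 minutes.\n"
            "You can keep typing; save whenever you're ready.")
root.mainloop()
```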
It's impossible to make a system completely foolproof, but I've found that the above techniques go a long way in the right direction. (They have also been incorporated in the systems used to develop Surprise, Explain, Reward and other tools that have been vetted by extensive user studies.)