Here's how I approached this problem while maintaining a 10-year-old legacy Perl system. To give a bit of background, this was some of the worst of the worst. I was initially depressed when I started at the company because the code was so poorly thought out and laid out (and it was my first programming job). By the end of that gig I had a pretty good system going, though.
First of all, every new feature was added as a new module or method; nothing was hacked in on top of the existing code. That let me write unit tests for the new stuff and get some confidence in it, so integrating it with the old stuff was just a matter of a line or two.
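To make that concrete, here's roughly the shape it took: a small self-contained module plus a test file, then a line or two of integration in the legacy script. The module name, the data layout, and the call site are all made up for illustration; this is a minimal sketch, not the actual system.

    # lib/ReportTotals.pm -- hypothetical new-feature module, kept out of the legacy code
    package ReportTotals;
    use strict;
    use warnings;
    use Exporter 'import';
    our @EXPORT_OK = ('total_for_customer');

    # A pure function: easy to unit test without touching the legacy script at all.
    sub total_for_customer {
        my ($orders, $customer_id) = @_;
        my $total = 0;
        for my $order (@$orders) {
            $total += $order->{amount} if $order->{customer_id} == $customer_id;
        }
        return $total;
    }

    1;

    # t/report_totals.t -- the tests, written before the feature ever meets the old code
    use strict;
    use warnings;
    use lib 'lib';
    use Test::More tests => 2;
    use ReportTotals 'total_for_customer';

    my $orders = [
        { customer_id => 1, amount => 10 },
        { customer_id => 2, amount => 5  },
        { customer_id => 1, amount => 7  },
    ];

    is( total_for_customer($orders, 1), 17, 'sums only the matching customer' );
    is( total_for_customer($orders, 3), 0,  'unknown customer totals zero' );

Once that passed, the only change to the legacy code was the integration: a use ReportTotals 'total_for_customer'; plus a call to it where the feature was needed.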
Secondly, any bugfix was implemented the same way. I'd spend some time figuring out what the old stuff was actually doing (not hunting for the bug itself), write that behaviour out as a new method (or module), wrap it in tests, and then I could typically swap it in for the old code.
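In practice the "figure out what's happening" step usually turned into a couple of characterization tests: pin down what the old code actually does (bug and all), then assert the replacement matches it everywhere except the case being fixed. Again, the module and sub names below are invented for the sketch:

    # t/discount.t -- hypothetical characterization tests around a buggy legacy sub
    use strict;
    use warnings;
    use lib 'lib';
    use Test::More tests => 3;
    use LegacyBilling;        # the old code, left untouched for now
    use Billing::Discount;    # the new, testable replacement

    # 1. Record the current behaviour, so I know what the rest of the system relies on.
    is( LegacyBilling::discount(100, 'GOLD'), 15, 'legacy: gold customers get 15% off 100' );

    # 2. The replacement must match the legacy behaviour...
    is( Billing::Discount::discount(100, 'GOLD'), 15, 'new code matches legacy behaviour' );

    # 3. ...except for the one case the bugfix is about.
    is( Billing::Discount::discount(100, 'NONE'), 0, 'bug fixed: no phantom discount' );

With those green, the actual replacement was a one-line change at the call site, swapping the LegacyBilling::discount call for Billing::Discount::discount.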
This only works, however, if you've got the time and the buy-in to get it done. If you don't, keep track of the time you spend on any given bug, especially when it's multiple bugs in the same file, or process, or whatever. At some point it becomes easy to point at a given chunk of code and say: this is so bad, it's cost us N hours, it's time to rewrite it.
I was successful enough with this approach that I earned the honorary title of "Forensic Programmer". It's also a fantastic skill to have, because more often than not the new job already has some code written :P