We have a number of metrics that are collected as programmers do work, such as:
- Number of SLOC changed / added
- Number of errors / bugs injected at various stages of the process (during peer review, post peer review, post release)
- Change requests fulfilled / rejected
- Formal documents (software version descriptions, design docs, etc.)
All of these are tangibles that I find useful in presentations for management and software quality assurance. But I have never found them terribly useful in actual evaluations of people's performance - which is the point made in several of the links you listed. I've found that Joel's points here are valid - metrics never promote a good team atmosphere.
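For what it's worth, the tangible metrics above are simple enough to record in a flat per-programmer, per-period structure. A minimal sketch, assuming field names of my own choosing (they are illustrative, not a prescribed schema):

```python
from dataclasses import dataclass

# Hypothetical per-programmer, per-period record of the tangible
# metrics listed above; the field names are assumptions for
# illustration, not an established schema.
@dataclass
class PeriodMetrics:
    sloc_changed: int = 0             # SLOC changed / added
    bugs_peer_review: int = 0         # injected, caught during peer review
    bugs_post_peer_review: int = 0    # injected, caught after peer review
    bugs_post_release: int = 0        # injected, caught after release
    change_requests_fulfilled: int = 0
    change_requests_rejected: int = 0
    formal_documents: int = 0         # version descriptions, design docs, etc.

m = PeriodMetrics(sloc_changed=1200, bugs_peer_review=3, formal_documents=1)
print(m.sloc_changed)  # 1200
```

A structure like this is enough to feed the management and QA reports; the trouble, as noted above, starts when the same numbers are turned into performance evaluations.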
Unfortunately, we all live in a world where metrics are required by others (management, quality assurance, outside contractors, etc.). I've found that a balancing act is required - providing those metrics while also providing evidence of the intangibles, meaning the things each programmer has accomplished that are not necessarily tracked. For example, I had a programmer who spent a large amount of time investigating legacy code that no one else wanted to touch. Even though his metrics were low for that period, that effort was invaluable.
The only way I've found to include such things has been to push for the creation of an additional intangible category and give it equal weight with the other metrics. Usually this is sufficient to swing the balance for a particular programmer.
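The equal-weight scheme can be sketched as follows. This is a hypothetical illustration, assuming each category has already been normalized to a 0-100 score; the category names are mine, not part of any real evaluation system:

```python
# Hypothetical sketch: fold an "intangibles" category into an
# evaluation score with the same weight as each tangible metric.
# The 0-100 scale and category names are assumptions.
def evaluation_score(tangibles, intangible):
    """tangibles: dict mapping category -> normalized score (0-100).
    intangible: a 0-100 score for untracked work, e.g. digging
    through legacy code that no one else wanted to touch."""
    scores = list(tangibles.values()) + [intangible]
    return sum(scores) / len(scores)  # every category weighted equally

tangibles = {
    "sloc_changed": 40,
    "bugs_injected": 55,      # inverted scale: fewer bugs -> higher score
    "change_requests": 45,
}
# A strong intangibles score lifts an otherwise middling total.
print(evaluation_score(tangibles, intangible=95))  # 58.75
```

The point of the equal weighting is exactly what it looks like in the arithmetic: one strong intangibles score can offset several mediocre tangible ones, which is usually enough to swing the balance for a programmer whose real contribution the metrics missed.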