What was advised to me is this:
- Start by testing as much as you think is right; after a while, evaluate yourself:
- If testing took more time than you felt was reasonable, and the return on investment was too low, test less.
- If your product was not tested enough and you lost time because of it, test more.
- Loop as needed.
Another algorithm: :-)
- Some testing is really easy, and really useful. Always do this, with high priority.
- Some testing is really hard to set up and rarely proves useful (for example, it may be duplicated by manual testing that already happens in your process). Stop doing this; it's wasting your time.
- In between, try to find a balance, one that may vary over time depending on the phase of your project...
UPDATE in response to the comment about proving the usefulness of some tests (the ones you firmly believe in):
I often tell my younger colleagues that we technical people (developers and the like) communicate poorly with our management. As you say, for management, costs that are not listed do not exist, so avoiding them cannot serve to justify another cost. I used to be frustrated about that too. But thinking about it, that is the very essence of their job: if they accepted unnecessary costs without justification, they would be poor managers!
That is not to say they are right to deny us these activities that we know are useful. But we first have to make the costs visible. Better still, if we report the costs in an appropriate way, management will have to make the decision we want (or they would be bad managers; note that the decision may still be subject to prioritization...). So I suggest tracking the costs so that they are no longer hidden:
- Wherever you track the time you spend, note separately the costs that come from the code being untested (if the tool has no field for that, add it as a comment).
- Aggregate those costs into a dedicated report if the tool doesn't do it for you, so that each week your manager reads that X% of your time was spent on that (see the sketch after this list).
- Each time you estimate workloads, estimate several options separately, with and without automated testing, showing that the time spent on manual testing and on automated testing is about the same (if you limit yourself to the most useful tests, as explained earlier), while the latter is an asset against regressions.
- Link bugs to the original code. If that link is not part of your process, find a way to connect them: you need to show that the bug comes from having no automated tests.
- Accumulate a report of those links as well.
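If your time-tracking tool cannot produce that weekly percentage by itself, a few lines of script can. Here is a minimal sketch, assuming a CSV export with hypothetical columns `date`, `hours` and `tag`, where entries caused by missing tests are tagged `untested-cost` (all of these names are assumptions; adapt them to whatever your tool actually exports):

```python
import csv
from collections import defaultdict
from datetime import date

def weekly_untested_share(path):
    """Print, per ISO week, the share of logged time caused by untested code."""
    total = defaultdict(float)     # all hours logged, per week
    untested = defaultdict(float)  # hours caused by missing tests, per week
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            day = date.fromisoformat(row["date"])   # expects YYYY-MM-DD
            year, week, _ = day.isocalendar()
            key = f"{year}-W{week:02d}"
            hours = float(row["hours"])
            total[key] += hours
            if row["tag"] == "untested-cost":
                untested[key] += hours
    for key in sorted(total):
        share = 100.0 * untested[key] / total[key]
        print(f"{key}: {share:.1f}% of the time went into problems caused by untested code")

weekly_untested_share("timesheet.csv")
```

The output is exactly the sentence you want your manager to read every week, and the same numbers can be pasted into the report or spreadsheet mentioned below.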
To really make an impact on the manager, you could send them an up-to-date spreadsheet every week (with the whole history, not only the current week). A spreadsheet gives you graphs that provide immediate understanding, and lets the unbelieving manager drill down into the raw numbers...
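If you'd rather generate the graph straight from the same export instead of (or alongside) the spreadsheet, a short plotting script works too. This is just a sketch assuming matplotlib is installed; `weeks` and `shares` are the labels and percentages produced by the aggregation sketch above, and the numbers shown here are made up:

```python
import matplotlib.pyplot as plt

def plot_history(weeks, shares):
    """Line chart of the weekly percentage of time lost to untested code."""
    plt.plot(weeks, shares, marker="o")
    plt.ylabel("% of time lost to untested code")
    plt.xticks(rotation=45)
    plt.title("Cost of missing automated tests, per week")
    plt.tight_layout()
    plt.savefig("untested_cost_history.png")

# Example call with placeholder data; feed it the real history instead.
plot_history(["2024-W01", "2024-W02", "2024-W03"], [12.5, 18.0, 9.4])
```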