I want to prevent automated HTML scraping of one of our sites without affecting legitimate spidering (Googlebot, etc.). Is there something that already exists to accomplish this?
If you want to protect yourself from generic crawlers, use a honeypot.
See, for example, http://www.sqlite.org/cvstrac/honeypot. A good spider will not open that page, because the site's robots.txt disallows it explicitly. A human may open it, but is not supposed to click the "i am a spider" link. A bad spider will follow both links and thereby betray its true identity.
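A minimal sketch of the idea, assuming a hypothetical `/trap/` path that you disallow in robots.txt (the path name and the `handle_request` helper are illustrative, not part of any framework):

```python
# Sketch: ban any client that fetches the honeypot URL.
# The /trap/ prefix is an assumed example path; it must be Disallowed
# in robots.txt so well-behaved spiders (Googlebot etc.) never request it.

ROBOTS_TXT = """\
User-agent: *
Disallow: /trap/
"""

banned_ips = set()

def handle_request(ip, path):
    """Return a status line; flag clients that hit the trap."""
    if ip in banned_ips:
        return "403 Forbidden"
    if path.startswith("/trap/"):
        # A polite crawler read robots.txt and skipped this path;
        # whoever got here is ignoring it, so ban the address.
        banned_ips.add(ip)
        return "403 Forbidden"
    return "200 OK"
```

On the real site the trap link would be invisible to humans (e.g. hidden via CSS) so only automated link-followers find it.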
If the crawler is created specifically for your site, you can (in theory) create a moving honeypot.
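One way to sketch a moving honeypot: generate a fresh trap path for each rendered page, so a scraper written for your site cannot simply hard-code the URL to avoid. The function names and the `/trap-` prefix here are assumptions for illustration:

```python
import secrets

# Sketch of a "moving" honeypot: the trap URL changes per rendered
# page, so a site-specific scraper cannot hard-code a path to skip.

issued_traps = set()

def embed_trap_link():
    """Generate a fresh trap path to hide in a page (e.g. as an
    invisible link) and remember it for later recognition."""
    path = "/trap-" + secrets.token_hex(8) + "/"
    issued_traps.add(path)
    return path

def is_trap(path):
    """True if this request hit one of the previously issued traps."""
    return path in issued_traps
```

Requests matching `is_trap` would then be banned the same way as with a fixed honeypot; you would also need to exclude the random paths from crawling, e.g. by disallowing the shared prefix in robots.txt.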