The most basic way of doing this is to create a set of labeled training data and use it to train a classifier. How the classifier works internally is a more complicated issue; for spam filtering and many other tasks, just looking at word frequencies works pretty well.
Here is a basic example: http://openclassroom.stanford.edu/MainFolder/DocumentPage.php?course=MachineLearning&doc=exercises/ex6/ex6.html
It is trivial to write a Naive Bayes classifier yourself; a package like MALLET will have one built in, plus better machine learning methods. LingPipe also has this sort of thing.
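To show how little code the core algorithm needs, here is a minimal multinomial Naive Bayes sketch in plain Java. The class and method names are made up for this example (they are not from MALLET or LingPipe), and counting word frequencies per label, as described above, is essentially all it does:

```java
import java.util.*;

public class NaiveBayesSketch {
    private final Map<String, Map<String, Integer>> wordCounts = new HashMap<>(); // label -> word -> count
    private final Map<String, Integer> docCounts = new HashMap<>();               // label -> number of docs
    private final Map<String, Integer> totalWords = new HashMap<>();              // label -> total tokens
    private final Set<String> vocabulary = new HashSet<>();
    private int totalDocs = 0;

    /** Count word frequencies per label from one labeled document. */
    public void train(String label, String text) {
        docCounts.merge(label, 1, Integer::sum);
        totalDocs++;
        wordCounts.computeIfAbsent(label, k -> new HashMap<>());
        for (String word : text.toLowerCase().split("\\s+")) {
            wordCounts.get(label).merge(word, 1, Integer::sum);
            totalWords.merge(label, 1, Integer::sum);
            vocabulary.add(word);
        }
    }

    /** Return the label with the highest log-probability for this text. */
    public String classify(String text) {
        String best = null;
        double bestScore = Double.NEGATIVE_INFINITY;
        for (String label : docCounts.keySet()) {
            // Log prior: the fraction of training docs with this label.
            double score = Math.log(docCounts.get(label) / (double) totalDocs);
            for (String word : text.toLowerCase().split("\\s+")) {
                int count = wordCounts.get(label).getOrDefault(word, 0);
                // Add-one (Laplace) smoothing so unseen words don't zero out the score.
                score += Math.log((count + 1.0) / (totalWords.get(label) + vocabulary.size()));
            }
            if (score > bestScore) { bestScore = score; best = label; }
        }
        return best;
    }

    public static void main(String[] args) {
        NaiveBayesSketch nb = new NaiveBayesSketch();
        nb.train("spam", "free money win a prize now");
        nb.train("spam", "win free cash claim your prize");
        nb.train("ham", "meeting notes attached see you tomorrow");
        nb.train("ham", "lunch tomorrow at noon");
        System.out.println(nb.classify("claim your free prize"));   // spam
        System.out.println(nb.classify("see you at the meeting")); // ham
    }
}
```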
What you should really care about is the quality of your data and what your features are. By quality of data I mean lots of examples without too many borderline cases, and by features I mean whether you choose just words, combinations of words (word n-grams), dependency features, or something more complex. You need a way to generate the feature data as well as to actually do the learning! In this respect LingPipe is good: you can do tokenization and all that first, as opposed to writing your own functions or having to cobble other tools together into your own feature generation code.
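To make the feature question concrete, here is a small hypothetical sketch of turning a document into unigram and word-bigram features. The tokenization here is deliberately crude; this is the part a library like LingPipe would do properly for you:

```java
import java.util.*;

public class FeatureSketch {
    public static List<String> extractFeatures(String text) {
        // Crude tokenization: lowercase, strip non-letters, split on whitespace.
        // A real tokenizer handles punctuation, contractions, etc. much better.
        String[] tokens = text.toLowerCase().replaceAll("[^a-z\\s]", "").split("\\s+");
        List<String> features = new ArrayList<>();
        for (String token : tokens) {
            features.add("word=" + token);                             // unigram feature
        }
        for (int i = 0; i + 1 < tokens.length; i++) {
            features.add("bigram=" + tokens[i] + "_" + tokens[i + 1]); // word n-gram feature
        }
        return features;
    }

    public static void main(String[] args) {
        System.out.println(extractFeatures("Win FREE money now!"));
        // [word=win, word=free, word=money, word=now,
        //  bigram=win_free, bigram=free_money, bigram=money_now]
    }
}
```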
A guide to MALLET can be found here: http://courses.washington.edu/ling570/fei_fall10/11_15_Mallet.pdf
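To give a feel for what training looks like in MALLET, here is a rough sketch written from memory of its classify API (cc.mallet.classify and cc.mallet.pipe); treat it as an outline and defer to the guide above for the authoritative version:

```java
import cc.mallet.classify.Classifier;
import cc.mallet.classify.NaiveBayesTrainer;
import cc.mallet.pipe.*;
import cc.mallet.types.Instance;
import cc.mallet.types.InstanceList;

public class MalletSketch {
    public static void main(String[] args) {
        // Pipe chain: raw string -> tokens -> lowercased -> feature vector,
        // with the target string converted to a label alongside.
        Pipe pipe = new SerialPipes(new Pipe[] {
            new Target2Label(),
            new CharSequence2TokenSequence(),
            new TokenSequenceLowercase(),
            new TokenSequence2FeatureSequence(),
            new FeatureSequence2FeatureVector()
        });

        InstanceList training = new InstanceList(pipe);
        training.addThruPipe(new Instance("free money win a prize", "spam", "d1", null));
        training.addThruPipe(new Instance("meeting notes for tomorrow", "ham", "d2", null));

        Classifier classifier = new NaiveBayesTrainer().train(training);

        // Run a new document through the same pipe before classifying;
        // the "ham" target here is a dummy and is ignored by classify().
        Instance test = pipe.instanceFrom(
            new Instance("claim your free prize", "ham", "t1", null));
        System.out.println(classifier.classify(test).getLabeling().getBestLabel());
    }
}
```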