There are two similar namespaces and assemblies for speech recognition in .NET. I’m trying to understand the differences and when it is appropriate to use one or the other.
The short answer is that Microsoft.Speech.Recognition uses the Server version of SAPI, while System.Speech.Recognition uses the Desktop version of SAPI.
The APIs are mostly the same, but the underlying engines are different. Typically, the Server engine is designed to accept telephone-quality audio for command & control applications; the Desktop engine is designed to accept higher-quality audio for both command & control and dictation applications.
You can use System.Speech.Recognition on a server OS, but it's not designed to scale nearly as well as Microsoft.Speech.Recognition.
In practice, the Server engine doesn't need training and tolerates lower-quality audio, but its recognition quality is lower than the Desktop engine's.
I found Eric’s answer really helpful; I just wanted to add some more details that I found.
System.Speech.Recognition can be used to program the desktop recognizers. SAPI and the Desktop recognizers have shipped with the Windows client releases themselves (the SAPI and recognizer versions vary by release).
Server editions of Windows come with SAPI, but no recognizer.
Desktop recognizers have also shipped in products like Office.
Microsoft.Speech.Recognition can be used to program the server recognizers, which have shipped in Microsoft's server products such as Speech Server and the Microsoft Server Speech Platform.
The complete SDK for version 10.2 of the Microsoft Server Speech Platform is available at http://www.microsoft.com/downloads/en/details.aspx?FamilyID=1b1604d3-4f66-4241-9a21-90a294a5c9a4, and the speech engine is a free download. For Microsoft Speech Platform SDK 11 info and downloads, see http://www.microsoft.com/download/en/details.aspx?id=27226.
Desktop recognizers are designed to run inproc or shared. Shared recognizers are useful on the desktop, where voice commands are used to control any open application. Server recognizers can only run inproc. Inproc recognizers are used when a single application uses the recognizer, or when wav files or audio streams need to be recognized (shared recognizers can’t process audio files, just audio from input devices).
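To make the inproc/shared distinction concrete, here's a minimal System.Speech sketch (the wav path is just a placeholder; Microsoft.Speech has no shared SpeechRecognizer class, only SpeechRecognitionEngine):

```csharp
using System;
using System.Speech.Recognition;

class RecognizerModes
{
    static void Main()
    {
        // Shared recognizer: attaches to the common Windows speech service and
        // its microphone input; suitable for desktop-wide voice commands.
        using (var shared = new SpeechRecognizer())
        {
            shared.LoadGrammar(new DictationGrammar());
        }

        // Inproc recognizer: the application owns the engine and supplies the
        // audio itself, e.g. a wav file (not possible with a shared recognizer).
        using (var inproc = new SpeechRecognitionEngine())
        {
            inproc.LoadGrammar(new DictationGrammar());
            inproc.SetInputToWaveFile(@"C:\temp\sample.wav"); // placeholder path
            RecognitionResult result = inproc.Recognize();
            Console.WriteLine(result == null ? "(nothing recognized)" : result.Text);
        }
    }
}
```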
Only the Desktop speech recognizers include a dictation grammar (a system-provided grammar used for free-text dictation). The class System.Speech.Recognition.DictationGrammar has no counterpart in the Microsoft.Speech namespace.
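Command & control grammars, on the other hand, can be built identically against either namespace. A minimal sketch (the phrases are made up for illustration):

```csharp
using System;
using System.Speech.Recognition; // swap for Microsoft.Speech.Recognition to target the server engine

class CommandGrammarDemo
{
    static void Main()
    {
        // GrammarBuilder and Choices have the same shape in both namespaces.
        var colors = new Choices("red", "green", "blue"); // made-up phrases
        var builder = new GrammarBuilder("pick");
        builder.Append(colors);

        // With Microsoft.Speech you would typically pass an entry from
        // SpeechRecognitionEngine.InstalledRecognizers() to the constructor,
        // since the server engines have no default recognizer.
        using (var engine = new SpeechRecognitionEngine())
        {
            engine.LoadGrammar(new Grammar(builder));
            engine.SetInputToDefaultAudioDevice();
            RecognitionResult result = engine.Recognize();
            if (result != null) Console.WriteLine(result.Text);
        }
    }
}
```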
You can use the APIs to enumerate the recognizers installed on a machine.
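For example, both namespaces expose the same static query (a minimal sketch):

```csharp
using System;
using System.Speech.Recognition; // or Microsoft.Speech.Recognition

class ListRecognizers
{
    static void Main()
    {
        // InstalledRecognizers() only reports engines belonging to the
        // namespace you compiled against (desktop vs. server family).
        foreach (RecognizerInfo info in SpeechRecognitionEngine.InstalledRecognizers())
        {
            Console.WriteLine("{0} - {1}", info.Culture, info.Description);
        }
    }
}
```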
I found that I can also see which recognizers are installed by looking at the registry: the Desktop recognizers are listed under HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Speech\Recognizers\Tokens, and the Server recognizers under the corresponding Speech Server key.
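A small sketch of enumerating those tokens programmatically (assuming the standard SAPI desktop key path):

```csharp
using System;
using Microsoft.Win32;

class ListRecognizerTokens
{
    static void Main()
    {
        // Standard SAPI location for desktop recognizer tokens.
        const string path = @"SOFTWARE\Microsoft\Speech\Recognizers\Tokens";
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(path))
        {
            if (key == null) return; // no desktop recognizers installed
            foreach (string token in key.GetSubKeyNames())
                Console.WriteLine(token);
        }
    }
}
```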
--- Update ---
As discussed in "Microsoft Speech Recognition - what reference do I have to add?", Microsoft.Speech is also the API used for the Kinect recognizer; this is documented in the MSDN article http://msdn.microsoft.com/en-us/library/hh855387.aspx.
Here is the link for the Speech Library (MS Server Speech Platform):
Microsoft Server Speech Platform 10.1 Released (SR and TTS in 26 languages)
It seems Microsoft wrote an article that clears things up regarding the differences between the Microsoft Speech Platform and Windows SAPI: https://msdn.microsoft.com/en-us/library/jj127858.aspx. A difference I found myself while converting speech recognition code for Kinect from Microsoft.Speech to System.Speech (see http://github.com/birbilis/Hotspotizer) was that the former supports SRGS grammars with tag-format=semantics/1.0-literals, while the latter doesn't; you have to convert them to semantics/1.0 by changing x to out="x"; inside the tags.
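For illustration, a minimal sketch of consuming such a converted grammar with System.Speech (the grammar file name and the FORWARD rule are hypothetical):

```csharp
using System;
using System.Speech.Recognition;

class SrgsTagFormats
{
    static void Main()
    {
        // In the .grxml file, a Kinect-style literal tag such as
        //   <tag>FORWARD</tag>          (tag-format="semantics/1.0-literals")
        // must become
        //   <tag>out="FORWARD";</tag>   (tag-format="semantics/1.0")
        // before System.Speech will accept the grammar.
        using (var engine = new SpeechRecognitionEngine())
        {
            engine.LoadGrammar(new Grammar("SpeechGrammar.grxml")); // hypothetical file
            engine.SetInputToDefaultAudioDevice();
            engine.SpeechRecognized += (s, e) =>
                Console.WriteLine(e.Result.Semantics.Value); // e.g. "FORWARD"
            engine.RecognizeAsync(RecognizeMode.Multiple);
            Console.ReadLine(); // keep listening until Enter is pressed
        }
    }
}
```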