Is it possible to use the synthesised speech from Web Speech API as a SourceNode
inside Web Audio API\'s audio context?
I actually asked about adding this on the Web Speech mailing list, and was basically told "no". To be fair to people on that mailing list, I was unable to think of more than one or two specific use cases when prompted.
So unless they've changed something in the past month or so, it sounds like this isn't a planned feature.
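If you just need synthesized speech inside an audio graph today, one workaround is to do the synthesis outside the Web Speech API entirely and decode the resulting audio into the context yourself. A minimal sketch in TypeScript, assuming you have some TTS endpoint (the `/tts?text=...` URL here is hypothetical) that returns an encoded audio file:

```typescript
// Workaround sketch: fetch pre-rendered TTS audio and play it through Web Audio.
// The /tts endpoint is hypothetical -- any server-side or cloud TTS service that
// returns an encoded audio file (WAV, MP3, etc.) would work the same way.
async function speakThroughWebAudio(ctx: AudioContext, text: string): Promise<void> {
  const response = await fetch(`/tts?text=${encodeURIComponent(text)}`);
  const encoded = await response.arrayBuffer();

  // Decode into an AudioBuffer so it can feed an ordinary source node.
  const buffer = await ctx.decodeAudioData(encoded);

  // From here it behaves like any other source: route it through gain,
  // filters, panners, analysers, and so on before the destination.
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start();
}
```

The trade-off is a round trip to a server, but once the speech is an AudioBuffer you get the full Web Audio toolbox, which `speechSynthesis.speak()` doesn't expose.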
You can use Google's Web Speech API for recognition: the browser records the sound on your local machine and sends it to an external server, and you can control a few things such as when recognition starts and stops. For more information, here's a link:
http://updates.html5rocks.com/2013/01/Voice-Driven-Web-Apps-Introduction-to-the-Web-Speech-API
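To be clear, that link covers the recognition side of the Web Speech API (SpeechRecognition), not synthesis. A minimal sketch of the start/stop and continuous-mode controls the article describes, assuming a Chrome-style webkit-prefixed constructor:

```typescript
// Recognition sketch: the browser captures the microphone locally and streams it
// to Google's servers for transcription. Prefixed constructor assumed (Chrome).
const SpeechRecognitionCtor =
  (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;

const recognition = new SpeechRecognitionCtor();
recognition.continuous = true;      // keep listening instead of stopping after one phrase
recognition.interimResults = true;  // surface partial transcripts as they arrive

recognition.onresult = (event: any) => {
  for (let i = event.resultIndex; i < event.results.length; i++) {
    if (event.results[i].isFinal) {
      console.log("final:", event.results[i][0].transcript);
    }
  }
};

recognition.start(); // begin capturing; call recognition.stop() to end the session
```

Note this still doesn't give you a Web Audio node for the synthesized speech; it only controls when and how the recognition session runs.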