For everyone using Android's voice recognition API, there used to be a handy RecognitionListener you could register that would push various events to your callbacks. In particular, the onBufferReceived(byte[]) method delivered chunks of the recorded audio, which made it possible to capture the sound the recognizer was processing. Has anyone found a way to get at that audio now that the callback no longer appears to be called?
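For context, here is a minimal sketch of how such a listener is registered (the class name BufferCapturingListener is just a placeholder; as the answers below note, onBufferReceived is not guaranteed to be invoked on every platform version):

    import android.content.Context;
    import android.content.Intent;
    import android.os.Bundle;
    import android.speech.RecognitionListener;
    import android.speech.RecognizerIntent;
    import android.speech.SpeechRecognizer;

    public class BufferCapturingListener implements RecognitionListener {

        // Raw audio chunks arrive here -- but only on platforms that still deliver them.
        @Override public void onBufferReceived(byte[] buffer) { /* stash or stream the bytes */ }

        @Override public void onResults(Bundle results) { /* final recognition results */ }
        @Override public void onPartialResults(Bundle partialResults) { }
        @Override public void onReadyForSpeech(Bundle params) { }
        @Override public void onBeginningOfSpeech() { }
        @Override public void onRmsChanged(float rmsdB) { }
        @Override public void onEndOfSpeech() { }
        @Override public void onError(int error) { }
        @Override public void onEvent(int eventType, Bundle params) { }

        // Hook the listener up to a SpeechRecognizer and start listening.
        public static SpeechRecognizer start(Context context) {
            SpeechRecognizer recognizer = SpeechRecognizer.createSpeechRecognizer(context);
            recognizer.setRecognitionListener(new BufferCapturingListener());
            Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            intent.putExtra(RecognizerIntent.EXTRA_PARTIAL_RESULTS, true);
            recognizer.startListening(intent);
            return recognizer;
        }
    }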
I ran into the same problem. The reason I didn't just accept that "this does not work" is that Google Now's "note to self" records the audio and sends it to you. What I found in logcat while running the "note to self" operation was:
02-20 14:04:59.664: I/AudioService(525): AudioFocus requestAudioFocus() from android.media.AudioManager@42439ca8com.google.android.voicesearch.audio.ByteArrayPlayer$1@424cca50
02-20 14:04:59.754: I/AbstractCardController.SelfNoteController(8675): #attach
02-20 14:05:01.006: I/AudioService(525): AudioFocus abandonAudioFocus() from android.media.AudioManager@42439ca8com.google.android.voicesearch.audio.ByteArrayPlayer$1@424cca50
02-20 14:05:05.791: I/ActivityManager(525): START u0 {act=com.google.android.gm.action.AUTO_SEND typ=text/plain cmp=com.google.android.gm/.AutoSendActivity (has extras)} from pid 8675
02-20 14:05:05.821: I/AbstractCardView.SelfNoteCard(8675): #onViewDetachedFromWindow
This makes me believe that Google abandons the audio focus held by Google Now (the RecognizerIntent) and uses an audio recorder or something similar once the note-to-self tag appears in onPartialResults. I cannot confirm this; has anyone else tried to make this work?
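For reference, the requestAudioFocus()/abandonAudioFocus() pair seen in the log corresponds to the standard AudioManager calls; a rough sketch of that handshake (the listener here is just a placeholder) looks like this:

    import android.content.Context;
    import android.media.AudioManager;

    public class AudioFocusDemo {
        // Sketch of the audio-focus handshake that shows up in the logcat above.
        public static void demonstrate(Context context) {
            AudioManager audioManager =
                    (AudioManager) context.getSystemService(Context.AUDIO_SERVICE);

            AudioManager.OnAudioFocusChangeListener listener = focusChange -> {
                // React to AUDIOFOCUS_LOSS / AUDIOFOCUS_GAIN here if needed.
            };

            // Corresponds to the "AudioFocus requestAudioFocus()" line in the log.
            int result = audioManager.requestAudioFocus(
                    listener, AudioManager.STREAM_MUSIC, AudioManager.AUDIOFOCUS_GAIN_TRANSIENT);

            if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
                // ... play or record while holding focus ...
            }

            // Corresponds to the "AudioFocus abandonAudioFocus()" line in the log.
            audioManager.abandonAudioFocus(listener);
        }
    }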
I have a service that implements RecognitionListener, and I also override the onBufferReceived(byte[]) method. I was investigating why speech recognition is much slower to call onResults() on <= ICS. The only difference I could find was that onBufferReceived is called on phones running <= ICS. On Jelly Bean, onBufferReceived() is never called and onResults() is called significantly faster; I suspect that is because of the overhead of calling onBufferReceived every second or millisecond. Maybe that's why they did away with onBufferReceived()?
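If that is the cause, the practical consequence is that you shouldn't rely on the callback on newer releases; a small (assumed) guard along these lines is one way to handle it:

    import android.os.Build;

    public final class RecognizerCompat {
        // On Ice Cream Sandwich and below the platform recognizer delivered
        // onBufferReceived(); on Jelly Bean and later, assume it never fires.
        public static boolean bufferCallbackExpected() {
            return Build.VERSION.SDK_INT <= Build.VERSION_CODES.ICE_CREAM_SANDWICH_MR1;
        }
    }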
Google does not call this method in its Jelly Bean speech app (QuickSearchBox). It's simply not in the code. Unless there is an official comment from a Google engineer, I cannot give a definite answer as to why they did this. I did search the developer forums but did not see any commentary about the decision.
The ICS default for speech recognition comes from Google's VoiceSearch.apk. You can decompile this APK and find that there is an Activity to handle an intent with the action *android.speech.action.RECOGNIZE_SPEECH*. In this APK I searched for "onBufferReceived" and found a reference to it in com.google.android.voicesearch.GoogleRecognitionService$RecognitionCallback.
With Jelly Bean, Google renamed VoiceSearch.apk to QuickSearch.apk and added a lot of new functionality to the app (e.g. offline dictation). You would expect to still find an onBufferReceived call, but for some reason it is completely gone.
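That action string is the same one exposed through RecognizerIntent, so the activity found in the APK is what handles a standard recognition request like the following sketch (the request code is an arbitrary placeholder):

    import android.app.Activity;
    import android.content.Intent;
    import android.speech.RecognizerIntent;

    public class RecognizeHelper {
        private static final int REQUEST_RECOGNIZE = 1001; // arbitrary request code

        // Fires the same android.speech.action.RECOGNIZE_SPEECH intent that the
        // decompiled VoiceSearch/QuickSearchBox activity is registered to handle.
        public static void startRecognition(Activity activity) {
            Intent intent = new Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH);
            intent.putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                    RecognizerIntent.LANGUAGE_MODEL_FREE_FORM);
            activity.startActivityForResult(intent, REQUEST_RECOGNIZE);
        }
    }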
I too was using the onBufferReceived method and was disappointed that the (non-guaranteed) call to the method was dropped in Jelly Bean. Well, if we can't grab the audio with onBufferReceived(), maybe it is possible to run an AudioRecord simultaneously with voice recognition. Has anyone tried this? If not, I'll give it a whirl and report back.
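A minimal sketch of what I have in mind (untested; the recognizer may well hold the microphone, in which case AudioRecord would fail to initialize or read nothing):

    import android.media.AudioFormat;
    import android.media.AudioRecord;
    import android.media.MediaRecorder;

    public class ParallelRecorder {
        private static final int SAMPLE_RATE = 16000; // Hz, typical for speech

        // Attempt to record raw PCM from the mic while SpeechRecognizer is running.
        // Requires the RECORD_AUDIO permission.
        public static byte[] recordFor(int milliseconds) {
            int minBuffer = AudioRecord.getMinBufferSize(SAMPLE_RATE,
                    AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT);
            AudioRecord recorder = new AudioRecord(MediaRecorder.AudioSource.MIC,
                    SAMPLE_RATE, AudioFormat.CHANNEL_IN_MONO,
                    AudioFormat.ENCODING_PCM_16BIT, minBuffer * 4);

            if (recorder.getState() != AudioRecord.STATE_INITIALIZED) {
                return new byte[0]; // mic unavailable -- likely held by the recognizer
            }

            int totalBytes = SAMPLE_RATE * 2 * milliseconds / 1000; // 16-bit mono
            byte[] audio = new byte[totalBytes];
            recorder.startRecording();
            int offset = 0;
            while (offset < totalBytes) {
                int read = recorder.read(audio, offset, totalBytes - offset);
                if (read <= 0) break; // error or no data; stop early
                offset += read;
            }
            recorder.stop();
            recorder.release();
            return audio;
        }
    }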