Question
I am trying to build a simple app using Microsoft Azure's Cognitive Services Speech To Text SDK in Unity3D. I've been following this tutorial, and it worked quite well. The only problem with the tutorial is that the Speech-To-Text is activated by a button. When you press the button, it transcribes for the duration of a single sentence, and you have to press the button again for it to transcribe another one. My problem is that I'd like it to start transcribing as soon as the program is run in Unity, rather than having to press a button each time I want to transcribe a sentence.
Here is the code.
public async void ButtonClick()
{
    // Creates an instance of a speech config with specified subscription key and service region.
    // Replace with your own subscription key and service region (e.g., "westus").
    var config = SpeechConfig.FromSubscription("[My API Key]", "westus");

    // Make sure to dispose the recognizer after use!
    using (var recognizer = new SpeechRecognizer(config))
    {
        lock (threadLocker)
        {
            waitingForReco = true;
        }

        // Starts speech recognition, and returns after a single utterance is recognized. The end of a
        // single utterance is determined by listening for silence at the end or until a maximum of 15
        // seconds of audio is processed. The task returns the recognition text as result.
        // Note: Since RecognizeOnceAsync() returns only a single utterance, it is suitable only for single
        // shot recognition like command or query.
        // For long-running multi-utterance recognition, use StartContinuousRecognitionAsync() instead.
        var result = await recognizer.RecognizeOnceAsync().ConfigureAwait(false);

        // Checks result.
        string newMessage = string.Empty;
        if (result.Reason == ResultReason.RecognizedSpeech)
        {
            newMessage = result.Text;
        }
        else if (result.Reason == ResultReason.NoMatch)
        {
            newMessage = "NOMATCH: Speech could not be recognized.";
        }
        else if (result.Reason == ResultReason.Canceled)
        {
            var cancellation = CancellationDetails.FromResult(result);
            newMessage = $"CANCELED: Reason={cancellation.Reason} ErrorDetails={cancellation.ErrorDetails}";
        }

        lock (threadLocker)
        {
            message = newMessage;
            waitingForReco = false;
        }
    }
}

void Start()
{
    if (outputText == null)
    {
        UnityEngine.Debug.LogError("outputText property is null! Assign a UI Text element to it.");
    }
    else if (startRecoButton == null)
    {
        message = "startRecoButton property is null! Assign a UI Button to it.";
        UnityEngine.Debug.LogError(message);
    }
    else
    {
        // Continue with normal initialization, Text and Button objects are present.
    }
}

void Update()
{
    lock (threadLocker)
    {
        if (startRecoButton != null)
        {
            startRecoButton.interactable = !waitingForReco && micPermissionGranted;
        }
    }
}
I've tried removing the Button object, but then the speech-to-text won't run.
Any tips or advice would be amazing. Thank you.
Answer 1:
Per the comments in the script of the tutorial you referenced:
// Starts speech recognition, and returns after a single utterance is recognized. The end of a
// single utterance is determined by listening for silence at the end or until a maximum of 15
// seconds of audio is processed. The task returns the recognition text as result.
// Note: Since RecognizeOnceAsync() returns only a single utterance, it is suitable only for single
// shot recognition like command or query.
// For long-running multi-utterance recognition, use StartContinuousRecognitionAsync() instead.
But it's not as simple as replacing RecognizeOnceAsync with StartContinuousRecognitionAsync, because the behaviours are different. RecognizeOnceAsync will basically turn on your mic for a maximum of 15 seconds and then stop listening.
Instead, turn the button into a "should I listen continuously or not?" toggle using StartContinuousRecognitionAsync and StopContinuousRecognitionAsync, and change your Start function to simply create a new recognizer and have it wait for the Speech Recognizer's Recognizing event to come through. Below is the script I used to enable this functionality:
using UnityEngine;
using UnityEngine.UI;
using Microsoft.CognitiveServices.Speech;

public class HelloWorld : MonoBehaviour
{
    public Text outputText;
    public Button startRecordButton;

    // PULLED OUT OF BUTTON CLICK
    SpeechRecognizer recognizer;
    SpeechConfig config;

    private object threadLocker = new object();
    private bool speechStarted = false; // checking to see if you've started listening for speech
    private string message;
    private bool micPermissionGranted = false;

    private void RecognizingHandler(object sender, SpeechRecognitionEventArgs e)
    {
        lock (threadLocker)
        {
            message = e.Result.Text;
        }
    }

    public async void ButtonClick()
    {
        if (speechStarted)
        {
            // This stops the listening when you click the button, if it's already on.
            await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);
            lock (threadLocker)
            {
                speechStarted = false;
            }
        }
        else
        {
            // This starts the listening when you click the button, if it's currently off.
            await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);
            lock (threadLocker)
            {
                speechStarted = true;
            }
        }
    }

    void Start()
    {
        startRecordButton.onClick.AddListener(ButtonClick);
        config = SpeechConfig.FromSubscription("KEY", "REGION");
        recognizer = new SpeechRecognizer(config);
        recognizer.Recognizing += RecognizingHandler;
    }

    void Update()
    {
        lock (threadLocker)
        {
            if (outputText != null)
            {
                outputText.text = message;
            }
        }
    }
}
And below is a gif of me using this functionality. You'll note that I don't click the button at all during the recording (it was only clicked once, prior to the gif being recorded). (Also, sorry for the strange sentences; my coworkers kept interrupting to ask who I was talking to.)
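Since the original question asks for transcription to begin as soon as the program runs, with no button at all, one option is to drop the button wiring and kick off continuous recognition from Start() itself. The sketch below is an adaptation of the script above, not part of the original answer; AutoTranscriber is a hypothetical class name, and "KEY" / "REGION" are placeholders for your own subscription key and service region.

using UnityEngine;
using UnityEngine.UI;
using Microsoft.CognitiveServices.Speech;

// Hypothetical variant of the answer's script: starts continuous recognition
// automatically when the scene runs, with no Button required.
public class AutoTranscriber : MonoBehaviour
{
    public Text outputText;

    private SpeechRecognizer recognizer;
    private SpeechConfig config;
    private object threadLocker = new object();
    private string message;

    private void RecognizingHandler(object sender, SpeechRecognitionEventArgs e)
    {
        lock (threadLocker)
        {
            message = e.Result.Text; // partial results arrive continuously
        }
    }

    async void Start()
    {
        // Replace "KEY" and "REGION" with your own subscription key and service region.
        config = SpeechConfig.FromSubscription("KEY", "REGION");
        recognizer = new SpeechRecognizer(config);
        recognizer.Recognizing += RecognizingHandler;

        // Begin listening immediately instead of waiting for a button click.
        await recognizer.StartContinuousRecognitionAsync().ConfigureAwait(false);
    }

    async void OnDestroy()
    {
        // Stop listening and release the recognizer when the object goes away.
        if (recognizer != null)
        {
            await recognizer.StopContinuousRecognitionAsync().ConfigureAwait(false);
            recognizer.Dispose();
        }
    }

    void Update()
    {
        lock (threadLocker)
        {
            if (outputText != null)
            {
                outputText.text = message;
            }
        }
    }
}

Note that on platforms that gate microphone access (e.g. Android or UWP/HoloLens) you would still need to request microphone permission before starting recognition, which is what the tutorial's micPermissionGranted check is for.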
Source: https://stackoverflow.com/questions/57845649/how-to-get-microsoft-azure-speech-to-text-to-start-transcribing-when-program-is