Question
I have a requirement to add speech-to-text to a web page, and I cannot find enough documentation to tell me what the best approach would be. I have read related posts and reviewed samples such as Stream Audio from C#.
UPDATE: I now convert the audio to WAV format on the client before sending it to the server. The result is a file that is recognized as WAV, but the Speech API still returns no output.
UPDATE[20181004]: I can successfully record and save a full WAV file in the browser and then send it to the Google Speech API (branch: Record-in-browser-before-sending-all). The branch also includes code that successfully sends chunks of WAV data to the server, but the Google Speech API does not return results for those audio chunks.
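From what I understand, the streaming API expects a configuration-only request first and then raw LINEAR16 samples in the subsequent requests, so sending complete WAV chunks (each carrying its own RIFF header) is unlikely to be transcribed correctly. Below is a minimal sketch of that opening request; the class name is a placeholder, and the Google.Cloud.Speech.V1 package, 16 kHz sample rate, and en-US language are assumptions that must match what the browser actually records:

```csharp
using System.Threading.Tasks;
using Google.Cloud.Speech.V1;

public static class SpeechStreamSetup
{
    // Opens the bidirectional stream and sends the mandatory config-only
    // first request. Sample rate, encoding and language are assumptions --
    // they must match the audio the browser actually produces.
    public static async Task<SpeechClient.StreamingRecognizeStream> OpenStreamAsync()
    {
        var speech = SpeechClient.Create();
        var streamingCall = speech.StreamingRecognize();

        await streamingCall.WriteAsync(new StreamingRecognizeRequest
        {
            StreamingConfig = new StreamingRecognitionConfig
            {
                Config = new RecognitionConfig
                {
                    // Raw PCM samples; the WAV/RIFF header should be stripped
                    // from the chunks sent later as AudioContent.
                    Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
                    SampleRateHertz = 16000,
                    LanguageCode = "en-US",
                },
                InterimResults = true,
            }
        });

        return streamingCall;
    }
}
```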
UPDATE[20181005]: Attempted to use a ConcurrentQueue to implement streaming, without success. The latest code has been moved to the develop branch, which is now the default.
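The idea behind the queue-based attempt was a producer/consumer bridge between the WebSocket receive loop and the streaming call. The sketch below (class and method names are placeholders, not my actual code) shows one way such a bridge could look, using a BlockingCollection wrapped around a ConcurrentQueue so the consumer blocks until data arrives:

```csharp
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Google.Cloud.Speech.V1;
using Google.Protobuf;

public class AudioBridge
{
    // BlockingCollection wraps a ConcurrentQueue and lets the consumer wait
    // until the WebSocket handler has produced more audio.
    private readonly BlockingCollection<byte[]> _chunks =
        new BlockingCollection<byte[]>(new ConcurrentQueue<byte[]>());

    // Producer side: called from the WebSocket receive loop with raw PCM bytes.
    public void Enqueue(byte[] pcm) => _chunks.Add(pcm);

    // Called once when the browser stops recording.
    public void Complete() => _chunks.CompleteAdding();

    // Consumer side: drains the queue into an already-configured streaming call.
    public async Task PumpAsync(SpeechClient.StreamingRecognizeStream streamingCall)
    {
        foreach (var chunk in _chunks.GetConsumingEnumerable())
        {
            await streamingCall.WriteAsync(new StreamingRecognizeRequest
            {
                AudioContent = ByteString.CopyFrom(chunk)
            });
        }
        await streamingCall.WriteCompleteAsync();
    }
}
```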
UPDATE[20181012]: Tried to copy and adapt the code from https://github.com/googlecodelabs/speaking-with-a-webpage
My approach: Sample AspnetCore Source + WebPage
I am recording audio from the microphone and sending the buffer over WebSockets to the ASP.NET Core web application. In the web application, I buffer the data and write it to the Speech API in 32 KB chunks; the local buffering was introduced to see whether it would fix the 400 errors.
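The sketch below shows the general shape of that server-side receive loop (simplified, with placeholder names; it assumes the browser is already sending raw LINEAR16 PCM and that the streaming call has been opened with the config request shown above):

```csharp
using System;
using System.IO;
using System.Net.WebSockets;
using System.Threading;
using System.Threading.Tasks;
using Google.Cloud.Speech.V1;
using Google.Protobuf;

public static class AudioSocketHandler
{
    private const int ChunkSize = 32 * 1024; // forward roughly 32 KB at a time

    public static async Task ReceiveAudioAsync(
        WebSocket socket, SpeechClient.StreamingRecognizeStream streamingCall)
    {
        var socketBuffer = new byte[4096];
        var pending = new MemoryStream();

        while (socket.State == WebSocketState.Open)
        {
            var result = await socket.ReceiveAsync(
                new ArraySegment<byte>(socketBuffer), CancellationToken.None);
            if (result.MessageType == WebSocketMessageType.Close)
                break;

            pending.Write(socketBuffer, 0, result.Count);

            // Once a full chunk has accumulated, push it to the Speech API.
            if (pending.Length >= ChunkSize)
            {
                await streamingCall.WriteAsync(new StreamingRecognizeRequest
                {
                    AudioContent = ByteString.CopyFrom(pending.ToArray())
                });
                pending.SetLength(0);
            }
        }

        // Flush the remainder and tell the API no more audio is coming.
        if (pending.Length > 0)
        {
            await streamingCall.WriteAsync(new StreamingRecognizeRequest
            {
                AudioContent = ByteString.CopyFrom(pending.ToArray())
            });
        }
        await streamingCall.WriteCompleteAsync();
    }
}
```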
The code works up to the point of sending the data to the Google Speech API. In the Google Cloud Console I can see that 100% of my WriteAsync calls returned a 400 error, but I have no idea where to find the detail. I am also not receiving any responses from the API that would let me print the error detail.
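As far as I can tell, with the C# SDK the gRPC status behind a 400 (InvalidArgument) only surfaces when the response stream is read, so a dedicated reader task with a try/catch around it should at least expose the error message. Again a simplified sketch with placeholder names, assuming the same library version as above:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;
using Google.Cloud.Speech.V1;
using Grpc.Core;

public static class SpeechResponseReader
{
    // Reads transcription results; a rejected stream (400 / InvalidArgument)
    // shows up here as an RpcException whose Status.Detail explains what the
    // API did not like (encoding, sample rate, missing config, ...).
    public static Task ReadResponsesAsync(SpeechClient.StreamingRecognizeStream streamingCall)
    {
        return Task.Run(async () =>
        {
            try
            {
                while (await streamingCall.ResponseStream.MoveNext(CancellationToken.None))
                {
                    foreach (var result in streamingCall.ResponseStream.Current.Results)
                    {
                        foreach (var alternative in result.Alternatives)
                        {
                            Console.WriteLine(alternative.Transcript);
                        }
                    }
                }
            }
            catch (RpcException ex)
            {
                Console.WriteLine($"{ex.StatusCode}: {ex.Status.Detail}");
            }
        });
    }
}
```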
I presume the data being sent to the Speech API is in an incorrect format or is the wrong size.
[20181004] Removed outdated code snippets
Source: https://stackoverflow.com/questions/52518467/streaming-audio-buffer-from-a-web-page-to-c-sharp-google-cloud-speech-to-text-sd