Question
We need to stream live audio (from a medical device) to web browsers with no more than 3-5 seconds of end-to-end delay (assume 200 ms or less of network latency). Today we use a browser plugin (NPAPI) for decoding, filtering (high-pass, low-pass, band-pass), and playback of the audio stream (delivered via WebSockets).
We want to replace the plugin.
I have been looking at various Web Audio API demos, and most of our required functionality (playback, gain control, filtering) appears to be available in the Web Audio API. However, it is not clear to me whether the Web Audio API can be used for streamed sources, as most Web Audio API examples work with short sounds and/or audio clips.
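For reference, the gain and filtering pieces map directly onto standard Web Audio nodes. A minimal sketch of the processing chain we would need (the cutoff frequencies are placeholders, and 'source' stands in for whatever node ends up feeding the graph):

// Sketch of the gain/filter chain; 'source' is a placeholder for the streaming input node.
const context = new AudioContext();

const highpass = context.createBiquadFilter();
highpass.type = 'highpass';
highpass.frequency.value = 100;    // example cutoff, not from our actual settings

const lowpass = context.createBiquadFilter();
lowpass.type = 'lowpass';
lowpass.frequency.value = 4000;    // example cutoff, not from our actual settings

const gain = context.createGain();
gain.gain.value = 1.0;

// source -> highpass -> lowpass -> gain -> speakers
// source.connect(highpass);
highpass.connect(lowpass);
lowpass.connect(gain);
gain.connect(context.destination);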
Can Web Audio API be used to play live streamed audio?
Update (11-Feb-2015):
After a bit more research and local prototyping, I am not sure live audio streaming with the Web Audio API is possible, as the Web Audio API's decodeAudioData isn't really designed to handle arbitrary chunks of audio data (in our case delivered via WebSockets). It appears to need the whole 'file' in order to process it correctly.
See these Stack Overflow questions:
- How to stream MP3 data via WebSockets with node.js and socket.io?
- Define 'valid mp3 chunk' for decodeAudioData (WebAudio API)
It is possible to connect an <audio> element to the Web Audio API with createMediaElementSource, but in my experience the <audio> element induces a huge amount of end-to-end delay (15-30 s), and there doesn't appear to be any way to reduce the delay below 3-5 seconds.
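For context, the <audio>-element route looks roughly like this (a sketch; the stream URL and the element lookup are placeholders):

// Sketch of the createMediaElementSource route; the <audio> element's src is assumed
// to point at the live stream.
const context = new AudioContext();
const audioEl = document.querySelector('audio');
const mediaSource = context.createMediaElementSource(audioEl);
mediaSource.connect(context.destination);   // filter/gain nodes could be inserted here
audioEl.play();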
I think the only remaining solution is to use WebRTC with the Web Audio API. I was hoping to avoid WebRTC, as it will require significant changes to our server-side implementation.
Update (12-Feb-2015) Part I:
I haven't completely eliminated the <audio> tag yet (I still need to finish my prototype). Once I have ruled it out, I suspect createScriptProcessor (deprecated but still supported) will be a good choice for our environment, as I could 'stream' (via WebSockets) our ADPCM data to the browser and then (in JavaScript) convert it to PCM, similar to what Scott's library (see below) does using createScriptProcessor. This method doesn't require the data to arrive in properly sized 'chunks', nor the critical timing, that the decodeAudioData approach does.
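A rough sketch of what I have in mind (assuming the ADPCM-to-PCM conversion is done by our own decoder; decodeAdpcmToFloat32, the WebSocket URL, and the buffer size are placeholders, not our production code):

// Sketch: feed WebSocket-delivered ADPCM through a ScriptProcessorNode as decoded PCM.
const context = new AudioContext();
const processor = context.createScriptProcessor(4096, 1, 1);   // bufferSize, inputs, outputs
const pcmQueue = [];   // Float32Array chunks decoded from the incoming ADPCM data

const socket = new WebSocket('wss://example.com/audio');       // placeholder URL
socket.binaryType = 'arraybuffer';
socket.onmessage = (event) => {
  // decodeAdpcmToFloat32() is a placeholder for our own ADPCM decoder
  pcmQueue.push(decodeAdpcmToFloat32(event.data));
};

let current = new Float32Array(0);
let offset = 0;
processor.onaudioprocess = (e) => {
  const output = e.outputBuffer.getChannelData(0);
  for (let i = 0; i < output.length; i++) {
    if (offset >= current.length) {
      // move on to the next decoded chunk, or an empty one if we have underrun
      current = pcmQueue.length ? pcmQueue.shift() : new Float32Array(0);
      offset = 0;
    }
    output[i] = offset < current.length ? current[offset++] : 0;   // pad with silence on underrun
  }
};
processor.connect(context.destination);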
Update (12-Feb-2015) Part II:
After more testing, I eliminated the <audio>-element-to-Web-Audio-API interface because, depending on source type, compression, and browser, the end-to-end delay can be 3-30 seconds. That leaves the createScriptProcessor method (see Scott's post below) or WebRTC. After discussing with our decision makers, it has been decided we will take the WebRTC approach. I assume it will work, but it will require changes to our server-side code.
I'm going to mark the first answer, just so the 'question' is closed.
Thanks for listening. Feel free to add comments as needed.
Answer 1:
Yes, the Web Audio API (along with AJAX or WebSockets) can be used for streaming.
Basically, you pull down (or send, in the case of WebSockets) chunks of some length n. Then you decode them with the Web Audio API and queue them up to be played, one after the other.
Because the Web Audio API has high-precision timing, you won't hear any "seams" between the playback of each buffer if you do the scheduling correctly.
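Here's a rough sketch of that pattern (the chunk delivery is left out and the function name enqueueChunk is made up; the key point is scheduling each buffer to start exactly when the previous one ends):

const context = new AudioContext();
let nextStartTime = 0;

// Called for each chunk of audio data (e.g. received over a WebSocket).
// Each arrayBuffer must be independently decodable (a self-contained segment).
function enqueueChunk(arrayBuffer) {
  context.decodeAudioData(arrayBuffer, (audioBuffer) => {
    const source = context.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(context.destination);

    // Schedule this buffer to start exactly when the previous one ends,
    // so there are no audible seams between chunks.
    const now = context.currentTime;
    if (nextStartTime < now) nextStartTime = now;
    source.start(nextStartTime);
    nextStartTime += audioBuffer.duration;
  });
}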
Answer 2:
I wrote a streaming Web Audio API system where I used Web Workers to do all the WebSocket management and communicate with node.js, so that the browser's main thread simply renders audio. It works just fine on laptops; since mobile browsers are behind in their implementation of WebSockets inside Web Workers, you need no less than Lollipop for it to run as coded. I posted the full source code here.
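The full source is linked rather than reproduced here, but the worker/main-thread split looks roughly like this (a sketch only; the URL and message handling are made up, not the posted code):

// socket-worker.js -- runs off the main thread; its only job is the WebSocket.
const socket = new WebSocket('wss://example.com/audio');   // placeholder URL
socket.binaryType = 'arraybuffer';
socket.onmessage = (event) => {
  // hand the raw audio bytes to the main thread; transfer to avoid a copy
  postMessage(event.data, [event.data]);
};

// main thread -- only renders audio.
const worker = new Worker('socket-worker.js');
const context = new AudioContext();
worker.onmessage = (event) => {
  context.decodeAudioData(event.data, (buffer) => {
    const source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    source.start();   // real code would schedule these back-to-back, as in Answer 1
  });
};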
Answer 3:
To elaborate on the comments about how to play a number of separate buffers stored in an array by shifting the next one out each time:
If you create a source through createBufferSource(), it has an onended event to which you can attach a callback, which will fire when the buffer has reached its end. You can do something like this to play the various chunks in the array one after the other:
function play() {
  // stop when the queue is empty (end of stream has been reached)
  if (audiobuffer.length === 0) { return; }

  let source = context.createBufferSource();
  // take the next buffer off the front of the queue
  source.buffer = audiobuffer.shift();
  source.connect(context.destination);
  // play the next buffer when the current one has reached its end
  source.onended = play;
  source.start();
}
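For completeness, here's one way the audiobuffer array above might get filled (a sketch only; the WebSocket URL, the chunk format, and the three-chunk pre-buffer are assumptions):

const context = new AudioContext();
const audiobuffer = [];            // the queue consumed by play() above
let started = false;

const socket = new WebSocket('wss://example.com/audio');   // placeholder URL
socket.binaryType = 'arraybuffer';
socket.onmessage = (event) => {
  context.decodeAudioData(event.data, (decoded) => {
    audiobuffer.push(decoded);
    // buffer a few chunks before starting so the queue doesn't drain immediately
    if (!started && audiobuffer.length >= 3) {
      started = true;
      play();
    }
  });
};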
Hope that helps. I'm still experimenting with how to get this all smooth and ironed out, but it's a good start, and it's something that is missing from a lot of the online posts.
Source: https://stackoverflow.com/questions/28440262/web-audio-api-for-live-streaming