Question
I have a server sending chunks of raw audio over a websocket. The idea is to retrieve those chunks and play them back as smoothly as possible.
Here is the most important piece of code:
ws.onmessage = function (event) {
    // assumes ws.binaryType = 'arraybuffer', so event.data is an ArrayBuffer
    var view = new Int16Array(event.data);
    var viewf = new Float32Array(view.length);
    // convert 16-bit signed samples to the [-1, 1] floats WebAudio expects
    for (var i = 0; i < view.length; i++) {
        viewf[i] = view[i] / 32768;
    }
    audioBuffer = audioCtx.createBuffer(1, viewf.length, 22050);
    audioBuffer.getChannelData(0).set(viewf);
    source = audioCtx.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(audioCtx.destination);
    source.start(0);
};
This works decently well, but there are cracks in the playback: the network latency is not constant, so a new chunk doesn't arrive exactly when the previous one finishes playing. I can end up with either two buffers playing together for a short time, or none playing at all.
I tried:
- hooking source.onended to start the next chunk, but it's not seamless: there is a crack at the end of every chunk, and the delay at each seam accumulates, so playback gets more and more late compared to the stream;
- appending the new data to the currently playing buffer, but this seems to be forbidden: buffers are of fixed size.
Is there a proper solution to fix that playback? The only requirement is to play the uncompressed audio coming from a websocket.
EDIT: Solution: Given that I know my buffer lengths, I can schedule the playback this way:
if (nextStartTime == 0) {
    // schedule the very first chunk half a buffer later to absorb jitter
    nextStartTime = audioCtx.currentTime + (audioBuffer.length / audioBuffer.sampleRate) / 2;
}
source.start(nextStartTime);
nextStartTime += audioBuffer.length / audioBuffer.sampleRate;
The first time, I schedule playback to begin half a buffer later than the current time, to absorb the maximum unexpected latency. After that, each chunk is scheduled to start exactly when the previous one ends.
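Putting the pieces together, here is a minimal sketch of the scheduled handler, assuming 16-bit mono PCM at 22050 Hz and that ws.binaryType has been set to 'arraybuffer':

var audioCtx = new AudioContext();
var nextStartTime = 0;

ws.binaryType = 'arraybuffer';
ws.onmessage = function (event) {
    var view = new Int16Array(event.data);
    var viewf = new Float32Array(view.length);
    for (var i = 0; i < view.length; i++) {
        viewf[i] = view[i] / 32768; // 16-bit signed PCM -> [-1, 1] float
    }

    var audioBuffer = audioCtx.createBuffer(1, viewf.length, 22050);
    audioBuffer.getChannelData(0).set(viewf);

    var source = audioCtx.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(audioCtx.destination);

    var chunkDuration = audioBuffer.length / audioBuffer.sampleRate;
    if (nextStartTime === 0) {
        // start half a chunk late so network jitter doesn't starve the queue
        nextStartTime = audioCtx.currentTime + chunkDuration / 2;
    }
    source.start(nextStartTime);
    nextStartTime += chunkDuration;
};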
Answer 1:
You should probably start with https://www.html5rocks.com/en/tutorials/audio/scheduling/ which explains very well how to schedule things in WebAudio.
For your use case, you should also take advantage of the fact that you know the sample rate of the PCM samples and how many samples you've read. Together, these determine how long it will take to play out the buffer; use that to figure out when to schedule the next one.
(But note that if the PCM sample rate is not the same as audioCtx.sampleRate, the data will be resampled, which might mess up your timing.)
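One way to sidestep the resampling entirely, assuming a browser that honors the sampleRate option of the AudioContext constructor, is to create the context at the stream's rate:

// Request a context running at the stream's sample rate (22050 Hz here).
// Older implementations may ignore the option and use the hardware rate,
// so it's worth verifying before relying on it.
var audioCtx = new AudioContext({ sampleRate: 22050 });
console.log(audioCtx.sampleRate); // 22050 if the request was honored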
Answer 2:
There's a better way to handle this these days... consider using the Media Source Extensions.
- Spec: https://www.w3.org/TR/media-source/
- MDN: https://developer.mozilla.org/en-US/docs/Web/API/MediaSource
Instead of having to schedule buffers and do it all yourself, you basically dump your received data into a buffer and let the browser worry about buffered playback, as if it were a file over HTTP.
Chrome supports playback of WAV files. Since your data is in raw PCM, you'll need to spoof a WAV file header. Fortunately, this isn't too difficult: http://soundfile.sapp.org/doc/WaveFormat/
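As a sketch of that idea, here is a hypothetical buildWavHeader helper that produces the 44-byte canonical WAV header described at that link; before committing to this route, check MediaSource.isTypeSupported('audio/wav'), since WAV support in MSE is not universal:

// Build a 44-byte PCM WAV header for raw audio data of the given length.
function buildWavHeader(dataLength, sampleRate, numChannels, bitsPerSample) {
    var bytesPerSample = bitsPerSample / 8;
    var blockAlign = numChannels * bytesPerSample;
    var buffer = new ArrayBuffer(44);
    var view = new DataView(buffer);

    function writeString(offset, str) {
        for (var i = 0; i < str.length; i++) {
            view.setUint8(offset + i, str.charCodeAt(i));
        }
    }

    writeString(0, 'RIFF');
    view.setUint32(4, 36 + dataLength, true);          // file size minus 8
    writeString(8, 'WAVE');
    writeString(12, 'fmt ');
    view.setUint32(16, 16, true);                      // fmt chunk size (PCM)
    view.setUint16(20, 1, true);                       // AudioFormat = 1 (PCM)
    view.setUint16(22, numChannels, true);
    view.setUint32(24, sampleRate, true);
    view.setUint32(28, sampleRate * blockAlign, true); // byte rate
    view.setUint16(32, blockAlign, true);              // bytes per sample frame
    view.setUint16(34, bitsPerSample, true);
    writeString(36, 'data');
    view.setUint32(40, dataLength, true);
    return buffer;
}

// e.g. header for one second of 16-bit mono audio at 22050 Hz:
// var header = buildWavHeader(22050 * 2, 22050, 1, 16);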
Source: https://stackoverflow.com/questions/43366627/cracks-in-webaudio-playback-during-streaming-of-raw-audio-data