Question
I am using OfflineAudioContext to do waveform analysis in the background.
Everything works fine in Chrome, Firefox and Opera, but in Safari I get very dodgy behaviour. The waveform should be composed of many samples (329), but in Safari I only get ~38.
window.AudioContext = window.AudioContext || window.webkitAudioContext;
window.OfflineAudioContext = window.OfflineAudioContext ||
                             window.webkitOfflineAudioContext;

const sharedAudioContext = new AudioContext();

const audioURL = 'https://s3-us-west-2.amazonaws.com/s.cdpn.io/1141585/song.mp3';

const audioDidLoad = ( buffer ) =>
{
  console.log("audio decoded");

  var samplesCount = 0;
  const context = new OfflineAudioContext(1, buffer.length, 44100);
  const source = context.createBufferSource();
  const processor = context.createScriptProcessor(2048, 1, 1);
  const analyser = context.createAnalyser();

  analyser.fftSize = 2048;
  analyser.smoothingTimeConstant = 0.25;

  source.buffer = buffer;
  source.connect(analyser);
  analyser.connect(processor);
  processor.connect(context.destination);

  var freqData = new Uint8Array(analyser.frequencyBinCount);
  processor.onaudioprocess = () =>
  {
    analyser.getByteFrequencyData(freqData);
    samplesCount++;
  };

  source.start(0);
  context.startRendering();

  context.oncomplete = (e) => {
    document.getElementById('result').innerHTML = 'Read ' + samplesCount + ' samples';
    source.disconnect( analyser );
    processor.disconnect( context.destination );
  };
};

var request = new XMLHttpRequest();
request.open('GET', audioURL, true);
request.responseType = 'arraybuffer';
request.onload = () => {
  var audioData = request.response;
  sharedAudioContext.decodeAudioData(
    audioData,
    audioDidLoad,
    e => { console.log("Error with decoding audio data" + e.err); }
  );
};
request.send();
See Codepen.
Answer 1:
I think here Safari has the correct behavior, not the others. The way onaudioprocess works is like this: you give a buffer size (the first parameter when you create your ScriptProcessor, here 2048 samples), and each time that buffer has been processed, the event is triggered. So you take your sample rate (which by default is 44.1 kHz, i.e. 44100 samples per second), divide it by the buffer size (the number of samples processed each time), and you get the number of times per second an audioprocess event will be triggered. See https://webaudio.github.io/web-audio-api/#OfflineAudioContext-methods
This value controls how frequently the onaudioprocess event is dispatched and how many sample-frames need to be processed each call.
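A quick sketch of that calculation, using the default 44100 Hz rate and the 2048-sample buffer size from the question:

// In a real-time AudioContext, the onaudioprocess rate follows directly
// from the sample rate and the ScriptProcessor buffer size.
const sampleRate = 44100; // samples per second (the default)
const bufferSize = 2048;  // sample-frames handed to each onaudioprocess call
const eventsPerSecond = sampleRate / bufferSize;
console.log(eventsPerSecond); // ~21.5 events per second during real-time playback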
That's true when you're actually playing the sound: you need to process the right amount of audio at the right time so that the sound plays correctly. But OfflineAudioContext processes the audio without caring about the real playback time.
It does not render to the audio hardware, but instead renders as quickly as possible, fulfilling the returned promise with the rendered result as an AudioBuffer
So with OfflineAudioContext there's no need for any timing calculation. Chrome and the others seem to trigger onaudioprocess each time a buffer is processed, but with an offline audio context that isn't really necessary.
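For reference, a minimal sketch of that render-as-fast-as-possible pattern, reusing the decoded buffer from the question (note that older Safari versions may only fire the oncomplete event rather than resolving a promise from startRendering):

// Render offline and read the result when rendering finishes;
// no real-time pacing or ScriptProcessor is involved.
const offlineCtx = new OfflineAudioContext(1, buffer.length, 44100);
const src = offlineCtx.createBufferSource();
src.buffer = buffer;
src.connect(offlineCtx.destination);
src.start(0);

offlineCtx.startRendering().then((renderedBuffer) => {
  // renderedBuffer is an AudioBuffer holding the complete rendered audio.
  console.log('Rendered ' + renderedBuffer.length + ' sample-frames');
});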
That being said, there's normally no need to use onaudioprocess with OfflineAudioContext at all, except maybe to get a sense of the performance; all the data is available from the context. Also, the count of 329 doesn't mean much: it's just the total number of samples divided by the buffer size. In your example you have a source of 673830 samples at 44100 samples per second, so your audio is about 15.28 seconds long. If you process 2048 samples at a time, you process audio about 329 times, which is the 329 you get in Chrome. There's no need to use onaudioprocess to get that number.
And since you're using an offline audio context, there's no need to process these samples in real time, or even to fire onaudioprocess for every 2048 samples.
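If the goal is just one value per 2048-sample chunk for a waveform display, here is a sketch of doing that directly on the decoded buffer; it works on time-domain samples rather than the AnalyserNode's frequency bins, so it's an alternative approach rather than the question's exact FFT pipeline:

// Walk the decoded AudioBuffer in fixed-size chunks; this yields the same
// ~329 chunks deterministically, without onaudioprocess.
const chunkSize = 2048;
const channelData = buffer.getChannelData(0);             // Float32Array of raw samples
const chunkCount = Math.ceil(buffer.length / chunkSize);  // 673830 / 2048 ≈ 329

for (let i = 0; i < chunkCount; i++) {
  const chunk = channelData.subarray(i * chunkSize, (i + 1) * chunkSize);
  // e.g. take the peak of each chunk as one point of the waveform
  let peak = 0;
  for (let j = 0; j < chunk.length; j++) {
    peak = Math.max(peak, Math.abs(chunk[j]));
  }
}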
Source: https://stackoverflow.com/questions/46621954/offlineaudiocontext-and-fft-in-safari