Web Audio API Analyser Node Not Working With Microphone Input


Question


The bug that prevented getting microphone input in Chrome Canary (http://code.google.com/p/chromium/issues/detail?id=112367) is now fixed, and that part does seem to be working: I can assign the mic input to an audio element and hear the results through the speakers.

But I'd like to connect an analyser node in order to do an FFT. The analyser node works fine if I set the audio source to a local file. The problem is that when it's connected to the mic audio stream, the analyser node just returns the base value, as if it has no audio stream at all. (It's -100 over and over, if you're curious; -100 dB is the analyser's default minDecibels, the floor that getFloatFrequencyData reports for silence.)

Anyone know what's up? Is it not implemented yet? Is this a Chrome bug? I'm running 26.0.1377.0 on Windows 7 with the getUserMedia flag enabled, and I'm serving the page through localhost via Python's SimpleHTTPServer so it can request permissions.
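
For reference, that server is just Python 2's built-in SimpleHTTPServer module, run from the directory being served (it listens on port 8000 by default):

    python -m SimpleHTTPServer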

Code:

var aCtx = new webkitAudioContext();
var analyser = aCtx.createAnalyser();
var audio = document.getElementById('audio'); // the <audio> element on the page

if (navigator.getUserMedia) {
    navigator.getUserMedia({audio: true}, function(stream) {
        // audio.src = "stupid.wav"
        audio.src = window.URL.createObjectURL(stream);
    }, onFailure); // onFailure: error callback defined elsewhere
}

$('#audio').on("loadeddata", function() {
    var source = aCtx.createMediaElementSource(audio);
    source.connect(analyser);
    analyser.connect(aCtx.destination);
    process();
});

Again, if I set audio.src to the commented-out version it works, but with the microphone it does not. process() contains:

var FFTData = new Float32Array(analyser.frequencyBinCount);
analyser.getFloatFrequencyData(FFTData);
console.log(FFTData[0]);
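
As an aside, each bin of that array covers aCtx.sampleRate / analyser.fftSize Hz, so a quick sketch for turning a bin index into a frequency (the helper name is mine) would be:

function binToFrequency(i) {
    // bin i sits at i * sampleRate / fftSize Hz
    return i * aCtx.sampleRate / analyser.fftSize;
}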

I've also tried using createMediaStreamSource and bypassing the audio element, as in example 4 of https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/webrtc-integration.html. Also unsuccessful. :(

if (navigator.getUserMedia) {
    navigator.getUserMedia({audio: true}, function(stream) {
        var microphone = aCtx.createMediaStreamSource(stream);
        microphone.connect(analyser);
        analyser.connect(aCtx.destination);
        process();
    }, onFailure);
}

I imagine it might be possible to write the MediaStream to a buffer and then use dsp.js or something to do the FFT, but I wanted to check before I go down that road.
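
For what it's worth, that route would look roughly like the sketch below (buffer size illustrative, reusing the microphone node from the snippet above): a ScriptProcessorNode hands you raw time-domain samples that dsp.js could then transform. Older builds name the factory createJavaScriptNode instead of createScriptProcessor:

var processor = aCtx.createScriptProcessor(2048, 1, 1); // bufferSize, input channels, output channels
microphone.connect(processor);
processor.connect(aCtx.destination); // the node needs a connected output for onaudioprocess to fire
processor.onaudioprocess = function(e) {
    // raw samples for this block as a Float32Array; copy them if kept past the callback
    var samples = e.inputBuffer.getChannelData(0);
    // ...hand samples to dsp.js for the FFT here...
};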


Answer 1:


It was a variable scoping issue. In the second example, I was defining the microphone locally and then trying to access its stream with the analyser in another function. I just made all the Web Audio API nodes globals for peace of mind. Also note that it takes a few seconds for the analyser node to start reporting values other than -100. Working code for those interested:

// Globals
var aCtx;
var analyser;
var microphone;

if (navigator.getUserMedia) {
    navigator.getUserMedia({audio: true}, function(stream) {
        aCtx = new webkitAudioContext();
        analyser = aCtx.createAnalyser();
        microphone = aCtx.createMediaStreamSource(stream);
        microphone.connect(analyser);
        // analyser.connect(aCtx.destination);
        process();
    });
}

function process() {
    setInterval(function() {
        var FFTData = new Float32Array(analyser.frequencyBinCount);
        analyser.getFloatFrequencyData(FFTData);
        console.log(FFTData[0]);
    }, 10);
}

If you would like to hear the live audio, you can connect the analyser to the destination (the speakers), as commented out above. Watch out for some lovely feedback though!
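
If you'd rather monitor at a saner level, a minimal sketch is to route the analyser through a GainNode kept quiet:

var monitorGain = aCtx.createGain(); // older builds: aCtx.createGainNode()
monitorGain.gain.value = 0.1;        // low volume to limit mic-to-speaker feedback
analyser.connect(monitorGain);
monitorGain.connect(aCtx.destination);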



Source: https://stackoverflow.com/questions/14231265/web-audio-api-analyser-node-not-working-with-microphone-input
