Using web audio api for analyzing input from microphone (convert MediaStreamSource to BufferSource)

Submitted by 余生颓废 on 2020-01-06 18:08:13

Question


I am trying to get the beats per minute (BPM) using the Web Audio API, like it is done in the following links (http://joesul.li/van/beat-detection-using-web-audio/ or https://github.com/JMPerez/beats-audio-api/blob/gh-pages/script.js), but from an audio stream (microphone). Unfortunately, I don't get it running. Does somebody know how I can convert the microphone MediaStreamSource to a BufferSource and then continue like on the first linked website? Here's the code I've got to this point:

navigator.mediaDevices.getUserMedia({ audio: true, video: false })
.then(function(stream) {
    /* use the stream */

    var OfflineContext = window.OfflineAudioContext || window.webkitOfflineAudioContext;
    var source = OfflineContext.createMediaStreamSource(stream);
    source.connect(OfflineContext);
    var offlineContext = new OfflineContext(2, 30 * 44100, 44100);

    offlineContext.decodeAudioData(stream, function(buffer) {
      // Create buffer source
      var source = offlineContext.createBufferSource();
      source.buffer = buffer;
      // Beats, or kicks, generally occur around the 100 to 150 hz range.
      // Below this is often the bassline.  So let's focus just on that.
      // First a lowpass to remove most of the song.
      var lowpass = offlineContext.createBiquadFilter();
      lowpass.type = "lowpass";
      lowpass.frequency.value = 150;
      lowpass.Q.value = 1;
      // Run the output of the source through the low pass.
      source.connect(lowpass);
      // Now a highpass to remove the bassline.
      var highpass = offlineContext.createBiquadFilter();
      highpass.type = "highpass";
      highpass.frequency.value = 100;
      highpass.Q.value = 1;
      // Run the output of the lowpass through the highpass.
      lowpass.connect(highpass);
      // Run the output of the highpass through our offline context.
      highpass.connect(offlineContext.destination);
      // Start the source, and render the output into the offline context.
      source.start(0);
      offlineContext.startRendering();
    });
})
.catch(function(err) {
    /* handle the error */
    alert("Error");
});

Thank you!


Answer 1:


Those articles are great. There are a few things wrong with your current approach:

  1. You don't need to decode the stream. Instead, connect it to a web audio context with a MediaStreamAudioSourceNode, then use a ScriptProcessor (deprecated) or an AudioWorklet (not implemented everywhere yet) to grab the samples and run detection on them. decodeAudioData takes an encoded buffer, i.e. the contents of an MP3 file, not a stream object.
  2. Keep in mind this is a STREAM, not a single file; you can't really just hand an entire song audio file to the beat detector. Well, you CAN, but if you're streaming you'd have to wait until the whole file comes in, which would be bad. You'll have to work in chunks, and the BPM may change during the song. So collect a chunk at a time, probably a second or more of audio, to pass to the beat detection code.
  3. Although it may be a good idea to lowpass-filter the data, it's probably not worthwhile to high-pass filter it. Remember that filters aren't brick-wall filters: they don't slice out everything above or below their cutoff frequency, they just attenuate it.
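
A minimal sketch of points 1 and 2 above, assuming a `detectBPM` function you'd supply yourself (e.g. the peak-counting routine from the linked articles, run per chunk). The `makeChunker` helper that accumulates processor blocks into ~1-second chunks is an illustrative name, not part of any API; the browser wiring uses only standard Web Audio calls (ScriptProcessorNode, since it is the widely available option the answer mentions):

```javascript
// Pure helper: accumulate fixed-size audio blocks into chunks of
// `chunkLength` samples, calling `onChunk` each time one fills up.
function makeChunker(chunkLength, onChunk) {
  let buffered = new Float32Array(chunkLength);
  let offset = 0;
  return function push(block) {
    let read = 0;
    while (read < block.length) {
      const n = Math.min(block.length - read, chunkLength - offset);
      buffered.set(block.subarray(read, read + n), offset);
      offset += n;
      read += n;
      if (offset === chunkLength) {
        onChunk(buffered);               // hand off one full chunk
        buffered = new Float32Array(chunkLength);
        offset = 0;
      }
    }
  };
}

// Browser-only wiring (guarded so the helper above also runs outside one).
if (typeof navigator !== "undefined" && navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then(function (stream) {
      const ctx = new (window.AudioContext || window.webkitAudioContext)();
      // Point 1: a MediaStreamAudioSourceNode, not decodeAudioData.
      const source = ctx.createMediaStreamSource(stream);

      const lowpass = ctx.createBiquadFilter();
      lowpass.type = "lowpass";
      lowpass.frequency.value = 150;

      // Deprecated but widely supported; AudioWorklet is the replacement.
      const processor = ctx.createScriptProcessor(4096, 1, 1);

      // Point 2: collect ~1 second of filtered samples at a time.
      const chunker = makeChunker(ctx.sampleRate, function (chunk) {
        // detectBPM(chunk, ctx.sampleRate);  // hypothetical detector
      });
      processor.onaudioprocess = function (e) {
        chunker(e.inputBuffer.getChannelData(0));
      };

      source.connect(lowpass);
      lowpass.connect(processor);
      processor.connect(ctx.destination);
    })
    .catch(function (err) { console.error(err); });
}
```

The chunker keeps the detection code independent of the processor's block size, so you can change the ScriptProcessor buffer size (or move to an AudioWorklet later) without touching the detector.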


Source: https://stackoverflow.com/questions/51879587/using-web-audio-api-for-analyzing-input-from-microphone-convert-mediastreamsour
