Overlay two audio buffers into one buffer source

Submitted by ぃ、小莉子 on 2019-12-21 04:19:16

Question


I am trying to merge two audio buffers into one. I have been able to create the two buffers from the audio files, and to load and play them. Now I need to merge the two buffers into a single buffer. How can they be merged?

  context = new webkitAudioContext();
  bufferLoader = new BufferLoader(
    context,
    [
      'audio1.mp3',
      'audio2.mp3',
    ],
    finishedLoading
    );

  bufferLoader.load();

function finishedLoading(bufferList) {
  // Create the two buffer sources and play them both together.
  var source1 = context.createBufferSource();
  var source2 = context.createBufferSource();
  source1.buffer = bufferList[0];
  source2.buffer = bufferList[1];

  source1.connect(context.destination);
  source2.connect(context.destination);
  source1.start(0);
  source2.start(0);  
}

Now these sources are loaded separately and are played at the same time; but how do I merge these two sources into one buffer source? I do NOT want to append them, I want to overlay/merge them.

Explanations and/or snippets would be great.


Answer 1:


In audio, to mix two audio streams (here, buffers) into one, you can simply add each sample value together. Practically, here is how we can do this, building on your snippet:

/* `buffers` is a javascript array containing all the buffers you want
 * to mix. */
function mix(buffers) {
  /* Get the maximum duration and maximum number of channels across all buffers, so we can
   * allocate an AudioBuffer of the right size. */
  var maxChannels = 0;
  var maxDuration = 0;
  for (var i = 0; i < buffers.length; i++) {
    if (buffers[i].numberOfChannels > maxChannels) {
      maxChannels = buffers[i].numberOfChannels;
    }
    if (buffers[i].duration > maxDuration) {
      maxDuration = buffers[i].duration;
    }
  }
  var out = context.createBuffer(maxChannels,
                                 context.sampleRate * maxDuration,
                                 context.sampleRate);

  for (var j = 0; j < buffers.length; j++) {
    for (var srcChannel = 0; srcChannel < buffers[j].numberOfChannels; srcChannel++) {
      /* get the channel we will mix into */
      var out = mixed.getChanneData(srcChannel);
      /* Get the channel we want to mix in */
      var in = buffers[i].getChanneData(srcChannel);
      for (var i = 0; i < in.length; i++) {
        out[i] += in[i];
      }
    }
  }
  return out;
}

Then, simply assign the return value of this function to the buffer of a new AudioBufferSourceNode, and play it as usual.
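For example, here is a minimal sketch of that step; it assumes the `context` and `bufferList` variables from the question and a working `mix(buffers)` function (such as the corrected one in the second answer):

// Hook the mixed buffer up to a new source node and play it.
var mixedSource = context.createBufferSource();
mixedSource.buffer = mix(bufferList); // the merged AudioBuffer
mixedSource.connect(context.destination);
mixedSource.start(0);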

A couple of notes about this snippet, which keeps things simple:

  • If you have a mono buffer and a stereo buffer, you will only hear the mono buffer in the left channel of the mixed buffer. If you want it copied to both the left and right channels, you will have to do what is called up-mixing;
  • If you want one buffer to be quieter or louder than another (as if you moved a volume fader on a mixing console), simply multiply its input sample values by a number less than 1.0 to make it quieter, or greater than 1.0 to make it louder (see the sketch after this list).
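As a quick illustration of that last point, here is a hedged variant of the inner mixing loop; `gain`, `input`, and `output` are hypothetical names (a per-buffer factor you pick yourself, and the Float32Array channels being mixed), not variables defined in the snippets above:

var gain = 0.5; // 1.0 = unchanged, < 1.0 quieter, > 1.0 louder
for (var i = 0; i < input.length; i++) {
  output[i] += input[i] * gain; // scale each sample before summing it in
}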

Then again, the Web Audio API does all that for you, so I wonder why you need to do it yourself, but at least now you know how :-).
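For reference, here is a minimal sketch of letting the API do that mixing instead; it assumes the `context` and `bufferList` from the question and an AudioContext that supports createGain() (the destination node sums everything connected to it, and each GainNode controls one source's volume):

var source1 = context.createBufferSource();
var source2 = context.createBufferSource();
source1.buffer = bufferList[0];
source2.buffer = bufferList[1];

// One GainNode per source gives independent volume control.
var gain1 = context.createGain();
var gain2 = context.createGain();
gain1.gain.value = 0.8;
gain2.gain.value = 1.0;

source1.connect(gain1);
source2.connect(gain2);
gain1.connect(context.destination); // the destination mixes its inputs
gain2.connect(context.destination);
source1.start(0);
source2.start(0);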




Answer 2:


@Padenot is right, but there are a few typos in his code, so it won't work if you copy/paste it. Below is the same code with the corrections applied, so you can use it directly. Thanks for your help @Padenot ;)

function mix(buffers) {

    var nbBuffer = buffers.length; // Number of buffers contained in the `buffers` array
    var maxChannels = 0; // Maximum number of channels across all buffers
    var maxDuration = 0; // Maximum duration across all buffers

    for (var i = 0; i < nbBuffer; i++) {
        if (buffers[i].numberOfChannels > maxChannels) {
            maxChannels = buffers[i].numberOfChannels;
        }
        if (buffers[i].duration > maxDuration) {
            maxDuration = buffers[i].duration;
        }
    }

    // Create the output buffer with the right number of channels and size/duration
    var mixed = context.createBuffer(maxChannels, context.sampleRate * maxDuration, context.sampleRate);        

    for (var j=0; j<nbBuffer; j++){

        // For each channel contained in a buffer...
        for (var srcChannel = 0; srcChannel < buffers[j].numberOfChannels; srcChannel++) {

            var _out = mixed.getChannelData(srcChannel);// Get the channel we will mix into
            var _in = buffers[j].getChannelData(srcChannel);// Get the channel we want to mix in

            for (var i = 0; i < _in.length; i++) {
                _out[i] += _in[i];// Calculate the new value for each index of the buffer array
            }
        }
    }

    return mixed;
}


Source: https://stackoverflow.com/questions/22135056/overlay-two-audio-buffers-into-one-buffer-source
