PhoneGap mixing audio files


Question:


I'm building a karaoke app using PhoneGap for iOS.

I have audio files in the www/assets folder that I am able to play using the media.play() function.

This allows the user to listen to the backing track. While the media is playing, another Media instance is recording.
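
For context, here is a minimal sketch of that play-while-recording setup using the cordova-plugin-media API; the file names and callbacks are illustrative assumptions, not from the original:

// Play the backing track bundled in www/assets
var backingTrack = new Media('assets/backingTrack.wav',
    function() { console.log('backing track finished'); },
    function(err) { console.log('playback error: ' + err.code); });

// Record the voice to a separate file while the track plays
var voiceRecording = new Media('voice.wav',
    function() { console.log('recording saved'); },
    function(err) { console.log('recording error: ' + err.code); });

backingTrack.play();
voiceRecording.startRecord();

// Later, e.g. when the backing track ends:
// voiceRecording.stopRecord();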

Once the recording has finished, I need to lay the voice recording over the backing track, and I have no idea how I might go about doing this.

One approach I thought might work is to use the Web Audio API. I have the following code, which I took from HTML5 Rocks, that loads the two files into an AudioContext and allows me to play both simultaneously. However, what I would like to do is write the two buffers into a single .wav file. Is there any way I can combine source1 and source2 into a single new file?

var context;
var bufferLoader;

function init() {
    // Fix up prefixing
    window.AudioContext = window.AudioContext || window.webkitAudioContext;
    context = new AudioContext();

    bufferLoader = new BufferLoader(
        context,
        [
            'backingTrack.wav',
            'voice.wav',
        ],
        finishedLoading
    );

    bufferLoader.load();
}

function finishedLoading(bufferList) {
    // Create two sources and play them both together.
    var source1 = context.createBufferSource();
    var source2 = context.createBufferSource();
    source1.buffer = bufferList[0];
    source2.buffer = bufferList[1];

    source1.connect(context.destination);
    source2.connect(context.destination);
    source1.start(0);
    source2.start(0);
}


function BufferLoader(context, urlList, callback) {
    this.context = context;
    this.urlList = urlList;
    this.onload = callback;
    this.bufferList = [];
    this.loadCount = 0;
}

BufferLoader.prototype.loadBuffer = function(url, index) {
    // Load buffer asynchronously
    var request = new XMLHttpRequest();
    request.open("GET", url, true);
    request.responseType = "arraybuffer";

    var loader = this;

    request.onload = function() {
        // Asynchronously decode the audio file data in request.response
        loader.context.decodeAudioData(
            request.response,
            function(buffer) {
                if (!buffer) {
                    alert('error decoding file data: ' + url);
                    return;
                }
                loader.bufferList[index] = buffer;
                if (++loader.loadCount == loader.urlList.length)
                    loader.onload(loader.bufferList);
            },
            function(error) {
                console.error('decodeAudioData error', error);
            }
        );
    }

    request.onerror = function() {
        alert('BufferLoader: XHR error');
    }

    request.send();
}

BufferLoader.prototype.load = function() {
    for (var i = 0; i < this.urlList.length; ++i)
        this.loadBuffer(this.urlList[i], i);
}

There might be something in this solution: How do I convert an array of audio data into a wav file? As far as I can make out, they are interleaving the two buffers and encoding them as a .wav, but I can't figure out where they actually write the result out to a file (saving the new wav file). Any ideas?
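
For reference, here is a minimal sketch of encoding an AudioBuffer to a WAV Blob in the browser; the interleaving and header layout below are my own assumptions about the general technique, not code from that linked answer:

// Encode an AudioBuffer as a 16-bit PCM stereo WAV Blob.
function audioBufferToWavBlob(buffer) {
    var numChannels = 2;
    var left = buffer.getChannelData(0);
    var right = buffer.numberOfChannels > 1 ? buffer.getChannelData(1) : left;
    var dataSize = buffer.length * numChannels * 2; // 2 bytes per sample
    var view = new DataView(new ArrayBuffer(44 + dataSize));

    function writeString(offset, str) {
        for (var i = 0; i < str.length; i++) {
            view.setUint8(offset + i, str.charCodeAt(i));
        }
    }

    // Standard 44-byte RIFF/WAVE header
    writeString(0, 'RIFF');
    view.setUint32(4, 36 + dataSize, true);
    writeString(8, 'WAVE');
    writeString(12, 'fmt ');
    view.setUint32(16, 16, true);                                   // fmt chunk size
    view.setUint16(20, 1, true);                                    // PCM format
    view.setUint16(22, numChannels, true);
    view.setUint32(24, buffer.sampleRate, true);
    view.setUint32(28, buffer.sampleRate * numChannels * 2, true);  // byte rate
    view.setUint16(32, numChannels * 2, true);                      // block align
    view.setUint16(34, 16, true);                                   // bits per sample
    writeString(36, 'data');
    view.setUint32(40, dataSize, true);

    // Interleave channels, converting floats in [-1, 1] to signed 16-bit
    var offset = 44;
    for (var i = 0; i < buffer.length; i++) {
        var l = Math.max(-1, Math.min(1, left[i]));
        var r = Math.max(-1, Math.min(1, right[i]));
        view.setInt16(offset, l < 0 ? l * 0x8000 : l * 0x7FFF, true);
        view.setInt16(offset + 2, r < 0 ? r * 0x8000 : r * 0x7FFF, true);
        offset += 4;
    }
    return new Blob([view], { type: 'audio/wav' });
}

The resulting Blob could then be uploaded with an XMLHttpRequest or handed to a file-writer plugin.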

The answer below doesn't really help, as I'm using the Web Audio API (JavaScript), not iOS.


Answer 1:


The solution was to use an OfflineAudioContext.

The steps were:

1. Load the two files as buffers using the BufferLoader.
2. Create an OfflineAudioContext.
3. Connect the two buffers to the OfflineAudioContext.
4. Start the two buffers.
5. Use the offline startRendering function.
6. Set the offline.oncomplete handler to get a handle on the renderedBuffer.

Here's the code:

// Render offline at 44.1 kHz into a stereo buffer the length of the vocal take
offline = new webkitOfflineAudioContext(2, voice.buffer.length, 44100);
vocalSource = offline.createBufferSource();
vocalSource.buffer = bufferList[0];
vocalSource.connect(offline.destination);

backing = offline.createBufferSource();
backing.buffer = bufferList[1];
backing.connect(offline.destination);

vocalSource.start(0);
backing.start(0);

// oncomplete fires once rendering finishes and delivers the mixed-down buffer
offline.oncomplete = function(ev){
    alert(bufferList);
    playBackMix(ev);
    console.log(ev.renderedBuffer);
    sendWaveToPost(ev);
}
offline.startRendering();
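
The playBackMix and sendWaveToPost helpers aren't shown in the answer. Assuming playBackMix simply plays the rendered mix through the regular (online) AudioContext from the question's init(), a plausible version might look like this:

function playBackMix(ev) {
    // Play the mixed-down buffer through the normal, real-time context
    var source = context.createBufferSource();
    source.buffer = ev.renderedBuffer;
    source.connect(context.destination);
    source.start(0);
}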



Answer 2:


I would suggest mixing the PCM samples directly. If you initialize an output buffer that spans the time frames of both tracks, the mix is additive:

mix(a, b) = a + b - (a * b) / 65535

That form of the formula assumes unsigned 16-bit integers; the example below adapts it to signed 16-bit samples:

SInt16 *bufferA, *bufferB;
NSInteger bufferLength;
SInt16 *outputBuffer;

for ( NSInteger i=0; i<bufferLength; i++ ) {
  if ( bufferA[i] < 0 && bufferB[i] < 0 ) {
    // If both samples are negative, mixed signal must have an amplitude between 
    // the lesser of A and B, and the minimum permissible negative amplitude
    outputBuffer[i] = (bufferA[i] + bufferB[i]) - ((bufferA[i] * bufferB[i])/INT16_MIN);
  } else if ( bufferA[i] > 0 && bufferB[i] > 0 ) {
    // If both samples are positive, mixed signal must have an amplitude between the greater of
    // A and B, and the maximum permissible positive amplitude
    outputBuffer[i] = (bufferA[i] + bufferB[i]) - ((bufferA[i] * bufferB[i])/INT16_MAX);
  } else {
    // If samples are on opposite sides of the 0-crossing, mixed signal should reflect 
    // that samples cancel each other out somewhat
    outputBuffer[i] = bufferA[i] + bufferB[i];
  }
}

This can be a very effective way to handle signed 16-bit audio. Go here for the source.
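
Since the rest of this thread is JavaScript, here is a port of the same mixing loop to Int16Array buffers; this translation is my own, not from the original answer, and assumes both buffers have equal length:

// Mix two signed 16-bit PCM buffers of equal length into a new buffer.
function mixPcm16(bufferA, bufferB) {
    var out = new Int16Array(bufferA.length);
    for (var i = 0; i < bufferA.length; i++) {
        var a = bufferA[i], b = bufferB[i];
        if (a < 0 && b < 0) {
            // Both negative: keep the result between the lesser of A and B
            // and the minimum permissible negative amplitude (-32768)
            out[i] = (a + b) - ((a * b) / -32768);
        } else if (a > 0 && b > 0) {
            // Both positive: keep the result between the greater of A and B
            // and the maximum permissible positive amplitude (32767)
            out[i] = (a + b) - ((a * b) / 32767);
        } else {
            // Opposite signs: the samples partially cancel each other out
            out[i] = a + b;
        }
    }
    return out;
}

The mixed Int16Array could then be wrapped in a WAV header, as sketched earlier in the question.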



Source: https://stackoverflow.com/questions/25040735/phonegap-mixing-audio-files
