WebAudio API: Is It Possible to Export an AudioBuffer with StereoPanner Node Data?

Posted by 元气小坏坏 on 2021-01-29 11:50:40

Question


I'm looking to export an AudioBuffer to a wav file with a StereoPanner node applied, i.e. I pan a sound all the way to the left and export it panned to the left. I'm wondering if it is possible to export the StereoPanner data associated with an AudioContext?

I have built an AudioSource from an AudioContext, and I have attached an StereoPanner to my AudioSource. I'm able to pan my sound in-browser without issue, and I'm also able to export my AudioBuffer to a file (wav). Unfortunately, when I export my AudioBuffer, none of the StereoPanner data seems to come with it. Is it possible to export StereoPanner data? Or any Audio Node data?

Here is my sample code. I have glossed over some of the wiring details. The encodeWAV function is provided by the audiobuffer-to-wav library.

const audioContext = new AudioContext();

// file input code

const arrayBuffer = await new Promise(resolve => {
  const reader = new FileReader();

  reader.onload = res => {
    resolve(res.target.result);
  };

  reader.readAsArrayBuffer(e.target.files[0]);
});

audioContext.decodeAudioData(arrayBuffer, newAudioBuffer => {
  const newAudioSource = audioContext.createBufferSource();
  const newStereoPanNode = audioContext.createStereoPanner();

  newAudioSource.buffer = newAudioBuffer;

  newAudioSource.connect(newStereoPanNode);
  newAudioSource.connect(audioContext.destination);
  newStereoPanNode.connect(audioContext.destination);

  const wavBuffer = encodeWAV(newAudioBuffer);
  const blob = new Blob([wavBuffer], { type: 'audio/wav' });

  // download file code
});
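For context on what the export step does: encodeWAV serializes the raw channel samples into a RIFF/WAVE container. Below is a minimal sketch of the idea for 16-bit PCM, not the actual audiobuffer-to-wav implementation (which handles more formats); the function name and the plain-array-of-channels input are illustrative.

```javascript
// Minimal 16-bit PCM WAV encoder sketch (illustrative, not audiobuffer-to-wav's code).
// `channels` is an array of Float32Array, one per channel, all the same length.
function encodeWavSketch(channels, sampleRate) {
  const numChannels = channels.length;
  const numFrames = channels[0].length;
  const bytesPerSample = 2; // 16-bit PCM
  const dataSize = numFrames * numChannels * bytesPerSample;
  const buffer = new ArrayBuffer(44 + dataSize);
  const view = new DataView(buffer);

  const writeString = (offset, s) => {
    for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  };

  // RIFF header
  writeString(0, 'RIFF');
  view.setUint32(4, 36 + dataSize, true);  // remaining chunk size
  writeString(8, 'WAVE');
  // fmt sub-chunk
  writeString(12, 'fmt ');
  view.setUint32(16, 16, true);            // fmt sub-chunk size
  view.setUint16(20, 1, true);             // audio format: PCM
  view.setUint16(22, numChannels, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * numChannels * bytesPerSample, true); // byte rate
  view.setUint16(32, numChannels * bytesPerSample, true);              // block align
  view.setUint16(34, 16, true);            // bits per sample
  // data sub-chunk: interleave channels, clamp floats to [-1, 1], scale to 16-bit
  writeString(36, 'data');
  view.setUint32(40, dataSize, true);
  let offset = 44;
  for (let frame = 0; frame < numFrames; frame++) {
    for (let ch = 0; ch < numChannels; ch++) {
      const sample = Math.max(-1, Math.min(1, channels[ch][frame]));
      view.setInt16(offset, sample < 0 ? sample * 0x8000 : sample * 0x7fff, true);
      offset += bytesPerSample;
    }
  }
  return buffer;
}
```

The point is that the encoder only ever sees the sample data it is handed, which is why exporting the original AudioBuffer cannot include the panning: the panner processes audio in the graph and never modifies the buffer itself.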

Answer 1:


Thanks for updating the question with code.

Given the code example, the easiest way to get what you want is to replace the AudioContext with an OfflineAudioContext. Something like this:

const audioContext =
  new OfflineAudioContext({sampleRate: sampleRate,
                           numberOfChannels: 2,
                           length: lengthInFrames});

// Same stuff as above.


audioContext.decodeAudioData(arrayBuffer, newAudioBuffer => {
  const newAudioSource = audioContext.createBufferSource();
  const newStereoPanNode = audioContext.createStereoPanner();

  newAudioSource.buffer = newAudioBuffer;

  // Route the source through the panner only; connecting the source
  // directly to the destination as well would mix in the unpanned signal.
  newAudioSource.connect(newStereoPanNode);
  newStereoPanNode.connect(audioContext.destination);

  newAudioSource.start();

  audioContext.startRendering()
    .then(renderedBuffer => {
       const wavBuffer = encodeWAV(renderedBuffer);
       const blob = new Blob([wavBuffer], { type: 'audio/wav' });
       // Download file
     });
});

If you must use an AudioContext, the solution is a little more complicated. You'll have to put a ScriptProcessorNode or AudioWorkletNode after the stereo panner to capture the output and save it somewhere. When you're done playing the source, you can encode the saved data as above.
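A sketch of that capture approach, assuming a ScriptProcessorNode (deprecated but widely supported; an AudioWorkletNode would be the modern equivalent). The browser wiring is shown as comments since it only runs in a page with a live AudioContext; the testable core is merging the captured Float32 chunks back into one per-channel buffer. All names here are illustrative, not from the original answer.

```javascript
// Merge the Float32Array chunks captured from onaudioprocess into one buffer.
function mergeChunks(chunks) {
  const total = chunks.reduce((n, c) => n + c.length, 0);
  const out = new Float32Array(total);
  let offset = 0;
  for (const c of chunks) {
    out.set(c, offset);
    offset += c.length;
  }
  return out;
}

// Browser wiring (illustrative):
// const capture = audioContext.createScriptProcessor(4096, 2, 2);
// const left = [], right = [];
// capture.onaudioprocess = e => {
//   // Copy, because the engine reuses the underlying buffers between callbacks.
//   left.push(new Float32Array(e.inputBuffer.getChannelData(0)));
//   right.push(new Float32Array(e.inputBuffer.getChannelData(1)));
//   // Pass the audio through unchanged so playback stays audible.
//   e.outputBuffer.getChannelData(0).set(e.inputBuffer.getChannelData(0));
//   e.outputBuffer.getChannelData(1).set(e.inputBuffer.getChannelData(1));
// };
// newStereoPanNode.connect(capture);
// capture.connect(audioContext.destination);
// // When playback ends, encode [mergeChunks(left), mergeChunks(right)] as WAV.
```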




Answer 2:


I was able to answer my own question by poring over the MDN Web Audio API documentation. What I was looking for was the OfflineAudioContext with its startRendering() method.

Here is a working example:

const audioContext = new AudioContext();
const file = e.target.files[0]; // file input code
const panValue = -1; // pan input as a number between -1 (full left) and 1 (full right)

const arrayBuffer = await new Promise(resolve => {
  const reader = new FileReader();

  reader.onload = res => {
    resolve(res.target.result);
  };

  reader.readAsArrayBuffer(file);
});

audioContext.decodeAudioData(arrayBuffer, newAudioBuffer => {
  const offlineContext = new OfflineAudioContext(
    newAudioBuffer.numberOfChannels,
    newAudioBuffer.length,
    newAudioBuffer.sampleRate
  );
  const offlineSource = offlineContext.createBufferSource();
  const offlineStereoPanner = offlineContext.createStereoPanner();

  offlineSource.buffer = newAudioBuffer;
  offlineStereoPanner.pan.setValueAtTime(panValue, 0);

  offlineSource.connect(offlineStereoPanner);
  offlineStereoPanner.connect(offlineContext.destination);

  offlineSource.start();

  offlineContext.startRendering().then(renderedBuffer => {
    const wavBuffer = encodeWAV(renderedBuffer);
    const blob = new Blob([wavBuffer], { type: 'audio/wav' });

    downloadFile(blob);
  });
});
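For reference, the Web Audio spec defines StereoPannerNode's panning as equal-power. For a mono input the per-channel gains work out as below; stereo inputs follow a slightly different rule, so treat this as a sketch of the mono case only (the function name is illustrative):

```javascript
// Equal-power pan gains for a mono source, per the Web Audio spec:
// map pan from [-1, 1] to x in [0, 1], then take cos/sin of x * pi/2.
function stereoPanGains(pan) {
  const x = (pan + 1) / 2;
  return {
    left: Math.cos(x * Math.PI / 2),
    right: Math.sin(x * Math.PI / 2),
  };
}
```

With panValue = -1 as in the example above, the left gain is 1 and the right gain is 0, which is exactly what the rendered WAV's channel data should show.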


Source: https://stackoverflow.com/questions/64161691/webaudio-api-is-it-possible-to-export-an-audiobuffer-with-stereopanner-node-dat
