Generating a static waveform with webaudio


Question


I'm trying to generate a static waveform, like in audio editing apps, with Web Audio and canvas. Right now I'm loading an mp3, creating a buffer, and iterating over the data returned by getChannelData.

The problem is.. I don't really understand what's being returned.

  1. What is being returned by getChannelData - is it appropriate for a waveform?
  2. How do I adjust (the sample size?) to get one peak == one second?
  3. Why are ~50% of the values negative?

    ctx.decodeAudioData(req.response, function(buffer) {
        buf = buffer;

        src = ctx.createBufferSource();
        src.buffer = buf;

        // create an analyser node (its FFT data isn't actually used below)
        fft = ctx.createAnalyser();
        var data = new Uint8Array(fft.frequencyBinCount);
        fft.getByteFrequencyData(data);

        // draw one vertical line per 1000th sample
        bufferL = buf.getChannelData(0);
        for (var i = 0; i * 1000 < buf.length; i++) {
            n = bufferL[i * 1000];
            gfx.beginPath();
            gfx.moveTo(i + 0.5, 300);
            gfx.lineTo(i + 0.5, 300 + (-n * 100));
            gfx.stroke();
        }
    });

What I'm generating: [screenshot in the original post]

What I'd like to generate: [screenshot in the original post]

Thanks


Answer 1:


I wrote a sample to do precisely this - https://github.com/cwilso/Audio-Buffer-Draw. It's a pretty simplistic demo - you'll have to do the zooming yourself, but the idea's there.

  1. Yes: getChannelData returns the raw audio samples for that channel, so it's appropriate for a waveform.
  2. That depends on how frequent the peaks in your sample are, which isn't necessarily consistent. The drawing sample I wrote does zoom out (that's the "step" part of the method), but you'll likely want to tune it for your scenario.
  3. Half the values are negative because sound samples range between -1 and +1. A sound wave is alternating positive and negative pressure; that's why "silence" is a flat line in the middle, not at the bottom.
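
To make point 2 concrete, here is a minimal sketch (mine, not part of the linked demo) that buckets the channel data by buffer.sampleRate so each drawn bar covers exactly one second of audio. The function name is made up, and gfx is assumed to be a 2D canvas context:

// Sketch (assumptions: `gfx` is a CanvasRenderingContext2D, `buffer` is a
// decoded AudioBuffer). Draws one vertical bar per second of audio; each bar
// shows the peak |sample| within that one-second window, centred on the midline.
function drawOneBarPerSecond(gfx, buffer, height) {
    var data = buffer.getChannelData(0);
    var perSecond = buffer.sampleRate;      // samples in one second, e.g. 44100
    var seconds = Math.floor(buffer.duration);
    var amp = height / 2;
    for (var s = 0; s < seconds; s++) {
        var peak = 0;
        for (var j = s * perSecond; j < (s + 1) * perSecond && j < data.length; j++) {
            var v = Math.abs(data[j]);      // fold negative samples up (see point 3)
            if (v > peak) peak = v;
        }
        gfx.fillRect(s, amp - peak * amp, 1, Math.max(1, 2 * peak * amp));
    }
}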

Code from the demo:

var audioContext = new AudioContext();

// Draw the buffer's first channel as a min/max waveform: each pixel column
// covers `step` samples and shows the range between the lowest and highest
// sample in that slice.
function drawBuffer( width, height, context, buffer ) {
    var data = buffer.getChannelData( 0 );
    var step = Math.ceil( data.length / width );
    var amp = height / 2;
    for (var i = 0; i < width; i++) {
        var min = 1.0;
        var max = -1.0;
        for (var j = 0; j < step; j++) {
            var datum = data[(i * step) + j];
            if (datum < min)
                min = datum;
            if (datum > max)
                max = datum;
        }
        // One 1px-wide rectangle spanning from min to max, scaled to the canvas.
        context.fillRect(i, (1 + min) * amp, 1, Math.max(1, (max - min) * amp));
    }
}

function initAudio() {
    // Fetch the audio file as an ArrayBuffer, decode it, then draw it.
    var audioRequest = new XMLHttpRequest();
    audioRequest.open("GET", "sounds/fightclub.ogg", true);
    audioRequest.responseType = "arraybuffer";
    audioRequest.onload = function() {
        audioContext.decodeAudioData( audioRequest.response,
            function(buffer) {
                var canvas = document.getElementById("view1");
                drawBuffer( canvas.width, canvas.height, canvas.getContext('2d'), buffer );
            } );
    }
    audioRequest.send();
}

window.addEventListener('load', initAudio );
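
For reference, the same load-decode-draw flow can be written with fetch() and the promise-based form of decodeAudioData that current browsers support. This is a sketch under those assumptions, not part of the original demo; it reuses the drawBuffer function above:

// Same flow with fetch() and promise-based decodeAudioData (a sketch for
// current browsers). Reuses drawBuffer and audioContext from above.
async function initAudioModern() {
    var response = await fetch("sounds/fightclub.ogg");
    var arrayBuffer = await response.arrayBuffer();
    var buffer = await audioContext.decodeAudioData(arrayBuffer);
    var canvas = document.getElementById("view1");
    drawBuffer(canvas.width, canvas.height, canvas.getContext('2d'), buffer);
}

Calling decodeAudioData with a single argument returns a promise in current browsers; the callback form used above still works as well.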


Source: https://stackoverflow.com/questions/25836447/generating-a-static-waveform-with-webaudio
