Question
I'm trying to generate a static waveform, like in audio editing apps, with Web Audio and canvas. Right now I'm loading an mp3, creating a buffer, and iterating over the data returned by getChannelData.
The problem is, I don't really understand what's being returned.
- What is being returned by getChannelData - is it appropriate for a waveform?
- How to adjust (sample size?) to get one peak == one second?
- Why are ~50% of the values negative?
ctx.decodeAudioData(req.response, function(buffer) {
    buf = buffer;
    src = ctx.createBufferSource();
    src.buffer = buf;
    // create fft
    fft = ctx.createAnalyser();
    var data = new Uint8Array(samples);
    fft.getByteFrequencyData(data);
    bufferL = buf.getChannelData(0);
    for (var i = 0; i < buf.length; i++) {
        n = bufferL[i * 1000];
        gfx.beginPath();
        gfx.moveTo(i + 0.5, 300);
        gfx.lineTo(i + 0.5, 300 + (-n * 100));
        gfx.stroke();
    }
});
What I'm generating: [screenshot]
What I'd like to generate: [screenshot]
Thanks
Answer 1:
I wrote a sample that does precisely this: https://github.com/cwilso/Audio-Buffer-Draw. It's a pretty simplistic demo; you'll have to do the zooming yourself, but the idea's there.
1) Yes, getChannelData returns the raw audio samples for that channel, which is exactly what you want for a waveform.
2) That depends on how frequent the peaks in your sample are, and that's not necessarily consistent. The demo below does zoom out (that's what the "step" variable does), but you'll likely want to tune it for your scenario; see the sketch after drawBuffer for a one-peak-per-second variant.
3) Half the values are negative because sound samples range between -1 and +1. A sound wave is alternating positive and negative pressure; that's why "silence" is a flat line in the middle, not at the bottom.
Code:
var audioContext = new AudioContext();
function drawBuffer( width, height, context, buffer ) {
    var data = buffer.getChannelData( 0 );
    // Each horizontal pixel covers "step" samples of the buffer.
    var step = Math.ceil( data.length / width );
    var amp = height / 2;
    for (var i = 0; i < width; i++) {
        // Find the min and max sample within this pixel's slice...
        var min = 1.0;
        var max = -1.0;
        for (var j = 0; j < step; j++) {
            var datum = data[(i * step) + j];
            if (datum < min)
                min = datum;
            if (datum > max)
                max = datum;
        }
        // ...and draw a 1px-wide vertical bar spanning min..max.
        context.fillRect(i, (1 + min) * amp, 1, Math.max(1, (max - min) * amp));
    }
}
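To make question 2 concrete: if you want exactly one drawn peak per second, derive the step from the buffer's sample rate instead of the canvas width. A minimal sketch, assuming the same min/max approach as drawBuffer above (drawOnePeakPerSecond is a hypothetical helper, not part of the linked demo):

// Sketch (assumption): one column per second of audio, stepping through
// the buffer in chunks of sampleRate samples.
function drawOnePeakPerSecond( height, context, buffer ) {
    var data = buffer.getChannelData( 0 );
    var step = buffer.sampleRate;          // samples in one second
    var seconds = Math.ceil( data.length / step );
    var amp = height / 2;
    for (var i = 0; i < seconds; i++) {
        var min = 1.0;
        var max = -1.0;
        for (var j = 0; j < step && (i * step) + j < data.length; j++) {
            var datum = data[(i * step) + j];
            if (datum < min)
                min = datum;
            if (datum > max)
                max = datum;
        }
        context.fillRect(i, (1 + min) * amp, 1, Math.max(1, (max - min) * amp));
    }
}

The min/max pass matters: it avoids the aliasing you get from naive subsampling like bufferL[i*1000] in the question, which skips almost all of the samples and so misses most peaks.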
function initAudio() {
    var audioRequest = new XMLHttpRequest();
    audioRequest.open("GET", "sounds/fightclub.ogg", true);
    audioRequest.responseType = "arraybuffer";
    audioRequest.onload = function() {
        // Decode the compressed audio into raw PCM, then draw it.
        audioContext.decodeAudioData( audioRequest.response,
            function(buffer) {
                var canvas = document.getElementById("view1");
                drawBuffer( canvas.width, canvas.height, canvas.getContext('2d'), buffer );
            } );
    }
    audioRequest.send();
}
window.addEventListener('load', initAudio );
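XHR was standard when this was written; in current browsers you could load and decode the same file with fetch and the promise-based decodeAudioData. A minimal sketch, assuming the audioContext and drawBuffer definitions above:

// Sketch (assumption): same load-decode-draw flow using fetch and promises.
function initAudio() {
    fetch("sounds/fightclub.ogg")
        .then(function(response) { return response.arrayBuffer(); })
        .then(function(arrayBuffer) { return audioContext.decodeAudioData(arrayBuffer); })
        .then(function(buffer) {
            var canvas = document.getElementById("view1");
            drawBuffer( canvas.width, canvas.height, canvas.getContext('2d'), buffer );
        });
}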
Source: https://stackoverflow.com/questions/25836447/generating-a-static-waveform-with-webaudio