web-audio-api

How to set up sample rate using web audio API?

人走茶凉, submitted 2019-11-30 09:29:42
Question: I have a Blob generated with the Web Audio API, but the saved file has too high a sample rate. How can I convert it to a lower one? Maybe something like https://developer.mozilla.org/en-US/docs/Web/API/OfflineAudioContext can help? Here is a sample of the code:

```javascript
var xhr = new XMLHttpRequest();
/* HERE SOME CONVERSION TO A LOWER RATE SHOULD HAPPEN */
var fd = new FormData();
fd.append("randomname", bigBlob);
xhr.open("POST", url, false);
xhr.onload = function (e) {
  alert(e.target.responseText);
};
xhr.send(fd);
```
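The excerpt is truncated before the answer, so here is a minimal sketch of one common approach, assuming the goal is downsampling an already-decoded AudioBuffer: render it through an OfflineAudioContext created at the target rate, and the browser resamples while rendering. The helper names (`resample`, `resampledLength`) are illustrative, not from the thread.

```javascript
// Sketch: resample a decoded AudioBuffer to a lower rate (e.g. 16000 Hz)
// with an OfflineAudioContext. Browser-only; names are illustrative.
function resampledLength(frames, fromRate, toRate) {
  // Frames the offline context must render to cover the same
  // duration at the new sample rate.
  return Math.ceil(frames * toRate / fromRate);
}

function resample(buffer, toRate) {
  var offline = new OfflineAudioContext(
    buffer.numberOfChannels,
    resampledLength(buffer.length, buffer.sampleRate, toRate),
    toRate);
  var src = offline.createBufferSource();
  src.buffer = buffer;
  src.connect(offline.destination);
  src.start(0);
  return offline.startRendering(); // resolves with the resampled AudioBuffer
}
```

The resampled AudioBuffer would still need to be re-encoded (e.g. to WAV) before appending it to the FormData.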

Choppy/inaudible playback with chunked audio through Web Audio API

做~自己de王妃, submitted 2019-11-30 09:23:28
I brought this up in my last post, but since it was off topic from the original question I'm posting it separately. I'm having trouble getting my transmitted audio to play back through Web Audio the way it would sound in a media player. I have tried two different transmission protocols, binaryjs and socketio, and neither makes a difference when playing through Web Audio. To rule out transport of the audio data as the issue, I created an example that sends the data back to the server after it's received from the client and dumps the return to stdout. Piping that into VLC
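The excerpt is truncated, but choppiness with chunked playback is usually a scheduling problem: starting each chunk "now" leaves gaps between chunks. A common fix, sketched here with illustrative names, is to keep a running start time on the AudioContext clock and queue chunks back-to-back:

```javascript
// Sketch: gapless chunk scheduling. The key is a running `nextStartTime`
// on the AudioContext clock instead of starting each chunk immediately.
function makeScheduler(currentTime) {
  var nextStartTime = currentTime;
  return function schedule(duration) {
    var startAt = nextStartTime;
    nextStartTime += duration; // queue the following chunk back-to-back
    return startAt;
  };
}

// Browser wiring (illustrative):
// var ctx = new AudioContext();
// var schedule = makeScheduler(ctx.currentTime);
// function playChunk(decodedBuffer) {
//   var src = ctx.createBufferSource();
//   src.buffer = decodedBuffer;
//   src.connect(ctx.destination);
//   src.start(schedule(decodedBuffer.duration));
// }
```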

Mixing two audio buffers, put one on background of another by using web Audio Api

一个人想着一个人, submitted 2019-11-30 08:22:30
Question: I want to mix two audio sources into a single source, putting one song behind the other as a background track. For example, I have this input:

```html
<input id="files" type="file" name="files[]" multiple onchange="handleFilesSelect(event)"/>
```

And a script to decode the files:

```javascript
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new window.AudioContext();
var sources = [];
var files = [];
var mixed = {};

function handleFilesSelect(event) {
  if (event.target.files.length <= 1) return false;
```
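The excerpt cuts off before the mixing step. As a sketch of one approach (not the thread's answer): mixing two decoded buffers offline is sample-wise addition with clamping; in a live Web Audio graph the equivalent is simply connecting both AudioBufferSourceNodes to the same destination. The function name is illustrative.

```javascript
// Sketch: offline mix of two mono channels (Float32Array in [-1, 1])
// by sample-wise addition, clamped to [-1, 1]. The shorter channel is
// padded with silence so the result covers the longer song.
function mixChannels(a, b) {
  var out = new Float32Array(Math.max(a.length, b.length));
  for (var i = 0; i < out.length; i++) {
    var s = (a[i] || 0) + (b[i] || 0);
    out[i] = Math.max(-1, Math.min(1, s));
  }
  return out;
}
```

Scaling each input by a gain factor before adding (e.g. 0.5 each) avoids the clipping that the clamp otherwise hides.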

Phonegap mixing audio files

被刻印的时光 ゝ, submitted 2019-11-30 07:21:01
I'm building a karaoke app for iOS using Phonegap. I have audio files in the www/assets folder that I am able to play using the media.play() function. This allows the user to listen to the backing track. While the media is playing, another Media instance is recording. Once the recording has finished I need to lay the voice recording over the backing track, and I have no idea how I might go about doing this. One approach I thought might work is to use the Web Audio API. I have the following code, which I took from HTML5 Rocks, which loads the two files into an AudioContext and allows me
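The HTML5 Rocks code the excerpt refers to is cut off, so here is a hedged sketch of the overlay idea: decode both files, then start both AudioBufferSourceNodes on the same AudioContext clock so voice and backing track stay aligned. `decodeFile` is an assumed helper wrapping `ctx.decodeAudioData`; all names are illustrative.

```javascript
// Sketch: decode the backing track and the voice recording, then start
// both sources at the same clock time so they play in sync.
function playTogether(ctx, decodeFile, backingUrl, voiceUrl) {
  return Promise.all([decodeFile(backingUrl), decodeFile(voiceUrl)])
    .then(function (buffers) {
      var startAt = ctx.currentTime + 0.1; // small lead so both start cleanly
      return buffers.map(function (buffer) {
        var src = ctx.createBufferSource();
        src.buffer = buffer;
        src.connect(ctx.destination);
        src.start(startAt);
        return src;
      });
    });
}
```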

Record audio on web, preset: 16000Hz 16bit

こ雲淡風輕ζ, submitted 2019-11-30 03:59:38
Question:

```javascript
function floatTo16BitPCM(output, offset, input) {
  for (var i = 0; i < input.length; i++, offset += 2) {
    var s = Math.max(-1, Math.min(1, input[i]));
    output.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
  }
}

function writeString(view, offset, string) {
  for (var i = 0; i < string.length; i++) {
    view.setUint8(offset + i, string.charCodeAt(i));
  }
}

function encodeWAV(samples) {
  var buffer = new ArrayBuffer(44 + samples.length * 2);
  var view = new DataView(buffer);
  /* RIFF identifier */
```
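The excerpt shows the 16-bit conversion but cuts off before any sample-rate handling. To hit the 16000 Hz preset, the captured samples (typically 44100 or 48000 Hz) need to be downsampled before WAV encoding. A minimal sketch, with illustrative names; averaging each window gives only crude low-pass filtering, so a real resampler would filter properly first:

```javascript
// Sketch: naive downsample of Float32 samples, e.g. 44100 Hz -> 16000 Hz,
// by averaging each window of input samples. Run before floatTo16BitPCM.
function downsampleBuffer(input, fromRate, toRate) {
  var ratio = fromRate / toRate;
  var outLength = Math.floor(input.length / ratio);
  var out = new Float32Array(outLength);
  for (var i = 0; i < outLength; i++) {
    var start = Math.floor(i * ratio);
    var end = Math.min(Math.floor((i + 1) * ratio), input.length);
    var sum = 0;
    for (var j = start; j < end; j++) sum += input[j];
    out[i] = sum / (end - start); // window average = crude anti-aliasing
  }
  return out;
}
```

The WAV header written by encodeWAV must then record 16000 as the sample rate, or players will pitch-shift the result.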

Playing PCM stream from Web Audio API on Node.js

≡放荡痞女, submitted 2019-11-30 03:19:09
I'm streaming recorded PCM audio from a browser with the Web Audio API. I'm streaming it with binaryJS (a websocket connection) to a Node.js server, and I'm trying to play that stream on the server using the speaker npm module. This is my client. The audio buffers are at first non-interleaved IEEE 32-bit linear PCM with a nominal range between -1 and +1. I take one of the two PCM channels to start off and stream it below.

```javascript
var client = new BinaryClient('ws://localhost:9000');
var Stream = client.send();
recorder.onaudioprocess = function (AudioBuffer) {
  var leftChannel = AudioBuffer.inputBuffer
```
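The excerpt is truncated, but the usual stumbling block with this setup is the sample format: the speaker module consumes signed 16-bit PCM by default, while Web Audio delivers Float32 in [-1, 1]. A sketch of the conversion (Node-side; the function name is illustrative):

```javascript
// Sketch: convert one channel of Float32 samples in [-1, 1] to a
// little-endian signed 16-bit Buffer, the default format for 16-bit PCM
// playback via the `speaker` module.
function floatToInt16Buffer(samples) {
  var buf = Buffer.alloc(samples.length * 2);
  for (var i = 0; i < samples.length; i++) {
    var s = Math.max(-1, Math.min(1, samples[i])); // clamp out-of-range samples
    buf.writeInt16LE(Math.round(s < 0 ? s * 0x8000 : s * 0x7FFF), i * 2);
  }
  return buf;
}
```

The speaker instance's declared channels, bitDepth, and sampleRate must also match what the browser actually sends, or playback is sped up, slowed down, or reduced to noise.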

Load audiodata into AudioBufferSourceNode from <audio/> element via createMediaElementSource?

两盒软妹~`, submitted 2019-11-30 01:51:42
Question: Is it possible to have an audio file loaded from an <audio/> element via createMediaElementSource and then load the audio data into an AudioBufferSourceNode? Using the audio element as a source (MediaElementSource) does not seem to be an option, as I want to use buffer methods like noteOn and noteGrain. Loading the audio file directly into the buffer via XHR unfortunately isn't an option either (see Open stream_url of a Soundcloud Track via Client-Side XHR?). Loading the buffer contents from the
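One side note on the API names in the question: noteOn and noteGrainOn are the old names for AudioBufferSourceNode.start(when, offset, duration) in current browsers. A minimal sketch of grain-style playback from a decoded buffer, with illustrative names; the frame math shown is only an assumption about what the grain offsets would be used for:

```javascript
// Sketch: grain playback via start(when, offset, duration), the modern
// replacement for noteGrainOn. frameOffset shows the sample-frame math.
function frameOffset(seconds, sampleRate) {
  return Math.floor(seconds * sampleRate);
}

// Browser-only (illustrative):
// var src = ctx.createBufferSource();
// src.buffer = decodedBuffer;
// src.connect(ctx.destination);
// src.start(ctx.currentTime, 1.5, 0.25); // a 250 ms grain from 1.5 s in
```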

How to get microphone input volume value with web audio api?

◇◆丶佛笑我妖孽, submitted 2019-11-29 21:35:05
I am using microphone input with the Web Audio API and need to get the volume value. Right now I have already got the microphone to work: http://updates.html5rocks.com/2012/09/Live-Web-Audio-Input-Enabled Also, I know there's a method for manipulating the volume of an audio file: http://www.html5rocks.com/en/tutorials/webaudio/intro/

```javascript
// Create a gain node.
var gainNode = context.createGain();
// Connect the source to the gain node.
source.connect(gainNode);
// Connect the gain node to the destination.
gainNode.connect(context.destination);
// Reduce the volume.
gainNode.gain.value = 0.5;
```

But how to
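The question is truncated, but a gain node only sets volume; it doesn't measure it. A common way to read the microphone level, sketched with illustrative names, is to insert an AnalyserNode and compute the RMS of its time-domain samples:

```javascript
// Sketch: estimate microphone volume as the RMS of time-domain samples.
// In the browser the samples come from analyser.getFloatTimeDomainData;
// the math is shown standalone here.
function rms(samples) {
  var sum = 0;
  for (var i = 0; i < samples.length; i++) sum += samples[i] * samples[i];
  return Math.sqrt(sum / samples.length);
}

// Browser wiring (illustrative):
// var analyser = context.createAnalyser();
// source.connect(analyser);
// var buf = new Float32Array(analyser.fftSize);
// function level() { analyser.getFloatTimeDomainData(buf); return rms(buf); }
```

Polling level() from a requestAnimationFrame loop gives a live meter; values range from 0 (silence) toward 1 (full scale).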

Web Audio to visualize and interact with waveforms

倾然丶 夕夏残阳落幕, submitted 2019-11-29 20:28:47
How do I write a JavaScript program to display a waveform from an audio file? I want to use Web Audio and Canvas. I tried this code:

```javascript
(new window.AudioContext).decodeAudioData(audioFile, function (data) {
  var channel = data.getChannelData(0);
  for (var i = 0; i < channel.length; i++) {
    canvas.getContext('2d').fillRect(i, 1, 40 - channel[i], 40);
  }
});
```

But the result is far from what I want (namely, the image is not smooth, since I'm drawing with rectangles). I want it to look smooth, like this image: Any hints on how to implement the waveform? You may be interested in AudioJedit. This is an open source
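The answer is truncated, so as a sketch of the standard technique (names are illustrative): a buffer has far more samples than the canvas has pixels, so reduce the channel data to one min/max pair per pixel column, then draw a filled vertical line per column instead of per-sample rectangles.

```javascript
// Sketch: reduce channel data to one [min, max] pair per pixel column.
// Drawing one vertical line per pair yields the familiar smooth waveform.
function peaks(channel, columns) {
  var step = Math.floor(channel.length / columns);
  var out = [];
  for (var c = 0; c < columns; c++) {
    var min = 1, max = -1;
    for (var i = c * step; i < (c + 1) * step; i++) {
      if (channel[i] < min) min = channel[i];
      if (channel[i] > max) max = channel[i];
    }
    out.push([min, max]);
  }
  return out;
}

// Canvas drawing (illustrative): for each column x with pair [min, max],
// draw a vertical line from (x, mid - max * mid) to (x, mid - min * mid),
// where mid is half the canvas height.
```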

What does the FFT data in the Web Audio API correspond to?

陌路散爱, submitted 2019-11-29 20:18:53
I've used the FFT data from the AnalyserNode's getByteFrequencyData method in the Web Audio API to create a spectrum visualizer, as shown below: In this instance I have 256 bins of data. What exactly do the numbers correspond to? Is it the decibel level of each frequency component? If so, how do I know what frequency each bin corresponds to? I would like to know so I can experiment with building a graphic EQ, and so would like to know at which points to indicate the filter bands. Ideally I'd like to represent frequencies from 20Hz to 20kHz and plot intervals
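The answer is cut off, but the bin-to-frequency mapping itself is standard FFT math: getByteFrequencyData returns magnitudes mapped into 0..255 over the analyser's minDecibels..maxDecibels range, and with 256 bins (frequencyBinCount, i.e. fftSize 512) bin i sits at i * sampleRate / fftSize. A sketch, assuming a 44100 Hz context:

```javascript
// Sketch: center frequency of FFT bin i. With fftSize 512 at 44100 Hz,
// each of the 256 bins is about 86 Hz wide, and the bins span 0 Hz up to
// the Nyquist frequency (sampleRate / 2).
function binFrequency(i, sampleRate, fftSize) {
  return i * sampleRate / fftSize;
}
```

One consequence for a graphic EQ: the bins are linearly spaced, so nearly all of them land above 1 kHz; a log-frequency display needs to group many high bins and interpolate the few low ones.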