web-audio-api

Send MediaStream object with Web Audio effects over PeerConnection

让人想犯罪 · Submitted on 2019-12-05 18:04:19
I'm trying to send audio, obtained by getUserMedia() and altered with the Web Audio API, over a PeerConnection from WebRTC. The Web Audio API and WebRTC seem to have the ability to do this, but I'm having trouble understanding how it can be done. Within the Web Audio API, the AudioContext object contains a method createMediaStreamSource(), which provides a way to connect the MediaStream obtained by getUserMedia(). There is also a createMediaStreamDestination() method, which seems to return an object with a stream attribute. I'm getting both audio and video from the getUserMedia() method.
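A minimal sketch of the routing the question describes, assuming a browser AudioContext and an already-created RTCPeerConnection (the BiquadFilter stands in for any effect node):

```javascript
// Route mic audio through a Web Audio effect, then hand the processed
// stream's audio track to an RTCPeerConnection.
function sendProcessedAudio(ctx, micStream, peerConnection) {
  const source = ctx.createMediaStreamSource(micStream);
  const effect = ctx.createBiquadFilter(); // placeholder for any effect chain
  const destination = ctx.createMediaStreamDestination();

  source.connect(effect);
  effect.connect(destination);

  // destination.stream is a MediaStream carrying the processed audio.
  for (const track of destination.stream.getAudioTracks()) {
    peerConnection.addTrack(track, destination.stream);
  }
  return destination.stream;
}
```

Since createMediaStreamDestination() only carries audio, the untouched video track from getUserMedia() would be added to the peer connection separately.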

Javascript: UInt8Array to Float32Array

落爺英雄遲暮 · Submitted on 2019-12-05 16:36:38
I have some audio buffer in unsigned 8-bit PCM format that I need to play via Web Audio, which only accepts 32-bit float PCM. I now have an ArrayBuffer holding pieces of pcm_u8 data (coming from a Uint8Array). How can I convert it to a Float32Array? This function converts an ArrayBuffer to a Float32Array: function convertBlock(buffer) { // incoming data is an ArrayBuffer var incomingData = new Uint8Array(buffer); // create a uint8 view on the ArrayBuffer var i, l = incomingData.length; // length, we need this for the loop var outputData = new Float32Array(incomingData.length); // create the Float32Array for output for (i …
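A sketch of the conversion loop the question is building toward. Unsigned 8-bit PCM centers on 128, so each sample is re-centered on zero and scaled into the [-1, 1) range Web Audio expects:

```javascript
// Convert unsigned 8-bit PCM samples (0..255, midpoint 128) to
// Float32 samples in the range [-1, 1).
function u8ToFloat32(buffer) {
  const input = new Uint8Array(buffer); // uint8 view on the ArrayBuffer
  const output = new Float32Array(input.length);
  for (let i = 0; i < input.length; i++) {
    output[i] = (input[i] - 128) / 128; // center on 0, scale to [-1, 1)
  }
  return output;
}
```

The resulting Float32Array can then be copied into an AudioBuffer channel with copyToChannel() for playback.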

Open stream_url of a Soundcloud Track via Client-Side XHR?

 ̄綄美尐妖づ · Submitted on 2019-12-05 16:22:33
Since you can call the Soundcloud API via XHR (because of the CORS headers it sends, http://backstage.soundcloud.com/2010/08/of-cors-we-do/ , right?), I was wondering if this is possible with the audio data itself, like a track's stream_url for example. When trying to open the stream_url with an XHR (from the client side) using the Web Audio API, I get an "Origin is not allowed by Access-Control-Allow-Origin." error. Is there a way to load audio resources via XMLHttpRequest from client-side JavaScript, or is it impossible ( https://stackoverflow.com/questions/10871882/audio-data-api-and-streaming …
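A sketch of the load path in question, using fetch for brevity. The key point is that the code is correct either way; whether it succeeds depends entirely on whether the server hosting the audio sends Access-Control-Allow-Origin for your origin, which the client cannot influence:

```javascript
// Fetch remote audio for decodeAudioData. This only works if the audio
// server sends CORS headers; otherwise the browser blocks the response
// before any JavaScript can read it.
async function loadRemoteAudio(ctx, url) {
  const response = await fetch(url, { mode: 'cors' });
  if (!response.ok) throw new Error('HTTP ' + response.status);
  const data = await response.arrayBuffer();
  return ctx.decodeAudioData(data);
}
```

If the audio host does not send CORS headers, the usual workarounds are proxying the stream through your own server or playing it via an <audio> element instead of reading the raw bytes.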

Pause Web Audio API sound playback

五迷三道 · Submitted on 2019-12-05 14:25:13
How can I create a pause function for my audio? I already have a play function in my script below. http://pastebin.com/uRUQsgbh function loadSound(url) { var request = new XMLHttpRequest(); request.open('GET', url, true); request.responseType = 'arraybuffer'; // When loaded, decode the data request.onload = function() { // decode the data context.decodeAudioData(request.response, function(buffer) { // when the audio is decoded, play the sound playSound(buffer); }, onError); } request.send(); } function playSound(buffer) { sourceNode.buffer = buffer; sourceNode.noteOn(0); } But how can I pause or …
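BufferSource nodes are one-shot and have no pause method, so the usual pattern is: stop the node, remember how far into the buffer playback got, and create a fresh node that starts from that offset on resume. A sketch (using the modern start()/stop() names; noteOn() in the question is the deprecated form):

```javascript
// Pause/resume for a one-shot AudioBufferSourceNode by tracking offsets.
function createPlayer(ctx, buffer) {
  let sourceNode = null;
  let startedAt = 0; // ctx.currentTime at which playback (re)started
  let pausedAt = 0;  // seconds into the buffer when last paused
  return {
    play() {
      sourceNode = ctx.createBufferSource();
      sourceNode.buffer = buffer;
      sourceNode.connect(ctx.destination);
      startedAt = ctx.currentTime - pausedAt;
      sourceNode.start(0, pausedAt); // resume from the stored offset
    },
    pause() {
      pausedAt = ctx.currentTime - startedAt;
      if (sourceNode) sourceNode.stop(0);
    },
    position() { return pausedAt; }
  };
}
```

A new source node is created on every play() call because a stopped AudioBufferSourceNode cannot be started again.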

How to release memory using Web Audio API?

泄露秘密 · Submitted on 2019-12-05 13:09:57
var context = new window.AudioContext() var request = cc.loader.getXMLHttpRequest(); request.open("GET", 'res/raw-assets/resources/audio/bgm.mp3', true); request.responseType = "arraybuffer"; request.onload = function () { context["decodeAudioData"](request.response, function(buffer){ //success cc.log('success') window.buffer = buffer playBgm() }, function(){ //error }); }; request.onerror = function(){ //error }; request.send(); function playBgm(){ var audio = context["createBufferSource"](); audio.buffer = buffer; var _volume = context['createGain'](); _volume['gain'].value = 1; _volume[ …
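Decoded AudioBuffers are reclaimed by the garbage collector only once nothing references them, so "releasing memory" in Web Audio mostly means stopping the source, disconnecting the graph, and dropping your own references (the code above pins the buffer on window.buffer, which keeps it alive forever). A sketch of a cleanup helper, assuming the source and buffer are held on a state object:

```javascript
// Release a playing buffer: stop it, tear down the graph, and clear all
// references so the decoded PCM data can be garbage-collected.
function stopAndRelease(state) {
  if (state.source) {
    state.source.stop(0);
    state.source.disconnect();
    state.source.buffer = null; // drop the node's reference to the buffer
    state.source = null;
  }
  state.buffer = null; // drop your own reference so GC can reclaim it
}
```

For a complete release, any gain nodes in the chain should be disconnected too, and globals like window.buffer set to null.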

How can I detect the number of audio channels in an mp3 in an <audio> tag?

六眼飞鱼酱① · Submitted on 2019-12-05 07:24:22
From what I've read I would expect the following JavaScript code to log "All is well", but instead it hits the error case: var audio = document.createElement('audio'); var ctx = new window.AudioContext(); var source = ctx.createMediaElementSource(audio); audio.src = 'http://www.mediacollege.com/audio/tone/files/440Hz_44100Hz_16bit_30sec.mp3'; // As @padenot mentioned, this is the number of channels in the source node, not the actual media file var chans = source.channelCount; if(chans == 1) { snippet.log("All is well"); } else { snippet.log("Expected to see 1 channel, instead saw: " + chans) }
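As the comment in the question notes, channelCount is a property of the node (it defaults to 2), not of the media file. To read the file's actual channel count, one approach is to decode the file and inspect the resulting AudioBuffer. A sketch, assuming the promise form of decodeAudioData and a CORS-accessible URL:

```javascript
// Determine the channel count of an audio file by decoding it:
// AudioBuffer.numberOfChannels reflects the file, unlike
// AudioNode.channelCount, which is just a node configuration default.
async function channelsInFile(ctx, url) {
  const res = await fetch(url);
  const data = await res.arrayBuffer();
  const buffer = await ctx.decodeAudioData(data);
  return buffer.numberOfChannels;
}
```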

Creating a custom echo node with web-audio

谁都会走 · Submitted on 2019-12-05 04:36:55
I'm playing with the WebKit Audio API and I'm trying to create an echo effect. To accomplish that, I've connected a DelayNode with a GainNode in a loop (the output of one is the input of the other, and vice versa). The effect works fine, but now I want to create an EchoNode object that I can just plug in and connect with the other AudioNode objects. Something like: myEchoNode = new EchoNode(); myConvolverNode = context.createConvolver(); myConvolverNode.connect(myEchoNode); I think that I should make my EchoNode inherit from AudioNode, so that the connect function of every other AudioNode would …
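Native AudioNode cannot be subclassed from script, so the usual workaround is a composite node: wrap the delay/feedback loop between two GainNodes and expose them as the composite's input and output. A sketch (createEchoNode and the delayTime/feedback parameters are illustrative names, not part of the API):

```javascript
// Composite "echo node": other nodes connect to .input, and .connect()
// forwards to the internal output gain.
function createEchoNode(ctx, delayTime, feedback) {
  const input = ctx.createGain();
  const output = ctx.createGain();
  const delay = ctx.createDelay();
  const feedbackGain = ctx.createGain();

  delay.delayTime.value = delayTime;
  feedbackGain.gain.value = feedback; // keep < 1 so the echoes decay

  input.connect(output);           // dry signal passes straight through
  input.connect(delay);
  delay.connect(feedbackGain);
  feedbackGain.connect(delay);     // feedback loop produces the repeats
  feedbackGain.connect(output);

  return { input, output, connect: (dest) => output.connect(dest) };
}
```

The limitation is that upstream nodes must target echo.input rather than the object itself (myConvolverNode.connect(myEcho.input)), since the built-in connect() only accepts real AudioNodes.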

Connect analyzer to Howler sound

旧时模样 · Submitted on 2019-12-05 01:47:25
I have been trying for a while to connect an analyser to a Howler sound without any success. I create my Howler sound like this: var sound = new Howl({ urls: [ '/media/sounds/genesis.mp3', ] }); And then I create my analyser using the Howler global context like this: var ctx = Howler.ctx; var analyser = ctx.createAnalyser(); var dataArray = new Uint8Array(analyser.frequencyBinCount); analyser.getByteTimeDomainData(dataArray); I am quite new to the Web Audio API. I think I am missing a connection …
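The analyser in the question is created but never wired into Howler's graph, so it only ever sees silence. Assuming a howler.js version that exposes its master gain node as Howler.masterGain (true in recent versions, but worth verifying for yours), one way to wire it in is:

```javascript
// Insert an AnalyserNode between Howler's master gain and the
// destination so it observes everything Howler plays.
function attachAnalyser(howlerCtx, masterGain) {
  const analyser = howlerCtx.createAnalyser();
  masterGain.connect(analyser);
  analyser.connect(howlerCtx.destination);
  return analyser;
}
```

Usage would be something like attachAnalyser(Howler.ctx, Howler.masterGain), after which getByteTimeDomainData() returns live data each animation frame.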

Generating a static waveform with webaudio

旧时模样 · Submitted on 2019-12-05 01:13:14
I'm trying to generate a static waveform, like in audio editing apps, with Web Audio and canvas. Right now I'm loading an mp3, creating a buffer, and iterating over the data returned by getChannelData. The problem is: I don't really understand what's being returned. What is returned by getChannelData, and is it appropriate for a waveform? How do I adjust (sample size?) to get one peak == one second? Why are ~50% of the values negative? ctx.decodeAudioData(req.response, function(buffer) { buf = …
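getChannelData returns the raw PCM samples as a Float32Array in [-1, 1]; the negative values are simply the lower half of the waveform, which is why roughly half the samples are negative. For drawing, the standard approach is to bucket the samples and keep the min/max peak per bucket; one bucket per second means a bucket size of buffer.sampleRate samples. A sketch of the bucketing step:

```javascript
// Reduce raw samples to [min, max] peak pairs, one pair per bucket.
// For "one peak == one second", pass numBuckets = samples.length / sampleRate.
function computePeaks(samples, numBuckets) {
  const bucketSize = Math.floor(samples.length / numBuckets);
  const peaks = [];
  for (let b = 0; b < numBuckets; b++) {
    let min = 1, max = -1;
    for (let i = b * bucketSize; i < (b + 1) * bucketSize; i++) {
      if (samples[i] < min) min = samples[i];
      if (samples[i] > max) max = samples[i];
    }
    peaks.push([min, max]);
  }
  return peaks;
}
```

Each [min, max] pair then becomes one vertical line on the canvas, drawn around a horizontal center line.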

Web audio API equalizer

大憨熊 · Submitted on 2019-12-04 21:37:33
I have been looking around for how to create an audio equalizer using the Web Audio API: http://webaudio.github.io/web-audio-api/ I found a lot of threads about creating a visualizer, but that is of course not what I want to do. I simply want to be able to alter the sound using frequency sliders. I found that the biquadFilter should do the work, but I can't get a good result. The sound is altered consistently when I change any frequency value, but it just lowers the quality of the sound while it …
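A common cause of the degraded sound described here is using the default filter type (lowpass) or treating gain as a multiplier. For an equalizer, each band should be a "peaking" BiquadFilterNode, whose gain is in decibels (0 dB means unchanged). A sketch of a band chain, with the frequency list as an illustrative choice:

```javascript
// Build an equalizer as a chain of peaking biquad filters, one per band.
// Each slider writes to its band's gain.value in dB (0 = flat).
function createEqualizer(ctx, frequencies) {
  const bands = frequencies.map((freq) => {
    const filter = ctx.createBiquadFilter();
    filter.type = 'peaking';
    filter.frequency.value = freq; // band center frequency in Hz
    filter.Q.value = 1;            // bandwidth of the band
    filter.gain.value = 0;         // dB boost/cut, driven by the slider
    return filter;
  });
  for (let i = 1; i < bands.length; i++) {
    bands[i - 1].connect(bands[i]); // chain the bands in series
  }
  return bands;
}
```

Usage: connect the source to bands[0] and the last band to ctx.destination, e.g. createEqualizer(ctx, [60, 250, 1000, 4000, 16000]) for a five-band EQ.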