web-audio-api

Animate object by sound amplitude?

生来就可爱ヽ(ⅴ<●) posted on 2019-12-04 07:20:17
I know it's possible to animate an object with sound via ActionScript. I am really hoping it is also possible to animate an object with JavaScript, since they are very similar. Maybe it could be done with jQuery or HTML5. I am just hoping to find a way to do it outside of Flash. Does anyone know if this is possible in any of these formats? I have done lots of research and can't seem to find any forums or tutorials that say whether it is possible or not. BASICALLY, I am trying to achieve the same effect that I coded in ActionScript, but I want to code it in another language so non-Flash viewers can also see it.
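(A minimal sketch of the Web Audio approach, not from the original question: an AnalyserNode polled inside requestAnimationFrame can drive any DOM property from the signal's amplitude. The element ids 'track' and 'box' are placeholders.)

var ctx = new (window.AudioContext || window.webkitAudioContext)();
var audio = document.getElementById('track');   // assumed <audio> element
var box = document.getElementById('box');       // assumed element to animate
var source = ctx.createMediaElementSource(audio);
var analyser = ctx.createAnalyser();
analyser.fftSize = 256;
source.connect(analyser);
analyser.connect(ctx.destination);

var data = new Uint8Array(analyser.frequencyBinCount);
function animate() {
  analyser.getByteFrequencyData(data);
  var sum = 0;
  for (var i = 0; i < data.length; i++) sum += data[i];
  var avg = sum / data.length;                  // rough amplitude, 0..255
  box.style.transform = 'scale(' + (1 + avg / 255) + ')';
  requestAnimationFrame(animate);
}
audio.play();
animate();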

Record Sounds from AudioContext (Web Audio API)

丶灬走出姿态 posted on 2019-12-04 06:52:25
Is there a way to record the audio data that's being sent to webkitAudioContext.destination? The data the nodes are sending there is being played by the browser, so there should be some way to store that data into a (.wav) file. Currently there's no native way to do that, but as Max said in the comment above, Recorderjs does essentially this (it doesn't chain onto the destination, but is a ScriptProcessorNode you can connect other nodes to and have its input recorded). I built on Recorderjs to make a simple audio file recorder - https://github.com/cwilso/AudioRecorder . Sine to opus
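(A rough sketch of the idea behind Recorderjs, under the assumption that you route the nodes you want captured into a ScriptProcessorNode; someSourceNode is a placeholder, and the WAV encoding of the collected samples is not shown.)

var ctx = new (window.AudioContext || window.webkitAudioContext)();
var recorded = [];                               // blocks of Float32 samples
var processor = ctx.createScriptProcessor(4096, 1, 1);

processor.onaudioprocess = function (e) {
  // Copy the block, because the underlying buffer is reused between callbacks.
  recorded.push(new Float32Array(e.inputBuffer.getChannelData(0)));
};

// Route whatever you want captured into the processor (placeholder node name),
// and keep the processor connected so it keeps firing.
// someSourceNode.connect(processor);
processor.connect(ctx.destination);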

Web Audio start and stop oscillator then start it again

99封情书 posted on 2019-12-04 03:12:51
I am trying to start and stop a sound, and that works, but I can't start the sound up again. Do I really have to make another oscillator? This just seems extremely unintuitive; there must be a better way. This is all I have that works: oscillator1.noteOn(0); oscillator1.noteOff(0); Calling noteOn again doesn't do anything, and why is beyond me. I also tried setting the volume, or in the context of the Web Audio people, the "gain", to zero. But for some reason a gain of zero still makes sound. What value of gain would not make any sound? Man, I can't believe how difficult this is :/ Actually,
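(A minimal sketch of the usual answer: oscillator nodes are one-shot, so create a fresh one per note; start/stop are the modern names for noteOn/noteOff. beep is an illustrative helper, not from the question.)

var ctx = new (window.AudioContext || window.webkitAudioContext)();

function beep(freq, seconds) {
  var osc = ctx.createOscillator();        // a new oscillator per note
  osc.frequency.value = freq;
  osc.connect(ctx.destination);
  osc.start(0);                            // modern name for noteOn(0)
  osc.stop(ctx.currentTime + seconds);     // modern name for noteOff(t)
}

beep(440, 0.5);
setTimeout(function () { beep(440, 0.5); }, 1000);  // works: it is a different node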

Distorted audio in iOS 7.1 with WebAudio API

淺唱寂寞╮ posted on 2019-12-04 02:56:47
On iOS 7.1 I keep getting a buzzing / noisy / distorted sound when playing back audio using the Web Audio API. It sounds distorted like this, instead of normal like this. The same files are fine when using HTML5 audio. It all works fine on desktop (Firefox, Chrome, Safari). EDIT: The audio is distorted in the iOS Simulator on iOS 7.1, 8.1, and 8.2; the buzzing sound often starts before I even play anything back. The audio is distorted on a physical iPhone running iOS 7.1, in both Chrome and Safari. The audio is fine on a physical iPhone running iOS 8.1, in both Chrome and Safari. i.e.:

Web Audio synthesis: how to handle changing the filter cutoff during the attack or release phase?

☆樱花仙子☆ posted on 2019-12-04 02:30:59
I'm building an emulation of the Roland Juno-106 synthesizer using Web Audio. The live WIP version is here. I'm hung up on how to update the filter if the cutoff frequency or the envelope modulation amount is changed during the attack or release phase, while the filter is simultaneously being modulated by the envelope. That code is located around here. The current implementation doesn't respond the way an analog synth would, but I can't quite figure out how to calculate it. On a real synth the filter changes immediately as determined by the frequency cutoff, envelope modulation amount, and
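(One hedged approach, not the questioner's code: when a knob moves mid-envelope, cancel the scheduled automation, pin the AudioParam at its current value, and re-ramp toward the new target. All names below, filter, newCutoff, envAmount, attackTimeLeft, are placeholders.)

function retargetFilter(ctx, filter, newCutoff, envAmount, attackTimeLeft) {
  var now = ctx.currentTime;
  var freq = filter.frequency;
  freq.cancelScheduledValues(now);
  freq.setValueAtTime(freq.value, now);    // pin at wherever the envelope has reached
  freq.linearRampToValueAtTime(newCutoff + envAmount, now + attackTimeLeft);
}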

Can Web Speech API used in conjunction with Web Audio API?

浪子不回头ぞ posted on 2019-12-03 23:53:10
Question: Is it possible to use the synthesised speech from the Web Speech API as a SourceNode inside the Web Audio API's audio context? Answer 1: I actually asked about adding this on the Web Speech mailing list, and was basically told "no". To be fair to the people on that mailing list, I was unable to think of more than one or two specific use cases when prompted. So unless they've changed something in the past month or so, it sounds like this isn't a planned feature. Answer 2: You can use Google's Web Speech API, you

Trim or cut audio recorded with mediarecorder JS

ぃ、小莉子 posted on 2019-12-03 17:26:47
Requested Knowledge: How to shorten (from the front) an array of audio blobs and still have playable audio.
Goal: I am ultimately trying to record a continuous 45-second loop of audio using the JS MediaRecorder API. The user will be able to push a button and the last 45 s of audio will be saved. I can record, play back, and download a single recording just fine.
Issue: When I have an array called chunks of, say, 1000 blobs from the MediaRecorder and use chunks.slice(500, 1000), the resulting blob array can't be used to play back or download audio. Oddly enough, chunks.slice(0, 500) still works fine.
Code
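(A hedged workaround sketch rather than a fix for the slice itself: since the first chunk carries the container header, later slices of the raw chunk array are not valid files, so one option is to decode the whole recording and trim the decoded AudioBuffer to its last N seconds. chunks and mimeType are assumed to come from the MediaRecorder.)

function trimLastSeconds(ctx, chunks, mimeType, seconds) {
  var blob = new Blob(chunks, { type: mimeType });
  return blob.arrayBuffer()
    .then(function (ab) { return ctx.decodeAudioData(ab); })
    .then(function (decoded) {
      var keep = Math.min(decoded.length, Math.floor(seconds * decoded.sampleRate));
      var start = decoded.length - keep;
      var out = ctx.createBuffer(decoded.numberOfChannels, keep, decoded.sampleRate);
      for (var ch = 0; ch < decoded.numberOfChannels; ch++) {
        out.copyToChannel(decoded.getChannelData(ch).subarray(start), ch);
      }
      return out;   // play via an AudioBufferSourceNode, or re-encode for download
    });
}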

merge multiple audio buffer sources

China☆狼群 posted on 2019-12-03 17:15:07
Question about HTML5 Web Audio: is it possible to merge multiple songs together? I have different tracks that are all played at the same time using Web Audio, but I need to process the audio, so I need all the audio inside one buffer instead of each track having its own buffer. I've tried merging them by adding their channel data, but I always get "Uncaught RangeError: Index is out of range."

function mergeBuffers(recBuffers, recLength) {
  var result = new Float32Array(recLength * 2);
  var offset = 0;
  for (var i = 0; i < recBuffers.length; i++) {
    result.set(recBuffers[0], offset);
    offset +=
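(A minimal mixing sketch under the assumption that all buffers share the context's sample rate: allocate one output AudioBuffer sized to the longest input and sum the channel data into it. This is illustrative, not the asker's mergeBuffers.)

function mixBuffers(ctx, buffers) {
  var length = Math.max.apply(null, buffers.map(function (b) { return b.length; }));
  var channels = Math.max.apply(null, buffers.map(function (b) { return b.numberOfChannels; }));
  var out = ctx.createBuffer(channels, length, ctx.sampleRate);
  buffers.forEach(function (b) {
    for (var ch = 0; ch < b.numberOfChannels; ch++) {
      var outData = out.getChannelData(ch);
      var inData = b.getChannelData(ch);
      for (var i = 0; i < inData.length; i++) {
        outData[i] += inData[i];           // summed mix; values above 1.0 will clip
      }
    }
  });
  return out;
}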

Generate sine wave and play it in the browser [closed]

▼魔方 西西 posted on 2019-12-03 16:59:35
I need sample code that can generate a sine wave (an array of samples) and then play it, all done in the browser using some HTML5 API in JavaScript. (I am tagging this web-audio, although I am not 100% sure it is applicable.) exebook: This is how to play a 441 Hz sine-wave tone in the browser using the cross-browser AudioContext.

window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();
function playSound(arr) {
  var buf = new Float32Array(arr.length);
  for (var i = 0; i < arr.length; i++) buf[i] = arr[i];
  var buffer = context.createBuffer(1, buf
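(A self-contained variant of the same idea, sketched with an assumed helper playSineWave: fill a one-channel buffer with sine samples and play it through an AudioBufferSourceNode.)

window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();

function playSineWave(freq, seconds) {
  var sampleRate = context.sampleRate;
  var length = Math.floor(sampleRate * seconds);
  var buffer = context.createBuffer(1, length, sampleRate);
  var data = buffer.getChannelData(0);
  for (var i = 0; i < length; i++) {
    data[i] = Math.sin(2 * Math.PI * freq * i / sampleRate);
  }
  var source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  source.start(0);
}

playSineWave(441, 2);   // two seconds of a 441 Hz tone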

Connect analyzer to Howler sound

做~自己de王妃 posted on 2019-12-03 16:07:30
I have been trying for a while to connect an analyser to a Howler sound without any success. I create my Howler sound like this:

var sound = new Howl({
  urls: [
    '/media/sounds/genesis.mp3',
  ]
});

And then I create my analyser using Howler's global context like this:

var ctx = Howler.ctx;
var analyser = ctx.createAnalyser();
var dataArray = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteTimeDomainData(dataArray);

I am quite new to the Web Audio API. I think I am missing a connection somewhere, but I don't know what I have to connect it to in Howler. avanwink: Web Audio uses a sequence
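(A sketch of the missing connection, assuming howler.js v2, where the master GainNode is exposed as Howler.masterGain: route the master gain through the analyser to the destination, then poll the analyser on each animation frame.)

var ctx = Howler.ctx;
var analyser = ctx.createAnalyser();
analyser.fftSize = 2048;

// Insert the analyser between Howler's master gain and the destination.
Howler.masterGain.connect(analyser);
analyser.connect(ctx.destination);

var dataArray = new Uint8Array(analyser.frequencyBinCount);
(function poll() {
  analyser.getByteTimeDomainData(dataArray);  // now filled with live waveform data
  requestAnimationFrame(poll);
})();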