web-audio-api

Using <audio> element for playback of raw audio

℡╲_俬逩灬. Submitted on 2019-12-03 15:23:39
I'm working on a little project that decrypts (using openpgp.js) and decodes a server-side audio file using the Web Audio API. The decrypted file arrives at the client as raw audio. Currently I can play back audio files using source.start(0), but there doesn't seem to be an easy way to expose the audio in a GUI that would allow users to do things like adjust volume and seek through the audio. I have an AudioContext object that's decoded and buffered with createBufferSource: function playSound(decodedAudio) { var source = context.createBufferSource(); source.buffer = decodedAudio; source.connect
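One way to get native volume and seek controls is to hand the decoded data to a plain <audio> element. Below is a minimal sketch (not from the original post; it assumes `decodedAudio` is the AudioBuffer produced by decodeAudioData): the buffer is re-encoded as a 16-bit PCM WAV Blob and assigned to an <audio> element via an object URL.

```javascript
// Sketch: turn a decoded AudioBuffer into a WAV Blob so a standard <audio>
// element can provide volume and seek controls. `decodedAudio` is assumed.
function audioBufferToWavBlob(buffer) {
  const numChannels = buffer.numberOfChannels;
  const sampleRate = buffer.sampleRate;
  const numFrames = buffer.length;
  const dataSize = numFrames * numChannels * 2;            // 16-bit samples
  const view = new DataView(new ArrayBuffer(44 + dataSize));

  const writeString = (offset, s) => {
    for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
  };

  // RIFF/WAVE header
  writeString(0, 'RIFF');
  view.setUint32(4, 36 + dataSize, true);
  writeString(8, 'WAVE');
  writeString(12, 'fmt ');
  view.setUint32(16, 16, true);                            // fmt chunk size
  view.setUint16(20, 1, true);                             // PCM
  view.setUint16(22, numChannels, true);
  view.setUint32(24, sampleRate, true);
  view.setUint32(28, sampleRate * numChannels * 2, true);  // byte rate
  view.setUint16(32, numChannels * 2, true);               // block align
  view.setUint16(34, 16, true);                            // bits per sample
  writeString(36, 'data');
  view.setUint32(40, dataSize, true);

  // Interleave channels and convert float samples to 16-bit PCM
  let offset = 44;
  for (let i = 0; i < numFrames; i++) {
    for (let ch = 0; ch < numChannels; ch++) {
      const s = Math.max(-1, Math.min(1, buffer.getChannelData(ch)[i]));
      view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7fff, true);
      offset += 2;
    }
  }
  return new Blob([view], { type: 'audio/wav' });
}

const audioEl = document.createElement('audio');
audioEl.controls = true;                                   // native volume + seek UI
audioEl.src = URL.createObjectURL(audioBufferToWavBlob(decodedAudio));
document.body.appendChild(audioEl);
```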

Failed to construct 'AudioContext': number of hardware contexts reached maximum

谁说胖子不能爱 Submitted on 2019-12-03 15:22:25
Question: Is there a way to remove an AudioContext after I've created it? var analyzers = []; var contexts = []; try { for(var i = 0; i<20; i++) { contexts[i] = new AudioContext(); analyzers[i] = contexts[i].createAnalyser(); } }catch(e) { console.log(e); // too many contexts created -- how do I remove them? } I've tried this, but it doesn't let me create new contexts after the fact: analyzers.forEach(function(analyzer){analyzer.disconnect(analyzer.context.destination)}) I am using Chrome 36 on Ubuntu
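A minimal sketch of the usual fix, assuming a browser that implements AudioContext.close() (which shipped after the Chrome 36 mentioned above): release a context explicitly when you no longer need it, or better, share a single context and create many AnalyserNodes from it, since analysers are cheap while contexts are tied to hardware.

```javascript
// Prefer one shared context with many analysers over many contexts.
const context = new AudioContext();
const analyzers = [];
for (let i = 0; i < 20; i++) {
  analyzers.push(context.createAnalyser());   // analysers are cheap; contexts are not
}

// When a context really is no longer needed, close it to free the hardware slot:
// context.close().then(() => console.log('context released'));
```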

Web audio API equalizer

余生长醉 Submitted on 2019-12-03 15:19:13
I have been looking around for how to create an audio equalizer using the Web Audio API: http://webaudio.github.io/web-audio-api/ I found a lot of threads about creating a visualizer, but that is of course not what I want to do. I simply want to be able to alter the sound using frequency sliders. I found that the biquadFilter should do the job, but I can't get a good result. The sound is altered consistently when I change any frequency value, but it just lowers the quality of the sound, while it should alter the frequencies. I first load a sound: Audio.prototype.init = function(callback){ var $this
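A minimal sketch of a slider-driven EQ (band frequencies and the `sourceNode` name are assumptions): chain one peaking BiquadFilterNode per band between the source and the destination, and wire each slider to that band's gain in dB rather than to its frequency.

```javascript
// Simple graphic EQ: one peaking filter per slider band.
const context = new AudioContext();
const bands = [60, 170, 350, 1000, 3500, 10000];   // Hz, example band centres

const filters = bands.map(freq => {
  const f = context.createBiquadFilter();
  f.type = 'peaking';        // boost/cut around the centre frequency
  f.frequency.value = freq;
  f.Q.value = 1;
  f.gain.value = 0;          // dB; a slider should change this value
  return f;
});

// Chain: sourceNode -> f0 -> f1 -> ... -> destination
// (assumes `sourceNode` is e.g. a MediaElementSource or BufferSource)
filters.reduce((prev, next) => (prev.connect(next), next), sourceNode)
       .connect(context.destination);

// Slider handler: filters[i].gain.value = sliderValueInDb;
```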

Get consistent audio quality with getUserMedia using different browsers

若如初见. Submitted on 2019-12-03 13:33:51
Question: What I'm doing: I'm using the getUserMedia API to record audio in the browser and then send this audio to a websocket server. Furthermore, to test the recordings, I use Soundflower on a Mac as an input device, so I can play a wave file instead of speaking into a microphone. Client side (JavaScript): window.AudioContext = window.AudioContext || window.webkitAudioContext; navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia; var
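A minimal sketch of two things that commonly cause cross-browser differences here (it uses the newer promise-based getUserMedia rather than the prefixed callback form in the excerpt, and constraint support varies by browser): disable the browser's audio processing and check the context's actual sample rate, which differs between browsers and machines (44100 vs 48000 Hz) and should be normalised before sending frames to the websocket server.

```javascript
// Request unprocessed audio and log the sample rate actually in use.
window.AudioContext = window.AudioContext || window.webkitAudioContext;
const context = new AudioContext();
console.log('context sample rate:', context.sampleRate);   // e.g. 44100 or 48000

navigator.mediaDevices.getUserMedia({
  audio: {
    echoCancellation: false,
    noiseSuppression: false,
    autoGainControl: false
  }
}).then(stream => {
  const input = context.createMediaStreamSource(stream);
  // ...connect to a ScriptProcessor/AudioWorklet and resample every chunk to
  // one fixed rate before sending it over the websocket.
});
```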

Can I stream microphone audio from client to client using nodejs?

三世轮回 Submitted on 2019-12-03 13:20:18
I'm trying to create a realtime voice chat. Once a client holds a button and talks, I want the sound to be sent over the socket to the Node.js backend, then I want to stream this data to another client. Here is the sender client code: socket.on('connect', function() { var session = { audio: true, video: false }; navigator.getUserMedia(session, function(stream){ var audioInput = context.createMediaStreamSource(stream); var bufferSize = 2048; recorder = context.createScriptProcessor(bufferSize, 1, 1); recorder.onaudioprocess = onAudio; audioInput.connect(recorder); recorder.connect(context
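A minimal sketch of the relay half of this setup (it assumes socket.io on the Node server and that the sender emits raw PCM chunks on an 'audio' event, which are assumptions, not the original post's code): the server simply rebroadcasts each chunk to every other connected client, which can then copy it into an AudioBuffer and play it.

```javascript
// server.js - relay microphone chunks between clients with socket.io
const io = require('socket.io')(3000);

io.on('connection', socket => {
  socket.on('audio', chunk => {
    // Forward the raw PCM chunk to everyone except the sender
    socket.broadcast.emit('audio', chunk);
  });
});
```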

recording a remote webrtc stream with RecordRTC

被刻印的时光 ゝ Submitted on 2019-12-03 13:13:18
I am using the OpenTok JavaScript WebRTC library to host a 1-to-1 video chat (peer-to-peer). I can see my peer's video and hear the audio flawlessly. I want to record the audio/video of the other chat party (the remote peer). For this purpose, I'm using RecordRTC. I was able to record the video of the other chat participant (the video is outputted to an HTML video element), but, so far, I have not succeeded in recording audio (a dead-silent .wav file is as far as I could get). Using Chrome Canary (30.0.1554.0). This is my method: var clientVideo = $('#peerdiv video')[0];//peer's video (html element) var serverVideo =
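A minimal sketch of one alternative, assuming a RecordRTC build that accepts a MediaStream directly and that `remoteStream` is the remote peer's MediaStream obtained from the WebRTC library: record from the stream itself rather than from the <video> element. Note that Chrome builds of that era reportedly delivered silence when remote WebRTC audio was routed through Web Audio, which may be the underlying cause of the dead-silent .wav.

```javascript
// Record the remote peer's audio straight from its MediaStream.
const audioRecorder = RecordRTC(remoteStream, {
  type: 'audio',
  mimeType: 'audio/wav'
});
audioRecorder.startRecording();

// Later, when the call ends:
audioRecorder.stopRecording(() => {
  const blob = audioRecorder.getBlob();   // a .wav Blob of the remote audio
  // upload or save the blob
});
```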

Overlay two audio buffers into one buffer source

末鹿安然 Submitted on 2019-12-03 12:48:06
I'm trying to merge two buffers into one; I have been able to create the two buffers from the audio files and load and play them. Now I need to merge the two buffers into one buffer. How can they be merged? context = new webkitAudioContext(); bufferLoader = new BufferLoader( context, [ 'audio1.mp3', 'audio2.mp3', ], finishedLoading ); bufferLoader.load(); function finishedLoading(bufferList) { // Create the two buffer sources and play them both together. var source1 = context.createBufferSource(); var source2 = context.createBufferSource(); source1.buffer = bufferList[0]; source2.buffer =
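A minimal sketch of mixing the two buffers into one (it assumes both buffers share the same sample rate): sum the channel data sample by sample into a new AudioBuffer, then assign that buffer to a single BufferSourceNode.

```javascript
// Mix two AudioBuffers into one buffer by summing their samples.
function mixBuffers(context, bufferA, bufferB) {
  const channels = Math.max(bufferA.numberOfChannels, bufferB.numberOfChannels);
  const length = Math.max(bufferA.length, bufferB.length);
  const mixed = context.createBuffer(channels, length, bufferA.sampleRate);

  for (let ch = 0; ch < channels; ch++) {
    const out = mixed.getChannelData(ch);
    const a = bufferA.getChannelData(Math.min(ch, bufferA.numberOfChannels - 1));
    const b = bufferB.getChannelData(Math.min(ch, bufferB.numberOfChannels - 1));
    for (let i = 0; i < length; i++) {
      out[i] = (i < a.length ? a[i] : 0) + (i < b.length ? b[i] : 0);
    }
  }
  return mixed;
}

// Usage inside finishedLoading:
// var source = context.createBufferSource();
// source.buffer = mixBuffers(context, bufferList[0], bufferList[1]);
// source.connect(context.destination);
// source.start(0);
```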

Web Audio Api : How do I add a working convolver?

不羁的心 Submitted on 2019-12-03 12:17:08
What I am trying to learn / do: how to set up a simple working convolver (reverb) in my code sandbox below using an impulse response. I thought it was similar to setting up a filter, but things seem quite different. What I tried: As with all new technologies, things change at a fast pace, making it difficult to know which implementations are correct and which are not. I looked at countless Web Audio API convolver tutorials; many were old, and others worked but were far too "bloated", making it hard to understand what is going on. I tried to implement some of the examples from the Mozilla documentation:
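A minimal sketch of the bare-bones convolver setup (the impulse-response URL 'impulse.wav', the `sourceNode` name, and the dry/wet split are assumptions): load and decode an impulse response, assign it to a ConvolverNode's buffer, and mix the convolved "wet" signal with the dry signal at the destination.

```javascript
// Bare-bones convolution reverb: source -> convolver -> destination,
// mixed with the dry signal through gain nodes.
const context = new AudioContext();
const convolver = context.createConvolver();

fetch('impulse.wav')                              // placeholder IR file
  .then(res => res.arrayBuffer())
  .then(data => context.decodeAudioData(data))
  .then(irBuffer => {
    convolver.buffer = irBuffer;                  // the reverb character comes from this IR
  });

const dryGain = context.createGain();
const wetGain = context.createGain();
wetGain.gain.value = 0.4;                         // reverb amount

// Routing (assumes `sourceNode` exists):
// sourceNode.connect(dryGain).connect(context.destination);
// sourceNode.connect(convolver);
// convolver.connect(wetGain).connect(context.destination);
```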

understanding getByteTimeDomainData and getByteFrequencyData in web audio

狂风中的少年 Submitted on 2019-12-03 11:53:31
Question: The documentation for both of these methods is very generic wherever I look. I would like to know what exactly I'm looking at with the returned arrays I'm getting from each method. For getByteTimeDomainData, what time period is covered with each pass? I believe most oscilloscopes cover a 32-millisecond span for each pass. Is that what is covered here as well? For the actual element values themselves, the range seems to be 0 - 255. Is this equivalent to -1 to +1 volts? For getByteFrequencyData
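A minimal sketch illustrating the scaling behind both methods: getByteTimeDomainData returns fftSize samples covering fftSize / sampleRate seconds, with bytes 0..255 mapping linearly onto the waveform range -1..+1 (128 is silence), not onto volts; getByteFrequencyData fills frequencyBinCount (fftSize / 2) bins, each spanning sampleRate / fftSize Hz, scaled between the analyser's minDecibels and maxDecibels.

```javascript
const context = new AudioContext();
const analyser = context.createAnalyser();
analyser.fftSize = 2048;

// Time domain: fftSize samples per call.
const timeData = new Uint8Array(analyser.fftSize);
analyser.getByteTimeDomainData(timeData);
const spanSeconds = analyser.fftSize / context.sampleRate;   // ~46 ms at 44.1 kHz
const firstSample = (timeData[0] - 128) / 128;               // back to the -1..+1 range

// Frequency domain: fftSize / 2 bins per call.
const freqData = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(freqData);
const binWidthHz = context.sampleRate / analyser.fftSize;    // Hz covered by each bin
```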

OfflineAudioContext FFT analysis with chrome

為{幸葍}努か Submitted on 2019-12-03 11:36:31
I'm trying to build a waveform generator that gets an audio file's amplitude values and displays them on a canvas as quickly as possible (faster than realtime) in JavaScript. So I use OfflineAudioContext / webkitOfflineAudioContext, load the file, and start the analysis. The waveform is to fill a wide canvas. I analyse the buffer in a processor.onaudioprocess function. (I guess that's the way it works?) It works fine in Firefox, but I have an issue in Chrome: it seems to "jump" over much of the analysis in order to finish as soon as possible, and only returns a few coordinates (something like 16). Here is the
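A minimal sketch of an approach that avoids onaudioprocess callbacks entirely (the file name 'track.mp3' and the canvas width are assumptions): render the whole file with OfflineAudioContext.startRendering(), then read peak amplitudes directly from the rendered AudioBuffer, one value per pixel column. Because nothing depends on how the offline context schedules its processing callbacks, the result is the same in Chrome and Firefox.

```javascript
// Faster-than-realtime waveform extraction via offline rendering.
const OfflineCtx = window.OfflineAudioContext || window.webkitOfflineAudioContext;

fetch('track.mp3')
  .then(res => res.arrayBuffer())
  .then(data => new AudioContext().decodeAudioData(data))
  .then(decoded => {
    const offline = new OfflineCtx(1, decoded.length, decoded.sampleRate);
    const source = offline.createBufferSource();
    source.buffer = decoded;
    source.connect(offline.destination);
    source.start(0);
    return offline.startRendering();
  })
  .then(rendered => {
    const samples = rendered.getChannelData(0);
    const width = 800;                               // canvas width in pixels
    const block = Math.floor(samples.length / width);
    const peaks = [];
    for (let x = 0; x < width; x++) {
      let max = 0;
      for (let i = x * block; i < (x + 1) * block; i++) {
        max = Math.max(max, Math.abs(samples[i]));
      }
      peaks.push(max);                               // one amplitude per pixel column
    }
    // draw `peaks` onto the canvas...
  });
```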