web-audio-api

Firefox WebAudio createMediaElementSource not working

99封情书 submitted on 2019-12-03 11:02:19
I'm using the Web Audio API with a new Audio() object as the source. The following is a simplified version of what I am doing; however, it doesn't play any sound in Firefox 25.0.1.

var context;
if (window.webkitAudioContext) {
    context = new webkitAudioContext();
} else {
    context = new AudioContext();
}

var audio = new Audio();
// This file does seem to have a CORS header
audio.src = "http://upload.wikimedia.org/wikipedia/en/4/45/ACDC_-_Back_In_Black-sample.ogg";

var source;
function onCanPlay() {
    console.log("can play called");
    source = context.createMediaElementSource(audio);
    source.connect(context.destination);
}
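A likely culprit is cross-origin tainting: when a media element's source lives on another origin, Firefox routes silence through createMediaElementSource unless the element opts into CORS before the source is set. A minimal sketch of that fix, assuming the server (upload.wikimedia.org here) really does send an Access-Control-Allow-Origin header:

var context = new (window.AudioContext || window.webkitAudioContext)();
var audio = new Audio();
audio.crossOrigin = "anonymous";          // request the file with CORS so the output isn't muted
audio.src = "http://upload.wikimedia.org/wikipedia/en/4/45/ACDC_-_Back_In_Black-sample.ogg";

audio.addEventListener("canplay", function () {
    var source = context.createMediaElementSource(audio);
    source.connect(context.destination);  // wire the element into the graph, then play
    audio.play();
});

Note that createMediaElementSource may only be called once per element; a second call for the same Audio object throws.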

Cut and Paste audio using web audio api and wavesurfer.js

五迷三道 submitted on 2019-12-03 07:52:55
Question: I am currently trying to build a web editor that lets users easily adjust basic settings on their audio files. As a plugin I've integrated wavesurfer.js, since it has a very neat and cross-browser solution for its waveform. After indexing a must-have list of functionalities, I've decided that cut and paste are essential for making this product work. However, after spending hours trying to figure out how to implement this in the existing library, and even starting to rebuild the wavesurfer.js functionalities from scratch to understand the logic, I have yet to succeed.
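wavesurfer.js does not ship a cut/paste operation, but once you have a decoded AudioBuffer (wavesurfer's backend holds one) you can splice the raw channel data yourself and hand the result back to the player. A rough sketch of the buffer surgery, assuming an existing AudioContext named ctx; the helper name cutRegion is mine:

// Return a new AudioBuffer with [startSec, endSec) removed,
// plus the removed slice so it can be pasted elsewhere.
function cutRegion(ctx, buffer, startSec, endSec) {
    var rate = buffer.sampleRate;
    var start = Math.floor(startSec * rate);
    var end = Math.floor(endSec * rate);
    var kept = ctx.createBuffer(buffer.numberOfChannels, buffer.length - (end - start), rate);
    var clip = ctx.createBuffer(buffer.numberOfChannels, end - start, rate);
    for (var ch = 0; ch < buffer.numberOfChannels; ch++) {
        var data = buffer.getChannelData(ch);
        clip.getChannelData(ch).set(data.subarray(start, end));
        kept.getChannelData(ch).set(data.subarray(0, start), 0);
        kept.getChannelData(ch).set(data.subarray(end), start);
    }
    return { kept: kept, clip: clip };
}

Pasting is the same trick in reverse: allocate a buffer of the combined length and set() the segments in order, then reload it into wavesurfer.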

Web Audio API Analyser Node Not Working With Microphone Input

喜欢而已 submitted on 2019-12-03 05:57:31
The bug preventing getting microphone input per http://code.google.com/p/chromium/issues/detail?id=112367 for Chrome Canary is now fixed. This part does seem to be working. I can assign the mic input to an audio element and hear the results through the speaker. But I'd like to connect an analyser node in order to do FFT. The analyser node works fine if I set the audio source to a local file. The problem is that when connected to the mic audio stream, the analyser node just returns the base value as if it doesn't have an audio stream at all. (It's -100 over and over again if you're curious.)
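For reference, the graph that should work once the Chrome bug is gone looks like the sketch below. A common pitfall is letting the MediaStreamAudioSourceNode (or the stream itself) get garbage-collected, which silently yields the empty readings described above, so keep a long-lived reference. Shown with the current promise-based getUserMedia:

var context = new (window.AudioContext || window.webkitAudioContext)();
var micSource;  // keep a reference so the node isn't garbage-collected

navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
    micSource = context.createMediaStreamSource(stream);
    var analyser = context.createAnalyser();
    analyser.fftSize = 2048;
    micSource.connect(analyser);  // no need to reach destination just to run the FFT

    var bins = new Uint8Array(analyser.frequencyBinCount);
    (function poll() {
        analyser.getByteFrequencyData(bins);  // silence reads as analyser.minDecibels (-100 dB)
        requestAnimationFrame(poll);
    })();
});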

Web Audio API - record to MP3?

时光总嘲笑我的痴心妄想 submitted on 2019-12-03 05:06:28
Question: I am asking because I couldn't find the answer anywhere. I have successfully implemented RecorderJS in order to record microphone input in JS. However, the recorded file is WAV, which results in large files. I am looking for a way to record with JS directly to MP3, or to somehow encode the bits to MP3 instead of WAV. How can it be done? Is there a Web Audio API function that can do that, or a JS MP3 encoder of some sort?

Answer 1: The only JavaScript MP3 encoder I've seen is https://github.com/akrennmair/libmp3lame-js , which is a LAME port built with emscripten. It's supposed to be slow, and I've never used it.
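If a pure-JS encoder is acceptable, a later LAME port, lamejs, exposes a small synchronous API. A sketch of converting one mono Float32Array chunk (what RecorderJS hands you) into MP3 frames; treat the exact calls as something to verify against the lamejs README:

var encoder = new lamejs.Mp3Encoder(1, 44100, 128);  // channels, sample rate, kbps

function encodeChunk(float32Samples) {
    // LAME wants 16-bit signed PCM, so rescale the -1..1 floats first.
    var pcm = new Int16Array(float32Samples.length);
    for (var i = 0; i < float32Samples.length; i++) {
        var s = Math.max(-1, Math.min(1, float32Samples[i]));
        pcm[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
    }
    return encoder.encodeBuffer(pcm);  // Int8Array of MP3 bytes (may be empty)
}

// After the last chunk, flush the encoder and assemble the file:
// var mp3Parts = chunks.concat([encoder.flush()]);
// var blob = new Blob(mp3Parts, { type: "audio/mp3" });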

Failed to construct 'AudioContext': number of hardware contexts reached maximum

十年热恋 submitted on 2019-12-03 05:01:21
Is there a way to remove an AudioContext after I've created it?

var analyzers = [];
var contexts = [];
try {
    for (var i = 0; i < 20; i++) {
        contexts[i] = new AudioContext();
        analyzers[i] = contexts[i].createAnalyser();
    }
} catch (e) {
    console.log(e); // too many contexts created -- how do I remove them?
}

I've tried this, but it doesn't let me create new contexts afterwards:

analyzers.forEach(function (analyzer) {
    analyzer.disconnect(analyzer.context.destination);
});

I am using Chrome 36 on Ubuntu Linux 14.04.

You should really only have one AudioContext in the page. From the docs: "In most use cases, only a single AudioContext is used per document."
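The usual pattern is to create one context and hand out nodes from it; twenty analysers from a single context is fine, since nodes are cheap and contexts are not. If a context really must go away, newer browsers also provide AudioContext.close(), which releases the hardware handle; a sketch assuming a browser recent enough to have it:

var context = new AudioContext();  // create once, reuse everywhere
var analyzers = [];
for (var i = 0; i < 20; i++) {
    analyzers.push(context.createAnalyser());
}

// When a context genuinely has to be torn down:
context.close().then(function () {
    context = null;  // drop the reference so it can be collected
});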

understanding getByteTimeDomainData and getByteFrequencyData in web audio

青春壹個敷衍的年華 submitted on 2019-12-03 02:17:00
The documentation for both of these methods is very generic everywhere I look. I would like to know what exactly I'm looking at in the arrays returned by each method. For getByteTimeDomainData, what time period is covered by each pass? I believe most oscilloscopes cover a 32 millisecond span per pass; is that what is covered here as well? For the actual element values themselves, the range seems to be 0 - 255. Is this equivalent to -1 to +1 volts? For getByteFrequencyData the frequencies covered are based on the sampling rate, so each index is an actual frequency, but what do the 0 - 255 values represent?
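Briefly: the time-domain span is fftSize / sampleRate seconds (not a fixed 32 ms), the byte values are the -1..+1 waveform remapped so that 128 is zero (dimensionless amplitude, not volts), and the frequency bytes linearly map the analyser's minDecibels..maxDecibels range (-100..-30 dB by default). A sketch that prints the actual figures for a given analyser:

var context = new AudioContext();
var analyser = context.createAnalyser();
analyser.fftSize = 2048;

// Time domain: fftSize samples per call.
var spanMs = (analyser.fftSize / context.sampleRate) * 1000;
console.log(spanMs.toFixed(1) + " ms per getByteTimeDomainData pass");  // ~46.4 ms at 44.1 kHz

// Frequency domain: frequencyBinCount (= fftSize / 2) bins up to Nyquist.
var binWidthHz = (context.sampleRate / 2) / analyser.frequencyBinCount;
console.log(analyser.frequencyBinCount + " bins, " + binWidthHz.toFixed(1) + " Hz each");

// Byte 0 = analyser.minDecibels, byte 255 = analyser.maxDecibels.
function byteToDb(v) {
    return analyser.minDecibels + (v / 255) * (analyser.maxDecibels - analyser.minDecibels);
}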

Web Audio API for live streaming?

两盒软妹~` submitted on 2019-12-02 15:07:28
We need to stream live audio (from a medical device) to web browsers with no more than 3-5 s of end-to-end delay (assume 200 ms or less network latency). Today we use a browser plugin (NPAPI) for decoding, filtering (high, low, band), and playback of the audio stream (delivered via WebSockets). We want to replace the plugin. I was looking at various Web Audio API demos, and most of our required functionality (playback, gain control, filtering) appears to be available in the Web Audio API. However, it is not clear to me whether the Web Audio API can be used with streamed sources, as most of the Web Audio API examples I've found work with short, fully pre-loaded sounds.
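It can: nothing stops you from wrapping each incoming chunk in an AudioBuffer and scheduling the pieces back-to-back on the context clock, which keeps added latency to roughly one chunk. A sketch assuming the WebSocket delivers raw mono Float32 PCM at the context's sample rate (the endpoint and framing are assumptions, not part of the API):

var context = new AudioContext();
var nextStart = 0;  // context-clock time at which the next chunk should begin

var ws = new WebSocket("wss://example.org/audio");  // placeholder endpoint
ws.binaryType = "arraybuffer";

ws.onmessage = function (event) {
    var samples = new Float32Array(event.data);
    var buffer = context.createBuffer(1, samples.length, context.sampleRate);
    buffer.getChannelData(0).set(samples);

    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);  // gain/filter nodes would slot in here

    // Schedule seamlessly after the previous chunk, with a small safety margin.
    nextStart = Math.max(nextStart, context.currentTime + 0.05);
    source.start(nextStart);
    nextStart += buffer.duration;
};

A BiquadFilterNode between the source and destination would reproduce the plugin's high/low/band filtering.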

HTML5 voice recording with visualization [closed]

徘徊边缘 submitted on 2019-12-02 14:30:42
Question (closed as off-topic for Stack Overflow): I'm building an HTML5 application that records a voice, and while that voice is playing a visualizer should be in action. Here is my code:

// variables
var leftchannel = [];
var rightchannel = [];
var recorder = null;
var recording = false;
var recordingLength = 0;
var volume = null;
var audioInput = null;
var sampleRate = null;
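The standard shape for code like this is a ScriptProcessorNode that banks the channel data for the recording while an AnalyserNode drives the visualizer in parallel, both fed by the same mic source. A compressed sketch of how the variables above usually get wired up (ScriptProcessorNode is deprecated now, but it matches this era of code):

navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
    var context = new AudioContext();
    sampleRate = context.sampleRate;
    audioInput = context.createMediaStreamSource(stream);

    // Recorder branch: bank raw samples per channel.
    recorder = context.createScriptProcessor(2048, 2, 2);
    recorder.onaudioprocess = function (e) {
        if (!recording) return;
        leftchannel.push(new Float32Array(e.inputBuffer.getChannelData(0)));
        rightchannel.push(new Float32Array(e.inputBuffer.getChannelData(1)));
        recordingLength += 2048;
    };

    // Visualizer branch: poll an analyser and draw.
    var analyser = context.createAnalyser();
    var bins = new Uint8Array(analyser.frequencyBinCount);
    (function draw() {
        analyser.getByteFrequencyData(bins);
        // ...render bins to a canvas here...
        requestAnimationFrame(draw);
    })();

    audioInput.connect(recorder);
    audioInput.connect(analyser);
    recorder.connect(context.destination);  // a ScriptProcessor must be connected to fire
});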