web-audio-api

Get mic audio in Android with AudioContext

Question: (For some reason I can't get an answer to this problem...) Hello. I need to access the Android microphone. I made a tuner app for the web using this: https://github.com/cwilso/PitchDetect. It works just fine. However, when I build the app for Android using the Intel XDK and Cordova plugins, I can't get any mic input. I am not sure if I need to use this: https://github.com/edimuj/cordova-plugin-audioinput. It seems like the right way to get the AudioContext on Android. Plus, it shows a warning when installing the
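Where the WebView actually exposes getUserMedia, the standard route is to wrap the microphone stream in a MediaStreamAudioSourceNode and feed it to the same analysis graph the PitchDetect demo uses; a native bridge such as cordova-plugin-audioinput is the usual fallback when getUserMedia is unavailable (its exact options should be checked against the plugin README). A minimal sketch of the getUserMedia path, with the pitch-detection step left as a comment:

```
// Sketch: capture the microphone and hand it to a Web Audio graph.
// Assumes the WebView grants getUserMedia; otherwise a native bridge such as
// cordova-plugin-audioinput is needed.
var audioContext = new (window.AudioContext || window.webkitAudioContext)();

navigator.mediaDevices.getUserMedia({ audio: true })
  .then(function (stream) {
    var micSource = audioContext.createMediaStreamSource(stream);
    var analyser = audioContext.createAnalyser();
    analyser.fftSize = 2048;
    micSource.connect(analyser);            // do NOT connect to destination, to avoid feedback

    var buf = new Float32Array(analyser.fftSize);
    (function poll() {
      analyser.getFloatTimeDomainData(buf); // raw time-domain samples
      // ...run the app's existing autocorrelation / pitch code on `buf` here...
      requestAnimationFrame(poll);
    })();
  })
  .catch(function (err) {
    console.error('Microphone access failed:', err);
  });
```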

How to play a sound file in Safari with the Web Audio API?

Question: I'm modifying a script that plays an mp3, which I found on Codepen, to get it to work in Safari. In Firefox and Chrome it works fine, but Safari complains: "Unhandled Promise Rejection: TypeError: Not enough arguments index.html:25". I've read https://developer.apple.com/library/archive/documentation/AudioVideo/Conceptual/Using_HTML5_Audio_Video/PlayingandSynthesizingSounds/PlayingandSynthesizingSounds.html, but that goes into much more advanced stuff than I need. I just want to play the sound in my mp3
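The "Not enough arguments" rejection in older Safari typically comes from calling decodeAudioData with only one argument: Safari of that era implemented only the callback-based signature, and only the webkitAudioContext constructor. A hedged sketch of a callback-style loader (the mp3 path is a placeholder), which also resumes the context from a user gesture as Safari requires:

```
// Sketch: load and play an mp3 in a way older Safari accepts.
var AudioCtx = window.AudioContext || window.webkitAudioContext;
var context = new AudioCtx();

function playSound(url) {
  var request = new XMLHttpRequest();
  request.open('GET', url, true);
  request.responseType = 'arraybuffer';
  request.onload = function () {
    // Callback signature works everywhere; the promise form does not in old Safari.
    context.decodeAudioData(request.response, function (buffer) {
      var source = context.createBufferSource();
      source.buffer = buffer;
      source.connect(context.destination);
      source.start(0);
    }, function (err) {
      console.error('decodeAudioData failed', err);
    });
  };
  request.send();
}

document.addEventListener('click', function () {
  if (context.state === 'suspended' && context.resume) context.resume();
  playSound('sound.mp3'); // placeholder path
}, { once: true });
```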

HTML5 audio recording not working on Google Nexus

Question: I want to use HTML5 getUserMedia to record audio, play it back, and save it to a server. I am trying this on a Google Nexus (Android 4+) with Google Chrome 29. Is it possible? When I try code I found on the net, it asks for permission to record with the microphone, and using the phone I save a wav file, but there is no sound when playing it back. The same site works from the Chromium browser on my PC. Am I doing anything wrong, or is audio recording not actually working as claimed? Thanks. Answer 1: We now have Web Audio
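For reference, the MediaRecorder API postdates the Chrome 29 era described in the question, but on current Android Chrome it is the simplest way to record a microphone clip and play it back. A minimal sketch:

```
// Sketch: record a short clip with MediaRecorder and play it back.
navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
  var recorder = new MediaRecorder(stream);
  var chunks = [];

  recorder.ondataavailable = function (e) { chunks.push(e.data); };
  recorder.onstop = function () {
    var blob = new Blob(chunks, { type: recorder.mimeType });
    var audio = new Audio(URL.createObjectURL(blob));
    audio.play();
    // To save on the server, POST the blob, e.g. fetch('/upload', { method: 'POST', body: blob }).
  };

  recorder.start();
  setTimeout(function () { recorder.stop(); }, 5000); // record 5 seconds
});
```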

How can I prevent breakup/choppiness/glitches when using an AudioWorklet to stream captured audio?

Question: We've been working on a JavaScript-based audio chat client that runs in the browser and sends audio samples to a server via a WebSocket. We previously tried using the Web Audio API's ScriptProcessorNode to obtain the sample values. This worked well on our desktops and laptops, but we experienced poor audio quality when transmitting from a handheld platform we must support. We've attributed this to the documented script processor performance issues (https://developer.mozilla.org/en-US/docs/Web
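One common mitigation with AudioWorklets is to do nothing inside process() except copy: accumulate the 128-frame render quanta into a larger buffer and post it to the main thread (which owns the WebSocket) only when that buffer is full, transferring the underlying ArrayBuffer so nothing is copied per block on the audio thread. A sketch of such a processor; the 4096-sample chunk size and the node name are arbitrary choices here:

```
// capture-processor.js -- loaded with audioContext.audioWorklet.addModule('capture-processor.js')
class CaptureProcessor extends AudioWorkletProcessor {
  constructor() {
    super();
    this.chunk = new Float32Array(4096); // 32 render quanta, ~85 ms at 48 kHz
    this.offset = 0;
  }

  process(inputs) {
    var input = inputs[0];
    if (input.length > 0) {
      var samples = input[0];              // first channel, 128 frames per call
      this.chunk.set(samples, this.offset);
      this.offset += samples.length;
      if (this.offset >= this.chunk.length) {
        // Transfer the buffer so the audio thread never copies it.
        this.port.postMessage(this.chunk.buffer, [this.chunk.buffer]);
        this.chunk = new Float32Array(4096);
        this.offset = 0;
      }
    }
    return true; // keep the processor alive
  }
}
registerProcessor('capture-processor', CaptureProcessor);
```

On the main thread, the corresponding AudioWorkletNode's port.onmessage handler can then forward each transferred buffer over the WebSocket.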

Gradually Change Web Audio API Panner

Question: I'm trying to use a simple HTML range input to control the panning of my Web Audio API audio, but I can only get three "positions" for my audio output: center, 100% to the left, and 100% to the right. I would like something in between those positions, like 20% left and 80% right, and so on... The code that I'm using is:

//Creating the node
var pannerNode = context.createPanner();
//Getting the value from the HTML input and using it on the position X value
document.getElementById('panInput')
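For simple left/right balance, a StereoPannerNode (rather than the 3-D PannerNode used above) is usually the easier tool: its pan AudioParam runs continuously from -1 (full left) to +1 (full right), so the range input only needs to be normalized into that interval. A sketch, assuming the "panInput" range element spans -100 to 100 and sourceNode is whatever is already playing:

```
// Sketch: smooth stereo panning driven by <input type="range" id="panInput" min="-100" max="100">.
var panNode = context.createStereoPanner();
sourceNode.connect(panNode);
panNode.connect(context.destination);

document.getElementById('panInput').addEventListener('input', function (e) {
  var pan = e.target.value / 100;                      // map -100..100 to -1..1
  // Smooth the change instead of jumping, to avoid zipper noise.
  panNode.pan.setTargetAtTime(pan, context.currentTime, 0.03);
});
```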

How can I connect two input channels to the ScriptProcessorNode? (Web Audio API, JavaScript)

Question: I am trying to implement a ScriptProcessorNode with two input channels and one output channel.

var source = new Array(2);
source[0] = context.createBufferSource();
source[0].buffer = buffer[0];
source[1] = context.createBufferSource();
source[1].buffer = buffer[1];
var test = context.createScriptProcessor(4096, 2, 1);
source[0].connect(test, 0, 0);
source[1].connect(test, 0, 1);
test.connect(context.destination);
source[0].start();
source[1].start();

When I run this code in Google Chrome as well as
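A ScriptProcessorNode exposes only one input, and the third argument of connect() selects an input of the destination node, not a channel of that input. The usual fix is to merge the two mono sources into one stereo signal with a ChannelMergerNode and feed that to the processor. A sketch along the lines of the code above:

```
// Sketch: merge two mono sources into the two channels of one ScriptProcessorNode input.
var merger = context.createChannelMerger(2);
source[0].connect(merger, 0, 0); // source 0 -> merger input 0 (left channel)
source[1].connect(merger, 0, 1); // source 1 -> merger input 1 (right channel)

var test = context.createScriptProcessor(4096, 2, 1);
test.onaudioprocess = function (e) {
  var left  = e.inputBuffer.getChannelData(0);  // data from source[0]
  var right = e.inputBuffer.getChannelData(1);  // data from source[1]
  var out   = e.outputBuffer.getChannelData(0);
  for (var i = 0; i < out.length; i++) {
    out[i] = 0.5 * (left[i] + right[i]);        // e.g. mix down to mono
  }
};

merger.connect(test);
test.connect(context.destination);
source[0].start();
source[1].start();
```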

Chrome: onaudioprocess stops getting called after a while

Question: I'm using ScriptProcessorNode's onaudioprocess callback to process the microphone input. By connecting a MediaStreamSourceNode to the ScriptProcessorNode, I can get the raw audio data within the onaudioprocess callback function. However, after about 30 seconds (this varies, ranging from 10 to 35 sec), the browser stops calling onaudioprocess. In the following code, the console.log output ('>>') always stops after about 30 sec.

var ctx = new AudioContext();
var BUFFER_LENGTH = 4096;
console.log(
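A frequent cause in Chrome is that the ScriptProcessorNode and MediaStreamAudioSourceNode are garbage-collected once the local variables referencing them go out of scope, at which point onaudioprocess silently stops firing. Keeping the nodes reachable from a long-lived object is the usual workaround; a hedged sketch:

```
// Sketch: keep the nodes reachable so Chrome cannot garbage-collect them,
// a common reason onaudioprocess stops firing after ~30 s.
var ctx = new AudioContext();
var BUFFER_LENGTH = 4096;
var audioGraph = {};                       // long-lived holder, e.g. a global

navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
  audioGraph.source = ctx.createMediaStreamSource(stream);
  audioGraph.processor = ctx.createScriptProcessor(BUFFER_LENGTH, 1, 1);

  audioGraph.processor.onaudioprocess = function (e) {
    var samples = e.inputBuffer.getChannelData(0);
    console.log('>>', samples[0]);         // keeps logging as long as the nodes stay alive
  };

  audioGraph.source.connect(audioGraph.processor);
  audioGraph.processor.connect(ctx.destination);
});
```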

JavaScript Web Audio API AnalyserNode Not Working

Question: The code is supposed to stream any URL and provide a visualization of the audio. Unfortunately, the visualizer is not working. The visualization relies on data from the AnalyserNode, which always returns empty data. Why doesn't the AnalyserNode in this code work? The numberOfOutputs on the source node increases after I .connect() them, but the numberOfInputs on the AnalyserNode does not change.

<html>
<head>
<script>
var context;
var source;
var analyser;
var canvas;
var
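One common reason for an analyser that only ever returns zeros is a CORS-tainted source: a MediaElementAudioSourceNode built from a cross-origin URL outputs silence unless the server sends CORS headers and the element opts in with crossOrigin = 'anonymous'. A sketch of a working element-source / analyser chain (the stream URL is a placeholder):

```
// Sketch: analyse a streamed URL; zeros from the analyser usually mean a CORS-tainted source.
var context = new (window.AudioContext || window.webkitAudioContext)();
var audio = new Audio();
audio.crossOrigin = 'anonymous';               // server must also send Access-Control-Allow-Origin
audio.src = 'https://example.com/stream.mp3';  // placeholder URL

var source = context.createMediaElementSource(audio);
var analyser = context.createAnalyser();
analyser.fftSize = 256;

source.connect(analyser);
analyser.connect(context.destination);         // element audio is now routed through the graph

var data = new Uint8Array(analyser.frequencyBinCount);
function draw() {
  analyser.getByteFrequencyData(data);         // non-zero once audio plays and is not tainted
  // ...paint `data` onto the canvas here...
  requestAnimationFrame(draw);
}

audio.addEventListener('playing', draw);
audio.play();
```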

How do I load an AudioBuffer with Angular $http?

Question: I am getting started with the Web Audio API to experiment a little with it, and wanted to see how best to work with it in AngularJS. Other Web Audio stuff I have tried seems to work in Angular, such as creating a Web Audio sine wave, etc., but I am just not sure of the best way to load audio in Angular if I then want to be able to manipulate it with the Web Audio API. I naively tried to put some functions directly into an AngularJS controller, which doesn't seem right - code below. In the Chrome Dev
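One idiomatic AngularJS approach is to move the loading into a service: have $http fetch the file as an ArrayBuffer (responseType: 'arraybuffer') and hand the result to decodeAudioData. The module, service, and file names below are made up for the sketch:

```
// Sketch: an AngularJS service that loads a file into an AudioBuffer.
angular.module('app').factory('audioLoader', ['$http', '$q', function ($http, $q) {
  var context = new (window.AudioContext || window.webkitAudioContext)();

  return {
    context: context,
    load: function (url) {
      return $http.get(url, { responseType: 'arraybuffer' }).then(function (response) {
        // Wrap the callback form so it also works in Safari.
        return $q(function (resolve, reject) {
          context.decodeAudioData(response.data, resolve, reject);
        });
      });
    }
  };
}]);

// In a controller:
// audioLoader.load('audio/loop.wav').then(function (buffer) {
//   var src = audioLoader.context.createBufferSource();
//   src.buffer = buffer;
//   src.connect(audioLoader.context.destination);
//   src.start();
// });
```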

Need desired format of .wav in Recorder.js

Question: I am using recorder.js to record audio. When I download the .wav file it is 48 kHz, but I want 16 kHz, mono channel.

function init(config) {
  sampleRate = config.sampleRate;   // to 16000
  numChannels = config.numChannels; // to 1
  initBuffers();
}

It changes the recorded voice (it sounds like a robot voice). Help me get a 16 kHz, mono .wav file with a normal-sounding voice. (Thanks in advance.) My code is here.

Source: https://stackoverflow.com/questions/49043887/need-desired-format-of-wav-in-recoder-js
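Changing sampleRate in the config only changes the number written into the WAV header; the samples themselves are still captured at the AudioContext's native rate (48 kHz here), which is why playback sounds pitched down like a robot. The data has to be resampled to 16 kHz (and the channels mixed to mono) before the WAV is encoded. A rough downsampling sketch, independent of Recorder.js internals:

```
// Sketch: naive downsampler from the context's rate (e.g. 48000) to 16000 Hz.
// Averages the input samples that fall into each output frame; for better quality,
// use a proper low-pass filter or OfflineAudioContext resampling.
function downsampleBuffer(samples, inputRate, targetRate) {
  if (targetRate >= inputRate) return samples;
  var ratio = inputRate / targetRate;
  var newLength = Math.round(samples.length / ratio);
  var result = new Float32Array(newLength);

  for (var i = 0; i < newLength; i++) {
    var start = Math.floor(i * ratio);
    var end = Math.min(Math.floor((i + 1) * ratio), samples.length);
    var sum = 0;
    for (var j = start; j < end; j++) sum += samples[j];
    result[i] = sum / (end - start);
  }
  return result;
}

// Usage: mix the two channels to mono first (average them), run downsampleBuffer,
// then write the WAV header with sampleRate = 16000 and numChannels = 1.
```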