web-audio-api

Cracks in webaudio playback during streaming of raw audio data

爱⌒轻易说出口 submitted on 2019-12-06 07:21:31
Question: I have a server sending chunks of raw audio over a WebSocket. The idea is to retrieve those and play them in a way that gives the smoothest playback possible. Here is the most important piece of code:

    ws.onmessage = function (event) {
        var view = new Int16Array(event.data);
        var viewf = new Float32Array(view.length);
        audioBuffer = audioCtx.createBuffer(1, viewf.length, 22050);
        audioBuffer.getChannelData(0).set(viewf);
        source = audioCtx.createBufferSource();
        source.buffer = audioBuffer;
        source
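A sketch of one common fix, under two assumptions not stated in the excerpt: the Int16 samples need scaling into the [-1, 1) range a Float32 AudioBuffer expects (the excerpt allocates viewf but never fills it from view), and each chunk should be scheduled at a running playhead time rather than started immediately, so chunks abut without gaps. `ws` is assumed to be the open WebSocket from the question; the browser-only wiring is guarded so the pure helper also runs outside a browser.

```javascript
// Pure helper: convert a signed 16-bit PCM chunk to Float32 in [-1, 1).
function int16ToFloat32(int16) {
  const out = new Float32Array(int16.length);
  for (let i = 0; i < int16.length; i++) {
    out[i] = int16[i] / 32768; // scale Int16 range to Float32 range
  }
  return out;
}

// Browser-only wiring (guarded so this file also runs under Node).
if (typeof AudioContext !== 'undefined') {
  const audioCtx = new AudioContext();
  let playhead = 0; // next start time on the context clock

  ws.onmessage = function (event) {
    const samples = int16ToFloat32(new Int16Array(event.data));
    const buffer = audioCtx.createBuffer(1, samples.length, 22050);
    buffer.getChannelData(0).set(samples);
    const source = audioCtx.createBufferSource();
    source.buffer = buffer;
    source.connect(audioCtx.destination);
    // Schedule at the running playhead so consecutive chunks abut seamlessly.
    playhead = Math.max(playhead, audioCtx.currentTime);
    source.start(playhead);
    playhead += buffer.duration;
  };
}
```

Scheduling against `audioCtx.currentTime` instead of calling `start()` with no argument is what removes the per-chunk clicks: each buffer begins exactly where the previous one ends.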

Sending Audio Blob to server

大兔子大兔子 submitted on 2019-12-06 06:34:19
I am trying to send an audio blob produced by the Web Audio API and Recorder.js to my Laravel controller using jQuery's $.post method. Here is what I am trying:

    $('#save_oralessay_question_btn').click(function(e){
        var question_content = $('#question_text_area').val();
        var test_id_fr = parseInt($('#test_id_fr').val());
        var question_type = parseInt($('#question_type').val());
        var rubric_id_fr = $('#rubric_id_fr').val();
        var reader = new FileReader();
        reader.onload = function(event){
            var form_data = new FormData();
            form_data.append('audio_data', event.target.result);
            var result = $.post('../questions'
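One likely stumbling block here: `$.post` cannot send a `FormData` body directly, because jQuery tries to serialize it and sets its own content type; you need either `$.ajax` with `processData: false` and `contentType: false`, or plain `fetch`. A minimal sketch, assuming the `'../questions'` endpoint from the question and appending the raw blob rather than the FileReader result (`buildUpload` and the field names beyond `audio_data` are illustrative):

```javascript
// Build a multipart form with the recorded blob plus any extra fields.
function buildUpload(blob, fields) {
  const form = new FormData();
  form.append('audio_data', blob, 'recording.wav');
  for (const [key, value] of Object.entries(fields)) {
    form.append(key, String(value));
  }
  return form;
}

// Send with fetch so the browser sets the multipart boundary itself.
function sendRecording(blob, fields) {
  return fetch('../questions', { method: 'POST', body: buildUpload(blob, fields) });
}
```

With `$.ajax` the equivalent is `$.ajax({url: '../questions', type: 'POST', data: form, processData: false, contentType: false})`.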

AudioContext gain Node does not mute audio source (Web Audio API)

百般思念 submitted on 2019-12-06 05:30:11
I have some music visualizations I made with three.js and the Web Audio API, and I'm having issues muting the audio. I currently have an AudioContext object with an analyser and source buffer. I'm working on adding a gain node to mute the audio, which is not currently working. When I click mute, the audio level changes (it actually gets louder), so I know the gain is affecting something. Code:

    // AudioHelper class constructor
    function AudioHelper() {
        this.javascriptNode;
        this.audioContext;
        this.sourceBuffer;
        this.analyser;
        this.gainNode;
        this.isMuted;
    }
    // Initialize context, analyzer etc
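A common cause of "mute makes it louder": the source is connected to the destination along two parallel paths (for example both directly and through the gain node), so zeroing the gain silences only one path while the phase relationship changes the level. A sketch of the single-chain wiring that avoids this, with the button id `'mute'` as an assumption; `toggleMute` is the testable pure part:

```javascript
// Pure helper: flip the mute flag and set the gain value accordingly.
function toggleMute(gainParam, isMuted) {
  const nowMuted = !isMuted;
  gainParam.value = nowMuted ? 0 : 1;
  return nowMuted;
}

// Browser-only wiring: keep ONE chain, source -> gain -> analyser -> destination.
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const source = ctx.createBufferSource();
  const gainNode = ctx.createGain();
  const analyser = ctx.createAnalyser();
  source.connect(gainNode);
  gainNode.connect(analyser);
  analyser.connect(ctx.destination);

  let muted = false;
  document.getElementById('mute').onclick = () => {
    muted = toggleMute(gainNode.gain, muted);
  };
}
```

If the analyser must keep receiving signal while muted, place the gain node after the analyser instead; the point is that every path to the destination passes through the gain node exactly once.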

How can I get the current time when using playbackRate.value in the Web Audio API

故事扮演 submitted on 2019-12-06 04:52:05
Question: I need to know the current time of a source that is playing, but I can't use context.currentTime, because when I change source.playbackRate.value the context's clock doesn't change with it, so I can't determine the current position in the sound. Is there another way? Edit, some code: I use this function to load and play an mp3 from the network:

    function loadSoundFile(url) {
        source = null;
        var request = new XMLHttpRequest();
        request.open('GET', url, true);
        request.responseType
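One way to solve this, sketched here as a hypothetical helper rather than anything the API provides: keep your own playhead by accumulating elapsed context time multiplied by whatever rate was in force during each interval, folding in the old rate every time you change it.

```javascript
// Rate-aware playback position tracker.
class PlaybackClock {
  constructor(startTime, rate = 1) {
    this.lastTime = startTime; // context time of the last update
    this.rate = rate;          // current playbackRate
    this.position = 0;         // seconds into the source material
  }

  // Call with the current context time whenever the position is needed.
  positionAt(now) {
    this.position += (now - this.lastTime) * this.rate;
    this.lastTime = now;
    return this.position;
  }

  // Fold in time at the old rate before switching to the new one.
  setRate(now, rate) {
    this.positionAt(now);
    this.rate = rate;
  }
}
```

In the browser this would be driven by the real clock, e.g. `clock.setRate(context.currentTime, 0.5)` whenever you assign `source.playbackRate.value`, and `clock.positionAt(context.currentTime)` to read the position.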

Ask for microphone on onclick event

徘徊边缘 submitted on 2019-12-06 04:31:07
Question: The other day I stumbled upon this example of a JavaScript audio recorder: http://webaudiodemos.appspot.com/AudioRecorder/index.html which I ended up using to implement my own. The problem I'm having is in this file:

    var audioContext = new webkitAudioContext();
    var audioInput = null,
        realAudioInput = null,
        inputPoint = null,
        audioRecorder = null;
    var rafID = null;
    var analyserContext = null;
    var canvasWidth, canvasHeight;
    var recIndex = 0;
    /* TODO:
    - offer mono option
    - "Monitor
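To prompt for the microphone only on click rather than on page load, the getUserMedia call can simply be moved into the click handler. A sketch using the modern promise-based `navigator.mediaDevices.getUserMedia` instead of the prefixed `webkitAudioContext`-era callbacks in the demo; the button id `'record'` and the `micConstraints` helper are assumptions:

```javascript
// Hypothetical helper building the constraints object.
function micConstraints(echoCancellation = true) {
  return { audio: { echoCancellation }, video: false };
}

// Browser-only: request the microphone inside the click handler,
// so the permission prompt appears only after a user gesture.
if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  document.getElementById('record').onclick = async () => {
    const audioContext = new AudioContext();
    const stream = await navigator.mediaDevices.getUserMedia(micConstraints());
    const audioInput = audioContext.createMediaStreamSource(stream);
    // ...connect audioInput into the recorder/analyser graph here...
  };
}
```

Creating the AudioContext inside the gesture handler also sidesteps modern autoplay restrictions, which suspend contexts created before any user interaction.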

How to seamlessly loop sound with web audio api

一个人想着一个人 submitted on 2019-12-06 02:25:56
Question: I can't find a clear answer to this question anywhere. I'm looking for the easiest way to seamlessly loop a .wav file automatically on document load in Chrome. It seems that the Web Audio API is the best practice, but I can't find simple documentation. Support for Safari and others would be great too, but is not as important. I have looked at the w3.org example, but it didn't help. I think this is the closest to what I want, besides the on.click for the buttons: https://forestmist.org/blog/web-audio
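A minimal sketch of the usual approach: decode the .wav once on load and loop it with an AudioBufferSourceNode, which loops sample-accurately (unlike the `<audio>` element). The file name `loop.wav` and the `loopRegion` helper are assumptions; `loopStart`/`loopEnd` expect seconds, so the helper converts from sample offsets.

```javascript
// Pure helper: convert a sample-offset loop region to the seconds
// that loopStart/loopEnd expect.
function loopRegion(startSample, endSample, sampleRate) {
  return {
    loopStart: startSample / sampleRate,
    loopEnd: endSample / sampleRate,
  };
}

// Browser-only: fetch, decode, and loop the whole buffer.
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  fetch('loop.wav')                        // hypothetical file name
    .then(r => r.arrayBuffer())
    .then(data => ctx.decodeAudioData(data))
    .then(buffer => {
      const source = ctx.createBufferSource();
      source.buffer = buffer;
      source.loop = true;
      const region = loopRegion(0, buffer.length, buffer.sampleRate);
      source.loopStart = region.loopStart;
      source.loopEnd = region.loopEnd;
      source.connect(ctx.destination);
      source.start(0);
    });
}
```

Note that most browsers now require a user gesture before audio will actually start, so "on document load" may need to resume the context from a first click.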

Record Sounds from AudioContext (Web Audio API)

半腔热情 submitted on 2019-12-06 02:23:34
Question: Is there a way to record the audio data that's being sent to webkitAudioContext.destination? The data that the nodes are sending there is being played by the browser, so there should be some way to store that data into a (.wav) file.

Answer 1: Currently, there's no native way to do that, but as Max said in the comment above, Recorderjs does essentially this (it doesn't chain onto the destination, but is a ScriptProcessorNode you can connect other nodes to and have its input recorded). I built
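A sketch of the Recorder.js-style approach the answer describes: route the nodes you want captured into a ScriptProcessorNode, copy each input block as it arrives, and later merge the blocks into one Float32Array for WAV encoding. The buffer size and variable names are assumptions; `mergeChunks` is the pure, testable part.

```javascript
// Pure helper: concatenate recorded Float32 blocks into one array.
function mergeChunks(chunks) {
  const total = chunks.reduce((n, c) => n + c.length, 0);
  const out = new Float32Array(total);
  let offset = 0;
  for (const c of chunks) {
    out.set(c, offset);
    offset += c.length;
  }
  return out;
}

// Browser-only wiring: tap the graph with a ScriptProcessorNode.
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const recorded = [];
  const tap = ctx.createScriptProcessor(4096, 1, 1);
  tap.onaudioprocess = (e) => {
    // Copy: the engine reuses the underlying buffer between callbacks.
    recorded.push(new Float32Array(e.inputBuffer.getChannelData(0)));
  };
  // Connect whatever you want captured into `tap`, and keep the tap
  // connected to the destination so the browser keeps processing it.
  tap.connect(ctx.destination);
  // const merged = mergeChunks(recorded); // -> encode as a WAV file
}
```

In current browsers a MediaStreamAudioDestinationNode feeding a MediaRecorder is an alternative that avoids ScriptProcessorNode, which is now deprecated.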

Safari 6.0.2 not calling onaudioprocess

喜你入骨 submitted on 2019-12-06 02:19:00
I've earlier successfully used the JavaScriptAudioNode in the Web Audio API to synthesize and mix audio, both in Chrome and Safari 6.0. However, the latest version of Safari no longer appears to work, because it does not call onaudioprocess to fill the source buffers. This is a simplified example which plays only silence and appends text to the document body on each call to onaudioprocess:

    <html>
    <head>
    <script src="http://ajax.googleapis.com/ajax/libs/jquery/1.9.0/jquery.min.js"></script>
    <script type="text/javascript">
    $(document).ready(function() {
        $("a").click(function() {
            var context =
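A widely reported cause in Safari builds of that era: the script node (or the context created inside the click handler) was garbage-collected because nothing held a reference to it, so onaudioprocess silently stopped firing. A sketch of the workaround, keeping both in variables that outlive the handler; `fillSine` is a hypothetical generator standing in for the silence in the question.

```javascript
// Module/global scope, NOT inside the click handler: keeping these
// references alive prevents the node from being garbage-collected.
let context;
let scriptNode;

// Pure helper: fill a buffer with a sine wave, returning the phase
// so it can be carried across callbacks without clicks.
function fillSine(buffer, phase, freq, sampleRate) {
  for (let i = 0; i < buffer.length; i++) {
    buffer[i] = Math.sin(phase);
    phase += 2 * Math.PI * freq / sampleRate;
  }
  return phase;
}

// Browser-only wiring, covering both prefixed and unprefixed contexts.
if (typeof AudioContext !== 'undefined' || typeof webkitAudioContext !== 'undefined') {
  const Ctx = typeof AudioContext !== 'undefined' ? AudioContext : webkitAudioContext;
  context = new Ctx();
  let phase = 0;
  scriptNode = context.createScriptProcessor(4096, 0, 1);
  scriptNode.onaudioprocess = (e) => {
    phase = fillSine(e.outputBuffer.getChannelData(0), phase, 440, context.sampleRate);
  };
  scriptNode.connect(context.destination);
}
```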

Record audio on multiple tracks simultaneously from multiple devices

徘徊边缘 submitted on 2019-12-06 01:26:53
I'm currently developing an audio web application using the Web Audio API in JavaScript, but I found a problem: I need to record simultaneously from different devices to different tracks (imagine, for example, a sound card with 8 inputs recording into 8 buffers independently, in order to record a drummer), but I have not found any way to tell the AudioContext which device it must record from :( Can anybody help me? Thanks a lot :)

Well, you can have multiple microphones - but only if they're plugged into a multi-channel interface. Separate devices would be addressed by calling getUserMedia multiple times
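A sketch of the answer's suggestion: enumerate the audio inputs, then call getUserMedia once per device with an exact `deviceId` constraint, yielding one MediaStream (and hence one recording track) per physical input. The `constraintsFor` helper is an assumption introduced here for illustration.

```javascript
// Pure helper: constraints selecting one specific input device.
function constraintsFor(deviceId) {
  return { audio: { deviceId: { exact: deviceId } } };
}

// Browser-only: one getUserMedia call per audio input device.
if (typeof navigator !== 'undefined' && navigator.mediaDevices) {
  navigator.mediaDevices.enumerateDevices().then(async (devices) => {
    const inputs = devices.filter(d => d.kind === 'audioinput');
    const streams = [];
    for (const d of inputs) {
      streams.push(await navigator.mediaDevices.getUserMedia(constraintsFor(d.deviceId)));
    }
    // Each stream can now feed its own MediaStreamAudioSourceNode or recorder.
  });
}
```

Note that device labels and stable deviceIds are only exposed after the user has granted microphone permission at least once.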

Speex split audio data - WebAudio - VOIP

只谈情不闲聊 submitted on 2019-12-05 20:03:22
I'm running a little app that encodes and decodes an audio array with the Speex codec in JavaScript: https://github.com/dbieber/audiorecorder with a small array filled with a sine waveform:

    for(var i = 0; i < 16384; i++)
        data.push(Math.sin(i/10));

This works. But I want to build a VOIP application and have more than one array. So if I split my array up into 2 parts and encode > decode > merge, it doesn't sound the same as before. Take a look at this fiddle: http://jsfiddle.net/exh63zqL/ Both buttons should give the same audio output. How can I get the same output both ways? Is there a special mode in Speex
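One plausible explanation, offered as an assumption rather than a diagnosis of the linked repo: Speex is a frame-based codec (160 samples per narrowband frame), so cutting the PCM at an arbitrary point and encoding each half with fresh encoder state will not reconstruct the original signal. Splitting only on frame boundaries and reusing one encoder instance across chunks is the usual remedy; `FRAME_SIZE` and `splitIntoFrames` below are illustrative names, not part of the audiorecorder API.

```javascript
// Narrowband Speex frame size in samples (160 at 8 kHz).
const FRAME_SIZE = 160;

// Pure helper: cut a signal into whole codec frames only.
function splitIntoFrames(samples) {
  const frames = [];
  for (let i = 0; i + FRAME_SIZE <= samples.length; i += FRAME_SIZE) {
    frames.push(samples.slice(i, i + FRAME_SIZE));
  }
  return frames; // any trailing partial frame must be buffered, not encoded
}
```

For VOIP streaming, each network packet would then carry a whole number of frames, and the decoder on the far side keeps its own persistent state across packets.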