web-audio-api

Distorted audio in iOS 7.1 with WebAudio API

寵の児 submitted on 2019-12-21 09:13:52
Question: On iOS 7.1, I keep getting a buzzing / noisy / distorted sound when playing back audio using the Web Audio API. It sounds distorted like this, instead of normal like this. The same files are fine when using HTML5 audio. It all works fine on desktop (Firefox, Chrome, Safari). EDIT: The audio is distorted in the iOS Simulator on iOS 7.1, 8.1, and 8.2. The buzzing sound often starts before I even play anything back. The audio is distorted on a physical iPhone running iOS 7.1, in both Chrome
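
A minimal sketch of a community workaround, not a confirmed fix: this distortion is often attributed to the AudioContext being created at a sample rate that no longer matches the iOS hardware rate (which can switch, e.g. to 48 kHz, after other media plays). The helper name and the silent-buffer trick are illustrative assumptions, not from the question itself.

```javascript
// Sketch of a workaround, assuming the distortion is a sample-rate mismatch.
function createFixedAudioContext(desiredRate) {
  var AudioCtx = window.AudioContext || window.webkitAudioContext;
  var context = new AudioCtx();

  if (context.sampleRate !== desiredRate) {
    // Play a short silent buffer to nudge the hardware, then rebuild the
    // context so it is created at the corrected rate.
    var silence = context.createBuffer(1, 1, context.sampleRate);
    var source = context.createBufferSource();
    source.buffer = silence;
    source.connect(context.destination);
    source.start(0);
    if (context.close) { context.close(); }
    context = new AudioCtx();
  }
  return context;
}

var context = createFixedAudioContext(44100); // assumes 44.1 kHz assets
```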

merge multiple audio buffer sources

走远了吗. submitted on 2019-12-21 05:48:09
Question: A question about HTML5 Web Audio: is it possible to merge multiple songs together? I have different tracks that are all played at the same time using Web Audio, but I need to process the audio, so I need all of it inside one buffer instead of each track having its own buffer. I've tried merging them by adding their channel data, but I always get "Uncaught RangeError: Index is out of range." function mergeBuffers(recBuffers, recLength){ var result = new Float32Array(recLength*2); var
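
A minimal sketch of summing several AudioBuffers into one, assuming all tracks share the context's sample rate. The RangeError in the question typically means the output Float32Array is shorter than one of the inputs, so here the output is sized to the longest track. The function name is illustrative.

```javascript
function mixBuffers(context, buffers) {
  // Size the output to the longest input to avoid out-of-range writes.
  var length = Math.max.apply(null, buffers.map(function (b) { return b.length; }));
  var output = context.createBuffer(2, length, context.sampleRate);

  for (var channel = 0; channel < 2; channel++) {
    var outData = output.getChannelData(channel);
    buffers.forEach(function (buffer) {
      // Fall back to channel 0 for mono inputs.
      var inData = buffer.getChannelData(Math.min(channel, buffer.numberOfChannels - 1));
      for (var i = 0; i < inData.length; i++) {
        outData[i] += inData[i]; // additive mix; may clip without gain scaling
      }
    });
  }
  return output;
}
```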

Generate sine wave and play it in the browser [closed]

ぃ、小莉子 submitted on 2019-12-21 05:21:34
Question: Closed. This question needs to be more focused and is not currently accepting answers. Closed 3 years ago. I need sample code that can generate a sine wave (an array of samples) and then play it, all done in the browser using some HTML5 API in JavaScript. (I am tagging this web-audio, although I am not 100% sure it is applicable.) Answer 1: This is how to play a 441 Hertz sine wave tone in the
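
A minimal sketch of the buffer-filling approach the truncated answer appears to describe: fill one second of samples with a sine whose cycle is 100 samples long, which at a 44.1 kHz sample rate gives 44100 / 100 = 441 Hz.

```javascript
var AudioCtx = window.AudioContext || window.webkitAudioContext;
var context = new AudioCtx();

var sampleRate = context.sampleRate;                           // typically 44100
var buffer = context.createBuffer(1, sampleRate, sampleRate);  // 1 second, mono
var data = buffer.getChannelData(0);

for (var i = 0; i < data.length; i++) {
  data[i] = Math.sin((i / 100) * Math.PI * 2); // 100-sample cycle => 441 Hz at 44.1 kHz
}

var source = context.createBufferSource();
source.buffer = buffer;
source.connect(context.destination);
source.start(0);
```

On current browsers an OscillatorNode with frequency.value set to 441 achieves the same result without filling a buffer by hand.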

Using <audio> element for playback of raw audio

纵然是瞬间 submitted on 2019-12-21 05:14:13
Question: I'm working on a little project that decrypts (using openpgp.js) and decodes a server-side audio file using the Web Audio API. The decrypted file arrives at the client as raw audio. Currently I can play back audio using source.start(0), but there doesn't seem to be an easy way to hand the audio to a GUI that would let users do things like adjust volume and seek through the audio. I have an AudioContext object that's decoded and buffered with createBufferSource function playSound
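
A minimal sketch of one way to get an <audio> element's built-in volume/seek UI in front of a Web Audio graph: route the graph into a MediaStream and hand that to the element. This is an assumption about the goal, not the question's own code; note that seeking only tracks the live stream position, so true random-access seeking would need a different approach (e.g. encoding the buffer to a WAV blob).

```javascript
var AudioCtx = window.AudioContext || window.webkitAudioContext;
var context = new AudioCtx();

function playThroughAudioElement(audioBuffer) {
  var source = context.createBufferSource();
  source.buffer = audioBuffer;

  // Route the graph into a MediaStream instead of the speakers directly.
  var destination = context.createMediaStreamDestination();
  source.connect(destination);

  var audioElement = document.querySelector('audio');
  audioElement.srcObject = destination.stream; // element now exposes its own controls
  audioElement.play();
  source.start(0);
}
```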

recording a remote webrtc stream with RecordRTC

一曲冷凌霜 submitted on 2019-12-21 04:33:07
Question: I am using the OpenTok JavaScript WebRTC library to host a 1-to-1 video chat (peer-to-peer). I can see my peer's video and hear the audio flawlessly. I want to record the audio / video of the other chat party (remote). For this purpose, I'm using RecordRTC. I was able to record the video of the other chat participant (video is output to an HTML video element), but so far I have not succeeded in recording audio (a dead-silent .wav file is as far as I could get). Using Chrome Canary (30.0.1554.0). This
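
A minimal sketch of recording a MediaStream's audio with RecordRTC's documented API. Whether remote WebRTC audio is actually captured depends on the browser: Chrome of that era could not route remote-stream audio through Web Audio, which is one known cause of silent recordings. The remoteStream variable is an assumed handle to the peer's stream.

```javascript
var recorder = RecordRTC(remoteStream, { type: 'audio' });
recorder.startRecording();

// Later, e.g. when the call ends:
recorder.stopRecording(function () {
  var blob = recorder.getBlob();        // retrieve the recorded audio blob
  var url = URL.createObjectURL(blob);  // e.g. to download or play back
  console.log('recording ready:', url);
});
```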

Overlay two audio buffers into one buffer source

ぃ、小莉子 submitted on 2019-12-21 04:19:16
Question: I'm trying to merge two buffers into one; I have been able to create the two buffers from the audio files and load and play them. Now I need to merge the two buffers into one buffer. How can they be merged? context = new webkitAudioContext(); bufferLoader = new BufferLoader( context, [ 'audio1.mp3', 'audio2.mp3', ], finishedLoading ); bufferLoader.load(); function finishedLoading(bufferList) { // Create the two buffer sources and play them both together. var source1 = context.createBufferSource(
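
A minimal sketch of overlaying two AudioBuffers into a single buffer by rendering both through an OfflineAudioContext. It assumes both buffers share the same sample rate; the rendered buffer is as long as the longer input. The function name is illustrative.

```javascript
function overlayBuffers(bufferA, bufferB) {
  var length = Math.max(bufferA.length, bufferB.length);
  var offline = new OfflineAudioContext(2, length, bufferA.sampleRate);

  [bufferA, bufferB].forEach(function (buffer) {
    var source = offline.createBufferSource();
    source.buffer = buffer;
    source.connect(offline.destination);
    source.start(0); // both start at t = 0, so they play on top of each other
  });

  return offline.startRendering(); // resolves with the mixed AudioBuffer
}

// Usage inside the question's callback:
// overlayBuffers(bufferList[0], bufferList[1]).then(function (mixed) { /* ... */ });
```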

Web Audio API Analyser Node Not Working With Microphone Input

你离开我真会死。 submitted on 2019-12-20 20:35:46
Question: The bug that prevented getting microphone input in Chrome Canary (per http://code.google.com/p/chromium/issues/detail?id=112367) is now fixed. That part does seem to be working: I can assign the mic input to an audio element and hear the results through the speaker. But I'd like to connect an analyser node in order to do an FFT. The analyser node works fine if I set the audio source to a local file. The problem is that when it's connected to the mic audio stream, the analyser node just returns the base
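
A minimal sketch of wiring microphone input into an AnalyserNode, using the modern promise-based getUserMedia. A classic pitfall with this setup is letting the MediaStreamSource be garbage-collected, which silences the analyser, so the node references are kept in outer scope here.

```javascript
var AudioCtx = window.AudioContext || window.webkitAudioContext;
var context = new AudioCtx();
var micSource, analyser; // kept alive so the graph isn't garbage-collected

navigator.mediaDevices.getUserMedia({ audio: true }).then(function (stream) {
  micSource = context.createMediaStreamSource(stream);
  analyser = context.createAnalyser();
  analyser.fftSize = 2048;
  micSource.connect(analyser); // no connection to destination needed for analysis

  var data = new Uint8Array(analyser.frequencyBinCount);
  (function poll() {
    analyser.getByteFrequencyData(data); // FFT magnitudes, 0-255 per bin
    requestAnimationFrame(poll);
  })();
});
```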

Web Audio Api : How do I add a working convolver?

独自空忆成欢 submitted on 2019-12-20 18:28:07
Question: What I am trying to learn / do: how to set up a simple working convolver (reverb) in my code sandbox below using an impulse response. I thought it would be similar to setting up a filter, but things seem quite different. What I tried: as with all new technologies, things change at a fast pace, making it difficult to know which implementation is correct and which is not. I looked at countless Web Audio API convolver tutorials; many were old, and others worked but were far too "bloated", making it hard to
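
A minimal sketch of a ConvolverNode-based reverb: fetch an impulse response, decode it, and route the source through the convolver. The 'impulse.wav' URL is illustrative.

```javascript
var AudioCtx = window.AudioContext || window.webkitAudioContext;
var context = new AudioCtx();
var convolver = context.createConvolver();

fetch('impulse.wav')
  .then(function (response) { return response.arrayBuffer(); })
  .then(function (data) { return context.decodeAudioData(data); })
  .then(function (impulseBuffer) {
    convolver.buffer = impulseBuffer; // the decoded impulse response
    convolver.connect(context.destination);
  });

// Any source connected to the convolver is then reverberated:
// source.connect(convolver);
```

The key difference from a filter node is that a convolver does nothing until its buffer property is set to a decoded impulse response.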

resample audio buffer from 44100 to 16000

末鹿安然 submitted on 2019-12-20 10:56:24
Question: I have audio data as a data URI, which I converted into a buffer. Now I need this buffer at a new sample rate: the audio is currently at 44.1 kHz and I need it at 16 kHz. If I record the audio with the RecordRTC API at a low sample rate, I get a distorted voice, so I can't figure out how to resample my audio buffer. If any of you have an idea, please help me out. Thanks in advance :) Answer 1: You can use an OfflineAudioContext to
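
A minimal sketch of the OfflineAudioContext approach the truncated answer starts to describe: render the 44.1 kHz buffer into an offline context created at 16 kHz and let the browser resample during rendering. The function name is illustrative.

```javascript
function resampleTo16k(audioBuffer) {
  var targetRate = 16000;
  var length = Math.ceil(audioBuffer.duration * targetRate);
  var offline = new OfflineAudioContext(audioBuffer.numberOfChannels, length, targetRate);

  var source = offline.createBufferSource();
  source.buffer = audioBuffer;
  source.connect(offline.destination);
  source.start(0);

  return offline.startRendering(); // resolves with a 16 kHz AudioBuffer
}
```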