web-audio-api

Creating a custom echo node with Web Audio

Submitted by 浪子不回头ぞ on 2019-12-07 01:21:32

Question: I'm playing with the WebKit Audio API and I'm trying to create an echo effect. To accomplish that, I've connected a DelayNode and a GainNode in a loop (the output of one is the input of the other, and vice versa). The effect works fine, but now I want to create an EchoNode object that I can just plug in and connect to other AudioNode objects. Something like: myEchoNode = new EchoNode(); myConvolverNode = context.createConvolver(); myConvolverNode.connect(myEchoNode); I think that I…
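A minimal sketch of such a wrapper, under the assumption (since plain JavaScript classes cannot be targets of AudioNode.connect()) that the wrapper exposes its input and output GainNodes and other nodes connect to those. The class name EchoNode comes from the question; the delayTime and feedback parameters are hypothetical:

```javascript
// Hypothetical EchoNode wrapper: a DelayNode fed back through a GainNode.
// Other nodes connect to echo.input; echo.output connects onward.
class EchoNode {
  constructor(context, delayTime = 0.3, feedback = 0.5) {
    this.input = context.createGain();
    this.output = context.createGain();
    this.delay = context.createDelay();
    this.delay.delayTime.value = delayTime;
    this.feedback = context.createGain();
    this.feedback.gain.value = feedback; // < 1, or the loop never decays

    // dry path straight through
    this.input.connect(this.output);
    // echo loop: input -> delay -> feedback gain -> back into delay, and out
    this.input.connect(this.delay);
    this.delay.connect(this.feedback);
    this.feedback.connect(this.delay);
    this.delay.connect(this.output);
  }
}

// usage (note: connect to echo.input, not to the wrapper object itself):
// const echo = new EchoNode(context);
// myConvolverNode.connect(echo.input);
// echo.output.connect(context.destination);
```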

Has iOS 13 broken <audio> tags used as audio buffers connected to the audio context?

Submitted by 落爺英雄遲暮 on 2019-12-07 00:16:39

Question: We are currently developing a website that lets users play simple audio tags connected to the AudioContext. We are aware of technical issues with iOS, such as playback having to be initiated by a user gesture. Everything worked fine up to iOS 12. Now that iOS 13 is out, nothing works anymore. It works on all desktops, on Android, and on iOS up to iOS 13. Any idea what is going on? There are no error messages in the console when debugging with desktop Safari connected to the iPhone. https://codepen.io…

Recording audio from multiple microphones simultaneously with getUserMedia()

Submitted by 本秂侑毒 on 2019-12-06 21:50:48

Question: Is it possible to access different microphones at the same time using getUserMedia()? This would be useful to filter out background noise, create some sort of stereoscopic effect, or make multiple audio tracks available for an international streaming conference. Apparently, it is quite tricky for video sources: Capture video from several webcams with getUserMedia. I was wondering if, for audio sources, the problem is different. Answer 1: You should be able to do this, but I imagine the browser…
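A sketch of the usual approach: enumerate the audio input devices, then issue one getUserMedia() call per device, pinned by deviceId. The helper name is hypothetical, and it takes the mediaDevices object as a parameter so the logic can be exercised outside a browser; in practice each call may trigger its own permission prompt:

```javascript
// Open one MediaStream per audio input device.
async function openAllMicrophones(mediaDevices) {
  const devices = await mediaDevices.enumerateDevices();
  const mics = devices.filter((d) => d.kind === 'audioinput');
  // one getUserMedia() call per microphone, pinned by deviceId
  return Promise.all(
    mics.map((d) =>
      mediaDevices.getUserMedia({ audio: { deviceId: { exact: d.deviceId } } })
    )
  );
}

// in a browser:
// const streams = await openAllMicrophones(navigator.mediaDevices);
```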

How to play audio stream chunks recorded with WebRTC?

Submitted by 你。 on 2019-12-06 15:41:46

I'm trying to create an experimental application that streams audio in real time from client 1 to client 2. Following some tutorials and questions on the same subject, I used WebRTC and binaryjs. So far this is what I have: 1) Client 1 and Client 2 connect to BinaryJS to send/receive data chunks; 2) Client 1 uses WebRTC to record audio and gradually sends it to BinaryJS; 3) Client 2 receives the chunks and tries to play them. I'm getting an error in that last part. This is the error message I get: Uncaught RangeError: Source is too large at Float32Array.set (native) And this is…
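For context on that error: Float32Array.set() throws "Source is too large" whenever the source array would overflow the target at the given offset, which typically happens when an incoming network chunk is longer than the space left in the playback buffer. A guard like the following (a hypothetical helper, not from the question's code) avoids it by copying only what fits:

```javascript
// Append a chunk of samples to a fixed-size target buffer without
// overflowing it; returns the new write position.
function appendChunk(target, offset, chunk) {
  const room = target.length - offset;
  const slice = chunk.length > room ? chunk.subarray(0, room) : chunk;
  target.set(slice, offset); // safe: slice never exceeds the remaining room
  return offset + slice.length;
}
```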

HTML5 Audio API inputBuffer.getChannelData to audio Array buffer

Submitted by 旧时模样 on 2019-12-06 15:09:26

I am making an application where I take mic data from the inputBuffer and I want to stream it to another client and play it. However, I cannot get it working. My recording/capturing works fine, so I will skip to the relevant parts of the code: function recorderProcess(e) { var left = e.inputBuffer.getChannelData(0); var convert = convertFloat32ToInt16(left); window.stream.write(convert); var src = window.URL.createObjectURL(lcm); playsound(convert); ss(socket).emit('file', convert, {size: src.size}, currentgame); ss.createBlobReadStream(convert).pipe(window.stream); //ss.createReadStream(f).pipe…
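For reference, a common implementation of the convertFloat32ToInt16 helper the snippet calls (a sketch of the usual conversion, not necessarily the asker's version): Web Audio samples are floats in [-1, 1], so they are clamped and scaled to 16-bit signed integers before being sent over the wire:

```javascript
// Convert [-1, 1] float samples to 16-bit signed PCM, clamping
// out-of-range values.
function convertFloat32ToInt16(float32) {
  const int16 = new Int16Array(float32.length);
  for (let i = 0; i < float32.length; i++) {
    const s = Math.max(-1, Math.min(1, float32[i])); // clamp to [-1, 1]
    int16[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return int16;
}
```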

Using Web Audio API decodeAudioData with external binary data

Submitted by 半城伤御伤魂 on 2019-12-06 11:49:04

I've searched related questions but wasn't able to find any relevant info. I'm trying to get the Web Audio API to play an mp3 file that is encoded inside another file container, so what I'm doing so far is parsing said container and feeding the resulting binary data (an ArrayBuffer) to the audioContext.decodeAudioData method, which supposedly accepts any kind of ArrayBuffer containing audio data. However, it always invokes the error callback. I only have a faint grasp of what I'm doing, so probably the whole approach is wrong. Or maybe it's just not possible. Has any of you tried something like this…
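One common cause of the error callback in this situation is leftover container bytes before the first valid audio frame: decodeAudioData() needs the buffer to start at data it can sniff as a known format. A hedged sketch (the helper name is hypothetical) that scans for the first MP3 frame sync word, i.e. 11 set bits, 0xFF followed by a byte whose top three bits are set, and slices the buffer there:

```javascript
// Return the ArrayBuffer sliced to start at the first MP3 frame sync,
// or null if no sync word is found.
function sliceToMp3Frame(arrayBuffer) {
  const bytes = new Uint8Array(arrayBuffer);
  for (let i = 0; i + 1 < bytes.length; i++) {
    if (bytes[i] === 0xff && (bytes[i + 1] & 0xe0) === 0xe0) {
      return arrayBuffer.slice(i); // start of the first frame
    }
  }
  return null;
}

// then: context.decodeAudioData(sliceToMp3Frame(raw), onDecoded, onError);
```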

iOS 6/7: stop sound when going to the background using the Web Audio API

Submitted by 不想你离开。 on 2019-12-06 10:37:35

There are various solutions for the issue where you go to the background on an iPhone or iPad and the sound keeps playing. Most of them are for the HTML5 audio tag, but they are not relevant if you are using the Web Audio API, because there is no event like "timeupdate" and it is a different concept altogether. The Page Visibility API works in iOS 7 only if you change tabs, but not if you go to the background; in iOS 6 it doesn't work at all. Does anyone know a way to stop/mute a sound using the Web Audio API when going to the background in iOS 6 or iOS 7? To detect when Safari is going to the background, you can use…
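The usual Web Audio approach is to route everything through a master GainNode and zero it when the page goes away; you mute the graph rather than pause individual tags. A sketch, with the toggle extracted as a pure function so it is testable outside a browser (the event wiring is an assumption; as noted above, which events actually fire in the background varies by iOS version, and pagehide is a fallback some versions need):

```javascript
// Zero the master gain when hidden, restore it when visible.
function setMuted(masterGain, hidden) {
  masterGain.gain.value = hidden ? 0 : 1;
  return masterGain.gain.value;
}

// browser wiring (sketch):
// const master = context.createGain();
// master.connect(context.destination);
// // ...connect all sources through `master`...
// document.addEventListener('visibilitychange', () =>
//   setMuted(master, document.hidden));
// window.addEventListener('pagehide', () => setMuted(master, true));
```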

How to set position correctly in SoundJS for Firefox, IE

Submitted by 末鹿安然 on 2019-12-06 09:14:04

Question: I'm trying to use SoundJS to play an mp3 file and seek to a specific position. I'm using: instance.setPosition(10000); which works correctly in Google Chrome. But in Mozilla Firefox I hear the sound playing from the correct position, and a second instance of the sound also playing from another position. In Internet Explorer, the sound starts playing again from the beginning. Here's a jsFiddle (with autoplaying sound), and here is the complete JavaScript: createjs.Sound.registerPlugins(…

Visualize a MediaStream coming from a remote peer connection

Submitted by 假如想象 on 2019-12-06 08:22:08

For some days I've been trying to visualize an audio stream coming over WebRTC. We already wrote some visualizations that work fine for the normal local stream (Web Audio microphone usage). Then I found some really interesting things at https://github.com/muaz-khan/WebRTC-Experiment/tree/master/ for streaming microphone input between different browsers. We need this to have the same audio data from one backend for all clients in the frontend. Everything works fine, and some tests showed that we can hear each other. So I thought it would also not be a problem to visualize the incoming…
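In principle the remote stream is wired up exactly like the local one: feed it into an AnalyserNode via createMediaStreamSource. A sketch (the helper name is hypothetical), with the caveat that some Chrome versions reportedly had a long-standing bug where remote WebRTC streams produced silence in the Web Audio graph, so hearing the audio through an <audio> element does not guarantee the analyser sees data:

```javascript
// Attach an AnalyserNode to a (remote) MediaStream for visualization.
function attachAnalyser(context, remoteStream) {
  const source = context.createMediaStreamSource(remoteStream);
  const analyser = context.createAnalyser();
  analyser.fftSize = 2048;
  source.connect(analyser);
  return analyser;
}

// per animation frame:
// const data = new Uint8Array(analyser.frequencyBinCount);
// analyser.getByteFrequencyData(data); // then draw `data`
```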

Capturing sound input with low latency in the browser

Submitted by 白昼怎懂夜的黑 on 2019-12-06 07:30:42

Is it possible to capture low-latency sound input in the browser, mainly for recording a guitar? (I know it depends on the hardware too, but let's assume the hardware is good enough.) I tried to use the Web Audio API, but it had somewhat bad latency. Are there any other technologies that give high-performance sound-input capture in the browser? Is it possible to use Unity3D for that? Thanks. Answer: "Web Audio API latency was bad" ignores a lot of potential issues. Low latency nearly always needs some tweaking. 1) Latency is considerably lower (3 ms or so, commonly) on OS X than on Windows…
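In that tweaking spirit, one common starting point is to disable the browser's input processing chain and pass a small latency hint when opening the microphone. A sketch of such constraints (support varies by browser, and the latency value is a hint, not a guarantee):

```javascript
// getUserMedia constraints that commonly reduce input latency:
// turn off the processing chain and ask for a short latency.
const lowLatencyConstraints = {
  audio: {
    echoCancellation: false,
    noiseSuppression: false,
    autoGainControl: false,
    latency: 0.01, // seconds; honored only where supported
  },
};

// in a browser:
// navigator.mediaDevices.getUserMedia(lowLatencyConstraints).then((stream) => {
//   const source = context.createMediaStreamSource(stream);
//   source.connect(context.destination); // direct monitoring
// });
```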