web-audio-api

Use AudioWorklet within electron (DOMException: The user aborted a request)

寵の児 submitted on 2019-12-08 06:25:54
Question: I am trying to use an AudioWorklet within my Electron app for metering etc., which works fine when executed in dev mode, where the worklet is served by an Express dev server at http://localhost:3000/processor.js. However, if I run the app in prod mode, the file is served locally, like file://tmp/etc/etc/build/processor.js, and in the developer console I can even see the file being previewed correctly, but I get this error message: Uncaught (in promise) DOMException: The user aborted a request.
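A workaround often suggested for this situation is to avoid handing a file:// URL to addModule() at all: read the processor source and load it through a blob: URL instead. The sketch below illustrates that idea; the function name and the processor.js path are illustrative, not from the question.

```javascript
// Sketch: load an AudioWorklet processor from its source text via a blob:
// URL, sidestepping file:// loads that Chromium may reject with
// "The user aborted a request". addWorkletFromSource is a made-up name.
async function addWorkletFromSource(audioContext, source) {
  const blob = new Blob([source], { type: 'application/javascript' });
  const url = URL.createObjectURL(blob);
  try {
    await audioContext.audioWorklet.addModule(url);
  } finally {
    URL.revokeObjectURL(url); // module is parsed; the URL can be released
  }
}

// In Electron's renderer the source could come from the packaged app, e.g.:
//   const source = await (await fetch('processor.js')).text();
//   await addWorkletFromSource(new AudioContext(), source);
```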

Access/process system audio with Javascript/Web Audio API

荒凉一梦 submitted on 2019-12-08 06:03:38
Question: Is it possible to access system audio using the Web Audio API, in order to visualize it or apply an equalizer to it? It looks like it's possible to hook up system audio to an input device that the Web Audio API can access (i.e. "Web Audio API, get the output from the soundcard"); ideally, however, I would like to process all sound output without making any local configuration changes. Answer 1: No, this isn't possible. The closest you could get is installing a loopback audio device on the user's…
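The loopback approach the answer mentions can be sketched as follows: once the user has installed a loopback device, it shows up as an ordinary audio input, which can be selected by device ID and fed into an analyser. The label "Stereo Mix" and the function name are assumptions for illustration; real loopback device labels vary by OS and driver.

```javascript
// Sketch of the loopback workaround: pick an installed loopback device
// (assumed label fragment, e.g. "Stereo Mix" on Windows) as a normal
// audio input and route it into an AnalyserNode for visualization.
async function captureLoopback(labelFragment) {
  const devices = await navigator.mediaDevices.enumerateDevices();
  const loopback = devices.find(
    (d) => d.kind === 'audioinput' && d.label.includes(labelFragment)
  );
  if (!loopback) throw new Error('No loopback input found');
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: { deviceId: { exact: loopback.deviceId } },
  });
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  ctx.createMediaStreamSource(stream).connect(analyser);
  return analyser; // poll analyser.getByteFrequencyData for the visualizer
}
```

Note that device labels are only populated after the page has been granted media permission.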

Possible to create an AudioBuffer from memory ie an Int8Array

僤鯓⒐⒋嵵緔 submitted on 2019-12-08 05:06:21
Question: Is there a way to add raw data from memory as a sample to the Web Audio API? I would like to add an Int8Array (or Int16Array) as a buffer; the buffer holds only the samples, with no container format such as WAV or MP3. I have tried audioContext.createBuffer and such without success. Something like this:

    var buffer = audioContext.createBuffer(1, 8192, 22000);
    var intArray = new Int8Array(...); // fill intArray
    buffer.buffer = intArray;
    ...
    var source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);

If that is not possible, is there a sound format compatible with…
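An AudioBuffer has no assignable buffer property, but its channel data is a Float32Array that can be written directly, so the usual fix is to convert the integer samples to floats in [-1, 1) first. A minimal sketch (function names are illustrative):

```javascript
// Convert signed 8-bit PCM samples to float32 in [-1, 1).
function int8ToFloat32(intArray) {
  const floats = new Float32Array(intArray.length);
  for (let i = 0; i < intArray.length; i++) {
    floats[i] = intArray[i] / 128; // Int8 range -128..127
  }
  return floats;
}

// Build an AudioBuffer from raw Int8 samples and play it.
function playRawInt8(context, intArray, sampleRate) {
  const buffer = context.createBuffer(1, intArray.length, sampleRate);
  buffer.getChannelData(0).set(int8ToFloat32(intArray));
  const source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  source.start();
}
```

For Int16Array input the same idea applies with a divisor of 32768.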

How to play audio stream chunks recorded with WebRTC?

大城市里の小女人 submitted on 2019-12-08 05:06:14
Question: I'm trying to create an experimental application that streams audio in real time from client 1 to client 2. Following some tutorials and questions on the same subject, I used WebRTC and binaryjs. So far this is what I have:

1. Client 1 and Client 2 connect to BinaryJS to send/receive data chunks.
2. Client 1 uses WebRTC to record audio and gradually sends it to BinaryJS.
3. Client 2 receives the chunks and tries to play them.

I'm getting an error in the last part. This is the…
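One common pattern for the receiving side is to schedule each incoming chunk to start exactly when the previous one ends, so playback is gapless. The sketch below assumes each chunk arrives as raw Float32 PCM at a known sample rate; how BinaryJS frames the chunks is an assumption here, and the function name is illustrative.

```javascript
// Gapless playback of incoming raw Float32 PCM chunks: keep a running
// "next start time" on the AudioContext clock and queue each chunk there.
function createChunkPlayer(context, sampleRate) {
  let nextStart = 0;
  return function playChunk(float32Samples) {
    const buffer = context.createBuffer(1, float32Samples.length, sampleRate);
    buffer.getChannelData(0).set(float32Samples);
    const source = context.createBufferSource();
    source.buffer = buffer;
    source.connect(context.destination);
    nextStart = Math.max(nextStart, context.currentTime); // never in the past
    source.start(nextStart);
    nextStart += buffer.duration;
    return nextStart; // when this chunk will finish playing
  };
}
```

Note that chunks produced by MediaRecorder are compressed container fragments, not raw PCM, and generally cannot be decoded individually; raw samples (e.g. from a ScriptProcessorNode or AudioWorklet on the sender) are what this scheduling approach expects.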

WAVE file extended part of fmt chunk

旧街凉风 submitted on 2019-12-08 04:46:49
Question: I have a WAVE file whose wFormatTag is 3 (WAVE_FORMAT_IEEE_FLOAT). Firefox treats WAVE_FORMAT_IEEE_FLOAT files like WAVE_FORMAT_EXTENSIBLE, which means it expects a WAVE_FORMAT_IEEE_FLOAT file to contain the extended part of the fmt chunk. My file doesn't contain the extended part of the fmt chunk, which results in an error when decoding the file in Firefox: "The buffer passed to decodeAudioData contains invalid content which cannot be decoded successfully." This means I have…
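One way to repair such a file before handing it to decodeAudioData is to splice in the missing 2-byte cbSize field (set to 0), growing the fmt chunk from 16 to 18 bytes and adjusting the RIFF size. The sketch below assumes the canonical layout with the fmt chunk immediately after the 12-byte RIFF header; real files can have other chunks first, so a robust version would walk the chunk list. The function name is illustrative.

```javascript
// Insert the missing cbSize (= 0) extension into a WAV file's fmt chunk.
// Assumes: "RIFF"<size>"WAVE" at offset 0, fmt chunk at offset 12 with a
// 16-byte payload (bytes 20..35), so following chunks start at offset 36.
function addFmtExtension(wavBytes) {
  const dv = new DataView(wavBytes.buffer, wavBytes.byteOffset, wavBytes.byteLength);
  const fmtSize = dv.getUint32(16, true);
  if (fmtSize !== 16) return wavBytes; // already extended (18 or 40)
  const out = new Uint8Array(wavBytes.length + 2);
  out.set(wavBytes.subarray(0, 36));     // header + 16-byte fmt payload
  out.set(wavBytes.subarray(36), 38);    // later chunks, shifted by 2
  const odv = new DataView(out.buffer);
  odv.setUint32(4, dv.getUint32(4, true) + 2, true); // RIFF chunk size += 2
  odv.setUint32(16, 18, true);                        // fmt size 16 -> 18
  odv.setUint16(36, 0, true);                         // cbSize = 0
  return out;
}
```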

Extracting fragment of audio from a url and play it with pure Web Audio API

你说的曾经没有我的故事 submitted on 2019-12-08 03:38:36
Question: At the URL https://www.tophtml.com/snl/15.mp3 there is an audio file I want to play using the pure Web Audio API over the following range: from second 306.6 to second 311.8, 5.2 seconds in total. I downloaded the file to my desktop (I'm using Windows 10), opened it with VLC, and got the following file info: number of channels: 2; sample rate: 44100 Hz; bits per sample: 32 (float32). Here is some background on these concepts: https://developer.mozilla.org/en-US/docs/Web/API…
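With the plain Web Audio API, the simplest route is to decode the whole file and then use the offset and duration arguments of AudioBufferSourceNode.start(). A sketch (the function name is illustrative; fetching a byte range of the compressed MP3 instead of the whole file is considerably more involved):

```javascript
// Play only a fragment of a remote audio file: decode it all, then
// start(when, offset, duration) skips to the wanted range.
async function playFragment(url, offsetSec, durationSec) {
  const ctx = new AudioContext();
  const encoded = await (await fetch(url)).arrayBuffer();
  const buffer = await ctx.decodeAudioData(encoded);
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.connect(ctx.destination);
  source.start(0, offsetSec, durationSec); // start now, from offsetSec
  return source;
}

// playFragment('https://www.tophtml.com/snl/15.mp3', 306.6, 5.2);
```

Note the remote server must send permissive CORS headers for fetch to read the response.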

How can I capture the AudioDestinationNode value when headphones are plugged in with Web Audio API?

我是研究僧i submitted on 2019-12-08 02:55:33
I've been looking for a solution that detects the difference between the default speakers and headphones on a computer. I understand that with the Web Audio API, AudioDestinationNode represents the output device where users hear audio. My question, to be specific, is whether or not it is possible to detect a change in the user's audio output device (wired/wireless headphones). If this is not possible, is there a way to use PhoneGap to do so, for computers as well as mobile devices? My goal is to initiate an event only when the AudioDestinationNode maps to headphones or external speakers. There's…
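AudioDestinationNode itself exposes no "headphones or speakers" flag, but the Media Capture API's enumerateDevices() and devicechange event can detect output-device changes. Matching on the device label is a heuristic, not a spec-guaranteed value, and labels are only available after media permission has been granted; the function name below is illustrative.

```javascript
// React when an output device whose label suggests headphones appears.
// Re-enumerates on every devicechange event; the /headphone/i match is
// a heuristic assumption, not a guaranteed label format.
function watchForHeadphones(onHeadphones) {
  async function check() {
    const devices = await navigator.mediaDevices.enumerateDevices();
    const headphones = devices.find(
      (d) => d.kind === 'audiooutput' && /headphone/i.test(d.label)
    );
    if (headphones) onHeadphones(headphones);
  }
  navigator.mediaDevices.addEventListener('devicechange', check);
  return check(); // also report the current state once
}
```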

Capturing sound input with low latency in the browser

做~自己de王妃 submitted on 2019-12-08 02:26:51
Question: Is it possible to capture low-latency sound input in the browser, mainly for recording a guitar? (I know it depends on the hardware too, but let's assume the hardware is good enough.) I tried the Web Audio API, but it had somewhat high latency. Are there any other technologies that give high-performance sound-input capture in the browser? Is it possible to use Unity3D for that? Thanks. Answer 1: "Web Audio API latency was bad" ignores a lot of potential issues. Low latency…
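Two of the issues that answer alludes to can be addressed in getUserMedia constraints and the AudioContext options: the browser's voice-processing pipeline (echo cancellation etc.) adds buffering, and the context can be asked for an interactive latency target. A sketch under those assumptions (the function name is illustrative; the floor is still set by the hardware and OS audio stack):

```javascript
// Open a microphone input tuned for low latency: disable voice processing
// (which buffers audio) and request an interactive-latency context.
async function openLowLatencyInput() {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: {
      echoCancellation: false,
      noiseSuppression: false,
      autoGainControl: false,
    },
  });
  const ctx = new AudioContext({ latencyHint: 'interactive' });
  const input = ctx.createMediaStreamSource(stream);
  input.connect(ctx.destination); // direct monitoring of the guitar
  return { ctx, input };
}
```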

AudioContext gain Node does not mute audio source (Web Audio API)

做~自己de王妃 submitted on 2019-12-07 22:36:28
Question: I have some music visualizations I made with three.js and the Web Audio API, and I'm having issues muting the audio. I currently have an AudioContext object with an analyser and source buffer. I'm working on adding a gain node to mute the audio, which is not currently working. When I click mute, the audio level changes (it actually gets louder), so I know the gain is affecting something. Code:

    // AudioHelper class constructor
    function AudioHelper() {
      this.javascriptNode;
      this.audioContext;
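The symptom described (gain affects the level but never mutes, and can even make it louder) typically means there is a second signal path to the destination that bypasses the gain node, so the two paths sum. A minimal sketch of a chain where the gain node is the only route out (names are illustrative, not the question's AudioHelper code):

```javascript
// Every path to the destination goes through the gain node, so setting
// its gain to 0 actually silences the output. If the source were also
// connected to ctx.destination directly, the unmuted path would keep
// playing and the summed signal could even be louder.
function buildChain(ctx, source) {
  const analyser = ctx.createAnalyser();
  const gain = ctx.createGain();
  source.connect(analyser);
  analyser.connect(gain);
  gain.connect(ctx.destination); // the only route out
  return {
    analyser,
    gain,
    mute() { gain.gain.setValueAtTime(0, ctx.currentTime); },
    unmute() { gain.gain.setValueAtTime(1, ctx.currentTime); },
  };
}
```

Using setValueAtTime rather than assigning gain.gain.value avoids clicks being scheduled mid-block in some implementations.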