web-audio-api

<audio> tag to audioBuffer - is it possible?

泄露秘密 submitted on 2020-02-01 05:09:09
Question: My JavaScript web app first reads a short MP3 file and finds silence gaps in it (for navigational purposes), then plays the same MP3 file, cueing it to start where one silence or another finishes. This differs from the usual Web Audio scenario, which is designed to grant access to the audio data currently being played in the stream (not to the whole track). To get my web app to work I have to read/access the MP3 file twice: via XMLHttpRequest to read the entire MP3 file and put it into an audioBuffer that
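Once the file is decoded, the silence-gap search itself is plain array scanning over the PCM data. A minimal sketch, assuming the samples come from `audioBuffer.getChannelData(0)`; the amplitude threshold and minimum gap length are assumed example values:

```javascript
// Find silence gaps in a mono PCM buffer.
// samples: Float32Array of amplitudes in [-1, 1]
// sampleRate: samples per second
// threshold: absolute amplitude below which a sample counts as silent (assumed value)
// minGapSec: minimum gap length to report, in seconds (assumed value)
// Returns an array of { start, end } times in seconds.
function findSilenceGaps(samples, sampleRate, threshold = 0.01, minGapSec = 0.2) {
  const gaps = [];
  let gapStart = -1; // index where the current silent run began, or -1
  for (let i = 0; i < samples.length; i++) {
    const silent = Math.abs(samples[i]) < threshold;
    if (silent && gapStart < 0) gapStart = i;
    if (!silent && gapStart >= 0) {
      if (i - gapStart >= minGapSec * sampleRate) {
        gaps.push({ start: gapStart / sampleRate, end: i / sampleRate });
      }
      gapStart = -1;
    }
  }
  // Close a silent run that reaches the end of the buffer.
  if (gapStart >= 0 && samples.length - gapStart >= minGapSec * sampleRate) {
    gaps.push({ start: gapStart / sampleRate, end: samples.length / sampleRate });
  }
  return gaps;
}
```

The resulting `start`/`end` times can be fed straight to `source.start(when, offset)` to cue playback at the end of a gap.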

Web Audio Offline Context and Analyser Node

时光总嘲笑我的痴心妄想 submitted on 2020-01-28 11:01:11
Question: Is it possible to use the AnalyserNode in an OfflineAudioContext to do frequency analysis? I found that the ScriptProcessor's onaudioprocess event still fires in the OfflineAudioContext, and this was the only event source I could use to call getByteFrequencyData on the AnalyserNode, as below: var offline = new OfflineAudioContext(1, buffer.length, 44100); var bufferSource = offline.createBufferSource(); bufferSource.buffer = buffer; var analyser = offline.createAnalyser(); var scp =
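A commonly suggested alternative to the ScriptProcessor trick is `OfflineAudioContext.suspend()`/`resume()`, which pauses rendering at known times so the analyser can be sampled there. A sketch under the assumption that the browser supports `suspend()` on offline contexts; the hop size is an example value, and the `suspendTimes` helper just computes the pause points:

```javascript
// Compute the times (in seconds) at which to suspend an offline render
// so the analyser is sampled once every `hopSize` frames.
function suspendTimes(lengthInFrames, sampleRate, hopSize) {
  const times = [];
  for (let frame = hopSize; frame < lengthInFrames; frame += hopSize) {
    times.push(frame / sampleRate);
  }
  return times;
}

// Browser-only sketch: analyse `buffer` offline, collecting one FFT per hop.
// Assumes OfflineAudioContext.suspend() support.
async function analyseOffline(buffer, hopSize = 2048) {
  const offline = new OfflineAudioContext(1, buffer.length, buffer.sampleRate);
  const source = offline.createBufferSource();
  source.buffer = buffer;
  const analyser = offline.createAnalyser();
  source.connect(analyser);
  analyser.connect(offline.destination);

  const frames = [];
  // All suspend points must be registered before rendering starts.
  for (const t of suspendTimes(buffer.length, buffer.sampleRate, hopSize)) {
    offline.suspend(t).then(() => {
      const bins = new Uint8Array(analyser.frequencyBinCount);
      analyser.getByteFrequencyData(bins);
      frames.push(bins);
      offline.resume();
    });
  }
  source.start(0);
  await offline.startRendering();
  return frames; // one Uint8Array of frequency data per hop
}
```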

Reading output audio data from Spotify Web Playback stream

拥有回忆 submitted on 2020-01-21 09:21:12
Question: I am currently playing around with audio visualization, and I am trying to work with Spotify's Web Playback SDK to stream and analyze songs directly on my site. However, I am unsure what the limitations are when it comes to actually reading the streamed data. I've noticed that an iframe is generated for the Spotify player, and I've read that Spotify uses Encrypted Media Extensions to stream the audio on Chrome. Is it even possible to read the music data from the Spotify API? Maybe, I can

Web Audio Api: Proper way to play data chunks from a nodejs server via socket

好久不见. submitted on 2020-01-21 05:35:10
Question: I'm using the following code to decode audio chunks from a Node.js socket: window.AudioContext = window.AudioContext || window.webkitAudioContext; var context = new AudioContext(); var delayTime = 0; var init = 0; var audioStack = []; var nextTime = 0; client.on('stream', function(stream, meta){ stream.on('data', function(data) { context.decodeAudioData(data, function(buffer) { audioStack.push(buffer); if ((init!=0) || (audioStack.length > 10)) { // make sure we put at least 10 chunks in the
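The usual fix for glitchy chunk playback is to schedule each decoded buffer at an accumulated `nextTime` rather than relative to `context.currentTime` at decode time, so each chunk starts exactly where the previous one ends. A runnable sketch of just that bookkeeping (chunk durations stand in for decoded AudioBuffers; the scheduling headroom is an assumed value):

```javascript
// Given chunk durations (seconds), compute the absolute start time for each
// chunk so playback is gapless: each chunk starts where the previous one ends.
// startOffset: scheduling headroom after context.currentTime (assumed value).
function scheduleChunks(durations, currentTime, startOffset = 0.1) {
  let nextTime = currentTime + startOffset;
  return durations.map(d => {
    const startAt = nextTime;
    nextTime += d; // next chunk begins exactly where this one ends
    return startAt;
  });
}
```

In the browser each computed time would be passed to `source.start(startAt)` for the corresponding AudioBufferSourceNode.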

how to export last 3s data of a web audio stream

孤街浪徒 submitted on 2020-01-17 01:30:10
Question: I am using the Web Audio API. I need to buffer a non-stop audio stream, like a radio stream, and when I get a notification I need to grab the past 3 s of audio data and send it to a server. How can I achieve that? Node.js has a built-in Buffer, but it is not a circular buffer; if I write a non-stop stream into it, it overflows. Background to help you understand my question: I am implementing an ambient-audio-based web authentication method. Briefly, I need to compare two
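One way to keep only the most recent audio is a fixed-size ring buffer sized to 3 s of samples, overwriting the oldest data as new samples arrive; in the browser the writes would come from a ScriptProcessor or AudioWorklet callback. The class below is a hypothetical helper, not part of any API:

```javascript
// Ring buffer that retains the most recent `seconds` of mono samples.
class RingBuffer {
  constructor(seconds, sampleRate) {
    this.buf = new Float32Array(Math.round(seconds * sampleRate));
    this.pos = 0;    // next write index
    this.filled = 0; // how many valid samples are currently stored
  }
  // Append a chunk of samples, overwriting the oldest data when full.
  write(samples) {
    for (let i = 0; i < samples.length; i++) {
      this.buf[this.pos] = samples[i];
      this.pos = (this.pos + 1) % this.buf.length;
    }
    this.filled = Math.min(this.filled + samples.length, this.buf.length);
  }
  // Return the stored samples in chronological order (oldest first).
  snapshot() {
    const out = new Float32Array(this.filled);
    const start = (this.pos - this.filled + this.buf.length) % this.buf.length;
    for (let i = 0; i < this.filled; i++) {
      out[i] = this.buf[(start + i) % this.buf.length];
    }
    return out;
  }
}
```

On notification, `snapshot()` yields the last 3 s of samples ready to be serialized and sent to the server.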

Is it possible to enable WebAudio processing for Cross-Origin Resources with appropriate Access-Control-Allow-Origin headers?

|▌冷眼眸甩不掉的悲伤 submitted on 2020-01-16 18:40:50
Question: I am building an audio application that involves two servers. Server A is dedicated to audio streaming, while server B serves an HTML page that loads audio sources from A. The audio plays fine. However, when I try to do some magic with the Web Audio API, I get a message saying "MediaElementAudioSource outputs zeroes due to CORS access restrictions for {{URL of audio src}}". This is fair, because the Web Audio spec says HTMLMediaElement allows the playback of cross-origin resources. Because Web Audio
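For completeness, the standard remedy is two-sided: server A must send an `Access-Control-Allow-Origin` header, and the page on B must request the media in CORS mode by setting `crossOrigin` on the element, so the MediaElementAudioSource is no longer treated as opaque. A sketch with placeholder origins:

```javascript
// Client side (page served by B): request a CORS-enabled fetch of the media.
const audio = new Audio();
audio.crossOrigin = 'anonymous';            // makes the browser send a CORS request
audio.src = 'https://a.example.com/stream'; // placeholder URL for server A

// Server A must respond with a matching header, e.g. (placeholder origin for B):
//   Access-Control-Allow-Origin: https://b.example.com
// (or "*" if credentials are not needed)
```

With both pieces in place, `createMediaElementSource(audio)` outputs real samples instead of zeroes.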

createPanner with gainNode in safari

☆樱花仙子☆ submitted on 2020-01-16 13:13:08
Question: I want to pan left or right and also set the volume, and I have done this in other browsers, but in Safari createStereoPanner is not a function, so I used createPanner for Safari. Now, the problem is that I want to use a gain node together with the panner to set the volume; currently the gain and the pan are applied separately, but the gain should feed the panner. Here is my code: audioElement.setAttribute('src', '/Asset/sounds/calibrate.mp3'); audioElement.volume = 0.5; audioElement.play().then(function (d) { audioCtx =
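A common workaround is to feature-detect `createStereoPanner` and fall back to a 3D `PannerNode`, with a single `GainNode` chained in front so volume and pan go through one path. A sketch; the helper and its return shape are hypothetical, not any library's API:

```javascript
// Build a gain -> pan -> destination chain that works with or without
// createStereoPanner. Returns setters for volume and pan position (-1..1).
function makeGainPanChain(ctx) {
  const gain = ctx.createGain();
  let panner, setPan;
  if (typeof ctx.createStereoPanner === 'function') {
    panner = ctx.createStereoPanner();
    setPan = v => { panner.pan.value = v; };
  } else {
    // Safari fallback: emulate a stereo pan with an equal-power 3D panner.
    panner = ctx.createPanner();
    panner.panningModel = 'equalpower';
    setPan = v => panner.setPosition(v, 0, 1 - Math.abs(v));
  }
  gain.connect(panner);           // gain feeds the panner, not the destination
  panner.connect(ctx.destination);
  return {
    input: gain,                  // connect the media element source here
    setVolume: v => { gain.gain.value = v; },
    setPan,
  };
}
```

The media element source connects once, to `chain.input`, so gain and pan always apply together.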

Why is the web audio output from the oscillator not working as expected?

时间秒杀一切 submitted on 2020-01-16 09:12:27
Question: Here is the code. I want to create an audio program that can play audio from very low frequency to high frequency. However, this code produces unexpected output (even on the same device): the sound comes in suddenly, where the expected result is that it comes in gradually. I am sure my hearing is okay, because I've asked my friends to listen too; the audio also sounds different at the same frequency. WARNING: please turn your volume down very low before running this script, to avoid hurting your ears. var audioCtx
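A likely explanation for the "sudden" onset: perceived pitch is roughly logarithmic in frequency, so a linear frequency ramp spends almost all of its duration sounding "high". Sweeping with `exponentialRampToValueAtTime` moves evenly in perceived pitch instead; per the spec, that ramp follows f(t) = f0 * (f1/f0)^(t/T), which can be checked directly (the 20 Hz to 20 kHz endpoints below are example values):

```javascript
// Frequency of an exponential ramp from f0 to f1 over duration T, at time t.
// This is the curve exponentialRampToValueAtTime follows per the Web Audio spec.
function expRampValue(f0, f1, T, t) {
  return f0 * Math.pow(f1 / f0, t / T);
}

// Browser-only usage sketch (example values):
//   osc.frequency.setValueAtTime(20, audioCtx.currentTime);
//   osc.frequency.exponentialRampToValueAtTime(20000, audioCtx.currentTime + 10);
```

Halfway through a 100 Hz to 400 Hz ramp, for instance, the exponential curve sits at 200 Hz, i.e. one octave up, whereas a linear ramp would already be at 250 Hz.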

Does calling stop() on a source node trigger an ended event?

南楼画角 submitted on 2020-01-16 03:28:06
Question: According to the Web Audio API spec (http://webaudio.github.io/web-audio-api/), I can assign an event handler that runs when a source node is done playing (the onended attribute of the source node). However, if I call stop(0) on an audio source node, is that event triggered? The spec doesn't seem clear on that. I could try this out in various browsers, but I want to know the proper standard behavior. Does the ended event fire when a source node is proactively stopped? Or does the ended