The Problem
I'm working on a web application where users can sequence audio samples and optionally apply effects to the musical patterns they create using the Web Audio API. The patterns are stored as JSON data, and I'd like to do some analysis of the rendered audio of each pattern server-side. This leaves me with two options, as far as I can see:
- Run my own rendering code server-side, trying to make it as faithful as possible to the in-browser rendering. Maybe I could even pull out the Web Audio code from the Chromium project and modify it, but that seems like potentially a lot of work.
- Do the rendering client-side, hopefully faster than realtime, and then send the rendered audio to the server. This is ideal (and DRY), because only one engine is used for pattern rendering; a rough sketch of the upload step follows this list.
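Just to illustrate the second option, the upload step could look something like the sketch below. The /analyze endpoint, the X-Sample-Rate header, and the raw-Float32 payload are placeholders I've made up; the server would obviously dictate the real format.

```javascript
// Hypothetical upload step for option 2: post one channel of a rendered
// AudioBuffer to the server as raw 32-bit float PCM. The endpoint name,
// query parameter, and header are made up for illustration.
function uploadRenderedAudio(renderedBuffer, patternId) {
  var samples = renderedBuffer.getChannelData(0);  // Float32Array of channel 0

  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/analyze?pattern=' + encodeURIComponent(patternId), true);
  xhr.setRequestHeader('Content-Type', 'application/octet-stream');

  // The server also needs the sample rate to interpret the samples;
  // pass it in a header (or alongside the pattern JSON).
  xhr.setRequestHeader('X-Sample-Rate', String(renderedBuffer.sampleRate));

  xhr.send(new Blob([samples]));
}
```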
The Possible Solution
This question led me to this code sample in the Chromium repository, which seems to indicate that offline processing is a possibility. The trick seems to be constructing a webkitAudioContext with some arguments (usually, a zero-argument constructor is used). The following are my guesses at what the parameters mean:
new webkitAudioContext(2, // channels
10 * 44100, // length in samples
44100); // sample rate
I adapted the sample slightly and tested it in Chrome 23.0.1271.91 on Windows, Mac, and Linux. Here's the live example, and the results (open the Dev Tools JavaScript Console to see what's happening):
- Mac - It Works!!
- Windows - FAIL - SYNTAX_ERR: DOM Exception 12
- Linux - FAIL - SYNTAX_ERR: DOM Exception 12
The webkitAudioContext constructor I described above causes the exception on Windows and Linux.
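For reference, the full offline flow in my adapted sample looks roughly like this: construct the context with a channel count, a length in sample frames, and a sample rate, build a graph, set an oncomplete handler, and call startRendering(). This is only a minimal sketch of the unofficial, prefixed API described above, with a trivial oscillator graph standing in for my real pattern playback code.

```javascript
// Minimal sketch of the (unofficial) offline rendering flow, assuming the
// prefixed constructor described above: channels, length in sample frames,
// and sample rate.
var sampleRate = 44100;
var lengthInSeconds = 10;
var context = new webkitAudioContext(2, lengthInSeconds * sampleRate, sampleRate);

// Trivial placeholder graph: a 440 Hz oscillator routed to the destination.
var osc = context.createOscillator();
osc.frequency.value = 440;
osc.connect(context.destination);
osc.noteOn(0);  // older prefixed builds use noteOn/noteOff rather than start/stop

// Fires once the whole buffer has been rendered (ideally faster than realtime).
context.oncomplete = function (event) {
  var renderedBuffer = event.renderedBuffer;  // an AudioBuffer holding the result
  console.log('Rendered ' + renderedBuffer.length + ' frames');
  // ...send renderedBuffer's channel data to the server here...
};

context.startRendering();
```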
My Question
Offline rendering would be perfect for what I'm trying to do, but I can't find documentation anywhere, and support is less than ideal. Does anyone have more information about this? Should I expect support for this on Windows and/or Linux soon, or should I expect support to disappear from Mac soon?
I did some research on this a few months back. There is a startRendering function on the audio context, but I was told by Google people that the implementation was, at that time, due to change. I don't think this has happened yet, and it's still not part of the official documentation, so I'd be careful about building an app that depends on it.
The current implementation doesn't render any faster than realtime either (perhaps slightly faster in very light applications), and is sometimes even slower than realtime.
Your best bet is hitting the trenches and implementing Web Audio server-side if you need non-realtime rendering. If you can live with realtime rendering, there's a project at https://github.com/mattdiamond/Recorderjs that might be of interest.
Please note that I'm not a Googler myself, and what I was told was not a promise in any way.
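For completeness, the realtime-capture approach with Recorderjs looks roughly like the sketch below. It is based on that project's examples at the time (new Recorder(node), record(), stop(), exportWAV()); the master gain node and the /analyze endpoint are assumptions for illustration, so check the repo for the exact API.

```javascript
// Rough sketch of realtime capture with Recorderjs (mattdiamond/Recorderjs);
// the graph and endpoint here are illustrative, not from the asker's app.
var context = new webkitAudioContext();
var master = context.createGainNode();  // older prefixed name; newer builds use createGain()
// ...connect the pattern's nodes to `master`, and `master` to context.destination...

var recorder = new Recorder(master, { workerPath: 'recorderjs/recorderWorker.js' });
recorder.record();   // start capturing whatever flows through `master`

// ...play the pattern in realtime, then:
recorder.stop();
recorder.exportWAV(function (blob) {
  // `blob` is a WAV Blob of the captured audio; post it for analysis.
  var xhr = new XMLHttpRequest();
  xhr.open('POST', '/analyze', true);  // hypothetical endpoint
  xhr.send(blob);
});
```

The obvious drawback, as noted above, is that capture takes as long as the pattern itself, since the graph runs in realtime.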
Source: https://stackoverflow.com/questions/13590999/offline-non-realtime-rendering-with-the-web-audio-api