Question
Requested Knowledge
How to shorten (from the front) an array of audio blobs and still have playable audio.
Goal
I am ultimately trying to record a continuous 45 second loop of audio using the JS MediaRecorder API. The user will be able to push a button and the last 45s of audio will be saved. I can record, play back, and download a single recording just fine.
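Conceptually, the rolling buffer I have in mind looks something like the sketch below (the 1-second timeslice, the cap of 45 chunks, and the saveButton handler are just placeholder choices); dropping chunks from the front like this is exactly what breaks, as described under Issue.

// Sketch only: keep roughly the last 45 s of chunks by dropping old ones
// from the front, then build a Blob on demand.
let rollingChunks = [];

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  const recorder = new MediaRecorder(stream);
  recorder.ondataavailable = (e) => {
    rollingChunks.push(e.data);
    if (rollingChunks.length > 45) rollingChunks.shift(); // drop the oldest second
  };
  recorder.start(1000); // one chunk per second (assumed timeslice)
});

saveButton.onclick = () => {
  // Intended result: a playable Blob of the last ~45 s
  const lastBlob = new Blob(rollingChunks, { type: 'audio/ogg; codecs=opus' });
};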
Issue
When I have an array called chunks of, say, 1000 blobs from the MediaRecorder and use chunks.slice(500, 1000), the resulting blob array can't be used to play back or download audio. Oddly enough, chunks.slice(0, 500) still works fine.
Code
let chunks = [];

navigator.mediaDevices.getUserMedia({ audio: true })
  .then((stream) => {
    const mediaRecorder = new MediaRecorder(stream);
    mediaRecorder.ondataavailable = (e) => chunks.push(e.data);
    mediaRecorder.start(1000); // timeslice in ms (assumed) so ondataavailable fires repeatedly
  });

// At some later time, attempt to trim by dropping the first 500 chunks
const trimmedAudio = chunks.slice(500, 1000);
const blob = new Blob(trimmedAudio, { type: 'audio/ogg; codecs=opus' });
const audioURL = URL.createObjectURL(blob);
// audio is an <audio> DOM element
audio.src = audioURL;
// At this point the audio won't play
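For comparison, a Blob built from the front half of the array (or the whole array) plays fine, and downloading works with the usual object-URL/anchor approach:

// This variant works: Blob built from the front of the array
const workingBlob = new Blob(chunks.slice(0, 500), { type: 'audio/ogg; codecs=opus' });
const workingURL = URL.createObjectURL(workingBlob);
audio.src = workingURL; // plays back normally

// Downloading the same recording
const a = document.createElement('a');
a.href = workingURL;
a.download = 'recording.ogg'; // filename is arbitrary
document.body.appendChild(a);
a.click();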
Attempted solutions
Because slicing from the beginning to the middle works, I tried reversing the array, slicing, and reversing back to the original order. Failed.
I also tried leaving 16 blobs at the beginning and removing some of the middle. Failed. (Rough sketches of both attempts are below.)
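Roughly, those two attempts looked like the following (the exact indices are illustrative):

// Attempt 1: reverse, take the "last" 500 chunks, reverse back
const reversedTrim = chunks.slice().reverse().slice(0, 500).reverse();

// Attempt 2: keep 16 blobs at the beginning, cut a section out of the middle
const middleCut = chunks.slice(0, 16).concat(chunks.slice(500));

// Neither of these produces a playable Blob
const blob1 = new Blob(reversedTrim, { type: 'audio/ogg; codecs=opus' });
const blob2 = new Blob(middleCut, { type: 'audio/ogg; codecs=opus' });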
Hunch
My hunch is that the blobs don't uniformly contain data and require some conversion to an ArrayBuffer or a specific TypedArray. I have tried a number of these conversions, but still have not found a solution. Any guidance would be greatly appreciated, as I can't find any documentation at all on editing recorded blob arrays.
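For reference, the conversions I tried were along these lines: read the sliced blobs back into an ArrayBuffer and wrap it in a typed array (the variable names here are just for illustration):

// One flavour of conversion: sliced blobs -> ArrayBuffer -> Uint8Array
const sliceBlob = new Blob(chunks.slice(500, 1000), { type: 'audio/ogg; codecs=opus' });
const reader = new FileReader();
reader.onload = () => {
  const bytes = new Uint8Array(reader.result); // raw bytes of the sliced recording
  console.log('byte length:', bytes.byteLength);
  // ...fed variations of this back into new Blobs / the Web Audio API
};
reader.readAsArrayBuffer(sliceBlob);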
Update 2017-02-11
Based on Kaiido's advice to concatenate the chunks, convert the result to an ArrayBuffer, and then pass it to the Web Audio API, I tried the following.
const blob = new Blob(chunksFromMediaRecorder, { type: 'audio/ogg; codecs=opus' });
// blob is Blob {size: 32714, type: "audio/ogg; codecs=opus"}
function playFromBlob(blob) {
  const aCtx = new (window.AudioContext || window.webkitAudioContext)()
  const source = aCtx.createBufferSource()
  const fileReader = new FileReader()
  let arrayBuffer
  fileReader.onload = function() {
    arrayBuffer = this.result
    console.log('arrayBuffer', arrayBuffer) // ArrayBuffer {} with byte length 32714
    aCtx.decodeAudioData(arrayBuffer) // throws: Uncaught (in promise) DOMException: Unable to decode audio data
      .then(decodedData => {
        // use the decoded data here
        source.buffer = decodedData
        source.connect(aCtx.destination) // was audioCtx.destination, which is not defined in this scope
        source.start() // never reached, since the decode fails
      })
  }
  fileReader.readAsArrayBuffer(blob)
}
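For completeness, this is how the function is invoked with the Blob assembled above:

playFromBlob(blob); // blob is the 32714-byte Blob built from chunksFromMediaRecorder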
The above successfully creates an ArrayBuffer the same size as the Blob, but when I pass it to the decodeAudioData function, it throws the "Unable to decode audio data" error. Is there a step I am missing in the conversion process?
Update 2017-02-18
Kaiido pointed out that there is a known bug in Chrome that prevents me from shifting the array of audio data. As far as I can tell, this makes any sort of audio trimming or cutting currently impossible in Chrome (in Firefox the array trimming works; see the fiddle in Kaiido's comment).
In order to achieve my ultimate goal of a continuous 45 seconds of retroactive recording, I am now instantiating two MediaRecorders offset by 45s and swapping them out: when one is at 45s and the other is at 90s, I start a new recording and get rid of the 90s MediaRecorder. Definitely suboptimal, but the best solution I can presently find.
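A rough sketch of that swap logic, in case it helps anyone else; the 45-second constant is from my use case, and the helper and timer handling here are just illustrative, not the exact code I'm running:

// Two recorders offset by 45 s: every 45 s the older one (which has now
// recorded ~90 s) is stopped and replaced, so the surviving recorder
// always covers at least the last 45 s.
const WINDOW_MS = 45000;

function startRecorder(stream) {
  const localChunks = [];
  const recorder = new MediaRecorder(stream);
  recorder.ondataavailable = (e) => localChunks.push(e.data);
  recorder.start();
  return { recorder, localChunks };
}

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  let older = startRecorder(stream); // started at t = 0

  setTimeout(function swap() {
    const newer = startRecorder(stream); // fresh recorder every 45 s
    setTimeout(() => {
      older.recorder.stop(); // this one holds ~90 s; discard it
      older = newer;
      swap(); // keep alternating
    }, WINDOW_MS);
  }, WINDOW_MS);

  // On "save": call older.recorder.stop() and build a Blob from
  // older.localChunks in its onstop handler — it covers between 45 s and
  // 90 s of audio, which includes the last 45 s.
});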
Source: https://stackoverflow.com/questions/42127276/trim-or-cut-audio-recorded-with-mediarecorder-js