Question
Short version
I would like to stream multiple overlapping audio files (sound effects that play at certain random times), i.e. a generated audio stream that will NEVER repeat exactly the same way. Some audio files loop, some play at specific times. Some kind of real-time stream insertion would probably be needed.
What is the best way to write such server software? What protocols should be used for the streaming (I would prefer streaming over HTTP)? I would probably want to expose a URL for each configuration (tracks & timing of sound effects).
Any pointers to code/libraries? Any language is fine: java/kotlin/go/rust/ruby/python/node/...
Example
URL: https://server.org/audio?file1=loop&file2=every30s&file2_volume=0.5
Response: Audio stream
(that plays on cast devices)
The stream loops file1. Every 30s it plays file2 at 50% volume (overlaid on file1, which plays at 100%). file1 is about 10m9s long, so the combination effectively never repeats, which means we cannot just serve a pregenerated mp3 file.
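(For concreteness, here is one rough, untested way this particular example could be produced with plain ffmpeg. It assumes file2 has been pre-padded with silence to exactly 30 seconds, under the made-up name file2_30s.mp3, so that looping it makes it recur every 30 seconds:

ffmpeg -stream_loop -1 -i file1.mp3 -stream_loop -1 -i file2_30s.mp3 \
  -filter_complex "[1:a]volume=0.5[fx];[0:a][fx]amix=inputs=2:duration=longest" \
  -f mp3 pipe:1

Both inputs loop forever, so the mixed MP3 written to stdout never ends; note that amix attenuates its inputs by default.)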
Some background
I currently have an android application that plays different audio files at random. Some loop, some play every x seconds, and sometimes as many as 10 play at the same time.
Now I would like to add support for chromecast/chromecast audio/google home/... . I guess the best approach would be a server that streams the result. Every user would get his/her own stream when playing; there is no need for multiple users to listen to the same stream (even though that would probably be supported as well).
The server would basically read the URL, get the configuration, and then respond with an audio stream. It opens one or more audio files and combines/overlays them into a single stream. Some of those files are looped; others are opened at specific times and mixed into the stream. Each file plays at its own volume level (some louder, some quieter). The question is how to produce such an audio stream and how to add the different files in real time. (A sketch of how the URL parameters might map to such a configuration follows below.)
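For illustration only, a minimal sketch of how an Express handler might parse the query string from the example URL into a track configuration. The parameter names (file1, file2_volume, ...) follow the example above, and the mapping of a key to a file name is a made-up convention, not anything prescribed here:

var express = require('express');
var app = express();

app.get('/audio', function (req, res) {
  // Turn ?file1=loop&file2=every30s&file2_volume=0.5 into a track list.
  var tracks = [];
  Object.keys(req.query).forEach(function (key) {
    if (key.endsWith('_volume')) return; // consumed together with its base key below
    tracks.push({
      file: key + '.mp3',                               // e.g. "file1.mp3" (hypothetical naming scheme)
      schedule: req.query[key],                         // "loop" or "every30s"
      volume: parseFloat(req.query[key + '_volume'] || '1')
    });
  });
  res.json(tracks); // placeholder: a real server would start the mixed stream here
});

app.listen(9090);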
Answer 1:
So there are two parts to your problem:
- Mixing the audio files using different options
- Streaming that mixed output from a web server
I can help you with the latter part; you will need to figure out the first part yourself.
Below is a sample Node.js script. Create a directory and run
npm init
npm install fluent-ffmpeg express
and then save the following file:
server.js
var ff = require('fluent-ffmpeg');
var express = require('express');
var app = express();

app.get('/merged', (req, res) => {
  res.contentType('mp3');
  // res.header("Transfer-Encoding", "chunked")

  // Mix four local files: inputs 1-3 are delayed (adelay takes
  // per-channel delays in milliseconds), then all four are mixed.
  var command = ff()
    .input('1.mp3')
    .input('2.mp3')
    .input('3.mp3')
    .input('4.mp3')
    .complexFilter(`[1]adelay=2|5[b];
      [2]adelay=10|12[c];
      [3]adelay=4|6[d];
      [0][b][c][d]amix=4`)
    .outputOptions(['-f', 'mp3']); // emit MP3 so it matches the Content-Type above

  command.on('end', () => {
    console.log('Processing finished');
    // res.end()
  });
  command.on('error', (err, stdout, stderr) => {
    console.log('ffmpeg stdout: ' + stdout);
    console.log('ffmpeg stderr: ' + stderr);
  });

  // Pipe ffmpeg's output straight into the HTTP response.
  command.pipe(res, { end: true });
});

app.listen(9090);
Run it using the command below:
node server.js
Now open http://localhost:9090/merged in VLC.
For your requirement, the part below is what will change:
.complexFilter(`[1]adelay=2|5[b];
[2]adelay=10|12[c];
[3]adelay=4|6[d];
[0][b][c][d]amix=4`)
But I am no ffmpeg expert, so I can't guide you much further in that area. Perhaps that calls for another question, or for taking a lead from the many existing SO threads (a rough sketch for your specific case follows after these links):
ffmpeg - how to merge multiple audio with time offset into a video?
How to merge two audio files while retaining correct timings with ffmpeg
ffmpeg mix audio at specific time
https://superuser.com/questions/850527/combine-three-videos-between-specific-time-using-ffmpeg
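For concreteness, here is a rough, untested sketch of how the complexFilter part of the script above might change for your looping/every-30s example. As in the command-line sketch in the question, it assumes file2 has been pre-padded with silence to exactly 30 seconds (file2_30s.mp3 is a made-up name), so that looping it makes it recur every 30 seconds:

var command = ff()
  .input('file1.mp3')
  .inputOptions(['-stream_loop', '-1'])   // loop the bed track forever
  .input('file2_30s.mp3')                 // hypothetical: file2 padded to exactly 30s
  .inputOptions(['-stream_loop', '-1'])   // looping the padded file repeats it every 30s
  .complexFilter(`[1]volume=0.5[fx];
    [0][fx]amix=inputs=2:duration=longest`)
  .outputOptions(['-f', 'mp3']);

In fluent-ffmpeg, inputOptions() applies to the most recently added input, which is how -stream_loop is attached per file here. Keep in mind that amix attenuates its inputs by default, so you may need to compensate with additional volume filters.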
Source: https://stackoverflow.com/questions/49483191/stream-overlapping-audio-files-to-chromecast-audio