Question
I'm writing C++ code which plays both digital audio (synthesised music) and MIDI music at the same time (using the RtMidi library). The digitised music will play out of the computer's audio device, but the MIDI music could play out of an external synthesiser. I want to play a song that uses both digitised instruments and MIDI instruments, and I am not sure of the best way to synchronise these two audio streams:
- It is not possible to use a function like Sleep(), as the delay it produces is both uneven and too long for my needs (which are on the order of one millisecond). If Sleep() regularly waits 5 ms when only 1 ms was requested, the resulting song tempo will be wrong, and unless the delay is exact each time it's called, the tempo will be uneven.
- Counting the number of samples placed into the audio buffer provides very accurate timing between notes for the digital audio (a minimum delay of one sample, about 0.02 ms at 48 kHz), but this timing can't be used for the MIDI. Because the audio is buffered, the notes are synthesised in bursts (filling one audio buffer at a time, as quickly as possible), so a bunch of MIDI notes get sent with no delay between them every time a digital audio buffer needs to be refilled (a simplified sketch of this sample-counting clock follows below this list).
- Playing live MIDI data has no timing information, so a note plays as soon as it is sent. Therefore a note can't be scheduled to play at a later time, so I am required to accurately send MIDI events at the correct time myself.
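A simplified sketch of the sample-counting clock I mean (placeholder names, not my actual code):

#include <cstdint>

static uint64_t samples_written = 0;          // advanced as samples are rendered into the buffer
static const double kSampleRate = 48000.0;

// Digital playback position in milliseconds, accurate to one sample (~0.02 ms).
double song_position_ms()
{
    return 1000.0 * samples_written / kSampleRate;
}

// The catch: the renderer fills a whole buffer at once, so any MIDI sends driven
// from inside that loop come out in bursts rather than in real time.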
Currently I am using nanosleep() - which only works under Linux, not Windows - to wait for the correct time between notes. This keeps both the digital audio and the MIDI data synchronised; however, nanosleep() is not very consistent, so the resulting tempo is very uneven.
Can anyone think of a way to retain accurate timing between notes for both the digital audio as well as the MIDI data?
Answer 1:
If you are willing to use Boost, it has CPU-precision timers. If not, on Windows there are the QueryPerformanceCounter and QueryPerformanceFrequency functions, which can be used for CPU-based timing and will certainly suit your needs. There are plenty of Timer class implementations around the web, some of which work on both Windows and *nix systems.
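As a rough illustration (not a drop-in solution; the helper names are made up for the example), a QueryPerformanceCounter-based wait might look like this:

// Minimal sketch of high-resolution timing with QueryPerformanceCounter.
// wait_until_ms() busy-waits, which trades CPU time for sub-millisecond accuracy.
#include <windows.h>

static LARGE_INTEGER g_start, g_freq;   // g_freq = ticks per second, fixed at boot

void timer_init()
{
    QueryPerformanceFrequency(&g_freq);
    QueryPerformanceCounter(&g_start);
}

double elapsed_ms()
{
    LARGE_INTEGER now;
    QueryPerformanceCounter(&now);
    return 1000.0 * (now.QuadPart - g_start.QuadPart) / g_freq.QuadPart;
}

// Spin until the song clock reaches target_ms, then send the MIDI event.
void wait_until_ms(double target_ms)
{
    while (elapsed_ms() < target_ms)
        ;   // could Sleep(0) here to yield, at the cost of some jitter
}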
Answer 2:
The first issue is that you need to know how much audio has actually passed through the audio device. If your latency is low enough, you might be able to hazard a guess from the amount of data you've pushed through, but the latency between that and the actual playback is a moving target, so you should try to get that information from the audio hardware. That information is available, so use it: the "jitter" you will get from errors in latency measurement can affect the synchronization in a musically noticeable way.
If you must use sleep for timing, there are two issues that will make it sleep longer than requested: 1. priority (if another process/thread has higher priority, it will run even if your timer has expired) and 2. system latency (if the system takes 5 milliseconds to swap processes/threads, that can be added to your requested delay time). These kinds of delays are musically relevant. Most MIDI APIs have a "sequencer" API that lets you queue data in advance, so you can avoid relying on system timers.
You might find this document useful, even if you are not using PortAudio for audio I/O:
http://www.portaudio.com/docs/portaudio_sync_acmc2003.pdf
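For example (a sketch assuming PortAudio at 48 kHz; schedule_midi_events_between() and render_digital_audio() are placeholders for your own code), the callback's time info tells you when each buffer will actually reach the DAC, which is a much better clock for MIDI scheduling than "bytes pushed so far":

#include <portaudio.h>

// Placeholders for the questioner's own code:
void schedule_midi_events_between(double startSeconds, double endSeconds);
void render_digital_audio(float* out, unsigned long frames);

// outputBufferDacTime is the stream time at which the first sample of this
// buffer will hit the DAC, so it already includes the output latency.
static int audioCallback(const void* input, void* output,
                         unsigned long frameCount,
                         const PaStreamCallbackTimeInfo* timeInfo,
                         PaStreamCallbackFlags statusFlags,
                         void* userData)
{
    const double sampleRate = 48000.0;                      // assumed rate
    double bufferStart = timeInfo->outputBufferDacTime;     // seconds, stream clock
    double bufferEnd   = bufferStart + frameCount / sampleRate;

    // Hand MIDI events whose timestamps fall inside [bufferStart, bufferEnd)
    // to a separate thread that sends them when Pa_GetStreamTime() reaches each
    // event's time, so the MIDI lines up with the audio that is about to be
    // heard rather than the audio that was just rendered.
    schedule_midi_events_between(bufferStart, bufferEnd);

    render_digital_audio(static_cast<float*>(output), frameCount);
    return paContinue;
}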
Answer 3:
The answer to this lies not in small buffers, but in large ones.
Let's take an example of a 3-minute song.
One first renders the digital part and "tags" it with MIDI notes. Then one starts it playing and triggers each MIDI note when its time comes, perhaps using a std::vector to hold an in-order list of events. The synchronization can be adjusted with an overall time offset:
HORRIBLE incomplete but hopefully demonstrative pseudocode on the topic:
start_digital_playing_thread();                     // begin playback of the pre-rendered digital audio
int midi_time_sync = 10;                            // overall MIDI offset in ms, adjustable for sync
double time = digital_playback_position_ms();       // clock derived from the digital playback
if (time >= (midi_note[50]->time + midi_time_sync))
    send_midi_note(midi_note[50]);                  // play the note
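A slightly fuller sketch of the same idea (assuming the playback thread exposes its position in milliseconds; all names here are illustrative, not a real API):

#include <cstddef>
#include <vector>

struct MidiEvent {
    double time_ms;     // when the note should sound, relative to song start
    int    note;
    int    velocity;
};

void send_midi_note(const MidiEvent& e);    // wrapper around your MIDI output, e.g. RtMidiOut::sendMessage()

// events is sorted by time_ms; next is the index of the first unsent event.
void pump_midi(const std::vector<MidiEvent>& events, std::size_t& next,
               double playback_ms, double midi_time_sync_ms)
{
    while (next < events.size() &&
           playback_ms >= events[next].time_ms + midi_time_sync_ms)
    {
        send_midi_note(events[next]);
        ++next;
    }
}

Call pump_midi() regularly (say every millisecond) from the thread that owns the MIDI output, passing in the digital playback thread's current position as playback_ms.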
Source: https://stackoverflow.com/questions/10969254/accurate-delays-between-notes-when-synthesising-a-song