How to program a real-time, accurate audio sequencer on the iPhone?

北荒 2021-01-30 02:56

I want to program a simple audio sequencer on the iPhone, but I can't get accurate timing. Over the last few days I tried every audio technique available on the iPhone, starting from Audio

9 Answers
  • 2021-01-30 03:05

    NSTimer has absolutely no guarantees on when it fires. It schedules itself for a fire time on the runloop, and when the runloop gets around to timers, it sees if any of the timers are past-due. If so, it runs their selectors. Excellent for a wide variety of tasks; useless for this one.

    Step one here is that you need to move audio processing to its own thread and get off the UI thread. For timing, you can build your own timing engine using normal C approaches, but I'd start by looking at CAAnimation and especially CAMediaTiming.

    Keep in mind that there are many things in Cocoa that are designed only to run on the main thread. Don't, for instance, do any UI work on a background thread. In general, read the docs carefully to see what they say about thread-safety. But generally, if there isn't a lot of communication between the threads (which there shouldn't be in most cases IMO), threads are pretty easy in Cocoa. Look at NSThread.
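
    For illustration, a minimal sketch of that split (the `running` property and the `sequencerLoop` and `updateDisplay` method names are hypothetical, not from any framework):

    - (void)start {
        // Run the timing loop off the main thread.
        [NSThread detachNewThreadSelector:@selector(sequencerLoop)
                                 toTarget:self
                               withObject:nil];
    }

    - (void)sequencerLoop {
        @autoreleasepool {
            while (self.running) {
                // ... drive your timing engine and trigger sounds here ...

                // UI updates must go back to the main thread.
                [self performSelectorOnMainThread:@selector(updateDisplay)
                                       withObject:nil
                                    waitUntilDone:NO];
            }
        }
    }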

  • 2021-01-30 03:08

    You've had a few good answers here, but I thought I'd offer some code for a solution that worked for me. When I began researching this, I actually looked for how run loops in games work and found a nice solution that has been very performant for me using mach_absolute_time.

    You can read a bit about what it does here, but the short of it is that it returns time with nanosecond precision. However, the number it returns isn't in nanoseconds directly; its unit varies with the hardware you're on, so you first have to fill in a mach_timebase_info_data_t struct and use its ratio to convert the raw value to nanoseconds.

    #include <mach/mach_time.h>

    // mach_timebase_info gives a numerator and denominator for converting
    // the raw ticks from mach_absolute_time into actual nanoseconds.
    mach_timebase_info_data_t info;
    mach_timebase_info(&info);

    uint64_t currentTime = mach_absolute_time();

    // Convert ticks to nanoseconds.
    currentTime *= info.numer;
    currentTime /= info.denom;
    

    And if we wanted it to tick every 16th of a second (16 ticks per second), you could do something like this:

    uint64_t interval = (1000 * 1000 * 1000) / 16;
    uint64_t nextTime = currentTime + interval;
    

    At this point, currentTime contains some number of nanoseconds, and we want a tick every time interval nanoseconds pass, so the next deadline is stored in nextTime. You can then set up a while loop, something like this:

    while (_running) {
        if (currentTime >= nextTime) {
            // Do some work, play the sound files or whatever you like
            nextTime += interval;
        }

        // Busy-wait: re-read the clock and convert ticks to nanoseconds.
        currentTime = mach_absolute_time();
        currentTime *= info.numer;
        currentTime /= info.denom;
    }
    

    The mach_timebase_info stuff is a bit confusing, but once you get it in there, it works very well. It's been extremely performant for my apps. It's also worth noting that you won't want to run this on the main thread, so dishing it off to its own thread is wise. You could put all the above code in its own method called run, and start it with something like:

    [NSThread detachNewThreadSelector:@selector(run) toTarget:self withObject:nil];
    

    All the code you see here is a simplification of a project I open-sourced; you can see it and run it yourself here, if that's of any help. Cheers.

  • 2021-01-30 03:14

    I thought a better approach to time management would be to have a BPM setting (120, for example) and derive all timing from that instead. Measurements in minutes and seconds are nearly useless when writing music or building music applications.

    If you look at sequencing apps, they all go by beats rather than clock time; a waveform editor, by contrast, works in minutes and seconds.

    I'm not sure of the best way to implement this in code, but I think this approach will save you a lot of headaches down the road; a rough sketch follows.
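
    For illustration, here is a minimal sketch (the function name is mine, not from any library) of deriving a tick interval from a BPM setting, usable with a loop like the mach_absolute_time one above:

    // Nanoseconds per sixteenth note at a given tempo. One beat (quarter
    // note) lasts 60/bpm seconds; a sixteenth note is a quarter of that.
    uint64_t NanosecondsPerSixteenth(double bpm) {
        double secondsPerBeat = 60.0 / bpm;
        return (uint64_t)((secondsPerBeat / 4.0) * 1e9);
    }

    // At 120 BPM this yields 125,000,000 ns (125 ms) per sixteenth note.
    uint64_t interval = NanosecondsPerSixteenth(120.0);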

  • 2021-01-30 03:16

    If constructing your sequence ahead of time is not a limitation, you can get precise timing using an AVMutableComposition. This would play 4 sounds evenly spaced over 1 second:

    // setup your composition

    #import <AVFoundation/AVFoundation.h>

    AVMutableComposition *composition = [[AVMutableComposition alloc] init];
    NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey : @YES};

    for (NSInteger i = 0; i < 4; i++)
    {
      AVMutableCompositionTrack *track = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
      NSURL *url = [[NSBundle mainBundle] URLForResource:[NSString stringWithFormat:@"sound_file_%ld", (long)i] withExtension:@"caf"];
      AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:options];
      AVAssetTrack *assetTrack = [asset tracksWithMediaType:AVMediaTypeAudio].firstObject;
      CMTimeRange timeRange = assetTrack.timeRange;

      // Insert each sound i/4 seconds into the composition.
      NSError *error;
      BOOL success = [track insertTimeRange:timeRange ofTrack:assetTrack atTime:CMTimeMake(i, 4) error:&error];
      NSAssert(success, @"error creating composition: %@", error);
    }

    AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:composition];
    self.avPlayer = [[AVPlayer alloc] initWithPlayerItem:playerItem];

    // later when you want to play

    [self.avPlayer seekToTime:kCMTimeZero];
    [self.avPlayer play];
    

    Original credit for this solution: http://forum.theamazingaudioengine.com/discussion/638#Item_5

    And more detail: precise timing with AVMutableComposition
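
    One caveat I'd add (my addition, not part of the original answer): a plain seekToTime: may land only near the requested time. If the replay has to restart exactly at zero, AVPlayer's tolerance-based seek can be used instead:

    // Zero-tolerance seek forces an exact seek instead of a fast,
    // approximate one; play only once the seek has finished.
    [self.avPlayer seekToTime:kCMTimeZero
              toleranceBefore:kCMTimeZero
               toleranceAfter:kCMTimeZero
            completionHandler:^(BOOL finished) {
        if (finished) {
            [self.avPlayer play];
        }
    }];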

  • 2021-01-30 03:19

    I opted to use a RemoteIO AudioUnit and a background thread that fills swing buffers (double buffering: one buffer is read while the other is written, and then they swap) using the Audio File Services API. The buffers are then processed and mixed on the AudioUnit thread. The AudioUnit thread signals the background thread when it should start loading the next swing buffer. All the processing was in C using the POSIX thread API; all the UI work was in Objective-C.

    IMO, the AudioUnit/Audio File Services approach affords the greatest degree of flexibility and control. A rough sketch of the swing-buffer handoff is below.
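
    To make the swing-buffer idea concrete, here is a minimal sketch of the render-callback side (all names and the mono non-interleaved float format are my assumptions; the original project's code may look quite different):

    #include <AudioToolbox/AudioToolbox.h>
    #include <pthread.h>

    #define kSwingBufferFrames 4096

    typedef struct {
        float           buffers[2][kSwingBufferFrames]; // double buffer
        int             readIndex;   // buffer the callback is reading
        UInt32          readFrame;   // position within that buffer
        pthread_cond_t  needsFill;   // signals the loader thread
    } SwingBuffer;

    // RemoteIO render callback: consume the current buffer; when it is
    // exhausted, swap buffers and signal the background thread to refill
    // the one just finished.
    static OSStatus RenderCallback(void *inRefCon,
                                   AudioUnitRenderActionFlags *ioActionFlags,
                                   const AudioTimeStamp *inTimeStamp,
                                   UInt32 inBusNumber,
                                   UInt32 inNumberFrames,
                                   AudioBufferList *ioData) {
        SwingBuffer *sb = (SwingBuffer *)inRefCon;
        float *out = (float *)ioData->mBuffers[0].mData;

        for (UInt32 i = 0; i < inNumberFrames; i++) {
            out[i] = sb->buffers[sb->readIndex][sb->readFrame++];
            if (sb->readFrame == kSwingBufferFrames) {
                sb->readIndex ^= 1;
                sb->readFrame = 0;
                // Wake the loader; pthread_cond_signal does not block,
                // but a production callback should avoid any locking.
                pthread_cond_signal(&sb->needsFill);
            }
        }
        return noErr;
    }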

    Cheers,

    Ben

  • 2021-01-30 03:23

    One additional thing that may improve real-time responsiveness is setting the Audio Session's kAudioSessionProperty_PreferredHardwareIOBufferDuration to a few milliseconds (such as 0.005 seconds) before making your Audio Session active. This will cause RemoteIO to request shorter callback buffers more often (on a real-time thread). Don't take any significant time in these real-time audio callbacks, or you will kill the audio thread and all audio for your app.

    Just counting shorter RemoteIO callback buffers is on the order of 10X more accurate and lower latency than using an NSTimer. And counting samples within an audio callback buffer for positioning the start of your sound mix will give you sub-millisecond relative timing.
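
    A minimal sketch of setting that property with the (since-deprecated) C Audio Session API the answer refers to, done before activating the session:

    #include <AudioToolbox/AudioToolbox.h>

    // Ask for ~5 ms callback buffers; the hardware may round this to a
    // nearby buffer size, so treat it as a hint, not a guarantee.
    Float32 preferredBufferDuration = 0.005f;
    AudioSessionInitialize(NULL, NULL, NULL, NULL);
    AudioSessionSetProperty(kAudioSessionProperty_PreferredHardwareIOBufferDuration,
                            sizeof(preferredBufferDuration),
                            &preferredBufferDuration);
    AudioSessionSetActive(true);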
