Spectrogram from AVAudioPCMBuffer using Accelerate framework in Swift

广开言路 2020-12-28 09:16

I'm trying to generate a spectrogram from an AVAudioPCMBuffer in Swift. I install a tap on an AVAudioMixerNode and receive a callback with the audio buffer.

2 Answers
  • 2020-12-28 09:29
    1. Hacky way: you can just cast the float array to interleaved complex values, where real and imaginary parts alternate one after another (this is what `DSPComplex` is).
    2. It depends on whether the audio is interleaved or not. If it is interleaved (most of the cases), left- and right-channel samples alternate in the array, so read one channel with a stride of 2.
    3. The lowest frequency in your case is the frequency whose period is 1024 samples. At 44100 Hz that window is about 23 ms, so the lowest frequency of the spectrum will be 1/(1024/44100) ≈ 43 Hz. The bins are spaced linearly at multiples of this: ~86 Hz, ~129 Hz, and so on.
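    Putting the three points together, a minimal sketch using Accelerate (the 1024-sample frame, 44.1 kHz sample rate, and the synthetic `samples` input are assumptions for illustration; in practice the samples would come from the tap's buffer):

    ```swift
    import Accelerate
    import Foundation

    let n = 1024
    let log2n = vDSP_Length(log2(Float(n)))
    let sampleRate: Float = 44100

    // Frequency spacing between FFT bins (point 3): ≈ 43 Hz.
    let binWidth = sampleRate / Float(n)

    // Stand-in for one channel of the tap's buffer: a sine at exactly bin 1.
    // For interleaved stereo, copy one channel out with a stride of 2 (point 2).
    let samples = (0..<n).map { Float(sin(2 * .pi * Double(binWidth) * Double($0) / Double(sampleRate))) }

    guard let fftSetup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { fatalError() }
    defer { vDSP_destroy_fftsetup(fftSetup) }

    var realp = [Float](repeating: 0, count: n / 2)
    var imagp = [Float](repeating: 0, count: n / 2)
    var magnitudes = [Float](repeating: 0, count: n / 2)

    realp.withUnsafeMutableBufferPointer { realPtr in
        imagp.withUnsafeMutableBufferPointer { imagPtr in
            var split = DSPSplitComplex(realp: realPtr.baseAddress!, imagp: imagPtr.baseAddress!)
            // The "cast" from point 1: view pairs of floats as DSPComplex and
            // deinterleave them into split-complex form.
            samples.withUnsafeBufferPointer { ptr in
                ptr.baseAddress!.withMemoryRebound(to: DSPComplex.self, capacity: n / 2) {
                    vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
                }
            }
            vDSP_fft_zrip(fftSetup, &split, 1, log2n, FFTDirection(kFFTDirection_Forward))
            // Squared magnitude per bin; bin k is at k * binWidth Hz.
            vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(n / 2))
        }
    }
    ```

    Each column of the spectrogram is one such `magnitudes` array; slide the 1024-sample window along the audio to get the next column.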
  • 2020-12-28 09:42

    4. You have installed a callback handler on an audio bus. This likely runs at real-time thread priority, and frequently. You should not do anything there that has the potential to block, as it will likely result in priority inversion and glitchy audio. In particular, don't:

    1. Allocate memory: `realp` and `imagp` built with `[Float](...)` (shorthand for `Array<Float>`) are likely heap-allocated. Pre-allocate these buffers once, outside the callback.

    2. Call lengthy operations such as `vDSP_create_fftsetup()`, which also allocates memory and initialises it. Again, create the FFT setup once, outside your callback, and reuse it.
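    One way to structure this (the class and method names below are my own sketch, not from the question): do all allocation in `init`, so the tap closure only touches pre-allocated storage and makes vDSP calls:

    ```swift
    import Accelerate
    import Foundation

    // Sketch: everything that allocates (FFT setup, work buffers) happens once
    // in init; process(_:) does no allocation and is safe to call from the tap.
    final class SpectrumAnalyzer {
        private let n: Int
        private let log2n: vDSP_Length
        private let fftSetup: FFTSetup
        private var realp: [Float]
        private var imagp: [Float]
        private var magnitudes: [Float]

        init?(frameCount: Int) {
            // vDSP radix-2 FFTs need a power-of-two frame size.
            guard frameCount > 0, frameCount & (frameCount - 1) == 0 else { return nil }
            n = frameCount
            log2n = vDSP_Length(log2(Float(frameCount)))
            guard let setup = vDSP_create_fftsetup(log2n, FFTRadix(kFFTRadix2)) else { return nil }
            fftSetup = setup
            realp = [Float](repeating: 0, count: frameCount / 2)
            imagp = [Float](repeating: 0, count: frameCount / 2)
            magnitudes = [Float](repeating: 0, count: frameCount / 2)
        }

        deinit { vDSP_destroy_fftsetup(fftSetup) }

        // Call from the tap with a pointer to n mono samples, e.g.
        // buffer.floatChannelData![0]. Note: vDSP_fft_zrip packs the Nyquist
        // component into imagp[0], so magnitudes[0] mixes DC and Nyquist.
        func process(_ samples: UnsafePointer<Float>) -> [Float] {
            realp.withUnsafeMutableBufferPointer { r in
                imagp.withUnsafeMutableBufferPointer { i in
                    var split = DSPSplitComplex(realp: r.baseAddress!, imagp: i.baseAddress!)
                    samples.withMemoryRebound(to: DSPComplex.self, capacity: n / 2) {
                        vDSP_ctoz($0, 2, &split, 1, vDSP_Length(n / 2))
                    }
                    vDSP_fft_zrip(fftSetup, &split, 1, log2n, FFTDirection(kFFTDirection_Forward))
                    vDSP_zvmags(&split, 1, &magnitudes, 1, vDSP_Length(n / 2))
                }
            }
            // Returning the array is fine for a sketch; a strict real-time path
            // would write into a caller-supplied buffer to avoid copy-on-write.
            return magnitudes
        }
    }
    ```

    The analyzer would be created once next to `installTap(onBus:bufferSize:format:block:)`, with `frameCount` matching the tap's buffer size.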
