Creating an iPhone music Visualiser based on Fourier Transform

Submitted by 左心房为你撑大大i on 2019-12-20 10:55:03

Question


I am designing a music visualiser application for the iPhone.

I was thinking of doing this by picking up data via the iPhone's mic, running a Fourier Transform on it and then creating visualisations.

The best example of this I have been able to find is aurioTouch, which produces a perfect graph based on FFT data. However, I have been struggling to understand / replicate aurioTouch in my own project.

I cannot work out where exactly aurioTouch picks up the data from the microphone before it runs the FFT.

Also, are there any other code examples that I could use to do this in my project? Or any other tips?


Answer 1:


Since I am planning to use the mic input myself, I thought your question was a good opportunity to get familiar with the relevant sample code.

I will retrace my steps of reading backwards through the code:

  1. Starting off in SpectrumAnalysis.cpp (since it is obvious the audio has to get to this class somehow), you can see that the class method SpectrumAnalysisProcess has a 2nd input argument, const int32_t* inTimeSig --- a promising starting point, since the input time signal is what we are looking for.
  2. Using the right-click menu item Find in project on this method, you can see that apart from the obvious definition and declaration, this method is used only inside the FFTBufferManager::ComputeFFT method, where it gets mAudioBuffer as its 2nd argument (the inTimeSig from step 1). Searching for this class data member gives more than a few results, but most of them are again just definitions, memory allocation, etc. The interesting search result is where mAudioBuffer is used as an argument to memcpy, inside the method FFTBufferManager::GrabAudioData.
  3. Again using the search option, we see that FFTBufferManager::GrabAudioData is called only once, inside a method called PerformThru. This method has an input argument called ioData (sounds promising) of type AudioBufferList.
  4. Looking for PerformThru, we see it is used in the following line: inputProc.inputProc = PerformThru; - we're almost there: it looks like registering a callback function. Looking up the type of inputProc, we indeed see it is AURenderCallbackStruct - that's it. The callback is invoked by the audio framework, which is responsible for feeding it with samples.

You will probably have to read the documentation for AURenderCallbackStruct (or better, the Audio Unit Hosting Guide) to get a deeper understanding, but I hope this gives you a good starting point.
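For display, raw FFT magnitudes are usually converted to decibels and clamped to a floor, which is what gives a spectrum plot its familiar shape. A minimal sketch of that last step (the floor value and tiny epsilon guarding log10(0) are arbitrary choices here, not values from aurioTouch):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Convert linear magnitudes to decibels, clamped to a display floor
// so silence does not map to -infinity on screen.
std::vector<double> toDecibels(const std::vector<double>& mags,
                               double floorDb = -80.0) {
    std::vector<double> out(mags.size());
    for (size_t i = 0; i < mags.size(); ++i) {
        double db = 20.0 * std::log10(std::max(mags[i], 1e-12));
        out[i] = std::max(db, floorDb);
    }
    return out;
}
```

A magnitude of 1.0 maps to 0 dB, and anything at or near zero is pinned to the -80 dB floor, ready to be scaled to bar heights.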



Source: https://stackoverflow.com/questions/4505694/creating-an-iphone-music-visualiser-based-on-fourier-transform
