AVAudioPlayer - Metering - Want to build a waveform (graph)

闹比i 2021-01-06 01:12

I need to build a visual graph that represents voice levels (dB) in a recorded file. I tried to do it this way:

NSError *error = nil;
// recordingURL is the NSURL of the recorded file
AVAudioPlayer *meterPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:recordingURL error:&error];
meterPlayer.meteringEnabled = YES; // required before -updateMeters will report levels
4 Answers
  • 2021-01-06 01:40

    I haven't used it myself, but Apple's avTouch iPhone sample has bar graphs powered by AVAudioPlayer, and you can easily check to see how they do it.
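    For reference, the metering API that avTouch builds on looks roughly like this. This is a minimal sketch, not avTouch's actual code; `recordingURL` and the timer callback are assumed names:

    ```objectivec
    #import <AVFoundation/AVFoundation.h>

    // Enable metering before playback, then poll levels on a timer (as avTouch does).
    AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:recordingURL error:NULL];
    player.meteringEnabled = YES;
    [player play];

    // In the periodic timer callback:
    [player updateMeters]; // refresh the cached level values
    for (NSUInteger ch = 0; ch < player.numberOfChannels; ch++) {
        float dB = [player averagePowerForChannel:ch]; // -160 dB (silence) .. 0 dB (full scale)
        // feed dB into the bar graph for channel ch
    }
    ```

    Note this only reports levels while the file is actually playing, which is the limitation the last answer below points out.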

  • 2021-01-06 01:40

    Ok guys, it seems I'm going to answer my own question again: http://www.supermegaultragroovy.com/blog/2009/10/06/drawing-waveforms/ Not a lot of concrete detail, but at least you'll know which Apple docs to read.

  • 2021-01-06 01:41

    I just want to help others who run into this same question and spend a lot of time searching. To save you that time, here is my answer. I dislike how some people here treat this as some kind of secret...

    After searching through articles about Extended Audio File Services, Audio Queues, and AVFoundation,

    I realised that I should use AVFoundation. The reason is simple: it is the most recent framework, and it is Objective-C rather than C++-flavored.

    The steps to do it are not complicated:

    1. Create an AVAsset from the audio file
    2. Create an AVAssetReader from the AVAsset
    3. Get an AVAssetTrack from the AVAsset
    4. Create an AVAssetReaderTrackOutput from the AVAssetTrack
    5. Add the AVAssetReaderTrackOutput to the AVAssetReader and start reading out the audio data

    From the AVAssetReaderTrackOutput you can call copyNextSampleBuffer repeatedly (in a loop) to read all the data out.

    Each copyNextSampleBuffer call gives you a CMSampleBufferRef, from which you can get an AudioBufferList via CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer. An AudioBufferList is an array of AudioBuffers, and each AudioBuffer holds a chunk of audio data in its mData member.

    You can implement the above with Extended Audio File Services as well, but I think the AVFoundation approach is easier.

    So, next question: what do you do with the mData? Note that when you create the AVAssetReaderTrackOutput, you can specify its output format, so specify linear PCM (lpcm).

    Then the mData you finally get is actually an array of amplitude values in float format.
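    The steps above can be sketched as follows. This is a hedged sketch, not a complete program: `fileURL` is an assumed input, error handling is omitted, and it assumes a float, interleaved lpcm output as described above:

    ```objectivec
    #import <AVFoundation/AVFoundation.h>
    #import <CoreMedia/CoreMedia.h>

    // 1. AVAsset from the audio file
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
    // 2. AVAssetReader from the asset
    AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:NULL];
    // 3. the audio track of the asset
    AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
    // 4. track output, decoding to 32-bit float, interleaved linear PCM
    NSDictionary *settings = @{ AVFormatIDKey: @(kAudioFormatLinearPCM),
                                AVLinearPCMIsFloatKey: @YES,
                                AVLinearPCMBitDepthKey: @32,
                                AVLinearPCMIsNonInterleaved: @NO,
                                AVLinearPCMIsBigEndianKey: @NO };
    AVAssetReaderTrackOutput *output =
        [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:settings];
    // 5. attach the output and start reading
    [reader addOutput:output];
    [reader startReading];

    CMSampleBufferRef sampleBuffer;
    while ((sampleBuffer = [output copyNextSampleBuffer]) != NULL) {
        AudioBufferList bufferList;
        CMBlockBufferRef blockBuffer;
        CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(
            sampleBuffer, NULL, &bufferList, sizeof(bufferList), NULL, NULL,
            kCMSampleBufferFlag_AudioBufferList_Assure16ByteAlignment, &blockBuffer);
        for (UInt32 i = 0; i < bufferList.mNumberBuffers; i++) {
            float *samples = (float *)bufferList.mBuffers[i].mData;
            UInt32 count = bufferList.mBuffers[i].mDataByteSize / sizeof(float);
            // samples[0..count-1] are amplitudes in [-1, 1]; downsample them for the waveform
        }
        CFRelease(blockBuffer);
        CFRelease(sampleBuffer);
    }
    ```

    Because this reads the file directly rather than playing it, it can build the whole waveform much faster than real time.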

    Easy, right? Though it took me a lot of time to piece this together from bits here and there.

    Two useful resources to share. Read this article to learn the basic terms and concepts: https://www.mikeash.com/pyblog/friday-qa-2012-10-12-obtaining-and-interpreting-audio-data.html

    Sample code: https://github.com/iluvcapra/JHWaveform You can copy most of the code mentioned above directly from this sample and use it for your own purposes.

  • 2021-01-06 01:47

    I don't think you can use AVAudioPlayer based on your constraints. Even if you could get it to "start" without actually playing the sound file, it would only help you build a graph as fast as the audio file would stream. What you're talking about is doing static analysis of the sound, which will require a much different approach. You'll need to read in the file yourself and parse it manually. I don't think there's a quick solution using anything in the SDK.
