accelerate-framework

iOS Accelerate Framework vImage - Performance improvement?

Posted by て烟熏妆下的殇ゞ on 2019-12-03 08:25:06
I've been working with OpenCV and Apple's Accelerate framework, and I find the performance of Accelerate to be slow and Apple's documentation limited. Take this function, for example:

    void equalizeHistogram(const cv::Mat &planar8Image, cv::Mat &equalizedImage)
    {
        cv::Size size = planar8Image.size();
        vImage_Buffer planarImageBuffer = {
            .width = static_cast<vImagePixelCount>(size.width),
            .height = static_cast<vImagePixelCount>(size.height),
            .rowBytes = planar8Image.step,
            .data = planar8Image.data
        };
        vImage_Buffer equalizedImageBuffer = {
            .width = static_cast<vImagePixelCount>(size.width),
            .height = static…

FFT Pitch Detection for iOS using Accelerate Framework?

Posted by 家住魔仙堡 on 2019-12-03 00:25:34
I have been reading up on FFT and pitch detection for a while now, but I'm having trouble piecing it all together. I have worked out that the Accelerate framework is probably the best way to go, and I have read Apple's example code to see how to use it for FFTs. What is the input data for the FFT if I want to run pitch detection in real time? Do I just pass in the audio stream from the microphone? How would I do this? Also, after I get the FFT output, how can I get the frequency from that? I have been reading everywhere, and can't find any examples or explanations…

how to read VBR audio in novacaine (as opposed to PCM)

Posted by 最后都变了- on 2019-12-02 20:52:36
The creator of novacaine offered example code where audio data is read from a file and fed to a ring buffer. When the file reader is created, though, the output is forced to be PCM:

    - (id)initWithAudioFileURL:(NSURL *)urlToAudioFile samplingRate:(float)thisSamplingRate numChannels:(UInt32)thisNumChannels
    {
        ...
        // We're going to impose a format upon the input file
        // Single-channel float does the trick.
        _outputFormat.mSampleRate = self.samplingRate;
        _outputFormat.mFormatID = kAudioFormatLinearPCM;
        _outputFormat.mFormatFlags = kAudioFormatFlagIsFloat;
        _outputFormat.mBytesPerPacket = 4*self…

ios basic image processing extract red channel

Posted by 喜欢而已 on 2019-12-02 08:48:40
Straightforward question: I need to extract color components from an image. In Matlab this is done by choosing the first matrix for red. In the realm of the Accelerate framework, whose documentation is reference-based, I can't find an easy way of doing this without resorting to a graphics context. Thanks in advance!

    UIImage* image = // An image
    CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
    const UInt8* pixelBytes = CFDataGetBytePtr(pixelData);
    // 32-bit RGBA
    for (int i = 0; i < CFDataGetLength(pixelData); i += 4) {
        pixelBytes[i]   // red
        pixelBytes[i+1] // green…

Using Apple's Accelerate framework, FFT, Hann windowing and Overlapping

Posted by 笑着哭i on 2019-12-01 21:14:00
I'm trying to set up FFT for a project and really haven't got a clear picture of things yet. Basically, I am using Audio Units to get the data from the device's microphone. I then want to do an FFT on that data. This is what I understand so far: I need to set up a circular buffer for my data. On each filled buffer, I apply a Hann window, then do an FFT. However, I still need some help with overlapping. To get more precise results, I understand I need to use it, especially since I am using windowing. However, I can't find anything on this. Here's what I have so far (used for pitch detection):

    // Setup…

Sum array of unsigned 8-bit integers using the Accelerate framework

Posted by 半城伤御伤魂 on 2019-11-30 15:55:42
Can I use the Accelerate framework to sum an array of unsigned 8-bit integers without converting to an array of floats? My current approach is:

    vDSP_vfltu8(intArray, 1, floatArray, 1, size);
    vDSP_sve(floatArray, 1, &result, size);

But vDSP_vfltu8 is quite slow.

If it is important to you that vDSP_vfltu8() be fast, please file a bug report. If there's any question, file a bug report. Inadequate performance is a bug, and will be treated as such if you report it. Library writers use this sort of feedback to determine how to prioritize their work; your bug report is the difference between a…

IIR coefficients for peaking EQ, how to pass them to vDSP_deq22?

Posted by 牧云@^-^@ on 2019-11-30 14:15:56
I have these 6 coefficients for peaking EQ:

    b0 = 1 + (α ⋅ A)
    b1 = −2 ⋅ ωC
    b2 = 1 − (α ⋅ A)
    a0 = 1 + (α / A)
    a1 = −2 ⋅ ωC
    a2 = 1 − (α / A)

With these intermediate variables:

    ωc = 2 ⋅ π ⋅ fc / fs
    ωS = sin(ωc)
    ωC = cos(ωc)
    A  = sqrt(10^(G/20))
    α  = ωS / (2Q)

The documentation of vDSP_deq22() states that "5 single-precision inputs, filter coefficients" should be passed, but I have 6 coefficients! Also, in what order do I pass them to vDSP_deq22()?

Update (17/05): I recommend everyone to use my DSP class I released on github: https://github.com/bartolsthoorn/NVDSP It'll probably save you quite some…

Spectrogram from AVAudioPCMBuffer using Accelerate framework in Swift

Posted by ε祈祈猫儿з on 2019-11-30 05:30:44
I'm trying to generate a spectrogram from an AVAudioPCMBuffer in Swift. I install a tap on an AVAudioMixerNode and receive a callback with the audio buffer. I'd like to convert the signal in the buffer to a [Float:Float] dictionary where the key represents the frequency and the value represents the magnitude of the audio on the corresponding frequency. I tried using Apple's Accelerate framework but the results I get seem dubious. I'm sure it's just in the way I'm converting the signal. I looked at this blog post amongst other things for a reference. Here is what I have:

    self.audioEngine…

How to implement fast image filters on iOS platform

Posted by 社会主义新天地 on 2019-11-30 05:10:43
I am working on an iOS application where the user can apply a certain set of photo filters. Each filter is basically a set of Photoshop actions with specific parameters. These actions are:

    Levels adjustment
    Brightness / Contrast
    Hue / Saturation
    Single and multiple overlay

I've reproduced all these actions in my code using arithmetic expressions, looping through all the pixels in the image. But when I run my app on an iPhone 4, each filter takes about 3-4 seconds to apply, which is quite a long time for the user to wait. The image size is 640 x 640 px, which is @2x of my view size because it's displayed on Retina…