accelerate-framework

How to convert an iOS camera image to greyscale using the Accelerate framework?

空扰寡人 submitted on 2019-12-05 06:10:52
It seems like this should be simpler than I'm finding it to be. I have an AVFoundation frame coming back in the standard delegate method: - (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection, where I would like to convert the frame to greyscale using the Accelerate framework. There is a family of conversion functions in the framework, including vImageConvert_RGBA8888toPlanar8(), which looks like it might be what I need; however, I can't find any examples of how to use them! So
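The arithmetic behind the conversion is easy to sketch in plain C, which can serve as a reference while working out the vImage calls. Note that vImageConvert_RGBA8888toPlanar8 splits the interleaved image into four separate planar channels; a greyscale image is instead a weighted sum of R, G, and B (in vImage, vImageMatrixMultiply_ARGB8888ToPlanar8 can perform such a sum). The function name and integer Rec. 601 weights below are illustrative, not from the question.

```c
#include <stdint.h>
#include <stddef.h>

/* Plain-C reference for RGBA8888 -> 8-bit greyscale (Planar8).
   Weights are fixed-point Rec. 601 luma: 0.299 R + 0.587 G + 0.114 B,
   scaled so that 77 + 150 + 29 = 256. */
void rgba8888_to_grey_planar8(const uint8_t *rgba, uint8_t *grey,
                              size_t width, size_t height)
{
    for (size_t i = 0; i < width * height; i++) {
        const uint8_t *p = rgba + 4 * i;   /* R, G, B, A */
        grey[i] = (uint8_t)((77 * p[0] + 150 * p[1] + 29 * p[2]) >> 8);
    }
}
```

For each camera frame you would lock the CVPixelBuffer, run this (or the equivalent vImage call) over the base address, and unlock it again.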

matrix multiplication in swift using Accelerate framework 32 bit vs 64 bit

谁都会走 submitted on 2019-12-05 02:06:40
Question: I am trying to do matrix multiplication in Swift using the Accelerate framework, specifically vDSP_mmulD. This worked perfectly in the iPhone 6, 6 Plus, and iPad Air simulators (all 64-bit architectures) but did not work on any of the 32-bit devices. It seems like vDSP_mmulD is not recognized on the 32-bit architecture, and the program does not build. The error message displayed is "use of unresolved identifier 'vDSP_mmulD'". Has anybody else seen this error? Please let me know your thoughts. I

Objective-C Peak Detection Accelerate Framework

筅森魡賤 submitted on 2019-12-04 18:37:55
I am no math guru, so I want to ask anyone familiar with digital signal processing: what is the best way of detecting peaks in real time? I get about 30 frames/values a second, and I've tried to implement the slope algorithm for detecting peaks. It worked OK in about 80% of cases, but that's really not good enough :(. From what I've searched, one should use the Fast Fourier Transform, but I have no idea how to get started with it; perhaps I'm missing the general idea of how I should use the FFT in this case. In iOS we have this amazing Accelerate framework that should help me do the FFT stuff, but
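For reference, the slope-based approach the question mentions can be written in a few lines: a sample counts as a peak when it exceeds a threshold and is larger than both neighbours. All names and the threshold parameter below are illustrative, not from the post; at 30 values/second, smoothing the input first (e.g. a short moving average) usually matters more than the detector itself.

```c
#include <stddef.h>

/* Minimal slope/threshold peak detector: records the index of every
   local maximum above `threshold`. Returns the number of peaks found. */
size_t find_peaks(const float *x, size_t n, float threshold,
                  size_t *peaks, size_t max_peaks)
{
    size_t count = 0;
    for (size_t i = 1; i + 1 < n && count < max_peaks; i++) {
        if (x[i] > threshold && x[i] > x[i - 1] && x[i] >= x[i + 1])
            peaks[count++] = i;
    }
    return count;
}
```

An FFT, by contrast, answers a different question (which frequencies are present), so it helps with periodicity detection rather than with finding individual peaks in the time-domain stream.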

Symmetric Matrix Inversion in C using CBLAS/LAPACK

余生长醉 submitted on 2019-12-04 16:17:29
I am writing an algorithm in C that requires matrix and vector multiplications. I have a matrix Q (W x W) which is created by multiplying the transpose of a vector J (1 x W) with itself and adding the identity matrix I scaled by a scalar a: Q = (J^T)J + aI. I then have to multiply the inverse of Q with a vector G to get a vector M: M = Q^(-1) G. I am using CBLAS and CLAPACK to develop my algorithm. When matrix Q is populated with random numbers (type float) and inverted using the routines sgetrf_ and sgetri_, the calculated inverse is correct. But when matrix Q is symmetrical, which
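When debugging an sgetrf_/sgetri_ result, a small plain-C reference that builds Q and solves Q*M = G directly is handy for cross-checking. The sketch below (illustrative names, not LAPACK) uses Gauss-Jordan elimination with partial pivoting; note that explicitly inverting Q is unnecessary when only M is needed, since solving the linear system directly is cheaper and more accurate.

```c
#include <math.h>
#include <stddef.h>

#define W_MAX 16  /* arbitrary cap for this small reference */

/* Q = (J^T)J + a*I for a 1 x W vector J. */
void build_q(const float *J, float a, size_t w, float Q[][W_MAX])
{
    for (size_t i = 0; i < w; i++)
        for (size_t j = 0; j < w; j++)
            Q[i][j] = J[i] * J[j] + (i == j ? a : 0.0f);
}

/* Solves Q * M = G by Gauss-Jordan elimination with partial pivoting.
   Q is destroyed. Returns 0 on success, -1 if Q is singular. */
int solve(float Q[][W_MAX], const float *G, float *M, size_t w)
{
    for (size_t i = 0; i < w; i++) M[i] = G[i];
    for (size_t col = 0; col < w; col++) {
        size_t piv = col;                        /* find largest pivot */
        for (size_t r = col + 1; r < w; r++)
            if (fabsf(Q[r][col]) > fabsf(Q[piv][col])) piv = r;
        if (fabsf(Q[piv][col]) < 1e-12f) return -1;
        if (piv != col) {                        /* swap rows */
            for (size_t c = 0; c < w; c++) {
                float t = Q[col][c]; Q[col][c] = Q[piv][c]; Q[piv][c] = t;
            }
            float t = M[col]; M[col] = M[piv]; M[piv] = t;
        }
        float d = Q[col][col];                   /* normalise pivot row */
        for (size_t c = 0; c < w; c++) Q[col][c] /= d;
        M[col] /= d;
        for (size_t r = 0; r < w; r++) {         /* eliminate column */
            if (r == col) continue;
            float f = Q[r][col];
            for (size_t c = 0; c < w; c++) Q[r][c] -= f * Q[col][c];
            M[r] -= f * M[col];
        }
    }
    return 0;
}
```

With LAPACK, the same "solve, don't invert" idea corresponds to sgesv_ (or spotrf_/spotrs_ for a symmetric positive-definite Q, which J^T J + aI with a > 0 always is).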

iOS Accelerate Framework vImage - Performance improvement?

孤人 submitted on 2019-12-04 13:36:27
Question: I've been working with OpenCV and Apple's Accelerate framework, and I find the performance of Accelerate to be slow and Apple's documentation limited. Let's take for example:

void equalizeHistogram(const cv::Mat &planar8Image, cv::Mat &equalizedImage) {
    cv::Size size = planar8Image.size();
    vImage_Buffer planarImageBuffer = {
        .width = static_cast<vImagePixelCount>(size.width),
        .height = static_cast<vImagePixelCount>(size.height),
        .rowBytes = planar8Image.step,
        .data = planar8Image.data
    };
    vImage
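For comparison when benchmarking, the operation being wrapped above is small enough to write out by hand. This is a plain-C sketch of what histogram equalization on an 8-bit greyscale (Planar8) buffer computes: remap each pixel through the normalised cumulative histogram. Names and the exact rounding are illustrative; vImage's implementation may differ in detail.

```c
#include <stdint.h>
#include <stddef.h>

/* Histogram-equalize an 8-bit greyscale buffer of `count` pixels. */
void equalize_planar8(const uint8_t *src, uint8_t *dst, size_t count)
{
    size_t hist[256] = {0};
    for (size_t i = 0; i < count; i++)   /* build histogram */
        hist[src[i]]++;

    uint8_t lut[256];
    size_t cdf = 0;
    for (int v = 0; v < 256; v++) {      /* cumulative distribution -> LUT */
        cdf += hist[v];
        lut[v] = (uint8_t)((cdf * 255) / count);
    }

    for (size_t i = 0; i < count; i++)   /* apply LUT */
        dst[i] = lut[src[i]];
}
```

A scalar loop like this is also a useful baseline: if vImage is not clearly beating it, the overhead is likely in buffer setup and copies rather than the kernel itself.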

ios basic image processing extract red channel

两盒软妹~` submitted on 2019-12-04 06:27:48
Question: Straightforward: I need to extract color components from an image. In Matlab this is usually done by choosing the first matrix for red. In the realm of the Accelerate framework, whose documentation is reference-based, I can't find an easy way of doing this without resorting to a graphics context. Thanks in advance!!

Answer 1:
UIImage *image = // An image
CFDataRef pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage));
const UInt8 *pixelBytes = CFDataGetBytePtr(pixelData);
// 32-bit RGBA
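Continuing the answer's idea: once you have the raw bytes from CFDataGetBytePtr, extracting the red plane from interleaved 32-bit RGBA is a strided copy (the plain-C equivalent of Matlab's first channel matrix). The helper name below is illustrative; in vImage, vImageConvert_RGBA8888toPlanar8 does this for all four channels at once.

```c
#include <stdint.h>
#include <stddef.h>

/* Copy the red component of each RGBA quad into a planar buffer. */
void extract_red_rgba8888(const uint8_t *rgba, uint8_t *red,
                          size_t pixel_count)
{
    for (size_t i = 0; i < pixel_count; i++)
        red[i] = rgba[4 * i];   /* byte 0 of each 4-byte RGBA pixel */
}
```

One caveat worth checking first: CGImage pixel data is not always RGBA; inspect CGImageGetBitmapInfo to confirm the component order (e.g. BGRA is common) before assuming byte 0 is red.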

matrix multiplication in swift using Accelerate framework 32 bit vs 64 bit

纵然是瞬间 submitted on 2019-12-03 17:25:37
I am trying to do matrix multiplication in Swift using the Accelerate framework, specifically vDSP_mmulD. This worked perfectly in the iPhone 6, 6 Plus, and iPad Air simulators (all 64-bit architectures) but did not work on any of the 32-bit devices. It seems like vDSP_mmulD is not recognized on the 32-bit architecture, and the program does not build. The error message displayed is "use of unresolved identifier 'vDSP_mmulD'". Has anybody else seen this error? Please let me know your thoughts. I am using Xcode 6.1. Thanks. Simple solution: use cblas_dgemm instead (also part of Accelerate). It's
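For anyone verifying the suggested switch: cblas_dgemm computes C = alpha*A*B + beta*C, and with alpha = 1, beta = 0, row-major layout, and no transposes it reduces to a plain matrix product. The naive plain-C reference below has the same semantics (illustrative names, not from the answer) and is useful for checking the BLAS call's leading-dimension arguments on a small case.

```c
#include <stddef.h>

/* C = A * B for row-major double matrices:
   A is m x k, B is k x n, C is m x n. Equivalent in result to
   cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
               m, n, k, 1.0, A, k, B, n, 0.0, C, n). */
void mmul_d(const double *A, const double *B, double *C,
            size_t m, size_t n, size_t k)
{
    for (size_t i = 0; i < m; i++)
        for (size_t j = 0; j < n; j++) {
            double sum = 0.0;
            for (size_t p = 0; p < k; p++)
                sum += A[i * k + p] * B[p * n + j];
            C[i * n + j] = sum;
        }
}
```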

Reimplement vDSP_deq22 for Biquad IIR Filter by hand

一曲冷凌霜 submitted on 2019-12-03 16:46:28
I'm porting a filterbank that currently uses the Apple-specific (Accelerate) vDSP function vDSP_deq22 to Android (where Accelerate is not available). The filterbank is a set of bandpass filters that each return the RMS magnitude for their respective band. Currently the code (Objective-C++, adapted from NVDSP) looks like this:

- (float)filterContiguousData:(float *)data numFrames:(UInt32)numFrames channel:(UInt32)channel {
    // Init float to store RMS volume
    float rmsVolume = 0.0f;

    // Provide buffer for processing
    float tInputBuffer[numFrames + 2];
    float tOutputBuffer[numFrames + 2];

    // Copy the
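The scalar reimplementation is direct, because vDSP_deq22 just evaluates the biquad difference equation y[n] = b0·x[n] + b1·x[n-1] + b2·x[n-2] - a1·y[n-1] - a2·y[n-2] with a five-element coefficient vector {b0, b1, b2, a1, a2}. A plain-C sketch matching the buffer layout above (two extra leading elements carrying state across blocks; names are illustrative):

```c
#include <stddef.h>

/* Biquad IIR filter, direct form I, vDSP_deq22-style:
   coeffs = {b0, b1, b2, a1, a2}; `in` and `out` each hold n + 2 floats,
   with elements [0] and [1] carrying the previous block's tail so the
   recursion is seamless across block boundaries. Output is written to
   out[2] .. out[n + 1]. */
void deq22(const float *in, float *out, const float coeffs[5], size_t n)
{
    for (size_t i = 2; i < n + 2; i++) {
        out[i] = coeffs[0] * in[i]
               + coeffs[1] * in[i - 1]
               + coeffs[2] * in[i - 2]
               - coeffs[3] * out[i - 1]
               - coeffs[4] * out[i - 2];
    }
}
```

After each block, copy the last two input and output samples back into positions [0] and [1] of the respective buffers, which is presumably what the truncated "// Copy the" comment in the original code goes on to do.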

Fastest YUV420P to RGBA conversion on iOS using the CPU

本小妞迷上赌 submitted on 2019-12-03 13:00:20
Question: Can anyone recommend a really fast API, ideally NEON-optimized, for doing YUV to RGB conversion at runtime on the iPhone using the CPU? The Accelerate framework's vImage doesn't provide anything suitable, sadly, and using vDSP, converting to floats and back, seems suboptimal and almost as much work as writing NEON myself. I know how to use the GPU for this via a shader, and in fact already do so for displaying my main video plane. Unfortunately, I also need to create and save RGBA textures of
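For reference, this is the per-pixel arithmetic any such routine (NEON or otherwise) vectorises. The sketch below uses the common fixed-point coefficients for video-range BT.601 YUV; full-range or BT.709 sources need different constants, and a real YUV420P loop would additionally share each U/V sample across a 2x2 block of Y samples.

```c
#include <stdint.h>

static uint8_t clamp_u8(int v)
{
    return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v);
}

/* Convert one video-range BT.601 YUV pixel to RGBA using 8.8 fixed point. */
void yuv_to_rgba(uint8_t y, uint8_t u, uint8_t v, uint8_t rgba[4])
{
    int c = ((int)y - 16) * 298;   /* luma, scaled to full range */
    int d = (int)u - 128;          /* blue-difference chroma     */
    int e = (int)v - 128;          /* red-difference chroma      */

    rgba[0] = clamp_u8((c + 409 * e + 128) >> 8);            /* R */
    rgba[1] = clamp_u8((c - 100 * d - 208 * e + 128) >> 8);  /* G */
    rgba[2] = clamp_u8((c + 516 * d + 128) >> 8);            /* B */
    rgba[3] = 255;                                           /* A */
}
```

The integer-only form matters here: it is exactly the shape that maps onto NEON widening multiplies and narrowing shifts, which is why a hand-written NEON version of this loop can be so much faster than a float round-trip through vDSP.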

FFT Pitch Detection for iOS using Accelerate Framework?

放肆的年华 submitted on 2019-12-03 09:55:55
Question: I have been reading up on FFT and pitch detection for a while now, but I'm having trouble piecing it all together. I have worked out that the Accelerate framework is probably the best way to go, and I have read Apple's example code to see how to use it for FFTs. What is the input data for the FFT if I wanted to run pitch detection in real time? Do I just pass in the audio stream from the microphone? How would I do this? Also, after I get the FFT output, how can I