core-video

Render dynamic text onto CVPixelBufferRef while recording video

北战南征 submitted on 2019-12-21 01:12:30
Question: I'm recording video and audio using AVCaptureVideoDataOutput and AVCaptureAudioDataOutput, and in the captureOutput:didOutputSampleBuffer:fromConnection: delegate method I want to draw text onto each individual sample buffer I receive from the video connection. The text changes roughly every frame (it's a stopwatch label), and I want it recorded on top of the captured video data. Here's what I've been able to come up with so far: //1. CVPixelBufferRef pixelBuffer =
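One common approach is to lock the pixel buffer, wrap its memory in a CGBitmapContext and draw the label with Core Text before the frame reaches the asset writer. Below is a minimal sketch, assuming the video data output is configured for kCVPixelFormatType_32BGRA; the helper name drawTextOnPixelBuffer is my own and error handling is omitted.

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>
#import <CoreText/CoreText.h>

static void drawTextOnPixelBuffer(CVPixelBufferRef pixelBuffer, NSString *text)
{
    CVPixelBufferLockBaseAddress(pixelBuffer, 0);

    // Wrap the buffer's memory in a bitmap context so Core Graphics can draw into it.
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(pixelBuffer),
                                                 CVPixelBufferGetWidth(pixelBuffer),
                                                 CVPixelBufferGetHeight(pixelBuffer),
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(pixelBuffer),
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGContextSetTextMatrix(context, CGAffineTransformIdentity);

    // Lay out the stopwatch label with Core Text and draw it near the bottom-left
    // corner (Core Graphics' origin is the bottom-left of the frame).
    CTFontRef font = CTFontCreateWithName(CFSTR("Helvetica-Bold"), 36.0, NULL);
    NSDictionary *attributes = @{ (__bridge id)kCTFontAttributeName : (__bridge id)font,
                                  (__bridge id)kCTForegroundColorAttributeName : (__bridge id)[UIColor whiteColor].CGColor };
    NSAttributedString *string = [[NSAttributedString alloc] initWithString:text attributes:attributes];
    CTLineRef line = CTLineCreateWithAttributedString((__bridge CFAttributedStringRef)string);
    CGContextSetTextPosition(context, 20.0, 20.0);
    CTLineDraw(line, context);

    CFRelease(line);
    CFRelease(font);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}

In the delegate you would call drawTextOnPixelBuffer(CMSampleBufferGetImageBuffer(sampleBuffer), stopwatchText) before appending the frame to the writer input, so the drawn label ends up in the recording.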

Knowing resolution of AVCaptureSession's session presets

泪湿孤枕 submitted on 2019-12-20 09:56:30
Question: I'm accessing the camera in iOS and using session presets like so: captureSession.sessionPreset = AVCaptureSessionPresetMedium; Pretty standard stuff. However, I'd like to know ahead of time the resolution of the video I'll be getting from this preset (especially because it differs between devices). I know there are tables online where you can look this up (such as here: http://cmgresearch.blogspot.com/2010/10/augmented-reality-on-iphone-with-ios40.html ). But I'd like to be able to
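Rather than hard-coding a table, the dimensions can be read at runtime once the session is configured, from the video input port's format description. A minimal sketch, assuming an AVCaptureDeviceInput that has already been added to the session:

#import <AVFoundation/AVFoundation.h>
#import <CoreMedia/CoreMedia.h>
#import <CoreGraphics/CoreGraphics.h>

static CGSize videoResolutionForInput(AVCaptureDeviceInput *videoInput)
{
    for (AVCaptureInputPort *port in videoInput.ports) {
        if ([port.mediaType isEqualToString:AVMediaTypeVideo] && port.formatDescription != NULL) {
            CMVideoDimensions dims = CMVideoFormatDescriptionGetDimensions(port.formatDescription);
            return CGSizeMake(dims.width, dims.height);
        }
    }
    return CGSizeZero; // unknown, e.g. the input is not yet part of a configured session
}

Alternatively, the first CMSampleBufferRef delivered to a data output carries the same information via CVPixelBufferGetWidth() / CVPixelBufferGetHeight().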

How to correctly orient an image generated from AVCaptureVideoDataOutputSampleBufferDelegate

≯℡__Kan透↙ submitted on 2019-12-12 14:51:09
Question: I'm using AVCaptureVideoDataOutputSampleBufferDelegate and I receive a CMSampleBufferRef which I convert to a UIImage - but the resulting image isn't correctly oriented. // Get a CMSampleBuffer's Core Video image buffer for the media data CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer); // Lock the base address of the pixel buffer CVPixelBufferLockBaseAddress(imageBuffer, 0); // Get the number of bytes per row for the pixel buffer void *baseAddress =
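The rotation comes from building the UIImage without an orientation, so the fix is to pass one explicitly. A minimal sketch, assuming the output is configured for kCVPixelFormatType_32BGRA; UIImageOrientationRight is what the back camera in portrait usually needs (the front camera typically wants UIImageOrientationLeftMirrored), and the helper name is my own.

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

static UIImage *imageFromSampleBuffer(CMSampleBufferRef sampleBuffer, UIImageOrientation orientation)
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(imageBuffer),
                                                 CVPixelBufferGetWidth(imageBuffer),
                                                 CVPixelBufferGetHeight(imageBuffer),
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(imageBuffer),
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef quartzImage = CGBitmapContextCreateImage(context);

    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);

    // The orientation argument is what straightens the result.
    UIImage *image = [UIImage imageWithCGImage:quartzImage scale:1.0 orientation:orientation];
    CGImageRelease(quartzImage);
    return image;
}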

How to choose a pixel format type (kCVPixelBufferPixelFormatTypeKey) for use with AVAssetReader?

社会主义新天地 submitted on 2019-12-12 10:39:51
Question: We are using AVAssetReader and AVAssetWriter somewhat in the style noted in Video Encoding using AVAssetWriter - CRASHES, basically to read a video we got from the photo gallery / asset library and then write it out at a different bit rate to reduce its size (for eventual network upload). The trick to getting this to work for us was to specify a kCVPixelBufferPixelFormatTypeKey key and value in the outputSettings of the AVAssetReaderTrackOutput, something like this: NSDictionary *outputSettings
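A minimal sketch of that reader/writer plumbing, assuming a single video track; the bit rate and kCVPixelFormatType_32BGRA are placeholder choices (one of the 420 bi-planar formats also works and is cheaper for the encoder):

#import <AVFoundation/AVFoundation.h>

AVAsset *asset = [AVAsset assetWithURL:sourceURL]; // sourceURL: the gallery / asset-library item
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

NSDictionary *outputSettings = @{
    (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)
};
AVAssetReaderTrackOutput *trackOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack
                                               outputSettings:outputSettings];

// The writer side then re-encodes the decoded frames at the lower bit rate:
NSDictionary *writerSettings = @{
    AVVideoCodecKey  : AVVideoCodecH264,
    AVVideoWidthKey  : @(videoTrack.naturalSize.width),
    AVVideoHeightKey : @(videoTrack.naturalSize.height),
    AVVideoCompressionPropertiesKey : @{ AVVideoAverageBitRateKey : @(1000000) }
};
AVAssetWriterInput *writerInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:writerSettings];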

How to List all OpenGL ES Compatible PixelBuffer Formats

天涯浪子 submitted on 2019-12-11 16:16:52
Question: Is there a way to list all CVPixelBuffer formats for CVPixelBufferCreate() that will not generate error -6683: kCVReturnPixelBufferNotOpenGLCompatible when used with CVOpenGLESTextureCacheCreateTextureFromImage()? This lists all the supported CVPixelBuffer formats for CVPixelBufferCreate(), but does not guarantee that CVOpenGLESTextureCacheCreateTextureFromImage() will not return the error above. I guess my desired list should be a subset of this one. Answer 1: Based on the answer from Adi
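One way to build such a list is to probe: create a small buffer for every format CVPixelBufferCreate() claims to support and check whether the texture cache accepts it. A minimal sketch, assuming an existing CVOpenGLESTextureCacheRef named textureCache created from the current EAGLContext; the GL_RGBA / GL_BGRA_EXT upload combination shown here is the one used for BGRA buffers, so planar formats may need different GL parameters to pass the probe.

#import <CoreVideo/CoreVideo.h>
#import <CoreVideo/CVOpenGLESTextureCache.h>
#import <OpenGLES/ES2/gl.h>
#import <OpenGLES/ES2/glext.h>

CFArrayRef allFormats = CVPixelFormatDescriptionArrayCreateWithAllPixelFormatTypes(kCFAllocatorDefault);
NSMutableArray *glCompatibleFormats = [NSMutableArray array];

for (NSNumber *formatNumber in (__bridge NSArray *)allFormats) {
    OSType format = (OSType)formatNumber.unsignedIntValue;

    // IOSurface backing is what makes a buffer eligible for the texture cache.
    NSDictionary *attributes = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
    CVPixelBufferRef pixelBuffer = NULL;
    if (CVPixelBufferCreate(kCFAllocatorDefault, 64, 64, format,
                            (__bridge CFDictionaryRef)attributes, &pixelBuffer) != kCVReturnSuccess) {
        continue;
    }

    CVOpenGLESTextureRef texture = NULL;
    CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
                                                                pixelBuffer, NULL, GL_TEXTURE_2D,
                                                                GL_RGBA, 64, 64,
                                                                GL_BGRA_EXT, GL_UNSIGNED_BYTE,
                                                                0, &texture);
    if (err == kCVReturnSuccess && texture != NULL) {
        [glCompatibleFormats addObject:formatNumber];
        CFRelease(texture);
    }
    CVPixelBufferRelease(pixelBuffer);
}
CFRelease(allFormats);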

How to fix a CVPixelBuffer leak

时光毁灭记忆、已成空白 submitted on 2019-12-11 08:29:08
Question: Please tell me where the leak is in this code... // here I build a video from images in the Documents directory - (void) testCompressionSession:(NSString *)path { if ([[NSFileManager defaultManager] fileExistsAtPath:path]) { [[NSFileManager defaultManager] removeItemAtPath:path error:nil]; } NSArray *array = [dictInfo objectForKey:@"sortedKeys"]; NSString *betaCompressionDirectory = path; NSError *error = nil; unlink([betaCompressionDirectory UTF8String]); NSLog(@"array = %@",array); NSData *imgDataTmp =
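Without the full listing it is hard to point at the exact line, but the most common leak in this kind of image-to-video loop is a CVPixelBufferRef that is created (or pulled from the adaptor's pool) for every frame and never released after appending. A minimal sketch of that pattern done correctly; adaptor, writer and presentationTime stand in for the usual AVAssetWriterInputPixelBufferAdaptor / AVAssetWriter setup in the surrounding method.

CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                     adaptor.pixelBufferPool,
                                                     &pixelBuffer);
if (status == kCVReturnSuccess && pixelBuffer != NULL) {
    // ... draw the current image into pixelBuffer here ...

    if (![adaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime]) {
        NSLog(@"appendPixelBuffer failed: %@", writer.error);
    }
    // The adaptor retains what it needs; without this release every frame leaks.
    CVPixelBufferRelease(pixelBuffer);
}

The same rule applies to any CGImageRef or CGContextRef created per frame: each Create/Copy call needs a matching release inside the loop.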

iPhone: Real-time video color info, focal length, aperture?

一笑奈何 submitted on 2019-12-11 06:58:27
Question: Is there any way, using AVFoundation and CoreVideo, to get color info, aperture and focal length values in real time? Let me explain. Say that while I am shooting video I want to sample the color in a small portion of the screen and output it as RGB values on screen. Also, I would like to show what the current aperture is set to. Does anyone know if it is possible to gather these values? Currently I have only seen that this is possible with still images. Ideas? Answer 1: AVCaptureStillImageOutput
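For the colour half this is doable: a video data output hands you every frame, and with a BGRA pixel format you can read RGB values straight out of the buffer. A minimal sketch, assuming the output's videoSettings request kCVPixelFormatType_32BGRA; call it from captureOutput:didOutputSampleBuffer:fromConnection:.

#import <AVFoundation/AVFoundation.h>

static void logCentrePixelRGB(CMSampleBufferRef sampleBuffer)
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width  = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    uint8_t *base = CVPixelBufferGetBaseAddress(imageBuffer);

    // BGRA layout: byte 0 = blue, 1 = green, 2 = red, 3 = alpha.
    uint8_t *pixel = base + (height / 2) * bytesPerRow + (width / 2) * 4;
    NSLog(@"centre pixel RGB = (%d, %d, %d)", pixel[2], pixel[1], pixel[0]);

    CVPixelBufferUnlockBaseAddress(imageBuffer, kCVPixelBufferLock_ReadOnly);
}

The aperture, by contrast, is fixed on iPhone hardware; on later iOS versions it can be read once from AVCaptureDevice's lensAperture property rather than per frame.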

Crop CMSampleBuffer and process it without converting to CGImage

痞子三分冷 submitted on 2019-12-11 05:13:27
Question: I have been following Apple's live-stream camera editor code to get the hang of live video editing. So far so good, but I need a way to crop a sample buffer into 4 pieces and then process all four with different CIFilters. For instance, if the size of the image is 1000x1000, I want to crop the CMSampleBuffer into 4 images of size 250x250, apply a unique filter to each, convert it back to a CMSampleBuffer and display it on a Metal view. Here is the code up to the point where I could crop the
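Staying inside Core Image avoids the CGImage round trip: wrap the pixel buffer in a CIImage, crop it into quadrants, and run each crop through its own filter. A minimal sketch; the four filter names are arbitrary stand-ins, and rendering the results into the Metal view (e.g. with a CIContext backed by the same MTLDevice) is left out.

#import <AVFoundation/AVFoundation.h>
#import <CoreImage/CoreImage.h>
#import <UIKit/UIKit.h>

static NSArray<CIImage *> *filteredQuadrants(CMSampleBufferRef sampleBuffer)
{
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CIImage *fullImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];

    CGFloat w = CVPixelBufferGetWidth(pixelBuffer)  / 2.0;
    CGFloat h = CVPixelBufferGetHeight(pixelBuffer) / 2.0;
    NSArray<NSValue *> *quadrants = @[ [NSValue valueWithCGRect:CGRectMake(0, 0, w, h)],
                                       [NSValue valueWithCGRect:CGRectMake(w, 0, w, h)],
                                       [NSValue valueWithCGRect:CGRectMake(0, h, w, h)],
                                       [NSValue valueWithCGRect:CGRectMake(w, h, w, h)] ];
    NSArray<NSString *> *filterNames = @[ @"CIPhotoEffectMono", @"CIPhotoEffectChrome",
                                          @"CIPhotoEffectNoir", @"CISepiaTone" ];

    NSMutableArray<CIImage *> *results = [NSMutableArray array];
    for (NSUInteger i = 0; i < quadrants.count; i++) {
        CIImage *crop = [fullImage imageByCroppingToRect:[quadrants[i] CGRectValue]];
        // Each quadrant gets its own filter, per the question.
        [results addObject:[crop imageByApplyingFilter:filterNames[i] withInputParameters:@{}]];
    }
    return results;
}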

Cloning CVPixelBuffer - how to?

喜夏-厌秋 submitted on 2019-12-10 22:16:46
Question: Say I have some pixel buffer associated with a variable: CVPixelBufferRef a; I want to clone that buffer with all its contents and assign the clone to another variable. What is the most correct and fastest way to do that? Answer 1: So far I have not found a better solution than memcpy(). Hopefully it copies all the needed data. Source: https://stackoverflow.com/questions/10677107/cloning-cvpixelbuffer-how-to
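If memcpy() is the route, the safest form is a row-by-row copy between locked buffers, because the two buffers can have different row padding. A minimal sketch for a single-plane format such as kCVPixelFormatType_32BGRA; planar formats need the same loop per plane, and any required attributes (e.g. IOSurface backing) should be passed to CVPixelBufferCreate.

#import <Foundation/Foundation.h>
#import <CoreVideo/CoreVideo.h>
#include <string.h>

static CVPixelBufferRef clonePixelBuffer(CVPixelBufferRef source)
{
    CVPixelBufferRef copy = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        CVPixelBufferGetWidth(source),
                        CVPixelBufferGetHeight(source),
                        CVPixelBufferGetPixelFormatType(source),
                        NULL,
                        &copy);

    CVPixelBufferLockBaseAddress(source, kCVPixelBufferLock_ReadOnly);
    CVPixelBufferLockBaseAddress(copy, 0);

    size_t height         = CVPixelBufferGetHeight(source);
    size_t srcBytesPerRow = CVPixelBufferGetBytesPerRow(source);
    size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(copy);
    size_t rowLength      = MIN(srcBytesPerRow, dstBytesPerRow);
    uint8_t *src = CVPixelBufferGetBaseAddress(source);
    uint8_t *dst = CVPixelBufferGetBaseAddress(copy);

    // Copy row by row so differing row padding between the two buffers is handled.
    for (size_t row = 0; row < height; row++) {
        memcpy(dst + row * dstBytesPerRow, src + row * srcBytesPerRow, rowLength);
    }

    CVPixelBufferUnlockBaseAddress(copy, 0);
    CVPixelBufferUnlockBaseAddress(source, kCVPixelBufferLock_ReadOnly);
    return copy; // caller owns the clone and must CVPixelBufferRelease() it
}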

Filter Live camera feed

冷暖自知 submitted on 2019-12-10 17:41:30
Question: So I've been using UIImagePickerController to access the camera for photo and video capture; then I wanted to apply filters to those 2 sources. I succeeded in filtering the taken photos, but I'm having trouble finding a solution for the rest. All I need is access to the raw image data, i.e. the live image feed that the camera is showing, so I can apply the filter and then show the filtered frames instead. Any help or advice will be appreciated. Answer 1: UIImagePickerController doesn't give you low level access
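The usual alternative is to drop UIImagePickerController for the live preview and run an AVCaptureSession with a video data output, which delivers every frame as a CMSampleBufferRef you can filter (with Core Image, GPUImage, etc.) before display. A minimal sketch of the session setup, meant to live in a view controller; error handling and the display view are omitted.

#import <AVFoundation/AVFoundation.h>

AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetHigh;

AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:nil];
[session addInput:input];

AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
output.videoSettings = @{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) };
[output setSampleBufferDelegate:self queue:dispatch_queue_create("camera.frames", DISPATCH_QUEUE_SERIAL)];
[session addOutput:output];

[session startRunning];

// Then, in -captureOutput:didOutputSampleBuffer:fromConnection:, filter each frame, e.g.:
// CIImage *frame    = [CIImage imageWithCVPixelBuffer:CMSampleBufferGetImageBuffer(sampleBuffer)];
// CIImage *filtered = [frame imageByApplyingFilter:@"CISepiaTone" withInputParameters:@{}];
// ...and render `filtered` into a GLKView- or MTKView-backed CIContext for display.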