gpuimage

Text overlay not showing in GPUImage iOS

Submitted by 六月ゝ 毕业季﹏ on 2019-12-10 20:33:11
Question: I am trying to overlay some text on a video and have not had any success so far.

videoCamera = [[GPUImageStillCamera alloc] initWithSessionPreset:AVCaptureSessionPreset640x480 cameraPosition:AVCaptureDevicePositionBack];
videoCamera.outputImageOrientation = UIInterfaceOrientationPortrait;
cropFilter = [[GPUImageCropFilter alloc] initWithCropRegion:CGRectMake(0, 0, 1, 1)];
mCurrentImage = [UIImage imageNamed:@"tex16"];
sourcePicture = [[GPUImagePicture alloc] initWithImage:mCurrentImage …
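A common way to composite text in GPUImage 1.x is to render a UILabel through GPUImageUIElement and blend it over the video with GPUImageAlphaBlendFilter. The sketch below is not the asker's code: videoCamera comes from the question, while textLabel, baseFilter and filterView are illustrative names for the label, a pass-through filter, and an on-screen GPUImageView.

```objc
// Assumed context: videoCamera from the question, plus a GPUImageView called
// filterView on screen; textLabel and baseFilter are illustrative names.
UILabel *textLabel = [[UILabel alloc] initWithFrame:CGRectMake(0.0, 0.0, 320.0, 60.0)];
textLabel.text = @"Hello, GPUImage";
textLabel.textColor = [UIColor whiteColor];
textLabel.backgroundColor = [UIColor clearColor];

GPUImageUIElement *textElement = [[GPUImageUIElement alloc] initWithView:textLabel];
GPUImageFilter *baseFilter = [[GPUImageFilter alloc] init];        // pass-through for the camera feed
GPUImageAlphaBlendFilter *blendFilter = [[GPUImageAlphaBlendFilter alloc] init];
blendFilter.mix = 1.0;

[videoCamera addTarget:baseFilter];
[baseFilter addTarget:blendFilter];        // first blend input: video
[textElement addTarget:blendFilter];       // second blend input: rendered label
[blendFilter addTarget:filterView];

// Re-render the UI element for every processed frame so the overlay is composited.
__weak GPUImageUIElement *weakElement = textElement;
[baseFilter setFrameProcessingCompletionBlock:^(GPUImageOutput *output, CMTime time) {
    [weakElement update];
}];

[videoCamera startCameraCapture];
```

If the label never shows up, the usual culprit is the missing update call in the frame-processing completion block: without it the UI element is rendered only once, before any video frame arrives.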

GPUImage terminates due to [AVAssetWriter startWriting]: Cannot call method when status is 3

Submitted by 柔情痞子 on 2019-12-10 19:19:28
Question: I am having an issue running GPUImage. I have modified the SimpleVideoFileFilter program (replaced the filter with a chroma key filter) and am using my own video. My program is terminating with the following error:

[AVAssetWriter startWriting] Cannot call method when status is 3

I have gone through the forums but am not sure why the movie writer is closing while something is still writing to it. I am using an iPhone 4 running iOS 7.0. Any clues are greatly appreciated. Thanks much!

Answer 1: Check whether your …
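Status 3 is AVAssetWriterStatusFailed. One frequent trigger with GPUImageMovieWriter is pointing the writer at a URL where a file already exists, which puts the underlying AVAssetWriter into the failed state before startWriting ever succeeds. A hedged sketch of clearing the output path first; pathToMovie and the output size are illustrative, not the asker's actual values.

```objc
// pathToMovie is an illustrative output location, not the asker's actual path.
NSString *pathToMovie = [NSTemporaryDirectory() stringByAppendingPathComponent:@"Movie.m4v"];

// AVAssetWriter fails (status 3) if a file already exists at its output URL,
// so clear the path before creating the GPUImageMovieWriter.
[[NSFileManager defaultManager] removeItemAtPath:pathToMovie error:nil];

NSURL *movieURL = [NSURL fileURLWithPath:pathToMovie];
GPUImageMovieWriter *movieWriter = [[GPUImageMovieWriter alloc] initWithMovieURL:movieURL
                                                                            size:CGSizeMake(480.0, 640.0)];
```

It is also worth checking that no frames are fed to the writer after finishRecordingWithCompletionHandler: has been called; a finished writer cannot be restarted.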

GPUImage2: passing gl texture to shader

Submitted by ♀尐吖头ヾ on 2019-12-10 17:53:38
Question: I'm trying to modify the GPUImage2 Crosshair generator to replace the default crosshairs with a GL texture made from a UIImage; my image is a 128x128 PNG file. I'm using GPUImage2's PictureInput to convert the UIImage to a GL texture, and it returns the PictureInput instance without any issues or warnings. Next I'm modifying CrosshairVertexShader to add support for the new texture:

uniform sampler2D inputImageTexture;
attribute vec4 position;
varying vec2 centerLocation;
varying float pointSpacing;

void main() { …

No visible @interface for 'GPUImageOutput<GPUImageInput>' declares the selector 'imageFromCurrentlyProcessedOutputWithOrientation:'

Submitted by 人走茶凉 on 2019-12-10 17:11:50
Question: I'm in charge of an old project someone else created in my company some time ago, and now I have to make some changes using Xcode 5.1. The thing is, even though it compiled fine one year ago (spring of 2013), it doesn't compile now. The project contains the GPUImage library as a subproject. This is the error Xcode yields:

No visible @interface for 'GPUImageOutput<GPUImageInput>' declares the selector 'imageFromCurrentlyProcessedOutputWithOrientation:'

when I try to compile these 2 lines:

if( self …
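That selector was removed when GPUImage switched to cached framebuffers (the 0.1.x releases in early 2014); the replacement pattern is to request capture before processing and then read the current framebuffer. A minimal sketch, using illustrative filter/sourcePicture objects rather than the project's actual ones:

```objc
// Old API (pre-framebuffer-cache GPUImage):
//   UIImage *image = [filter imageFromCurrentlyProcessedOutputWithOrientation:UIImageOrientationUp];

// Replacement: request capture, trigger processing, then read the framebuffer.
[filter useNextFrameForImageCapture];
[sourcePicture processImage];
UIImage *image = [filter imageFromCurrentFramebufferWithOrientation:UIImageOrientationUp];
```

The useNextFrameForImageCapture call must come before the frame is processed, otherwise imageFromCurrentFramebuffer will assert or return nil.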

Mysterious app crash with OpenGL

Submitted by ♀尐吖头ヾ on 2019-12-10 13:18:15
Question: I'm using the GPUImage library to develop an iOS camera app. Sometimes, after the app has been suspended for 2-3 minutes, Xcode reports a crash in the app, pointing to the lines in this method:

- (void)presentBufferForDisplay;
{
    [self.context presentRenderbuffer:GL_RENDERBUFFER];
}

What might possibly be the reason for this crash? I've got a really long camera setup, and the code itself is in the GPUImageContext class. What might I be doing wrong here?

Answer 1: You can't access OpenGL ES at all …
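iOS terminates any app that issues OpenGL ES commands while in the background, so all GPUImage rendering has to be stopped before the app resigns active. A hedged sketch of one way to do that, assuming videoCamera is the GPUImageVideoCamera (or GPUImageStillCamera) driving the pipeline and self is the object that owns it:

```objc
// In the view controller (or camera manager) that owns the GPUImage pipeline.
- (void)registerForLifecycleNotifications
{
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(suspendGPUWork)
                                                 name:UIApplicationWillResignActiveNotification
                                               object:nil];
    [[NSNotificationCenter defaultCenter] addObserver:self
                                             selector:@selector(resumeGPUWork)
                                                 name:UIApplicationDidBecomeActiveNotification
                                               object:nil];
}

- (void)suspendGPUWork
{
    [videoCamera stopCameraCapture];
    // Block until every queued OpenGL ES command has completed; any GL call made
    // after this point, while backgrounded, is what gets the app killed.
    runSynchronouslyOnVideoProcessingQueue(^{
        glFinish();
    });
}

- (void)resumeGPUWork
{
    [videoCamera startCameraCapture];
}
```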

iOS AVFoundation overlay video with video

Submitted by 人走茶凉 on 2019-12-10 12:24:47
Question: I have a main video and I want to overlay it with another animated video with an alpha channel, like the "Action Movie FX" application. How can I do this with AVFoundation, or can you suggest a third-party framework? Thanks

Answer 1: GPUImage by Brad Larson is a great third-party framework for this kind of thing. It has many different blending algorithms you can choose from. This thread has code similar to what you want to do.

Answer 2: I would suggest that you take a look at my 3rd party framework to do this sort …
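For the GPUImage route from Answer 1, the usual pattern is to decode both clips with GPUImageMovie and feed them into one of the blend filters; a green-screen style overlay would use GPUImageChromaKeyBlendFilter. A sketch under these assumptions: mainURL and overlayURL are illustrative file URLs for the two clips, and filterView is a GPUImageView.

```objc
// mainURL / overlayURL are illustrative file URLs; filterView is a GPUImageView.
GPUImageMovie *overlayMovie = [[GPUImageMovie alloc] initWithURL:overlayURL]; // green-screen effect clip
GPUImageMovie *mainMovie = [[GPUImageMovie alloc] initWithURL:mainURL];       // background clip
overlayMovie.playAtActualSpeed = YES;
mainMovie.playAtActualSpeed = YES;

GPUImageChromaKeyBlendFilter *chromaKey = [[GPUImageChromaKeyBlendFilter alloc] init];
[chromaKey setColorToReplaceRed:0.0 green:1.0 blue:0.0];   // key out pure green
chromaKey.thresholdSensitivity = 0.4;
chromaKey.smoothing = 0.1;

// Input order matters: the key color is detected in the first input and replaced
// by the second. If the result looks inverted, swap the two addTarget: calls.
[overlayMovie addTarget:chromaKey];
[mainMovie addTarget:chromaKey];
[chromaKey addTarget:filterView];

[overlayMovie startProcessing];
[mainMovie startProcessing];
```

To export instead of preview, target a GPUImageMovieWriter rather than the GPUImageView; the wiring of the blend stays the same.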

Switching between filter chains using GPUImage Framework

Submitted by 戏子无情 on 2019-12-10 11:02:26
Question: I would like to switch between two filter chains, as shown in case 1 and case 2 in the code below. When I initially select either case, the output appears correct. However, when I switch to the other filter chain, the output flickers between the current and prior filter chain. What is the recommended way to switch filter chains?

-(void) updateFilter:(NSInteger) style {
    switch (style) {
        case 1:
            [kuwahara setRadius:5];
            [videoCamera addTarget:kuwahara];
            [kuwahara addTarget:grayscale];
            [grayscale …
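The flicker usually means the previous chain is still attached, so both chains keep receiving frames and race to render into the same view. A hedged sketch that detaches everything before wiring the selected chain, reusing the filter names from the question; filterView and the case 2 body are illustrative, not taken from the original code.

```objc
- (void)updateFilter:(NSInteger)style
{
    // Detach everything from the previous chain first so only one path renders.
    [videoCamera removeAllTargets];
    [kuwahara removeAllTargets];
    [grayscale removeAllTargets];

    switch (style) {
        case 1:
            [kuwahara setRadius:5];
            [videoCamera addTarget:kuwahara];
            [kuwahara addTarget:grayscale];
            [grayscale addTarget:filterView];
            break;
        case 2:
            // Illustrative second chain; substitute the actual case 2 filters.
            [videoCamera addTarget:grayscale];
            [grayscale addTarget:filterView];
            break;
        default:
            [videoCamera addTarget:filterView];
            break;
    }
}
```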

How to reduce the memory consumption in GPUImageGaussianSelectiveBlurFilter effect?

Submitted by 若如初见. on 2019-12-10 10:34:20
Question: I'm using the GPUImage framework to implement a GPUImageGaussianSelectiveBlurFilter effect. The effect now works, but it consumes too much memory. I know the forceProcessingAtSize: method can reduce some memory consumption, but it only reduces it a little. If the processImage method is not called, memory consumption drops a lot, so is there another method I can use instead? How can I reduce memory consumption?

#import …
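Two levers usually help in GPUImage 1.x: cap the working resolution with forceProcessingAtSize: and return the pooled framebuffers once the still has been captured. A sketch under those assumptions; sourcePicture, the 640x640 target size, and the capture flow are illustrative rather than the asker's code, and the cache purge assumes a GPUImage version that exposes the shared framebuffer cache.

```objc
GPUImageGaussianSelectiveBlurFilter *selectiveBlur = [[GPUImageGaussianSelectiveBlurFilter alloc] init];

// Cap the working resolution: framebuffers are allocated at this size instead of
// the full image size, which is where most of the memory goes.
[selectiveBlur forceProcessingAtSize:CGSizeMake(640.0, 640.0)];

[sourcePicture addTarget:selectiveBlur];
[selectiveBlur useNextFrameForImageCapture];
[sourcePicture processImage];
UIImage *result = [selectiveBlur imageFromCurrentFramebuffer];

// Once the still has been extracted, detach the chain and (on GPUImage versions
// that expose the shared cache) hand the unused pooled framebuffers back.
[sourcePicture removeAllTargets];
[[GPUImageContext sharedFramebufferCache] purgeAllUnassignedFramebuffers];
```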

GPUImage2: HarrisCornerDetector not calling callback

Submitted by 感情迁移 on 2019-12-10 00:36:19
Question: I have an odd problem implementing corner detection using GPUImage. I'm trying to use the filter template from Brad's download as a starting point, but although I can generate a composite view of the image and the corner points (as single white pixels), when I try to add in the crosshair generator, the callback function is never called. In my simplified example, I have an output view configured as a RenderView and videoCamera defined as Camera?

do {
    videoCamera = try Camera(sessionPreset …

Color keying video with GPUImage on a SCNPlane in ARKit

Submitted by 南楼画角 on 2019-12-10 00:02:54
Question: I am trying to play a video, showing transparency, in an ARSCNView. An SCNPlane is used as a projection surface for the video, and I am trying to color key this video with GPUImage. I followed this example here. Unfortunately, I have not found a way to project that video back onto my videoSpriteKitNode, because the filter is rendered into a GPUImageView, while the SKVideoNode takes an AVPlayer. I am not sure if what I am trying to do is possible at all, so if anyone could share their insight I'd …