I'm trying to fix a performance issue when creating GIFs with lots of frames. For example, some GIFs could contain > 1200 frames. With my current code I run out of memory. I'm
You can use AVFoundation to write a video with your images. I've uploaded a complete working test project to this github repository. When you run the test project in the simulator, it will print a file path to the debug console. Open that path in your video player to check the output.
I'll walk through the important parts of the code in this answer.
Start by creating an AVAssetWriter. I'd give it the AVFileTypeAppleM4V file type so that the video works on iOS devices.
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:self.url fileType:AVFileTypeAppleM4V error:&error];
Set up an output settings dictionary with the video parameters:
- (NSDictionary *)videoOutputSettings {
    // `size` is the video's pixel dimensions, supplied by the surrounding code.
    return @{
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoWidthKey: @((size_t)size.width),
        AVVideoHeightKey: @((size_t)size.height),
        AVVideoCompressionPropertiesKey: @{
            AVVideoProfileLevelKey: AVVideoProfileLevelH264Baseline31,
            AVVideoAverageBitRateKey: @(1200000) }};
}
You can adjust the bit rate to control the size of your video file. I've chosen the codec profile pretty conservatively here (it supports some pretty old devices). You might want to choose a later profile.
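For example, if you only need to support relatively recent hardware, a higher-quality configuration might look like this sketch (the profile constant is a real AVFoundation constant, but the specific profile and bit rate here are illustrative choices, not values from the test project):
        AVVideoCompressionPropertiesKey: @{
            AVVideoProfileLevelKey: AVVideoProfileLevelH264High41,
            AVVideoAverageBitRateKey: @(4000000) }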
Then create an AVAssetWriterInput with media type AVMediaTypeVideo and your output settings.
NSDictionary *outputSettings = [self videoOutputSettings];
AVAssetWriterInput *input = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
Set up a pixel buffer attribute dictionary:
- (NSDictionary *)pixelBufferAttributes {
    return @{
        (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
        (__bridge NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES };
}
You don't have to specify the pixel buffer dimensions here; AVFoundation will get them from the input's output settings. The attributes I've used here are (I believe) optimal for drawing with Core Graphics.
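If you ever do want to pin the buffer dimensions yourself, CoreVideo has keys for that. This is purely an optional sketch, with videoSize standing in for whatever size variable you use:
// Optional: explicitly sized pixel buffers. Normally unnecessary, because the
// adaptor derives the size from the input's output settings.
NSDictionary *explicitAttributes = @{
    (__bridge NSString *)kCVPixelBufferPixelFormatTypeKey: @(kCVPixelFormatType_32BGRA),
    (__bridge NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES,
    (__bridge NSString *)kCVPixelBufferWidthKey: @((size_t)videoSize.width),
    (__bridge NSString *)kCVPixelBufferHeightKey: @((size_t)videoSize.height) };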
Next, create an AVAssetWriterInputPixelBufferAdaptor for your input using the pixel buffer settings.
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
    assetWriterInputPixelBufferAdaptorWithAssetWriterInput:input
    sourcePixelBufferAttributes:[self pixelBufferAttributes]];
Add the input to the writer and tell the writer to get going:
[writer addInput:input];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];
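Two small notes on this step. startWriting returns a BOOL, so you can catch configuration problems right away instead of discovering them later; and the adaptorQueue used in the next step is just a serial dispatch queue that you create yourself. A sketch, reusing the errorBlock error handler that appears in the rest of the code:
// The serial queue the input will call back on (used in the next step).
dispatch_queue_t adaptorQueue = dispatch_queue_create("video writer queue", DISPATCH_QUEUE_SERIAL);

// Optional: startWriting returns NO if the writer is misconfigured.
if (![writer startWriting]) {
    errorBlock(writer.error);
    return;
}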
Next we'll tell the input how to get video frames. Yes, we can do this after we've told the writer to start writing:
[input requestMediaDataWhenReadyOnQueue:adaptorQueue usingBlock:^{
This block is going to do everything else we need to do with AVFoundation. The input calls it each time it's ready to accept more data. It might be able to accept multiple frames in a single call, so we'll loop as long as it's ready:
while (input.readyForMoreMediaData && self.frameGenerator.hasNextFrame) {
I'm using self.frameGenerator to actually draw the frames. I'll show that code later. The frameGenerator decides when the video is over (by returning NO from hasNextFrame). It also knows when each frame should appear on screen:
CMTime time = self.frameGenerator.nextFramePresentationTime;
To actually draw the frame, we need to get a pixel buffer from the adaptor:
CVPixelBufferRef buffer = 0;
CVPixelBufferPoolRef pool = adaptor.pixelBufferPool;
CVReturn code = CVPixelBufferPoolCreatePixelBuffer(0, pool, &buffer);
if (code != kCVReturnSuccess) {
    errorBlock([self errorWithFormat:@"could not create pixel buffer; CoreVideo error code %ld", (long)code]);
    [input markAsFinished];
    [writer cancelWriting];
    return;
} else {
If we couldn't get a pixel buffer, we signal an error and abort everything. If we did get a pixel buffer, we need to wrap a bitmap context around it and ask frameGenerator to draw the next frame in the context:
    CVPixelBufferLockBaseAddress(buffer, 0); {
        CGColorSpaceRef rgb = CGColorSpaceCreateDeviceRGB(); {
            CGContextRef gc = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(buffer), CVPixelBufferGetWidth(buffer), CVPixelBufferGetHeight(buffer), 8, CVPixelBufferGetBytesPerRow(buffer), rgb, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst); {
                [self.frameGenerator drawNextFrameInContext:gc];
            } CGContextRelease(gc);
        } CGColorSpaceRelease(rgb);
Now we can append the buffer to the video. The adaptor does that:
        [adaptor appendPixelBuffer:buffer withPresentationTime:time];
    } CVPixelBufferUnlockBaseAddress(buffer, 0);
} CVPixelBufferRelease(buffer);
}
The loop above pushes frames through the adaptor until either the input says it's had enough, or until frameGenerator says it's out of frames. If the frameGenerator has more frames, we just return, and the input will call us again when it's ready for more frames:
if (self.frameGenerator.hasNextFrame) {
    return;
}
If the frameGenerator is out of frames, we shut down the input:
[input markAsFinished];
And then we tell the writer to finish. It'll call a completion handler when it's done:
[writer finishWritingWithCompletionHandler:^{
    if (writer.status == AVAssetWriterStatusFailed) {
        errorBlock(writer.error);
    } else {
        dispatch_async(dispatch_get_main_queue(), doneBlock);
    }
}];
}];
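The errorBlock and doneBlock used above aren't defined in these snippets; they're whatever completion callbacks your exporter takes. Judging from how they're called, they could be shaped roughly like this sketch (the typedef names are made up):
// Hypothetical callback types, matching how errorBlock and doneBlock are used above.
typedef void (^DqdExportErrorBlock)(NSError *error);  // receives writer.error or a custom NSError
typedef void (^DqdExportDoneBlock)(void);             // invoked on the main queue when writing succeeds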
By comparison, generating the frames is pretty straightforward. Here's the protocol the generator adopts:
@protocol DqdFrameGenerator <NSObject>
@required
// You should return the same size every time I ask for it.
@property (nonatomic, readonly) CGSize frameSize;
// I'll ask for frames in a loop. On each pass through the loop, I'll start by asking if you have any more frames:
@property (nonatomic, readonly) BOOL hasNextFrame;
// If you say NO, I'll stop asking and end the video.
// If you say YES, I'll ask for the presentation time of the next frame:
@property (nonatomic, readonly) CMTime nextFramePresentationTime;
// Then I'll ask you to draw the next frame into a bitmap graphics context:
- (void)drawNextFrameInContext:(CGContextRef)gc;
// Then I'll go back to the top of the loop.
@end
For my test, I draw a background image, and slowly cover it up with solid red as the video progresses.
@implementation TestFrameGenerator {
    UIImage *baseImage;
    CMTime nextTime;
}

- (instancetype)init {
    if (self = [super init]) {
        baseImage = [UIImage imageNamed:@"baseImage.jpg"];
        _totalFramesCount = 100;
        nextTime = CMTimeMake(0, 30);
    }
    return self;
}
- (CGSize)frameSize {
    return baseImage.size;
}

- (BOOL)hasNextFrame {
    return self.framesEmittedCount < self.totalFramesCount;
}

- (CMTime)nextFramePresentationTime {
    return nextTime;
}
Core Graphics puts the origin in the lower left corner of the bitmap context, but I'm using a UIImage, and UIKit likes to have the origin in the upper left.
- (void)drawNextFrameInContext:(CGContextRef)gc {
    CGContextTranslateCTM(gc, 0, baseImage.size.height);
    CGContextScaleCTM(gc, 1, -1);
    UIGraphicsPushContext(gc); {
        [baseImage drawAtPoint:CGPointZero];
        [[UIColor redColor] setFill];
        UIRectFill(CGRectMake(0, 0, baseImage.size.width, baseImage.size.height * self.framesEmittedCount / self.totalFramesCount));
    } UIGraphicsPopContext();
    ++_framesEmittedCount;
I call a callback that my test program uses to update a progress indicator:
    if (self.frameGeneratedCallback != nil) {
        dispatch_async(dispatch_get_main_queue(), ^{
            self.frameGeneratedCallback();
        });
    }
Finally, to demonstrate variable frame rate, I emit the first half of the frames at 30 frames per second, and the second half at 15 frames per second:
    if (self.framesEmittedCount < self.totalFramesCount / 2) {
        nextTime.value += 1;
    } else {
        nextTime.value += 2;
    }
}
@end
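To tie it together, the caller side looks roughly like this sketch. It assumes frameGeneratedCallback is a writable block property declared in TestFrameGenerator's interface; the object you hand the generator to is whatever class wraps the AVAssetWriter code above:
TestFrameGenerator *generator = [[TestFrameGenerator alloc] init];
generator.frameGeneratedCallback = ^{
    // Runs on the main queue after each frame is drawn; update your progress UI here.
};
// Then give `generator` to the object running the AVAssetWriter code above,
// as its frameGenerator.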
If you set kCGImagePropertyGIFHasGlobalColorMap to NO when building the GIF, the out-of-memory problem will not happen.
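For reference, that property goes in the GIF dictionary passed to the image destination. A minimal ImageIO sketch, where outputURL, frames, and the 0.1-second frame delay are placeholders for your own values:
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>

NSDictionary *gifProperties = @{
    (__bridge NSString *)kCGImagePropertyGIFDictionary: @{
        (__bridge NSString *)kCGImagePropertyGIFHasGlobalColorMap: @NO } };

NSDictionary *frameProperties = @{
    (__bridge NSString *)kCGImagePropertyGIFDictionary: @{
        (__bridge NSString *)kCGImagePropertyGIFDelayTime: @(0.1) } };

CGImageDestinationRef destination = CGImageDestinationCreateWithURL(
    (__bridge CFURLRef)outputURL, kUTTypeGIF, frames.count, NULL);
CGImageDestinationSetProperties(destination, (__bridge CFDictionaryRef)gifProperties);

for (UIImage *image in frames) {
    CGImageDestinationAddImage(destination, image.CGImage,
                               (__bridge CFDictionaryRef)frameProperties);
}
CGImageDestinationFinalize(destination);
CFRelease(destination);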