How do I export UIImage array as a movie?

臣服心动 2020-11-22 02:11

I have a serious problem: I have an NSArray with several UIImage objects. What I want to do now is create a movie from those UIImages.

10 Answers
  •  感情败类
    2020-11-22 02:18

    Take a look at AVAssetWriter and the rest of the AVFoundation framework. The writer has an input of type AVAssetWriterInput, which in turn has a method called appendSampleBuffer: that lets you add individual frames to a video stream. Essentially you’ll have to:

    1) Wire the writer:

    NSError *error = nil;
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
        [NSURL fileURLWithPath:somePath] fileType:AVFileTypeQuickTimeMovie
        error:&error];
    NSParameterAssert(videoWriter);
    
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
        AVVideoCodecH264, AVVideoCodecKey,
        [NSNumber numberWithInt:640], AVVideoWidthKey,
        [NSNumber numberWithInt:480], AVVideoHeightKey,
        nil];
    AVAssetWriterInput* writerInput = [[AVAssetWriterInput
        assetWriterInputWithMediaType:AVMediaTypeVideo
        outputSettings:videoSettings] retain]; //retain should be removed if ARC
    
    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);
    [videoWriter addInput:writerInput];
    

    2) Start a session:

    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:…]; // use kCMTimeZero if unsure
    

    3) Write some samples:

    // Or you can use AVAssetWriterInputPixelBufferAdaptor (sketched right
    // after this snippet). That lets you feed the writer input data from a
    // CVPixelBuffer, which is quite easy to create from a CGImage.
    [writerInput appendSampleBuffer:sampleBuffer];
    
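    If you go the adaptor route, a minimal sketch could look like the following. It assumes the writerInput from step 1; pixelBuffer would come from the CGImage conversion shown further down, and frameIndex and the 30 fps timescale are illustrative assumptions only.

    // Sketch only: create the adaptor before calling startWriting.
    AVAssetWriterInputPixelBufferAdaptor *adaptor =
        [AVAssetWriterInputPixelBufferAdaptor
            assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
            sourcePixelBufferAttributes:nil];

    // Later, once per frame: frameIndex is the index of the current UIImage,
    // and 30 is an assumed frame rate.
    CMTime frameTime = CMTimeMake(frameIndex, 30);
    if (writerInput.readyForMoreMediaData) {
        [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:frameTime];
    }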

    4) Finish the session:

    [writerInput markAsFinished];
    [videoWriter endSessionAtSourceTime:…]; // optional; you can call finishWriting without specifying an end time
    [videoWriter finishWriting]; // deprecated in iOS 6
    /*
    [videoWriter finishWritingWithCompletionHandler:...]; // iOS 6.0+
    */
    
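    On iOS 6 and later the block-based call is the way to go. A minimal sketch; what you do inside the block is an assumption (here, just logging the outcome, with somePath coming from step 1):

    [videoWriter finishWritingWithCompletionHandler:^{
        if (videoWriter.status == AVAssetWriterStatusCompleted) {
            NSLog(@"Movie written to %@", somePath); // somePath from step 1
        } else {
            NSLog(@"Writing failed: %@", videoWriter.error);
        }
    }];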

    You’ll still have to fill in a lot of blanks, but I think the only really hard remaining part is getting a pixel buffer from a CGImage:

    - (CVPixelBufferRef) newPixelBufferFromCGImage: (CGImageRef) image
    {
        // Create an empty pixel buffer that a CG bitmap context can render into.
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
            [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
            nil];
        CVPixelBufferRef pxbuffer = NULL;
        CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
            frameSize.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options,
            &pxbuffer);
        NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);

        // Lock the buffer and wrap its backing memory in a bitmap context.
        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
        NSParameterAssert(pxdata != NULL);

        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
            frameSize.height, 8, 4*frameSize.width, rgbColorSpace,
            kCGImageAlphaNoneSkipFirst);
        NSParameterAssert(context);

        // Draw the CGImage into the buffer, applying the caller-supplied transform.
        CGContextConcatCTM(context, frameTransform);
        CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
            CGImageGetHeight(image)), image);
        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);

        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);

        // "new" in the method name means the caller owns, and must release, the buffer.
        return pxbuffer;
    }
    

    frameSize is a CGSize describing your target frame size and frameTransform is a CGAffineTransform that lets you transform the images when you draw them into frames.
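
    To tie the pieces together, a rough driver loop could look like the sketch below. It assumes images is your NSArray of UIImage objects, that videoWriter, writerInput and adaptor are the objects from the steps above, that the pixel-buffer method is defined on the same class, and that 30 fps is just an example rate. The busy-wait on readyForMoreMediaData is the simplest approach; requestMediaDataWhenReadyOnQueue:usingBlock: is the more idiomatic one.

    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:kCMTimeZero];

    for (NSUInteger i = 0; i < images.count; i++) {
        UIImage *img = [images objectAtIndex:i];
        CVPixelBufferRef buffer = [self newPixelBufferFromCGImage:img.CGImage];

        // Crude back-pressure handling: wait until the input accepts more data.
        while (!writerInput.readyForMoreMediaData) {
            [NSThread sleepForTimeInterval:0.05];
        }
        [adaptor appendPixelBuffer:buffer
              withPresentationTime:CMTimeMake((int64_t)i, 30)];
        CVPixelBufferRelease(buffer); // the method follows the "new" rule
    }

    [writerInput markAsFinished];
    [videoWriter finishWriting]; // or the completion-handler variant on iOS 6+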
