How do I export UIImage array as a movie?


I have a serious problem: I have an NSArray with several UIImage objects. What I want to do now is create a movie from those UIImages.

10 Answers
  • 2020-11-22 02:18

    Take a look at AVAssetWriter and the rest of the AVFoundation framework. The writer has an input of type AVAssetWriterInput, which in turn has a method called appendSampleBuffer: that lets you add individual frames to a video stream. Essentially you’ll have to:

    1) Wire the writer:

    NSError *error = nil;
    AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
        [NSURL fileURLWithPath:somePath] fileType:AVFileTypeQuickTimeMovie
        error:&error];
    NSParameterAssert(videoWriter);
    
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
        AVVideoCodecH264, AVVideoCodecKey,
        [NSNumber numberWithInt:640], AVVideoWidthKey,
        [NSNumber numberWithInt:480], AVVideoHeightKey,
        nil];
    AVAssetWriterInput* writerInput = [[AVAssetWriterInput
        assetWriterInputWithMediaType:AVMediaTypeVideo
        outputSettings:videoSettings] retain]; //retain should be removed if ARC
    
    NSParameterAssert(writerInput);
    NSParameterAssert([videoWriter canAddInput:writerInput]);
    [videoWriter addInput:writerInput];
    

    2) Start a session:

    [videoWriter startWriting];
    [videoWriter startSessionAtSourceTime:…]; // use kCMTimeZero if unsure
    

    3) Write some samples:

    // Or you can use AVAssetWriterInputPixelBufferAdaptor.
    // That lets you feed the writer input data from a CVPixelBuffer
    // that’s quite easy to create from a CGImage.
    [writerInput appendSampleBuffer:sampleBuffer];
    

    4) Finish the session:

    [writerInput markAsFinished];
    [videoWriter endSessionAtSourceTime:…]; // optional; you can call finishWriting without specifying an end time
    [videoWriter finishWriting]; // deprecated in iOS 6
    /*
    [videoWriter finishWritingWithCompletionHandler:...]; // iOS 6.0+
    */
    

    You’ll still have to fill in a lot of blanks, but I think the only really hard remaining part is getting a pixel buffer from a CGImage:

    - (CVPixelBufferRef) newPixelBufferFromCGImage: (CGImageRef) image
    {
        NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
            [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
            [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
            nil];
        CVPixelBufferRef pxbuffer = NULL;
        CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, frameSize.width,
            frameSize.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options, 
            &pxbuffer);
        NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    
        CVPixelBufferLockBaseAddress(pxbuffer, 0);
        void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
        NSParameterAssert(pxdata != NULL);
    
        CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
        CGContextRef context = CGBitmapContextCreate(pxdata, frameSize.width,
            frameSize.height, 8, 4*frameSize.width, rgbColorSpace, 
            kCGImageAlphaNoneSkipFirst);
        NSParameterAssert(context);
        CGContextConcatCTM(context, frameTransform);
        CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), 
            CGImageGetHeight(image)), image);
        CGColorSpaceRelease(rgbColorSpace);
        CGContextRelease(context);
    
        CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    
        return pxbuffer;
    }
    

    frameSize is a CGSize describing your target frame size and frameTransform is a CGAffineTransform that lets you transform the images when you draw them into frames.
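
    For completeness, here is a minimal Swift sketch of the adaptor-based append that step 3 alludes to (Swift 3 syntax, to match the later answers). It assumes a writer input and pixel buffer adaptor configured like the writer above, and that the buffers come from a helper such as newPixelBufferFromCGImage:; the function name and the polling loop are illustrative only:

    import AVFoundation

    // Sketch: append pre-built pixel buffers at a fixed frame rate.
    func appendFrames(_ buffers: [CVPixelBuffer],
                      writerInput: AVAssetWriterInput,
                      adaptor: AVAssetWriterInputPixelBufferAdaptor,
                      fps: Int32) {
        for (index, buffer) in buffers.enumerated() {
            // Frame N is presented at N / fps seconds.
            let time = CMTimeMake(Int64(index), fps)
            while !writerInput.isReadyForMoreMediaData {
                // Real code should use requestMediaDataWhenReady(on:) instead of polling.
                Thread.sleep(forTimeInterval: 0.01)
            }
            _ = adaptor.append(buffer, withPresentationTime: time)
        }
    }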

  • 2020-11-22 02:22

    Here's a Swift 2.x version tested on iOS 8. It combines answers from @Scott Raposa and @Praxiteles, along with code from @acj contributed for another question. The code from @acj is here: https://gist.github.com/acj/6ae90aa1ebb8cad6b47b. @TimBull also provided code.

    Like @Scott Raposa, I had never even heard of CVPixelBufferPoolCreatePixelBuffer and several other functions, let alone understood how to use them.

    What you see below was cobbled together mostly by trial and error and from reading Apple docs. Please use with caution, and provide suggestions if there are mistakes.

    Usage:

    import UIKit
    import AVFoundation
    import Photos
    
    writeImagesAsMovie(yourImages, videoPath: yourPath, videoSize: yourSize, videoFPS: 30)
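
    For concreteness, the placeholders above might be filled in like this (illustrative values only; use whatever [UIImage] array you already have):

    let yourImages = [UIImage(named: "frame1")!, UIImage(named: "frame2")!]
    let yourPath = NSTemporaryDirectory() + "movie.mp4" // the writer fails if a file already exists here
    let yourSize = CGSize(width: 640, height: 480)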
    

    Code:

    func writeImagesAsMovie(allImages: [UIImage], videoPath: String, videoSize: CGSize, videoFPS: Int32) {
        // Create AVAssetWriter to write video
        guard let assetWriter = createAssetWriter(videoPath, size: videoSize) else {
            print("Error converting images to video: AVAssetWriter not created")
            return
        }
    
        // If here, AVAssetWriter exists so create AVAssetWriterInputPixelBufferAdaptor
        let writerInput = assetWriter.inputs.filter{ $0.mediaType == AVMediaTypeVideo }.first!
        let sourceBufferAttributes : [String : AnyObject] = [
            kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_32ARGB),
            kCVPixelBufferWidthKey as String : videoSize.width,
            kCVPixelBufferHeightKey as String : videoSize.height,
            ]
        let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: sourceBufferAttributes)
    
        // Start writing session
        assetWriter.startWriting()
        assetWriter.startSessionAtSourceTime(kCMTimeZero)
        if (pixelBufferAdaptor.pixelBufferPool == nil) {
            print("Error converting images to video: pixelBufferPool nil after starting session")
            return
        }
    
        // -- Create queue for <requestMediaDataWhenReadyOnQueue>
        let mediaQueue = dispatch_queue_create("mediaInputQueue", nil)
    
        // -- Set video parameters
        let frameDuration = CMTimeMake(1, videoFPS)
        var frameCount = 0
    
        // -- Add images to video
        let numImages = allImages.count
        writerInput.requestMediaDataWhenReadyOnQueue(mediaQueue, usingBlock: { () -> Void in
            // Append unadded images to video but only while input ready
            while (writerInput.readyForMoreMediaData && frameCount < numImages) {
                let presentationTime = CMTimeMake(Int64(frameCount), videoFPS) // frame N displays at N/fps
    
                if !self.appendPixelBufferForImage(allImages[frameCount], pixelBufferAdaptor: pixelBufferAdaptor, presentationTime: presentationTime) {
                    print("Error converting images to video: AVAssetWriterInputPixelBufferAdaptor failed to append pixel buffer")
                    return
                }
    
                frameCount += 1
            }
    
            // No more images to add? End video.
            if (frameCount >= numImages) {
                writerInput.markAsFinished()
                assetWriter.finishWritingWithCompletionHandler {
                    if (assetWriter.error != nil) {
                        print("Error converting images to video: \(assetWriter.error)")
                    } else {
                        self.saveVideoToLibrary(NSURL(fileURLWithPath: videoPath))
                        print("Converted images to movie @ \(videoPath)")
                    }
                }
            }
        })
    }
    
    
    func createAssetWriter(path: String, size: CGSize) -> AVAssetWriter? {
        // Convert <path> to NSURL object
        let pathURL = NSURL(fileURLWithPath: path)
    
        // Return new asset writer or nil
        do {
            // Create asset writer
            let newWriter = try AVAssetWriter(URL: pathURL, fileType: AVFileTypeMPEG4)
    
            // Define settings for video input
            let videoSettings: [String : AnyObject] = [
                AVVideoCodecKey  : AVVideoCodecH264,
                AVVideoWidthKey  : size.width,
                AVVideoHeightKey : size.height,
                ]
    
            // Add video input to writer
            let assetWriterVideoInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
            newWriter.addInput(assetWriterVideoInput)
    
            // Return writer
            print("Created asset writer for \(size.width)x\(size.height) video")
            return newWriter
        } catch {
            print("Error creating asset writer: \(error)")
            return nil
        }
    }
    
    
    func appendPixelBufferForImage(image: UIImage, pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor, presentationTime: CMTime) -> Bool {
        var appendSucceeded = false
    
        autoreleasepool {
            if  let pixelBufferPool = pixelBufferAdaptor.pixelBufferPool {
                let pixelBufferPointer = UnsafeMutablePointer<CVPixelBuffer?>.alloc(1)
                let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(
                    kCFAllocatorDefault,
                    pixelBufferPool,
                    pixelBufferPointer
                )
    
                if let pixelBuffer = pixelBufferPointer.memory where status == 0 {
                    fillPixelBufferFromImage(image, pixelBuffer: pixelBuffer)
                    appendSucceeded = pixelBufferAdaptor.appendPixelBuffer(pixelBuffer, withPresentationTime: presentationTime)
                    pixelBufferPointer.destroy()
                } else {
                    NSLog("Error: Failed to allocate pixel buffer from pool")
                }
    
                pixelBufferPointer.dealloc(1)
            }
        }
    
        return appendSucceeded
    }
    
    
    func fillPixelBufferFromImage(image: UIImage, pixelBuffer: CVPixelBufferRef) {
        CVPixelBufferLockBaseAddress(pixelBuffer, 0)
    
        let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    
        // Create CGBitmapContext
        let context = CGBitmapContextCreate(
            pixelData,
            Int(image.size.width),
            Int(image.size.height),
            8,
            CVPixelBufferGetBytesPerRow(pixelBuffer),
            rgbColorSpace,
            CGImageAlphaInfo.PremultipliedFirst.rawValue
        )
    
        // Draw image into context
        CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage)
    
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0)
    }
    
    
    func saveVideoToLibrary(videoURL: NSURL) {
        PHPhotoLibrary.requestAuthorization { status in
            // Return if unauthorized
            guard status == .Authorized else {
                print("Error saving video: unauthorized access")
                return
            }
    
            // If here, save video to library
            PHPhotoLibrary.sharedPhotoLibrary().performChanges({
                PHAssetChangeRequest.creationRequestForAssetFromVideoAtFileURL(videoURL)
            }) { success, error in
                if !success {
                    print("Error saving video: \(error)")
                }
            }
        }
    }
    
  • 2020-11-22 02:24

    I took Zoul's main ideas, incorporated the AVAssetWriterInputPixelBufferAdaptor method, and made the beginnings of a little framework out of it.

    Feel free to check it out and improve upon it! CEMovieMaker

  • 2020-11-22 02:24

    Just translated the @Scott Raposa answer to Swift 3 (with some very small changes):

    import AVFoundation
    import UIKit
    import Photos
    
    struct RenderSettings {
    
        var size : CGSize = .zero
        var fps: Int32 = 6   // frames per second
        var avCodecKey = AVVideoCodecH264
        var videoFilename = "render"
        var videoFilenameExt = "mp4"
    
    
        var outputURL: URL {
            // Use the CachesDirectory so the rendered video file sticks around as long as we need it to.
            // Using the CachesDirectory ensures the file won't be included in a backup of the app.
            let fileManager = FileManager.default
            if let tmpDirURL = try? fileManager.url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: true) {
                return tmpDirURL.appendingPathComponent(videoFilename).appendingPathExtension(videoFilenameExt)
            }
            fatalError("URLForDirectory() failed")
        }
    }
    
    
    class ImageAnimator {
    
        // Apple suggests a timescale of 600 because it's a multiple of standard video rates 24, 25, 30, 60 fps etc.
        static let kTimescale: Int32 = 600
    
        let settings: RenderSettings
        let videoWriter: VideoWriter
        var images: [UIImage]!
    
        var frameNum = 0
    
        class func saveToLibrary(videoURL: URL) {
            PHPhotoLibrary.requestAuthorization { status in
                guard status == .authorized else { return }
    
                PHPhotoLibrary.shared().performChanges({
                    PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: videoURL)
                }) { success, error in
                    if !success {
                        print("Could not save video to photo library:", error)
                    }
                }
            }
        }
    
        class func removeFileAtURL(fileURL: URL) {
            do {
                try FileManager.default.removeItem(atPath: fileURL.path)
            }
            catch _ as NSError {
                // Assume file doesn't exist.
            }
        }
    
        init(renderSettings: RenderSettings) {
            settings = renderSettings
            videoWriter = VideoWriter(renderSettings: settings)
    //        images = loadImages()
        }
    
        func render(completion: (()->Void)?) {
    
            // The VideoWriter will fail if a file exists at the URL, so clear it out first.
            ImageAnimator.removeFileAtURL(fileURL: settings.outputURL)
    
            videoWriter.start()
            videoWriter.render(appendPixelBuffers: appendPixelBuffers) {
                ImageAnimator.saveToLibrary(videoURL: self.settings.outputURL)
                completion?()
            }
    
        }
    
    //    // Replace this logic with your own.
    //    func loadImages() -> [UIImage] {
    //        var images = [UIImage]()
    //        for index in 1...10 {
    //            let filename = "\(index).jpg"
    //            images.append(UIImage(named: filename)!)
    //        }
    //        return images
    //    }
    
        // This is the callback function for VideoWriter.render()
        func appendPixelBuffers(writer: VideoWriter) -> Bool {
    
            let frameDuration = CMTimeMake(Int64(ImageAnimator.kTimescale / settings.fps), ImageAnimator.kTimescale)
    
            while !images.isEmpty {
    
                if writer.isReadyForData == false {
                    // Inform writer we have more buffers to write.
                    return false
                }
    
                let image = images.removeFirst()
                let presentationTime = CMTimeMultiply(frameDuration, Int32(frameNum))
                let success = videoWriter.addImage(image: image, withPresentationTime: presentationTime)
                if success == false {
                    fatalError("addImage() failed")
                }
    
                frameNum += 1
            }
    
            // Inform writer all buffers have been written.
            return true
        }
    
    }
    
    
    class VideoWriter {
    
        let renderSettings: RenderSettings
    
        var videoWriter: AVAssetWriter!
        var videoWriterInput: AVAssetWriterInput!
        var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!
    
        var isReadyForData: Bool {
            return videoWriterInput?.isReadyForMoreMediaData ?? false
        }
    
        class func pixelBufferFromImage(image: UIImage, pixelBufferPool: CVPixelBufferPool, size: CGSize) -> CVPixelBuffer {
    
            var pixelBufferOut: CVPixelBuffer?
    
            let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
            if status != kCVReturnSuccess {
                fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
            }
    
            let pixelBuffer = pixelBufferOut!
    
            CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    
            let data = CVPixelBufferGetBaseAddress(pixelBuffer)
            let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
            let context = CGContext(data: data, width: Int(size.width), height: Int(size.height),
                                    bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
    
        context!.clear(CGRect(x: 0, y: 0, width: size.width, height: size.height))
    
            let horizontalRatio = size.width / image.size.width
            let verticalRatio = size.height / image.size.height
            //aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
            let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit
    
            let newSize = CGSize(width: image.size.width * aspectRatio, height: image.size.height * aspectRatio)
    
            let x = newSize.width < size.width ? (size.width - newSize.width) / 2 : 0
            let y = newSize.height < size.height ? (size.height - newSize.height) / 2 : 0
    
        context?.draw(image.cgImage!, in: CGRect(x: x, y: y, width: newSize.width, height: newSize.height))
            CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    
            return pixelBuffer
        }
    
        init(renderSettings: RenderSettings) {
            self.renderSettings = renderSettings
        }
    
        func start() {
    
            let avOutputSettings: [String: Any] = [
                AVVideoCodecKey: renderSettings.avCodecKey,
                AVVideoWidthKey: NSNumber(value: Float(renderSettings.size.width)),
                AVVideoHeightKey: NSNumber(value: Float(renderSettings.size.height))
            ]
    
            func createPixelBufferAdaptor() {
                let sourcePixelBufferAttributesDictionary = [
                    kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32ARGB),
                    kCVPixelBufferWidthKey as String: NSNumber(value: Float(renderSettings.size.width)),
                    kCVPixelBufferHeightKey as String: NSNumber(value: Float(renderSettings.size.height))
                ]
                pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput,
                                                                          sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
            }
    
            func createAssetWriter(outputURL: URL) -> AVAssetWriter {
                guard let assetWriter = try? AVAssetWriter(outputURL: outputURL, fileType: AVFileTypeMPEG4) else {
                    fatalError("AVAssetWriter() failed")
                }
    
                guard assetWriter.canApply(outputSettings: avOutputSettings, forMediaType: AVMediaTypeVideo) else {
                    fatalError("canApplyOutputSettings() failed")
                }
    
                return assetWriter
            }
    
            videoWriter = createAssetWriter(outputURL: renderSettings.outputURL)
            videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: avOutputSettings)
    
            if videoWriter.canAdd(videoWriterInput) {
                videoWriter.add(videoWriterInput)
            }
            else {
                fatalError("canAddInput() returned false")
            }
    
            // The pixel buffer adaptor must be created before we start writing.
            createPixelBufferAdaptor()
    
            if videoWriter.startWriting() == false {
                fatalError("startWriting() failed")
            }
    
            videoWriter.startSession(atSourceTime: kCMTimeZero)
    
            precondition(pixelBufferAdaptor.pixelBufferPool != nil, "nil pixelBufferPool")
        }
    
        func render(appendPixelBuffers: ((VideoWriter)->Bool)?, completion: (()->Void)?) {
    
        precondition(videoWriter != nil, "Call start() to initialize the writer")
    
            let queue = DispatchQueue(label: "mediaInputQueue")
            videoWriterInput.requestMediaDataWhenReady(on: queue) {
                let isFinished = appendPixelBuffers?(self) ?? false
                if isFinished {
                    self.videoWriterInput.markAsFinished()
                    self.videoWriter.finishWriting() {
                        DispatchQueue.main.async {
                            completion?()
                        }
                    }
                }
                else {
                    // Fall through. The closure will be called again when the writer is ready.
                }
            }
        }
    
        func addImage(image: UIImage, withPresentationTime presentationTime: CMTime) -> Bool {
    
        precondition(pixelBufferAdaptor != nil, "Call start() to initialize the writer")
    
            let pixelBuffer = VideoWriter.pixelBufferFromImage(image: image, pixelBufferPool: pixelBufferAdaptor.pixelBufferPool!, size: renderSettings.size)
            return pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
        }
    
    }
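
    A minimal way to drive these classes might look like this (a hedged sketch; yourImages is a hypothetical [UIImage] array, and the size/fps values are illustrative):

    var settings = RenderSettings()
    settings.size = CGSize(width: 640, height: 480)
    settings.fps = 30

    let animator = ImageAnimator(renderSettings: settings)
    animator.images = yourImages // hypothetical: your own [UIImage] array
    animator.render {
        print("Finished rendering to \(settings.outputURL)")
    }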
    
  • 2020-11-22 02:25

    Well, this is a bit hard to implement in pure Objective-C... If you are developing for jailbroken devices, a good idea is to use the command-line tool ffmpeg from inside your app. It's quite easy to create a movie from images with a command like:

    ffmpeg -r 10 -b 1800 -i %03d.jpg test1800.mp4
    

    Note that the images have to be named sequentially, and also be placed in the same directory. For more information, take a look at: http://electron.mit.edu/~gsteele/ffmpeg/
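
    If the frames start out as UIImages inside your app, here is a hedged Swift sketch of writing them out with the sequential names that the %03d pattern expects (the function name and JPEG quality are illustrative):

    import UIKit

    // Writes images as 001.jpg, 002.jpg, ... so ffmpeg's %03d pattern finds them.
    func writeFramesForFFmpeg(images: [UIImage], directory: String) {
        for (index, image) in images.enumerated() {
            let name = String(format: "%03d.jpg", index + 1)
            let path = (directory as NSString).appendingPathComponent(name)
            if let data = UIImageJPEGRepresentation(image, 0.9) {
                try? data.write(to: URL(fileURLWithPath: path))
            }
        }
    }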

  • 2020-11-22 02:32

    Here's the Swift 3 version of how to convert an array of images to a video:

    import Foundation
    import AVFoundation
    import UIKit
    
    typealias CXEMovieMakerCompletion = (URL) -> Void
    typealias CXEMovieMakerUIImageExtractor = (AnyObject) -> UIImage?
    
    
    public class ImagesToVideoUtils: NSObject {
    
        static let paths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
        static let tempPath = paths[0] + "/exportvideo.mp4"
        static let fileURL = URL(fileURLWithPath: tempPath)
    //    static let tempPath = NSTemporaryDirectory() + "/exportvideo.mp4"
    //    static let fileURL = URL(fileURLWithPath: tempPath)
    
    
        var assetWriter:AVAssetWriter!
        var writeInput:AVAssetWriterInput!
        var bufferAdapter:AVAssetWriterInputPixelBufferAdaptor!
        var videoSettings:[String : Any]!
        var frameTime:CMTime!
        //var fileURL:URL!
    
        var completionBlock: CXEMovieMakerCompletion?
        var movieMakerUIImageExtractor:CXEMovieMakerUIImageExtractor?
    
    
        public class func videoSettings(codec:String, width:Int, height:Int) -> [String: Any]{
            if width % 16 != 0 {
                print("warning: video settings width must be divisible by 16")
            }
    
            let videoSettings: [String: Any] = [AVVideoCodecKey: codec, // e.g. AVVideoCodecJPEG or AVVideoCodecH264
                                                AVVideoWidthKey: width,
                                                AVVideoHeightKey: height]
    
            return videoSettings
        }
    
        public init(videoSettings: [String: Any]) {
            super.init()
    
    
            if(FileManager.default.fileExists(atPath: ImagesToVideoUtils.tempPath)){
                guard (try? FileManager.default.removeItem(atPath: ImagesToVideoUtils.tempPath)) != nil else {
                    print("remove path failed")
                    return
                }
            }
    
    
            self.assetWriter = try! AVAssetWriter(url: ImagesToVideoUtils.fileURL, fileType: AVFileTypeQuickTimeMovie)
    
            self.videoSettings = videoSettings
            self.writeInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
            assert(self.assetWriter.canAdd(self.writeInput), "add failed")
    
            self.assetWriter.add(self.writeInput)
            let bufferAttributes:[String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32ARGB)]
            self.bufferAdapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: self.writeInput, sourcePixelBufferAttributes: bufferAttributes)
            self.frameTime = CMTimeMake(1, 5)
        }
    
        func createMovieFrom(urls: [URL], withCompletion: @escaping CXEMovieMakerCompletion){
            self.createMovieFromSource(images: urls as [AnyObject], extractor:{(inputObject:AnyObject) ->UIImage? in
                return UIImage(data: try! Data(contentsOf: inputObject as! URL))}, withCompletion: withCompletion)
        }
    
        func createMovieFrom(images: [UIImage], withCompletion: @escaping CXEMovieMakerCompletion){
            self.createMovieFromSource(images: images, extractor: {(inputObject:AnyObject) -> UIImage? in
                return inputObject as? UIImage}, withCompletion: withCompletion)
        }
    
        func createMovieFromSource(images: [AnyObject], extractor: @escaping CXEMovieMakerUIImageExtractor, withCompletion: @escaping CXEMovieMakerCompletion){
            self.completionBlock = withCompletion
    
            self.assetWriter.startWriting()
            self.assetWriter.startSession(atSourceTime: kCMTimeZero)
    
            let mediaInputQueue = DispatchQueue(label: "mediaInputQueue")
            var i = 0
            let frameNumber = images.count
    
            self.writeInput.requestMediaDataWhenReady(on: mediaInputQueue){
                while(true){
                    if(i >= frameNumber){
                        break
                    }
    
                    if (self.writeInput.isReadyForMoreMediaData){
                        var sampleBuffer:CVPixelBuffer?
                        autoreleasepool{
                            if let img = extractor(images[i]), let cgImage = img.cgImage {
                                sampleBuffer = self.newPixelBufferFrom(cgImage: cgImage)
                            } else {
                                // Skip this frame rather than force-unwrapping a nil image.
                                i += 1
                                print("Warning: could not extract one of the frames")
                            }
                        }
                        if (sampleBuffer != nil){
                            if(i == 0){
                                self.bufferAdapter.append(sampleBuffer!, withPresentationTime: kCMTimeZero)
                            }else{
                                let value = i - 1
                                let lastTime = CMTimeMake(Int64(value), self.frameTime.timescale)
                                let presentTime = CMTimeAdd(lastTime, self.frameTime)
                                self.bufferAdapter.append(sampleBuffer!, withPresentationTime: presentTime)
                            }
                            i = i + 1
                        }
                    }
                }
                self.writeInput.markAsFinished()
                self.assetWriter.finishWriting {
                    DispatchQueue.main.sync {
                        self.completionBlock!(ImagesToVideoUtils.fileURL)
                    }
                }
            }
        }
    
        func newPixelBufferFrom(cgImage:CGImage) -> CVPixelBuffer?{
            let options:[String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true, kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
            var pxbuffer:CVPixelBuffer?
            let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
            let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int
    
            let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
            assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")
    
            CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
            let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
            let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
            let context = CGContext(data: pxdata, width: frameWidth, height: frameHeight, bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
            assert(context != nil, "context is nil")
    
            context!.concatenate(CGAffineTransform.identity)
            context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
            CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
            return pxbuffer
        }
    }
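
    A hedged usage sketch (the dimensions are illustrative, yourImages is a hypothetical [UIImage] array, and note the divisible-by-16 warning above):

    let settings = ImagesToVideoUtils.videoSettings(codec: AVVideoCodecJPEG, width: 640, height: 480)
    let maker = ImagesToVideoUtils(videoSettings: settings)
    maker.createMovieFrom(images: yourImages) { fileURL in
        print("Movie written to \(fileURL)")
    }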
    

    I use it together with screen capturing, basically to create a video of the screen capture; here's the full story/complete example.
