Make a UIImage from a CMSampleBuffer


This is not the same as the countless questions about converting a CMSampleBuffer to a UIImage. I'm simply wondering why I can't convert it like …

8 Answers
  • 2020-12-13 15:36

    I wrote a simple extension for use with Swift 4.x/3.x to produce a UIImage from a CMSampleBuffer.

    This also handles scaling and orientation, though you can just accept default values if they work for you.

    import UIKit
    import AVFoundation
    
    extension CMSampleBuffer {
        func image(orientation: UIImageOrientation = .up, 
                   scale: CGFloat = 1.0) -> UIImage? {
            if let buffer = CMSampleBufferGetImageBuffer(self) {
                let ciImage = CIImage(cvPixelBuffer: buffer)
    
                return UIImage(ciImage: ciImage, 
                               scale: scale,
                               orientation: orientation)
            }
    
            return nil
        }
    }
    
    1. If an image buffer can be obtained from the sample buffer, it proceeds; otherwise nil is returned
    2. Using that buffer, it initializes a CIImage
    3. It returns a UIImage initialized with the ciImage value, along with the scale and orientation values. If none are provided, the defaults of .up and 1.0 are used, respectively
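
    For example, converting a frame inside a capture delegate callback might look like this (a sketch; `sampleBuffer` and the `.right` orientation are assumptions that depend on your capture setup):

        // Convert a captured frame; .right is typical for a portrait back camera
        let image = sampleBuffer.image(orientation: .right, scale: UIScreen.main.scale)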
  • 2020-12-13 15:37

    This is going to come up a lot in connection with the iOS 10 AVCapturePhotoOutput class. Suppose the user wants to snap a photo and you call capturePhoto(with:delegate:) and your settings include a request for a preview image. This is a splendidly efficient way to get a preview image, but how are you going to display it in your interface? The preview image arrives as a CMSampleBuffer in your implementation of the delegate method:

    func capture(_ output: AVCapturePhotoOutput,
        didFinishProcessingPhotoSampleBuffer buff: CMSampleBuffer?,
        previewPhotoSampleBuffer: CMSampleBuffer?,
        resolvedSettings: AVCaptureResolvedPhotoSettings,
        bracketSettings: AVCaptureBracketedStillImageSettings?,
        error: Error?) {
        // ...
    }

    You need to transform that CMSampleBuffer, previewPhotoSampleBuffer, into a UIImage. How are you going to do that? Like this:

    if let prev = previewPhotoSampleBuffer {
        if let buff = CMSampleBufferGetImageBuffer(prev) {
            let cim = CIImage(cvPixelBuffer: buff)
            let im = UIImage(ciImage: cim)
            // and now you have a UIImage! do something with it ...
        }
    }
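
    One caveat: a UIImage created this way is backed by a CIImage rather than a CGImage. It draws fine in a UIImageView, but anything that reads image.cgImage will get nil; if you need a CGImage-backed image, render through a CIContext first, as the Swift 5 answer below shows.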
    
  • 2020-12-13 15:42

    Use the following code to convert an image from a pixel buffer.

    Option 1:

    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    
    CIContext *context = [CIContext contextWithOptions:nil];
    CGImageRef myImage = [context
                             createCGImage:ciImage
                             fromRect:CGRectMake(0, 0,
                                                 CVPixelBufferGetWidth(pixelBuffer),
                                                 CVPixelBufferGetHeight(pixelBuffer))];
    
    UIImage *uiImage = [UIImage imageWithCGImage:myImage];
    // createCGImage: follows the Create rule, so release the CGImage to avoid a leak
    CGImageRelease(myImage);
    

    Option 2:

    size_t w = CVPixelBufferGetWidth(pixelBuffer);
    size_t h = CVPixelBufferGetHeight(pixelBuffer);
    size_t r = CVPixelBufferGetBytesPerRow(pixelBuffer);
    size_t bytesPerPixel = r / w;
    
    // The base address is only valid while the pixel buffer is locked
    CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    unsigned char *buffer = CVPixelBufferGetBaseAddress(pixelBuffer);
    
    UIGraphicsBeginImageContext(CGSizeMake(w, h));
    
    CGContextRef c = UIGraphicsGetCurrentContext();
    
    unsigned char *data = CGBitmapContextGetData(c);
    if (data != NULL) {
        size_t dstBytesPerRow = CGBitmapContextGetBytesPerRow(c);
        for (size_t y = 0; y < h; y++) {
            for (size_t x = 0; x < w; x++) {
                // Use each buffer's own row stride; rows may be padded
                size_t srcOffset = r * y + bytesPerPixel * x;
                size_t dstOffset = dstBytesPerRow * y + bytesPerPixel * x;
                data[dstOffset]     = buffer[srcOffset];     // B (for a 32BGRA buffer)
                data[dstOffset + 1] = buffer[srcOffset + 1]; // G
                data[dstOffset + 2] = buffer[srcOffset + 2]; // R
                data[dstOffset + 3] = buffer[srcOffset + 3]; // A
            }
        }
    }
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    
    UIGraphicsEndImageContext();
    CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
    
  • 2020-12-13 15:42

    Swift 5.0

    if let cvImageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) {
        let ciImage = CIImage(cvImageBuffer: cvImageBuffer)
        let context = CIContext()
    
        if let cgImage = context.createCGImage(ciImage, from: ciImage.extent) {
            let uiImage = UIImage(cgImage: cgImage)
        }
    }
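
    One performance note: CIContext is expensive to create, so if you convert frames repeatedly, create the context once (for example, as a stored property) and reuse it rather than constructing a new one per frame.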
    
  • 2020-12-13 15:46

    To all: don't use methods like this:

        private let context = CIContext()
    
        private func imageFromSampleBuffer2(_ sampleBuffer: CMSampleBuffer) -> UIImage? {
            guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
            let ciImage = CIImage(cvPixelBuffer: imageBuffer)
            guard let cgImage = context.createCGImage(ciImage, from: ciImage.extent) else { return nil }
            return UIImage(cgImage: cgImage)
        }
    

    They consume much more CPU and take more time to convert each frame.

    Use the solution from https://stackoverflow.com/a/40193359/7767664 instead.

    Don't forget to set the following pixel format on your AVCaptureVideoDataOutput (32BGRA matches the byte order the conversion method below assumes):

        videoOutput = AVCaptureVideoDataOutput()
    
        videoOutput.videoSettings = [(kCVPixelBufferPixelFormatTypeKey as String) : NSNumber(value: kCVPixelFormatType_32BGRA as UInt32)]
        //videoOutput.alwaysDiscardsLateVideoFrames = true
    
        videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue(label: "MyQueue"))
    

    The conversion method:

        func imageFromSampleBuffer(_ sampleBuffer: CMSampleBuffer) -> UIImage {
            // Get a CMSampleBuffer's Core Video image buffer for the media data
            let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
            // Lock the base address of the pixel buffer
            CVPixelBufferLockBaseAddress(imageBuffer, .readOnly)

            // Get the base address of the pixel buffer
            let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)

            // Get the number of bytes per row for the pixel buffer
            let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
            // Get the pixel buffer width and height
            let width = CVPixelBufferGetWidth(imageBuffer)
            let height = CVPixelBufferGetHeight(imageBuffer)

            // Create a device-dependent RGB color space
            let colorSpace = CGColorSpaceCreateDeviceRGB()

            // Create a bitmap graphics context with the sample buffer data
            // (byteOrder32Little + premultipliedFirst matches the 32BGRA format set above)
            var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Little.rawValue
            bitmapInfo |= CGImageAlphaInfo.premultipliedFirst.rawValue & CGBitmapInfo.alphaInfoMask.rawValue
            let context = CGContext(data: baseAddress, width: width, height: height, bitsPerComponent: 8, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo)
            // Create a Quartz image from the pixel data in the bitmap graphics context
            let quartzImage = context?.makeImage()
            // Unlock the pixel buffer
            CVPixelBufferUnlockBaseAddress(imageBuffer, .readOnly)

            // Create an image object from the Quartz image
            return UIImage(cgImage: quartzImage!)
        }
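
    For context, here is a minimal sketch of where this conversion is typically called, assuming your class conforms to AVCaptureVideoDataOutputSampleBufferDelegate (the `imageView` property is hypothetical):

        func captureOutput(_ output: AVCaptureOutput,
                           didOutput sampleBuffer: CMSampleBuffer,
                           from connection: AVCaptureConnection) {
            // This runs on the "MyQueue" queue set above; hop to main for UI work
            let image = imageFromSampleBuffer(sampleBuffer)
            DispatchQueue.main.async {
                self.imageView.image = image // imageView: a hypothetical UIImageView
            }
        }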
    
  • 2020-12-13 15:53

    A Swift 4 / iOS 11 version of Popigny's answer:

    import Foundation
    import AVFoundation
    import UIKit
    
    class ViewController : UIViewController {
        let captureSession = AVCaptureSession()
        let photoOutput = AVCapturePhotoOutput()
        let cameraPreview = UIView(frame: .zero)
        let progressIndicator = ProgressIndicator()
    
        override func viewDidLoad() {
            super.viewDidLoad()
    
            setupVideoPreview()
    
            do {
                try setupCaptureSession()
            } catch {
                let errorMessage = String(describing:error)
                print("[--ERROR--]: \(#file):\(#function):\(#line): " + errorMessage)
                alert(title: "Error", message: errorMessage)
            }
        }
    
        private func setupCaptureSession() throws {
            let deviceDiscovery = AVCaptureDevice.DiscoverySession(deviceTypes: [AVCaptureDevice.DeviceType.builtInWideAngleCamera], mediaType: AVMediaType.video, position: AVCaptureDevice.Position.back)
            let devices = deviceDiscovery.devices
    
            guard let captureDevice = devices.first else {
                let errorMessage = "No camera available"
                print("[--ERROR--]: \(#file):\(#function):\(#line): " + errorMessage)
                alert(title: "Error", message: errorMessage)
                return
            }
    
            let captureDeviceInput = try AVCaptureDeviceInput(device: captureDevice)
            captureSession.addInput(captureDeviceInput)
            captureSession.sessionPreset = AVCaptureSession.Preset.photo
            captureSession.startRunning()
    
            if captureSession.canAddOutput(photoOutput) {
                captureSession.addOutput(photoOutput)
            }
        }
    
        private func setupVideoPreview() {
    
            let previewLayer = AVCaptureVideoPreviewLayer(session: captureSession)
            previewLayer.bounds = view.bounds
            previewLayer.position = CGPoint(x:view.bounds.midX, y:view.bounds.midY)
            previewLayer.videoGravity = AVLayerVideoGravity.resizeAspectFill
    
            cameraPreview.layer.addSublayer(previewLayer)
            cameraPreview.addGestureRecognizer(UITapGestureRecognizer(target: self, action:#selector(capturePhoto)))
    
            cameraPreview.translatesAutoresizingMaskIntoConstraints = false
    
            view.addSubview(cameraPreview)
    
            let viewsDict = ["cameraPreview":cameraPreview]
            view.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "V:|-0-[cameraPreview]-0-|", options: [], metrics: nil, views: viewsDict))
            view.addConstraints(NSLayoutConstraint.constraints(withVisualFormat: "H:|-0-[cameraPreview]-0-|", options: [], metrics: nil, views: viewsDict))
    
        }
    
        @objc func capturePhoto(_ sender: UITapGestureRecognizer) {
            progressIndicator.add(toView: view)
            let photoOutputSettings = AVCapturePhotoSettings(format: [AVVideoCodecKey:AVVideoCodecType.jpeg])
            photoOutput.capturePhoto(with: photoOutputSettings, delegate: self)
        }
    
        func saveToPhotosAlbum(_ image: UIImage) {
            UIImageWriteToSavedPhotosAlbum(image, self, #selector(photoWasSavedToAlbum), nil)
        }
    
        @objc func photoWasSavedToAlbum(_ image: UIImage, _ error: Error?, _ context: Any?) {
            alert(message: "Photo saved to device photo album")
        }
    
        func alert(title: String?=nil, message:String?=nil) {
            let alert = UIAlertController(title: title, message: message, preferredStyle: .alert)
            alert.addAction(UIAlertAction(title: "OK", style: .default, handler: nil))
            present(alert, animated:true)
        }
    
    }
    
    extension ViewController : AVCapturePhotoCaptureDelegate {
        func photoOutput(_ output: AVCapturePhotoOutput, didFinishProcessingPhoto photo: AVCapturePhoto, error: Error?) {
    
        guard let photoData = photo.fileDataRepresentation() else {
                let errorMessage = "Photo capture did not provide output data"
                print("[--ERROR--]: \(#file):\(#function):\(#line): " + errorMessage)
                alert(title: "Error", message: errorMessage)
                return
            }
    
            guard let image = UIImage(data: photoData) else {
                let errorMessage = "could not create image to save"
                print("[--ERROR--]: \(#file):\(#function):\(#line): " + errorMessage)
                alert(title: "Error", message: errorMessage)
                return
            }
    
            saveToPhotosAlbum(image)
    
            progressIndicator.hide()
        }
    }
    

    A full example project to see this in context: https://github.com/cruinh/CameraCapture
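
    Note that running a camera app like this also requires an NSCameraUsageDescription entry in the app's Info.plist; without it, iOS terminates the app as soon as the capture session starts.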
