Decoding h264 in iOS 8 with video tool box

时光怂恿深爱的人放手 submitted on 2019-12-31 12:09:21

Question


I need to decode an H.264 stream and get the pixel buffers.

I know it's possible with Video Toolbox on iOS 8.

1. How do I convert the H.264 stream to a CMSampleBufferRef?

2. How do I use Video Toolbox to decode it?


Answer 1:


I assume you get the stream in Annex B format. If it is already in AVCC format (i.e. read from an MP4), you can use AVAssetReader and do not need to do much; a rough sketch of that path follows.
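Below is a minimal sketch of that AVCC/MP4 path, assuming a local file; videoURL is a placeholder for your own asset URL:

#import <AVFoundation/AVFoundation.h>

AVURLAsset *asset = [AVURLAsset URLAssetWithURL:videoURL options:nil];
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

// Passing nil outputSettings hands back the samples in their native (AVCC) format.
AVAssetReaderTrackOutput *output =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:nil];
[reader addOutput:output];
[reader startReading];

CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [output copyNextSampleBuffer]) != NULL) {
    // Each buffer is already a decodable CMSampleBufferRef; feed it to a
    // VTDecompressionSession (or an AVSampleBufferDisplayLayer) and release it.
    CFRelease(sampleBuffer);
}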

For an Annex B stream (this is what people often call a raw H.264 stream):

  1. Extract the SPS/PPS NAL units and create a parameter set (format description) from them. You receive them periodically; they carry the information the decoder needs to decode a frame (see the sketch after this list).

  2. Create the CMSampleTimingInfo array with the duration (you can take it from parsing the VUI part of the SPS), the presentation timestamp and the decoding timestamp. If the stream is received as an MPEG-2 TS, take the timestamps from the PES header; if not, supply the missing info based on your own calculations.

  3. Wrap the VCL NAL units in a CMBlockBuffer. You can put more than one into it. If you receive your stream over RTP, which may fragment the NAL units, make sure every NAL unit is complete.

  4. When wrapping the NAL unit in the CMBlockBuffer, replace the 3- or 4-byte start code with a length header.

  5. Supply that information to CMSampleBufferCreate, and you can decode the frame in a VTDecompressionSession.
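A rough sketch of steps 1 and 3-5, assuming you have already parsed the raw SPS, PPS and one VCL NAL unit out of the Annex B stream; sps, pps, nal, their sizes and pts are placeholders, and the timing values would come from step 2:

// Step 1: build a format description from the SPS/PPS (start codes stripped).
CMVideoFormatDescriptionRef formatDesc = NULL;
const uint8_t *const parameterSets[2] = { sps, pps };
const size_t parameterSetSizes[2] = { spsSize, ppsSize };
OSStatus status = CMVideoFormatDescriptionCreateFromH264ParameterSets(
    kCFAllocatorDefault,
    2, parameterSets, parameterSetSizes,
    4,                                  // size of the NAL length header written below
    &formatDesc);

// Steps 3/4: copy the VCL NAL unit and replace its start code with a 4-byte
// big-endian length header, then wrap the result in a CMBlockBuffer.
size_t dataLength = nalSize + 4;
uint8_t *data = malloc(dataLength);
uint32_t lengthHeader = CFSwapInt32HostToBig((uint32_t)nalSize);
memcpy(data, &lengthHeader, 4);
memcpy(data + 4, nal, nalSize);

CMBlockBufferRef blockBuffer = NULL;
status = CMBlockBufferCreateWithMemoryBlock(
    kCFAllocatorDefault,
    data, dataLength,
    kCFAllocatorDefault,                // the block buffer takes ownership of `data`
    NULL, 0, dataLength, 0,
    &blockBuffer);

// Steps 2/5: attach the timing info and build the sample buffer.
CMSampleTimingInfo timing = {
    .duration              = CMTimeMake(1, 30),   // e.g. derived from the VUI
    .presentationTimeStamp = pts,                 // placeholder, from the PES or your own clock
    .decodeTimeStamp       = kCMTimeInvalid
};
const size_t sampleSizes[1] = { dataLength };

CMSampleBufferRef sampleBuffer = NULL;
status = CMSampleBufferCreate(
    kCFAllocatorDefault,
    blockBuffer, true, NULL, NULL,
    formatDesc,
    1,                                  // number of samples
    1, &timing,                         // timing entries
    1, sampleSizes,                     // sample sizes
    &sampleBuffer);
// sampleBuffer can now be passed to VTDecompressionSessionDecodeFrame.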

There is a presentation from WWDC 2014 ("Direct Access to Video Encoding and Decoding") available that explains these steps in a bit more detail and also provides sample code.




Answer 2:


Try this working code. Supply the encoded CMSampleBufferRef to sampleBuffer.

if (!decompressionSession)
{
    // First call: create the session from the sample buffer's format description
    // (which carries the SPS/PPS) and register the output callback.
    CMFormatDescriptionRef formatDescription = CMSampleBufferGetFormatDescription(sampleBuffer);
    VTDecompressionOutputCallbackRecord callBackRecord;
    callBackRecord.decompressionOutputCallback = didDecompress;
    callBackRecord.decompressionOutputRefCon = (__bridge void *)self;
    OSStatus status1 = VTDecompressionSessionCreate(kCFAllocatorDefault, formatDescription,
                                                    NULL, NULL, &callBackRecord, &decompressionSession);
}
else
{
    // Subsequent calls: hand the sample buffer to the session; the decoded
    // pixel buffer is delivered to didDecompress.
    VTDecodeFrameFlags flags = kVTDecodeFrame_EnableAsynchronousDecompression;
    VTDecodeInfoFlags flagOut;
    VTDecompressionSessionDecodeFrame(decompressionSession, sampleBuffer, flags, NULL, &flagOut);
    VTDecompressionSessionWaitForAsynchronousFrames(decompressionSession);
}



Decompression callback:

static void didDecompress(void *decompressionOutputRefCon, void *sourceFrameRefCon, OSStatus status,
                          VTDecodeInfoFlags infoFlags, CVImageBufferRef imageBuffer,
                          CMTime presentationTimeStamp, CMTime presentationDuration)
{
    if (status == noErr)
    {
        NSLog(@"SUCCESS PROCEED FROM HERE !!!!");
    }
}
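One way to consume the decoded frame inside didDecompress (a sketch, not the only option): imageBuffer is a CVPixelBufferRef that can be handed to Core Image, OpenGL/Metal, or an AVSampleBufferDisplayLayer.

if (status == noErr && imageBuffer != NULL) {
    size_t width  = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    NSLog(@"Decoded frame %zux%zu at %f s", width, height,
          CMTimeGetSeconds(presentationTimeStamp));

    // Requires <CoreImage/CoreImage.h>; retain the buffer (CVPixelBufferRetain)
    // if you need it after the callback returns.
    CIImage *frame = [CIImage imageWithCVPixelBuffer:imageBuffer];
    (void)frame;
}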

// Keep in mind that you must provide the correct presentation time while encoding. Here are the encoding details.

//ENCODING-------------------ENCODING---------------ENCODING
if (!_compression_session)
{
    // First call: create the compression session for the capture dimensions.
    NSDictionary *pixelBufferOptions = @{
        (NSString *)kCVPixelBufferWidthKey : @(widthOFCaptureImage),
        (NSString *)kCVPixelBufferHeightKey : @(heightOFCaptureImage),
        (NSString *)kCVPixelBufferOpenGLESCompatibilityKey : @YES,
        (NSString *)kCVPixelBufferIOSurfacePropertiesKey : @{}
    };
    CFMutableDictionaryRef encoderSpecifications = NULL;
    OSStatus err = VTCompressionSessionCreate(kCFAllocatorDefault,
                                              widthOFCaptureImage,
                                              heightOFCaptureImage,
                                              kCMVideoCodecType_H264,
                                              encoderSpecifications,
                                              (__bridge CFDictionaryRef)pixelBufferOptions,
                                              NULL,
                                              compressionCallback,
                                              (__bridge void *)self,
                                              &_compression_session);
}
else
{
    // Subsequent calls: pass the captured pixel buffer and its presentation
    // timestamp to the encoder; the result arrives in compressionCallback.
    CMTime presentationTimeStamp = CMSampleBufferGetPresentationTimeStamp(sampleBufferIs);
    CVPixelBufferRef pixelbufferPassing = CMSampleBufferGetImageBuffer(sampleBufferIs);
    OSStatus status1 = VTCompressionSessionEncodeFrame(_compression_session, pixelbufferPassing,
                                                       presentationTimeStamp, kCMTimeInvalid,
                                                       NULL, NULL, NULL);
    VTCompressionSessionCompleteFrames(_compression_session, kCMTimeInvalid);
}

//ENCODING CALLBACK-----------------------------------------

static void compressionCallback(void *outputCallbackRefCon,
                                void *sourceFrameRefCon,
                                OSStatus status,
                                VTEncodeInfoFlags infoFlags,
                                CMSampleBufferRef sampleBuffer)
{
    // Receives each encoded frame as a CMSampleBufferRef.
}
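The callback above is left empty in the original. Here is a rough sketch of what you might do inside it, assuming you want the parameter sets and the encoded NAL data back out (this is an assumption, not part of the original answer):

if (status != noErr || sampleBuffer == NULL) return;

// SPS and PPS live in the sample buffer's format description, not in the data buffer.
CMFormatDescriptionRef fmt = CMSampleBufferGetFormatDescription(sampleBuffer);
const uint8_t *sps = NULL, *pps = NULL;
size_t spsSize = 0, ppsSize = 0, parameterSetCount = 0;
CMVideoFormatDescriptionGetH264ParameterSetAtIndex(fmt, 0, &sps, &spsSize, &parameterSetCount, NULL);
CMVideoFormatDescriptionGetH264ParameterSetAtIndex(fmt, 1, &pps, &ppsSize, &parameterSetCount, NULL);

// The data buffer holds the encoded frame as length-prefixed (AVCC) NAL units.
CMBlockBufferRef dataBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t totalLength = 0;
char *dataPointer = NULL;
CMBlockBufferGetDataPointer(dataBuffer, 0, NULL, &totalLength, &dataPointer);
// Walk dataPointer: each NAL unit starts with a 4-byte big-endian length field,
// which you would swap for a 00 00 00 01 start code if you need Annex B output.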

//Best wishes :) happy coding :)



Source: https://stackoverflow.com/questions/26012146/decoding-h264-in-ios-8-with-video-tool-box
