Question
After a detailed review of WWDC 2014, Session 513, I tried to write an app on iOS 8.0 that decodes and displays one live H.264 stream. First of all, I construct an H.264 parameter set successfully. When I get one I-frame with a 4-byte start code, like "0x00 0x00 0x00 0x01 0x65 ...", I put it into a CMBlockBuffer. Then I construct a CMSampleBuffer from the previous CMBlockBuffer. After that, I enqueue the CMSampleBuffer into an AVSampleBufferDisplayLayer. Every call succeeds (I checked the returned status values), but the AVSampleBufferDisplayLayer never shows any video image. Since these APIs are fairly new, I couldn't find anybody who could resolve this problem.
The key code follows, and I'd really appreciate any help figuring out why the video image can't be displayed. Thanks a lot.
(1) AVSampleBufferDisplayLayer initialization. dspLayer is a property of my main view controller.
@property (nonatomic, strong) AVSampleBufferDisplayLayer *dspLayer;

if (!_dspLayer)
{
    _dspLayer = [[AVSampleBufferDisplayLayer alloc] init];
    [_dspLayer setFrame:CGRectMake(90, 551, 557, 389)];
    _dspLayer.videoGravity = AVLayerVideoGravityResizeAspect;
    _dspLayer.backgroundColor = [UIColor grayColor].CGColor;

    // Drive the layer from a timebase slaved to the host clock,
    // starting at zero with rate 1.0.
    CMTimebaseRef tmBase = NULL;
    CMTimebaseCreateWithMasterClock(NULL, CMClockGetHostTimeClock(), &tmBase);
    _dspLayer.controlTimebase = tmBase;
    CMTimebaseSetTime(_dspLayer.controlTimebase, kCMTimeZero);
    CMTimebaseSetRate(_dspLayer.controlTimebase, 1.0);
    CFRelease(tmBase); // the layer retains its control timebase

    [self.view.layer addSublayer:_dspLayer];
}
(2) In another thread, I get one H.264 I-frame.

// Construct the H.264 parameter set -- this succeeds.
CMVideoFormatDescriptionRef formatDesc = NULL;
OSStatus formatCreateResult =
    CMVideoFormatDescriptionCreateFromH264ParameterSets(NULL, ppsNum + 1, props, sizes, 4, &formatDesc);
NSLog(@"construct h264 param set: %d", (int)formatCreateResult);
// Construct the CMBlockBuffer. dataBuf points to the H.264 data,
// which starts with "0x00 0x00 0x00 0x01 0x65 ...".
CMBlockBufferRef blockBufferOut = NULL;
CMBlockBufferCreateEmpty(NULL, 0, kCMBlockBufferAlwaysCopyDataFlag, &blockBufferOut);
CMBlockBufferAppendMemoryBlock(blockBufferOut,
                               dataBuf,
                               dataLen,
                               NULL, // default allocator for the copied block
                               NULL, // no custom block source
                               0,    // offset into dataBuf
                               dataLen,
                               kCMBlockBufferAlwaysCopyDataFlag);
// Construct the CMSampleBuffer -- this succeeds.
size_t sampleSizeArray[1] = {0};
sampleSizeArray[0] = CMBlockBufferGetDataLength(blockBufferOut);
CMSampleTimingInfo tmInfos[1] = {
    // duration, presentationTimeStamp, decodeTimeStamp
    {CMTimeMake(5, 1), CMTimeMake(5, 1), CMTimeMake(5, 1)}
};
CMSampleBufferRef sampBuf = NULL;
formatCreateResult = CMSampleBufferCreate(kCFAllocatorDefault,
                                          blockBufferOut,
                                          YES,  // dataReady
                                          NULL, // makeDataReadyCallback
                                          NULL, // makeDataReadyRefcon
                                          formatDesc,
                                          1,    // numSamples
                                          1,    // numSampleTimingEntries
                                          tmInfos,
                                          1,    // numSampleSizeEntries
                                          sampleSizeArray,
                                          &sampBuf);
// Enqueue to the AVSampleBufferDisplayLayer -- just one frame,
// but I can't see any video frame in my view.
if ([self.dspLayer isReadyForMoreMediaData])
{
    [self.dspLayer enqueueSampleBuffer:sampBuf];
}
[self.dspLayer setNeedsDisplay];
Answer 1:
Your NAL unit start codes 0x00 0x00 0x01 or 0x00 0x00 0x00 0x01 need to be replaced by a length header.
The WWDC session you are referring to clearly states that the Annex B start code needs to be replaced by an AVCC-conformant length header. You are basically remuxing from Annex B stream format to the MP4 file format on the fly here (a simplified description, of course).
Your call when creating the parameter set passes "4" for this, so you need to prefix your VCL NAL units with a 4-byte length header. You have to specify it because in AVCC format the length header can also be shorter (1, 2, or 4 bytes).
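For illustration, here is a minimal sketch of that replacement, assuming dataBuf/dataLen from the question hold exactly one NAL unit behind a 4-byte start code; it overwrites the start code in place with the big-endian NAL unit length:

#include <CoreFoundation/CFByteOrder.h>
#include <string.h>

// dataBuf layout: 0x00 0x00 0x00 0x01 | NAL unit ...
uint32_t nalLength = (uint32_t)(dataLen - 4);               // length of the NAL unit after the start code
uint32_t bigEndianLength = CFSwapInt32HostToBig(nalLength); // AVCC length headers are big-endian
memcpy(dataBuf, &bigEndianLength, sizeof(bigEndianLength));
// dataBuf is now AVCC-formatted and can be appended to the CMBlockBuffer as before.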
Whatever you put inside the CMSampleBuffer will be accepted; there is no sanity check that the contents can be decoded, only that you met the required parameters for combining arbitrary data with timing information and a parameter set.
Basically, with the data you put in, you told the decoder that the first VCL NAL unit is 1 byte long (the start code 0x00 0x00 0x00 0x01 read as a 4-byte length). The decoder doesn't get the full NAL unit and bails out with an error.
Also make sure that when you create the parameter set, the SPS/PPS do not have a length header added and that their Annex B start codes are also stripped.
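For example, a minimal sketch of that call, assuming sps/spsSize and pps/ppsSize (placeholder names) point at the raw SPS and PPS payloads with start codes already stripped:

const uint8_t *parameterSets[2] = { sps, pps };           // raw payloads: no start codes, no length headers
const size_t parameterSetSizes[2] = { spsSize, ppsSize };
CMVideoFormatDescriptionRef formatDesc = NULL;
OSStatus status = CMVideoFormatDescriptionCreateFromH264ParameterSets(kCFAllocatorDefault,
                                                                      2, // one SPS + one PPS
                                                                      parameterSets,
                                                                      parameterSetSizes,
                                                                      4, // 4-byte AVCC length headers
                                                                      &formatDesc);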
I also recommend not using AVSampleBufferDisplayLayer but going through a VTDecompressionSession, so you can do things like color correction or other processing that is needed inside a pixel shader.
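For reference, a minimal sketch of setting up such a session against the formatDesc created in the question; didDecompress is a placeholder name for the output callback:

#import <VideoToolbox/VideoToolbox.h>

// Called once per decoded frame; status reports per-frame decode errors.
static void didDecompress(void *decompressionOutputRefCon,
                          void *sourceFrameRefCon,
                          OSStatus status,
                          VTDecodeInfoFlags infoFlags,
                          CVImageBufferRef imageBuffer,
                          CMTime presentationTimeStamp,
                          CMTime presentationDuration)
{
    if (status == noErr && imageBuffer != NULL) {
        // Process or render the decoded CVImageBufferRef here.
    }
}

VTDecompressionOutputCallbackRecord callbackRecord = { didDecompress, NULL };
VTDecompressionSessionRef session = NULL;
OSStatus createStatus = VTDecompressionSessionCreate(kCFAllocatorDefault,
                                                     formatDesc, // from the parameter-set call above
                                                     NULL,       // default decoder selection
                                                     NULL,       // default pixel buffer attributes
                                                     &callbackRecord,
                                                     &session);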
Answer 2:
It might be an idea to use VTDecompressionSessionDecodeFrame initially, as this will give you some feedback on the success of the decoding. If there is an issue with the decoding, the AVSampleBufferDisplayLayer doesn't tell you; it just doesn't display anything. I can give you some code to help with this if required. Let me know how you get on, as I am attempting the same thing :)
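As an illustration, a sketch of that call using the session from the previous answer and the sampBuf from the question; both the returned OSStatus and the status passed to the output callback report decode failures:

VTDecodeInfoFlags infoFlags = 0;
OSStatus decodeStatus = VTDecompressionSessionDecodeFrame(session,
                                                          sampBuf, // CMSampleBuffer from the question
                                                          0,       // default (synchronous) decode flags
                                                          NULL,    // sourceFrameRefCon
                                                          &infoFlags);
if (decodeStatus != noErr) {
    // e.g. kVTVideoDecoderBadDataErr (-12909) for malformed NAL units
    NSLog(@"decode failed: %d", (int)decodeStatus);
}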
Source: https://stackoverflow.com/questions/26080878/putting-an-h-264-i-frame-to-avsamplebufferdisplaylayer-but-no-video-image-is-dis