Question
I have an AVPlayerLayer (a subclass of CALayer) and I need to get it into an image type that can be passed to a QCRenderer (QCRenderer accepts NSImages and CIImages). I can convert the CALayer to a CGImageRef, and that to an NSImage, but the contents are always clear.
I've narrowed it down to one of two reasons:
1. I am not creating the NSImage correctly.
2. The AVPlayer is not rendering to the AVPlayerLayer.
I am not receiving any errors, and I have found some documentation on converting CALayers. I also added the AVPlayerLayer to an NSView, which remains empty, so I believe reason 2 is the problem.
I'm using a modified version of AVPlayerDemoPlaybackViewController from Apple's AVPlayerDemo sample. I turned it into an NSObject subclass, since I stripped all of the interface code out of it.
I create the AVPlayerLayer in the -prepareToPlayAsset:withKeys: method, at the point where I create the AVPlayer. (I'm only adding the layer to an NSView to test whether it is working.)
if (![self player])
{
    /* Get a new AVPlayer initialized to play the specified player item. */
    [self setPlayer:[AVPlayer playerWithPlayerItem:self.mPlayerItem]];

    /* Observe the AVPlayer "currentItem" property to find out when any
       AVPlayer replaceCurrentItemWithPlayerItem: replacement will/did occur. */
    [self.player addObserver:self
                  forKeyPath:kCurrentItemKey
                     options:NSKeyValueObservingOptionInitial | NSKeyValueObservingOptionNew
                     context:AVPlayerDemoPlaybackViewControllerCurrentItemObservationContext];

    mPlaybackView = [AVPlayerLayer playerLayerWithPlayer:self.player];
    [self.theView setWantsLayer:YES];
    [mPlaybackView setFrame:self.theView.layer.bounds];
    [self.theView.layer addSublayer:mPlaybackView];
}
I then schedule an NSTimer on the current run loop to grab a frame of the AVPlayerLayer 30 times per second:
framegrabTimer = [NSTimer timerWithTimeInterval:(1.0 / 30.0) // (1/30) is integer division and evaluates to 0
                                         target:self
                                       selector:@selector(grabFrameFromMovie)
                                       userInfo:nil
                                        repeats:YES];
[[NSRunLoop currentRunLoop] addTimer:framegrabTimer forMode:NSDefaultRunLoopMode];
Here is the code I use to grab the frame and pass it to the class that handles the QCRenderer:
-(void)grabFrameFromMovie {
    CGColorSpaceRef colorSpace = CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB);
    CGContextRef theContext = CGBitmapContextCreate(NULL,
                                                    mPlaybackView.frame.size.width,
                                                    mPlaybackView.frame.size.height,
                                                    8,
                                                    4 * mPlaybackView.frame.size.width,
                                                    colorSpace,
                                                    kCGImageAlphaPremultipliedLast);

    [mPlaybackView renderInContext:theContext];

    CGImageRef CGImage = CGBitmapContextCreateImage(theContext);
    NSImage *image = [[NSImage alloc] initWithCGImage:CGImage
                                                 size:NSMakeSize(mPlaybackView.frame.size.width, mPlaybackView.frame.size.height)];
    [[NSNotificationCenter defaultCenter] postNotificationName:@"AVPlayerLoadedNewFrame" object:[image copy]];

    CGContextRelease(theContext);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(CGImage);
}
I can't figure out why I'm only getting a clear image. Any help with this is greatly appreciated, as there is not much AVFoundation documentation for OS X.
Answer 1:
This works for me:
AVAssetImageGenerator *gen = [[AVAssetImageGenerator alloc] initWithAsset:[[[self player] currentItem] asset]];
CGImageRef capture = [gen copyCGImageAtTime:self.player.currentTime actualTime:NULL error:NULL];
NSImage *img = [[NSImage alloc] initWithCGImage:capture size:self.playerView.frame.size];
CGImageRelease(capture); // copyCGImageAtTime: returns a +1 CGImageRef, so release it once the NSImage exists
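For example, this approach could replace the renderInContext: call in the question's timer callback. This is only a sketch under the question's assumptions (self.player, mPlaybackView and the AVPlayerLoadedNewFrame notification come from the code above); note that creating a generator and decoding synchronously on every tick may be too slow for a sustained 30 fps.

-(void)grabFrameFromMovie {
    AVAsset *asset = self.player.currentItem.asset;
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];

    // Without zero tolerances the generator may return a nearby keyframe instead of the current frame.
    generator.requestedTimeToleranceBefore = kCMTimeZero;
    generator.requestedTimeToleranceAfter = kCMTimeZero;
    generator.appliesPreferredTrackTransform = YES;

    NSError *error = nil;
    CGImageRef frame = [generator copyCGImageAtTime:self.player.currentTime
                                         actualTime:NULL
                                              error:&error];
    if (frame == NULL) {
        NSLog(@"Frame grab failed: %@", error);
        return;
    }

    NSImage *image = [[NSImage alloc] initWithCGImage:frame
                                                 size:mPlaybackView.frame.size];
    CGImageRelease(frame); // release the +1 reference returned by copyCGImageAtTime:

    [[NSNotificationCenter defaultCenter] postNotificationName:@"AVPlayerLoadedNewFrame"
                                                        object:image];
}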
Answer 2:
You can add an AVPlayerItemVideoOutput to the AVPlayerItem and then call copyPixelBufferForItemTime: to get a CVPixelBufferRef containing the frame at the specified time. Here is some sample code:
NSDictionary *pixBuffAttributes = @{
    (id)kCVPixelBufferWidthKey                : @(nWidth),
    (id)kCVPixelBufferHeightKey               : @(nHeight),
    (id)kCVPixelBufferCGImageCompatibilityKey : @YES,
};
m_output = [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:pixBuffAttributes];
...
m_buffer = [m_output copyPixelBufferForItemTime:time itemTimeForDisplay:NULL];
CVPixelBufferLockBaseAddress(m_buffer, 0);

auto *buffer = CVPixelBufferGetBaseAddress(m_buffer);
frame->width      = CVPixelBufferGetWidth(m_buffer);
frame->height     = CVPixelBufferGetHeight(m_buffer);
frame->widthbytes = CVPixelBufferGetBytesPerRow(m_buffer);
frame->bufferlen  = frame->widthbytes * (uint32)CVPixelBufferGetHeight(m_buffer);

auto &videoInfo = m_info.video;
// The data provider wraps the pixel buffer's memory directly (no copy), so the buffer must
// stay locked and alive for as long as m_image is in use; unlock and release it afterwards.
CGDataProviderRef dp = CGDataProviderCreateWithData(nullptr, buffer, frame->bufferlen, nullptr);
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
m_image = CGImageCreate(frame->width,
                        frame->height,
                        8,                   // bits per component
                        videoInfo.pixelbits, // bits per pixel (e.g. 32 for a BGRA buffer)
                        frame->widthbytes,
                        cs,
                        kCGImageAlphaNoneSkipFirst,
                        dp,
                        nullptr,
                        true,
                        kCGRenderingIntentDefault);
CGColorSpaceRelease(cs);
CGDataProviderRelease(dp);
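The "..." above elides how the output is attached to the player item and how frames are pulled. A minimal sketch of that wiring, assuming a BGRA pixel format and reusing the question's player and 30 fps timer (the local `output` stands in for the answer's m_output), might look like this:

NSDictionary *pixBuffAttributes = @{
    (id)kCVPixelBufferPixelFormatTypeKey      : @(kCVPixelFormatType_32BGRA),
    (id)kCVPixelBufferCGImageCompatibilityKey : @YES,
};
AVPlayerItemVideoOutput *output =
    [[AVPlayerItemVideoOutput alloc] initWithPixelBufferAttributes:pixBuffAttributes];
[self.player.currentItem addOutput:output]; // attach before frames are needed

// Later, e.g. from the 30 fps timer callback:
CMTime itemTime = [output itemTimeForHostTime:CACurrentMediaTime()];
if ([output hasNewPixelBufferForItemTime:itemTime]) {
    CVPixelBufferRef pixelBuffer =
        [output copyPixelBufferForItemTime:itemTime itemTimeForDisplay:NULL];
    if (pixelBuffer != NULL) {
        // A CIImage can be handed straight to the QCRenderer mentioned in the question.
        CIImage *ciImage = [CIImage imageWithCVImageBuffer:pixelBuffer];
        CVPixelBufferRelease(pixelBuffer); // copyPixelBufferForItemTime: returns a +1 reference
        // ... pass ciImage on ...
    }
}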
You can also check out Apple's official sample:
Real-timeVideoProcessingUsingAVPlayerItemVideoOutput
Answer 3:
SWIFT 5.2 VERSION:
I think rozochkin's answer is correct and I found it really useful. I tested it myself and it works.
I just want to post an updated Swift 5.2 version, in case someone needs it.
func getCurrentFrame() -> CGImage? {
    guard let player = self.player, let avPlayerAsset = player.currentItem?.asset else { return nil }
    let assetImageGenerator = AVAssetImageGenerator(asset: avPlayerAsset)
    assetImageGenerator.requestedTimeToleranceAfter = .zero
    assetImageGenerator.requestedTimeToleranceBefore = .zero
    assetImageGenerator.appliesPreferredTrackTransform = true
    let imageRef = try! assetImageGenerator.copyCGImage(at: player.currentTime(), actualTime: nil)
    return imageRef
}
IMPORTANT NOTES:
requestedTimeToleranceAfter and requestedTimeToleranceBefore should be set to .zero because, according to the source code, "the actual time of the generated images [...] may differ from the requested time for efficiency".
appliesPreferredTrackTransform must be set to true (the default is false), otherwise you get a badly rotated frame. With this property set to true you get what you actually see in the player.
Source: https://stackoverflow.com/questions/9677936/avplayer-not-rendering-to-its-avplayerlayer