Question
I create an SKScene containing an SKVideoNode, then apply it as a material to a sphere geometry.
Here is the key code:
// Create a SKScene to play video
NSString* filePath = [[NSBundle mainBundle] pathForResource:@"2222" ofType:@"mp4"];
NSURL* sourceMovieURL = [NSURL fileURLWithPath:filePath];
AVPlayer* player = [AVPlayer playerWithURL:sourceMovieURL];
SKVideoNode* videoNode = [SKVideoNode videoNodeWithAVPlayer:player];
//CGSize size = CGSizeMake(512, 512);
CGSize size = [UIScreen mainScreen].bounds.size;
videoNode.size = size;
videoNode.position = CGPointMake(size.width/2.0, size.height/2.0);
SKScene* spriteScene = [SKScene sceneWithSize:size];
[spriteScene addChild:videoNode];
// create a material with SKScene
SCNMaterial* material = [SCNMaterial material];
material.doubleSided = true;
material.diffuse.contents = spriteScene;
[sphereNode.geometry replaceMaterialAtIndex:0 withMaterial:material];
[videoNode play];
[_scnScene.rootNode addChildNode:sphereNode];
// create SCNRenderer to render the scene
_renderer = [SCNRenderer rendererWithContext:cardboardView.context options:nil];
_renderer.scene = _scnScene;
_renderer.pointOfView = _scnCameraNode;
In the drawEye function:
- (void)cardboardView:(GVRCardboardView *)cardboardView drawEye:(GVREye)eye withHeadTransform:(GVRHeadTransform *)headTransform
{
    //CGRect viewport = [headTransform viewportForEye:eye];

    // Get the head matrix.
    const GLKMatrix4 head_from_start_matrix = [headTransform headPoseInStartSpace];

    // Get this eye's matrices.
    GLKMatrix4 projection_matrix = [headTransform projectionMatrixForEye:eye near:_scnCamera.zNear far:_scnCamera.zFar];
    GLKMatrix4 eye_from_head_matrix = [headTransform eyeFromHeadMatrix:eye];

    // Compute the model view projection matrix.
    GLKMatrix4 view_projection_matrix = GLKMatrix4Multiply(
        projection_matrix, GLKMatrix4Multiply(eye_from_head_matrix, head_from_start_matrix));

    // Set the projection matrix on the camera.
    [_scnCamera setProjectionTransform:SCNMatrix4FromGLKMatrix4(view_projection_matrix)];

    // Render the scene.
    [_renderer renderAtTime:0];
}
When I run the code, it breaks at
[_renderer renderAtTime:0]
and the output is:
Failed to create IOSurface image (texture)
Assertion failed: (result), function create_texture_from_IOSurface, file /BuildRoot/Library/Caches/com.apple.xbs/Sources/Jet/Jet-2.6.1/Jet/jet_context_OpenGL.mm, line 570.
When I remove the SKVideoNode from the SKScene, everything is OK.
Any help? Thanks.
Answer 1:
Update: this works OK for low-quality videos, but high-quality ones perform poorly because so many bytes are copied each frame. I haven't tried it yet, but my next approach is to use an OpenGL texture backed by a CVPixelBuffer for better performance.
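For what it's worth, here is a minimal sketch of that texture-cache approach (untested, so treat it as an assumption: `eaglContext` and `pixelBuffer` are placeholders for your GL context and a buffer from the AVPlayerItemVideoOutput shown later in this answer, and the pixel-format caveat in the comments is mine):

import CoreVideo
import OpenGLES

// One-time setup: a texture cache tied to your EAGLContext
// (eaglContext is assumed to exist, e.g. cardboardView.context).
var textureCache: CVOpenGLESTextureCache?
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, nil, eaglContext, nil, &textureCache)

// Per frame: wrap the CVPixelBuffer in a GL texture with no CPU copy.
// (pixelBuffer would come from currentPixelBuffer() below; note the
// texture-cache path usually wants kCVPixelFormatType_32BGRA buffers.)
var texture: CVOpenGLESTexture?
CVOpenGLESTextureCacheCreateTextureFromImage(
    kCFAllocatorDefault, textureCache!, pixelBuffer, nil,
    GLenum(GL_TEXTURE_2D), GL_RGBA,
    GLsizei(CVPixelBufferGetWidth(pixelBuffer)),
    GLsizei(CVPixelBufferGetHeight(pixelBuffer)),
    GLenum(GL_BGRA), GLenum(GL_UNSIGNED_BYTE), 0, &texture)

if let texture = texture {
    // Bind and draw with this texture, or hand its name to your renderer.
    glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture))
}

// Flush once the frame is done so the cache can recycle textures.
CVOpenGLESTextureCacheFlush(textureCache!, 0)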
-
I don't know if you're still looking for a solution, but I've had some luck getting a video sphere working with SceneKit / GVR on iOS 9.3 on a 6s and on the iOS 8 simulator. I can't vouch for all platforms, though!
I've completely dropped SpriteKit and instead use a CALayer whose contents I set to a CGImage created from a CVPixelBuffer.
The video code: (initially based on this OpenGL 360 video tutorial)
init(url: NSURL)
{
    self.videoURL = url
    super.init()
    self.configureVideoPlayback()
}

private override init()
{
    self.videoURL = nil
    super.init()
}

deinit
{
    self.playerItem.removeOutput(self.videoOutput)
}

private func configureVideoPlayback()
{
    let pixelBufferAttributes = [
        kCVPixelBufferPixelFormatTypeKey as String : NSNumber(unsignedInt: kCVPixelFormatType_32ARGB),
        kCVPixelBufferCGImageCompatibilityKey as String: true,
        kCVPixelBufferOpenGLESCompatibilityKey as String: true
    ]
    self.videoOutput = AVPlayerItemVideoOutput(pixelBufferAttributes: pixelBufferAttributes)

    self.playerItem = AVPlayerItem(URL: self.videoURL)
    self.playerItem.addOutput(self.videoOutput)
    NSNotificationCenter.defaultCenter().addObserver(self, selector: #selector(VideoReader.playerItemDidPlayToEndTime(_:)), name: AVPlayerItemDidPlayToEndTimeNotification, object: self.playerItem)

    self.player = AVPlayer(playerItem: self.playerItem)
    self.player.play()
}

func currentPixelBuffer() -> CVPixelBuffer? {
    guard self.playerItem?.status == .ReadyToPlay else {
        return nil
    }
    let currentTime = self.playerItem.currentTime()
    return self.videoOutput.copyPixelBufferForItemTime(currentTime, itemTimeForDisplay: nil)
}

func playerItemDidPlayToEndTime(notification: NSNotification)
{
    self.player.seekToTime(kCMTimeZero)
    self.player.play()
}
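The snippet above omits the class declaration and stored properties. A plausible reconstruction (a sketch: the names come from the code above, the class name VideoReader from the #selector, and the implicitly unwrapped types are my assumption):

import AVFoundation

class VideoReader: NSObject {
    private let videoURL: NSURL!
    private var playerItem: AVPlayerItem!
    private var videoOutput: AVPlayerItemVideoOutput!
    private var player: AVPlayer!

    // init(url:), deinit and the methods shown above live here.
}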
Scene:
func setupScene() {
    self.scene = SCNScene()

    self.imageLayer = CALayer()
    self.imageLayer.frame = CGRectMake(0, 0, 2048, 2048) // Doesn't work if not a power of 2 or increments in between - need to investigate

    let material = SCNMaterial()
    material.doubleSided = true
    material.diffuse.contents = self.imageLayer

    let geometry = SCNSphere(radius: 10)
    let sphere = SCNNode(geometry: geometry)
    sphere.geometry?.replaceMaterialAtIndex(0, withMaterial: material)
    sphere.position = SCNVector3(0, 0, 0)
    sphere.scale.y = 1
    sphere.scale.z = -1

    self.scene!.rootNode.addChildNode(sphere)
}
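An aside on the negative scale.z above: it turns the sphere inside out so the camera at its centre sees the texture. An alternative I'd expect to work (an untested assumption of mine, not from the original answer) is to leave the scale alone and cull the outside faces in setupScene:

// Render only the inside of the sphere instead of mirroring its geometry.
material.cullMode = .Front
material.doubleSided = false
// The texture may then appear horizontally mirrored; if so, compensate
// with material.diffuse.contentsTransform.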
Cardboard Draw Frame Prep:
func cardboardView(cardboardView: GVRCardboardView!, prepareDrawFrame headTransform: GVRHeadTransform!) {
    // .. boilerplate code

    if let pixelBuffer = self.videoReader.currentPixelBuffer() {
        CVPixelBufferLockBaseAddress(pixelBuffer, 0)
        let width = CVPixelBufferGetWidth(pixelBuffer)
        let height = CVPixelBufferGetHeight(pixelBuffer)
        let pixels = CVPixelBufferGetBaseAddress(pixelBuffer)
        let pixelWrapper = CGDataProviderCreateWithData(nil, pixels, CVPixelBufferGetDataSize(pixelBuffer), nil)

        // Get a color-space ref... can't this be done only once?
        let colorSpaceRef = CGColorSpaceCreateDeviceRGB()

        // Get a CGImage from the data (the CGImage is used in the drawLayer: delegate method above)
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.NoneSkipFirst.rawValue)
        if let currentCGImage = CGImageCreate(width,
                                              height,
                                              8,
                                              32,
                                              4 * width,
                                              colorSpaceRef, [.ByteOrder32Big, bitmapInfo],
                                              pixelWrapper,
                                              nil,
                                              false,
                                              .RenderingIntentDefault) {
            self.imageLayer.contents = currentCGImage
        }

        // Clean up
        CVPixelBufferUnlockBaseAddress(pixelBuffer, 0)
    }
}
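On the "can't this be done only once?" comment above: yes. CGColorSpace objects are immutable and safe to reuse, so the device-RGB space can be hoisted into a stored property, e.g.:

// Created once instead of on every call to prepareDrawFrame.
private let colorSpaceRef = CGColorSpaceCreateDeviceRGB()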
My initial implementation of GVR and SceneKit is based on this boilerplate.
The frame rate still seems high enough for me; it's not optimised yet, but I'll update this answer when I do.
Notes:
- The CALayer bounds seem to need to be a power of 2 (or certain increments in between); no video shows when I go for in-between values like 1960. I need to investigate further (see the helper sketch below).
- It runs really slowly in the Simulator, but at 60 fps on device for me.
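If that power-of-two constraint holds up, a tiny helper (my addition, not part of the original answer) rounds a layer dimension up to the next power of two:

// Round n up to the next power of two, e.g. nextPowerOfTwo(1960) == 2048.
func nextPowerOfTwo(n: Int) -> Int {
    var p = 1
    while p < n { p <<= 1 }
    return p
}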
Answer 2:
I think the reason is that the SDK sets the SCNView's rendering API to OpenGL ES. Setting the API to the default (Metal) will fix the problem.
Check whether there is some code like:
[[SCNView alloc] initWithFrame:CGRectZero options:@{SCNPreferredRenderingAPIKey: [NSNumber numberWithInt:SCNRenderingAPIOpenGLES2]}]
Change it to:
[[SCNView alloc] initWithFrame:CGRectZero options:nil] or [[SCNView alloc] initWithFrame:CGRectZero]
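To confirm which API the view actually ends up with, you can inspect its renderingAPI at runtime. A quick sanity check (my addition, written in Swift to match the first answer; note the Simulator of this era falls back to OpenGL ES):

import SceneKit

// Should print true on a Metal-capable device when no override is passed.
let scnView = SCNView(frame: CGRectZero, options: nil)
print(scnView.renderingAPI == .Metal)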
Source: https://stackoverflow.com/questions/38215970/failed-to-create-iosurface-image-texture-with-gvrsdk