Best path from AVPlayerItemVideoOutput to OpenGL texture


Been pulling my hair out trying to figure out the current best path from AVFoundation videos to an OpenGL texture. Most of what I find is related to iOS, and I can't seem to make it work on OS X.

3 Answers
  • 2021-02-06 06:14

    It appears that the corresponding method for fast video texturing on OS X is to use bi-planar IOSurfaces, where surfacePlane[0] is the luma (Y) and surfacePlane[1] is the subsampled chroma (UV). My code runs on a Core Profile context, so the GLenum constants passed to CGLTexImageIOSurface2D reflect that. Pretty sure only rectangle textures are supported. I use a GLSL shader to combine the planes, and it's working great on Sierra. Briefly summarized:

    NSDictionary* pixelBufferAttributesIOSurfaceBiPlanar = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithUnsignedInt:kPixelFormatTypeGL], ARC_BRIDGE(id)kCVPixelBufferPixelFormatTypeKey,
    [NSNumber numberWithBool:YES], ARC_BRIDGE(id)kCVPixelBufferOpenGLCompatibilityKey,
    [NSNumber numberWithBool:YES], ARC_BRIDGE(id)kCVPixelBufferIOSurfaceOpenGLTextureCompatibilityKey,
    [NSDictionary dictionary], ARC_BRIDGE(id)kCVPixelBufferIOSurfacePropertiesKey,
    nil];
    
    /* luma texture: plane 0, full resolution */
    CGLTexImageIOSurface2D(glContext, GL_TEXTURE_RECTANGLE, GL_R8,
        (GLsizei)IOSurfaceGetWidthOfPlane(surfaceRef,0), (GLsizei)IOSurfaceGetHeightOfPlane(surfaceRef,0),
        GL_RED, GL_UNSIGNED_BYTE, surfaceRef, 0);
    /* chroma texture: plane 1, subsampled to half width/height */
    CGLTexImageIOSurface2D(glContext, GL_TEXTURE_RECTANGLE, GL_RG8,
        (GLsizei)IOSurfaceGetWidthOfPlane(surfaceRef,1), (GLsizei)IOSurfaceGetHeightOfPlane(surfaceRef,1),
        GL_RG, GL_UNSIGNED_BYTE, surfaceRef, 1);
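
    Each plane gets bound to its own rectangle texture before the corresponding call. For context, a minimal sketch of how surfaceRef might be pulled per frame from an AVPlayerItemVideoOutput created with the attribute dictionary above (playerItem, videoOutput, lumaTexture and chromaTexture are all placeholder names):

    AVPlayerItemVideoOutput* videoOutput = [[AVPlayerItemVideoOutput alloc]
        initWithPixelBufferAttributes:pixelBufferAttributesIOSurfaceBiPlanar];
    [playerItem addOutput:videoOutput];
    // Per display refresh (e.g. from a CVDisplayLink callback);
    // CACurrentMediaTime() comes from QuartzCore.
    CMTime itemTime = [videoOutput itemTimeForHostTime:CACurrentMediaTime()];
    if ([videoOutput hasNewPixelBufferForItemTime:itemTime]) {
        CVPixelBufferRef pixelBuffer =
            [videoOutput copyPixelBufferForItemTime:itemTime itemTimeForDisplay:NULL];
        IOSurfaceRef surfaceRef = CVPixelBufferGetIOSurface(pixelBuffer);
        CFRetain(surfaceRef); // hold the surface while the textures reference it
        glBindTexture(GL_TEXTURE_RECTANGLE, lumaTexture);
        /* CGLTexImageIOSurface2D(... plane 0 ...) as above */
        glBindTexture(GL_TEXTURE_RECTANGLE, chromaTexture);
        /* CGLTexImageIOSurface2D(... plane 1 ...) as above */
        glBindTexture(GL_TEXTURE_RECTANGLE, 0);
        CFRelease(surfaceRef);
        CVPixelBufferRelease(pixelBuffer);
    }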
    

    GLSL (note that sampler2DRect takes unnormalized texel coordinates, which is why halving the luma coordinates addresses the half-size chroma plane):

    uniform sampler2DRect textureSampler0;
    uniform sampler2DRect textureSampler1;
    // ...
    vec3 YCrCb;
    vec2 lumaCoords = texCoord0;
    vec2 chromaCoords = lumaCoords*vec2(0.5,0.5);
    vec2 chroma = texture(textureSampler1,chromaCoords).xy;
    float luma = texture(textureSampler0,lumaCoords).x;
    YCrCb.x = (luma-(16.0/255.0)); // video range
    YCrCb.yz = (chroma-vec2(0.5,0.5));
    vec4 rgbA = vec4(colorConversionMatrix * YCrCb,1.0);
    

    The color conversion matrix should be generated from the CVPixelBufferRef's color attachment:

    CFTypeRef colorAttachments = CVBufferGetAttachment(pixelBuffer, kCVImageBufferYCbCrMatrixKey, NULL);
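
    A sketch of one way to build the matrix from that attachment, using the video-range coefficients from Apple's AVBasicVideoOutput sample (colorConversionMatrixUniform is a placeholder; note the shader's YCrCb variable above actually holds Y, Cb, Cr order, which is what these matrices expect):

    // Video-range YCbCr -> RGB, column-major for a GLSL mat3
    // (coefficients as in Apple's AVBasicVideoOutput sample).
    static const GLfloat kColorConversion601[9] = {
        1.164,  1.164, 1.164,
        0.0,   -0.392, 2.017,
        1.596, -0.813, 0.0,
    };
    static const GLfloat kColorConversion709[9] = {
        1.164,  1.164, 1.164,
        0.0,   -0.213, 2.112,
        1.793, -0.533, 0.0,
    };

    const GLfloat* matrix = kColorConversion709; // HD default
    if (colorAttachments != NULL &&
        CFStringCompare((CFStringRef)colorAttachments,
                        kCVImageBufferYCbCrMatrix_ITU_R_601_4, 0) == kCFCompareEqualTo) {
        matrix = kColorConversion601;
    }
    // colorConversionMatrixUniform: hypothetical location from glGetUniformLocation
    glUniformMatrix3fv(colorConversionMatrixUniform, 1, GL_FALSE, matrix);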

    IOSurfaceRef is a CFTypeRef-derived object, so I use CFRetain/CFRelease to hold onto the surface until I no longer need the texture. If CGLTexImageIOSurface2D does a GPU blit to upload the texture data, you probably only need the CFRetain/CFRelease around the call to CGLTexImageIOSurface2D itself.

    I started with the Apple iOS sample code AVBasicVideoOutput. I ported it to Metal for both iOS and OS X in two days, and then spent a week trying to figure out the OpenGL Core Profile version. If you can, use Metal: it's faster, and the code is almost exactly the same on iOS and OS X. There is also some good information in WWDC13 Session 507, What's New in OpenGL for OS X. In particular, there is a GLSL shader compatibility flag that allows EAGL shaders to run mostly unmodified on OS X.

  • 2021-02-06 06:15

    I'm starting on the same journey and know as much about OpenGL as I do about sheep farming, but I did notice that your pbOptions doesn't contain kCVPixelBufferOpenGLCompatibilityKey:

    NSDictionary *pbOptions = [NSDictionary dictionaryWithObjectsAndKeys:
        [NSNumber numberWithInt:kCVPixelFormatType_422YpCbCr8], kCVPixelBufferPixelFormatTypeKey,
        [NSDictionary dictionary], kCVPixelBufferIOSurfacePropertiesKey,
        [NSNumber numberWithBool:YES], kCVPixelBufferOpenGLCompatibilityKey, nil];
    

    I'm requesting the pixel buffer as kCVPixelFormatType_32BGRA rather than a component (YpCbCr) format, and this works for me with instance variables for _currentSurface (IOSurfaceRef), _textureName (GLuint), _sourceWidth (int) and _sourceHeight (int):

    IOSurfaceRef newSurface = CVPixelBufferGetIOSurface(pixelBuffer);
    if (_currentSurface != newSurface) {
        CGLContextObj cgl_ctx = (CGLContextObj)[[self openGLContext] CGLContextObj];
        [[self openGLContext] makeCurrentContext];

        // Keep the new surface alive for as long as the texture references it.
        if (_currentSurface) IOSurfaceDecrementUseCount(_currentSurface);
        _currentSurface = newSurface;
        IOSurfaceIncrementUseCount(_currentSurface);
        GLsizei texWidth  = (GLsizei)IOSurfaceGetWidth(_currentSurface);
        GLsizei texHeight = (GLsizei)IOSurfaceGetHeight(_currentSurface);

        if (_sourceWidth == 0 && _sourceHeight == 0) {
            // used during drawing of the texture
            _sourceWidth = texWidth;
            _sourceHeight = texHeight;
        }

        if (!_textureName) {
            GLuint name;
            glGenTextures(1, &name);
            _textureName = name;
        }

        glBindTexture(GL_TEXTURE_RECTANGLE_ARB, _textureName);
        CGLTexImageIOSurface2D(cgl_ctx, GL_TEXTURE_RECTANGLE_ARB, GL_RGBA,
                               texWidth, texHeight,
                               GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                               _currentSurface, 0);
        glBindTexture(GL_TEXTURE_RECTANGLE_ARB, 0);
    }
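
    Worth noting the trade-off versus the bi-planar approach in the other answer: with kCVPixelFormatType_32BGRA, AVFoundation converts YCbCr to RGB for you before handing over the buffer, so no conversion shader is needed, at the cost of an extra conversion pass you don't control.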
    
  • 2021-02-06 06:21

    When using option #2, did you set the "retains backing" property of the CAEAGLLayer to NO? That may be why it appears to be reusing the same frame across cycles. It would help to see how you configured the layer (or whether you configured it at all) for the view you're rendering framebuffers into...
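
    For reference, a minimal sketch of that configuration, assuming a UIView subclass whose +layerClass returns [CAEAGLLayer class]:

    #import <OpenGLES/EAGLDrawable.h>

    CAEAGLLayer* eaglLayer = (CAEAGLLayer*)self.layer;
    eaglLayer.opaque = YES;
    // With retained backing off, the drawable's contents are undefined after
    // presentation, so every frame must be redrawn in full and a stale frame
    // can't be silently re-presented.
    eaglLayer.drawableProperties = @{
        kEAGLDrawablePropertyRetainedBacking : @NO,
        kEAGLDrawablePropertyColorFormat     : kEAGLColorFormatRGBA8
    };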
