How to render to OpenGL from Vulkan?

Submitted by 柔情痞子 on 2019-12-10 16:22:36

Question


Is it possible to render to OpenGL from Vulkan?

It seems NVIDIA has something: https://lunarg.com/faqs/mix-opengl-vulkan-rendering/

Can it be done for other GPUs?


Answer 1:


NVIDIA has created an OpenGL extension, NV_draw_vulkan_image, which can render a VkImage in OpenGL. It even has some mechanisms for interacting with Vulkan semaphores and the like.
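A sketch of what using those entry points looks like, based on the NV_draw_vulkan_image extension spec. The handle values and rectangle coordinates here are purely illustrative, and a current GL context exposing the extension is assumed:

```cpp
// Sketch only: assumes a GL context with NV_draw_vulkan_image available,
// and already-created Vulkan objects. Values are illustrative.
GLuint64 vkImageHandle     = 0;  // a VkImage handle, cast to GLuint64
GLuint64 vkSemaphoreHandle = 0;  // a VkSemaphore handle

// GPU-side wait for Vulkan rendering to the image to finish
glWaitVkSemaphoreNV(vkSemaphoreHandle);

// Blit the VkImage to the current GL framebuffer; the second argument
// is a sampler object name (zero selects the extension's default sampling)
glDrawVkImageNV(vkImageHandle, 0,
                0.0f, 0.0f, 512.0f, 512.0f,  // destination rect (x0, y0, x1, y1)
                0.0f,                        // z
                0.0f, 1.0f, 1.0f, 0.0f);     // source texcoords (s0, t0, s1, t1)

// Let Vulkan know the GL work consuming the image has been issued
glSignalVkSemaphoreNV(vkSemaphoreHandle);
```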

However, according to the documentation, you must bypass all Vulkan layers, since layers can modify non-dispatchable handles and the OpenGL extension doesn't know about those modifications. NVIDIA's recommended way of doing so is to use glGetVkProcAddrNV to load all of your Vulkan functions.
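Loading through glGetVkProcAddrNV looks roughly like this (a sketch; the PFN_* typedefs come from the standard Vulkan headers):

```cpp
// Sketch: fetch Vulkan entry points through the GL driver so no layers
// sit between GL and Vulkan. PFN_* types come from <vulkan/vulkan.h>.
PFN_vkQueueSubmit pfnQueueSubmit =
    (PFN_vkQueueSubmit)glGetVkProcAddrNV("vkQueueSubmit");
PFN_vkCreateImage pfnCreateImage =
    (PFN_vkCreateImage)glGetVkProcAddrNV("vkCreateImage");
// ...and so on for every Vulkan function you call alongside GL,
// using these pointers instead of the loader's exported symbols.
```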

Which also means that you can't get access to any debugging that relies on Vulkan layers.




Answer 2:


Yes, it's possible if the Vulkan implementation and the OpenGL implementation both have the appropriate extensions available.

I wrote an example app which uses OpenGL to render a simple shadertoy to a texture, and then uses that texture in a Vulkan-rendered window.

Although your question suggests you want the reverse (render to something using Vulkan and then display the results using OpenGL), the same concepts apply: populate a texture in one API, use synchronization to ensure the GPU work is complete, and then use the texture in the other API. You can do the same thing with buffers; for instance, you could use Vulkan for compute operations and then use the results in an OpenGL render.

Requirements

Doing this requires that both the OpenGL and Vulkan implementations support the required extensions. According to this site, those extensions are widely supported across OS versions and GPU vendors, as long as you're working with a recent (> 1.0.51) version of Vulkan.

You need the External Objects extension for OpenGL and the External Memory/Fence/Semaphore extensions for Vulkan.
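It's worth verifying at startup that the extensions are actually present. A sketch of the GL-side check (the Vulkan side is the usual device-extension enumeration; a current GL 3+ context is assumed):

```cpp
#include <cstring>
// Sketch: check for the GL interop extensions before using them.
// GL side: GL_EXT_memory_object / GL_EXT_semaphore plus the platform
// variants (_win32 or _fd); Vulkan side: VK_KHR_external_memory etc.
bool hasGLExtension(const char* name) {
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char* ext = (const char*)glGetStringi(GL_EXTENSIONS, i);
        if (ext && std::strcmp(ext, name) == 0) return true;
    }
    return false;
}

// Usage
bool interopOk = hasGLExtension("GL_EXT_memory_object") &&
                 hasGLExtension("GL_EXT_semaphore");
```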

The Vulkan side of the extensions allows you to allocate memory and create semaphores or fences while marking the resulting objects as exportable. The corresponding GL extensions let you take those objects and manipulate them with new GL commands that wait on fences, signal and wait on semaphores, or use Vulkan-allocated memory to back an OpenGL texture. By using such a texture in an OpenGL framebuffer, you can render whatever you want to it and then use the rendered results in Vulkan.

Export / Import example code

For example, on the Vulkan side, when you're allocating memory for an image you can do this...

vk::Image image;
... // create the image as normal
vk::MemoryRequirements memReqs = device.getImageMemoryRequirements(image);
vk::MemoryAllocateInfo memAllocInfo;
vk::ExportMemoryAllocateInfo exportAllocInfo{
  vk::ExternalMemoryHandleTypeFlagBits::eOpaqueWin32 
};
memAllocInfo.pNext = &exportAllocInfo;
memAllocInfo.allocationSize = memReqs.size;
memAllocInfo.memoryTypeIndex = context.getMemoryType(
  memReqs.memoryTypeBits, vk::MemoryPropertyFlagBits::eDeviceLocal);
vk::DeviceMemory memory;
memory = device.allocateMemory(memAllocInfo);
device.bindImageMemory(image, memory, 0);
HANDLE sharedMemoryHandle = device.getMemoryWin32HandleKHR({
  memory, vk::ExternalMemoryHandleTypeFlagBits::eOpaqueWin32
});

This uses the C++ interface (vulkan.hpp) and the Win32 variant of the extensions. On POSIX platforms there are alternative entry points that return file descriptors instead of Win32 handles.
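The POSIX variant of the export/import pair looks roughly like this, assuming the allocation above used eOpaqueFd as its handle type instead of eOpaqueWin32:

```cpp
// Sketch of the POSIX path: export a file descriptor instead of a Win32
// handle. Requires VK_KHR_external_memory_fd on the Vulkan side and
// GL_EXT_memory_object_fd on the GL side.
int fd = device.getMemoryFdKHR({
    memory, vk::ExternalMemoryHandleTypeFlagBits::eOpaqueFd
});

// GL side: importing the fd transfers ownership of it to the GL driver,
// so don't close() it yourself afterwards
GLuint mem;
glCreateMemoryObjectsEXT(1, &mem);
glImportMemoryFdEXT(mem, sharedMemorySize,
                    GL_HANDLE_TYPE_OPAQUE_FD_EXT, fd);
```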

The sharedMemoryHandle is the value that you'll need to pass to OpenGL, along with the actual allocation size. On the GL side you can then do this...

// These values should be populated by the vulkan code
HANDLE sharedMemoryHandle;
GLuint64 sharedMemorySize;

// Create a 'memory object' in OpenGL, and associate it with the memory 
// allocated in vulkan
GLuint mem;
glCreateMemoryObjectsEXT(1, &mem);
glImportMemoryWin32HandleEXT(mem, sharedMemorySize,
  GL_HANDLE_TYPE_OPAQUE_WIN32_EXT, sharedMemoryHandle);

// Having created the memory object we can now create a texture and use
// the memory object for backing it
GLuint color;
glCreateTextures(GL_TEXTURE_2D, 1, &color);
// The internalFormat here should correspond to the format of
// the Vulkan image.  Similarly, the w & h values should correspond to
// the extent of the Vulkan image
glTextureStorageMem2DEXT(color, 1, GL_RGBA8, w, h, mem, 0);

Synchronization

The trickiest bit here is synchronization. The Vulkan specification requires images to be in certain states (layouts) before corresponding operations can be performed on them. So to do this properly (based on my understanding), you would need to:

  • In Vulkan, create a command buffer that transitions the image to ColorAttachmentOptimal layout
  • Submit the command buffer so that it signals a semaphore that has similarly been exported to OpenGL
  • In OpenGL, use the glWaitSemaphoreEXT function to cause the GL driver to wait for the transition to complete.
    • Note that this is a GPU-side wait, so the function will not block at all. It's similar to glWaitSync (as opposed to glClientWaitSync) in this regard.
  • Execute your GL commands that render to the framebuffer
  • Signal a different exported semaphore on the GL side with the glSignalSemaphoreEXT function
  • In Vulkan, execute another image layout transition from ColorAttachmentOptimal to ShaderReadOnlyOptimal
  • Submit the transition command buffer with the wait semaphore set to the one you just signaled from the GL side.
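The steps above can be sketched as follows for the semaphore half of the handshake (Win32 variant, to match the memory example; all names are illustrative):

```cpp
// Vulkan: create a semaphore marked as exportable
vk::ExportSemaphoreCreateInfo exportInfo{
    vk::ExternalSemaphoreHandleTypeFlagBits::eOpaqueWin32
};
vk::SemaphoreCreateInfo semInfo;
semInfo.pNext = &exportInfo;
vk::Semaphore vkReady = device.createSemaphore(semInfo);
HANDLE readyHandle = device.getSemaphoreWin32HandleKHR({
    vkReady, vk::ExternalSemaphoreHandleTypeFlagBits::eOpaqueWin32
});

// GL: import the semaphore and wait for the layout transition to finish.
GLuint glReady;
glGenSemaphoresEXT(1, &glReady);
glImportSemaphoreWin32HandleEXT(glReady,
    GL_HANDLE_TYPE_OPAQUE_WIN32_EXT, readyHandle);

// 'color' is the GL texture backed by the shared memory; srcLayout tells
// the GL driver what layout Vulkan left the image in.
GLenum srcLayout = GL_LAYOUT_COLOR_ATTACHMENT_EXT;
glWaitSemaphoreEXT(glReady, 0, nullptr, 1, &color, &srcLayout);

// ... issue the GL rendering commands here ...

// Then signal a second imported semaphore (glDone, created the same way)
// for Vulkan to wait on before its next layout transition:
GLenum dstLayout = GL_LAYOUT_COLOR_ATTACHMENT_EXT;
glSignalSemaphoreEXT(glDone, 0, nullptr, 1, &color, &dstLayout);
```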

That would be the optimal path. Alternatively, the quick and dirty method is to perform the Vulkan transition, then execute queue and device waitIdle commands to ensure the work is done, execute the GL commands, followed by glFlush and glFinish to ensure the GPU is done with that work, and then resume your Vulkan commands. This is a brute-force approach and will likely perform worse than doing the proper synchronization.
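The brute-force variant boils down to a couple of CPU-side stalls (a sketch; transitionSubmit and drawWithOpenGL are hypothetical names for your own submit info and GL rendering code):

```cpp
// Sketch: full CPU-side stalls instead of shared semaphores. Correct,
// but serializes the GPU between the two APIs.
queue.submit(transitionSubmit, vk::Fence{});  // the layout transition
queue.waitIdle();   // CPU blocks until the Vulkan work completes

drawWithOpenGL();   // hypothetical function issuing the GL commands
glFlush();
glFinish();         // CPU blocks until the GL work completes

// ...now it is safe to record and submit the next Vulkan commands...
```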




Answer 3:


There is some more information in this more recent slide deck from SIGGRAPH 2016. Slides 63-65 describe how to blit a Vulkan image to an OpenGL backbuffer. My guess is that this was fairly easy for NVIDIA to support, since on Linux the Vulkan driver is contained in the same libGL.so as the GL driver, so handing the Vulkan image handle to the GL side of the driver may not have been hard.

As another answer pointed out, there are still no official registered multi-vendor interop extensions. This approach just works on NVIDIA.



Source: https://stackoverflow.com/questions/38907764/how-to-render-to-opengl-from-vulkan
