compute-shader

OpenGL Compute Shader Invocations

倖福魔咒の submitted on 2019-12-06 02:49:56
Question: I have a question about the new compute shaders. I am currently working on a particle system. I store all my particles in a shader storage buffer so that I can access them in the compute shader. Then I dispatch a one-dimensional work group. #define WORK_GROUP_SIZE 128 _shaderManager->useProgram("computeProg"); glDispatchCompute((_numParticles/WORK_GROUP_SIZE), 1, 1); glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT); My compute shader: #version 430 struct particle{ vec4 currentPos; vec4 oldPos; }; layout(std430, binding=0) buffer particles{ struct particle p[]; }; layout (local_size_x = 128, local_size_y

Trouble with imageStore() (OpenGL 4.3)

◇◆丶佛笑我妖孽 submitted on 2019-12-05 23:12:29
I'm trying to output some data from compute shader to a texture, but imageStore() seems to do nothing. Here's the shader: #version 430 layout(RGBA32F) uniform image2D image; layout (local_size_x = 1, local_size_y = 1) in; void main() { imageStore(image, ivec2(gl_GlobalInvocationID.xy), vec4(0.0f, 1.0f, 1.0f, 1.0f)); } and the application code is here: GLuint tex; glGenTextures(1, &tex); glBindTexture(GL_TEXTURE_2D, tex); glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, WIDTH, HEIGHT, 0, GL_RGBA, GL_FLOAT, 0); glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F); glUseProgram(program-

Compute Shader write to texture

断了今生、忘了曾经 submitted on 2019-12-05 06:11:18
I have implemented CPU code that copies a projected texture to a larger texture on a 3D object, 'decal baking' if you will, but now I need to implement it on the GPU. To do this I hope to use a compute shader, as it is quite difficult to add an FBO in my current setup. Example image from my current implementation. This question is more about how to use compute shaders, but for anyone interested, the idea is based on an answer I got from user jozxyqk, seen here: https://stackoverflow.com/a/27124029/2579996 The texture that is written to is called _texture in my code, whilst the one projected is

How can I feed compute shader results into vertex shader w/o using a vertex buffer?

霸气de小男生 submitted on 2019-12-05 02:43:28
Question: Before I go into details I want to outline the problem: I use RWStructuredBuffers to store the output of my compute shaders (CS). Since vertex and pixel shaders can’t read from RWStructuredBuffers, I map a StructuredBuffer onto the same slot (u0/t0) and (u4/t4): cbuffer cbWorld : register (b1) { float4x4 worldViewProj; int dummy; } struct VS_IN { float4 pos : POSITION; float4 col : COLOR; }; struct PS_IN { float4 pos : SV_POSITION; float4 col : COLOR; }; RWStructuredBuffer<float4> colorOutputTable : register (u0); // 2D color data StructuredBuffer<float4> output2 : register (t0); // same as u0

Rendering to multiple textures with one pass in DirectX 11

亡梦爱人 submitted on 2019-12-04 21:41:53
Question: I'm trying to render to two textures in one pass using the C++ DirectX 11 SDK. I want one texture to contain the color of each pixel of the resulting image (what I normally see on the screen when rendering a 3D scene), and the other texture to contain the normal and depth of each pixel (3 floats for the normal and 1 float for depth). Right now, the best I can think of is to create two render targets and render the colors in a first pass and the normals and depth in a second pass, one to each render target respectively. However, this seems like a waste of time because I can get the information of each pixel's

DirectX 11 compute shader for ray/mesh intersect

风格不统一 submitted on 2019-12-04 16:36:42
I recently converted a DirectX 9 application that was using D3DXIntersect to find ray/mesh intersections to DirectX 11. Since D3DXIntersect is not available in DX11, I wrote my own code to find the intersection, which just loops over all the triangles in the mesh and tests them, keeping track of the closest hit to the origin. This is done on the CPU side and works fine for picking via the GUI, but I have another part of the application that creates a new mesh from an existing one based on several different viewpoints, and I need to check line of sight for every triangle in the mesh many times.

Are there DirectX guidelines for binding and unbinding resources between draw calls?

蓝咒 submitted on 2019-12-03 16:19:23
All DirectX books and tutorials strongly recommend reducing state and resource-binding changes between draw calls to a minimum – yet I can’t find any guidelines that go into more detail. Reviewing a lot of sample code found on the web, I have concluded that programmers follow completely different coding principles on this subject. Some even set and unset VS/PS, VS/PS ResourceViews, RasterizerStage, DepthStencilState, PrimitiveTopology, ... before and after every draw call (although the setup remains unchanged), and others don’t. I guess that's a bit overdone... From my own experiments I have found that
