pixel-shader

Are there DirectX guidelines for binding and unbinding resources between draw calls?

Submitted by 蓝咒 on 2019-12-03 16:19:23
All DirectX books and tutorials strongly recommend reducing resource bindings between draw calls to a minimum – yet I can’t find any guidelines that go into more detail. Reviewing a lot of sample code found on the web, I have concluded that programmers follow completely different coding principles on this subject. Some even set and unset the VS/PS, ResourceViews, RasterizerState, DepthStencilState, PrimitiveTopology ... before and after every draw call (although the setup remains unchanged), and others don’t. I guess that's a bit overdone... From my own experiments I have found that

Rendering to multiple textures with one pass in DirectX 11

Submitted by 六月ゝ 毕业季﹏ on 2019-12-03 13:44:19
I'm trying to render to two textures with one pass using the C++ DirectX 11 SDK. I want one texture to contain the color of each pixel of the resulting image (what I normally see on the screen when rendering a 3D scene), and another texture to contain the normal and depth of each pixel (three floats for the normal and one float for depth). Right now, all I can think of is to create two render targets and render the colors in a first pass and the normals and depth in a second pass, one to each render target. However, this seems a waste of time because I can get the information of each pixel's
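
A second pass isn't needed: Direct3D 11 supports multiple render targets (MRT), where the C++ side binds both render-target views in a single OMSetRenderTargets call and the pixel shader writes to SV_Target0 and SV_Target1 in one pass. A minimal HLSL sketch of the shader side (struct, texture, and register names are illustrative, not taken from the question):

// Pixel shader writing to two render targets in one pass.
// The C++ side binds both views at once, e.g.
// context->OMSetRenderTargets(2, rtvs, depthStencilView);

struct PSInput
{
    float4 position : SV_POSITION;
    float3 normal   : NORMAL;      // world-space normal from the vertex shader
    float2 texcoord : TEXCOORD0;
};

struct PSOutput
{
    float4 color       : SV_Target0; // first bound render target
    float4 normalDepth : SV_Target1; // second bound render target
};

Texture2D diffuseTexture    : register(t0);
SamplerState linearSampler  : register(s0);

PSOutput main(PSInput input)
{
    PSOutput output;
    output.color = diffuseTexture.Sample(linearSampler, input.texcoord);
    // Pack the normal into xyz; SV_POSITION.z in a pixel shader already
    // holds the post-divide depth-buffer value, so store it in w.
    output.normalDepth = float4(normalize(input.normal), input.position.z);
    return output;
}

Both targets must have the same dimensions, and the second one would typically use a high-precision format such as DXGI_FORMAT_R16G16B16A16_FLOAT so the normal and depth survive the round trip.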

DirectX 11 Pixel Shader What Is SV_POSITION?

Submitted by 扶醉桌前 on 2019-12-01 20:53:52
I am learning HLSL for DirectX 11, and I was wondering what exactly the SV_POSITION is that is the output of a vertex shader and the input of a pixel shader. 1: Is this the x,y,z of every pixel on your screen, or of the object? 2: Why is it four 32-bit floats? 3: Do you need this system value for the vertex output? Thank you! The vertex shader stage has only one required output: the position of the vertex. This value is then used by the fixed-function rasterizer to compute which pixels are being drawn and to invoke the pixel shader for each one. That's what the system value semantic SV_Position
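
A minimal HLSL sketch of the two ends of that contract (the matrix and entry-point names are illustrative): the vertex shader writes a clip-space float4 — hence four 32-bit floats, with w carrying the perspective divide — and by the time the pixel shader reads the same semantic, the rasterizer has converted it to screen-space values.

cbuffer PerObject : register(b0)
{
    float4x4 worldViewProj; // illustrative name
};

// Vertex shader: SV_POSITION must carry the clip-space position,
// a float4 (x, y, z, w) before the perspective divide.
float4 VSMain(float3 localPos : POSITION) : SV_POSITION
{
    return mul(float4(localPos, 1.0f), worldViewProj);
}

// Pixel shader: the same semantic now holds screen-space values --
// x,y are pixel coordinates, z is the depth-buffer value, w is 1/clip.w.
float4 PSMain(float4 screenPos : SV_POSITION) : SV_Target
{
    // Example: visualize depth as a grayscale value.
    return float4(screenPos.zzz, 1.0f);
}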

What kind of blurs can be implemented in pixel shaders?

Submitted by 巧了我就是萌 on 2019-11-30 08:34:38
Question: Gaussian, box, radial, directional, motion blur, zoom blur, etc. I read that Gaussian blur can be broken down into passes that could be implemented in pixel shaders, but couldn't find any samples. Is it right to assume that any effect that concerns itself with pixels other than itself can't be implemented in pixel shaders? Answer 1: You can implement everything, as long as you are able to pass information to the shader. The trick, in these cases, is to perform multiple-pass rendering. The final
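
For the Gaussian case specifically, the standard decomposition is a horizontal pass followed by a vertical pass, each sampling only a handful of neighbors. A minimal HLSL sketch of the horizontal pass, assuming a full-screen pass that provides a TEXCOORD0 (the weights, names, and texelSize constant are illustrative):

Texture2D sourceTexture    : register(t0);
SamplerState linearSampler : register(s0);

cbuffer BlurParams : register(b0)
{
    float2 texelSize; // (1/width, 1/height) of the source texture
};

// 5-tap horizontal Gaussian pass; weights sum to 1. Run a second pass
// with offsets of (0, texelSize.y * i) for the vertical direction.
static const float weights[3] = { 0.375f, 0.25f, 0.0625f };

float4 BlurHorizontalPS(float4 pos : SV_POSITION,
                        float2 uv  : TEXCOORD0) : SV_Target
{
    float4 color = sourceTexture.Sample(linearSampler, uv) * weights[0];
    for (int i = 1; i <= 2; ++i)
    {
        float2 offset = float2(texelSize.x * i, 0.0f);
        color += sourceTexture.Sample(linearSampler, uv + offset) * weights[i];
        color += sourceTexture.Sample(linearSampler, uv - offset) * weights[i];
    }
    return color;
}

Two 1-D passes of n taps each replace one 2-D pass of n×n taps, which is why the separable form is the one worth implementing.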

What is the relationship between gl_Color and gl_FrontColor in both vertex and fragment shaders

Submitted by 扶醉桌前 on 2019-11-30 06:45:40
I have pass-through vertex and fragment shaders.

vertex shader:

void main(void) {
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

fragment shader:

void main(void) {
    gl_FragColor = gl_Color;
}

Those produce an empty rendering (black, not the background color the way glClearBuffer does). If I modify the vertex shader to set gl_FrontColor to gl_Color, it renders the untouched OpenGL buffer ... which is the expected behavior of pass-through shaders.

void main(void) {
    gl_FrontColor = gl_Color; // Added line
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}

Efficient pixel shader sum of all pixels

Submitted by £可爱£侵袭症+ on 2019-11-30 04:17:42
Question: How can I efficiently calculate the sum of all pixels in an image using an HLSL pixel shader? I'm interested in Pixel Shader 2.0, which I could invoke as a WPF shader effect. Answer 1: There is a much simpler solution that doesn't use shaders: load the image as a texture, create a mipmap chain, and read back the value of the last mipmap level (1x1 pixel). This trick is used extensively in games to calculate, for example, the average brightness of a scene (in order to apply HDR tone mapping). It's a
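
The same reduction can also be written as a pixel shader when a mipmap chain isn't available: repeatedly render into a half-sized target, averaging 2x2 blocks, until a 1x1 target remains; the sum is then that final average multiplied by the original pixel count. A minimal sketch of one downsampling step in ps_2_0-style HLSL, as WPF shader effects use (register assignments and names are illustrative):

sampler2D sourceSampler : register(s0);
float2 sourceTexelSize  : register(c0); // (1/sourceWidth, 1/sourceHeight)

// Averages a 2x2 block of the source into one output pixel. Point
// sampling is assumed so the four taps land on distinct texels.
// Repeating this into half-sized targets ends at 1x1, whose value
// times the original width*height is the total sum.
float4 DownsamplePS(float2 uv : TEXCOORD0) : COLOR
{
    float4 sum = tex2D(sourceSampler, uv);
    sum += tex2D(sourceSampler, uv + float2(sourceTexelSize.x, 0));
    sum += tex2D(sourceSampler, uv + float2(0, sourceTexelSize.y));
    sum += tex2D(sourceSampler, uv + sourceTexelSize);
    return sum * 0.25;
}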

Pixel Shader Effect Examples

Submitted by 不羁岁月 on 2019-11-30 03:57:17
I've seen a number of pixel-shader effect examples, stuff like a swirl on an image. But I'm wondering if anyone knows of any examples or tutorials for more practical uses of shader effects? I'm not saying that a swirl effect doesn't have its uses; it's just that many of the examples I've found explain the basic effect and don't go into how it might be used subtly with another effect or transition to produce a wonderful result. There's a video here that outlines the whole WPF Effects Library, but I'm not sure how I would use some of them in a practical context. For example, when Flash 8

WebGL/GLSL - How does a ShaderToy work?

Submitted by 江枫思渺然 on 2019-11-29 19:40:35
I've been knocking around Shadertoy - https://www.shadertoy.com/ - recently, in an effort to learn more about OpenGL, and GLSL in particular. From what I understand so far, the OpenGL user first has to prepare all the geometry to be used and configure the OpenGL server (number of lights allowed, texture storage, etc.). Once that's done, the user then has to provide at least one vertex shader program and one fragment shader program before an OpenGL program compiles. However, when I look at the code samples on Shadertoy, I only ever see one shader program, and most of the geometry used appears to
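
Shadertoy supplies the geometry and the vertex shader for you: it renders a single full-screen quad, and the code you write is only the fragment shader, with inputs such as iResolution and iTime provided as uniforms to your mainImage function. Any apparent "geometry" is computed per pixel inside that shader (e.g., by ray marching). Since the other examples on this page are Direct3D, here is the same model sketched as an HLSL full-screen pixel shader (the constant-buffer names are illustrative):

cbuffer FrameParams : register(b0)
{
    float2 resolution; // viewport size in pixels, like Shadertoy's iResolution
    float  time;       // seconds elapsed, like Shadertoy's iTime
};

// Full-screen pass: the "scene" is computed purely from the pixel
// coordinate, which is why no scene geometry ever appears in the code.
float4 FullscreenPS(float4 screenPos : SV_POSITION) : SV_Target
{
    float2 uv = screenPos.xy / resolution;               // normalize to [0, 1]
    float3 color = float3(uv, 0.5f + 0.5f * sin(time));  // simple animated gradient
    return float4(color, 1.0f);
}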