Question
I'm using SlimDX, targeting DirectX 11 with shader model 4. I have a pixel shader, "preProc", which processes my geometry and writes out three textures of data: one for per-pixel normals, one for per-pixel positions, and one for colour and depth (colour occupies the RGB channels and depth the alpha channel).
I then use these textures in a post-processing shader to implement Screen Space Ambient Occlusion (SSAO); however, it seems none of the data is getting saved by the first shader.
Here's my pixel shader:
PS_OUT PS( PS_IN input )
{
    PS_OUT output;
    output.col = float4(0, 0, 0, 0);
    output.norm = float4(input.norm, 1);
    output.pos = input.pos;
    return output;
}
which outputs the following struct:
struct PS_OUT
{
    float4 col : SV_TARGET0;
    float4 norm : SV_TARGET1;
    float4 pos : SV_TARGET2;
};
and takes the following struct for input:
struct PS_IN
{
    float4 pos : SV_POSITION;
    float2 tex : TEXCOORD0;
    float3 norm : TEXCOORD1;
};
However, in my post-processing shader:
Texture2D renderTex : register(t1);
Texture2D normalTex : register(t2);
Texture2D positionTex : register(t3);
Texture2D randomTex : register(t4);
SamplerState samLinear : register(s0);
float4 PS(PS_IN input) : SV_Target
{
    return float4(getCol(input.tex));
}
It simply outputs a light-blue screen (the colour I clear my render targets to at the start of each frame). getCol has been tested to work and returns a colour from the renderTex texture when only a single render target is involved. If I change the pixel shader to instead sample the randomTex texture (which my code loads from a file and which is not a render target), everything renders fine, so I am confident the problem is not in my post-processing shader.
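(randomTex itself is just an ordinary texture loaded from disk, roughly along these lines; the file name and loading call here are placeholders rather than my actual code:)
//Hypothetical sketch of the randomTex setup: a texture loaded from a file and
//never bound as a render target ("random.png" is a placeholder name).
Texture2D randomTexture = Texture2D.FromFile(device, "random.png");
ShaderResourceView random = new ShaderResourceView(device, randomTexture);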
In case it's my SlimDX code that's failing, here's what I do:
Creating my textures, shader resource views and render target views:
Texture2DDescription textureDescription = new Texture2DDescription()
{
    Width = texWidth,
    Height = texHeight,
    MipLevels = 1,
    ArraySize = 3,
    Format = SlimDX.DXGI.Format.R32G32B32A32_Float,
    SampleDescription = new SlimDX.DXGI.SampleDescription(1, 0),
    BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None,
    Usage = ResourceUsage.Default,
};
texture = new Texture2D(device, textureDescription);
renderTargetView = new RenderTargetView[3];
shaderResourceView = new ShaderResourceView[3];
for (int i = 0; i < 3; i++)
{
    RenderTargetViewDescription renderTargetViewDescription = new RenderTargetViewDescription()
    {
        Format = textureDescription.Format,
        Dimension = RenderTargetViewDimension.Texture2D,
        MipSlice = 0,
    };
    renderTargetView[i] = new RenderTargetView(device, texture, renderTargetViewDescription);
    ShaderResourceViewDescription shaderResourceViewDescription = new ShaderResourceViewDescription()
    {
        Format = textureDescription.Format,
        Dimension = ShaderResourceViewDimension.Texture2D,
        MostDetailedMip = 0,
        MipLevels = 1
    };
    shaderResourceView[i] = new ShaderResourceView(device, texture, shaderResourceViewDescription);
}
Rendering to my multiple render targets:
private void renderToTexture(Shader shader)
{
    //set the vertex and pixel shaders
    context.VertexShader.Set(shader.VertexShader);
    context.PixelShader.Set(shader.PixelShader);
    //send texture data and a linear sampler to the shader
    context.PixelShader.SetShaderResource(texture, 0);
    context.PixelShader.SetSampler(samplerState, 0);
    //set the input assembler
    SetInputAssembler(shader);
    //reset the camera's constant buffer
    camera.ResetConstantBuffer();
    //set the render targets to the textures we will render to
    context.OutputMerger.SetTargets(depthStencilView, renderTargetViews);
    //clear the render targets and depth stencil
    foreach (RenderTargetView view in renderTargetViews)
    {
        context.ClearRenderTargetView(view, color);
    }
    context.ClearDepthStencilView(depthStencilView, DepthStencilClearFlags.Depth, 1.0f, 0);
    //draw the scene
    DrawScene();
}
and then the function that renders my post-processing shader to the screen:
private void renderTexture(Shader shader)
{
    //get a single quad to be the screen we render
    Mesh mesh = CreateScreenFace();
    //set vertex and pixel shaders
    context.VertexShader.Set(shader.VertexShader);
    context.PixelShader.Set(shader.PixelShader);
    //set the input assembler
    SetInputAssembler(shader);
    //point the render target to the screen
    context.OutputMerger.SetTargets(depthStencil, renderTarget);
    //send the rendered textures and a linear sampler to the shader
    context.PixelShader.SetShaderResource(renderTargetViews[0], 1);
    context.PixelShader.SetShaderResource(renderTargetViews[1], 2);
    context.PixelShader.SetShaderResource(renderTargetViews[2], 3);
    context.PixelShader.SetShaderResource(random, 4);
    context.PixelShader.SetSampler(samplerState, 0);
    //clear the render targets and depth stencils
    context.ClearRenderTargetView(renderTarget, new Color4(0.52734375f, 0.8046875f, 0.9765625f));
    context.ClearDepthStencilView(depthStencil, DepthStencilClearFlags.Depth, 1, 0);
    //set the vertex and index buffers from the quad
    context.InputAssembler.SetVertexBuffers(0, new VertexBufferBinding(mesh.VertexBuffer, Marshal.SizeOf(typeof(Vertex)), 0));
    context.InputAssembler.SetIndexBuffer(mesh.IndexBuffer, Format.R16_UInt, 0);
    //draw the quad
    context.DrawIndexed(mesh.indices, 0, 0);
    //dispose of the buffers
    mesh.VertexBuffer.Dispose();
    mesh.IndexBuffer.Dispose();
}
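Putting it together, each frame roughly runs the two passes and then presents (reconstructed from the PIX capture in the edit below; the preProcShader and postProcShader names are placeholders, not my actual fields):
//Rough per-frame flow, reconstructed from the PIX capture in the edit below.
renderToTexture(preProcShader);  //fill the three render-target textures
renderTexture(postProcShader);   //draw the full-screen quad with the SSAO shader
swapChain.Present(0, PresentFlags.None);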
EDIT: I've added the PIX function call output for a single frame of the current run:
Frame 40
//setup
<0x06BDA1D8> ID3D11DeviceContext::ClearRenderTargetView(0x06B66190, 0x0028F068)
<0x06BDA1D8> ID3D11DeviceContext::ClearDepthStencilView(0x06B66138, 1, 1.000f, 0)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0028F010, 0x0028EFF8, 0x0028F00C --> 0x06BF8EE0)
CreateObject(D3D11 Buffer, 0x06BF8EE0)
<0x06BDA1D8> ID3D11DeviceContext::PSSetConstantBuffers(0, 1, 0x0028F084 --> { 0x06BF8EE0 })
<0x0059FF78> ID3D11Device::CreateBuffer(0x0F8DEB58, 0x0F8DEB40, 0x0F8DEB54 --> 0x06BF8F68)
CreateObject(D3D11 Buffer, 0x06BF8F68)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0F70EAD8, 0x0F70EAC0, 0x0F70EAD4 --> 0x06BF8FF0)
CreateObject(D3D11 Buffer, 0x06BF8FF0)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0FAAE9A8, 0x0FAAE990, 0x0FAAE9A4 --> 0x06BF9078)
CreateObject(D3D11 Buffer, 0x06BF9078)
<0x0059FF78> ID3D11Device::GetImmediateContext(0x06BDA1D8 --> 0x5BA8A8D8)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0F8DEB58, 0x0F8DEB40, 0x0F8DEB54 --> 0x06BF9100)
CreateObject(D3D11 Buffer, 0x06BF9100)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0F70EAD8, 0x0F70EAC0, 0x0F70EAD4 --> 0x06BF9188)
CreateObject(D3D11 Buffer, 0x06BF9188)
<0x06BDA1D8> ID3D11DeviceContext::Release()
<0x06BDA1D8> ID3D11DeviceContext::UpdateSubresource(0x06B59270, 0, NULL, 0x06287FA0, 0, 0)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0FAAE9A8, 0x0FAAE990, 0x0FAAE9A4 --> 0x06BF9210)
CreateObject(D3D11 Buffer, 0x06BF9210)
<0x06BDA1D8> ID3D11DeviceContext::VSSetShader(0x06B66298, NULL, 0)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0FC0E978, 0x0FC0E960, 0x0FC0E974 --> 0x06BF9298)
CreateObject(D3D11 Buffer, 0x06BF9298)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0FE8EDE8, 0x0FE8EDD0, 0x0FE8EDE4 --> 0x06BF9320)
CreateObject(D3D11 Buffer, 0x06BF9320)
<0x06BDA1D8> ID3D11DeviceContext::PSSetShader(0x06B666F8, NULL, 0)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0FC0E978, 0x0FC0E960, 0x0FC0E974 --> 0x06BF93A8)
CreateObject(D3D11 Buffer, 0x06BF93A8)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0FE8EDE8, 0x0FE8EDD0, 0x0FE8EDE4 --> 0x06BF9430)
CreateObject(D3D11 Buffer, 0x06BF9430)
<0x0059FF78> ID3D11Device::CreateInputLayout(0x0028EBE0, 3, 0x06286CB8, 152, 0x0028EBD8 --> 0x06BF9D68)
CreateObject(D3D11 Input Layout, 0x06BF9D68)
<0x06BDA1D8> ID3D11DeviceContext::IASetInputLayout(0x06BF9D68)
<0x06BDA1D8> ID3D11DeviceContext::IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST)
<0x0059FF78> ID3D11Device::GetImmediateContext(0x06BDA1D8 --> 0x5BA8A8D8)
<0x06BDA1D8> ID3D11DeviceContext::Release()
<0x06BDA1D8> ID3D11DeviceContext::VSSetConstantBuffers(0, 1, 0x0028F024 --> { 0x06B59270 })
<0x06BDA1D8> ID3D11DeviceContext::OMSetRenderTargets(3, 0x0028F004 --> { 0x06B65708, 0x06B657B8, 0x06B582E0 }, 0x06B66138)
<0x06BDA1D8> ID3D11DeviceContext::ClearRenderTargetView(0x06B65708, 0x0028EFEC)
<0x06BDA1D8> ID3D11DeviceContext::ClearRenderTargetView(0x06B657B8, 0x0028EFEC)
<0x06BDA1D8> ID3D11DeviceContext::ClearRenderTargetView(0x06B582E0, 0x0028EFEC)
<0x06BDA1D8> ID3D11DeviceContext::ClearDepthStencilView(0x06B66138, 1, 1.000f, 0)
//draw scene for preproc shader (this should output the three render targets)
//DRAW CALLS HIDDEN
<0x0059FF78> ID3D11Device::CreateBuffer(0x0028EE04, 0x0028EDEC, 0x0028EE00 --> 0x06BF94B8)
CreateObject(D3D11 Buffer, 0x06BF94B8)
<0x0059FF78> ID3D11Device::CreateBuffer(0x0028EE04, 0x0028EDEC, 0x0028EE00 --> 0x06BF9540)
CreateObject(D3D11 Buffer, 0x06BF9540)
<0x06BDA1D8> ID3D11DeviceContext::VSSetShader(0x06B66BB8, NULL, 0)
<0x06BDA1D8> ID3D11DeviceContext::PSSetShader(0x06B66E50, NULL, 0)
<0x0059FF78> ID3D11Device::CreateInputLayout(0x0028EB64, 3, 0x05E988E0, 120, 0x0028EB5C --> 0x06BF9E28)
CreateObject(D3D11 Input Layout, 0x06BF9E28)
<0x06BDA1D8> ID3D11DeviceContext::IASetInputLayout(0x06BF9E28)
<0x06BDA1D8> ID3D11DeviceContext::IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST)
<0x06BDA1D8> ID3D11DeviceContext::OMSetRenderTargets(1, 0x0028EFC0 --> { 0x06B66190 }, 0x06B66138)
<0x06BDA1D8> ID3D11DeviceContext::PSSetShaderResources(1, 3, 0x0028EF3C --> { 0x06B65760, 0x06B58288, 0x06B58338 })
<0x06BDA1D8> ID3D11DeviceContext::PSSetShaderResources(4, 1, 0x0028EFC0 --> { 0x06B66FA0 })
<0x06BDA1D8> ID3D11DeviceContext::ClearRenderTargetView(0x06B66190, 0x0028EFA4)
<0x06BDA1D8> ID3D11DeviceContext::ClearDepthStencilView(0x06B66138, 1, 1.000f, 0)
<0x06BDA1D8> ID3D11DeviceContext::IASetVertexBuffers(0, 1, 0x0028EFAC --> { 0x06BF94B8 }, 0x0028EFB0, 0x0028EFB4)
<0x06BDA1D8> ID3D11DeviceContext::IASetIndexBuffer(0x06BF9540, DXGI_FORMAT_R16_UINT, 0)
//draw quad for post proc shader. This shader takes the three textures in, as well as a random texture, which is added in the second PSSetShaderResources call. The random texture outputs fine.
<0x06BDA1D8> ID3D11DeviceContext::DrawIndexed(6, 0, 0)
<0x06BF94B8> ID3D11Buffer::Release()
<0x06BF9540> ID3D11Buffer::Release()
<0x06B65B00> IDXGISwapChain::Present(0, 0)
EDIT2: I've been doing some reading, and perhaps I need to unbind the textures as render targets after the preProc pass before I bind them as ShaderResourceViews for my postProcess shader. I assumed calling context.OutputMerger.SetTargets() would unbind all of the currently assigned render targets and then bind only the render targets specified in the function's parameters. If that isn't the case (I can't yet be sure either way), how would I go about unbinding render targets in SlimDX?
EDIT3: Ah, according to this MSDN page, calling OutputMerger.SetTargets() "overrides all bounded render targets and the depth stencil target regardless of the number of render targets in ppRenderTargetViews", so all of my render targets are being unbound automatically when I tell the OutputMerger to render to the screen. This leaves me back at square one.
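For what it's worth, the reverse direction can also be handled explicitly: before the textures are bound as render targets again at the start of the next preProc pass, the pixel-shader resource slots they occupied can be cleared. A rough sketch of how that might look in SlimDX (I'm assuming the wrapper accepts a null view, which I haven't verified):
//Sketch only: clear the SRV slots used by the post-processing shader so the
//same textures can be re-bound as render targets without the runtime having
//to force-unbind them.
context.PixelShader.SetShaderResource(null, 1); //renderTex
context.PixelShader.SetShaderResource(null, 2); //normalTex
context.PixelShader.SetShaderResource(null, 3); //positionTex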
Answer 1:
Fixed it by discovering just how silly I am.
When I created my render-target texture I set ArraySize = 3, which creates a single Texture2DArray, but I was treating it as if it were three separate Texture2D objects rather than one object with three slices. I have since altered my code to use an array of three Texture2D objects and it works very well.
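For reference, here's a minimal sketch of that setup, following the variable names from the question (treat it as an outline rather than my exact code):
//Sketch: three separate single-slice textures instead of one Texture2DArray.
Texture2DDescription textureDescription = new Texture2DDescription()
{
    Width = texWidth,
    Height = texHeight,
    MipLevels = 1,
    ArraySize = 1, //one slice per texture; previously this was 3 on a single texture
    Format = SlimDX.DXGI.Format.R32G32B32A32_Float,
    SampleDescription = new SlimDX.DXGI.SampleDescription(1, 0),
    BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
    CpuAccessFlags = CpuAccessFlags.None,
    OptionFlags = ResourceOptionFlags.None,
    Usage = ResourceUsage.Default,
};

Texture2D[] textures = new Texture2D[3];
renderTargetView = new RenderTargetView[3];
shaderResourceView = new ShaderResourceView[3];
for (int i = 0; i < 3; i++)
{
    textures[i] = new Texture2D(device, textureDescription);
    //with single-slice textures the default views are sufficient
    renderTargetView[i] = new RenderTargetView(device, textures[i]);
    shaderResourceView[i] = new ShaderResourceView(device, textures[i]);
}
An alternative would have been to keep the single Texture2DArray and create one view per slice instead (RenderTargetViewDimension.Texture2DArray with FirstArraySlice = i and ArraySize = 1, plus the matching settings on the ShaderResourceViewDescription); either way, each render target ends up with its own slice to write to.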
Source: https://stackoverflow.com/questions/6891194/multiple-render-targets-not-saving-data