DirectCompute

Do DirectX Compute Shaders support 2D arrays in shared memory?

…衆ロ難τιáo~ submitted on 2020-01-15 10:18:06
Question: I want to use groupshared memory in a DirectX compute shader to reduce global memory bandwidth and hopefully improve performance. My input data is a Texture2D and I can access it using 2D indexing like so: Input[threadID.xy]. I would like to have a 2D array of shared memory for caching portions of the input data, so I tried the obvious: groupshared float SharedInput[32, 32]; It won't compile. The error message says syntax error: unexpected token ','. Is there any way to have a 2D array of
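HLSL does support multi-dimensional groupshared arrays; it just uses C-style nested brackets rather than the comma syntax. A minimal sketch of the fix (the tile-caching body is illustrative, not from the original question):

```hlsl
// 2D groupshared arrays take nested brackets, not a comma:
groupshared float SharedInput[32][32];   // 32*32*4 B = 4 KB, well under the 32 KB groupshared limit

Texture2D<float> Input : register(t0);

[numthreads(32, 32, 1)]
void CSMain(uint3 gtid : SV_GroupThreadID, uint3 dtid : SV_DispatchThreadID)
{
    // Each thread caches one texel of the input tile into shared memory.
    SharedInput[gtid.y][gtid.x] = Input[dtid.xy];
    GroupMemoryBarrierWithGroupSync();
    // ... work on the cached tile ...
}
```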

DirectX 11 - Compute shader: Writing to an output resource

十年热恋 submitted on 2020-01-12 02:21:13
Question: I've just started using the compute shader stage in DirectX 11 and encountered some unwanted behaviour when writing to an output resource in the compute shader. I seem to get only zeroes as output, which, to my understanding, means that out-of-bounds reads have been performed in the compute shader. (Out-of-bounds writes result in no-ops.) Creating the compute shader components Input resources First I create an ID3D11Buffer* for input data. This is passed as a resource when creating the SRV used
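Since out-of-bounds reads on a shader resource view return zero, one way to narrow this down is to guard the access explicitly and verify the element count the view actually reports. A minimal compute-shader sketch (buffer names and the doubling operation are illustrative):

```hlsl
StructuredBuffer<float>   Input  : register(t0);
RWStructuredBuffer<float> Output : register(u0);

[numthreads(64, 1, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    // Ask the SRV how many elements it really exposes; if this is smaller
    // than expected, the buffer description or SRV description is wrong.
    uint count, stride;
    Input.GetDimensions(count, stride);

    if (id.x < count)
        Output[id.x] = Input[id.x] * 2.0f;
}
```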

D3D12 Use backbuffer surface as unordered access view (UAV)

风流意气都作罢 submitted on 2019-12-31 04:15:10
Question: I'm making a simple raytracer for a school project where a compute shader is supposed to be used to shade a triangle or some other primitive. For this I'd like to write to a back-buffer surface directly in the compute shader, and then present the results immediately. I know for certain that this is possible in DX11, though I can't seem to get it to work in DX12. I couldn't gather that much information about this, but I found a gamedev thread discussing the exact same problem I'm trying to figure out
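In D3D12, swap-chain back buffers cannot be created with D3D12_RESOURCE_FLAG_ALLOW_UNORDERED_ACCESS, so the usual workaround is to let the compute shader write into an intermediate UAV texture and copy the result into the back buffer each frame. A hedged C++ sketch, assuming cmdList, backBuffer, and uavTexture already exist and using the d3dx12.h barrier helpers:

```cpp
// After the Dispatch that filled uavTexture:
// transition the UAV texture to a copy source and the back buffer to a copy dest.
D3D12_RESOURCE_BARRIER pre[] = {
    CD3DX12_RESOURCE_BARRIER::Transition(
        uavTexture, D3D12_RESOURCE_STATE_UNORDERED_ACCESS, D3D12_RESOURCE_STATE_COPY_SOURCE),
    CD3DX12_RESOURCE_BARRIER::Transition(
        backBuffer, D3D12_RESOURCE_STATE_PRESENT, D3D12_RESOURCE_STATE_COPY_DEST),
};
cmdList->ResourceBarrier(2, pre);

cmdList->CopyResource(backBuffer, uavTexture); // formats/dimensions must match

// Back to PRESENT for the swap chain (and the UAV texture back for next frame).
D3D12_RESOURCE_BARRIER post[] = {
    CD3DX12_RESOURCE_BARRIER::Transition(
        backBuffer, D3D12_RESOURCE_STATE_COPY_DEST, D3D12_RESOURCE_STATE_PRESENT),
    CD3DX12_RESOURCE_BARRIER::Transition(
        uavTexture, D3D12_RESOURCE_STATE_COPY_SOURCE, D3D12_RESOURCE_STATE_UNORDERED_ACCESS),
};
cmdList->ResourceBarrier(2, post);
```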

DirectX compute shader: how to write a function with variable array size argument?

纵然是瞬间 submitted on 2019-12-13 03:07:58
Question: I'm trying to write a function within a compute shader (HLSL) that accepts an argument that is an array of varying size. The compiler always rejects it. Example (not working!): void TestFunc(in uint SA[]) { int K; for (K = 0; SA[K] != 0; K++) { // Some code using SA array } } [numthreads(1, 1, 1)] void CSMain() { uint S1[] = {1, 2, 3, 4}; // Compiler happy and discovers the array size uint S2[] = {10, 20}; // Compiler happy and discovers the array size TestFunc(S1); TestFunc(S2); } If I give an
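HLSL requires function array parameters to have a compile-time size, so an unsized parameter is rejected even though unsized locals with initializers are fine. One workaround sketch: size every array to a shared maximum and pass the live element count separately (MAX_SA and the count parameter are assumptions added here, not part of the original code):

```hlsl
#define MAX_SA 16  // assumed upper bound on array length

// The parameter now has a fixed size; the caller supplies the element count.
void TestFunc(in uint SA[MAX_SA], uint count)
{
    for (uint k = 0; k < count && SA[k] != 0; k++)
    {
        // Some code using SA array
    }
}

[numthreads(1, 1, 1)]
void CSMain()
{
    // HLSL requires every element to be initialized, so pad with zeroes.
    uint S1[MAX_SA] = { 1, 2, 3, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
    uint S2[MAX_SA] = { 10, 20, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
    TestFunc(S1, 4);
    TestFunc(S2, 2);
}
```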

DirectCompute CreateBuffer fails with error 0x80070057 (E_INVALIDARG)

本小妞迷上赌 submitted on 2019-12-12 17:21:37
Question: I'm trying to create a buffer in GPU memory to upload data from the CPU. GPU access will be read-only. The data will be used as an input buffer for a compute shader. CreateBuffer() fails with error 0x80070057 (E_INVALIDARG). I read the docs, and read them again, without discovering which argument causes the failure. Here is an extract from my code where I marked the failure: HRESULT hr = S_OK; RECT rc; GetClientRect( g_hWnd, &rc ); UINT width = rc.right - rc.left; UINT height = rc.bottom - rc.top; UINT
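Common causes of E_INVALIDARG from CreateBuffer are a MiscFlags/StructureByteStride mismatch for structured buffers, a CPUAccessFlags value incompatible with the chosen Usage, or IMMUTABLE usage without initial data. A sketch of a read-only structured input buffer for a compute shader (elementCount and cpuData are placeholders):

```cpp
// Read-only structured buffer, filled once from CPU data at creation time.
D3D11_BUFFER_DESC desc = {};
desc.ByteWidth           = sizeof(float) * elementCount;
desc.Usage               = D3D11_USAGE_IMMUTABLE;         // GPU read-only
desc.BindFlags           = D3D11_BIND_SHADER_RESOURCE;    // bound via an SRV
desc.CPUAccessFlags      = 0;                             // IMMUTABLE forbids CPU access flags
desc.MiscFlags           = D3D11_RESOURCE_MISC_BUFFER_STRUCTURED;
desc.StructureByteStride = sizeof(float);                 // required with BUFFER_STRUCTURED

D3D11_SUBRESOURCE_DATA init = {};
init.pSysMem = cpuData;                                   // IMMUTABLE requires initial data

ID3D11Buffer* buffer = nullptr;
HRESULT hr = device->CreateBuffer(&desc, &init, &buffer);
```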

Convert SlimDX.Direct3D11 Texture2D to .Net Bitmap

走远了吗. submitted on 2019-12-11 13:19:59
Question: Converting a .NET Bitmap to a SlimDX Texture2D works very fast, like this: http://www.rolandk.de/index.php?option=com_content&view=article&id=65:bitmap-from-texture-d3d11&catid=16:blog&Itemid=10 private Texture2D TextureFromBitmap(FastBitmapSingle fastBitmap) { Texture2D result = null; DataStream dataStream = new DataStream(fastBitmap.BitmapData.Scan0, fastBitmap.BitmapData.Stride * fastBitmap.BitmapData.Height, true, false); DataRectangle dataRectangle = new DataRectangle(fastBitmap
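Going the other way requires copying the GPU texture into a CPU-readable staging texture first, then mapping it and copying rows into the Bitmap. A hedged SlimDX sketch, assuming gpuTexture and its description desc are in scope (member names follow SlimDX's D3D11 wrapper as I recall them; verify against your SlimDX version):

```csharp
// Staging copy: same size/format as the source, but CPU-readable.
Texture2D staging = new Texture2D(device, new Texture2DDescription
{
    Width = desc.Width, Height = desc.Height,
    MipLevels = 1, ArraySize = 1,
    Format = desc.Format,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Staging,
    CpuAccessFlags = CpuAccessFlags.Read,
    BindFlags = BindFlags.None
});

device.ImmediateContext.CopyResource(gpuTexture, staging);  // GPU -> staging

DataBox box = device.ImmediateContext.MapSubresource(
    staging, 0, MapMode.Read, MapFlags.None);
// Copy box.Data into Bitmap.LockBits memory one row at a time:
// box.RowPitch may be larger than Width * bytesPerPixel, so don't copy it flat.
device.ImmediateContext.UnmapSubresource(staging, 0);
```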

reading GPU resource data by CPU

偶尔善良 submitted on 2019-12-11 07:49:35
Question: I am learning DirectX 11 these days and have been stuck on the compute shader section. I made four resources and three corresponding views: immutable input buffer = {1,1,1,1,1} / SRV; immutable input buffer = {2,2,2,2,2} / SRV; output buffer / UAV; staging buffer for reading / no view. I succeeded in creating all of them, dispatching the CS function, copying data from the output buffer to the staging buffer, and reading/checking the data. // INPUT BUFFER1-------------------------------------------------- const int
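The staging step the question describes is the standard pattern for CPU readback in D3D11: the UAV output buffer is not CPU-mappable, so it is copied into a staging buffer that is. A sketch of that readback, assuming outputBuffer and byteSize are already defined:

```cpp
// Staging buffer: CPU-readable copy target, no bind flags allowed.
D3D11_BUFFER_DESC sd = {};
sd.ByteWidth      = byteSize;
sd.Usage          = D3D11_USAGE_STAGING;
sd.BindFlags      = 0;
sd.CPUAccessFlags = D3D11_CPU_ACCESS_READ;

ID3D11Buffer* staging = nullptr;
device->CreateBuffer(&sd, nullptr, &staging);

context->CopyResource(staging, outputBuffer);   // GPU output -> staging

D3D11_MAPPED_SUBRESOURCE mapped = {};
if (SUCCEEDED(context->Map(staging, 0, D3D11_MAP_READ, 0, &mapped)))
{
    // mapped.pData now holds the compute-shader results.
    const int* results = static_cast<const int*>(mapped.pData);
    // ... inspect results ...
    context->Unmap(staging, 0);
}
```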

DirectCompute versus OpenCL for GPU programming?

为君一笑 submitted on 2019-12-07 11:43:06
Question: I have some (financial) tasks which should map well to GPU computing, but I'm not really sure if I should go with OpenCL or DirectCompute. I did some GPU computing, but it was a long time ago (3 years). I did it through OpenGL since there was not really any alternative back then. I've seen some OpenCL presentations and it looks really nice. I haven't seen anything about DirectCompute yet, but I expect it to also be good. I'm not interested at the moment in cross-platform compatibility, and besides, I expect the two models to be similar enough to not cause a big headache when trying to go from

How can I feed compute shader results into vertex shader w/o using a vertex buffer?

霸气de小男生 submitted on 2019-12-05 02:43:28
Question: Before I go into details I want to outline the problem: I use RWStructuredBuffers to store the output of my compute shaders (CS). Since vertex and pixel shaders can't read from RWStructuredBuffers, I map a StructuredBuffer onto the same slot (u0/t0) and (u4/t4): cbuffer cbWorld : register (b1) { float4x4 worldViewProj; int dummy; } struct VS_IN { float4 pos : POSITION; float4 col : COLOR; }; struct PS_IN { float4 pos : SV_POSITION; float4 col : COLOR; }; RWStructuredBuffer<float4>
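The usual way to consume compute-shader output in a vertex shader without any vertex buffer is to bind no input layout at all and fetch from the StructuredBuffer manually using SV_VertexID. A sketch building on the question's own cbuffer (the buffer name and constant color are illustrative):

```hlsl
// Read-only view of the compute shader's output, bound to t0.
StructuredBuffer<float4> Positions : register(t0);

cbuffer cbWorld : register(b1)
{
    float4x4 worldViewProj;
    int dummy;
};

struct PS_IN
{
    float4 pos : SV_POSITION;
    float4 col : COLOR;
};

// No VS_IN, no input layout: SV_VertexID indexes the buffer directly.
PS_IN VSMain(uint vid : SV_VertexID)
{
    PS_IN o;
    o.pos = mul(Positions[vid], worldViewProj);
    o.col = float4(1, 1, 1, 1);
    return o;
}
```

On the API side this means calling Draw() with the desired vertex count while IASetInputLayout(nullptr) and no bound vertex buffers, with the StructuredBuffer's SRV bound to the vertex shader stage.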