render-to-texture

Rendering to multiple textures with one pass in DirectX 11

Submitted by 亡梦爱人 on 2019-12-04 21:41:53
Question: I'm trying to render to two textures with one pass using the C++ DirectX 11 SDK. I want one texture to contain the color of each pixel of the resulting image (what I normally see on the screen when rendering a 3D scene), and the other to contain each pixel's normal and depth (three floats for the normal and one float for depth). Right now, what I can think of is to create two render targets and render the colors in the first pass and the normals and depth in the second pass, one to each render target respectively. However, this seems a waste of time because I can get the information of each pixel's …
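A common single-pass answer is to bind both render-target views at once (`OMSetRenderTargets(2, rtvs, dsv)`) and have the pixel shader declare one output per target (`SV_Target0` for color, `SV_Target1` for normal+depth). The sketch below mirrors the packing scheme the question describes on the CPU side; all names are illustrative, and the remap to [0, 1] assumes a UNORM render target (with a float format like DXGI_FORMAT_R16G16B16A16_FLOAT it is unnecessary):

```cpp
#include <array>
#include <cmath>

// Mirror of an HLSL float4 render-target output: normal.xyz in rgb, depth in a.
struct NormalDepth {
    std::array<float, 4> v; // x, y, z, depth
};

// Pack a unit normal (components in [-1, 1]) into [0, 1] so it survives
// storage in a UNORM render target; depth is kept as-is in the last channel.
NormalDepth packNormalDepth(float nx, float ny, float nz, float depth) {
    return { { nx * 0.5f + 0.5f, ny * 0.5f + 0.5f, nz * 0.5f + 0.5f, depth } };
}

// Inverse mapping, as a later post-processing pass would apply when sampling.
void unpackNormalDepth(const NormalDepth& p,
                       float& nx, float& ny, float& nz, float& depth) {
    nx = p.v[0] * 2.0f - 1.0f;
    ny = p.v[1] * 2.0f - 1.0f;
    nz = p.v[2] * 2.0f - 1.0f;
    depth = p.v[3];
}
```

In HLSL the same layout would be a `struct PS_OUTPUT { float4 color : SV_Target0; float4 normalDepth : SV_Target1; };` returned from the pixel shader.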

Copy FFmpeg D3DVA texture resource to shared rendering texture

Submitted by 安稳与你 on 2019-12-04 13:50:43
I'm using FFmpeg to decode video via D3DVA, based on this example: https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/hw_decode.c . I'm able to successfully decode video. What I need to do next is render the decoded NV12 frame. I have created a Direct3D rendering texture based on this example, https://github.com/balapradeepswork/D3D11NV12Rendering , and set it as shared. D3D11_TEXTURE2D_DESC texDesc; texDesc.Format = DXGI_FORMAT_NV12; // Pixel format texDesc.Width = width; // Width of the video frames texDesc.Height = height; // Height of the video frames texDesc.ArraySize = 1; // Number of …
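When copying decoded frames into a texture like this, it helps to know the NV12 memory layout: a full-resolution Y (luma) plane followed immediately by a half-height plane of interleaved U/V samples, i.e. 1.5 bytes per pixel overall. A small sketch of the arithmetic (names are illustrative; DXGI_FORMAT_NV12 requires even dimensions, which the helper asserts):

```cpp
#include <cassert>
#include <cstddef>

// NV12 layout: a width x height Y plane, then a width x (height/2) plane of
// interleaved U/V samples -- 1.5 bytes per pixel in total.
struct Nv12Layout {
    std::size_t ySize;      // bytes in the luma plane
    std::size_t uvOffset;   // byte offset of the interleaved chroma plane
    std::size_t totalSize;  // bytes for the whole frame
};

Nv12Layout nv12Layout(std::size_t width, std::size_t height) {
    // DXGI_FORMAT_NV12 requires even width and height.
    assert(width % 2 == 0 && height % 2 == 0);
    Nv12Layout l;
    l.ySize = width * height;
    l.uvOffset = l.ySize;
    l.totalSize = l.ySize + width * (height / 2);
    return l;
}
```

For a 1920x1080 frame this gives a 2,073,600-byte Y plane and 3,110,400 bytes total, which is the size the copy into the shared texture has to account for.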

OpenGL ES render to texture, then draw texture

Submitted by 亡梦爱人 on 2019-12-04 08:41:21
Question: I'm trying to render to a texture, then draw that texture to the screen using OpenGL ES on the iPhone. I'm using this question as a starting point and doing the drawing in a subclass of Apple's demo EAGLView. Instance variables: GLuint textureFrameBuffer; Texture2D * texture; To initialize the frame buffer and texture, I'm doing this: GLint oldFBO; glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &oldFBO); // initWithData results in a white image on the device (works fine in the simulator) texture …
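When a render-to-texture setup like this misbehaves, the first debugging step is usually to check `glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES)` right after attaching the texture. The helper below maps the ES 1.x status codes to readable strings; the numeric values are taken from the OES_framebuffer_object extension and defined locally so the sketch stands alone (verify them against your GL headers):

```cpp
#include <string>

// Status codes from the OES_framebuffer_object extension (same numeric values
// as the core GL_FRAMEBUFFER_* codes); defined here to keep the sketch self-contained.
enum FboStatus : unsigned {
    FBO_COMPLETE               = 0x8CD5,
    FBO_INCOMPLETE_ATTACHMENT  = 0x8CD6,
    FBO_MISSING_ATTACHMENT     = 0x8CD7,
    FBO_INCOMPLETE_DIMENSIONS  = 0x8CD9,
    FBO_UNSUPPORTED            = 0x8CDD,
};

// Turn the value returned by glCheckFramebufferStatusOES(GL_FRAMEBUFFER_OES)
// into a readable string for logging.
std::string describeFboStatus(unsigned status) {
    switch (status) {
        case FBO_COMPLETE:              return "complete";
        case FBO_INCOMPLETE_ATTACHMENT: return "incomplete attachment";
        case FBO_MISSING_ATTACHMENT:    return "missing attachment";
        case FBO_INCOMPLETE_DIMENSIONS: return "attachments have mismatched dimensions";
        case FBO_UNSUPPORTED:           return "format combination unsupported";
        default:                        return "unknown status";
    }
}
```

Anything other than "complete" means draws into the FBO are silently dropped, which matches the white-texture symptom described above.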

OpenGL render-to-texture-via-FBO — incorrect display vs. normal Texture

Submitted by 安稳与你 on 2019-12-04 04:56:44
Off-screen rendering to a texture-bound offscreen framebuffer object should be trivial, but I'm having a problem I cannot wrap my head around. My full sample program (2D only for now!) is here: http://pastebin.com/hSvXzhJT . See below for some descriptions. I create a 512x512 RGBA texture object and bind it to an FBO; no depth or other render buffers are needed at this point, strictly 2D. The following extremely simple shaders render to this texture: Vertex shader: varying vec2 vPos; attribute vec2 aPos; void main (void) { vPos = (aPos + 1) / 2; gl_Position = vec4(aPos, 0.0, 1.0); } In aPos …
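The `vPos = (aPos + 1) / 2` line in the vertex shader is the standard remap from clip space [-1, 1] to texture space [0, 1], which is what lets the fragment shader use the interpolated position directly as a texture coordinate. A scalar sketch of the same mapping (name illustrative):

```cpp
// Mirrors the vertex shader's vPos = (aPos + 1) / 2: maps a clip-space
// coordinate in [-1, 1] to a texture coordinate in [0, 1], per component.
float clipToTexCoord(float clip) {
    return (clip + 1.0f) * 0.5f;
}
```

One portability note: the integer literals compile on desktop GLSL, but GLSL ES 1.00 has no implicit int-to-float conversion, so there the expression must be written `(aPos + 1.0) / 2.0`.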

Multiple Render Targets not saving data

Submitted by 此生再无相见时 on 2019-12-03 15:58:08
I'm using SlimDX, targeting DirectX 11 with shader model 4. I have a pixel shader, "preProc", which processes my vertices and saves three textures of data: one for per-pixel normals, one for per-pixel position data, and one for color and depth (color takes up RGB and depth takes the alpha channel). I then use these textures in a post-processing shader to implement Screen Space Ambient Occlusion; however, it seems none of the data is getting saved in the first shader. Here's my pixel shader: PS_OUT PS( PS_IN input ) { PS_OUT output; output.col = float4(0,0,0,0); output.norm = float4 …

UIImage created from MTKView results in color/opacity differences

Submitted by ╄→гoц情女王★ on 2019-12-03 09:15:18
When I capture the contents of an MTKView into a UIImage, the resulting image looks qualitatively different. The code I use to generate the UIImage is as follows: let kciOptions = [kCIContextWorkingColorSpace: CGColorSpace(name: CGColorSpace.sRGB)!, kCIContextOutputPremultiplied: true, kCIContextUseSoftwareRenderer: false] as [String : Any] let lastDrawableDisplayed = self.currentDrawable! // needed to hold the last drawable presented to screen drawingUIView.image = UIImage(ciImage: CIImage(mtlTexture: lastDrawableDisplayed.texture, options: kciOptions)!) Since I don't modify …
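A qualitative color shift like this is often a gamma/color-space mismatch: the drawable's bgra8Unorm bytes are already sRGB-encoded, and a pipeline that decodes or re-encodes them one extra time applies the transfer function twice or not at all. For reference, the standard sRGB transfer function pair (a generic sketch, not Apple's implementation):

```cpp
#include <cmath>

// Standard sRGB encode: linear [0, 1] -> sRGB-encoded [0, 1].
float linearToSrgb(float c) {
    return c <= 0.0031308f ? 12.92f * c
                           : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}

// Inverse: sRGB-encoded [0, 1] -> linear [0, 1].
float srgbToLinear(float s) {
    return s <= 0.04045f ? s / 12.92f
                         : std::pow((s + 0.055f) / 1.055f, 2.4f);
}
```

A double application of either direction produces exactly the kind of washed-out or over-dark result described; premultiplied versus straight alpha is the other usual suspect, controlled here by the kCIContextOutputPremultiplied option.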

THREE.js: get data from THREE.WebGLRenderTarget

Submitted by 血红的双手。 on 2019-12-01 06:42:24
A THREE.Texture can be used as a map in a material and has a property called "image". A THREE.WebGLRenderTarget can also be used as a map in a material, but does not have a property called "image". How would I retrieve the texture data from a WebGLRenderTarget? I would like to save it to a file (or, if that is not possible, as a byte array). Answer: There is a function for this now: WebGLRenderer.readRenderTargetPixels( renderTarget, x, y, width, height, buffer ). It reads the pixel data from the renderTarget into the buffer you pass in. The buffer should be a JavaScript Uint8Array instantiated with new Uint8Array( …
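For a default RGBA8 render target the buffer handed to readRenderTargetPixels must hold 4 bytes per pixel of the region being read, tightly packed row by row. A sketch of the sizing and indexing arithmetic (helper names are illustrative):

```cpp
#include <cstddef>

// Bytes needed by the destination buffer for readRenderTargetPixels on an
// RGBA8 target: 4 bytes (R, G, B, A) per pixel of the requested region.
std::size_t rgbaBufferBytes(std::size_t width, std::size_t height) {
    return width * height * 4;
}

// Byte offset of pixel (x, y) inside that tightly packed RGBA buffer.
std::size_t rgbaPixelOffset(std::size_t x, std::size_t y, std::size_t width) {
    return (y * width + x) * 4;
}
```

In three.js itself this corresponds to `new Uint8Array(width * height * 4)`, after which the bytes can be fed to a canvas ImageData to save them as a file.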

Rendering to texture on iOS OpenGL ES—works on simulator, but not on device

Submitted by ﹥>﹥吖頭↗ on 2019-11-29 06:36:35
In order to improve the performance of my OpenGL ES application for the iPad, I was planning to draw a rarely updated but render-time-heavy element to a texture, so I can just reuse the texture unless the element has to be redrawn. However, while the texture is mapped correctly on both the simulator and the device, only on the simulator is anything actually rendered into the texture. The following is the code I added to the project. While setting up the scene, I create the buffers and the texture needed: int width = 768; int height = 270; // Prepare texture for off-screen rendering. …
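One classic cause of works-on-simulator, fails-on-device behavior with ES 1.x hardware is a non-power-of-two texture: 768x270 is NPOT in both dimensions, and older devices only accept such textures under the limited-NPOT extension's restrictions (no mipmaps, GL_CLAMP_TO_EDGE wrapping), if at all. A sketch of the usual check-and-pad helpers (this is one plausible explanation for the symptom, not a confirmed diagnosis):

```cpp
// True if v is a nonzero power of two.
bool isPowerOfTwo(unsigned v) {
    return v != 0 && (v & (v - 1)) == 0;
}

// Smallest power of two >= v; useful for padding an off-screen texture so
// older ES 1.x devices without full NPOT support can render into it.
unsigned nextPowerOfTwo(unsigned v) {
    unsigned p = 1;
    while (p < v) p <<= 1;
    return p;
}
```

Padding the texture to 1024x512 and rendering the 768x270 element into a sub-region (adjusting the texture coordinates accordingly) is the common workaround.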