texture

CVOpenGLESTextureCacheCreateTextureFromImage fails to create IOSurface

Anonymous (unverified), submitted 2019-12-03 01:49:02

Question: For my current project I'm reading the main camera output of the iPhone. I then convert the pixel buffer to a cached OpenGL texture through the method CVOpenGLESTextureCacheCreateTextureFromImage. This works great when processing camera frames used for previewing; tested on various combinations of iPhone 3GS, 4, 4S, iPod Touch (4th gen) and iOS 5/iOS 6. But for the actual final image, which has a very high resolution, it only works on these combinations: iPhone 3GS + iOS 5.1.1, iPhone 4 + iOS 5.1.1, iPhone 4S + iOS 6.0 …

opengl: how to avoid texture scaling

Anonymous (unverified), submitted 2019-12-03 01:48:02

Question: How do I apply a repeating texture that always maintains its original scale (1 pixel in the texture = 1 pixel on screen), regardless of the vertex data it is applied to? I realize this is not the most usual task, but is it possible to easily set OpenGL up to do this, or do I need to apply some kind of mask to the vertex data that respects its original appearance? Edit: in my specific case, I'm trying to draw 2D ellipses of different sizes with the same pixel pattern. The ellipses are made of a triangle fan, and I'm having a hard time to draw a…

Updating a texture in OpenGL with glTexImage2D

Anonymous (unverified), submitted 2019-12-03 01:47:02

Question: Are glTexImage2D and glTexSubImage2D the only ways to pass a buffer of pixels to a texture? At the moment I call glTexImage2D in a setup function, passing NULL as the buffer, and then in the render loop I call glTexSubImage2D with the new buffer data on each iteration. But given that the texture's properties, such as its dimensions, will not change, is there a more efficient way to pass the actual pixel data to the texture? Answer 1: In modern OpenGL there are 4 different methods to update 2D textures: 1) glTexImage2D - the slowest one,…

How to read pixels from a rendered texture in OpenGL ES

Anonymous (unverified), submitted 2019-12-03 01:45:01

Question: I'm trying to read pixels from a texture that was generated on the fly (RTT, Render To Texture). I'm taking this snapshot by implementing Apple's suggested method listed here. This works fine for the "default" colorbuffer that gets presented to the screen, but I can't get it to work for the pixels that were written to the texture. I'm having trouble with these lines: // Bind the color renderbuffer used to render the OpenGL ES view // If your application only creates a single color renderbuffer which is already bound at this point, //…

Three.js r72 - Texture marked for update but image is undefined?

Anonymous (unverified), submitted 2019-12-03 01:45:01

Question: Three.js r72. I'm trying to use the same texture with different offsets and repeats, but after I clone the texture and set needsUpdate I keep getting the error "Texture marked for update but image is undefined". I checked the cloned texture and its image property is undefined. Shouldn't the clone method have referenced the source image from the texture in the textures object? var textures = {} var materials = {} textures.skin = THREE.ImageUtils.loadTexture('skin.png'); textures.skin.minFilter = THREE.NearestFilter; textures.skin.magFilter =…

SDL - invalid texture error on SDL_DestroyTexture()

Anonymous (unverified), submitted 2019-12-03 01:45:01

Question: I'm making a small "retro-style" 2D platformer game with SDL in C++. I figured that the best way to keep the game at a low resolution, while letting people with different monitor sizes stretch the game window to fit their setup, would be to render everything to a low-res texture and then render that texture to the whole window (with the window size/resolution set by the user). With this setup, the game works exactly as it should and renders fine (in both fullscreen and windowed modes). However, when I use SDL_DestroyTexture() to…

Barycentric coordinates texture mapping

Anonymous (unverified), submitted 2019-12-03 01:45:01

Question: I want to map textures with correct perspective for 3D rendering. I am using barycentric coordinates to locate points on the faces of triangles. A simple affine transformation gave me the standard, weird-looking result. This is what I did to correct the perspective, but it seems to have only made the distortion greater: three triangle vertices v1 v2 v3; vertex coordinates are v_.x, v_.y, v_.z; texture coordinates are v_.u, v_.v; barycentric coordinates corresponding to the vertices are b1, b2, b3. I am trying to get the correct texture…

qt multiple QGLShaderProgram for one texture

Anonymous (unverified), submitted 2019-12-03 01:44:01

Question: I use two QGLShaderProgram objects to process a texture. ShaderProgram1->bind(); // QGLShaderProgram ShaderProgram2->bind(); glBegin(GL_TRIANGLE_STRIP); ... glEnd(); ShaderProgram1->release(); ShaderProgram2->release(); The texture should be processed with ShaderProgram1 and then ShaderProgram2. But calling ShaderProgram2->bind() automatically triggers ShaderProgram1->release(), so only one shader works. How do I bind both shaders? Answer 1: You don't. Unless these are separate shaders (and even they don't work that way), each rendering operation…

How to use GL_HALF_FLOAT_OES typed textures in iOS?

Anonymous (unverified), submitted 2019-12-03 01:39:01

Question: I'm trying to create a float texture to store intermediate results of my rendering pipeline, written by a fragment shader. I need the fragment values to be signed floats. I understand that there is the OES_texture_float extension, which should be supported by all newer iOS devices (i.e. from the iPhone 3GS/iPod Touch 3/iPad onward, according to the Apple guide). However, when I create such a texture using glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_HALF_FLOAT_OES, NULL); start my app and inspect it in Instruments,…

Can i use fabric.js and three.js in the same canvas?

Anonymous (unverified), submitted 2019-12-03 01:38:01

Question: Is it possible to use three.js for 3D drawing and fabric.js for 2D drawing on the same canvas? I just like the way fabric.js handles manipulation and would like to make use of it. Before I start to experiment, I'm wondering whether anyone has tried this. Is it possible? Answer 1: An alternative approach is to use FabricJS with an off-screen canvas, then use that canvas as a texture (image) source for your on-screen WebGL/three.js canvas and the polygon you are rendering there. Is it possible to use a 2d canvas as a texture for a cube? var…