texture

OpenGL ES 2d rendering into image

Anonymous (unverified), submitted 2019-12-03 10:24:21
Question: I need to write an OpenGL ES 2-dimensional renderer on iOS. It should draw primitives such as lines and polygons into a 2D image (it will render a vector map). What is the best way to get an image out of the OpenGL context for this task? Should I render the primitives into a texture and then read the image from it, or something else? It would also be great if someone could point to examples or tutorials that resemble what I need (2D GL rendering into an image). Thanks in advance! Answer 1: If you need to render an OpenGL ES 2-D scene, then extract an image
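A common gotcha when reading the rendered image back (for example with glReadPixels) is that OpenGL's origin is the bottom-left corner, so pixel rows arrive bottom-up while most image formats expect top-down. A minimal sketch of the row flip, assuming a tightly packed RGBA buffer (the function name and layout are illustrative, not from the question):

```python
def flip_rows(pixels: bytes, width: int, height: int, bpp: int = 4) -> bytes:
    """Reverse the row order of a tightly packed pixel buffer.

    glReadPixels returns rows bottom-up; most image formats expect
    top-down, so the rows must be reversed before saving.
    """
    row_size = width * bpp
    rows = [pixels[i * row_size:(i + 1) * row_size] for i in range(height)]
    return b"".join(reversed(rows))

# A 1x2 RGBA image read bottom-up; after flipping, the top row comes first.
bottom_up = bytes(range(8))
top_down = flip_rows(bottom_up, width=1, height=2)
```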

What is the interval when rasterizing primitives

Anonymous (unverified), submitted 2019-12-03 10:24:21
Question: In computer science, an interval from a to b is usually [a, b). Is this also true when rasterizing geometric primitives? For example, if I have a line that starts at position (0, 0) and ends at position (0, 10), will the line contain the point (0, 10) when using parallel projection with 1 GPU unit mapped to 1 pixel on screen? EDIT: The same question under the same conditions, but for textures: if I have a 2x2 texture mapped onto a quad from (0, 0) to (2, 2) using a (0, 0) to (1, 1) mapping, will it be "pixel perfect", one pixel
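Whether the endpoint pixel gets drawn can be reasoned about with pixel-center sampling: a pixel is covered when its center (x + 0.5) falls inside the half-open span. A small sketch of that test (illustrative only; real line rasterization follows the diamond-exit rule, which also excludes the endpoint pixel at integer coordinates):

```python
import math

def covered_pixels(a: float, b: float) -> list[int]:
    """Pixels whose centers (x + 0.5) fall inside the half-open span [a, b).

    This mimics how GPUs sample filled primitives at pixel centers; the
    endpoint b is excluded, matching the [a, b) convention.
    """
    first = math.ceil(a - 0.5)  # first pixel whose center is >= a
    last = math.ceil(b - 0.5)   # first pixel whose center is >= b (excluded)
    return list(range(first, last))

print(covered_pixels(0, 10))  # pixels 0 through 9; pixel 10 is not drawn
```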

WebGL: Particle engine using FBO, how to correctly write and sample particle positions from a texture?

Anonymous (unverified), submitted 2019-12-03 10:10:24
Question: I suspect I'm not correctly rendering particle positions to my FBO, or not correctly sampling those positions when rendering, though admittedly that may not be the actual problem with my code. I have a complete jsfiddle here: http://jsfiddle.net/p5mdv/53/ A brief overview of the code: Initialization: Create an array of random particle positions in x, y, z. Create an array of texture sampling locations (e.g. for 2 particles, the first particle at 0,0, the next at 0.5,0). Create a Frame Buffer Object and two particle position textures (one for input, one
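One frequent cause of this symptom is sampling at texel edges rather than texel centers: the question's sampling locations (0,0 and 0.5,0 for a 2-wide texture) sit exactly on texel boundaries, where NEAREST filtering can snap to the wrong texel. A sketch of the index-to-UV mapping with the half-texel offset (a hypothetical helper, not from the fiddle):

```python
def particle_uv(index: int, tex_width: int, tex_height: int) -> tuple[float, float]:
    """UV coordinates of the *center* of the texel holding particle `index`.

    Adding half a texel lands the sample exactly on each texel's center,
    avoiding boundary rounding with NEAREST filtering.
    """
    x = index % tex_width   # column within the position texture
    y = index // tex_width  # row within the position texture
    return ((x + 0.5) / tex_width, (y + 0.5) / tex_height)

print(particle_uv(0, 2, 1))  # (0.25, 0.5)
print(particle_uv(1, 2, 1))  # (0.75, 0.5)
```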

C++ having trouble returning sf::Texture

Anonymous (unverified), submitted 2019-12-03 10:10:24
Question: I'm trying to store all my textures in an array, but the images aren't getting returned, it would seem. Here's my code. I was under the assumption that my loadTex and loadSpri functions would return and store their data in the array that I made, but it doesn't seem to be working that way, because when I run the game they don't show up, just white space. My code probably looks bad, but I'm still quite a beginner and trying to get the hang of things. Here's render.cpp:

```cpp
#include "render.h"
texandsprite render::textures[5];
sf::Texture loadTex(std:
```

Implementing trapezoidal sprites in LibGDX

Anonymous (unverified), submitted 2019-12-03 09:58:14
Question: I'm trying to create a procedural animation engine for a simple 2D game that would let me create nice-looking animations out of a small number of images (similar to this approach, but for 2D: http://www.gdcvault.com/play/1020583/Animation-Bootcamp-An-Indie-Approach ). At the moment I have keyframes which hold data for different animation objects; the keyframes are arrays of floats representing the following: translateX, translateY, scaleX, scaleY, rotation (degrees). I'd like to add skewX, skewY, taperTop, and taperBottom to this
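One way to implement taperTop/taperBottom is to scale the widths of the quad's top and bottom edges independently, turning the sprite's rectangle into a trapezoid (note that drawing a trapezoid as two affine-textured triangles distorts the texture unless the shader compensates). A sketch of the corner math, assuming the taper values are edge-width multipliers; the question does not define their semantics, so this interpretation is an assumption:

```python
def taper_quad(w: float, h: float, taper_top: float, taper_bottom: float):
    """Corner positions of a sprite quad after tapering.

    taper_top / taper_bottom scale the width of the top and bottom edges
    (1.0 = unchanged, 0.5 = half width), producing a trapezoid centred on
    the sprite's vertical axis. Semantics assumed, not from the question.
    """
    half_top = w * taper_top / 2
    half_bot = w * taper_bottom / 2
    return [
        (-half_bot, 0), (half_bot, 0),   # bottom-left, bottom-right
        (half_top, h), (-half_top, h),   # top-right, top-left
    ]

print(taper_quad(2.0, 1.0, taper_top=0.5, taper_bottom=1.0))
# [(-1.0, 0), (1.0, 0), (0.5, 1.0), (-0.5, 1.0)]
```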

Is 1D texture memory access faster than 1D global memory access?

Anonymous (unverified), submitted 2019-12-03 09:18:39
Question: I am measuring the difference between standard and 1D texture access to memory. To do so I have created two kernels:

```cpp
__global__ void texture1D(float *doarray, int size)
{
    int index;
    // calculate each thread's global index
    index = blockIdx.x * blockDim.x + threadIdx.x;
    // fetch global memory through the texture reference
    doarray[index] = tex1Dfetch(texreference, index);
    return;
}

__global__ void standard1D(float *diarray, float *doarray, int size)
{
    int index;
    // calculate each thread's global index
    index =
```

Loading textures at random place

Anonymous (unverified), submitted 2019-12-03 09:14:57
Question: I'm trying to load all the game data at once, in my game scene's constructor. But it fails, because texture loading only works in an OpenGL context, such as when the load method is called from drawFrame or surfaceChanged. But I think it's ugly to load textures when drawFrame is first called, or something similar. So is it possible to somehow separate my loading code from the OpenGL functions? Answer 1: I have exactly the same problem. My solution is using proxy textures. It means that when you're creating textures using some data from memory or

SpriteKit PhysicsBody: Could not create physics body

Anonymous (unverified), submitted 2019-12-03 09:14:57
Question: I have a player in my game with two states, flying and falling. Each of them has an image: player_flying and player_falling correspondingly. I am also using physics bodies to detect collisions. It functions completely normally when I use one texture, but when I try to use both of them in different conditions, with different textures, it shows an error in the log. I am trying it like this:

```objectivec
if (self.player.physicsBody.velocity.dy > 30) {
    self.player.texture = [SKTexture textureWithImageNamed:@"player_flying"];
```

Direct3D11: Flipping ID3D11Texture2D

Anonymous (unverified), submitted 2019-12-03 09:14:57
Question: I capture the Direct3D back buffer. When I download the pixels, the image frame is flipped along its vertical axis. Is it possible to "tell" D3D to flip the frame when copying the resource, or when creating the target ID3D11Texture2D? Here is how I do it. The texture into which I copy the frame buffer is created like this:

```cpp
D3D11_TEXTURE2D_DESC description = {
    desc.BufferDesc.Width,
    desc.BufferDesc.Height,
    1, 1,
    DXGI_FORMAT_R8G8B8A8_UNORM,
    { 1, 0 },                // DXGI_SAMPLE_DESC
    D3D11_USAGE_STAGING,     // transfer from GPU to CPU
    0,
    D3D11_CPU_ACCESS_READ,
    0
```
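Neither CopyResource nor the staging texture description offers a vertical flip, so captures are usually flipped on the CPU while reading the mapped rows (or by rendering a flipped full-screen quad instead). A minimal sketch of the CPU-side row flip, assuming RGBA8 data and the RowPitch reported by Map(); the names here are illustrative:

```python
def flipped_copy(mapped: bytes, width: int, height: int, row_pitch: int) -> bytes:
    """Copy a mapped staging texture into a tightly packed, vertically
    flipped buffer.

    `row_pitch` is the stride reported by Map(); it can be larger than
    width * 4 bytes, and the padding must be skipped while copying.
    """
    row_bytes = width * 4  # RGBA8: 4 bytes per pixel
    out = bytearray()
    for y in range(height - 1, -1, -1):         # walk source rows bottom-up
        start = y * row_pitch
        out += mapped[start:start + row_bytes]  # drop the pitch padding
    return bytes(out)
```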

How many textures can I use in a webgl fragment shader?

Anonymous (unverified), submitted 2019-12-03 09:10:12
Question: Simple question, but I can't find the answer in a specification anywhere; I'm probably missing the obvious answer somewhere. How many textures can I use at once in a WebGL fragment shader? If it's variable, what is a reasonable number to assume for PC use? (I'm not so interested in mobile.) I need at least 23 in one of my shaders, so I want to know whether I'll be able to do that before I start work on the code, or whether I'll need to do multiple passes. Answer 1: I'm pretty sure you can find out with var maxTexturesInFragmentShader = gl.getParameter(gl.MAX
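For context, WebGL exposes this limit as gl.getParameter(gl.MAX_TEXTURE_IMAGE_UNITS), and the WebGL 1.0 specification only guarantees a minimum of 8 for fragment shaders; desktop GPUs commonly report 16 or 32. Once the limit is known, the number of passes needed is a ceiling division, sketched here (the helper name is illustrative):

```python
def passes_needed(textures: int, max_units: int) -> int:
    """Number of render passes if each pass can bind at most max_units textures.

    With 23 textures, a device reporting 16 units needs two passes,
    while one reporting 32 units can do it in a single pass.
    """
    return -(-textures // max_units)  # ceiling division

print(passes_needed(23, 16))  # 2
print(passes_needed(23, 32))  # 1
```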