texture-mapping

Using a texture for a triangle mesh without having to read/write an image file

Submitted by 淺唱寂寞╮ on 2019-12-02 01:59:50
Question: This is a follow-up to a previous question (see Coloring individual triangles in a triangle mesh on javafx), which I believe is another topic on its own. Is there a way (with JavaFX) to get away from having to actually write an image file to disk (or an external device) in order to use a texture? In other words: can I use a specific texture without having to use Image? Since my color map will change at runtime, I don't want to write to disk every time I run it. Also, this might be a security issue (writing to disk) for someone using my app.
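A sketch of one in-memory approach, assuming a palette-style texture is enough: build the pixels as an ARGB int buffer in plain Java, then hand the buffer to JavaFX without touching the file system. The JavaFX calls are shown as comments (they need the JavaFX runtime); the class and method names are invented for illustration, not from the original question.

```java
// Build an ARGB pixel buffer entirely in memory; the commented-out JavaFX
// calls show how it would reach a material with no file I/O involved.
public class InMemoryTexture {

    // Pack opaque RGB into one ARGB int, the layout PixelFormat.getIntArgbInstance() expects.
    static int argb(int r, int g, int b) {
        return (0xFF << 24) | (r << 16) | (g << 8) | b;
    }

    // One pixel per palette entry; each triangle's texture coordinates then
    // point at the entry holding its color.
    static int[] paletteBuffer(int[][] rgbColors) {
        int[] buf = new int[rgbColors.length];
        for (int i = 0; i < rgbColors.length; i++) {
            buf[i] = argb(rgbColors[i][0], rgbColors[i][1], rgbColors[i][2]);
        }
        return buf;
    }

    public static void main(String[] args) {
        int[] buf = paletteBuffer(new int[][] {{255, 0, 0}, {0, 255, 0}, {0, 0, 255}});
        System.out.println(Integer.toHexString(buf[0])); // ffff0000

        // In a JavaFX application the buffer becomes a texture like so:
        //   WritableImage img = new WritableImage(buf.length, 1);
        //   img.getPixelWriter().setPixels(0, 0, buf.length, 1,
        //       PixelFormat.getIntArgbInstance(), buf, 0, buf.length);
        //   PhongMaterial mat = new PhongMaterial();
        //   mat.setDiffuseMap(img);
        // WritableImage is an Image subclass, so no disk round-trip is needed.
    }
}
```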

How much more efficient are power-of-two textures?

Submitted by 我是研究僧i on 2019-12-01 17:56:55
I am creating an OpenGL video player using FFmpeg, and none of my videos are power-of-two sized (they are normal video resolutions). It runs at a fine FPS on my NVIDIA card, but I've found that it won't run on older ATI cards because they don't support non-power-of-two textures. I will only be using this on an NVIDIA card, so I don't care too much about the ATI problem, but I was wondering how much of a performance boost I'd get if the textures were power-of-two. Is it worth padding them out? Also, if it is worth it, how do I go about padding them out to the nearest larger power of two?
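For the padding part, rounding a dimension up to the next power of two is a one-line loop. A sketch, with hypothetical names:

```java
// Round a texture dimension up to the next power of two, as one would do
// before padding a video frame for hardware without NPOT support.
public class PowerOfTwo {

    // Smallest power of two >= n, for positive n.
    static int nextPowerOfTwo(int n) {
        int p = 1;
        while (p < n) {
            p <<= 1;
        }
        return p;
    }

    public static void main(String[] args) {
        // A 720x576 PAL frame pads out to a 1024x1024 texture; sample it with
        // texture coordinates scaled by 720/1024 and 576/1024 so the padded
        // border is never touched.
        System.out.println(nextPowerOfTwo(720) + "x" + nextPowerOfTwo(576)); // 1024x1024
    }
}
```

The coordinate scaling is the important half: padding alone changes which part of the texture the old [0, 1] coordinates cover.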

How to encode emission or specular info in the alpha of an OpenGL texture

Submitted by 旧巷老猫 on 2019-12-01 13:29:39
I have an OpenGL texture with a UV map on it. I've read about using the alpha channel to store some other value, which saves loading an extra map from somewhere. For example, you could store specular info (shininess) or an emission map in the alpha, since you only need a float for that and the alpha isn't being used. So I tried it. Writing the shader isn't the problem; I have all that part worked out. The problem is just getting all 4 channels into the texture like I want. I have all the maps, so in PSD I put the base map in the RGB and the emission map in the alpha. But when you save as PNG…
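One way to sidestep the editor's export quirks entirely is to combine the channels in code. A minimal sketch using java.awt.image (class and method names are illustrative): RGB comes from the base map, alpha from a grayscale emission map.

```java
import java.awt.image.BufferedImage;

// Merge a base color map and a grayscale emission map into one RGBA texture,
// bypassing the image editor's save-as-PNG step.
public class AlphaPack {

    // RGB from `base`, alpha from the red channel of the grayscale `emission` map.
    static BufferedImage packEmissionIntoAlpha(BufferedImage base, BufferedImage emission) {
        int w = base.getWidth(), h = base.getHeight();
        BufferedImage out = new BufferedImage(w, h, BufferedImage.TYPE_INT_ARGB);
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w; x++) {
                int rgb = base.getRGB(x, y) & 0x00FFFFFF;     // keep RGB, drop any base alpha
                int a = (emission.getRGB(x, y) >> 16) & 0xFF; // red channel as intensity
                out.setRGB(x, y, (a << 24) | rgb);
            }
        }
        return out;
    }
}
```

Writing the result with ImageIO.write(out, "png", file) keeps the channels as stored: PNG uses straight (non-premultiplied) alpha, so the emission values reach the shader untouched.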

OpenGL texture mapping stubbornly refuses to work

Submitted by 若如初见. on 2019-12-01 13:13:12
I'm writing a 2D game using SDL and OpenGL in the D programming language. At the moment it simply tries to render a texture-mapped quad to the screen. The problem is that the whole texture-mapping part doesn't quite seem to work. Despite the texture apparently loading fine (it gets assigned a nonzero texture name, and glGetError returns no errors), the quad is rendered with the last color set by glColor, entirely ignoring the texture. I've looked for common reasons for texture mapping to fail, including this question, to no avail. The image file being loaded is 64x64, a valid…

Why is a texture coordinate of 1.0 getting beyond the edge of the texture?

Submitted by 本秂侑毒 on 2019-12-01 13:01:27
Question: I'm doing a color lookup using a texture to apply an effect to a picture. The lookup is a gradient map: I take the luminance of a fragment of the first texture, then look that up in a second texture. The second texture is 256x256, with gradients running horizontally and several different gradients from top to bottom: 32 horizontal stripes, each 8 pixels tall. My lookup uses the luminance on x; on y I select a gradient, and I target the center of the stripe to avoid crossover. My fragment shader…
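The underlying issue: texture coordinates address texel edges, not centers, so 1.0 is the outer edge of the last texel, where linear filtering starts blending with the neighboring texel (or the wrapped-around one). The center of texel i in an N-texel dimension is (i + 0.5)/N. A sketch of the stripe-center arithmetic, with hypothetical names:

```java
// Texture coordinates address texel edges: in an N-texel dimension the
// center of texel i is (i + 0.5) / N, and 1.0 is the outer edge of the
// last texel, where GL_LINEAR begins blending across the boundary.
public class TexelCenter {

    // v coordinate of the vertical middle of 0-based `stripe`, given
    // `stripeHeight`-pixel stripes in a texture `texHeight` pixels tall.
    static double stripeCenterV(int stripe, int stripeHeight, int texHeight) {
        return (stripe * stripeHeight + stripeHeight / 2.0) / texHeight;
    }

    public static void main(String[] args) {
        System.out.println(stripeCenterV(0, 8, 256));  // 0.015625, middle of the first stripe
        System.out.println(stripeCenterV(31, 8, 256)); // 0.984375, middle of the last stripe
    }
}
```

Sampling at these v values keeps the bilinear footprint entirely inside one 8-pixel stripe, which is what "targeting the center of the stripe" needs to guarantee.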

Mapping an image onto a set of coordinates

Submitted by 泄露秘密 on 2019-12-01 11:13:29
I have a typical source image with a 3:4 aspect ratio. I also have a set of coordinates in a separate image that I need to map the source onto. The coordinates are not perfectly rectangular; if anything, they define an irregular mesh. So, for example, (0,0) might map to (12,33), (120,0) => (127,36), (240,0) => (226,13), etc. My goal is to fit my source image onto the new shape by mapping the source coordinates to the destination and applying distortion. What are some ways to accomplish this? I'm using .NET but am fine calling out to e.g. ImageMagick. EDIT: As requested, here's a picture. The…
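One common approach is to treat the control points as a grid of quadrilateral cells and position each source point inside its cell by bilinear interpolation (in practice you iterate destination pixels and invert the mapping so the warp leaves no holes). A sketch for a single cell; the two lower corners below are invented for illustration, since the question only gives the top edge:

```java
// Warp via mesh cells: each cell is a quadrilateral, and a source point at
// fractional position (u, v) inside the cell maps to the bilinear blend of
// the cell's four destination corners.
public class MeshWarp {

    // Bilinear interpolation of four corner points at (u, v) in [0,1]^2.
    static double[] mapPoint(double u, double v,
                             double[] p00, double[] p10, double[] p01, double[] p11) {
        double x = (1 - u) * (1 - v) * p00[0] + u * (1 - v) * p10[0]
                 + (1 - u) * v * p01[0] + u * v * p11[0];
        double y = (1 - u) * (1 - v) * p00[1] + u * (1 - v) * p10[1]
                 + (1 - u) * v * p01[1] + u * v * p11[1];
        return new double[] {x, y};
    }

    public static void main(String[] args) {
        // Where (0,0) and (120,0) land, per the question; the lower two
        // corners are hypothetical control points.
        double[] p00 = {12, 33}, p10 = {127, 36}, p01 = {15, 130}, p11 = {130, 135};
        double[] mid = mapPoint(0.5, 0.5, p00, p10, p01, p11);
        System.out.printf("cell center maps to (%.1f, %.1f)%n", mid[0], mid[1]);
    }
}
```

ImageMagick can also do this kind of control-point warp directly via its -distort operators (e.g. Shepards or Perspective), which may be simpler than hand-rolling the resampling in .NET.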

How to normalize image coordinates for texture space in OpenGL?

Submitted by 爷,独闯天下 on 2019-12-01 10:36:24
Say I have an image of size 320x240. Sampling from a sampler2D with integer image coordinates ux, uy, I must normalize from the range [0, size] (where size is the width or height) to texture coordinates. Now, I wonder whether I should normalize like this: texture(image, vec2(ux/320.0, uy/240.0)), or like this: texture(image, vec2(ux/319.0, uy/239.0)), because ux = 0 … 319 and uy = 0 … 239. The latter actually covers the whole range [0, 1], correct? That means 0 corresponds to, e.g., the left-most pixels and 1 corresponds to the right-most pixels, right? Also, I want to maintain filtering, so I…
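Neither division is quite right if the goal is to sample texel centers: ux/320.0 hits each texel's left edge, and ux/319.0 stretches the coordinates so the last index lands on the texture's far edge rather than the last texel's center. With filtering enabled, the robust choice is the texel center, (ux + 0.5)/width. A sketch of the arithmetic, with hypothetical names:

```java
// 0.0 and 1.0 are the outer texel edges; the center of integer texel u in a
// `size`-texel dimension is (u + 0.5) / size, which is the coordinate that
// leaves linear filtering untouched by neighboring texels.
public class NormalizeCoords {

    static double texelCenter(int u, int size) {
        return (u + 0.5) / size;
    }

    public static void main(String[] args) {
        System.out.println(texelCenter(0, 320));   // 0.0015625, center of the first texel
        System.out.println(texelCenter(319, 320)); // 0.9984375, center of the last texel
    }
}
```

In the shader that becomes texture(image, vec2((ux + 0.5) / 320.0, (uy + 0.5) / 240.0)).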