texture

Cocos2d-x: wrappers for commonly used functionality

Submitted by 帅比萌擦擦* on 2019-12-03 06:27:55
Packaging_Kernel.h

#pragma once

#include <string>
#include <map>
#include <vector>
#include "cocos2d.h"
#include "ui\CocosGUI.h"
#include "SimpleAudioEngine.h"
#include "cocostudio\CocoStudio.h"
#include "cocos\editor-support\spine\SkeletonAnimation.h"
#include "cocos\platform\desktop\CCGLViewImpl-desktop.h"
#include "cocos\math\CCGeometry.h"
#include "cocos\editor-support\spine\extension.h"
#include "cocos\math\Vec2.h"
#include "cocos2d/extensions/GUI/CCScrollView/CCTableView.h"
#include <iostream>
#include <windows.h>
#include <tchar.h>

using namespace std;
using namespace cocos2d;
using …

Index expression must be constant - WebGL/GLSL error

Submitted by Anonymous (unverified) on 2019-12-03 03:05:02
Question: I'm having trouble accessing an array in a fragment shader using a non-constant int as the index. I've removed the formula as it wouldn't make much sense here anyway, but my code is meant to calculate the tileID based on the current pixel and use that to determine the color. Here's my code:

int tileID = <Insert formula here>;
vec3 colorTest;
int arrayTest[1024];

for (int x = 0; x < 1024; x++) {
    if (x == 1)
        arrayTest[x] = 1;
    else
        arrayTest[x] = 2;
}

if (arrayTest[tileID] == 1)
    colorTest = vec3(0.0, 1.0, 0.0);
else if (arrayTest[tileID] == 2) …

D3D11 screen desktop copy to ID3D11Texture2D

Submitted by Anonymous (unverified) on 2019-12-03 03:04:01
Question: I am writing a DLL plugin that will read the desktop frame buffer (the whole screen) and render it directly into a Texture2D pointer that is passed in. The goal is to keep everything in video memory (avoiding the cost of copying back to system memory and then to video memory again). I am able to pass the Texture2D (showing up as an ID3D11Texture2D), but I am having trouble grabbing the desktop frame buffer with D3D11. D3D9 offered GetFrontBufferData(), but it seems the D3D11 solution is to use GetBuffer(). My issue is about getting the IDXGISwapChain.
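On Windows 8 and later, a common route is the DXGI Desktop Duplication API rather than GetBuffer() on a swap chain you do not own. The sketch below is a hedged outline, not the poster's code: pDevice, pContext, and pTargetTex stand for the plugin's D3D11 device, immediate context, and the Texture2D passed in; error handling and most Release() calls are trimmed.

#include <d3d11.h>
#include <dxgi1_2.h>

// Hook up to the primary output and duplicate it.
IDXGIDevice* dxgiDevice = nullptr;
pDevice->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);
IDXGIAdapter* adapter = nullptr;
dxgiDevice->GetAdapter(&adapter);
IDXGIOutput* output = nullptr;
adapter->EnumOutputs(0, &output);                       // 0 = primary monitor
IDXGIOutput1* output1 = nullptr;
output->QueryInterface(__uuidof(IDXGIOutput1), (void**)&output1);
IDXGIOutputDuplication* duplication = nullptr;
output1->DuplicateOutput(pDevice, &duplication);

// Per frame: grab the desktop image and copy it, GPU-to-GPU, into the target texture.
DXGI_OUTDUPL_FRAME_INFO frameInfo;
IDXGIResource* desktopResource = nullptr;
if (SUCCEEDED(duplication->AcquireNextFrame(16, &frameInfo, &desktopResource))) {
    ID3D11Texture2D* desktopTex = nullptr;
    desktopResource->QueryInterface(__uuidof(ID3D11Texture2D), (void**)&desktopTex);
    // CopyResource stays in video memory, but both textures must have
    // matching size and format (the desktop is typically B8G8R8A8_UNORM).
    pContext->CopyResource(pTargetTex, desktopTex);
    desktopTex->Release();
    desktopResource->Release();
    duplication->ReleaseFrame();
}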

How to copy front-buffer data to a texture in DirectX 9

Submitted by Anonymous (unverified) on 2019-12-03 03:04:01
Question: I can't seem to find a way to create a texture from the surface data I acquire through the front-buffer data of my application. This is the code I'm fairly sure is working (DirectX 9, C++):

// capture screen
IDirect3DSurface9* pSurface;
g_pd3dDevice->CreateOffscreenPlainSurface(640, 480, D3DFMT_A8R8G8B8, D3DPOOL_SCRATCH, &pSurface, NULL);
g_pd3dDevice->GetFrontBufferData(0, pSurface);

Now that I've got my front-buffer data, I would like to create an IDirect3DTexture9 object from it. The function D3DXCreateTextureFromFileInMemory seemed the …
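A hedged sketch of one common route (not taken from the original post): instead of going through the in-memory file loader, create a plain texture of the same size, grab its level-0 surface, and copy the captured surface into it with D3DXLoadSurfaceFromSurface. The size and format below simply mirror the capture surface above; error checks are omitted.

#include <d3dx9.h>

IDirect3DTexture9* pTexture = nullptr;
g_pd3dDevice->CreateTexture(640, 480, 1, 0, D3DFMT_A8R8G8B8,
                            D3DPOOL_MANAGED, &pTexture, NULL);

IDirect3DSurface9* pTexSurface = nullptr;
pTexture->GetSurfaceLevel(0, &pTexSurface);

// Copies (and converts, if the formats differ) the front-buffer pixels
// into the texture's top mip level.
D3DXLoadSurfaceFromSurface(pTexSurface, NULL, NULL,
                           pSurface, NULL, NULL,
                           D3DX_DEFAULT, 0);
pTexSurface->Release();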

Get OpenGL max texture size

Submitted by Anonymous (unverified) on 2019-12-03 03:04:01
Question: I'm developing an Android app that works with bitmaps extensively, and I'm looking for a reliable way to get the maximum OpenGL texture size on different devices. I know the guaranteed minimum is 2048x2048, but that's not good enough, since there are already tablets with much higher resolutions (2560x1600, for example). So is there a reliable way to get this information? So far I've tried:

Canvas.getMaximumBitmapWidth() (returns 32766 instead of 2048)
GLES10.glGetIntegerv(GL10.GL_MAX_TEXTURE_SIZE ...) (returns 0)

I'm working …
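The query returning 0 is the usual symptom of no GL context being current on the calling thread. Below is a hedged NDK/C++ illustration of the idea (the app in question is presumably Java, so treat this only as a sketch of the technique, not the original code): create a throwaway 1x1 pbuffer context, make it current, then ask for GL_MAX_TEXTURE_SIZE.

#include <EGL/egl.h>
#include <GLES2/gl2.h>

GLint queryMaxTextureSize() {
    EGLDisplay display = eglGetDisplay(EGL_DEFAULT_DISPLAY);
    eglInitialize(display, nullptr, nullptr);

    const EGLint configAttribs[] = {
        EGL_RENDERABLE_TYPE, EGL_OPENGL_ES2_BIT,
        EGL_SURFACE_TYPE, EGL_PBUFFER_BIT,
        EGL_NONE
    };
    EGLConfig config;
    EGLint numConfigs = 0;
    eglChooseConfig(display, configAttribs, &config, 1, &numConfigs);

    // A tiny off-screen surface is enough; we only need a current context.
    const EGLint pbufferAttribs[] = { EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE };
    EGLSurface surface = eglCreatePbufferSurface(display, config, pbufferAttribs);

    const EGLint contextAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    EGLContext context = eglCreateContext(display, config, EGL_NO_CONTEXT, contextAttribs);
    eglMakeCurrent(display, surface, surface, context);

    GLint maxSize = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxSize);   // e.g. 2048, 4096, 8192 ...

    eglMakeCurrent(display, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
    eglDestroyContext(display, context);
    eglDestroySurface(display, surface);
    eglTerminate(display);
    return maxSize;
}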

THREE.js repeat wrapping texture in shader

Submitted by Anonymous (unverified) on 2019-12-03 03:03:02
Question: I want to repeat-wrap a texture in a THREE.js shader. The original texture image is: (image) I want it to repeat 4x4 times, which should look like: (image) But with the following code, it turns out to be: (image)

Vertex shader:

varying vec2 vUv;
uniform float textRepeat;

void main() {
    // pass the tiled texture coordinates to the fragment shader
    vUv = uv * textRepeat;
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}

Fragment shader:

varying vec2 vUv;
uniform sampler2D texture;

void main() {
    // sample the original texture
    gl_FragColor = texture2D(texture, vUv);
}

uniforms …

SDL2: Generate fully transparent texture

Submitted by Anonymous (unverified) on 2019-12-03 02:56:01
Question: How can I create a fully transparent texture with SDL_CreateTexture? By default I create the texture with:

SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888, SDL_TEXTUREACCESS_TARGET, x, y);

I then paint onto this texture by redirecting rendering output to it. However, when I finally render it to the screen, every (non-updated) pixel is black. I have tried different approaches, such as SDL_RenderClear(_Renderer); or drawing a transparent rect onto the created texture with different blend modes, but all I had …
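A hedged sketch of the approach that usually works: give the target texture an alpha-aware blend mode, then clear it to fully transparent while it is bound as the render target. Names mirror the snippet above; x and y stand for the desired texture size.

SDL_Texture* tex = SDL_CreateTexture(renderer, SDL_PIXELFORMAT_RGBA8888,
                                     SDL_TEXTUREACCESS_TARGET, x, y);
SDL_SetTextureBlendMode(tex, SDL_BLENDMODE_BLEND);  // so alpha survives compositing

SDL_SetRenderTarget(renderer, tex);
SDL_SetRenderDrawColor(renderer, 0, 0, 0, 0);       // RGBA = fully transparent
SDL_RenderClear(renderer);
// ... draw onto the texture here ...
SDL_SetRenderTarget(renderer, NULL);                // back to the default target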

Can I use OpenGL for off-screen rendering? [duplicate]

Submitted by Anonymous (unverified) on 2019-12-03 02:50:02
Question: This question already has an answer here: How to use GLUT/OpenGL to render to a file? (5 answers) I want to make a simple program that takes a 3D model and renders it into an image. Is there any way I can use OpenGL to render an image and put it into a variable that holds an image, rather than displaying it? I don't want to see what I'm rendering, I just want to save it. Is there any way to do this with OpenGL?

Answer 1: I'm assuming that you know how to draw stuff to the screen with OpenGL, and that you wrote a function such as drawStuff to …
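The usual answer is to render into a framebuffer object and read the pixels back into memory, never presenting anything to a window. A hedged sketch of that idea, assuming a GL context already exists (e.g. from a hidden GLUT/GLFW window), a loader such as GLEW or glad is initialized, and drawStuff() is the caller's existing draw function mentioned in the answer:

#include <vector>

GLuint fbo = 0, colorTex = 0, depthRb = 0;

// Color attachment: a texture that receives the rendered image.
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

// Depth attachment so the 3D model is depth-tested as usual.
glGenRenderbuffers(1, &depthRb);
glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, depthRb);

// Render off-screen, then read the pixels back into a plain buffer.
glViewport(0, 0, width, height);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
drawStuff();

std::vector<unsigned char> pixels(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
glBindFramebuffer(GL_FRAMEBUFFER, 0);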

How to share a Renderscript allocation with OpenGL in Android

Submitted by Anonymous (unverified) on 2019-12-03 02:50:02
Question: I have a RenderScript that processes an image and writes its output to an Allocation. I want to use this Allocation as a texture in my OpenGL program, but I don't know how to get a texture ID from the Allocation. I know I could use a graphics RenderScript instead, but since that has been deprecated, I assume there must be some other way to achieve the same result.

Answer 1: Specify USAGE_IO_OUTPUT when you create the Allocation. Assuming you are generating the texture data in a script, you would also add USAGE_SCRIPT. You can …

Create depth buffer histogram texture with GLSL

Submitted by Anonymous (unverified) on 2019-12-03 02:47:02
Question: I'm using the depth buffer of the current context to influence a texture I am displaying. The texture is one-dimensional and grayscale; from left to right it represents near to far. The more pixels lie at a certain depth, the brighter the texture is at that point, with black meaning no pixels are at that depth and white meaning all pixels are at that depth. Right now I have a solution that calls glReadPixels() on the depth buffer, analyzes it on the CPU, and then writes the result back to the texture. Naturally this is a real bottleneck in the application. I'm …
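One GPU-only alternative (a hedged sketch, not from the original question or any answer): scatter one point per screen pixel, let the vertex shader read the depth texture and place the point at its histogram bin, and accumulate counts with additive blending into a BINS x 1 float texture. Here compileProgram() is a hypothetical helper that compiles and links the two shader strings, depthTex is assumed to be the depth buffer made available as a texture, and uniform setup, VAO creation, and error checks are omitted.

const int BINS = 256;

static const char* vs = R"(
    #version 330 core
    uniform sampler2D depthTex;
    uniform ivec2 screenSize;
    void main() {
        ivec2 pix = ivec2(gl_VertexID % screenSize.x, gl_VertexID / screenSize.x);
        float d = texelFetch(depthTex, pix, 0).r;            // depth in [0,1]
        gl_Position = vec4(d * 2.0 - 1.0, 0.0, 0.0, 1.0);    // map bin to clip-space x
        gl_PointSize = 1.0;
    }
)";
static const char* fs = R"(
    #version 330 core
    out vec4 count;
    void main() { count = vec4(1.0); }                       // each point adds 1 to its bin
)";

GLuint prog = compileProgram(vs, fs);                        // hypothetical helper

// BINS x 1 float texture that receives the histogram, attached to an FBO.
GLuint histTex = 0, histFbo = 0;
glGenTextures(1, &histTex);
glBindTexture(GL_TEXTURE_2D, histTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, BINS, 1, 0, GL_RED, GL_FLOAT, NULL);
glGenFramebuffers(1, &histFbo);
glBindFramebuffer(GL_FRAMEBUFFER, histFbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, histTex, 0);

// Accumulate: one vertex per screen pixel, added into its bin.
glViewport(0, 0, BINS, 1);
glClear(GL_COLOR_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);                                 // additive accumulation
glEnable(GL_PROGRAM_POINT_SIZE);

glUseProgram(prog);
glBindTexture(GL_TEXTURE_2D, depthTex);
glDrawArrays(GL_POINTS, 0, screenWidth * screenHeight);      // a VAO must be bound in core profile

The resulting histTex can then be sampled directly as the one-dimensional brightness texture, so the data never leaves the GPU.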