Why do I get bad performance with SDL2 and SDL_RenderCopy inside a double for loop over all pixels?

。_饼干妹妹 Submitted on 2019-12-18 06:49:57

Question


I am programming a raycasting game using SDL2. When drawing the floor, I call SDL_RenderCopy once per pixel. This creates a bottleneck that drops the framerate below 10 FPS. I am looking for ways to boost performance but can't seem to find any.

Here's a stripped-down version of the code that causes the drop:

// ren (SDL_Renderer*) and tx (SDL_Texture*) are created during SDL setup, omitted here
int main() {
    while (true) {
        for (int x = 0; x < 800; x++) {
            for (int y = 0; y < 600; y++) {
                SDL_Rect src = { 0, 0, 1, 1 };
                SDL_Rect dst = { x, y, 1, 1 };
                SDL_RenderCopy(ren, tx, &src, &dst); // this drops the framerate below 10
            }
        }
        SDL_RenderPresent(ren);
    }
}

Answer 1:


You should probably be using texture streaming for this. Basically, you create an SDL_Texture with the access type SDL_TEXTUREACCESS_STREAMING, and then each frame you 'lock' the texture, update the pixels you require, and 'unlock' the texture again. The whole texture is then drawn with a single SDL_RenderCopy call.

  • LazyFoo Example - http://lazyfoo.net/tutorials/SDL/42_texture_streaming/index.php
  • Exploring Galaxy - http://slouken.blogspot.co.uk/2011/02/streaming-textures-with-sdl-13.html
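As a rough sketch (not code from the linked tutorials), the render loop above could be reworked along these lines. Here ren is assumed to be an already-created SDL_Renderer*, running is a hypothetical main-loop flag, and compute_floor_pixel() is a placeholder for whatever ARGB value the raycaster produces for each floor pixel:

#include <SDL2/SDL.h>

// Sketch only: assumes `ren` is a valid SDL_Renderer* created elsewhere.
SDL_Texture *tex = SDL_CreateTexture(ren,
                                     SDL_PIXELFORMAT_ARGB8888,
                                     SDL_TEXTUREACCESS_STREAMING,
                                     800, 600);

while (running) {
    void *pixels;
    int pitch;  // bytes per row of the locked texture, may be larger than 800 * 4

    // Lock the whole texture for CPU-side writing.
    if (SDL_LockTexture(tex, NULL, &pixels, &pitch) == 0) {
        for (int y = 0; y < 600; y++) {
            Uint32 *row = (Uint32 *)((Uint8 *)pixels + y * pitch);
            for (int x = 0; x < 800; x++) {
                row[x] = compute_floor_pixel(x, y);  // placeholder for your raycasting result
            }
        }
        SDL_UnlockTexture(tex);  // hands the updated pixel data back to the renderer
    }

    // One copy for the whole frame instead of 480,000 per-pixel copies.
    SDL_RenderCopy(ren, tex, NULL, NULL);
    SDL_RenderPresent(ren);
}

The per-pixel work still happens, but it is now a plain write into a pixel buffer; the expensive renderer call runs only once per frame.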

Other than that, calling SDL_RenderCopy 480,000 times per frame is always going to kill your framerate.




Answer 2:


You are calling SDL_RenderCopy() 800 * 600 = 480,000 times in each frame! It is normal for performance to drop with that many calls.



Source: https://stackoverflow.com/questions/25214556/why-do-i-get-bad-performance-with-sdl2-and-sdl-rendercopy-inside-a-double-for-lo
