What is the fastest way to draw a 2D array of color triplets on screen?

星月不相逢 2021-02-14 18:24

The target language is C/C++ and the program only has to work on Linux, but platform-independent solutions are obviously preferred. I run Xorg; XVideo and OpenGL are available.

6 Answers
  • 2021-02-14 18:26

    If you're trying to dump pixels to screen, you'll probably want to make use of SDL's 'surface' facility. For the greatest performance, try to arrange for the input data to be in a layout similar to the output surface. If possible, steer clear of setting pixels in the surface one at a time.

    SDL is not a hardware interface in its own right, but rather a portability layer that works well on top of many other display layers, including DirectX, OpenGL, DirectFB, and Xlib, so you get very good portability, and it's a very thin layer on top of those technologies, so you pay very little performance overhead.
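    To illustrate the "similar layout" point: SDL's default 32-bit surfaces commonly use an ARGB8888 pixel layout, so packing your triplets into that form up front avoids a per-pixel conversion at blit time. Here is a minimal plain-C sketch (no SDL needed to compile; the pack_argb8888 helper name and the ARGB8888 assumption are mine, for illustration):

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Pack one RGB triplet into the 0xAARRGGBB layout used by 32-bit
     * ARGB8888 surfaces, forcing the alpha channel to opaque. */
    static uint32_t pack_argb8888(uint8_t r, uint8_t g, uint8_t b)
    {
        return (uint32_t)0xFFu << 24 | (uint32_t)r << 16 |
               (uint32_t)g << 8 | (uint32_t)b;
    }

    int main(void)
    {
        printf("0x%08X\n", pack_argb8888(255, 0, 0)); /* pure red: 0xFFFF0000 */
        return 0;
    }
    ```

    With the pixels already in the surface's own format, a whole frame can be copied one row at a time instead of going through per-pixel SDL_MapRGB calls.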

  • 2021-02-14 18:27

    Other options apart from SDL (as mentioned)

    • Cairo surfaces with glitz (in C, works on all platforms but best on Linux)
    • Qt Canvas (in C++, multi-platform)
    • The raw OpenGL API or Qt OpenGL (you need to know OpenGL)
    • Pure Xlib/XCB if you want to account for non-OpenGL platforms

    My suggestion

    1. Qt if you prefer C++
    2. Cairo if you prefer C
  • 2021-02-14 18:33

    I did this a while back using C and OpenGL and got very good performance by creating a full-screen quad, then using texture mapping to transfer the bitmap onto its face.

    Here's some example code, hope you can make use of it.

    #include <GL/glut.h>
    
    #define WIDTH 1024
    #define HEIGHT 768
    
    /* Rows first: glTexImage2D expects HEIGHT rows of WIDTH pixels each. */
    unsigned char texture[HEIGHT][WIDTH][3];
    
    void renderScene() {    
    
        // render the texture here
    
        glEnable (GL_TEXTURE_2D);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    
        glTexImage2D (
            GL_TEXTURE_2D,
            0,
            GL_RGB,
            WIDTH,
            HEIGHT,
            0,
            GL_RGB,
            GL_UNSIGNED_BYTE,
            &texture[0][0][0]
        );
    
        glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0, -1.0);
            glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0, -1.0);
            glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0,  1.0);
            glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0,  1.0);
        glEnd();
    
        glFlush();
        glutSwapBuffers();
    }
    
    int main(int argc, char **argv) {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    
        glutInitWindowPosition(100, 100);
        glutInitWindowSize(WIDTH, HEIGHT);
        glutCreateWindow(" ");
    
        glutDisplayFunc(renderScene);
    
        glutMainLoop();
    
        return 0;
    }
    
  • 2021-02-14 18:35

    The fastest way to draw a 2D array of color triplets:

    1. Use float (not byte, not double) storage. Each triplet consists of 3 floats from 0.0 to 1.0. This is the format GPUs implement most optimally (but use greyscale GL_LUMINANCE storage when you don't need hue - much faster!)
    2. Upload the array to a texture with glTexImage2D
    3. Make sure that the GL_TEXTURE_MIN_FILTER texture parameter is set to GL_NEAREST
    4. Map the texture to an appropriate quad.
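    As an illustration of step 1, here is a small plain-C sketch (the rgb_bytes_to_float helper is my own naming) that converts 8-bit RGB triplets into the float 0.0-1.0 layout described above:

    ```c
    #include <stddef.h>
    #include <stdio.h>

    /* Convert n RGB triplets from 8-bit channels to floats in [0.0, 1.0],
     * the layout handed to glTexImage2D with GL_RGB / GL_FLOAT. */
    static void rgb_bytes_to_float(const unsigned char *in, float *out, size_t n)
    {
        for (size_t i = 0; i < 3 * n; i++)
            out[i] = in[i] / 255.0f;
    }

    int main(void)
    {
        unsigned char pixel[3] = { 255, 128, 0 };
        float f[3];
        rgb_bytes_to_float(pixel, f, 1);
        printf("%.3f %.3f %.3f\n", f[0], f[1], f[2]); /* 1.000 0.502 0.000 */
        return 0;
    }
    ```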

    This method is slightly faster than glDrawPixels (which for some reason tends to be badly implemented) and a lot faster than using the platform's native blitting.

    Also, it gives you the option to repeatedly do step 4 without step 2 when your pixmap hasn't changed, which of course is much faster.

    Libraries that provide only slow native blitting include:

    • Windows' GDI
    • SDL on X11 (on Windows it provides a fast OpenGL backend when using SDL_HWSURFACE)
    • Qt

    As to the FPS you can expect: drawing a 1024x768 texture on an Intel Core 2 Duo with Intel graphics gives about 60 FPS if the texture changes every frame, and >100 FPS if it doesn't.
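    For context on those numbers, a rough back-of-the-envelope estimate (my own arithmetic, not from the answer): re-uploading a 1024x768 float-RGB texture every frame moves 1024 * 768 * 3 * 4 bytes = 9 MiB per frame, i.e. about 540 MiB/s at 60 FPS, well within the upload bandwidth of even integrated graphics of that era.

    ```c
    #include <stdio.h>

    int main(void)
    {
        const long width = 1024, height = 768;
        const long bytes_per_pixel = 3 * 4;  /* 3 channels, 4-byte floats */
        const long frame_bytes = width * height * bytes_per_pixel;
        const double mib = 1024.0 * 1024.0;

        printf("per frame: %.1f MiB\n", frame_bytes / mib);          /* 9.0 */
        printf("at 60 FPS: %.1f MiB/s\n", 60.0 * frame_bytes / mib); /* 540.0 */
        return 0;
    }
    ```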

    But just do it yourself and see ;)

  • 2021-02-14 18:44

    the "how many fps can i expect" question can not be answered seriously. not even if you name the grandpa of the guy who did the processor layouting. it depends on tooooo many variables.

    • How many triplets do you need to render?
    • Do they change between frames?
    • At what rate (you won't notice the change if it's more often than 30 times a second)?
    • Do all of the pixels change all of the time, or just some pixels in some areas?
    • Do you look at the pixels without any perspective distortion?
    • Do you always see all the pixels?
    • Depending on the version of the OpenGL driver, you will get different results.

    This could go on forever; the answer depends absolutely on your algorithm. If you stick with the OpenGL approach, you could also try different extensions (http://www.opengl.org/registry/specs/NV/pixel_data_range.txt comes to mind, for example) to see if one fits your needs better, although the already mentioned glTexSubImage2D() method is quite fast.
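    The "do all of the pixels change, or just some?" question matters because a partial update can be uploaded with glTexSubImage2D instead of re-sending the whole texture. Here is a plain-C sketch (the dirty_rect helper and the 8x8 frame size are mine, for illustration) that computes the bounding rectangle of changed pixels between two frames - exactly the region you would hand to glTexSubImage2D:

    ```c
    #include <stdio.h>
    #include <string.h>

    #define W 8
    #define H 8

    /* Find the bounding box of pixels that differ between two RGB frames.
     * Returns 1 and fills x/y/w/h if anything changed, 0 otherwise. */
    static int dirty_rect(const unsigned char a[H][W][3],
                          const unsigned char b[H][W][3],
                          int *x, int *y, int *w, int *h)
    {
        int minx = W, miny = H, maxx = -1, maxy = -1;
        for (int row = 0; row < H; row++)
            for (int col = 0; col < W; col++)
                if (memcmp(a[row][col], b[row][col], 3) != 0) {
                    if (col < minx) minx = col;
                    if (col > maxx) maxx = col;
                    if (row < miny) miny = row;
                    if (row > maxy) maxy = row;
                }
        if (maxx < 0)
            return 0;
        *x = minx; *y = miny; *w = maxx - minx + 1; *h = maxy - miny + 1;
        return 1;
    }

    int main(void)
    {
        static unsigned char prev[H][W][3], next[H][W][3];
        next[2][3][0] = 255;  /* change two pixels */
        next[5][6][1] = 255;
        int x, y, w, h;
        if (dirty_rect(prev, next, &x, &y, &w, &h))
            printf("dirty region: x=%d y=%d w=%d h=%d\n", x, y, w, h);
        /* -> dirty region: x=3 y=2 w=4 h=4 */
        return 0;
    }
    ```

    You would then upload only that rectangle with glTexSubImage2D, setting GL_UNPACK_ROW_LENGTH via glPixelStorei so the driver knows the stride of the full-width source rows.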

  • 2021-02-14 18:46

    How many FPS can I expect on 1024x768?

    The answer to that question depends on so many factors that it's impossible to tell.
