The target language is C/C++ and the program only has to work on Linux, but platform-independent solutions are obviously preferred. I run Xorg; XVideo and OpenGL are available.
If you're trying to dump pixels to the screen, you'll probably want to make use of SDL's 'surface' facility. For the greatest performance, try to arrange for the input data to be in a layout similar to the output surface's. If possible, steer clear of setting pixels in the surface one at a time.
SDL is not a hardware interface in its own right, but rather a portability layer that works well on top of many other display layers, including DirectX, OpenGL, DirectFB, and Xlib. You get very good portability, and since it's a very thin layer on top of those technologies, you pay very little performance overhead.
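If it helps, here's a minimal sketch of that approach against the classic SDL 1.2 API; the window size and the gradient pattern are placeholders, and a real program would copy rows from its own pixel source:

#include <SDL/SDL.h>

int main(void) {
    SDL_Surface *screen;
    int x, y;

    if (SDL_Init(SDL_INIT_VIDEO) != 0)
        return 1;

    /* Ask for a 32-bit software surface; SDL converts if the display differs. */
    screen = SDL_SetVideoMode(1024, 768, 32, SDL_SWSURFACE);
    if (screen == NULL) {
        SDL_Quit();
        return 1;
    }

    if (SDL_MUSTLOCK(screen))
        SDL_LockSurface(screen);

    /* Write whole rows directly into the surface's pixel buffer instead of
       making one call per pixel; 'pitch' is the length of a row in bytes. */
    for (y = 0; y < screen->h; y++) {
        Uint32 *row = (Uint32 *)((Uint8 *)screen->pixels + y * screen->pitch);
        for (x = 0; x < screen->w; x++)
            row[x] = SDL_MapRGB(screen->format, x & 0xFF, y & 0xFF, 0);
    }

    if (SDL_MUSTLOCK(screen))
        SDL_UnlockSurface(screen);

    SDL_Flip(screen);   /* present the finished frame */
    SDL_Delay(2000);
    SDL_Quit();
    return 0;
}

Writing whole rows through the pitch-aware pointer is what keeps this fast, and SDL_MapRGB handles whatever pixel format the display actually gave you.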
There are other options apart from SDL, as mentioned in the other answers. My suggestion:
I did this a while back using C and OpenGL, and got very good performance by creating a full-screen-sized quad and then using texture mapping to transfer the bitmap onto the face of the quad.
Here's some example code; I hope you can make use of it.
#include <GL/glut.h>

#define WIDTH 1024
#define HEIGHT 768

/* Row-major pixel buffer: HEIGHT rows of WIDTH RGB triplets,
   matching the layout glTexImage2D expects below. */
unsigned char texture[HEIGHT][WIDTH][3];

void renderScene(void) {
    glEnable(GL_TEXTURE_2D);
    /* Without this, the default minification filter needs mipmaps,
       which we never upload, so nothing would be drawn. */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(
        GL_TEXTURE_2D,
        0,                 /* mipmap level */
        GL_RGB,            /* internal format */
        WIDTH,
        HEIGHT,
        0,                 /* border, must be 0 */
        GL_RGB,            /* pixel format */
        GL_UNSIGNED_BYTE,  /* pixel type */
        &texture[0][0][0]
    );

    /* Map the texture onto a quad covering the whole viewport. */
    glBegin(GL_QUADS);
        glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
    glEnd();

    glutSwapBuffers();
}

int main(int argc, char **argv) {
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowPosition(100, 100);
    glutInitWindowSize(WIDTH, HEIGHT);
    glutCreateWindow(" ");
    glutDisplayFunc(renderScene);
    glutMainLoop();
    return 0;
}
The fastest way to draw a 2D array of color triplets:

1. Create an OpenGL texture (you can use GL_LUMINANCE storage when you don't need hue - much faster!).
2. Upload your array to it with glTexImage2D.
3. Make sure the GL_TEXTURE_MIN_FILTER texture parameter is set to GL_NEAREST.
4. Draw a screen-filling quad textured with it.
This method is slightly faster than glDrawPixels
(which for some reason tends to be badly implemented) and a lot faster than using the platform's native blitting.
Also, it gives you the option to repeatedly do step 4 without step 2 when your pixmap hasn't changed, which of course is much faster.
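In code, the one-time setup plus cheap per-change updates could look roughly like this. This is a sketch, not taken from the answer itself; WIDTH, HEIGHT, and the 'pixels' buffer are assumed to be your own, as in the GLUT example above:

/* Steps 1-3, done once: create the texture and allocate its storage. */
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, WIDTH, HEIGHT, 0,
             GL_RGB, GL_UNSIGNED_BYTE, NULL);

/* Step 2, repeated only when the pixmap has changed: overwrite the
   existing storage instead of reallocating it. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, WIDTH, HEIGHT,
                GL_RGB, GL_UNSIGNED_BYTE, pixels);

/* Step 4, every frame: draw the screen-filling textured quad as in
   the GLUT example above. */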
Quite a few libraries provide only this kind of slow native blitting.
As to the FPS you can expect: drawing a 1024x768 texture on an Intel Core 2 Duo with Intel graphics gives about 60 FPS if the texture changes every frame, and over 100 FPS if it doesn't.
But just do it yourself and see ;)
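If you do, a tiny hypothetical frame counter for the GLUT example above is enough to get numbers; call countFrame() at the end of renderScene() and register glutIdleFunc(glutPostRedisplay) so frames render back to back:

#include <stdio.h>
#include <GL/glut.h>

/* Prints the achieved frame rate once per second,
   based on GLUT's millisecond clock. */
void countFrame(void) {
    static int frames = 0, last = 0;
    int now = glutGet(GLUT_ELAPSED_TIME);
    frames++;
    if (now - last >= 1000) {
        printf("%.1f FPS\n", frames * 1000.0 / (now - last));
        frames = 0;
        last = now;
    }
}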
the "how many fps can i expect" question can not be answered seriously. not even if you name the grandpa of the guy who did the processor layouting. it depends on tooooo many variables.
this could go on for ever, the answer depends absolutly on your algorithm. if you stick to the opengl approach you could also try different extensions (http://www.opengl.org/registry/specs/NV/pixel_data_range.txt comes to mind for example), to see if it fits your needs better; although the already mentioned glTexSubImage() method is quite fast.
How many FPS can I expect at 1024x768?
The answer to that question depends on so many factors that it's impossible to tell.