We generate graphs for huge datasets. We are talking 4096 samples per second, and 10 minutes per graph. A simple calculation gives 4096 * 60 * 10 = 2457600 samples per line.
Mark Bessey already mentioned that you might lack the pixels to display the graph. But given your explanations, I assume you know what you are doing.
OpenGL has an orthographic mode in which the z-coordinate stays inside (0;1). There is no perspective projection; the polygons you draw are parallel to the screen clipping area.
DirectX has something similar. In OpenGL, it's set up with gluOrtho2D().
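For reference, a minimal fixed-function setup along those lines might look like this (just a sketch; `plotWidth`/`plotHeight` are placeholder names for your plot area, not anything from your code):

```cpp
#include <GL/gl.h>
#include <GL/glu.h>

// Switch the current GL context into a flat 2D coordinate system.
// plotWidth/plotHeight are placeholder viewport dimensions.
void setup2DProjection(int plotWidth, int plotHeight)
{
    glViewport(0, 0, plotWidth, plotHeight);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    // Map x to [0, plotWidth] and y to [0, plotHeight]; no perspective.
    gluOrtho2D(0.0, (GLdouble)plotWidth, 0.0, (GLdouble)plotHeight);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    glDisable(GL_DEPTH_TEST);   // no depth needed for a flat graph
}
```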
No you don't, not unless you've got a really, really large screen. Given that screen resolution is probably more like 1,000-2,000 pixels across, you really ought to consider decimating the data before you graph it. Graphing a hundred lines at 1,000 points per line probably won't be much of a problem, performance-wise.
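If decimation were acceptable, the usual trick is a min/max reduction per pixel column, which preserves the visual envelope of the signal. A rough sketch (the container, names, and layout here are assumptions, not your actual data structures):

```cpp
#include <algorithm>
#include <cstddef>
#include <limits>
#include <utility>
#include <vector>

// Reduce `samples` to one (min, max) pair per output column, so the plotted
// envelope matches what rendering every sample would have produced on screen.
// numColumns would typically be the pixel width of the plot area.
std::vector<std::pair<float, float>> decimateMinMax(const std::vector<float>& samples,
                                                    std::size_t numColumns)
{
    std::vector<std::pair<float, float>> columns(
        numColumns,
        {std::numeric_limits<float>::max(), std::numeric_limits<float>::lowest()});

    for (std::size_t i = 0; i < samples.size(); ++i) {
        std::size_t col = i * numColumns / samples.size();
        columns[col].first  = std::min(columns[col].first,  samples[i]);
        columns[col].second = std::max(columns[col].second, samples[i]);
    }
    return columns;
}
```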
First of all, we cannot omit any samples when rendering. This is impossible. This would mean the rendering is not accurate to the data the graph is based on. This really is a no-go area. Period.
Secondly, we are rendering all the samples. It might be that multiple samples end up on the same pixel, but we are still rendering them. The sample data is mapped onto the screen; thus, it is rendered. One can doubt the usefulness of this visualized data, but scientists (our customers) are actually demanding that we do it this way. And they have a good point, IMHO.
If your code gets unreadable because you're dealing with the 3D stuff directly, you need to write a thin adaptor layer that encapsulates all the 3D OpenGL stuff, and takes 2D data in a form convenient to your application.
Forgive me if I've missed something, and am preaching basic object oriented design to the choir. Just sayin'...
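For instance, such an adaptor might look something like this (a hypothetical sketch; the class and method names are made up for illustration):

```cpp
#include <GL/gl.h>
#include <cstddef>
#include <vector>

// Hypothetical thin adaptor: the application only ever sees 2D polylines;
// all the OpenGL state handling stays hidden behind this interface.
class Plot2D
{
public:
    // x/y are data coordinates already scaled into the ortho projection.
    void drawPolyline(const std::vector<float>& x, const std::vector<float>& y)
    {
        glBegin(GL_LINE_STRIP);
        for (std::size_t i = 0; i < x.size() && i < y.size(); ++i)
            glVertex2f(x[i], y[i]);   // implicit z = 0
        glEnd();
    }
};
```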
I wanted to point out that in addition to using VTK directly there are two other products built on VTK that may be of interest to you.
1) ParaView (paraview.org) is a user interface built on top of VTK that makes producing scientific visualizations much easier. You can render all the data you want, provided you have the hardware to handle it, and it supports MPI for multiple processors / cores / clusters. It is extensible via user-created plugins and uses automated tools for project building and compiling.
2) ParaViewGeo (paraviewgeo.mirarco.org) is a geology and mining exploration derivative of ParaView produced by the company I work for. It has built-in support for reading file formats that ParaView does not, such as Gocad, Datamine, Geosoft, SGems, and others. More importantly, we often work with other groups who have an interest in scientific viz with a loosely-tied-to-mining deliverable, such as our recent work with a group doing Finite/Discrete element modelling. It might be worth checking out.
In both cases (PV and PVG) your data is considered separate from your view of that data, and as such you will never "render" all of your data (since you wouldn't likely have a monitor large enough to do so), but rest assured it will all "be there", processed from your data set as you expected. If you run additional filters on your data, only what can be seen will be "rendered", but the filters will compute on ALL of your data, which, although it may not all be visible at once, will all exist in memory.
If you're looking for numbers, today I computed three regular grids of 8 million cells in PVG. One contained a 7-tuple vector property (7x 8 million double values), the other two each contained a scalar property (1x 8 million double values each) for a total of 72 million double values in memory. I believe the memory footprint was close to 500MB but I also had a 400,000 point set where each point had a 7-tuple vector property and some other miscellaneous data available as well.
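To make the data-versus-view separation concrete, here is a minimal VTK pipeline sketch (the sphere source and resolutions are just placeholders standing in for a real reader of your data): the filter updates over the whole dataset in memory, while rendering is a separate downstream stage that only draws what fits on screen.

```cpp
#include <iostream>
#include <vtkSmartPointer.h>
#include <vtkSphereSource.h>
#include <vtkElevationFilter.h>

int main()
{
    // Data source: the full dataset lives in the pipeline, independent of any view.
    auto source = vtkSmartPointer<vtkSphereSource>::New();
    source->SetThetaResolution(500);
    source->SetPhiResolution(500);

    // Filter: when updated, it processes the entire dataset,
    // regardless of how much of it will ever be visible on screen.
    auto elevation = vtkSmartPointer<vtkElevationFilter>::New();
    elevation->SetInputConnection(source->GetOutputPort());
    elevation->Update();

    // All of the filtered data is in memory here; mappers/actors that
    // actually rasterize it are a separate, later stage of the pipeline.
    std::cout << "Points processed: "
              << elevation->GetOutput()->GetNumberOfPoints() << "\n";
    return 0;
}
```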
You really don't have to worry about the Z-axis if you don't want to. In OpenGL (for example), you can specify XY vertices (with implicit Z=0), turn off the z-buffer, use a non-perspective (orthographic) projection matrix, and hey presto, you're in 2D.
OpenGL is happy to render in 2D if you set up the projection to be ortho (no z). Also, you should decimate your data: rendering the same pixel 1000 times is a waste of GPU. Spend your time up front on a performant multi-threaded decimator. Then be sure to blast large arrays at the GPU using vertex arrays or vertex buffer objects (clearly I'm an OpenGL kind of guy).
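As a sketch of the "blast large arrays at the GPU" part with a vertex buffer object (assuming a GL context and an extension loader such as GLEW are already initialized; the interleaved x,y layout is an assumption about your data):

```cpp
#include <GL/glew.h>   // or any loader that exposes the buffer-object entry points
#include <vector>

// Upload one channel's (decimated) samples as a VBO and draw it as a line strip.
// `points` is assumed to hold interleaved x,y pairs in plot coordinates.
void drawChannel(const std::vector<GLfloat>& points)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER,
                 points.size() * sizeof(GLfloat),
                 points.data(),
                 GL_STATIC_DRAW);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, nullptr);   // 2 floats per vertex, tightly packed

    glDrawArrays(GL_LINE_STRIP, 0, (GLsizei)(points.size() / 2));

    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glDeleteBuffers(1, &vbo);
}
```

For static graphs you would keep the buffer around between frames instead of deleting it each call; GL_STATIC_DRAW already hints to the driver that the data won't change often.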