We generate graphs for huge datasets: 4096 samples per second, and 10 minutes per graph. A simple calculation gives 4096 * 60 * 10 = 2,457,600 samples per line.
No you don't, not unless you've got a really, really large screen. Given that the screen resolution is probably more like 1,000-2,000 pixels across, you really ought to consider decimating the data before you graph it. Graphing a hundred lines at 1,000 points per line probably won't be much of a problem, performance-wise.
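A minimal sketch of one common decimation approach, min/max downsampling, which keeps the visual envelope of the trace intact (function name and NumPy usage are illustrative, not from the original discussion):

```python
import numpy as np

def minmax_decimate(samples, n_buckets):
    """Reduce `samples` to 2*n_buckets points by keeping the minimum and
    maximum of each bucket, so peaks and troughs still appear on screen."""
    samples = np.asarray(samples)
    # Trim so the array splits evenly into buckets.
    usable = len(samples) - (len(samples) % n_buckets)
    buckets = samples[:usable].reshape(n_buckets, -1)
    lo = buckets.min(axis=1)
    hi = buckets.max(axis=1)
    # Interleave min and max so the plotted line sweeps through both extremes.
    return np.column_stack((lo, hi)).ravel()

# 2,457,600 samples reduced to 2,000 plotted points per line:
trace = np.random.randn(4096 * 60 * 10)
decimated = minmax_decimate(trace, 1000)
print(len(decimated))
```

The idea is that a pixel column can only show the vertical span of the samples it covers anyway, so plotting each bucket's min and max produces a visually near-identical trace at a fraction of the cost.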
First of all, we cannot omit any samples when rendering. That would mean the rendering is no longer accurate to the data the graph is based on, and that really is a no-go area for us. Period.
Secondly, we are rendering all the samples. Multiple samples may end up on the same pixel, but we still render each one: every sample is converted to a point on the screen, and is thus rendered. One can doubt the usefulness of the visualized data, but scientists (our customers) are actually demanding we do it this way. And they have a good point, IMHO.
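For what it's worth, the point about multiple samples landing on the same pixel can be quantified. A small sketch (the horizontal resolution of 1,500 pixels is an assumed figure, not from the thread) counts how many samples fall into each pixel column when all 2,457,600 samples are drawn:

```python
import numpy as np

def samples_per_column(n_samples, width):
    # Map each sample index to its pixel column, then count occupancy.
    cols = np.arange(n_samples) * width // n_samples
    return np.bincount(cols, minlength=width)

counts = samples_per_column(4096 * 60 * 10, 1500)
print(counts.sum())   # every sample is drawn somewhere
print(counts.min())   # roughly 1,638 samples share each pixel column
```

So every sample does reach the canvas, but on the order of 1,600 of them overdraw each column, which is exactly why the decimation suggestion above loses nothing visually.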