We generate graphs for huge datasets. We are talking 4096 samples per second and 10 minutes per graph. A simple calculation gives 4096 * 60 * 10 = 2457600 samples per line.
A really popular toolkit for scientific visualization is VTK, and I think it suits your needs:
It's a high-level API, so you won't have to program against OpenGL directly (VTK is built on top of it). There are interfaces for C++, Python, Java, and Tcl. I think this would keep your codebase pretty clean.
You can import all kinds of datasets into VTK (there are tons of examples from medical imaging to financial data).
VTK is pretty fast, and you can distribute VTK graphics pipelines across multiple machines if you want to do very large visualizations.
Regarding:
This means we render about 25M samples on a single screen.
[...]
As this is scientific data, we cannot omit any samples. Seriously, this is not an option. Do not even start thinking about it.
You can render large datasets in VTK by sampling and by using LOD (level-of-detail) models. That is, you'd have a model that shows a lower-resolution version when you're zoomed out and a higher-resolution version as you zoom in. This is how a lot of large-dataset rendering is done.
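Here is a minimal Python sketch of that idea using vtkLODActor, which by default falls back to a decimated point cloud (and finally a bounding box) while you interact and redraws the full data once the camera stops moving. The synthetic sine-wave signal, the point count, and the 100,000-point LOD budget below are just placeholders for your real samples, not something from your setup:

```python
import numpy as np
import vtk
from vtk.util import numpy_support

# Stand-in for one 10-minute line at 4096 Hz: 2,457,600 (time, value) samples.
n = 4096 * 60 * 10
t = np.linspace(0.0, 600.0, n)
y = np.sin(2.0 * np.pi * 5.0 * t) + 0.1 * np.random.randn(n)
xyz = np.column_stack([t, y, np.zeros(n)]).astype(np.float32)

points = vtk.vtkPoints()
points.SetData(numpy_support.numpy_to_vtk(xyz, deep=True))
poly = vtk.vtkPolyData()
poly.SetPoints(points)

# Turn the bare points into renderable vertex cells.
verts = vtk.vtkVertexGlyphFilter()
verts.SetInputData(poly)

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(verts.GetOutputPort())

# vtkLODActor renders a sub-sampled point cloud while you pan/zoom and the
# full-resolution data once interaction stops; nothing is removed from `poly`.
actor = vtk.vtkLODActor()
actor.SetMapper(mapper)
actor.SetNumberOfCloudPoints(100_000)  # budget for the interactive LOD

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()
```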
You don't need to eliminate points from your actual dataset, but you can certainly refine what is displayed incrementally as the user zooms in. It does you no good to render 25 million points to a single screen when the user can't possibly process all that data. I would recommend that you take a look at both the VTK library and the VTK user guide, as there's some invaluable information in there on ways to visualize large datasets.
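To make the "refine the display, don't delete the data" point concrete, here's a hedged sketch using vtkMaskPoints: only the copy handed to the mapper is thinned, while the source dataset stays intact, and in an interactive viewer you would lower the on-ratio (or bypass the filter) for the region the user zooms into. The random point source and the 1-in-100 ratio are stand-ins for illustration only:

```python
import vtk

source = vtk.vtkPointSource()       # stand-in for your full 2.5M-sample dataset
source.SetNumberOfPoints(2_457_600)
source.SetRadius(10.0)

# Decimate only the copy that reaches the renderer; the source data is untouched.
mask = vtk.vtkMaskPoints()
mask.SetInputConnection(source.GetOutputPort())
mask.SetOnRatio(100)                # draw every 100th point when zoomed out
mask.GenerateVerticesOn()           # emit vertex cells so the points are drawable

mapper = vtk.vtkPolyDataMapper()
mapper.SetInputConnection(mask.GetOutputPort())
actor = vtk.vtkActor()
actor.SetMapper(mapper)

renderer = vtk.vtkRenderer()
renderer.AddActor(actor)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()
```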