Graph rendering using 3D acceleration

front-end · unresolved · 12 answers · 1884 views
鱼传尺愫 2021-02-15 17:56

We generate graphs for huge datasets. We are talking 4096 samples per second, and 10 minutes per graph. A simple calculation gives 4096 * 60 * 10 = 2457600 samples per line.

12 Answers
  •  别那么骄傲
    2021-02-15 18:51

    A really popular toolkit for scientific visualization is VTK, and I think it suits your needs:

    1. It's a high-level API, so you won't have to use OpenGL directly (VTK is built on top of OpenGL). There are interfaces for C++, Python, Java, and Tcl; I think this would keep your codebase pretty clean. (See the minimal pipeline sketch after this list.)

    2. You can import all kinds of datasets into VTK (there are tons of examples from medical imaging to financial data).

    3. VTK is pretty fast, and you can distribute VTK graphics pipelines across multiple machines if you want to do very large visualizations.

    4. Regarding:

      This means we render about 25M samples on a single screen.

      [...]

      As this is scientific data, we cannot omit any samples. Seriously, this is not an option. Do not even start thinking about it.
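
    To give a feel for item 1, here is a minimal sketch of the high-level pipeline in Python (one of the bindings listed above): a polyline of samples goes through the standard mapper/actor/renderer chain with no direct OpenGL calls. The synthetic sine signal and the 1000-point count are placeholders, not your data:

      import math
      import vtk

      # Build a polyline from (x, y) samples; a real run would feed the
      # ~2.4M samples of one line here instead of a synthetic sine.
      n = 1000
      points = vtk.vtkPoints()
      cells = vtk.vtkCellArray()
      cells.InsertNextCell(n)
      for i in range(n):
          points.InsertNextPoint(i * 0.01, math.sin(i * 0.05), 0.0)
          cells.InsertCellPoint(i)

      poly = vtk.vtkPolyData()
      poly.SetPoints(points)
      poly.SetLines(cells)

      # The standard mapper -> actor -> renderer -> window chain.
      mapper = vtk.vtkPolyDataMapper()
      mapper.SetInputData(poly)
      actor = vtk.vtkActor()
      actor.SetMapper(mapper)

      renderer = vtk.vtkRenderer()
      renderer.AddActor(actor)
      window = vtk.vtkRenderWindow()
      window.AddRenderer(renderer)
      interactor = vtk.vtkRenderWindowInteractor()
      interactor.SetRenderWindow(window)
      window.Render()
      interactor.Start()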

    You can render large datasets in VTK by sampling and by using LOD models. That is, you'd have a model where you see a lower-resolution version from far out, but if you zoom in you would see a higher-resolution version. This is how a lot of large dataset rendering is done.
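
    A minimal sketch of that idea in Python: vtkLODActor is VTK's stock LOD mechanism, and the 100000-point cloud size and 15 fps target below are arbitrary picks, not tuned values:

      import vtk

      mapper = vtk.vtkPolyDataMapper()      # feed it the full-resolution
      # polyline from the earlier sketch via mapper.SetInputData(poly).

      # Drop-in replacement for vtkActor: when the window's desired update
      # rate can't be met with the full geometry, vtkLODActor swaps in a
      # random point-cloud stand-in (and a bounding box below that).
      actor = vtk.vtkLODActor()
      actor.SetMapper(mapper)
      actor.SetNumberOfCloudPoints(100000)  # size of the point-cloud LOD

      renderer = vtk.vtkRenderer()
      renderer.AddActor(actor)
      window = vtk.vtkRenderWindow()
      window.AddRenderer(renderer)
      window.SetDesiredUpdateRate(15.0)     # ~15 fps while interacting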

    You don't need to eliminate points from your actual dataset, but you can surely incrementally refine it when the user zooms in. It does you no good to render 25 million points to a single screen when the user can't possibly process all that data. I would recommend that you take a look at both the VTK library and the VTK user guide, as there's some invaluable information in there on ways to visualize large datasets.
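
    One common way to do that incremental refinement for 1-D signal plots, independent of VTK, is a per-pixel min/max envelope: the full dataset stays untouched, and each redraw collapses the visible slice to one (min, max) pair per screen column, so no extreme value is visually lost. A sketch assuming NumPy; the function name, bucket count, and random signal are illustrative only:

      import numpy as np

      def minmax_decimate(samples, buckets):
          """Collapse a long 1-D signal to one (min, max) pair per screen
          column so the drawn envelope matches the full-resolution plot."""
          n = (len(samples) // buckets) * buckets
          grid = samples[:n].reshape(buckets, -1)
          out = np.empty(2 * buckets, dtype=samples.dtype)
          out[0::2] = grid.min(axis=1)   # interleave min and max so the
          out[1::2] = grid.max(axis=1)   # polyline sweeps the envelope
          return out

      signal = np.random.randn(2_457_600)       # one 10-minute line
      coarse = minmax_decimate(signal, 2000)    # ~4000 pts for ~2000 px
      # On zoom, re-run over just the visible slice for a sharper view:
      refined = minmax_decimate(signal[100_000:200_000], 2000)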
