Graph rendering using 3D acceleration

鱼传尺愫 2021-02-15 17:56

We generate graphs for huge datasets. We are talking 4096 samples per second, and 10 minutes per graph. A simple calculation gives 4096 * 60 * 10 = 2,457,600 samples per line graph.
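
For scale, a quick back-of-envelope on the raw data volume (assuming 8-byte double samples; the figures come straight from the numbers above):

    # Raw size of one line graph, assuming each sample is an 8-byte double.
    samples_per_line = 4096 * 60 * 10               # 2,457,600 samples
    raw_bytes = samples_per_line * 8                # 19,660,800 bytes
    print(f"{raw_bytes / 2**20:.2f} MiB per line")  # 18.75 MiB per line

So the samples themselves are modest in memory; pushing that many vertices through the renderer interactively is the harder part.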

12 Answers
  •  执念已碎
    2021-02-15 18:36

    I wanted to point out that, in addition to using VTK directly (a minimal sketch of that follows the two items below), there are two other products built on VTK that may be of interest to you.

    1) ParaView (paraview.org) is a user interface built on top of VTK that makes producing scientific visualizations much easier. You can render all the data you want, provided you have the hardware to handle it, and it supports MPI for multiple processors / cores / clusters. It is extensible via user-created plugins and uses automated tools for project building and compiling.

    2) ParaViewGeo (paraviewgeo.mirarco.org) is a geology and mining exploration derivative of ParaView produced by the company I work for. It has built-in support for reading file formats that ParaView does not, such as Gocad, Datamine, Geosoft, SGems, and others. More importantly, we often work with other groups who have an interest in scientific viz with a loosely-tied-to-mining deliverable, such as our recent work with a group doing Finite/Discrete element modelling. It might be worth checking out.
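
    Since the question is about pushing a long sample line through hardware-accelerated rendering, here is a minimal sketch of the "use VTK directly" route via its Python bindings (pip-installable as "vtk"). The signal below is synthetic and every name is illustrative; treat it as a starting point, not a tuned implementation.

        # Render one 2,457,600-sample line through VTK's OpenGL pipeline.
        import numpy as np
        import vtk
        from vtk.util import numpy_support

        n = 4096 * 60 * 10                 # samples per line, as in the question
        t = np.linspace(0.0, 600.0, n)     # 10 minutes of timestamps
        y = np.sin(2.0 * np.pi * 0.5 * t)  # placeholder signal; use your data

        # Pack the samples as XYZ points (z = 0) in one contiguous array.
        xyz = np.zeros((n, 3))
        xyz[:, 0] = t
        xyz[:, 1] = y
        points = vtk.vtkPoints()
        points.SetData(numpy_support.numpy_to_vtk(xyz, deep=True))

        # A single polyline visiting every sample in order. The id loop is
        # slow in pure Python but keeps the sketch obvious.
        polyline = vtk.vtkPolyLine()
        polyline.GetPointIds().SetNumberOfIds(n)
        for i in range(n):
            polyline.GetPointIds().SetId(i, i)
        cells = vtk.vtkCellArray()
        cells.InsertNextCell(polyline)

        polydata = vtk.vtkPolyData()
        polydata.SetPoints(points)
        polydata.SetLines(cells)

        # Standard mapper -> actor -> renderer chain; VTK renders via OpenGL.
        mapper = vtk.vtkPolyDataMapper()
        mapper.SetInputData(polydata)
        actor = vtk.vtkActor()
        actor.SetMapper(mapper)
        renderer = vtk.vtkRenderer()
        renderer.AddActor(actor)
        window = vtk.vtkRenderWindow()
        window.AddRenderer(renderer)
        interactor = vtk.vtkRenderWindowInteractor()
        interactor.SetRenderWindow(window)
        window.Render()
        interactor.Start()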

    In both cases (PV and PVG), your data is considered separate from your view of that data. As such, you will never "render" all of your data (you likely wouldn't have a monitor large enough to do so), but rest assured it will all "be there", processed from your data set as you expected. If you run additional filters on your data, only what can be seen will be "rendered", but the filters will compute on ALL of your data, which, although it may not all be visible at once, will all exist in memory.
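
    To make that data/view split concrete, here is a hedged sketch using ParaView's scripted interface (run with pvpython; paraview.simple ships with ParaView). The Wavelet source and its RTData array are ParaView's stock example; the extent is chosen only to reach roughly 8 million points.

        # The full ~8 million point grid lives in memory on the data side...
        from paraview.simple import (Contour, GetActiveViewOrCreate, Render,
                                     Show, Wavelet)

        wavelet = Wavelet()
        wavelet.WholeExtent = [-100, 100, -100, 100, -100, 100]  # 201^3 points

        # ...the contour filter computes over ALL of it...
        contour = Contour(Input=wavelet)
        contour.ContourBy = ['POINTS', 'RTData']
        contour.Isosurfaces = [157.0]

        # ...but only the extracted isosurface geometry is sent to the screen.
        view = GetActiveViewOrCreate('RenderView')
        Show(contour, view)
        Render(view)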

    If you're looking for numbers: today I computed three regular grids of 8 million cells each in PVG. One contained a 7-tuple vector property (7 × 8 million double values), and the other two each contained a scalar property (1 × 8 million double values each), for a total of 72 million double values in memory. I believe the memory footprint was close to 500 MB, but I also had a 400,000-point set loaded, where each point had a 7-tuple vector property, along with some other miscellaneous data.
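
    Those figures are easy to sanity-check (assuming 8-byte doubles and ignoring VTK's per-array overhead):

        # Raw double storage for the datasets described above.
        grids = 7 * 8_000_000 + 2 * 1 * 8_000_000  # 72,000,000 doubles
        points = 400_000 * 7                       # extra 7-tuple point data
        total_bytes = (grids + points) * 8
        print(f"{total_bytes / 2**20:.0f} MiB")    # ~571 MiB

    which is the same order as the footprint reported above.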
