Question
I am using OpenGL (fixed-function pipeline), drawing potentially hundreds of thousands of points and labeling each with a text label. This question is about whether I'm doing this in a reasonable way, and what I can expect in terms of speed.
The text labels are drawn by creating a texture coordinate rectangle for each character, and texturing the rectangles using a small font bitmap (each character is about 5x13 pixels in the texture).
In a test file, I have about 158,000 points, given in longitude and latitude, so this lon/lat space is my "model space". I read those points in and create an OpenGL vertex buffer for them. Then each point gets a label that is typically three or four characters long, so let's say 3.5 characters on average. The labels are drawn in screen coordinates (ortho projection mode). For each character I create a texture coordinate rect to grab the right pixels for the character, and a rectangle in screen coordinates into which the character will be drawn. Each of these two sets of rects goes into its own vertex buffer. So that's 158k * 3.5 = 553k character rects, each with four corners: about 2.2 million corner points, or 4.4 million individual coordinate numbers for the drawing rects, and another 4.4 million numbers for the texture coordinates.
When it comes time to render, I need to (at least I believe this is the only way to do it) update the screen coordinates of all those drawing rects to match the current screen position of all the model points. That means that for each of the 158k model points I have to compute the projected (screen) coordinates from the model (world) coordinates of the point, and then set the four corner coordinates for each of its three or four character rects. So basically I'm updating all 4.4 million of those numbers on each render, which takes about 0.3 seconds per render.
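For concreteness, the bookkeeping above works out roughly as follows (a back-of-the-envelope sketch using the figures from the question):

```python
points = 158_000
avg_chars = 3.5                        # average label length
quads = int(points * avg_chars)        # one textured quad per character
corner_vertices = quads * 4            # four corners per quad
floats_per_vbo = corner_vertices * 2   # x,y (or u,v) per corner

# quads           -> 553,000
# corner_vertices -> 2,212,000
# floats_per_vbo  -> 4,424,000 float32 values in each vertex buffer
```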
QUESTION NUMBER ONE: Does this sound like the right/necessary way to handle labeling of points in opengl? It would be ideal if there were some way to say, "automatically render into this set of rect points, which are linked to this model point but treated as screen offsets from the projected model point". Then I wouldn't have to update the draw rects on each render. But there is no such thing, right?
QUESTION NUMBER TWO: In addition to the time spent updating all those screen rects before each render, the render itself takes about one full second when all 158k labels are shown on the screen (which obviously is not a useful user experience, but I'm just trying to understand the speeds here). As I zoom in and fewer points/labels are actually drawn, the render time becomes proportionally shorter. I'm trying to understand whether, on my average/modern laptop with an average/modern GPU, that full second sounds like a reasonable amount of time to render those 158k * 3.5 = 553k textured quads. I know people talk about "millions of triangles" not being an obstacle, but I'm wondering whether, with the texturing, the speed I'm seeing is reasonable/expected.
Thanks for any help.
Added code below. Note it's the position_labels call on each render that I'd like to get rid of.
SCREEN_VERTEX_DTYPE = np.dtype(
[ ( "x_lb", np.float32 ), ( "y_lb", np.float32 ),
( "x_lt", np.float32 ), ( "y_lt", np.float32 ),
( "x_rt", np.float32 ), ( "y_rt", np.float32 ),
( "x_rb", np.float32 ), ( "y_rb", np.float32 ) ]
)
TEXTURE_COORDINATE_DTYPE = np.dtype(
[ ( "u_lb", np.float32 ), ( "v_lb", np.float32 ),
( "u_lt", np.float32 ), ( "v_lt", np.float32 ),
( "u_rt", np.float32 ), ( "v_rt", np.float32 ),
( "u_rb", np.float32 ), ( "v_rb", np.float32 ) ]
)
# screen_vertex_data is numpy array of SCREEN_VERTEX_DTYPE
# texcoord_data is numpy array of TEXTURE_COORDINATE_DTYPE
# not shown: code to fill initial vals of screen_vertex_data and texcoord_data
self.vbo_screen_vertexes = gl_vbo.VBO( screen_vertex_data )
self.vbo_texture_coordinates = gl_vbo.VBO( texcoord_data )
...
# then on each render:
def render( self ):
    self.position_labels()

    gl.glEnable( gl.GL_TEXTURE_2D )
    gl.glBindTexture( gl.GL_TEXTURE_2D, self.font_texture )

    gl.glEnableClientState( gl.GL_VERTEX_ARRAY )
    self.vbo_screen_vertexes.bind()
    gl.glVertexPointer( 2, gl.GL_FLOAT, 0, None )

    gl.glEnableClientState( gl.GL_TEXTURE_COORD_ARRAY )
    self.vbo_texture_coordinates.bind()
    gl.glTexCoordPointer( 2, gl.GL_FLOAT, 0, None )

    # set up an orthogonal projection (one GL unit per pixel)
    gl.glMatrixMode( gl.GL_PROJECTION )
    gl.glPushMatrix()
    gl.glLoadIdentity()
    window_size = application.GetClientSize()
    gl.glOrtho( 0, window_size[ 0 ], 0, window_size[ 1 ], -1, 1 )
    gl.glMatrixMode( gl.GL_MODELVIEW )
    gl.glPushMatrix()
    gl.glLoadIdentity()

    # four vertices per character rect (np.alen is deprecated; len works the same)
    vertex_count = len( self.character_coordinates_data ) * 4
    gl.glDrawArrays( gl.GL_QUADS, 0, vertex_count )

    # undo the orthogonal projection
    gl.glMatrixMode( gl.GL_PROJECTION )
    gl.glPopMatrix()
    gl.glMatrixMode( gl.GL_MODELVIEW )
    gl.glPopMatrix()

    self.vbo_texture_coordinates.unbind()
    gl.glDisableClientState( gl.GL_TEXTURE_COORD_ARRAY )
    self.vbo_screen_vertexes.unbind()
    gl.glDisableClientState( gl.GL_VERTEX_ARRAY )
    gl.glBindTexture( gl.GL_TEXTURE_2D, 0 )
    gl.glDisable( gl.GL_TEXTURE_2D )
def position_labels( self ):
    window_size = application.GetClientSize()
    world_size = ( rect.width( application.world_rect ), rect.height( application.world_rect ) )
    world_to_screen_factor_x = float( window_size[ 0 ] ) / float( world_size[ 0 ] )
    world_to_screen_factor_y = float( window_size[ 1 ] ) / float( world_size[ 1 ] )
    wr_lower_left = application.world_rect[ 0 ]
    shift_pixels_x = ( wr_lower_left[ 0 ] + 180.0 ) * world_to_screen_factor_x
    shift_pixels_y = ( wr_lower_left[ 1 ] + 90.0 ) * world_to_screen_factor_y

    # map to screen coordinates
    self.character_coordinates_data.screen_x = ( self.character_coordinates_data.world_x + 180.0 ) * world_to_screen_factor_x - shift_pixels_x
    self.character_coordinates_data.screen_y = ( self.character_coordinates_data.world_y + 90.0 ) * world_to_screen_factor_y - shift_pixels_y

    # rebuild the four corners of every character rect from the projected anchor
    screen_vertex_data = self.vbo_screen_vertexes.data
    screen_vertex_data.x_lb = self.character_coordinates_data.screen_x + self.character_coordinates_data.screen_offset_x
    screen_vertex_data.y_lb = self.character_coordinates_data.screen_y + self.character_coordinates_data.screen_offset_y - self.character_coordinates_data.screen_height
    screen_vertex_data.x_lt = screen_vertex_data.x_lb
    screen_vertex_data.y_lt = screen_vertex_data.y_lb + self.character_coordinates_data.screen_height
    screen_vertex_data.x_rt = screen_vertex_data.x_lb + self.character_coordinates_data.screen_width
    screen_vertex_data.y_rt = screen_vertex_data.y_lb + self.character_coordinates_data.screen_height
    screen_vertex_data.x_rb = screen_vertex_data.x_lb + self.character_coordinates_data.screen_width
    screen_vertex_data.y_rb = screen_vertex_data.y_lb

    # np.alen is deprecated; len works the same on arrays
    self.vbo_screen_vertexes[ : len( screen_vertex_data ) ] = screen_vertex_data
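The lon/lat-to-pixel mapping inside position_labels can be pulled out into a standalone, testable function. A sketch (the function name is made up, and the application/rect helpers are replaced by plain arguments):

```python
import numpy as np

def lonlat_to_screen(lon, lat, window_size, world_lower_left, world_size):
    # Same linear mapping as position_labels: scale lon/lat into pixels,
    # then shift so world_lower_left lands at screen (0, 0).
    fx = float(window_size[0]) / float(world_size[0])
    fy = float(window_size[1]) / float(world_size[1])
    shift_x = (world_lower_left[0] + 180.0) * fx
    shift_y = (world_lower_left[1] + 90.0) * fy
    screen_x = (np.asarray(lon, dtype=np.float64) + 180.0) * fx - shift_x
    screen_y = (np.asarray(lat, dtype=np.float64) + 90.0) * fy - shift_y
    return screen_x, screen_y
```

For example, viewing the whole world (lower-left (-180, -90), size (360, 180)) in a 720x360 window, lon/lat (0, 0) maps to the window center (360, 180).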
Answer 1:
The projection mode and screen coordinates are two distinct things. You can certainly choose projection parameters so that OpenGL units match screen pixels, but that isn't required. Just a clarification.
To question one: OpenGL is merely a drawing API; there's no higher-level functionality. So yes, it's up to you to keep those rects in sync. Luckily, you only have to do the math once: zooming, translating, rotating, etc. can all be done by manipulating the transformation matrices. However, each change in the view requires a full redraw.
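One hedged aside: within the fixed-function pipeline there really is no such linkage, but if stepping outside it is an option, a small vertex shader gives exactly the "screen offsets from the projected model point" behavior, so the CPU-side position_labels pass disappears. A sketch (GLSL 1.20, using the compatibility matrix built-ins; the attribute and uniform names are made up):

```python
# Each corner vertex stores its anchor's world lon/lat plus a constant
# per-corner offset in pixels; the GPU re-projects the anchor on every
# draw and adds the offset after projection.
LABEL_VERTEX_SHADER = """
attribute vec2 world_pos;      // anchor point, lon/lat
attribute vec2 pixel_offset;   // glyph-corner offset in pixels
uniform vec2 viewport_size;    // window size in pixels
void main() {
    vec4 clip = gl_ModelViewProjectionMatrix * vec4(world_pos, 0.0, 1.0);
    // convert the pixel offset to clip space and apply it post-projection
    clip.xy += pixel_offset * (2.0 / viewport_size) * clip.w;
    gl_Position = clip;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
"""
```

With this, the screen-vertex VBO would be filled once (world position + fixed pixel offsets) and never touched again as the view changes.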
To question two: It all boils down to fill rate and to not processing stuff that isn't visible. One interesting thing is that while GPUs can process millions of triangles per second, they do it best when served easily digestible chunks, i.e. batches that fit into the caches. I've found that batches of 1000 to 3000 vertices each work best. The total size of the accessed texture also has some influence, not just the part you're actually sampling. That said, your figures sound reasonable for an unoptimized drawing method.
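A sketch of that batching idea, assuming the VBOs are bound as in render() above (the helper name is made up):

```python
def quad_batches(total_vertices, batch_size=2000):
    """Yield (first, count) ranges covering total_vertices in
    cache-friendly chunks; batch_size should be a multiple of 4 so
    no GL_QUADS primitive is split across draw calls."""
    first = 0
    while first < total_vertices:
        count = min(batch_size, total_vertices - first)
        yield first, count
        first += count

# usage inside render(), replacing the single glDrawArrays call:
# for first, count in quad_batches(vertex_count):
#     gl.glDrawArrays(gl.GL_QUADS, first, count)
```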
Source: https://stackoverflow.com/questions/5905055/technique-and-speed-expectations-for-opengl-text-labeling