Question
I'm writing a simple image viewer and am implementing a pan and zoom feature (using mouse dragging and mouse wheel scrolling respectively). I've successfully implemented the pan (easy mode) and a naive 'into top left corner' zoom.
I'd now like to refine the zoom such that the coordinate of the user's mouse when zooming becomes the 'focal point': that is, when zooming, the pan is updated so that the pixel (of the image) under the user's mouse stays the same (so that they're really zooming into that area).
The image is viewed by overriding the paintEvent on an otherwise plain QWidget.
Try as I might with intuitive approaches, I can not seem to achieve the correct zoom behaviour.
An attribute scale represents the current level of zoom (a scale of 2 implies the image is viewed at double its true size, 0.5 implies half, and scale > 0), and position is the coordinate of the top-left corner of the image region currently viewed (via panning).
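For context, here is a minimal sketch of how such a widget's state might be set up; it is an assumption for illustration (the class name ImageView and the default values are mine, not from the post or the linked pastebin), but the attribute names match the snippets below.

from PyQt4 import QtCore, QtGui

class ImageView(QtGui.QWidget):            # hypothetical class name
    def __init__(self, image, parent=None):
        super(ImageView, self).__init__(parent)
        self.image = image                 # a QtGui.QImage to display
        self.scale = 1.0                   # current zoom factor (> 0)
        self.position = (0, 0)             # top-left of the visible region, in scaled-image coordinates
        self.pressed = None                # mouse position at the start of a drag (None when not dragging)
        self.anchor = self.position        # view position at the start of a drag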
Here's how the actual image display is performed:
def paintEvent(self, event):
    painter = QtGui.QPainter()
    painter.begin(self)
    painter.drawImage(0, 0,
                      self.image.scaled(
                          self.image.width() * self.scale,
                          self.image.height() * self.scale,
                          QtCore.Qt.KeepAspectRatio),
                      self.position[0], self.position[1])
    painter.end()
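Note that in this drawImage overload, QPainter.drawImage(x, y, image, sx, sy), the last two arguments are the offset into the source image at which copying begins, so self.position selects which part of the scaled image lands at the widget's top-left corner. In other words, position is expressed in scaled-image coordinates, which is what the maths below has to respect.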
Here is the panning code (relatively simple). pressed and anchor are used entirely for panning, and refer to the position of the initial mouse press and the image view position at that time, respectively:
def mousePressEvent(self, event):
    self.pressed = event.pos()
    self.anchor = self.position

def mouseReleaseEvent(self, event):
    self.pressed = None

def mouseMoveEvent(self, event):
    if self.pressed:
        dx, dy = event.x() - self.pressed.x(), event.y() - self.pressed.y()
        self.position = (self.anchor[0] - dx, self.anchor[1] - dy)
        self.repaint()
Here is the zooming code without attempting to adjust the pan. It results in everything shrinking or growing from/to the top-left corner of the screen.
def wheelEvent(self, event):
    oldscale = self.scale
    self.scale += event.delta() / 1200.0
    if self.scale < 0.1:
        self.scale = oldscale
    self.repaint()
Here is the zooming code with panning to preserve (anchor) the top left corner of the visible region. When you zoom in, the top-left pixel on the screen will not change.
def wheelEvent(self, event):
    oldscale = self.scale
    self.scale += event.delta() / 1200.0
    if self.scale < 0.1:
        self.scale = oldscale
    self.position = (self.position[0] * (self.scale / oldscale),
                     self.position[1] * (self.scale / oldscale))
    self.repaint()
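To see why this anchors the top-left: the original-image pixel shown at the widget's (0, 0) is position / scale, and multiplying position by self.scale / oldscale leaves that ratio unchanged. A quick standalone check with made-up numbers (illustrative only, not from the post):

oldscale, newscale = 1.0, 2.0
position = (300.0, 200.0)
pixel_before = (position[0] / oldscale, position[1] / oldscale)     # image pixel at the widget's top-left: (300, 200)
position = (position[0] * (newscale / oldscale),
            position[1] * (newscale / oldscale))                    # (600, 400)
pixel_after = (position[0] / newscale, position[1] / newscale)      # still (300, 200)
assert pixel_before == pixel_after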
I want the above effect, but with the anchored point at the user's mouse when scrolling. Here is my attempt, which only works very slightly: the zooming is still not as I intended, but it moves into the general region of the mouse without anchoring it. In fact, keeping the mouse in the same position and zooming in seems to follow a curved path, panning right and then panning left.
def wheelEvent(self, event):
    oldscale = self.scale
    self.scale += event.delta() / 1200.0
    if self.scale < 0.1:
        self.scale = oldscale
    oldpoint = self.mapFromGlobal(QtGui.QCursor.pos())
    dx, dy = oldpoint.x() - self.position[0], oldpoint.y() - self.position[1]
    newpoint = (oldpoint.x() * (self.scale / oldscale),
                oldpoint.y() * (self.scale / oldscale))
    self.position = (newpoint[0] - dx, newpoint[1] - dy)
    self.repaint()
The theory behind this is that, before the zoom, the pixel 'under' the mouse is distances dx and dy from the top-left corner (position). After the zoom, we calculate the new position of this pixel and force it under the same screen coordinate by adjusting self.position to be dx and dy west and north of the pixel.
I'm not entirely sure where I'm going wrong: I suspect that the mapping of oldpoint into my screen coordinates is somehow off, or, more likely, that my mathematics is wrong because I've confused pixel and screen coordinates.
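Tracing the attempt with concrete (purely illustrative, made-up) numbers shows that the image pixel under the cursor does drift, consistent with the curved-path behaviour described above; oldpoint is a widget coordinate, yet it gets scaled as if it were an image coordinate:

oldscale, newscale = 1.0, 1.1
cursor = (400, 300)                       # mouse position in widget coordinates
position = (100, 50)                      # pan before the zoom
# the image pixel currently under the cursor (from the paintEvent above:
# screen pixel s shows scaled-image pixel s + position)
hovered_before = ((cursor[0] + position[0]) / oldscale,
                  (cursor[1] + position[1]) / oldscale)             # (500.0, 350.0)
# the attempt: scale the widget coordinate itself, then shift back by dx, dy
dx, dy = cursor[0] - position[0], cursor[1] - position[1]
newpoint = (cursor[0] * (newscale / oldscale), cursor[1] * (newscale / oldscale))
position = (newpoint[0] - dx, newpoint[1] - dy)                     # (140.0, 80.0)
hovered_after = ((cursor[0] + position[0]) / newscale,
                 (cursor[1] + position[1]) / newscale)              # ~(490.9, 345.5), not (500, 350)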
I've tried a few intuitive variations and nothing comes close to the intended anchoring.
I imagine this is quite a common task for file viewers (since most seem to zoom like this), yet I'm finding it quite difficult to research the algorithms.
Here's the full code (requires PyQt4) to tinker with the zooms:
http://pastebin.com/vvpdZy9g
Any help is appreciated!
Answer 1:
OK, I managed to get it working:
def wheelEvent(self, event):
    oldscale = self.scale
    self.scale += event.delta() / 1200.0
    if self.scale < 0.1:
        self.scale = oldscale
    screenpoint = self.mapFromGlobal(QtGui.QCursor.pos())
    dx, dy = screenpoint.x(), screenpoint.y()
    oldpoint = (screenpoint.x() + self.position[0], screenpoint.y() + self.position[1])
    newpoint = (oldpoint[0] * (self.scale / oldscale),
                oldpoint[1] * (self.scale / oldscale))
    self.position = (newpoint[0] - dx, newpoint[1] - dy)
    self.repaint()
The logic here:
- we get the mouse's position on the screen (screenpoint), which by definition is the distance between the anchored pixel and the top-left corner of the widget
- we use screenpoint and position to find the coordinate of the mouse in terms of the image's plane (i.e. the 2D index of the hovered pixel), as oldpoint
- applying our scaling, we calculate the new 2D index of that pixel (newpoint)
- we want this pixel on our screen, but not in the top left: we want it dx and dy from the top left, which gives the new position
The problem was indeed a trivial confusion between image and display coordinates.
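Put another way, the invariant is that the original-image coordinate under the cursor, (screenpoint + position) / scale, must be the same before and after the zoom. A small standalone sketch of that relation (the helper name zoom_position and the numbers are made up for illustration):

def zoom_position(position, cursor, oldscale, newscale):
    # image-plane coordinate of the hovered pixel (in scaled-image units, old scale)
    oldpoint = (cursor[0] + position[0], cursor[1] + position[1])
    # the same pixel's coordinate after rescaling
    newpoint = (oldpoint[0] * (newscale / oldscale),
                oldpoint[1] * (newscale / oldscale))
    # keep that pixel under the cursor by panning the difference
    return (newpoint[0] - cursor[0], newpoint[1] - cursor[1])

# quick check: the hovered original-image pixel is unchanged by the zoom
position, cursor, oldscale, newscale = (100.0, 50.0), (400.0, 300.0), 1.0, 1.1
before = ((cursor[0] + position[0]) / oldscale, (cursor[1] + position[1]) / oldscale)
position = zoom_position(position, cursor, oldscale, newscale)
after = ((cursor[0] + position[0]) / newscale, (cursor[1] + position[1]) / newscale)
assert all(abs(a - b) < 1e-9 for a, b in zip(before, after))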
Source: https://stackoverflow.com/questions/20942586/controlling-the-pan-to-anchor-a-point-when-zooming-into-an-image