Question:
I created a C++ application with an 800x600 window that successfully draws a few objects in QML, using Qt Quick 2 elements as well as Qt 3D objects: the QML code draws a couple of green/yellow rectangles using Qt Quick 2 Rectangle elements inside a Scene2D. The 2D scene is then blitted onto one face of a 3D cube so that it is displayed within the 3D world. Finally, a blue SphereMesh from Qt 3D is rendered at the center, as the screenshot above demonstrates.
I've been trying to resize the 3D cube (the one the 2D UI is rendered onto) so that it has the same size as the window, but I can't find a way to do it programmatically.
So the question is: how do I resize or scale the 3D cube so that it is automatically adjusted to the size of the window?
I'm looking for a solution where the cube covers the same number of pixels as the window. For instance, in an 800x600 window I would like to see an 800x600 green rectangle.
Here is what I tried: I can adjust the value of camZ by hand, which is the distance from the Camera to the center of the 3D world, and kinda eyeball it, but that's not a precise solution: if the window is later changed to a different size, I would need to do a lot of testing again to figure out what the new value of camZ must be.
Any ideas?
main.cpp:
#include <QGuiApplication>
#include <QQmlContext>
#include <Qt3DQuickExtras/qt3dquickwindow.h>
#include <Qt3DQuick/QQmlAspectEngine>
int main(int argc, char **argv)
{
QGuiApplication app(argc, argv);
Qt3DExtras::Quick::Qt3DQuickWindow view;
view.setSource(QUrl("qrc:/main.qml"));
auto rootContext = view.engine()->qmlEngine()->rootContext();
rootContext->setContextProperty("_window", &view);
view.resize(800, 600);
view.show();
return app.exec();
}
main.qml:
import Qt3D.Core 2.12
import Qt3D.Render 2.12
import Qt3D.Extras 2.12
import Qt3D.Input 2.12
import QtQuick 2.0
import QtQuick.Scene2D 2.9
import QtQuick.Controls 1.4
import QtQuick.Layouts 1.2
Entity
{
id: sceneRoot
property int w: _window.width
property int h: _window.height
property real camZ: 1000
/* setup camera */
Camera {
id: mainCamera
projectionType: CameraLens.PerspectiveProjection
fieldOfView: 45
aspectRatio: _window.width / _window.height
nearPlane: 0.01
farPlane: 1000000.0
position: Qt.vector3d( 0.0, 0.0, sceneRoot.camZ )
viewCenter: Qt.vector3d( 0.0, 0.0, 0.0 )
upVector: Qt.vector3d( 0.0, 1.0, 0.0 )
}
components: [
RenderSettings {
activeFrameGraph: ForwardRenderer {
camera: mainCamera
clearColor: "white"
}
pickingSettings.pickMethod: PickingSettings.TrianglePicking
},
InputSettings {}
]
/* setup a 3D cube to be used as the 2D drawing surface for all Qt Quick 2 stuff */
Entity {
id: drawingSurface
CuboidMesh {
id: planeMesh
}
Transform {
id: planeTransform
translation: Qt.vector3d(0, 0, 0)
scale3D: Qt.vector3d(sceneRoot.w, sceneRoot.h, 1)
}
TextureMaterial {
id: planeMaterial
texture: offscreenTexture // created by qmlTexture below
}
// picked up by Scene2D’s "entities" property and used as a source for events
ObjectPicker {
id: planePicker
hoverEnabled: false
dragEnabled: false
}
components: [ planeMesh, planeMaterial, planeTransform, planePicker ]
}
/* setup Scene2D offscreen texture to be used as canvas by Qt Quick 2 */
Scene2D {
id: qmlTexture
output: RenderTargetOutput {
attachmentPoint: RenderTargetOutput.Color0
texture: Texture2D {
id: offscreenTexture
width: sceneRoot.w
height: sceneRoot.h
format: Texture.RGBA8_UNorm
generateMipMaps: true
magnificationFilter: Texture.Linear
minificationFilter: Texture.LinearMipMapLinear
wrapMode {
x: WrapMode.ClampToEdge
y: WrapMode.ClampToEdge
}
}
}
mouseEnabled: false
entities: [ drawingSurface ]
/* Qt Quick 2 rendering */
Rectangle {
width: offscreenTexture.width
height: offscreenTexture.height
x: 0
y: 0
border.color: "red"
color: "green"
Component.onCompleted: {
console.log("Outter rectangle size: " + width + "x" + height + " at " + x + "," + y);
}
Rectangle {
id: innerRect
height: parent.height*0.6
width: height
x: (parent.width/2) - (width/2)
y: (parent.height/2) - (height/2)
border.color: "red"
color: "yellow"
transform: Rotation { origin.x: innerRect.width/2; origin.y: innerRect.height/2; angle: 45}
Component.onCompleted: {
console.log("Inner rectangle size: " + width + "x" + height + " at " + x + "," + y);
}
}
}
} // Scene2D
/* add light source at the same place as the camera */
Entity {
PointLight {
id: light
color: "white"
intensity: 1
constantAttenuation: 1.0
linearAttenuation: 0.0
}
Transform {
id: lightTransform
translation: Qt.vector3d(0.0, 0.0, sceneRoot.camZ)
}
components: [ light, lightTransform ]
}
/* display 3D object */
Entity {
SphereMesh {
id: mesh
radius: 130
}
PhongMaterial {
id: material
ambient: "blue"
}
Transform {
id: transform
translation: Qt.vector3d(0, 0, 0)
}
components: [ mesh, material, transform ]
}
} // sceneRoot
Add these modules to your .pro file:
QT += qml quick 3dquick 3dquickextras
Answer 1:
Usually, when you want a texture to cover the whole screen, you use an orthographic projection. In contrast to a perspective projection, objects always appear the same size on screen no matter their distance from the camera. This type of projection is often used to visualize 3D plans of buildings, or to render UI elements in 3D.
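To see why this gives pixel-exact sizes, recall the standard orthographic mapping (textbook projection math, nothing Qt-specific): a world coordinate x is mapped to normalized device coordinates as

x_ndc = 2 * (x - left) / (right - left) - 1

independently of its depth. With left = -w/2 and right = w/2, a plane that is w units wide therefore spans exactly the full viewport width, i.e. w pixels in a w-pixel-wide window.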
The idea is that you now have two framegraph branches:
- one that draws the background image (the textured plane)
- one that draws all the other objects
RenderSurfaceSelector
|
Viewport
|
-------------------------------------------
| | | |
ClearBuffers LayerFilter ClearBuffers LayerFilter
| | | |
NoDraw CameraSelector NoDraw CameraSelector
The first ClearBuffers (reading the diagram from left to right) clears all buffers. The first LayerFilter filters for the background layer (which you have to attach to the background entity). The second ClearBuffers clears only the depth buffer (so that the objects definitely get drawn on top of the background). The second LayerFilter filters for the main layer (which you have to attach to all objects you want drawn).
You then create the background camera and set its projection type to orthographic projection:
Camera {
id: backgroundCamera
projectionType: CameraLens.OrthographicProjection
fieldOfView: 45
aspectRatio: sceneRoot.w / sceneRoot.h
left: - sceneRoot.w / 2
right: sceneRoot.w / 2
bottom: - sceneRoot.h / 2
top: sceneRoot.h / 2
nearPlane: 0.1
farPlane: 1000.0
position: Qt.vector3d( 0.0, 0.0, 1.0 )
viewCenter: Qt.vector3d( 0.0, 0.0, 0.0 )
upVector: Qt.vector3d( 0.0, 1.0, 0.0 )
}
You could also choose -1 and 1 for left/right and bottom/top instead of sceneRoot.w and sceneRoot.h. In that case you would have to adjust the textured plane's size to (2, 2). I wanted to draw the clicks a user made onto a texture at some point, which is why I went with the screen sizes.
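If you prefer the normalized variant, it would look roughly like this (an untested sketch; only the view volume and the plane size change compared to the full code below):

Camera {
    id: backgroundCamera
    projectionType: CameraLens.OrthographicProjection
    left: -1
    right: 1
    bottom: -1
    top: 1
    nearPlane: 0.1
    farPlane: 1000.0
    position: Qt.vector3d( 0.0, 0.0, 1.0 )
    viewCenter: Qt.vector3d( 0.0, 0.0, 0.0 )
    upVector: Qt.vector3d( 0.0, 1.0, 0.0 )
}

PlaneMesh {
    id: planeMesh
    width: 2   // matches the 2x2 orthographic view volume
    height: 2
}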
A side note: don't use values that are very high or very low for nearPlane and farPlane. The Qt3D documentation says (somewhere, I can't find it right now) that inaccuracies will occur when the far plane is set to more than 100,000. The same happens if you set the near plane too small. You can read up on this on the internet; it's a general problem in 3D computer graphics.
Well, here's the full code:
import Qt3D.Core 2.12
import Qt3D.Render 2.12
import Qt3D.Extras 2.12
import Qt3D.Input 2.12
import QtQuick 2.0
import QtQuick.Scene2D 2.9
import QtQuick.Controls 1.4
import QtQuick.Layouts 1.2
Entity
{
id: sceneRoot
property int w: _window.width
property int h: _window.height
property real camZ: 1000
components: [
RenderSettings {
activeFrameGraph: RenderSurfaceSelector {
id: surfaceSelector
Viewport {
id: mainViewport
normalizedRect: Qt.rect(0, 0, 1, 1)
ClearBuffers {
buffers: ClearBuffers.ColorDepthBuffer
clearColor: Qt.rgba(0.6, 0.6, 0.6, 1.0)
NoDraw {
// Prevent drawing here, we only want to clear the buffers
}
}
LayerFilter {
id: backgroundLayerFilter
layers: [backgroundLayer]
CameraSelector {
id: backgroundCameraSelector
camera: backgroundCamera
}
}
ClearBuffers {
buffers: ClearBuffers.DepthBuffer
NoDraw {
// Prevent drawing here, we only want to clear the buffers
}
}
LayerFilter {
id: mainLayerFilter
layers: [mainLayer]
CameraSelector {
id: mainCameraSelector
camera: mainCamera
}
}
}
}
pickingSettings.pickMethod: PickingSettings.TrianglePicking
},
InputSettings {}
]
Camera {
id: mainCamera
projectionType: CameraLens.PerspectiveProjection
fieldOfView: 45
aspectRatio: _window.width / _window.height
nearPlane: 0.1
farPlane: 1000.0
position: Qt.vector3d( 0.0, 0.0, camZ )
viewCenter: Qt.vector3d( 0.0, 0.0, 0.0 )
upVector: Qt.vector3d( 0.0, 1.0, 0.0 )
}
/* setup the orthographic background camera */
Camera {
id: backgroundCamera
projectionType: CameraLens.OrthographicProjection
fieldOfView: 45
aspectRatio: sceneRoot.w / sceneRoot.h
left: - sceneRoot.w / 2
right: sceneRoot.w / 2
bottom: - sceneRoot.h / 2
top: sceneRoot.h / 2
nearPlane: 0.1
farPlane: 1000.0
position: Qt.vector3d( 0.0, 0.0, 1.0 )
viewCenter: Qt.vector3d( 0.0, 0.0, 0.0 )
upVector: Qt.vector3d( 0.0, 1.0, 0.0 )
}
/* setup a textured plane to be used as the 2D drawing surface for all Qt Quick 2 stuff */
Entity {
id: drawingSurface
PlaneMesh {
id: planeMesh
width: sceneRoot.w
height: sceneRoot.h
}
Transform {
id: planeTransform
translation: Qt.vector3d(0, 0, 0)
rotationX: 90
}
TextureMaterial {
id: planeMaterial
texture: offscreenTexture // created by qmlTexture below
}
Layer {
id: backgroundLayer
}
// picked up by Scene2D’s "entities" property and used as a source for events
ObjectPicker {
id: planePicker
hoverEnabled: false
dragEnabled: false
}
components: [ planeMesh, planeMaterial, planeTransform, planePicker, backgroundLayer ]
}
/* setup Scene2D offscreen texture to be used as canvas by Qt Quick 2 */
Scene2D {
id: qmlTexture
output: RenderTargetOutput {
attachmentPoint: RenderTargetOutput.Color0
texture: Texture2D {
id: offscreenTexture
width: sceneRoot.w
height: sceneRoot.h
format: Texture.RGBA8_UNorm
generateMipMaps: true
magnificationFilter: Texture.Linear
minificationFilter: Texture.LinearMipMapLinear
wrapMode {
x: WrapMode.ClampToEdge
y: WrapMode.ClampToEdge
}
}
}
mouseEnabled: false
entities: [ drawingSurface ]
/* Qt Quick 2 rendering */
Rectangle {
width: offscreenTexture.width
height: offscreenTexture.height
x: 0
y: 0
border.color: "red"
color: "green"
Component.onCompleted: {
console.log("Outter rectangle size: " + width + "x" + height + " at " + x + "," + y);
}
Rectangle {
id: innerRect
height: parent.height*0.6
width: height
x: (parent.width/2) - (width/2)
y: (parent.height/2) - (height/2)
border.color: "red"
color: "yellow"
transform: Rotation { origin.x: innerRect.width/2; origin.y: innerRect.height/2; angle: 45}
Component.onCompleted: {
console.log("Inner rectangle size: " + width + "x" + height + " at " + x + "," + y);
}
}
}
} // Scene2D
/* layer for all regular 3D objects */
Layer {
id: mainLayer
}
/* add light source at the same place as the camera */
Entity {
PointLight {
id: light
color: "white"
intensity: 1
constantAttenuation: 1.0
linearAttenuation: 0.0
}
Transform {
id: lightTransform
translation: Qt.vector3d(0.0, 0.0, sceneRoot.camZ)
}
components: [ light, lightTransform, mainLayer ]
}
/* display 3D object */
Entity {
SphereMesh {
id: mesh
radius: 130
}
PhongMaterial {
id: material
ambient: "blue"
}
Transform {
id: transform
translation: Qt.vector3d(0, 0, 0)
}
components: [ mesh, material, transform, mainLayer ]
}
} // sceneRoot
Result screenshot:
By the way: your code produces buggy results because of the drawing on an offscreen surface. I recommend you create an actual offscreen rendering framegraph and draw your stuff in there. Check out this very nice and informative GitHub repo and my C++ Qt3D offscreen renderer implementation.
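For reference, the core of such an offscreen branch looks roughly like this (a sketch of the Qt3D.Render types involved, not a drop-in replacement for the implementation linked above; the 1024x1024 size is just a placeholder):

RenderTargetSelector {
    target: RenderTarget {
        attachments: [
            RenderTargetOutput {
                attachmentPoint: RenderTargetOutput.Color0
                texture: Texture2D {
                    width: 1024   // hypothetical size
                    height: 1024
                    format: Texture.RGBA8_UNorm
                }
            }
        ]
    }
    // ClearBuffers + CameraSelector for the offscreen pass go in here;
    // a second branch then draws the resulting texture on screen.
}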
Maybe as a side note: you could definitely achieve the same result using a perspective projection. You can read up on perspective projection on the internet, e.g. here. Essentially, you have a linear system of equations where you know the pixel coordinates (where you want your plane to appear on screen) and solve for the 3D points of the plane. But it might get complicated; I'm sure the solution I posted is easier to use ;)
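That said, for this particular setup the system has a closed-form solution (standard projection geometry, not from the Qt docs): a plane of height h facing the camera fills the viewport exactly at distance d = (h / 2) / tan(fov / 2), where fov is the camera's vertical field of view. So instead of eyeballing camZ, the question's original perspective code could bind it like this (a sketch, untested):

// Distance at which a plane of height sceneRoot.h exactly fills the view.
// With fieldOfView = 45 and h = 600: camZ ≈ 300 / tan(22.5°) ≈ 724.26
property real camZ: (sceneRoot.h / 2) /
                    Math.tan((mainCamera.fieldOfView / 2) * Math.PI / 180)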
Source: https://stackoverflow.com/questions/64903558/qt3d-how-to-scale-a-scene2d-to-be-the-same-size-as-the-window-pixel-wise