Question
All,
I've got an iPhone project that draws a 3D model using OpenGL ES for a given model view matrix and a given projection matrix. I needed to replace the 3D model with a CALayer, so I put the values of the model view matrix into a CATransform3D structure and assigned it to layer.transform. That worked well: the layer was visible and moved on the screen as expected. But after some time I realized that the layer's behavior was not precise enough and that I should take the projection matrix into account as well. And then a problem appeared: when I simply concatenate the two matrices, my layer either looks odd (it is very small, about 2 pixels across, while it is supposed to be about 300, since it is far away) or is not visible at all. How can I solve this?
Here is the piece of code:
- (void)adjustImageObjectWithUserInfo:(NSDictionary *)userInfo
{
    NSNumber *objectID = [userInfo objectForKey:kObjectIDKey];
    CALayer *layer = [self.imageLayers objectForKey:objectID];
    if (!layer) { return; }

    CATransform3D transform = CATransform3DIdentity;
    NSArray *modelViewMatrix = [userInfo objectForKey:kModelViewMatrixKey];

    // Copy the raw model view matrix into the transform (both are 16 CGFloats).
    CGFloat *p = (CGFloat *)&transform;
    for (int i = 0; i < 16; ++i)
    {
        *p = [[modelViewMatrix objectAtIndex:i] floatValue];
        ++p;
    }

    // Rotate around +z by Pi/2.
    transform = CATransform3DConcat(transform, CATransform3DMakeRotation(M_PI_2, 0, 0, 1));
    // Apply the projection matrix.
    transform = CATransform3DConcat(transform, _projectionMatrix);

    layer.transform = transform;
}
Any help will be appreciated.
Answer 1:
I recently ran into this exact problem (I was using ARToolKit as well) and was very disappointed to see that you hadn't figured out the answer. I imagine you have moved on by now, but I figured it out, and I am posting it for any other lost soul who comes through with the same problem.
The most confusing thing for me was that everyone talks about making a CALayer perspective transform by setting the m34 field to a small negative number. Although that does work, it is not very informative. What I eventually realized is that the transform works exactly like every other transform: it is a column-major transformation matrix for homogeneous coordinates. The only special thing is that it must combine the model view and projection matrices, and then scale to the size of the OpenGL viewport, all in one matrix. I started by trying to use a matrix in the style where m34 is a small negative number, as explained in much greater detail here, but eventually switched to OpenGL-style perspective transforms, as explained here. They are in fact equivalent to one another; they just represent different ways of thinking about the transform.
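For illustration, the m34 trick mentioned above boils down to a couple of lines. A minimal sketch, where the 500-point eye distance is an arbitrary value to tune per scene and layer is assumed to be an existing CALayer:

#import <QuartzCore/QuartzCore.h>

// Classic one-coefficient perspective: m34 = -1/d, where d is the distance
// from the eye to the layer plane.
CATransform3D perspective = CATransform3DIdentity;
perspective.m34 = -1.0 / 500.0;
// A sample rotation around y to make the perspective effect visible.
layer.transform = CATransform3DRotate(perspective, M_PI_4, 0, 1, 0);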
In our case we are trying to make the CALayer transform exactly replicate an OpenGL transform. All that requires is multiplying together the model view, projection, and scaling matrices, and flipping the y axis to account for the fact that the device screen origin is at the top left while OpenGL's is at the bottom left. As long as the layer's anchor point is at (0.5, 0.5) and its position is exactly in the center of the screen, the result will be identical to OpenGL's transform.
void attach_CALayer_to_marker(CATransform3D *transform, Matrix4 modelView, Matrix4 openGL_projection, Vector2 GLViewportSize)
{
    // Important: this function assumes that the CALayer has its origin in the
    // exact center of the screen.
    Matrix4 flipY = { 1,  0, 0, 0,
                      0, -1, 0, 0,
                      0,  0, 1, 0,
                      0,  0, 0, 1 };

    // Instead of -1 to 1 we want our result to go from -width/2 to width/2,
    // and likewise for height.
    CGFloat screenScale = [[UIScreen mainScreen] scale];
    float xscl = GLViewportSize.x / screenScale / 2;
    float yscl = GLViewportSize.y / screenScale / 2;

    // The OpenGL perspective matrix projects onto a 2x2x2 cube. To get it onto
    // the device screen it needs to be scaled to the correct size while
    // maintaining the aspect ratio specified by the OpenGL window.
    Matrix4 scalingMatrix = { xscl, 0,    0, 0,
                              0,    yscl, 0, 0,
                              0,    0,    1, 0,
                              0,    0,    0, 1 };

    // OpenGL measures y from the bottom while CALayers measure from the top, so
    // at the end the entire projection must be flipped over the xz plane.
    // When that happens the contents of the CALayer get flipped upside down.
    // To correct for that, they are flipped upside down at the very beginning;
    // they will then be flipped right side up at the end.
    Matrix4 flipped  = Matrix4MakeFromProduct(modelView, flipY);
    Matrix4 unscaled = Matrix4MakeFromProduct(openGL_projection, flipped);
    Matrix4 scaled   = Matrix4MakeFromProduct(scalingMatrix, unscaled);
    // Flip over the xz plane to move the origin to the bottom instead of the top.
    Matrix4 final = Matrix4MakeFromProduct(flipY, scaled);

    *transform = convert_your_matrix_object_to_CATransform3D(final);
}
This function takes the model view matrix, the OpenGL projection matrix, and the OpenGL view size, and uses them to generate the correct transform for the CALayer. The CALayer's size should be specified in the units of the OpenGL scene. The OpenGL viewport actually contains four values, [xoffset, yoffset, width, height], but the first two are not relevant here, because the origin of the CALayer is put at the center of the screen to correspond to the OpenGL 3D origin.
Just replace Matrix4 with whatever generic 4x4 column-major matrix type you have access to. Anything will work; just make sure you multiply your matrices in the right order. All this is essentially doing is replicating the OpenGL pipeline (minus clipping).
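The pseudocode above leaves Matrix4, Vector2, Matrix4MakeFromProduct, and convert_your_matrix_object_to_CATransform3D unspecified. A minimal C sketch of what they could look like, assuming OpenGL-style column-major storage (the names follow the pseudocode, but the bodies are only one possible implementation):

#import <QuartzCore/QuartzCore.h>

typedef struct { float x, y; } Vector2;
typedef struct { float m[16]; } Matrix4; // column-major: element (row, col) lives at m[col * 4 + row]

// c = a * b in column-vector math, i.e. b is applied first, then a.
static Matrix4 Matrix4MakeFromProduct(Matrix4 a, Matrix4 b)
{
    Matrix4 c;
    for (int col = 0; col < 4; ++col) {
        for (int row = 0; row < 4; ++row) {
            float s = 0;
            for (int k = 0; k < 4; ++k) {
                s += a.m[k * 4 + row] * b.m[col * 4 + k];
            }
            c.m[col * 4 + row] = s;
        }
    }
    return c;
}

// A column-major float[16] and CATransform3D share the same flat memory layout
// (the translation sits at indices 12-14 in both), so a plain copy works.
static CATransform3D convert_your_matrix_object_to_CATransform3D(Matrix4 m)
{
    CATransform3D t;
    CGFloat *p = (CGFloat *)&t;
    for (int i = 0; i < 16; ++i) { p[i] = (CGFloat)m.m[i]; }
    return t;
}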
Answer 2:
I just got rid of the projection matrix, and this is the best variant I've got:
- (void)adjustTransformationOfLayerWithMarkerId:(NSNumber *)markerId forModelViewMatrix:(NSArray *)modelViewMatrix
{
    CALayer *layer = [self.imageLayers objectForKey:markerId];
    ...
    CATransform3D transform = CATransform3DIdentity;
    CGFloat *p = (CGFloat *)&transform;
    for (int i = 0; i < 16; ++i) {
        *p = [[modelViewMatrix objectAtIndex:i] floatValue];
        ++p;
    }

    // Fake the perspective divide: make m44 proportional to the z translation.
    transform.m44 = (transform.m43 > 0) ? transform.m43 / kZDistanceWithoutDistortion : 1;

    CGFloat angle = -M_PI_2;
    if (self.delegate.interfaceOrientation == UIInterfaceOrientationLandscapeLeft)      { angle = M_PI; }
    if (self.delegate.interfaceOrientation == UIInterfaceOrientationLandscapeRight)     { angle = 0; }
    if (self.delegate.interfaceOrientation == UIInterfaceOrientationPortraitUpsideDown) { angle = M_PI_2; }
    transform = CATransform3DConcat(transform, CATransform3DMakeRotation(angle, 0, 0, -1));
    transform = CATransform3DConcat(CATransform3DMakeScale(-1, 1, 1), transform);

    // Normalize the transformation so that m44 == 1 again.
    CGFloat scaleFactor = 1.0f / transform.m44;
    transform.m41 = transform.m41 * scaleFactor;
    transform.m42 = transform.m42 * scaleFactor;
    transform.m43 = transform.m43 * scaleFactor;
    transform = CATransform3DScale(transform, scaleFactor, scaleFactor, scaleFactor);
    transform.m44 = 1;

    BOOL disableAction = YES;
    ...
    [CATransaction begin];
    [CATransaction setDisableActions:disableAction]; // Disable implicit animations so the layer moves without lag
    layer.transform = transform;
    [CATransaction commit];
}
It wasn't absolutely precise, but it was accurate enough for my purposes: the deviation only becomes noticeable when the x or y displacement approaches the screen size.
Answer 3:
Two possible problems:
1) The concatenation order. Classic matrix math goes from right to left, so try CATransform3DConcat(_projectionMatrix, transform) instead (see the quick check below).
2) The value of the projection coefficient is wrong. What values are you using?
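A quick way to convince yourself of the argument order (a minimal sketch; the translate and scale values are arbitrary):

#import <QuartzCore/QuartzCore.h>

// CATransform3DConcat(a, b) applies a first, then b; that is a * b in
// Core Animation's row-vector convention, or b * a in classic column-vector math.
CATransform3D translate = CATransform3DMakeTranslation(10, 0, 0);
CATransform3D scale = CATransform3DMakeScale(2, 2, 2);
CATransform3D t = CATransform3DConcat(translate, scale);
// t.m41 == 20: the translation ran first and was then scaled by 2.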
Hope this helps.
Answer 4:
Did you take into account that layers render into a GL context which already has an orthographic projection matrix applied to it?
See the introductory comment in the relevant header on Mac; the class is private on iPhone, but the principles are the same.
Also, OpenGL and CATransform3D matrices are transposes of each other on paper: OpenGL multiplies column vectors while Core Animation multiplies row vectors, even though the in-memory layouts happen to coincide. Take that into account, too; while most of the results seem the same, some won't be.
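A minimal sketch illustrating the layout point (the translation values are arbitrary):

#import <QuartzCore/QuartzCore.h>

// A CATransform3D is 16 contiguous CGFloats with the translation in
// m41/m42/m43, i.e. flat indices 12-14, the same slots an OpenGL-style
// column-major float[16] uses. What differs is the multiplication
// convention, not the memory order.
CATransform3D t = CATransform3DMakeTranslation(5, 6, 7);
CGFloat *f = (CGFloat *)&t;
// f[12] == 5, f[13] == 6, f[14] == 7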
Answer 5:
As a direct answer to the question: projection matrices are designed to output coordinates in the -1 ... 1 range, while Core Animation usually works in pixels; this mismatch is where the trouble comes from. So to cure the problem, BEFORE multiplying the model view matrix by the projection matrix, scale your model down so it fits well into the -1 ... 1 range (depending on what your world is, you can divide by bounds.size), and after the projection matrix multiplication you can scale back up to pixels.
So below is a code snippet in Swift (I personally hate Swift, but my customer likes it) which I believe can be understood. It's written for Core Animation and ARKit, and it is tested and works. I hope ARKit matrices can be thought of as OpenGL ones.
func initCATransform3D(_ t_: float4x4) -> CATransform3D
{
    // Surprise: m43 means 4th column, 3rd row; thanks to Apple for the amazing
    // CATransform3D documentation and the total disregard of standard Mij math notation.
    let t = t_.transpose
    // Surprise: Apple didn't care about CA & ARKit compatibility, so there is no easier way to do this.
    return CATransform3D(
        m11: CGFloat(t[0][0]), m12: CGFloat(t[1][0]), m13: CGFloat(t[2][0]), m14: CGFloat(t[3][0]),
        m21: CGFloat(t[0][1]), m22: CGFloat(t[1][1]), m23: CGFloat(t[2][1]), m24: CGFloat(t[3][1]),
        m31: CGFloat(t[0][2]), m32: CGFloat(t[1][2]), m33: CGFloat(t[2][2]), m34: CGFloat(t[3][2]),
        m41: CGFloat(t[0][3]), m42: CGFloat(t[1][3]), m43: CGFloat(t[2][3]), m44: CGFloat(t[3][3])
    )
}
override func updateAnchors(frame: ARFrame) {
    for animView in stickerViews {
        guard let anchor = animView.anchor else { continue }
        // The 100 here makes the object smaller: on input it's in pixels, but we
        // want it to be more real. We work in meters at this point.
        // Surprise: even though the matrices are column-major, they are initialized
        // in "transposed" form, because the arrays/columns are written out horizontally.
        let mx = float4x4(
            [1/Float(100), 0,             0, 0],
            [0,            -1/Float(100), 0, 0], // flip the Y axis; it points up in the 3D world, but down in CA on iOS
            [0,            0,             1, 0],
            [0,            0,             0, 1]
        )
        let simd_atr = anchor.transform * mx //* matrix_scale(1/Float(bounds.size.height), matrix_identity_float4x4)
        var atr = initCATransform3D(simd_atr) // atr = anchor transform

        let camera = frame.camera
        let view = initCATransform3D(camera.viewMatrix(for: .landscapeRight))
        let proj = initCATransform3D(camera.projectionMatrix(for: .landscapeRight, viewportSize: camera.imageResolution, zNear: 0.01, zFar: 1000))
        // Surprise: CATransform3DConcat(a, b) equals the mathematical b * a; Apple's documentation is wrong.
        // For extra fun, that is the opposite of the GLSL/OpenGL multiplication order.
        atr = CATransform3DConcat(CATransform3DConcat(atr, view), proj)

        let norm = CATransform3DMakeScale(0.5, -0.5, 1) // on iOS the Y axis points down, but we flipped it earlier, so flip it back!
        let shift = CATransform3DMakeTranslation(1, -1, 0) // shift to the other end of the projection output range before flipping
        let screen_scale = CATransform3DMakeScale(bounds.size.width, bounds.size.height, 1)
        atr = CATransform3DConcat(CATransform3DConcat(atr, shift), norm)
        atr = CATransform3DConcat(atr, CATransform3DMakeAffineTransform(frame.displayTransform(for: self.orientation(), viewportSize: bounds.size)))
        atr = CATransform3DConcat(atr, screen_scale) // scale back to pixels
        //printCATransform(atr)

        // We assume the center is at (0, 0), i.e. earlier there was animView.layer.center = CGPoint(x: 0, y: 0).
        animView.layer.transform = atr
    }
}
P.S. I think that for the developers who created this whole mix of left- and right-handed coordinate systems, Y and Z axis directions, column-major matrices, float & CGFloat incompatibility, and the lack of CA and ARKit integration, an especially hot place in hell is reserved...
Source: https://stackoverflow.com/questions/6045502/how-to-get-catransform3d-from-projection-and-modelview-matrices