Question
It is known that the zPosition of a layer normally only determines which layer covers up which; whether the zPosition is 10 or 1000 won't affect the layer's on-screen position. That is, unless we use a CATransformLayer to contain those layers, in which case the zPosition of those layers will affect their position.

However, the following code, run on iOS 5.1.1, does make zPosition alter the position of the layers. You can try it in a new Single View App by adding the code below to ViewController.m. If the zPosition of layer2 is changed from 88 to 188, we can see that the layer moves accordingly. Yet there is no CATransformLayer in the code; why does it behave like that? (Please quote the Apple docs or any other reference.)
Also related: if the line self.view.layer.sublayerTransform = transform3D; is changed to self.view.layer.transform = transform3D; then the zPosition has no effect on the position. But according to the Apple docs, transform and sublayerTransform differ only in whether the layer itself is transformed:
Two layer properties specify transform matrices: transform and sublayerTransform. The matrix specified by the transform property is applied to the layer and its sublayers relative to the layer's anchorPoint. [...] The matrix specified by the sublayerTransform property is applied only to the layer's sublayers, rather than to the layer itself.
So it is strange why changing that one line causes self.view.layer to act like a CATransformLayer.
- (void)viewDidAppear:(BOOL)animated {
    [super viewDidAppear:animated];

    // Perspective (m34) plus a rotation about the y-axis,
    // applied to the sublayers of self.view.layer:
    CATransform3D transform3D = CATransform3DIdentity;
    transform3D.m34 = -1.0 / 1000;
    transform3D = CATransform3DRotate(transform3D, M_PI / 4, 0, 1, 0);
    self.view.layer.sublayerTransform = transform3D;

    CALayer *layer1 = [[CALayer alloc] init];
    layer1.zPosition = 33;
    layer1.frame = CGRectMake(100, 100, 100, 100);
    layer1.backgroundColor = [[UIColor orangeColor] CGColor];
    [self.view.layer addSublayer:layer1];

    CALayer *layer2 = [[CALayer alloc] init];
    layer2.zPosition = 88;   // change to 188 and the layer visibly moves
    layer2.frame = CGRectMake(100, 120, 100, 100);
    layer2.backgroundColor = [[UIColor yellowColor] CGColor];
    [self.view.layer addSublayer:layer2];
}
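To see numerically why zPosition moves a sublayer here, one can multiply through the same matrices by hand. Below is a sketch in Python rather than Objective-C; the sample point (50, 20) is an illustrative assumption, while the m34 value and the π/4 rotation are taken from the code above. Core Animation multiplies row vectors on the left (p' = p · M), and CATransform3DRotate composes the rotation before the existing transform, so a sublayer's z feeds into the w component of the result and shifts the projected x:

```python
import math

def mat_mul(a, b):
    # 4x4 matrix product a * b
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply_transform(p, m):
    # Row-vector convention: p' = p * M, then divide by w
    v = [sum(p[k] * m[k][j] for k in range(4)) for j in range(4)]
    return [v[0] / v[3], v[1] / v[3], v[2] / v[3]]

# Identity with m34 = -1/1000, as in the code above
persp = [[1, 0, 0, 0],
         [0, 1, 0, 0],
         [0, 0, 1, -1.0 / 1000],
         [0, 0, 0, 1]]

# Rotation by pi/4 about the y-axis (row-vector convention)
c, s = math.cos(math.pi / 4), math.sin(math.pi / 4)
rot = [[c, 0, -s, 0],
       [0, 1, 0, 0],
       [s, 0, c, 0],
       [0, 0, 0, 1]]

# CATransform3DRotate(persp, angle, 0, 1, 0): rotate first, then perspective
m = mat_mul(rot, persp)

# The same sample point with the two zPosition values from the question:
for z in (88, 188):
    x, y, _ = apply_transform([50, 20, z, 1], m)
    print(f"z={z}: projected x={x:.1f}, y={y:.1f}")
```

Changing z from 88 to 188 changes both the rotated x and the perspective divisor w, so the projected point lands in a noticeably different place, which is exactly the movement the question observes.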
Answer 1:
To really understand the difference between a layer's transform and sublayerTransform properties, I think it is useful to think of it in terms of looking at a TV with 3D content in it. Check out my modified version of your code in a Swift playground.

Here's the starting version: you have a TV whose content has no perspective transformation whatsoever. Your content (the orange and yellow sublayers) therefore looks flat even with a rotation around the y-axis, pretty much what you'd expect from an orthographic projection.

However, if you hold the TV still but transform the content inside it with a perspective projection, you immediately see the depth of that content. The zPosition of the sublayers you added truly plays an important part in giving you a sense of depth, and rightfully so by its definition. This is exactly how sublayerTransform works: it transforms only the contents, not the TV itself.
Now, what would it look like if transform were used instead of sublayerTransform? Imagine rotating not just the contents but the entire TV, with the contents attached to the screen, and you'd see the expected result:
So, yes, apparently transform and sublayerTransform behave quite differently when it comes to the zPosition of the sublayers, although the documentation doesn't explicitly say so. A sublayer's zPosition has no effect on its position under its parent's transform, but does produce the normal 3D effect under its parent's sublayerTransform.
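That conclusion can be checked with the matrix arithmetic alone. Below is a minimal sketch in Python (not the actual Core Animation API; the point (50, 20) is an illustrative assumption, while -1/1000 and π/4 come from the question's code), assuming the flatten-then-transform model the answers describe: under transform, sublayers are flattened into the parent's plane before the matrix applies, so their zPosition drops out; under sublayerTransform, each sublayer's own z enters the perspective divide:

```python
import math

def project(point, m34=-1.0/1000, angle=math.pi/4):
    """Rotate about the y-axis, then apply perspective via m34
    (row-vector convention, as in Core Animation)."""
    x, y, z = point
    c, s = math.cos(angle), math.sin(angle)
    xr, zr = x * c + z * s, -x * s + z * c  # rotation about y
    w = 1 + zr * m34                        # perspective divide
    return (round(xr / w, 1), round(y / w, 1))

# The same sublayer point under the two different zPosition values:
for z in (88, 188):
    # transform: sublayers are flattened into the parent's plane first,
    # so their zPosition is discarded before the matrix is applied
    flat = project((50, 20, 0))
    # sublayerTransform: each sublayer keeps its own zPosition,
    # so z enters the perspective divide and shifts the projection
    deep = project((50, 20, z))
    print(f"z={z}: transform -> {flat}, sublayerTransform -> {deep}")
```

In the first case the output is identical for both z values; in the second the projected point moves with z, matching what the two properties do on screen.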
Answer 2:
We can think of a layer as defining a coordinate space. When we give a layer a 3D transform, its own coordinate space becomes 3D, but its sublayers are still rendered as 2D. When we set a sublayerTransform instead, the layer renders all of its sublayers in 3D. The same applies to the rotation.
Source: https://stackoverflow.com/questions/10917380/on-ios-why-does-setting-a-layers-sublayertransform-turn-itself-to-act-like-cat