Author: https://www.cyanhall.com/
Core Animation's original name was Layer Kit.
Core Animation is a compositing engine; its job is to compose different pieces of visual content on the screen, and to do so as fast as possible. The content in question is divided into individual layers stored in a hierarchy known as the layer tree. This tree forms the underpinning for all of UIKit, and for everything that you see on the screen in an iOS application.
In UIView, tasks such as rendering, layout, and animation are all managed by a Core Animation class called `CALayer`. The only major feature of UIView that isn't handled by `CALayer` is user interaction.
There are four hierarchies, each performing a different role:
- view hierarchy
- layer tree
- presentation tree
- render tree
Here are some features of CALayer that are not exposed by UIView:
- Drop shadows, rounded corners, and colored borders
- 3D transforms and positioning
- Nonrectangular bounds
- Alpha masking of content
- Multistep, nonlinear animations
`CALayer` has a property called `contents`, defined as `id`, but it only supports a `CGImage` (iOS/Mac OS) or an `NSImage` (Mac OS only). `CGImage` is a Core Foundation type, not an Objective-C object.
Core Foundation is the C-level API, which provides CFString, CFDictionary, and the like. Foundation is Objective-C, which provides NSString, NSDictionary, etc.
So we need to use it this way:
layer.contents = (__bridge id)image.CGImage;
The `CALayer` property `contentsGravity` is equivalent to UIView's `contentMode`.
The `contentsScale` property defines a ratio between the pixel dimensions of the layer's backing image and the size of the view. It's a floating-point value that defaults to 1.0. The `contentsScale` property is actually part of the mechanism by which support for high-resolution (a.k.a. Hi-DPI or Retina) screens is implemented.
When working with backing images that are generated programmatically, you'll often need to remember to manually set the layer's `contentsScale` to match the screen scale; otherwise, your images will appear pixelated on Retina devices. You do so like this:
layer.contentsScale = [UIScreen mainScreen].scale;
There is a property on UIView called `clipsToBounds` that can be used to enable/disable clipping (that is, to control whether a view's contents are allowed to spill out of their frame). CALayer has an equivalent property called `masksToBounds`.
The following coordinate types are used in iOS:
- Points — The most commonly used coordinate type on iOS and Mac OS. Points are virtual pixels, also known as logical pixels. On standard-definition devices, 1 point equates to 1 pixel, but on Retina devices, a point equates to 2×2 physical pixels. iOS uses points for all screen coordinate measurements so that layouts work seamlessly on both Retina and non-Retina devices.
- Pixels — Physical pixel coordinates are not used for screen layout, but they are often still relevant when working with images. UIImage is screen-resolution aware and specifies its size in points, but some lower-level image representations such as CGImage use pixel dimensions, so you should keep in mind that their stated size will not match their display size on a Retina device.
- Units — Unit coordinates are a convenient way to specify measurements that are relative to the size of an image or a layer's bounds, and so do not need to be adjusted if that size changes. Unit coordinates are used a lot in OpenGL for things like texture coordinates, and they are also used frequently in Core Animation.
The `contentsRect` property of CALayer allows us to specify a subrectangle of the backing image to be displayed inside the layer frame. It uses unit coordinates.
One of the most interesting applications of `contentsRect` is that it enables the use of so-called image sprites, packed together in a single sprite sheet.
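As a sketch, assuming `spriteSheet` is a `UIImage` containing a 2×2 grid of equally sized sprites, the top-left sprite could be displayed like this:

```objc
//display only the top-left quadrant of a 2×2 sprite sheet
//(contentsRect is in unit coordinates, so 0.5 means half the image)
layer.contents = (__bridge id)spriteSheet.CGImage;
layer.contentsRect = CGRectMake(0, 0, 0.5, 0.5);
```

Changing `contentsRect` is cheap because the whole sheet stays loaded as a single texture; only the displayed subrectangle changes.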
The `contentsCenter` property is actually a `CGRect` that defines a stretchable region inside the layer and a fixed border around the edge. By default, `contentsCenter` is set to {0, 0, 1, 1}, meaning the entire image stretches. This works in a similar way to the `-resizableImageWithCapInsets:` method of UIImage, but can be applied to any layer backing image, including one that is drawn at runtime using Core Graphics.
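For example, to stretch only the middle of the backing image while keeping a fixed border (a sketch in unit coordinates; the 0.25 inset is arbitrary):

```objc
//stretch only the central 50% of the image; the outer 25% border
//on each side stays at a fixed size when the layer resizes
layer.contentsCenter = CGRectMake(0.25, 0.25, 0.5, 0.5);
```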
If UIView detects that the `-drawRect:` method is present, it allocates a new backing image for the view, with pixel dimensions equal to the view size multiplied by the `contentsScale`.
If you don’t need this backing image, it’s a waste of memory and CPU time to create it, which is why Apple recommends that you don’t leave an empty -drawRect: method in your layer subclasses if you don’t intend to do any custom drawing.
The `-drawRect:` method is executed automatically when the view first appears onscreen. The code inside the `-drawRect:` method uses Core Graphics to draw into the backing image, and the result will then be cached until the view needs to update it (usually because the developer has called the `-setNeedsDisplay` method, although some view types will be redrawn automatically whenever a property that affects their appearance is changed, such as `bounds`).
Although `-drawRect:` is a UIView method, it's actually the underlying `CALayer` that schedules the drawing and stores the resultant image.
UIView has three primary layout properties: `frame`, `bounds`, and `center`. CALayer has equivalents called `frame`, `bounds`, and `position`.
The view's `frame`, `bounds`, and `center` properties are actually just accessors (setter and getter methods) for the underlying layer equivalents. When you manipulate the view frame, you are really changing the frame of the underlying CALayer. You cannot change the view's frame independently of its layer.
The `frame` is not really a distinct property of the view or layer at all; it is a virtual property, computed from the `bounds`, `position`, and `transform`, and therefore changes when any of those properties are modified. Conversely, changing the `frame` may affect any or all of those values, as well.
You need to be mindful of this when you start working with transforms, because when a layer is rotated or scaled, its `frame` reflects the total axially aligned rectangular area occupied by the transformed layer within its parent, which means that the `frame` width and height may no longer match the `bounds`.
Both the view's `center` property and the layer's `position` property specify the location of the `anchorPoint` of the layer relative to its superlayer.
The `anchorPoint` is specified in unit coordinates.
If you change a layer's `anchorPoint` while its `position` stays fixed, the frame moves: `frame.origin.x` is computed as `position.x - anchorPoint.x * frame.size.width` (and likewise for y), so sliding the anchor point across its full 0 to 1 range shifts the origin by the full width or height of the frame.
The `anchorPoint` can be useful when drawing a clock, for example, where each hand must rotate about one of its ends rather than its center.
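For instance, a hypothetical `hourHand` image view whose pivot sits near the bottom of its image could be rotated about that pivot like this (the 0.9 value is an assumption about where the pivot falls in the image):

```objc
//move the anchor point near the bottom-center of the hand image,
//then rotate about it (anchorPoint is in unit coordinates)
hourHand.layer.anchorPoint = CGPointMake(0.5f, 0.9f);
hourHand.layer.transform = CATransform3DMakeRotation(M_PI_2, 0, 0, 1);
```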
`CALayer` has a `cornerRadius` property that controls the curvature of the layer's corners. By default, this curvature affects only the background color of the layer and not the backing image or sublayers. However, when the `masksToBounds` property is set to YES, everything inside the layer is clipped to this curve.
Another useful pair of `CALayer` properties are `borderWidth` and `borderColor`. Together these define a line that is drawn around the edge of the layer. This line (known as a stroke) follows the bounds of the layer, including the corner curvature. The border follows the bounds of the layer, not the shape of its contents.
Drop shadows are cast behind a view to imply depth. A drop shadow can be shown behind any layer by setting the `shadowOpacity` property to a value greater than zero (the default is zero). To tweak the appearance of the shadow, you can use a trio of additional CALayer properties: `shadowColor`, `shadowOffset`, and `shadowRadius`.
The `shadowOffset` property controls the direction and distance to which the shadow extends. The `shadowOffset` is a `CGSize` value, with the width controlling the shadow's horizontal offset and the height controlling its vertical offset.
The `shadowRadius` property controls the blurriness of the shadow. A value of zero creates a hard-edged shadow that exactly matches the shape of the view. A larger value creates a soft-edged shadow that looks more natural.
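Putting the four shadow properties together (the values here are arbitrary, just for illustration):

```objc
layer.shadowOpacity = 0.5f;                        //make the shadow visible
layer.shadowColor = [UIColor blackColor].CGColor;  //black is the default
layer.shadowOffset = CGSizeMake(0, 3);             //shift the shadow downward
layer.shadowRadius = 5.0f;                         //soften the edges
```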
The `masksToBounds` property clips both shadow and content. If you want to clip the contents and still cast a shadow, you need to use two layers: an empty outer layer that just draws the shadow, and an inner one that has `masksToBounds` enabled for clipping content.
Calculating layer shadows in real time can be very expensive; you can improve performance considerably by specifying a `shadowPath`.
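For a rectangular layer, a precomputed shadow path might look like this (a sketch; note that `CGPath` is a Core Foundation type and must be released manually):

```objc
//tell Core Animation the shadow shape up front so it doesn't have
//to derive it from the layer contents on every frame
CGMutablePathRef path = CGPathCreateMutable();
CGPathAddRect(path, NULL, layer.bounds);
layer.shadowPath = path;
CGPathRelease(path);
```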
For something like a rectangle or circle, creating a `CGPath` manually is fairly straightforward. For a more complex shape like a rounded rectangle, you'll probably find it easier to use the `UIBezierPath` class, which is an Objective-C wrapper around `CGPath` provided by UIKit.
CALayer has a property called `mask` that defines the part of the parent layer that is visible. The mask property is itself a `CALayer` and has all the same drawing and layout properties of any other layer. If the mask layer is smaller than the parent layer, only the parts of the parent (or its sublayers) that intersect the mask will be visible.
When images are displayed at different sizes, an algorithm (known as a scaling filter) is applied to the pixels of the original image to generate the new pixels that will be displayed onscreen.
There is no universally ideal algorithm for resizing an image. The approach depends on the nature of the content being scaled, and whether you are scaling up or down. CALayer offers a choice of three scaling filters to use when resizing images. These are represented by the following string constants:
- `kCAFilterLinear` (the default)
- `kCAFilterNearest`
- `kCAFilterTrilinear`
Linear filtering preserves the shape, and nearest-neighbor filtering preserves the pixels.
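For example, pixel art or QR-code-style images tend to look crisper when scaled up with nearest-neighbor filtering:

```objc
//preserve hard pixel edges when the image is drawn larger
//than its native size
layer.magnificationFilter = kCAFilterNearest;
```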
You can implement group opacity for a specific layer subtree by using a CALayer property called `shouldRasterize`. When set to `YES`, the `shouldRasterize` property causes the layer and its sublayers to be collapsed into a single flat image before the `opacity` is applied, thereby eliminating the blending glitch.
In addition to enabling the `shouldRasterize` property, we've modified the layer's `rasterizationScale` property. By default, all layers are rasterized at a scale of 1.0, so if you use the `shouldRasterize` property, you should always ensure that you set the `rasterizationScale` to match the screen to avoid views that look pixelated on a Retina display.
button.layer.rasterizationScale = [UIScreen mainScreen].scale;
Use `CGAffineTransform` to rotate, reposition, and distort your layers, and use `CATransform3D` to change boring flat rectangles into three-dimensional surfaces.
The UIView `transform` property is of type `CGAffineTransform`, and is used to represent a two-dimensional rotation, scale, or translation. `CGAffineTransform` is a 2-column-by-3-row matrix that can be multiplied by a 2D row vector to transform its value.
The "affine" in `CGAffineTransform` just means that whatever values are used for the matrix, lines in the layer that were parallel before the transform will remain parallel after the transform. A `CGAffineTransform` can be used to define any transform that meets that criterion.
The following functions each create a new `CGAffineTransform` matrix from scratch:
- CGAffineTransformMakeRotation(CGFloat angle)
- CGAffineTransformMakeScale(CGFloat sx, CGFloat sy)
- CGAffineTransformMakeTranslation(CGFloat tx, CGFloat ty)
//rotate the layer 45 degrees
CGAffineTransform transform = CGAffineTransformMakeRotation(M_PI_4);
self.layerView.layer.affineTransform = transform;
Core Graphics also provides a second set of functions that can be used to apply a further transform on top of an existing one.
- CGAffineTransformRotate(CGAffineTransform t, CGFloat angle)
- CGAffineTransformScale(CGAffineTransform t, CGFloat sx, CGFloat sy)
- CGAffineTransformTranslate(CGAffineTransform t, CGFloat tx, CGFloat ty)
When you are manipulating transforms, it is often useful to be able to create a transform that does nothing at all—the CGAffineTransform equivalent of zero or nil. In the world of matrices, such a value is known as the identity matrix, and Core Graphics provides a convenient constant for this: `CGAffineTransformIdentity`.
Finally, if you ever want to combine two existing transform matrices, you can use the following function, which creates a new CGAffineTransform matrix from two existing ones:
CGAffineTransformConcat(CGAffineTransform t1, CGAffineTransform t2);
The order in which you apply transforms affects the result; a translation followed by a rotation is not the same as a rotation followed by a translation.
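A quick sketch of why order matters:

```objc
//translate 100 points, then rotate 45 degrees...
CGAffineTransform t1 = CGAffineTransformRotate(
    CGAffineTransformMakeTranslation(100, 0), M_PI_4);
//...versus rotate 45 degrees, then translate 100 points
CGAffineTransform t2 = CGAffineTransformTranslate(
    CGAffineTransformMakeRotation(M_PI_4), 100, 0);
//t1 and t2 produce visibly different results when applied to a layer,
//because matrix multiplication is not commutative
```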
Because Core Graphics provides functions to calculate the correct values for the transform matrix for you, it’s rare that you need to set the fields of a CGAffineTransform directly. One such circumstance is when you want to create a shear transform, for which Core Graphics provides no built-in function.
CGAffineTransform CGAffineTransformMakeShear(CGFloat x, CGFloat y) {
CGAffineTransform transform = CGAffineTransformIdentity;
transform.c = -x;
transform.b = y;
return transform;
}
As the CG prefix indicates, the `CGAffineTransform` type belongs to the Core Graphics framework. Core Graphics is a strictly 2D drawing API, and `CGAffineTransform` is intended only for 2D transforms (that is, ones that apply only within a two-dimensional plane).
However, the `zPosition` layer property enables us to move layers toward or away from the user's viewpoint. The `transform` property (which is of type `CATransform3D`) generalizes this idea, allowing us to both move and rotate a layer in three dimensions.
Like `CGAffineTransform`, `CATransform3D` is a matrix. But instead of a 2-by-3 matrix, `CATransform3D` is a 4-by-4 matrix that is capable of arbitrarily transforming a point in 3D.
Core Animation provides a number of functions that can be used to create and combine CATransform3D matrices in exactly the same way as with CGAffineTransform matrices. The functions are similar to the Core Graphics equivalents, but the 3D translation and scaling functions provide an additional z argument, and the rotation function accepts an x, y, and z argument in addition to the angle, which together form a vector that defines the axis of rotation:
CATransform3DMakeRotation(CGFloat angle, CGFloat x, CGFloat y, CGFloat z)
CATransform3DMakeScale(CGFloat sx, CGFloat sy, CGFloat sz)
CATransform3DMakeTranslation(CGFloat tx, CGFloat ty, CGFloat tz)
// Rotate the layer 45 degrees along the Y axis
CATransform3D transform = CATransform3DMakeRotation(M_PI_4, 0, 1, 0);
self.layerView.layer.transform = transform;
To add a perspective transform (sometimes called the z transform) on top of the rotation transform we've already applied, we'll have to modify our matrix values manually.
The perspective effect of a `CATransform3D` is controlled by a single value in the matrix: element `m34`. The `m34` value is used in the transform calculation to scale the x and y values in proportion to how far away they are from the camera. By default, `m34` has a value of zero. We can apply perspective to our scene by setting the `m34` property of our transform to `-1.0 / d`, where `d` is the distance between the imaginary camera and the screen, measured in points. A value between 500 and 1000 usually works fairly well. Decreasing the distance increases the perspective effect, so a very small value will look extremely distorted, and a very large value will just look like there is no perspective at all (isometric).
CATransform3D transform = CATransform3DMakeRotation(M_PI_4, 0, 1, 0);
// Apply perspective
transform.m34 = - 1.0 / 500.0;
self.layerView.layer.transform = transform;
When drawn in perspective, objects get smaller as they move away from the camera. As they move even farther, they eventually shrink to a point. All distant objects eventually converge on a single vanishing point.
In real life, the vanishing point is always in the center of your view, and generally, to create a realistic perspective effect in your app, the vanishing point should be in the center of the screen, or at least the center of the view that contains all of your 3D objects.
Core Animation defines the vanishing point as being located at the anchorPoint of the layer being transformed. If the transform includes a translation component that moves the layer to somewhere else onscreen, the vanishing point will be wherever it was located before it was transformed.
When you change the position of a layer, you also change its vanishing point.
If you intend to adjust the `m34` property of a layer to make it appear three-dimensional, you should position it in the center of the screen and then move it to its final location using a translation (instead of changing its `position`) so that it shares a common vanishing point with any other 3D layers on the screen.
If you have multiple views or layers, each with 3D transforms, it is necessary to apply the same `m34` value to each individually and to ensure that they all share a common `position` in the center of the screen prior to being transformed.
CALayer has another transform property called `sublayerTransform`. This is also a `CATransform3D`, but instead of transforming the layer to which it is applied, it affects only the sublayers. This means you can apply a perspective transform once and for all to a single container layer, and the sublayers will all inherit that perspective automatically. (The vanishing point is set as the center of the container layer, not set individually for each sublayer.)
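A sketch of applying perspective once on a container (assuming `containerView` is the view holding all of the 3D sublayers):

```objc
//all of containerView's sublayers inherit this perspective, and
//share a vanishing point at the center of the container layer
CATransform3D perspective = CATransform3DIdentity;
perspective.m34 = -1.0 / 500.0;
containerView.layer.sublayerTransform = perspective;
```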
CALayer has a property called `doubleSided` that controls whether the reverse side of a layer should be drawn. The `doubleSided` property is a `BOOL` and defaults to `YES`. If you set it to `NO`, then when the layer is facing away from the camera, it will not be drawn at all.
Although Core Animation layers exist in 3D space, they don’t all exist in the same 3D space. The 3D scene within each layer is flattened. When you look at a layer from face on, you see the illusion of a 3D scene created by its sublayers, but as you tilt the layer away, you realize that 3D scene is just painted on the layer surface.
There is a CALayer subclass called `CATransformLayer` designed to deal with this problem.
CAShapeLayer is a layer subclass that draws itself using vector graphics instead of a bitmap image. You specify attributes such as color and line thickness, define the desired shape using a CGPath, and CAShapeLayer renders it automatically.
CAShapeLayer can be used to draw any shape that can be represented by a CGPath.
- It's fast — `CAShapeLayer` uses hardware-accelerated drawing and is much faster than using Core Graphics to draw an image.
- It's memory efficient — A `CAShapeLayer` does not have to create a backing image like an ordinary CALayer does, so no matter how large it gets, it won't consume much memory.
- It doesn't get clipped to the layer bounds — A `CAShapeLayer` can happily draw outside of its bounds. Your path will not get clipped like it does when you draw into a regular CALayer using Core Graphics.
- There's no pixelation — When you transform a `CAShapeLayer` by scaling it up or moving it closer to the camera with a 3D perspective transform, it does not become pixelated like an ordinary layer's backing image would.
The `CAShapeLayer` path property is defined as a `CGPathRef`, but we've created the path using the `UIBezierPath` helper class, which saves us from having to worry about manually releasing the `CGPath`.
We can use a CAShapeLayer to create a view with mixed sharp and rounded corners.
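One way to do this (a sketch) is to build the shape with `UIBezierPath`'s corner-rounding initializer and use it as a layer mask; the corner choice and radius are arbitrary:

```objc
//round only two corners; the other two stay sharp
UIBezierPath *path = [UIBezierPath
    bezierPathWithRoundedRect:view.bounds
            byRoundingCorners:UIRectCornerTopLeft | UIRectCornerBottomRight
                  cornerRadii:CGSizeMake(20, 20)];
CAShapeLayer *maskLayer = [CAShapeLayer layer];
maskLayer.path = path.CGPath;
view.layer.mask = maskLayer;
```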
Core Animation provides a subclass of CALayer called `CATextLayer` that encapsulates most of the string drawing features of UILabel in layer form and adds a few extra features for good measure.
If we want Retina-quality text, we have to set the contentsScale of our CATextLayer to match the screen scale using the following line of code:
textLayer.contentsScale = [UIScreen mainScreen].scale;
Also, the CATextLayer string property is not an NSString as you might expect, but is typed as id. This is to allow you the option of using an NSAttributedString instead of an NSString to specify the text (NSAttributedString is not a subclass of NSString).
What we really want is a `UILabel` subclass that actually uses a `CATextLayer` as its backing layer; then it would automatically resize with the view, and there would be no redundant backing image to worry about.
We can't replace the layer once it has been created, but if we subclass UIView, we can override the `+layerClass` method to return a different layer subclass at creation time. UIView calls the `+layerClass` method during its initialization, and uses the class it returns to create its backing layer.
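A minimal sketch (the class name is hypothetical):

```objc
@interface LayerLabel : UILabel
@end

@implementation LayerLabel

//tell UIView to back this view with a CATextLayer
//instead of a plain CALayer
+ (Class)layerClass
{
    return [CATextLayer class];
}

@end
```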
Another benefit of using `CATextLayer` as a backing layer is that its `contentsScale` is automatically set by the view.
In general, using `+layerClass` to create views backed by different layer types is a clean and reusable way to utilize CALayer subclasses in your apps.
A `CATransformLayer` is unlike a regular `CALayer` in that it cannot display any content of its own; it exists only to host a transform that can be applied to its sublayers. `CATransformLayer` does not flatten its sublayers, so it can be used to construct a hierarchical 3D structure.
Using CATransformLayer, we can create two cubes with shared perspective but different transforms applied.
CAGradientLayer is used to generate a smooth gradient between two or more colors.
The gradient colors are specified using the `colors` property, which is an array. The colors array expects values of type `CGColorRef` (which is not an `NSObject` derivative), so we need to use the bridging trick.
CAGradientLayer also has `startPoint` and `endPoint` properties that define the direction of the gradient. These are specified in unit coordinates, not points, so the top-left corner of the layer is specified with {0, 0} and the bottom-right corner is {1, 1}.
By default, the colors in the gradient will be evenly spaced, but we can adjust the spacing using the locations property. The locations property is an array of floating-point values, and are specified in unit coordinates. It is not obligatory to supply a locations array, but if you do, you must ensure that the number of locations matches the number of colors or you’ll get a blank gradient.
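Putting the pieces together, a diagonal red-to-blue gradient might be set up like this (the colors, locations, and `containerView` name are arbitrary):

```objc
CAGradientLayer *gradientLayer = [CAGradientLayer layer];
gradientLayer.frame = self.containerView.bounds;
//colors are CGColorRefs, so they need the bridging cast
gradientLayer.colors = @[(__bridge id)[UIColor redColor].CGColor,
                         (__bridge id)[UIColor blueColor].CGColor];
gradientLayer.startPoint = CGPointMake(0, 0); //top-left, unit coordinates
gradientLayer.endPoint = CGPointMake(1, 1);   //bottom-right
//push the color transition toward the top-left
gradientLayer.locations = @[@0.0, @0.25];
[self.containerView.layer addSublayer:gradientLayer];
```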
The `CAReplicatorLayer` class is designed to efficiently generate collections of similar layers. It works by drawing one or more duplicate copies of each of its sublayers, applying a different transform to each duplicate.
The `instanceCount` property specifies how many times the layer should be repeated.
By using `CAReplicatorLayer` to apply a transform with a negative scale factor to a single duplicate layer, you can create a mirror image of the contents of a given view (or an entire view hierarchy), creating a real-time "reflection" effect.
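A sketch of the idea, assuming it runs inside a UIView subclass whose `+layerClass` returns `CAReplicatorLayer`:

```objc
CAReplicatorLayer *replicator = (CAReplicatorLayer *)self.layer;
replicator.instanceCount = 2; //the original plus one mirrored copy
//move the copy below the original, then flip it vertically
CATransform3D transform = CATransform3DIdentity;
transform = CATransform3DTranslate(transform, 0, self.bounds.size.height, 0);
transform = CATransform3DScale(transform, 1, -1, 1);
replicator.instanceTransform = transform;
replicator.instanceAlphaOffset = -0.6f; //fade the reflection
```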
`CAScrollLayer` has a `-scrollToPoint:` method that automatically adjusts the origin of the bounds so that the layer contents appear to scroll.
- (void)pan:(UIPanGestureRecognizer *)recognizer
{
    //get the offset by subtracting the pan gesture
    //translation from the current bounds origin
    CGPoint offset = self.bounds.origin;
    offset.x -= [recognizer translationInView:self].x;
    offset.y -= [recognizer translationInView:self].y;
    //scroll the layer
    [(CAScrollLayer *)self.layer scrollToPoint:offset];
    //reset the pan gesture translation
    [recognizer setTranslation:CGPointZero inView:self];
}
The astute among you may wonder what the point of using a `CAScrollLayer` is at all, as you could simply use a regular `CALayer` and adjust the bounds origin yourself. The truth is that there isn't much point, really. UIScrollView doesn't use a `CAScrollLayer`, in fact, and simply implements scrolling by manipulating the layer bounds directly.
All images displayed onscreen have to eventually be converted to an OpenGL texture, and OpenGL has a maximum texture size (usually 2048×2048 or 4096×4096, depending on the device model).
`CATiledLayer` solves the performance problems of loading large images by splitting the image up into multiple small tiles and loading them individually as needed.
`CAEmitterLayer` is a high-performance particle engine designed to let you create real-time particle animations such as smoke, fire, rain, and so on.
`CAEmitterLayer` acts as a container for a collection of `CAEmitterCell` instances that define a particle effect. You will create one or more `CAEmitterCell` objects as templates for the different particle types, and the `CAEmitterLayer` is responsible for instantiating a stream of particles based on these templates.
A `CAEmitterCell` is similar to a CALayer: It has a `contents` property that can be set using a CGImage, as well as dozens of configurable properties that control the appearance and behavior of the particles.
The properties of CAEmitterCell generally break down into three categories:
- A starting value for a particular attribute of the particle. For example, the `color` property specifies a blend color that will be multiplied by the colors in the `contents` image.
- A range by which a value will vary from particle to particle. For example, the `emissionRange` property is set to 2 * PI in our project, indicating that particles can be emitted in any direction within a 360-degree radius. By specifying a smaller value, we could create a conical funnel for our particles.
- A change over time for a particular value. For example, setting `alphaSpeed` to `-0.4` means that the alpha value of the particle will reduce by 0.4 every second, creating a fadeout effect for the particles as they travel away from the emitter.
The properties of the `CAEmitterLayer` itself control the position and general shape of the entire particle system. Some properties such as `birthRate`, `lifetime`, and `velocity` duplicate values that are specified on the `CAEmitterCell`. These act as multipliers so that you can speed up or amplify the entire particle system using a single value. Other notable properties include the following:
- `preservesDepth`, which controls whether a 3D particle system is flattened into a single layer (the default) or can intermingle with other layers in the 3D space of its container layer.
- `renderMode`, which controls how the particle images are blended visually. `kCAEmitterLayerAdditive` has the effect of combining the brightness of overlapping particles so that they appear to glow. If we were to leave this as the default value of `kCAEmitterLayerUnordered`, the result would be a lot less pleasing.
When it comes to high-performance graphics on iOS, the last word is OpenGL. OpenGL provides the underpinning for Core Animation. It is a low-level C API that communicates directly with the graphics hardware on the iPhone and iPad, with minimal abstraction.
`CAEAGLLayer` is a `CALayer` subclass designed for displaying arbitrary OpenGL graphics. With `CAEAGLLayer`, you have to do all the low-level configuration of the various OpenGL drawing buffers yourself. In iOS 5, Apple introduced a new framework called GLKit that takes away some of the complexity of setting up an OpenGL drawing context by providing a UIView subclass called `GLKView` that handles most of the setup and drawing for you.
It is rare that you will need to manually set up a `CAEAGLLayer` anymore (as opposed to just using a `GLKView`), but let's give it a go for old times' sake. Specifically, we'll set up an OpenGL ES 2.0 context, which is the standard for all modern iOS devices.
In a real OpenGL application, we would probably want to call the -drawFrame method 60 times per second using an NSTimer or CADisplayLink, and we would separate out geometry generation from drawing so that we aren’t regenerating our triangle’s vertices every frame (and also so that we can draw something other than a single triangle), but this should be enough to demonstrate the principle.
`AVPlayerLayer` is an example of another framework (in this case, AVFoundation) tightly integrating with Core Animation by providing a `CALayer` subclass to display a custom content type.
`AVPlayerLayer` is used for playing video on iOS. It is the underlying implementation used by high-level APIs such as MPMoviePlayer, and provides lower-level control over the display of video. Usage of `AVPlayerLayer` is actually pretty straightforward: You can either create a layer with a video player already attached using the `+playerLayerWithPlayer:` method, or you can create the layer first and attach an `AVPlayer` instance using the `player` property.
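A sketch of basic usage inside a view controller, assuming a bundled "video.mp4" resource (the file name is hypothetical):

```objc
NSURL *url = [[NSBundle mainBundle] URLForResource:@"video"
                                     withExtension:@"mp4"];
AVPlayer *player = [AVPlayer playerWithURL:url];
//create the layer with the player already attached
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
playerLayer.frame = self.view.bounds;
[self.view.layer addSublayer:playerLayer];
[player play];
```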
@kiaraRobles: The frame is computed by taking the `bounds` rectangle – which in most cases is just a size at 0,0 origin – and working backwards from the `position` point. Setting any one of these positions adjusts the others. The same works on `UIView`: setting the `center` property adjusts the `frame` to match. It's actually very convenient.