The fact is, though, that animation has not always been easy. Prior to Core Animation, you needed to understand some fairly complex subjects such as double buffering and more complicated mathematics such as plane geometry. Core Animation abstracts all of that away.


Core Animation is a huge evolutionary step for Mac OS X. Other operating systems will continue to try to mimic what Apple has done, and they will likely continue to fall short.


You’ve heard this one a thousand times, but you still ignore it. Most developers have to be reminded on a regular basis that just because you can add all the controls you need to a tiny little iPhone view to perform some task doesn’t mean you should. When you think of simplicity, don’t think of what will make it simplest for you to implement but what will make it simplest for your user to use. Keeping it simple is all about them — not you.


Artists could then flip between the different frames to ensure that the basic animation was occurring the way they had envisioned it. Next, the keyframes would be handed off to another artist to do the grunt work of drawing the frames that would come in between and enable the animation to be played back at the industry-standard 24 frames per second.


Playing back movies in step with a display’s refresh rate and in sync with the movie’s sound can be tricky. However, Apple has achieved stable movie playback in Mac OS X using what it refers to as a display link.


The “affine” in CGAffineTransform just means that whatever values are used for the matrix, lines in the layer that were parallel before the transform will remain parallel after the transform. A CGAffineTransform can be used to define any transform that meets that criterion.
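The parallel-preserving property can be checked directly. The sketch below is plain Python rather than Core Graphics, but it uses the same six values (a, b, c, d, tx, ty) and the same mapping convention as CGAffineTransform; the helper names are hypothetical:

```python
# Illustrative sketch (plain Python, not Core Graphics): an affine transform
# is a 2x2 linear matrix plus a translation, stored as (a, b, c, d, tx, ty).

def apply_affine(t, point):
    """Apply t = (a, b, c, d, tx, ty) to (x, y): x' = ax + cy + tx, y' = bx + dy + ty."""
    a, b, c, d, tx, ty = t
    x, y = point
    return (a * x + c * y + tx, b * x + d * y + ty)

def direction(p, q):
    """Direction vector of the line through points p and q."""
    return (q[0] - p[0], q[1] - p[1])

def parallel(u, v):
    """Two vectors are parallel when their 2D cross product is zero."""
    return abs(u[0] * v[1] - u[1] * v[0]) < 1e-9

# A transform mixing scale and shear components, plus a translation.
t = (2.0, 1.0, 0.5, 3.0, 10.0, -4.0)

# Two parallel lines, each given by two points (both have direction (1, 2)).
line1 = ((0.0, 0.0), (1.0, 2.0))
line2 = ((5.0, 1.0), (6.0, 3.0))

u = direction(*[apply_affine(t, p) for p in line1])
v = direction(*[apply_affine(t, p) for p in line2])
print(parallel(u, v))  # parallel lines remain parallel: True
```

Whatever values you pick for the matrix, the cross product of the two transformed directions stays zero, which is exactly the “affine” guarantee described above.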


OpenGL has no notion of a hierarchy of objects or layers; it simply deals with triangles. In OpenGL everything is made of triangles that are positioned in 3D space and have colors and textures associated with them. This approach is extremely flexible and powerful, but it’s a lot of work to replicate something like the iOS UI from scratch using OpenGL.


Both are programmable chips that can run (more or less) arbitrary software, but for historical reasons, we tend to say that the part of the work that is performed by the CPU is done “in software” and the part handled by the GPU is done “in hardware.”


Animating and compositing layers onscreen is actually handled by a separate process, outside of your application. This process is known as the render server. On iOS 5 and earlier, this is the SpringBoard process (which also runs the iOS home screen). On iOS 6 and later, this is handled by a new process called BackBoard.


When you perform an animation, the work breaks down into four phases:

  • Layout: This is the phase where you prepare your view/layer hierarchy and set up the properties of the layers (frame, background color, border, and so on).
  • Display: This is where the backing images of layers are drawn.
  • Prepare: This is the phase where Core Animation gets ready to send the animation data to the render server. This is also the point at which it will perform other duties such as decompressing images that will be displayed during the animation.
  • Commit: This is where Core Animation packages up the layers and animation properties and sends them over IPC to the render server for display.

More computing sins are committed in the name of efficiency (without necessarily achieving it) than for any other single reason — including blind stupidity.


Core Animation provides specialist classes for drawing these types of shape with hardware assistance: polygons, lines, and curves can be drawn with CAShapeLayer, text with CATextLayer, and gradients with CAGradientLayer. These will all be substantially faster than using Core Graphics, and they avoid the overhead of creating a backing image.


CATiledLayer also has the interesting feature of calling the -drawLayer:inContext: method for each tile concurrently on multiple threads. This avoids blocking the UI and also enables it to take advantage of multiple processor cores for faster tile drawing. A CATiledLayer with just a single tile is a cheap way to implement an asynchronously updating image view.
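The pattern of drawing each tile on a worker thread can be sketched in plain Python (a hypothetical stand-in for CATiledLayer’s behavior, not an Apple API):

```python
# Illustrative sketch: split a large canvas into tiles and "draw" each one
# on a pool of worker threads, the way CATiledLayer invokes its drawing
# callback concurrently per tile. Names here are hypothetical.
from concurrent.futures import ThreadPoolExecutor

TILE_SIZE = 256

def draw_tile(rect):
    """Hypothetical per-tile drawing callback; returns the tile it 'drew'."""
    x, y, w, h = rect
    return f"tile at ({x}, {y}), {w}x{h}"

def tile_rects(width, height, tile=TILE_SIZE):
    """Split a width x height canvas into tile-sized rectangles."""
    for y in range(0, height, tile):
        for x in range(0, width, tile):
            yield (x, y, min(tile, width - x), min(tile, height - y))

# Dispatch every tile to worker threads; the main (UI) thread never blocks
# waiting for the whole image to be drawn at once.
with ThreadPoolExecutor(max_workers=4) as pool:
    tiles = list(pool.map(draw_tile, tile_rects(1024, 512)))

print(len(tiles))  # a 4 x 2 grid of 256-point tiles: 8
```

Because each tile is an independent unit of work, the drawing scales across however many cores the thread pool can use, which is the same property that makes CATiledLayer’s per-tile callbacks cheap to parallelize.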


The flash storage in an iOS device is faster than a traditional hard disk, but still around 200 times slower than RAM, making it essential that you manage loading carefully to avoid noticeable delays.


Building a bespoke caching system is nontrivial. Let’s look at the challenges involved:

  • Choosing a suitable cache key: The cache key uniquely identifies each cached item; for an image, this might be its filename or URL.
  • Speculative caching: If data is expensive to load, you may want to fetch it before it is actually needed — but how do you predict what will be needed?
  • Cache invalidation: If an image file changes, how do we know that our cached version needs to be updated?
  • Cache reclamation: When you run out of cache space (memory), how do you decide what to throw away first?
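A minimal sketch of the last two challenges in plain Python (hypothetical names, not part of any Apple framework): the cache key combines the file path with its modification time, so a changed file simply misses the cache (invalidation), and an OrderedDict evicts the least recently used entry when the cache is full (reclamation):

```python
# Illustrative sketch of cache invalidation and LRU reclamation.
from collections import OrderedDict

class ImageCache:
    def __init__(self, capacity=3):
        self.capacity = capacity
        self._entries = OrderedDict()  # key -> decoded "image"

    @staticmethod
    def key(path, mtime):
        # Including the modification time means a changed file
        # misses the cache instead of serving stale data.
        return (path, mtime)

    def fetch(self, path, mtime, load):
        k = self.key(path, mtime)
        if k in self._entries:
            self._entries.move_to_end(k)       # mark as recently used
            return self._entries[k]
        image = load(path)                     # slow: hit flash storage
        self._entries[k] = image
        if len(self._entries) > self.capacity:
            self._entries.popitem(last=False)  # evict least recently used
        return image

cache = ImageCache(capacity=2)
loads = []
def load(path):
    loads.append(path)
    return f"decoded:{path}"

cache.fetch("a.png", 100, load)
cache.fetch("a.png", 100, load)  # cache hit: no second load
cache.fetch("a.png", 101, load)  # mtime changed: cached copy is stale
print(loads)  # ['a.png', 'a.png']
```

Real caching systems (and NSCache) are considerably more sophisticated — cost-based eviction, memory-pressure callbacks, thread safety — but the key-design and reclamation questions above are the same.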

The lossless compression algorithm used by PNG images allows slightly faster decompression than the more complex lossy algorithm used for JPEG images, but this difference is usually dwarfed by the difference in loading time due to (relatively slow) flash storage access latency.


Doing more things faster is no substitute for doing the right things.


Neither cornerRadius nor masksToBounds impose any significant overhead on their own, but when combined, they trigger offscreen rendering.