Data-Oriented Design Book Review

Pekka Väänänen, Sep 14 2020

Data-Oriented Design (2018) by Richard Fabian

Computers keep getting faster but the future ain't what it used to be. Instead of higher clock rates we get deeper pipelines, higher latencies, and more cores. Programming these systems requires paying attention to how we structure and access our data. In Data-Oriented Design Richard Fabian—who has worked at Frontier Developments, Rockstar Games, and Team17—presents an approach to reasoning about these issues from a C++ game developer's perspective.

Data-oriented design is about caches and decoupling meaning from data. The former implies laying out your data so that they're compact and predictably accessed. The latter means exposing the raw transforms from one sequence of bits to another. For example, finding the player with the highest score doesn't require you to query player.getScore() of every player object. You just compute max(playerScoreArray). Why doesn't everyone program like this in the first place?
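
To make that concrete, here's a minimal sketch of the idea in C++ (my own, not from the book), with playerScores standing in for whatever raw score array the engine keeps:

#include <algorithm>
#include <cstdio>
#include <vector>

int main() {
    // Hypothetical raw score table: just the bits, no Player objects in sight.
    std::vector<int> playerScores = {1200, 450, 9800, 300};
    int best = *std::max_element(playerScores.begin(), playerScores.end());
    std::printf("highest score: %d\n", best);
    return 0;
}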

Fabian argues that traditional object-oriented design makes it easier to think about your program in everyday terms. A chair in a videogame is represented by an instance of the Chair class that has a color, size, weight, and all other necessary attributes, just like a real chair does. In this line of thought a videogame is a simulation where discrete entities interact as individuals.

But the sole reason for the existence of the game is to present an illusion of a virtual world to a human being, the player. Therefore it makes sense to organize your code around player attention. This may include dynamically grouping multiple objects (level of detail techniques), adding and removing behaviors (component systems), and converting objects to lossy serialized forms when not seen (think cars in Grand Theft Auto). If a tree falls in a forest and no-one is around to hear it, perhaps we just wasted precious bandwidth loading its sound effect from disk in the first place.

Split them and put them in an array

Despite these philosophical musings, the practical difference in the code boils down to splitting big objects into smaller ones and then organizing them into arrays, like in a database. There are benefits to this reorganization: we have to read less data during processing, which makes caches happy. We can also apply generic algorithms without worrying about class hierarchies; it's all just low-level data.

I struggled to come up with a compact code example to illustrate this, so the interested should have a look at the Component Based Objects chapter of the book, but the rough shape of the split looks something like the sketch below.
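
This is my own toy example with a hypothetical Chair, not code from the book:

#include <vector>

// Before: one big object per entity (an array of structs).
struct Chair {
    float x, y, z;
    float r, g, b;
    float weight;
    bool  onFire;
};

// After: each aspect lives in its own tightly packed array, keyed by an
// entity index, much like tables in a database.
struct Positions { std::vector<float> x, y, z; };
struct Colors    { std::vector<float> r, g, b; };
struct Burning   { std::vector<int> entity; };  // only the entities on fire

int main() {
    std::vector<Chair> chairs;   // the object-oriented layout
    Positions positions;         // the data-oriented layout
    Colors colors;
    Burning burning;
    // A "spread fire" pass now walks only the burning table and the position
    // columns it needs, instead of dragging whole Chair objects through the cache.
    (void)chairs; (void)positions; (void)colors; (void)burning;
    return 0;
}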

This is essentially writing your C++ game code as if it were a relational database. The book begins with an introduction to database normalization and shows how to refactor a simple dungeon crawl game into each of the first three normal forms. For me it was a much-appreciated recap.

Once you have normalized your data into relational tables, many custom algorithms start to look like variations on a theme. You select some rows based on some criteria, perhaps sort them, filter them ("is health less than 0?"), and finally write new rows someplace else. It's basically pure functions doing transforms from one array to another. But plain old objects are usually easier to understand. Aren't we sacrificing code clarity for performance?
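
As a sketch of what such a transform can look like (again my own invention, using a hypothetical health table rather than anything from the book):

#include <cstdio>
#include <vector>

struct Health { int entity; int hp; };

// Pure transform: read one table, write new rows someplace else.
std::vector<int> collectDead(const std::vector<Health>& healths) {
    std::vector<int> dead;
    for (const Health& h : healths)
        if (h.hp < 0)              // "is health less than 0?"
            dead.push_back(h.entity);
    return dead;
}

int main() {
    std::vector<int> dead = collectDead({{7, 42}, {9, -3}});
    std::printf("dead entities: %zu\n", dead.size());  // prints 1
    return 0;
}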

This is where things get interesting. If you really store your data in tables, you gain observability and predictability that individual objects can't provide. For example, in game AI you often have state machines. If something goes wrong, you can log all state transitions by logging all writes to the corresponding table. It's also possible to double-buffer the state updates more easily, so that you don't run into an inconsistent state during the frame update.
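
Here's a sketch of how a double-buffered state table with transition logging might look; AiStates and its states are my own stand-ins, not code from the book:

#include <cstdio>
#include <vector>

enum class AiState { Idle, Chase, Flee };

// All AI state lives in one table with two buffers: systems read `current`
// and write `next`, so nothing observes a half-updated frame.
struct AiStates {
    std::vector<AiState> current, next;

    void beginFrame() { next = current; }      // start from last frame's state
    void endFrame()   { current.swap(next); }  // publish all updates at once

    void set(int entity, AiState s) {
        // Every transition funnels through this one write, so logging all of
        // them is a single line instead of breakpoints scattered across objects.
        std::printf("entity %d: %d -> %d\n", entity, (int)current[entity], (int)s);
        next[entity] = s;
    }
};

int main() {
    AiStates ai{{AiState::Idle, AiState::Idle}, {}};
    ai.beginFrame();
    ai.set(1, AiState::Chase);   // logged: "entity 1: 0 -> 1"
    ai.endFrame();
    return 0;
}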

Serialization and data migration between version changes can now be managed in a principled way, like you'd do on any regular database. You can also finally unit test your game code because the data transforms have clear inputs and outputs and don't depend on an ad-hoc combination of object instances. After all, pure functional code is easier to test.
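
For instance, a hypothetical damage-application pass (a sketch of mine, not the book's) can be tested with nothing but plain data:

#include <cassert>
#include <cstddef>
#include <vector>

// Pure transform: apply a damage column to a health column.
std::vector<int> applyDamage(std::vector<int> hp, const std::vector<int>& dmg) {
    for (std::size_t i = 0; i < hp.size(); ++i)
        hp[i] -= dmg[i];
    return hp;
}

int main() {
    // No scene graph, no object setup: plain arrays in, plain arrays out.
    std::vector<int> hp = applyDamage({10, 5}, {3, 7});
    assert(hp[0] == 7 && hp[1] == -2);  // the second entity just died
    return 0;
}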

Unity and entity-component-systems

At this point you might have a hunch that I've just described an entity-component-system (ECS). That hunch is right, and the book discusses ECSs in depth. Many will be familiar with the Unity game engine's component system, and Fabian makes a distinction between two kinds of designs:

  • An "Entity" class exists but is a collection of components, and
  • no distinct entity object, just an ID and rows in tables (Unity's new ECS).

The first system is what most people are familiar with, but it's not a fully data-oriented design as advocated in the book. The problem is the top-down object update. Basically, Unity's old system does this

for object in AllGameObjects:
    object.update_components()

instead of this

for component in AllGameComponents:
    component.update_subsystem()

This makes it impossible to blast through a tightly packed array, since each object is still updated one at a time. However, their new DOTS ("Data-Oriented Technology Stack") design has a custom compiler and a fully data-oriented approach. My impression is that it's only used for individual systems that need optimization, as opposed to writing all game code with it from the very beginning. In addition to databases, entity-component-systems seem like the most common application of data-oriented design.

Programming for performance

The book has some excellent discussion on code optimization. It's down-to-earth, presents hard numbers, and has its benchmark code available on GitHub. I learnt new tricks despite already being familiar with the subject. For example, I hadn't considered before that cache contention may be impossible to catch with a profiler since it's happening all over the place.

Fabian also presents a methodical way to measure, hypothesize, and finally fix performance bottlenecks. I really appreciated the step-by-step approach here and will consider following his advice in the future.

There's also the following take on "premature optimization":

The bane of many projects, and the cause of their lateness, has been the insistence on not doing optimisation prematurely. The reason optimisation at late stages is so difficult is that many pieces of software are built up with instances of objects everywhere, even when not needed.

This leads to an insightful meditation on technical requirements of software projects. If there's a minimum bar for performance you need to reach but you aren't measuring it, how can you tell if you're on track to success?

Programming for correctness

The biggest surprise to me in the book was how the data-oriented approach can simplify code. You can eliminate many conditional statements, which leads to a smaller state space, which in turn makes the code more robust. On the other hand, some of its advice seems hard to implement without a significant investment in infrastructural code. Refactoring an object-oriented codebase into a data-oriented style sounds tough. Consider Unity, where the team has to maintain both the old and new systems until eventually (if ever?) everyone jumps on the new one.

Hard but good

Data-Oriented Design was a more difficult read than I expected. It's both high- and low-level, occasionally philosophical but chiefly hands-on. It's written by an expert for other expert programmers.

The book was originally published on the web and its current version can be read for free at dataorienteddesign.com/dodbook/. It's a collection of essays, which works to its benefit, as I believe most readers will just be sampling occasional chapters of the online edition. The chapters Optimisations, Helping the compiler, and What's wrong? are worth reading even if you already know all about data-oriented design. The writing was clear and usually to the point, apart from a couple of chapters that clearly needed extra editing. Note that there's no index.

All in all, I can recommend the book to people who have struggled with large C++ codebases and wish to get a fresh perspective on high-performance programming. Computers just copy bits around and it's all pretty simple in the end, really.
