- A.L.I.C.E proprietary framework
- Get to know the ships (how they fly and manoeuvre)
- Inspired by slow motion video of flies flying
- For the final shot, 10,000s of ships were used
- Started with flocking and boids
- Then seeking behaviour (see the sketch after this list)
- Finally added directable layer
- Added background-foreground layer for camera depth
- Then started moving camera forward to be more dynamic
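A minimal boids-plus-seek sketch of the idea above (illustrative only, not the A.L.I.C.E. API; the weights, radii and goal-seek term are assumptions):

```python
# Boids-style flocking with an extra goal-seek term (the "directable layer").
import numpy as np

def boids_seek_step(positions, velocities, goal, dt=0.04,
                    neighbour_radius=5.0, max_speed=10.0,
                    w_sep=1.5, w_ali=1.0, w_coh=1.0, w_seek=0.8):
    """positions, velocities: (N, 3) arrays; goal: (3,) target point."""
    new_vel = velocities.copy()
    for i in range(len(positions)):
        offsets = positions - positions[i]
        dists = np.linalg.norm(offsets, axis=1)
        mask = (dists > 0) & (dists < neighbour_radius)
        steer = np.zeros(3)
        if mask.any():
            # Separation: push away from close neighbours.
            steer += -w_sep * (offsets[mask] / dists[mask, None] ** 2).sum(axis=0)
            # Alignment: match the average neighbour velocity.
            steer += w_ali * (velocities[mask].mean(axis=0) - velocities[i])
            # Cohesion: move toward the local centre of mass.
            steer += w_coh * (positions[mask].mean(axis=0) - positions[i])
        # Seek: the directable layer pulls every ship toward a goal point.
        to_goal = goal - positions[i]
        steer += w_seek * to_goal / (np.linalg.norm(to_goal) + 1e-6)
        new_vel[i] = velocities[i] + steer * dt
        speed = np.linalg.norm(new_vel[i])
        if speed > max_speed:
            new_vel[i] *= max_speed / speed
    return positions + new_vel * dt, new_vel
```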
New flight system:
- flexible
- performant
- abstract
- easy to use
- lights and rendering
Technology available:
- ALICE: artificial life crowd engine; not good at flying
- Houdini/ICE/Custom
Flow:
- Ship initialisation
- Behaviour Injection
- Simulator
- Battle interaction
- Visualisation
Usually not a large number of autonomous agents
Started integrating into ALICE (wings, flaps, thrusters, etc.); animation procedurally generated by combining different clips
The system gives information to the FX team (e.g. how many frames until an explosion). Also added a system to trigger events/effects, which resulted in nice decoupling (sketched below)
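A hypothetical event-trigger sketch to show the decoupling idea; the EventBus class and event names are made up for illustration:

```python
# The simulator emits events; the FX pipeline subscribes without knowing about ships.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def emit(self, event_type, **payload):
        for handler in self._handlers[event_type]:
            handler(payload)

bus = EventBus()
# FX only cares about the event, not the simulator internals.
bus.subscribe("explosion",
              lambda e: print(f"spawn FX in {e['frames_until']} frames at ship {e['ship_id']}"))
# The simulator emits when it decides a ship will be destroyed.
bus.emit("explosion", ship_id=42, frames_until=12)
```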
ALICE getting "old" - hard to take advantage of multicore systems. Partnered with Fabric Engine to refactor the system
Robotic AI:
- Game AI became monotonous, almost just moving traffic cones
- Limited difficulty settings
- Started with ghosts, both your own and other players'
- No racing against them though (no contact, overtaking, etc.)
Drivatar: play with your friends anytime you want, even if they are not online. They drive the same car, they have real names, etc.
Track what the player does and how precisely they do it. Infer behaviour for other tracks, other cars, etc.
Process:
- download the player's behaviour for the current track, car, ribbon, etc.
- physics sim to understand how the car reacts, and apply a percentage of that capability to simulate the player
- Track different parameters (e.g. speed in certain sections) to infer behaviour on similar sections of other tracks
- average utilisation and variance for each segment of each track ribbon
- average utilisation and variance for each turn type (see the sketch after this list)
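An illustrative sketch of the per-segment / per-turn-type statistics (not the actual Drivatar model; the sample format is assumed):

```python
# Accumulate average utilisation and variance per track segment and per turn type.
from collections import defaultdict
import statistics

def summarise_utilisation(samples):
    """samples: list of (segment_id, turn_type, utilisation in [0, 1])."""
    per_segment = defaultdict(list)
    per_turn_type = defaultdict(list)
    for segment_id, turn_type, utilisation in samples:
        per_segment[segment_id].append(utilisation)
        per_turn_type[turn_type].append(utilisation)

    def stats(groups):
        return {key: (statistics.mean(vals), statistics.pvariance(vals))
                for key, vals in groups.items()}

    # Segment stats drive behaviour on this ribbon; turn-type stats generalise
    # to similar corners on tracks the player has never driven.
    return stats(per_segment), stats(per_turn_type)

laps = [(0, "hairpin", 0.62), (0, "hairpin", 0.70), (1, "fast_sweep", 0.91)]
segment_stats, turn_stats = summarise_utilisation(laps)
```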
The system is designed to let designers tune gameplay and manage behaviours that might cause player frustration
Single player vs multiplayer is an important distinction: if the model is too accurate, people competing in single player get frustrated with Drivatars hitting them
Uses a grid to evaluate rules in the tree
Problems (because of real time):
- Continuous space
- Nondeterministic
- Simultaneous Actions
- Planning Horizon
- Limited CPU
- Authorial control
Solution: define a model for the game to improve evaluation time. Turn it into a "board game" by using the nav mesh and generating arrays of values to be used in the evaluation (sketched below)
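A rough sketch of the "board game" abstraction under assumed grid dimensions and value layers (not the actual engine data):

```python
# Bake the nav mesh into a coarse grid so rule evaluation becomes array lookups.
import numpy as np

GRID_W, GRID_H, CELL = 64, 64, 2.0   # hypothetical 128x128 metre world

def world_to_cell(x, y):
    return int(x / CELL), int(y / CELL)

# Per-cell layers the planner reads instead of querying the real game world.
walkable = np.zeros((GRID_W, GRID_H), dtype=bool)
threat   = np.zeros((GRID_W, GRID_H), dtype=np.float32)
cover    = np.zeros((GRID_W, GRID_H), dtype=np.float32)

def score_position(x, y):
    """Cheap evaluation: a weighted sum over precomputed layers."""
    cx, cy = world_to_cell(x, y)
    if not walkable[cx, cy]:
        return float("-inf")
    return 2.0 * cover[cx, cy] - 1.5 * threat[cx, cy]
```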
Initially, the generated plan is bad. Need to analyse the search space. Issues:
- Delayed reward (e.g. health change)
- Score diffusion
- HBF (high branching factor) actions
Solution: create a "macro" action to evaluate (not single orders). Also create action buckets
Priority quantisation, to avoid always picking the best solution (see the sketch below)
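A small sketch of priority quantisation with made-up actions and bucket size:

```python
# Round scores into coarse buckets and pick randomly among the top bucket
# instead of always taking the single best-scoring macro action.
import random

def pick_action(scored_actions, bucket_size=10.0):
    """scored_actions: list of (action, score)."""
    best_bucket = max(round(score / bucket_size) for _, score in scored_actions)
    candidates = [action for action, score in scored_actions
                  if round(score / bucket_size) == best_bucket]
    return random.choice(candidates)

plan = pick_action([("flank_left", 93.0), ("flank_right", 91.0), ("hold", 40.0)])
# flank_left and flank_right land in the same bucket, so either can be chosen.
```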
- AC1 limited to ~100 characters
- Pre-spawned NPCs (also pre-allocated); add to the pool later
- 3 LODs (called bulk, different LOD meshes), also for animation and AI (distance bands sketched after this list)
- Autonomous: 0-12 meters (~500us to 5ms)
- Puppet: 12-40 meters (~150us)
- Low Res: > 40 meters (~25us)
- Camera culling (rendering, animation and AI)
- No hand bones
- Collision system uses a 2D partition map for queries
- Always clamped to the navmesh, simple crowd push behaviour
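The distance bands above as a tiny sketch (band names and per-NPC costs taken from the notes; the function itself is illustrative, not the engine API):

```python
def bulk_lod(distance_to_camera_m):
    if distance_to_camera_m <= 12.0:
        return "autonomous"   # full AI + animation, ~500us to 5ms per NPC
    if distance_to_camera_m <= 40.0:
        return "puppet"       # ~150us per NPC
    return "low_res"          # ~25us per NPC

assert bulk_lod(5.0) == "autonomous"
assert bulk_lod(25.0) == "puppet"
assert bulk_lod(80.0) == "low_res"
```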
Legacy AI
- no NPC interaction
- no occlusion
- no networking
Shepherds (AI directors)
- unique ID for bulk (can be queried from other systems)
- only static members
- manually placed by designers (can manage count and density)
- can edit specific positions
Wandering crowds
- cover all of Paris
- no pathfinding
How to manage LODs in the pool
- tag to match real NPC mesh
- match low res visual densities
- the pool is constantly adjusting
Swapping:
- best matching entity (colour, shape, etc.)
- animation blending
- swap when not in FOV (otherwise it pops)
- also turn on full detailed AI, animation, etc.
- Autonomous vs Puppet: only AI and animation
- Puppet vs Low res: low level mesh
- reset all modified variables
- must transition in the correct AI state
- swapping is costly (~5ms)
- also swap/switch AI (e.g. dull bot -> shoot a guard -> swap in the correct behaviour); see the sketch after this list
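An illustrative swap sketch with hypothetical types (matching here is by colour only, to keep it short):

```python
# Only promote a bulk NPC to a full NPC when it is outside the camera's FOV,
# picking the pool entry whose appearance matches best so the change does not pop.
from dataclasses import dataclass

@dataclass
class BulkNPC:
    position: tuple
    colour: str
    in_camera_fov: bool

@dataclass
class PooledNPC:
    colour: str
    active: bool = False

def try_swap(bulk, pool):
    if bulk.in_camera_fov:
        return None                      # swapping in view would pop
    # Best matching entity from the pre-allocated pool.
    matches = [npc for npc in pool if not npc.active and npc.colour == bulk.colour]
    if not matches:
        return None
    real = matches[0]
    real.active = True                   # reset state, enter the correct AI state,
    return real                          # blend animation; whole swap ~5 ms

pool = [PooledNPC("red"), PooledNPC("blue")]
swapped = try_swap(BulkNPC((1.0, 0.0, 2.0), "blue", in_camera_fov=False), pool)
```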
Additional issues with networking (e.g. replication of full NPCs and events)
Memory usage:
- pool is fixed
- 160 spawned, 90 active
- 230 MBs for 2000 bulks
Multithreading:
- Need good profiling tools
- Good task scheduling
- Lockless coding/Limit lock times
- Try to remove all CPU idle
2D Map:
- spatially repeating, double buffering (one for current frame, one for next)
- lockless insertion, no remove
- do insertion in one map, query in the other (sketched below)
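A sketch of the double-buffered 2D partition map (cell size and API are assumed; a real implementation would also need the lockless insertion mentioned above):

```python
# The simulation inserts into the "next" map while queries read the "current"
# map built last frame, so reads never contend with writes.
from collections import defaultdict

class DoubleBufferedGrid:
    def __init__(self, cell_size=4.0):
        self.cell = cell_size
        self.current = defaultdict(list)   # read-only this frame
        self.next = defaultdict(list)      # written this frame

    def _key(self, x, y):
        return (int(x // self.cell), int(y // self.cell))

    def insert(self, entity_id, x, y):
        self.next[self._key(x, y)].append(entity_id)   # insert only, never remove

    def query(self, x, y):
        return self.current[self._key(x, y)]            # last frame's data

    def flip(self):
        self.current, self.next = self.next, defaultdict(list)

grid = DoubleBufferedGrid()
grid.insert(7, 10.0, 3.0)
grid.flip()                      # end of frame: next becomes current
nearby = grid.query(10.0, 3.0)   # [7]
```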
Future work:
- Dithering (popping of animations, meshes, etc.)
- Deterministic reactions
- Support low res interactions
- Armies (fighting)
MCTS
- Rolling Horizon Evolutionary Algorithms
- Deep Neural Networks
- Evolutionary Algorithms
- Temporal Difference Learning
Animation is King - at Pixar the cycle animator is your boss. No shot goes to the director until the animation director approves it
A Bug's Life
- FSM
- Animation splines
- Procedural "look at"
- still need manual input for more complex scenes
Ratatouille
- Beyond FSMs to Agent Based Crowds (Massive)
- Locomotion Brain (still an FSM)
- Could be seen as Search and Ranking (sensor + fuzzy logic) + Data Flow (weighted average from logic nodes); see the sketch after this list
- Major flaw: a change in weights might lead to unintended behaviour
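A rough sketch of the weighted-average data-flow idea with made-up sensors and weights:

```python
# Each logic node scores a candidate motion; the final value is a weighted
# average, so changing one weight can shift behaviour in unintended ways.

def blend_scores(node_outputs, weights):
    """node_outputs: {node: score}; weights: {node: weight}."""
    total_weight = sum(weights[n] for n in node_outputs)
    return sum(weights[n] * s for n, s in node_outputs.items()) / total_weight

turn_left_score = blend_scores(
    {"avoid_collision": 0.9, "follow_path": 0.2, "keep_spacing": 0.5},
    {"avoid_collision": 2.0, "follow_path": 1.0, "keep_spacing": 0.5},
)
```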
Got expanded for Up and WALL-E
- more agents (~50)
- more complex terrain
- vision-based collision avoidance (paper from SIGGRAPH 2010)
- also tried flow fields (looked like water flowing, not sentient beings with a purpose)
Predictive understeering (signal filtering): avoid sharp changes in direction, with hysteresis so the agent does not flicker between states
Instead of changing speed, blend different cycles
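A small sketch of the filtering and hysteresis idea; constants and thresholds are illustrative:

```python
def filtered_steering(previous, target, smoothing=0.15):
    """Exponential smoothing: move only a fraction of the way per frame,
    avoiding sharp changes in direction."""
    return previous + smoothing * (target - previous)

def pick_cycle(current_cycle, speed, walk_to_run=3.0, run_to_walk=2.5):
    """Hysteresis: different thresholds for switching up and down, so the
    agent blends between cycles instead of flickering near one threshold."""
    if current_cycle == "walk" and speed > walk_to_run:
        return "run"
    if current_cycle == "run" and speed < run_to_walk:
        return "walk"
    return current_cycle
```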
Cars 2
- Behavioural agents
- Subsumption architecture from Brooks
- small independent modules that combined give complex behaviours
- Reaction (overshoot, leaning, etc.) can use a spring system (harmonic motion); see the sketch after this list
- Can also be used for animation (i.e. for limbs)
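A sketch of a damped spring (harmonic motion) driving a reaction such as leaning; constants are illustrative:

```python
def spring_step(position, velocity, target, dt=1 / 24, stiffness=60.0, damping=8.0):
    """One step of a damped harmonic oscillator pulling position toward target."""
    acceleration = stiffness * (target - position) - damping * velocity
    velocity += acceleration * dt
    position += velocity * dt
    return position, velocity

# A lean angle chasing a new target overshoots, then settles - the "reaction".
# The same oscillator can drive secondary animation for limbs.
lean, lean_vel = 0.0, 0.0
for _ in range(24):
    lean, lean_vel = spring_step(lean, lean_vel, target=0.3)
```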
Re-did the whole asset pipeline with Presto, but the crowd system didn't have a clear entry point anymore
Used FSM for Brave (mostly static crowds)
Monsters University ("ambient" crowds). Mostly FSM, but allowed for sketching (placement, tracks, etc.). Timeline based system
Return of Agent Based AI at Pixar: a new system integrated in Houdini. Pixar has a lot of clips (20K+ in Finding Dory)
- hierarchical connectivity (hierarchical GFSM) and procedural connectivity
- Agent brain is coded in VEX
- Pointcloud KD Tree plugin, then all VEX
- "Easy" to integrate with Bullet Solver (for rigid body dynamics)
Scene is encoded using Universal Scene Description (USD)