Based on https://github.com/protolambda/lmd-ghost/
Simulation config:
```go
config := &sim.SimConfig{
	ValidatorCount:       40000,
	LatencyFactor:        0.8,
	SlotSkipChance:       0.3,
	BaseAttestWeight:     100,
	MaxExtraAttestWeight: 10,
	Blocks:               10000,
	AttestationsPerBlock: 1000,
	JustifyEpochsAgo:     7,
	FinalizeEpochsAgo:    10,
	ForkChoiceRule:       "<fill in>",
}
```
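For orientation only, a driver might look roughly like the sketch below. The import path and the runner call are hypothetical stand-ins, not confirmed API; check the repository for the actual entry point.

```go
package main

import (
	"fmt"

	"github.com/protolambda/lmd-ghost/sim" // import path is an assumption; use the repo's module path
)

func main() {
	// Same config as above; see the repository for per-field documentation.
	config := &sim.SimConfig{
		ValidatorCount: 40000,
		Blocks:         10000,
		ForkChoiceRule: "<fill in>", // e.g. "proto_array", the rule discussed below
		// ... remaining fields as listed above ...
	}
	fmt.Printf("simulating %d blocks with %d validators\n", config.Blocks, config.ValidatorCount)
	// Hand the config to whatever runner the repo exposes, e.g.:
	// sim.RunSim(config) // hypothetical call, not a confirmed API
}
```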
Notes:
- results between fork-choice rules are not exactly the same, but similar: sometimes there is no single "best" option because two (or more) blocks have equal attesting weight, and any of them may be chosen. The simulation creates new blocks based on this choice, so from that point on the blocks generated by the RNG may differ between rules.
- Different algorithms perform better in different scenarios; with tweaks to the config, a different algorithm may come out on top.
Do note that some algorithms are more useful than others:
- stateful algorithms can provide the fork-choice head of any block in the DAG (see the interface sketch after this list).
- some algorithms are easier to manage and prune than others: caches can be troublesome to invalidate, and the full state may not fit in memory.
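For illustration, here is a hypothetical shape of such a rule in Go (the names are mine, not the repo's API). The point of being stateful is that `HeadFrom` can be answered for any block without re-walking the whole tree.

```go
// Hypothetical interface, for illustration only; the repo's actual API may differ.
package forkchoice

type BlockRef [32]byte // e.g. a block root

type ForkChoiceRule interface {
	// OnBlock registers a new block under its parent in the DAG.
	OnBlock(block BlockRef, parent BlockRef)
	// OnAttestation adds (or moves) a validator's latest vote to a target block.
	OnAttestation(validator uint64, target BlockRef, weight uint64)
	// HeadFrom returns the fork-choice head of the subtree rooted at start.
	// A stateful rule keeps enough per-node bookkeeping to answer this for
	// any block; a stateless one would have to recompute per query.
	HeadFrom(start BlockRef) BlockRef
}
```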
"Winner" (imho): proto_array
It does well for several reasons (a data-structure sketch follows this list):
- you can look up the head for any node in the graph in O(1)
- triggering pruning is easy: prune once `gh.indices[gh.dag.Finalized] > X`, with `X` being the number of unused nodes you are willing to keep around.
- Note: you can prune more aggressively by going beyond the sequential property of the nodes (i.e. their insertion order): also remove nodes that were inserted after the finalized node but are not connected to it. However, this is more complex, and does not win you much, if anything at all.
- it is fast
- it is completely stateful, but low on memory (just 4 arrays of integers for state, plus normal node bookkeeping).
- it is very open to extension: you do not strictly need a second DAG structure, as you can use these arrays as the DAG itself. My guess is that the fork-choice simulation time for 10000 blocks could be cut by 50% if the overhead and memory of the full pointer- and slice-based DAG were removed.
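To make the list above concrete, here is a minimal, self-contained sketch of the proto-array idea (my own simplified version, not the repo's code): nodes live in one flat slice in insertion order, each carrying only a few integers; the head lookup is a single array read; and the pruning trigger is a plain index comparison. For simplicity it only handles additive weight changes, while the real rule also has to move and remove votes.

```go
package main

import "fmt"

const none = -1 // sentinel index for "no node"

// protoNode: one entry in the flat proto-array. The slice of these IS the DAG:
// parents always have a lower index than their children (insertion order).
type protoNode struct {
	parent         int    // index of the parent node, or none for the root
	weight         uint64 // weight of this node plus all of its descendants
	bestChild      int    // child with the highest subtree weight, or none
	bestDescendant int    // node reached by repeatedly following bestChild
}

type protoArray struct {
	nodes []protoNode
}

// addNode appends a block; its parent (if any) must already be in the array.
func (p *protoArray) addNode(parent int) int {
	i := len(p.nodes)
	p.nodes = append(p.nodes, protoNode{parent: parent, bestChild: none, bestDescendant: i})
	if parent != none {
		// Ancestors further up are refreshed on the next applyDeltas pass.
		p.maybeUpdateBest(parent, i)
	}
	return i
}

// applyDeltas adds per-node weight increases and refreshes best children/descendants.
// A single backwards pass is enough because children always come after their parents.
func (p *protoArray) applyDeltas(deltas []uint64) {
	for i := len(p.nodes) - 1; i >= 0; i-- {
		n := &p.nodes[i]
		n.weight += deltas[i]
		if n.parent != none {
			deltas[n.parent] += deltas[i] // roll this subtree's change up to the parent
			p.maybeUpdateBest(n.parent, i)
		}
	}
}

// maybeUpdateBest keeps parent's bestChild/bestDescendant consistent with child.
func (p *protoArray) maybeUpdateBest(parentIdx, childIdx int) {
	parent := &p.nodes[parentIdx]
	child := &p.nodes[childIdx]
	switch {
	case parent.bestChild == childIdx:
		parent.bestDescendant = child.bestDescendant // already best: sync descendant
	case parent.bestChild == none || child.weight > p.nodes[parent.bestChild].weight:
		parent.bestChild = childIdx
		parent.bestDescendant = child.bestDescendant
	}
}

// head is the O(1) lookup: the fork-choice head of the subtree rooted at `from`.
func (p *protoArray) head(from int) int {
	return p.nodes[from].bestDescendant
}

func main() {
	p := &protoArray{}
	root := p.addNode(none)
	a := p.addNode(root)
	b := p.addNode(root) // fork: two children of the root
	c := p.addNode(b)

	// Attestations arrive: the b -> c branch ends up heavier than a.
	deltas := make([]uint64, len(p.nodes))
	deltas[a] = 10
	deltas[c] = 25
	p.applyDeltas(deltas)

	fmt.Println("head from root:", p.head(root), "== c:", p.head(root) == c)

	// Pruning trigger from the note above: once the finalized node's index
	// exceeds X (the number of unused nodes you are willing to keep around),
	// drop everything before it and re-offset the stored indices.
	finalized, X := b, 1
	fmt.Println("should prune:", finalized > X)
}
```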