@vbuterin
Last active October 29, 2016 23:38
A Data Availability Backbone

We can create a simplified scalable consensus algorithm that attempts to provide guarantees of data availability only; that is, the basic "unit" of consensus is a data block, a small header that cryptographically points to, say, 1 MB of data, and the sole purpose of the system is to guarantee that the data committed to by these block headers is readily available. This is a challenging problem to solve scalably because the usual technique for scalable validation, incentivized interactive verification with penalties, does not work well for data availability: a malicious attacker can always reveal the data just before they are supposed to be penalized, so any challenge can be made to look unfounded. However, using the mechanism described here we can combine rapid shuffling and a form of graceful degradation to solve the scalable data availability validation problem rather effectively.

We can model the protocol by assuming access to a "DHT oracle" where nodes can call DHTget(hash) and get back the preimage if the preimage has been published, otherwise it returns NULL. We assume that if DHTget(hash) != NULL at time T, then DHTget(hash) != NULL at all times T' > T. The oracle may fail occasionally by giving NULL when it should have provided data, but we assume that this happens with probability substantially less than HONEST_FRAC / CHECK_DEPTH. We also assume colluding nodes can share data between each other without publishing it.
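As a concrete illustration, here is a minimal Python sketch of this DHT oracle model, assuming a simple in-memory store; the class and function names (DHTOracle, publish, failure_rate) are hypothetical and introduced only for illustration:

    import hashlib
    import random

    def sha3(data):
        # Stand-in hash function for this sketch.
        return hashlib.sha256(data).digest()

    class DHTOracle:
        # Models the DHT oracle: once a preimage has been published it stays
        # available, apart from rare spurious failures on reads.
        def __init__(self, failure_rate=0.0):
            self.store = {}
            self.failure_rate = failure_rate

        def publish(self, data):
            h = sha3(data)
            self.store[h] = data
            return h

        def DHTget(self, h):
            # Returns the preimage if it has been published, otherwise NULL (None).
            # May spuriously return None with small probability, assumed to be
            # well below HONEST_FRAC / CHECK_DEPTH.
            if random.random() < self.failure_rate:
                return None
            return self.store.get(h, None)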

Basic Protocol

The system works as follows. Suppose that there is a primary "manager chain" and N shards. The manager chain keeps track of a validator set and contains a function getValidator(shard, height) which pseudorandomly samples a validator (we assume the randomness is not predictable ahead of time because one of the techniques recommended in http://vitalik.ca/files/randomness.html is used). getValidator(shard, H) should only provide a (consistent) response if the block height at which the function is called is at least H; otherwise it should fail.
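As a rough sketch, the manager chain's getValidator might behave as follows, assuming the per-height seed comes from one of the unpredictable-randomness techniques in the linked post (here the seed is simply supplied; ManagerChain and add_block are hypothetical names):

    import hashlib

    class ManagerChain:
        def __init__(self, validators):
            self.validators = validators  # current validator set (addresses)
            self.height = 0               # current head height
            self.seeds = {}               # height -> unpredictable seed (bytes)

        def add_block(self, seed):
            # In reality the seed comes from an unpredictable randomness scheme;
            # for this sketch it is just passed in.
            self.height += 1
            self.seeds[self.height] = seed

        def getValidator(self, shard, H):
            # Only answers once the chain has actually reached height H; before
            # that the sample would be predictable, so the call fails.
            if self.height < H:
                raise ValueError("height %d not yet reached" % H)
            data = self.seeds[H] + shard.to_bytes(4, "big") + H.to_bytes(4, "big")
            h = hashlib.sha256(data).digest()
            return self.validators[int.from_bytes(h, "big") % len(self.validators)]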

Every time the head of the manager chain changes (ie. a new block is added to it), every validator calls getValidator for every shard, using the new height of the manager chain as the height argument, and saves the set of shards for which the function returns the validator's own address (NOTE: if the number of shards is very large, validators can use an incentivized interactive protocol with other nodes to do this step scalably). On each of those shards, the validator determines what the "head" of the chain for that shard is, and creates a data block on that shard, consisting of (i) a pointer to the previous head, (ii) a pointer to 1 MB of data, and (iii) a signature. Validators are paid a reward if they make a block that ends up being in the chain, and are penalized for creating blocks that do not end up in the chain ("dunkles"), similarly to Casper.
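The per-validator routine on each new manager-chain block might look roughly like the following sketch; get_head, sign, publish and pending_data are hypothetical helpers standing in for the shard fork choice, signing, and networking layers:

    MAX_DATA_SIZE = 2**20  # ~1 MB of data per block

    def on_new_manager_block(manager, my_address, my_key, num_shards, pending_data):
        new_height = manager.height
        # Shards for which this validator has been sampled at the new height.
        my_shards = [s for s in range(num_shards)
                     if manager.getValidator(s, new_height) == my_address]
        for shard in my_shards:
            head = get_head(shard)                  # tip of the longest valid chain
            data = pending_data[shard][:MAX_DATA_SIZE]
            block = {
                "shard": shard,
                "height": new_height,
                "prev": head,                       # (i) pointer to the previous head
                "data_root": sha3(data),            # (ii) pointer to ~1 MB of data
            }
            block["sig"] = sign(my_key, block)      # (iii) signature
            publish(shard, block, data)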

The process for computing the head is a simple one of "longest valid chain wins". The definition of validity is, in theory:

  • Every block on the chain must contain a valid signature signed by the correct validator
  • Every block on the chain must point to a piece of data of 1 MB (or whatever amount) or less in size, and this data must be available (ie. can be readily downloaded from the P2P network)

If every validator were required to check the second condition for the entire chain, then this would not be scalable, as it would require the validator to download all data in all chains. Instead, we only require the validator to check down to some depth CHECK_DEPTH. Note that if CHECK_DEPTH is constant and sufficiently high (eg. 50), then in an honest-majority model this algorithm works well, because it is exceedingly unlikely that a validator set with an honest majority would yield enough consecutive malicious validators to confirm an unavailable block.
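A sketch of this validity check with a bounded check depth; is_signed_by and parent are hypothetical stand-ins for signature verification and chain traversal, and dht is the oracle from the model above:

    CHECK_DEPTH = 50  # constant depth in the honest-majority variant

    def is_valid_chain(head, manager, dht, check_depth=CHECK_DEPTH):
        # Walk back at most check_depth blocks from the head and verify that
        # (i) each block is signed by the correct sampled validator and
        # (ii) the data each block commits to is actually available.
        block = head
        for _ in range(check_depth):
            if block is None:   # reached genesis
                return True
            expected = manager.getValidator(block["shard"], block["height"])
            if not is_signed_by(block, expected):
                return False
            if dht.DHTget(block["data_root"]) is None:
                return False    # data unavailable: reject this chain
            block = parent(block)
        return True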

Cryptoeconomic Security and Majority Collusions

However, the honest majority assumption is not universally accepted, and so we can modify the algorithm to make a stronger claim of safety under a "cryptoeconomic rationality plus honest minority" assumption, where we assume that most validators are rational, and possibly colluding, and at least some small percentage (eg. 10%) are honest (we'll use HONEST_FRAC to refer to this value). In this case, we make the check depth mechanism dynamic. Each validator maintains two counters, check_height and max_check_height, for each shard, starting off with check_height = max_check_height = 0. If they are asked to make a block on some shard, then they check the validity of all blocks starting from max_check_height.
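A minimal sketch of this per-shard state (ShardCheckState and blocks_to_check are hypothetical names):

    class ShardCheckState:
        # Per-shard counters for the dynamic check-depth rule.
        def __init__(self):
            self.check_height = 0
            self.max_check_height = 0

        def blocks_to_check(self, head):
            # When asked to make a block on this shard, the validator checks
            # the validity of every block from max_check_height up to the head.
            blocks = []
            block = head
            while block is not None and block["height"] >= self.max_check_height:
                blocks.append(block)
                block = parent(block)
            return list(reversed(blocks))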

If a validator sees a new valid block on top of the existing head, and check_height < new_height - MIN_CHECK_DEPTH, then it sets check_height += 2 on that shard. If a validator sees a "dunkle", then it sets check_height -= 1 / HONEST_FRAC for that shard. In both cases, it then sets max_check_height = max(max_check_height, check_height) (max_check_height may decrease in one circumstance: if there is a long-range fork of the chain, then it decreases to the height of the common ancestor of the new head and the old head).
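The counter updates from the two paragraphs above, as a sketch; the value of MIN_CHECK_DEPTH below is illustrative (the text only names the parameter) and common_ancestor_height is a hypothetical helper:

    HONEST_FRAC = 0.1
    MIN_CHECK_DEPTH = 5  # illustrative value

    def on_new_valid_block(state, new_height):
        # A new valid block extended the head of this shard.
        if state.check_height < new_height - MIN_CHECK_DEPTH:
            state.check_height += 2
        state.max_check_height = max(state.max_check_height, state.check_height)

    def on_dunkle(state):
        # A block on this shard failed to make it into the chain.
        state.check_height -= 1 / HONEST_FRAC
        state.max_check_height = max(state.max_check_height, state.check_height)

    def on_long_range_fork(state, old_head, new_head):
        # The one case where max_check_height may decrease: fall back to the
        # height of the common ancestor of the new head and the old head.
        state.max_check_height = min(state.max_check_height,
                                     common_ancestor_height(old_head, new_head))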

This ensures that as long as at least HONEST_FRAC of validators are honest, if there is an attack involving validators being bribed or colluding, then validators will fall back to a more cautious strategy. However, note that this does allow an attacker controlling at least HONEST_FRAC of the validators to slow down verification on any given shard, at the cost (to the attacker) of creating stale blocks. The master chain keeps track of the number of dunkles on each shard; if a shard has more than HONEST_FRAC of its blocks being dunkles, then the block interval on that shard is increased (ie. an increasing fraction of the time, getValidator(shard) simply returns undefined) and the block rewards and penalties are increased, allowing validators to take a longer amount of time to validate the number of blocks that they need to via the strategy described above, at the cost of decreasing that shard's throughput.
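A rough sketch of this graceful-degradation rule; the text does not pin down the exact scaling, so the linear adjustment below is just one illustrative choice:

    def shard_parameters(dunkle_fraction, base_reward, base_penalty):
        # If more than HONEST_FRAC of a shard's recent blocks are dunkles, slow
        # the shard down (getValidator returns undefined for a growing fraction
        # of slots) and scale rewards and penalties up, so validators can spend
        # longer validating at the cost of shard throughput.
        if dunkle_fraction <= HONEST_FRAC:
            return 1.0, base_reward, base_penalty
        slowdown = dunkle_fraction / HONEST_FRAC   # illustrative scaling
        fill_rate = 1.0 / slowdown                 # fraction of slots actually filled
        return fill_rate, base_reward * slowdown, base_penalty * slowdown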

Hence, this scheme achieves safety even against attackers that can corrupt specified validators with at most 1 - HONEST_FRAC success probability and up to a maximum of ~33-50% of the total validator set, and is "scalable" in the sense that given validators with O(N) computational and bandwidth bounds it can validate the availability of O(N^2) data, though it does have the property that an attack with O(N * t) economic power can greatly decrease the throughput of a single shard for duration t (this is deemed acceptable because simple transaction spam attacks can already do something similar).
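To make the O(N^2) claim concrete, a back-of-the-envelope calculation with made-up numbers, assuming the number of validators is comparable to the number of shards so that each validator is sampled for roughly one shard per manager block:

    N = 1000           # number of shards, and manager blocks in the window considered
    CHECK_DEPTH = 50
    BLOCK_MB = 1       # MB of data per shard block

    total_data_mb = N * N * BLOCK_MB               # N shards * N blocks each: O(N^2)
    per_validator_mb = N * CHECK_DEPTH * BLOCK_MB  # ~1 shard per manager block, CHECK_DEPTH blocks re-checked
    print(total_data_mb, per_validator_mb)         # 1000000 vs 50000

    # The data whose availability the system vouches for grows as O(N^2),
    # while each validator's download and verification load grows only as O(N).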
