totalSize(block)
: sum of the sizes of RLP encoded trie nodes and contract codes in the account trie and storage tries. Items or subtrees referenced multiple times are counted multiple times. Needs a database upgrade to store totalSubtreeSize
for each internal node. Total size is the total subtree size of the state root. Storing in consensus is not needed.
sizeLimit(block)
: specified in the protocol and can only be changed for future blocks in a hard fork. (only applies to the state, chain history is not discussed here)
Since internal account trie nodes do not belong to a single entry, we add an estimated value, overheadEstimate,
to the actual entry size when calculating the individual entry size on which the actual pricing is based.
overheadEstimate
: estimated short trie node size + estimated full trie node size * (1/16 + 1/256 + ...)
stateEntrySize(account)
: RLP size of the state entry + overheadEstimate
stateEntrySize(contract)
: RLP size of the state entry + contract code size + totalSubtreeSize(storage) + overheadEstimate
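A small sketch of this entry-size accounting; the node-size constants below are made-up placeholders, not measured averages:

```
SHORT_NODE_ESTIMATE = 40    # assumed average RLP size of a short trie node (placeholder)
FULL_NODE_ESTIMATE = 530    # assumed average RLP size of a full (branch) node (placeholder)

# 1/16 + 1/256 + ... converges to 1/15: each entry shares a full node with ~15 others
# one level up, ~255 others two levels up, and so on.
OVERHEAD_ESTIMATE = SHORT_NODE_ESTIMATE + FULL_NODE_ESTIMATE // 15

def state_entry_size(entry_rlp_size, code_size=0, storage_subtree_size=0):
    # account: RLP size + overhead; contract: additionally code size and storage subtree size
    return entry_rlp_size + code_size + storage_subtree_size + OVERHEAD_ESTIMATE
```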
- either the storage price or the total size limit needs to be specified
- either as a value fixed in protocol or by a miner voting mechanism
- if a miner vote is used then it does not matter too much whether the price is controlled directly or derived from a size limit that is voted on
- miner voting in general might not be a good idea because (unlike the case of block gas limits) short-term decisions might have serious permanent effects, so the current generation of miners could have bad incentives to "squeeze" the network too hard and not care about the future
- this is also the reason why the storage fee should be burned and not paid to the miners; storing data is not a short-term activity related to the given block and its miner in particular. Price peaks in the storage fee are generally a bad thing and should not reward anyone who might then want to deliberately cause them.
- using protocol-defined constants puts this power in the hands of the whole community, and burning the fees also "rewards" the whole ecosystem over the long term.
- a fixed (protocol-defined) price would not ensure any desirable properties (there is no way to know how much is enough to limit the state size, and the Ether value can change)
- fixed total size limit seems desirable because of drive capacity limits
- using a hard limit seems dangerous because hitting it would force immediate action and could make certain operations impossible to perform (lots of corner cases)
- instead a "soft" limiting mechanism is proposed which raises the price slowly but exponentially when the size is over the desired limit
- exponential price adjustment ensures that the system can adjust to the necessary price range and is also independent of current Ether value over the long term
- ensures that total size is under the limit at least 50% of the time
- allows some fluctuations in total size but that is not a problem for a fixed size SSD setup because SSDs should not be 100% utilized anyway
controlValue(block) = LIMIT( (totalSize(block)*2^20/sizeLimit(block)-2^20), -2^16, 2^16 )
pricePerByte(block) = MAX( 2^16, pricePerByte(block.Parent) + pricePerByte(block.Parent) * controlValue(block) / 2^30 )
cumulativePrice(block) = cumulativePrice(block.Parent) + pricePerByte(block)
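The same price update written out as a Python-style sketch, just to make the integer arithmetic explicit; `clamp` stands in for LIMIT and the names are illustrative:

```
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

MIN_PRICE = 2**16  # floor on pricePerByte, as in the MAX(...) above

def update_price(parent_price, parent_cumulative, total_size, size_limit):
    # controlValue: how far totalSize is above/below sizeLimit, in 1/2^20 units, clamped
    control = clamp(total_size * 2**20 // size_limit - 2**20, -2**16, 2**16)
    # pricePerByte: exponential adjustment relative to the parent block's price
    price = max(MIN_PRICE, parent_price + parent_price * control // 2**30)
    # cumulativePrice: running sum of per-block prices, used for lazy rent charging
    cumulative = parent_cumulative + price
    return price, cumulative
```

When the control value is saturated at ±2^16, the per-block price change is bounded by a factor of 1 ± 2^-14, so the adjustment is gradual per block but exponential over thousands of blocks.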
stateEntry.storageBalance -= stateEntrySize(stateEntry) * (cumulativePrice(currentBlock) - stateEntry.lastCumulativePrice)
stateEntry.lastCumulativePrice = cumulativePrice(currentBlock)
if (stateEntry.storageBalance + stateEntry.balance < 0) { delete(stateEntry) }
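And the per-entry rent charge as a sketch with illustrative field names; the eviction check mirrors the condition above:

```
def charge_rent(entry, entry_size, cumulative_price):
    # Rent owed since the entry was last touched, paid from its storage balance.
    owed = entry_size * (cumulative_price - entry.last_cumulative_price)
    entry.storage_balance -= owed
    entry.last_cumulative_price = cumulative_price
    # If the deficit cannot be covered even by the regular balance, the entry is deleted.
    return entry.storage_balance + entry.balance < 0  # caller deletes the entry if True
```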
Once we have implemented state rent, the refunds for clearing state elements should be abandoned immediately. That refund is a liability; we should never have negative prices, even with that weird workaround. Processing and storing are two different things, and gas is for processing. Setting a storage element to zero decreases the rent for the future, but it has a positive immediate one-time processing cost similar to setting it to any other value, and it should be charged accordingly.
Thanks a lot for working on this!
I'd still argue for a fixed fee. Thinking of a hard drive capacity limit is the wrong model for this; rather, the better way to think about it is growing levels of inconvenience as storage gets larger. For example:
OTOH, the benefits of price predictability even going into the long term - being able to pay rent for a contract and guarantee its survival for, say, 2 years - are very valuable too (my paper goes into more detail on why, if you accept the assumption that social costs of increasing storage are linear, fixed fees are optimal).
Here's how we could approximately target a storage size: currently, filling a byte of storage forever costs ~$0.0004 at 5 gwei. We seem to believe that storage is relatively too large even after ~1.5 years of serious activity, so we can make that be the cost of filling a byte of storage for 4 months, so charge $0.0012 per year per byte (ie. 10 szabo per year per byte). For a regular account (~100 bytes), this would cost 1 finney per year, which I believe was already the recommended fee level.
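Rough arithmetic behind these numbers (assuming 20000 gas per new 32-byte storage slot, a 5 gwei gas price, and an ETH price of roughly $128 so that the dollar figures line up - all assumptions, not quoted values):

```
gas_per_byte = 20000 / 32            # = 625 gas per byte stored "forever"
eth_per_byte = gas_per_byte * 5e-9   # ~= 3.1e-6 ETH at 5 gwei
usd_per_byte = eth_per_byte * 128    # ~= $0.0004 (assumed ETH price)

# Re-interpret that one-time cost as 4 months of rent => 3x per year.
usd_per_byte_year = usd_per_byte * 3        # ~= $0.0012, i.e. ~10 szabo per byte per year
account_eth_per_year = 100 * 10e-6          # ~100 bytes * 10 szabo = 0.001 ETH = 1 finney/year
```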
If hypothetically the ETH market cap reaches gold (~$7t market cap, so ~$70000 price), this would mean an account costs $70/year, which seems ridiculous, but remember that under such levels of adoption it's also extremely probable that without scaling tech we'll see very high txfees, and txfees have already gone up to ~$10/tx levels before.
If we make the minimum live time 1/10 years, then someone with 1 ETH can burn it to temporarily add a megabyte to the storage for a little over a month, and someone with 10000 ETH can burn it to temporarily add 10 gigabytes. So it's quite a high margin for a DoS attack; easier to do tx fee spam.
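The margin works out as follows, at 10 szabo per byte per year and a minimum live time of 0.1 years:

```
rent_eth_per_byte_year = 10e-6   # 10 szabo per byte per year
min_live_time_years = 0.1

def bytes_addable(eth_burned):
    # Bytes that can be kept alive for the minimum live time with the given burn.
    return eth_burned / (rent_eth_per_byte_year * min_live_time_years)

bytes_addable(1)      # ~1e6 bytes  => ~1 MB for ~0.1 years (a little over a month)
bytes_addable(10000)  # ~1e10 bytes => ~10 GB for the same period
```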