Function to propose the accepted maximum block size limit for the Bitcoin blockchain.
'''
INITIAL IDEA:
Just to illustrate my idea of how the maximum block size should be determined.

My idea stems from a simple scalability metric that affects real users and the desire to use
Bitcoin: the waiting time to get your transactions confirmed on the blockchain. Anything past
45 minutes to 1 hour should be unacceptable.

Initially I wanted to measure the mean time for the transactions in a block to go from being
sent by the user (initial broadcast into mempools) until the transaction was effectively
confirmed on the blockchain, say for 2 blocks (acceptable: roughly 15-20 minutes).

When blocks get full, people start waiting unacceptable times for their transactions to come
through unless they adjust their fees. The idea is to avoid that situation at all costs and
keep the network churning at the full extent of its capabilities, without pretending that a
certain size will be right at some point in time. Nobody can predict the future, and nobody
can predict real organic usage peaks on an open financial network. Not all sustained spikes
will come from spammers; they will come from real-world use as more and more people think of
great uses for Bitcoin.

I presented this idea of measuring the mean wait time for transactions, and I was told there
is no way to reliably measure such a number: there is no consensus while transactions are
still in the mempool, and wait times could be manipulated. Such an idea would also require new
timestamp fields on the transactions, or including the median wait time in the block header
(too complex, additional storage costs).
ITERATION AFTER FEEDBACK:
This is an iteration on the one thing I believe we can all agree is 100% accurately measured:
block size. Full blocks are the reason many transactions have to wait in the mempool, so we
should be able to use the median size of recent blocks to determine whether there is a
legitimate need to increase or reduce the maximum block size.

The idea is simple. If blocks are starting to fill past a certain threshold, we double the
block size limit starting with the next block. If blocks remain within a healthy bound,
transaction wait times should be as expected for everyone on the network. If blocks are not
getting that full and the median falls below a certain threshold, we halve the maximum block
size allowed until we reach the level we need.

This is similar to what we do with hashing difficulty: it is something you cannot predict,
therefore no fixed or predicted limits should be established.

-@gubatron
'''
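
# A minimal sketch of the INITIAL IDEA described above, kept only for illustration.
# It assumes (hypothetically) that every transaction carried both a broadcast timestamp
# and a confirmation timestamp, in seconds; as noted above, no such consensus data exists
# today, so this function is not part of the actual proposal and is never called below.
def median_confirmation_wait(broadcast_and_confirm_times):
    '''
    broadcast_and_confirm_times: list of (broadcast_time, confirmed_time) tuples, in seconds.
    Returns the median wait time in seconds between broadcast and confirmation.
    '''
    waits = [confirmed - broadcast for (broadcast, confirmed) in broadcast_and_confirm_times]
    return median(waits)
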
# Halve the block size limit if we're consistently using < 15% of the current block size limit.
HALVING_THRESHOLD = 0.15
# Double the block size limit if we've been consistently using > 85% of the current block size limit.
DOUBLING_THRESHOLD = 0.85
# Number of recent blocks to check (suggested: roughly the past 2 hours' worth).
NUM_RECENT_BLOCKS_TO_CHECK = 12
# Never let the maximum block size limit drop below 1MB (today's limit), in bytes.
MIN_MAX_BLOCK_SIZE = 1000000
def block_size_limit(n_last_block_sizes, current_max_block_size_limit):
    '''
    Parameters:
    n_last_block_sizes:
        A list of the most recent NUM_RECENT_BLOCKS_TO_CHECK block sizes on the blockchain, in bytes.
        e.g. [512803, 426082, 869289, 704532, 119053, 575327, 303564, 178326, 161196, 925118, 926796, 910579]
    current_max_block_size_limit:
        The current block size limit, in bytes.

    Returns the proposed block size limit for the next block, in bytes.
    '''
    # Compare the median of the recent block sizes against the current limit.
    median_block_fillrate = median(n_last_block_sizes) / float(current_max_block_size_limit)
    if median_block_fillrate > DOUBLING_THRESHOLD:
        return current_max_block_size_limit * 2
    elif median_block_fillrate < HALVING_THRESHOLD:
        return max(current_max_block_size_limit // 2, MIN_MAX_BLOCK_SIZE)
    return current_max_block_size_limit
def median(mylist):
    '''Returns the median of a list of numbers.'''
    sorts = sorted(mylist)
    length = len(sorts)
    if not length % 2:
        # Even number of elements: average the two middle values.
        return (sorts[length // 2] + sorts[length // 2 - 1]) / 2.0
    return sorts[length // 2]
if __name__ == '__main__':
    # Test.
    current_max_block_size_limit = 1000000  # 1MB, today's limit.

    # Median fill rate is ~54%, between both thresholds: the limit should stay at 1000000.
    last_12_block_sizes = [512803, 426082, 869289, 704532, 119053, 575327, 303564, 178326, 161196, 925118, 926796, 910579]
    current_max_block_size_limit = block_size_limit(last_12_block_sizes, current_max_block_size_limit)
    print current_max_block_size_limit, "\n"

    # Median fill rate is ~88%, above DOUBLING_THRESHOLD: the limit should double to 2000000.
    last_12_block_sizes = [512803, 926082, 926796, 704532, 119053, 575327, 903564, 178326, 861196, 925118, 926796, 910579]
    current_max_block_size_limit = block_size_limit(last_12_block_sizes, current_max_block_size_limit)
    print current_max_block_size_limit, "\n"
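
    # A hedged sketch beyond the original test: simulate how the rule compounds across
    # consecutive adjustment windows, much like difficulty retargeting. The per-window
    # block sizes are made-up numbers chosen to sit above DOUBLING_THRESHOLD; they are
    # not real chain data.
    limit = 1000000
    for window in range(3):
        # Every block in this hypothetical window is ~90% full at the current limit.
        simulated_sizes = [int(limit * 0.9)] * NUM_RECENT_BLOCKS_TO_CHECK
        limit = block_size_limit(simulated_sizes, limit)
        print limit, "\n"  # Expect 2000000, then 4000000, then 8000000.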
To game this adjustment window, miners would have to be able to collude very quickly.
Measuring confirmation time is a good idea and here's why: if waiting times actually increase, the fee market has failed to solve the problem.
Unfortunately, as a metric to derive max blocksize, confirmation time suffers from the same problem as actual blocksize -- the miner can influence it deliberately.
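To make that manipulation concern concrete, here is a minimal sketch; it assumes the gist's block_size_limit and NUM_RECENT_BLOCKS_TO_CHECK are in scope, and it uses a made-up organic fill rate of 30%. A miner or cartel that wins a majority of the 12-block window and pads its own blocks to the full limit with self-paying transactions pushes the median fill rate over the doubling threshold regardless of real demand.

# Hypothetical illustration only: padding a majority of the window's blocks shifts the median.
ORGANIC_FILL = 0.30  # assumed fill rate of blocks driven by real demand (made-up figure)
LIMIT = 1000000

def window_with_stuffed_blocks(num_stuffed):
    '''Builds a 12-block window where num_stuffed blocks are padded to the full limit.'''
    organic = [int(LIMIT * ORGANIC_FILL)] * (NUM_RECENT_BLOCKS_TO_CHECK - num_stuffed)
    stuffed = [LIMIT] * num_stuffed
    return organic + stuffed

for num_stuffed in range(NUM_RECENT_BLOCKS_TO_CHECK + 1):
    new_limit = block_size_limit(window_with_stuffed_blocks(num_stuffed), LIMIT)
    print num_stuffed, new_limit
# With 0-6 stuffed blocks the proposed limit stays at 1000000; at 7 or more (a majority
# of the window) the median hits the full limit and the cap doubles to 2000000.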