Last active
August 11, 2017 06:15
proof_of_work_in_time_and_space.ipynb
| { | |
| "cells": [ | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "# Proof of Work in Time and Space\n", | |
| "\n", | |
| "This article was inspired by a talk called \"Combining Proofs of Space\n", | |
| "and Proofs of Time\" by Bram Cohen (@bramcohen) at BPASE '17:\n", | |
| "https://www.youtube.com/watch?v=aYG0NxoG7yw\n", | |
| "\n", | |
| "Bram Cohen described, in the abstract, how to combine a traditional\n", | |
| "proof of work system based on processing resources with a proof of\n", | |
| "work system based on disk space. Here I will describe an example of\n", | |
| "such a system in more detail.\n", | |
| "\n", | |
| "Much more comprehensive work has been done in \"Proofs of Space\":\n", | |
| "https://eprint.iacr.org/2013/796.pdf\n", | |
| "A cryptocurrency called \"Burstcoin\" has been developed based on these ideas.\n", | |
| "\n", | |
| "This notebook differs from both in that it seeks to combine the two\n", | |
| "kinds of Proof of Work.\n" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "## Proof of Work Use Case: Email\n", | |
| "\n", | |
| "Let's start with a little motivation and background on proofs of work. Say, for example, you want to send me an email. The trouble is that I don't want to get spammed by any and all snake oil salesmen, so how can I tell that you aren't one of them? One scheme that has been devised (though it has not been widely adopted) is to impose a minimal cost on each sender. The idea is that for a spammer this cost is prohibitive, whereas for any legitimate sender it is negligible. The reason it is so costly for the spammer is that her business model relies on a very few suckers among tens of thousands taking the bait. Since it costs her almost nothing to send tens of thousands of emails, even just the one sucker is enough to make the whole endeavour worthwhile. If we impose even a minimal cost per email, however, the calculation becomes quite different: now she requires a vast quantity of resources to find even just one sucker. So maybe she will decide to instead spend her time on something more worthwhile for\n", | |
| "humanity.\n", | |
| "\n", | |
| "For such a scheme to work, we need a way for a sender to prove to the recipient that they have spent a minimum of resources on a specific task. If I as a recipient can easily validate that the sender of an email has spent scarce resources for no other purpose than to convince me that they really want me to get their message, then maybe I'm not just one of a few thousand potential suckers, and it may be worthwhile for me to read their message." | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "## Finding a Needle in a Haystack\n", | |
| "\n", | |
| "Cryptographic hash functions turn out to be a suitable building block for\n", | |
| "these kinds of proofs. Ideally, a hash function has the property that\n", | |
| "any bit of its output has a 50/50 chance of being 1 or 0. In other\n", | |
| "words, if we look at the first bit of the output for any given input, the\n", | |
| "chance of it being 0 is 1 in 2. For the first two consecutive\n", | |
| "bits, the chance of both being 0 is 1 in 4, for the first three 1 in 8,\n", | |
| "and so forth. This is the same behaviour we would expect from a random number, so just for illustration let's compare the output of sha256 to /dev/urandom. " | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": 1, | |
| "metadata": { | |
| "collapsed": true | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "# First we need some boilerplate/setup code\n", | |
| "# you can ignore this cell if you like.\n", | |
| "\n", | |
| "import os\n", | |
| "import hashlib\n", | |
| "from binascii import hexlify, unhexlify\n", | |
| "\n", | |
| "def random_nonce():\n", | |
| " return os.urandom(32)\n", | |
| "\n", | |
| "def hashfn(*inputs):\n", | |
| " h = hashlib.new('sha256')\n", | |
| " for data in inputs:\n", | |
| " h.update(data)\n", | |
| " return h.digest()\n", | |
| "\n", | |
| "null_hash = b\"e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855\"\n", | |
| "assert hexlify(hashfn(b\"\")) == null_hash\n", | |
| "assert hashfn(b\"foobar\") == hashfn(b\"foo\", b\"bar\")\n", | |
| "assert hashfn(b\"foobar\") != hashfn(b\"bar\", b\"foo\")\n", | |
| "\n", | |
| "def bin_repr(val, pad=256):\n", | |
| " if isinstance(val, bytes):\n", | |
| " val = int(hexlify(val), 16)\n", | |
| " else:\n", | |
| " assert isinstance(val, int)\n", | |
| " return bin(val)[2:].zfill(pad)\n", | |
| "\n", | |
| "# sanity check/tests for binary representation of a bytes string\n", | |
| "assert bin_repr(b\"\\x00\", pad=8) == \"00000000\"\n", | |
| "assert bin_repr(b\"\\x80\", pad=8) == \"10000000\"\n", | |
| "assert bin_repr(b\"\\x08\", pad=8) == \"00001000\"\n", | |
| "assert bin_repr(b\"\\x08\\xff\", pad=16) == \"0000100011111111\"\n" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": 2, | |
| "metadata": {}, | |
| "outputs": [ | |
| { | |
| "name": "stdout", | |
| "output_type": "stream", | |
| "text": [ | |
| "0011 0110 0111 0010 1001 1110 1000 0000 1001 0100 1000 1011 0010 1101 1010 1011 0100 1001 0110 0010 0011 1001 1111 0110 1100 0111 1110 1000 1110 0101 0000 0100 0001 0011 1011 1000 0011 1000 1110 0010 0001 1100 0000 1010 1110 1101 1001 1011 " | |
| ] | |
| } | |
| ], | |
| "source": [ | |
| "# To start off with, let's just look at a small sample of random\n", | |
| "# bits\n", | |
| "\n", | |
| "for i in range(48):\n", | |
| " random_data = random_nonce()\n", | |
| " print(bin_repr(random_data)[:4], end=\" \")" | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": 3, | |
| "metadata": {}, | |
| "outputs": [ | |
| { | |
| "name": "stdout", | |
| "output_type": "stream", | |
| "text": [ | |
| "1000 1110 0111 0100 0001 0111 1110 1111 0000 0010 1100 0110 1000 1011 0100 0110 1110 1100 1011 1111 0001 0101 1011 1000 0000 0111 1100 0100 1000 1100 0001 0010 0111 1111 0110 0011 0011 0010 1100 0100 0111 0101 1010 1101 0000 0100 0010 0001 " | |
| ] | |
| } | |
| ], | |
| "source": [ | |
| "# and compare them to the output of sha256\n", | |
| "for i in range(48):\n", | |
| " input_data = random_nonce()\n", | |
| " output_data = hashfn(input_data)\n", | |
| " print(bin_repr(output_data)[:4], end=\" \")" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "Of course this is just so you can roughly see what we're dealing with.\n", | |
| "Let's demonstrate more thoroughly that the number of leading zero bits\n", | |
| "is distributed as we expect it to be." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": 4, | |
| "metadata": {}, | |
| "outputs": [ | |
| { | |
| "name": "stdout", | |
| "output_type": "stream", | |
| "text": [ | |
| "leading counts odds\n", | |
| "zeros random hashes random hashes\n", | |
| " 1 499772 500091 1/2 1/2 \n", | |
| " 2 249779 249584 1/4 1/4 \n", | |
| " 3 124916 124723 1/8 1/8 \n", | |
| " 4 62145 62350 1/16 1/16 \n", | |
| " 5 31086 31176 1/32 1/32 \n", | |
| " 6 15656 15638 1/64 1/64 \n", | |
| " 7 7786 7925 1/128 1/126 \n", | |
| " 8 3896 3943 1/257 1/254 \n", | |
| " 9 1951 2032 1/513 1/492 \n", | |
| " 10 970 1012 1/1031 1/988 \n", | |
| " 11 474 504 1/2110 1/1984\n", | |
| " 12 247 235 1/4049 1/4255\n" | |
| ] | |
| } | |
| ], | |
| "source": [ | |
| "trials = 1000000\n", | |
| "random_counts = [0] * 256\n", | |
| "hash_counts = [0] * 256\n", | |
| "\n", | |
| "for i in range(trials):\n", | |
| " random_data = random_nonce()\n", | |
| " hash_output_data = hashfn(random_data)\n", | |
| " for index, bit in enumerate(bin_repr(hash_output_data)):\n", | |
| " if bit == \"1\":\n", | |
| " break\n", | |
| " hash_counts[index] += 1\n", | |
| " \n", | |
| " for index, bit in enumerate(bin_repr(random_data)):\n", | |
| " if bit == \"1\":\n", | |
| " break\n", | |
| " random_counts[index] += 1\n", | |
| "\n", | |
| "print(\"leading counts odds\")\n", | |
| "print(\"zeros random hashes random hashes\")\n", | |
| "for index in range(12):\n", | |
| " random_count = random_counts[index]\n", | |
| " hash_count = hash_counts[index]\n", | |
| " random_under = int(round(trials / random_count))\n", | |
| " hash_under = int(round(trials / hash_count))\n", | |
| " print(\n", | |
| " f\"{index + 1:>5} \"\n", | |
| " f\"{random_count:>6} {hash_count:>6} \"\n", | |
| " f\"1/{random_under:<4} 1/{hash_under:<4}\"\n", | |
| " )" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "We can see that the odds for a hash to have a certain number of leading zeros is what we would expect if the output were random. Based on the above numbers, it appears that the odds for `n` leading zeros are `1/2^n`.\n", | |
| "\n", | |
| "This means that an email recipient can pose a challenge to a sender of the following form: for this random input, find a hash with at least 10 leading zeros. If such a hash is provided by the sender, we can assume that on average 1024 hashes must have been tested before it could be found. If the recipient receives such a hash and all the inputs required to produce it, they can verify that:\n", | |
| "\n", | |
| " 1. One of the inputs was the one they provided, so that the hash must have been freshly computed.\n", | |
| " 2. The hash was indeed produced correctly as posed by the challenge.\n", | |
| "\n", | |
| "This verification step is very cheap compared to the cost of finding the hash in the first place, so the recipient can be satisfied that a certain amount of CPU time was probably consumed to produce it, without investing much themselves. " | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "## Proof of Work in Time\n", | |
| "\n", | |
| "Now that we have reviewed the principle, let's look at a toy implementation of such a scheme." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": 5, | |
| "metadata": { | |
| "collapsed": true | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "def get_max_witness_hash(difficulty):\n", | |
| " bin_mwh = (\n", | |
| " \"0\" * difficulty +\n", | |
| " \"1\" +\n", | |
| " \"0\" * (255 - difficulty) \n", | |
| " )\n", | |
| " int_mwh = int(bin_mwh, 2)\n", | |
| " bytes_mwh = int_mwh.to_bytes(32, byteorder='big')\n", | |
| " assert len(bin_mwh) == 256\n", | |
| " assert len(bytes_mwh) == (256 / 8)\n", | |
| " return bytes_mwh\n", | |
| "\n", | |
| "# Since we will be using lexicographic comparison, we\n", | |
| "# do a little sanity check for the byte order. We always\n", | |
| "# expect a smaller difficulty to produce a larger max\n", | |
| "# valid witness.\n", | |
| "for dif in range(20):\n", | |
| " assert get_max_witness_hash(dif) > get_max_witness_hash(dif + 1)\n", | |
| " \n", | |
| "def find_pow_witness(challenge_nonce, difficulty):\n", | |
| " max_witness_hash = get_max_witness_hash(difficulty)\n", | |
| " rounds = 0\n", | |
| " witness_hash = b\"\\xff\" * 32\n", | |
| " while witness_hash > max_witness_hash:\n", | |
| " rounds += 1\n", | |
| " witness_nonce = random_nonce()\n", | |
| " witness_hash = hashfn(challenge_nonce, witness_nonce)\n", | |
| " return rounds, witness_nonce, witness_hash\n", | |
| "\n", | |
| "def is_valid_witness(challenge_nonce, witness_nonce, difficulty):\n", | |
| " verification_hash = hashfn(challenge_nonce, witness_nonce)\n", | |
| " return verification_hash < get_max_witness_hash(difficulty)" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "As we can see, the validation function `is_valid_witness` only needs to execute the hash function `hashfn` once, whereas `find_pow_witness` must churn through random inputs in the hope that at some point it finds an appropriate `witness_hash`. Here we have introduced the concept of difficulty, which in our case simply corresponds to a certain number of leading zeros: the greater the difficulty, the more zeros required. The following code demonstrates how the number of proof of work witnesses that can be found within one second falls as the difficulty increases." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": 6, | |
| "metadata": {}, | |
| "outputs": [ | |
| { | |
| "name": "stdout", | |
| "output_type": "stream", | |
| "text": [ | |
| "difficulty: 0 witnesses: 13796 rounds/witness: 2.0\n", | |
| "difficulty: 1 witnesses: 10537 rounds/witness: 4.0\n", | |
| "difficulty: 2 witnesses: 6087 rounds/witness: 7.9\n", | |
| "difficulty: 3 witnesses: 2608 rounds/witness: 15.9\n", | |
| "difficulty: 4 witnesses: 1559 rounds/witness: 31.7\n", | |
| "difficulty: 5 witnesses: 903 rounds/witness: 62.5\n" | |
| ] | |
| } | |
| ], | |
| "source": [ | |
| "import time\n", | |
| "\n", | |
| "for difficulty in range(6):\n", | |
| " t0 = time.time()\n", | |
| " all_rounds = []\n", | |
| " elapsed = 0\n", | |
| " while elapsed < 1:\n", | |
| " challenge_nonce = random_nonce()\n", | |
| " num_rounds, witness_nonce, witness_hash = find_pow_witness(\n", | |
| " challenge_nonce, difficulty\n", | |
| " )\n", | |
| " assert is_valid_witness(challenge_nonce, witness_nonce, difficulty)\n", | |
| " all_rounds.append(num_rounds)\n", | |
| " elapsed = time.time() - t0\n", | |
| " mean_rounds_per_witness = sum(all_rounds) / len(all_rounds)\n", | |
| " print(\n", | |
| " f\"difficulty: {difficulty} \"\n", | |
| " f\"witnesses: {len(all_rounds):>5} \"\n", | |
| " f\"rounds/witness: {mean_rounds_per_witness: 6.1f}\"\n", | |
| " )" | |
| ] | |
| }, | |
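The measured rounds/witness track a simple analytic expectation. A candidate hash is accepted when it falls below `get_max_witness_hash(difficulty)`, which equals 2^(255 - difficulty) out of 2^256 possible values, i.e. with probability 1/2^(difficulty + 1). Assuming sha256 outputs are uniform, the number of attempts is geometrically distributed, so the mean doubles with each difficulty step:

```python
# A hash is accepted with probability 2**(255 - d) / 2**256,
# i.e. 1 / 2**(d + 1), so the mean of the geometric number of
# attempts per witness is 2**(d + 1).
for d in range(6):
    print(f"difficulty: {d} expected rounds/witness: {2 ** (d + 1)}")
```

This matches the table above: 2, 4, ~8, ~16, ~32, ~64 rounds per witness. Note that even difficulty 0 needs 2 rounds on average, because of the "1" bit in the threshold.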
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "## Proof of Work in Space\n", | |
| "\n", | |
| "Any proof of work scheme requires the investment of resources; that seems to be unavoidable. It would be nice, however, if we had more choice in the resources we use. Unused online storage, such as the empty space on so many hard disks, is one such resource, for which the following proof of work scheme might be suitable.\n", | |
| "\n", | |
| "Again we will look at the case of an email recipient who wishes to avoid being spammed. This time, the sender will have to compute a set of hashes at the outset and store them somewhere. Since he has to compute them up front, the inputs must also be known at the outset. In this case we will simply choose the email address of the recipient as one of the inputs." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": 7, | |
| "metadata": { | |
| "collapsed": true | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "def gen_witness_hashes(seed_nonce, difficulty):\n", | |
| " num_hashes = 2 ** difficulty\n", | |
| " witness_nonces = (random_nonce() for i in range(num_hashes))\n", | |
| " witness_hashes = [\n", | |
| " (hashfn(seed_nonce, witness_nonce), witness_nonce)\n", | |
| " for witness_nonce in witness_nonces\n", | |
| " ]\n", | |
| " witness_hashes.sort()\n", | |
| " return witness_hashes" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "The recipient can now pose a different challenge to the sender: for this nonce, give me the closest hash you have to it, as well as the inputs you generated it with. If the distance between the two hashes is low enough, the recipient can be sure that it was stored with many others, provided the response time was low enough that calculating it on the fly would have been infeasible.\n", | |
| "\n", | |
| "Here we generate 2^18 hashes, which takes approximately six seconds. If we receive a response to a challenge in much less time than this, then we can assume the hashes must already have been calculated and stored in some kind of memory." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": 8, | |
| "metadata": {}, | |
| "outputs": [ | |
| { | |
| "name": "stdout", | |
| "output_type": "stream", | |
| "text": [ | |
| "duration: 6 seconds\n" | |
| ] | |
| } | |
| ], | |
| "source": [ | |
| "t0 = time.time()\n", | |
| "witness_hashes = gen_witness_hashes(b\"[email protected]\", difficulty=18)\n", | |
| "print(\"duration:\", round(time.time() - t0), \"seconds\")" | |
| ] | |
| }, | |
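For a rough sense of the space being committed: each table entry holds a 32-byte sha256 digest plus its 32-byte nonce, so ignoring Python object overhead, the 2^18 entries occupy about 16 MiB of raw payload:

```python
# Back-of-the-envelope storage estimate for the precomputed table:
# 32 bytes of digest + 32 bytes of nonce per entry.
difficulty = 18
entry_size = 32 + 32                 # bytes per (hash, nonce) pair
total_bytes = 2 ** difficulty * entry_size
print(total_bytes / 2 ** 20, "MiB")  # -> 16.0 MiB
```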
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "Based on this we can again implement a toy example, where the sender simply does a binary search on the hashes he has previously produced. Since we are using precalculated values, we cannot keep calculating until an appropriate value is found. Instead we simply return the closest stored hash. On the side of the recipient, we could pose multiple challenges and see what the average distance of the witness hashes is before we are satisfied with the proof." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": 9, | |
| "metadata": { | |
| "collapsed": true | |
| }, | |
| "outputs": [], | |
| "source": [ | |
| "import bisect\n", | |
| "\n", | |
| "def distance(hash_a, hash_b):\n", | |
| " int_a = int.from_bytes(hash_a, byteorder='big')\n", | |
| " int_b = int.from_bytes(hash_b, byteorder='big')\n", | |
| " return abs(int_a - int_b)\n", | |
| "\n", | |
| "def find_closest_hash(hashes, challenge_nonce):\n", | |
| " # NOTE: this only considers the first hash *above* the\n", | |
| " # challenge nonce; the entry just below it may be closer.\n", | |
| " idx = bisect.bisect(hashes, (challenge_nonce,))\n", | |
| " return hashes[min(idx, len(hashes) - 1)]\n", | |
| "\n", | |
| "def is_valid_witness(seed_nonce, witness_nonce, challenge_nonce, difficulty):\n", | |
| " max_distance = get_max_witness_hash(difficulty)\n", | |
| " witness_hash = hashfn(seed_nonce, witness_nonce)\n", | |
| " int_witness_distance = distance(witness_hash, challenge_nonce)\n", | |
| " witness_dist = int_witness_distance.to_bytes(32, byteorder='big')\n", | |
| " return witness_dist < max_distance" | |
| ] | |
| }, | |
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "Here we see how many valid witnesses we receive. Note that the fraction is not 0.5: with 2^18 hashes spread uniformly over the 2^256 hash space, the gap above a random challenge nonce is 2^238 on average, while the maximum accepted distance is only 2^237. Since the lookup only considers the first stored hash above the challenge, the chance of it being within range works out to roughly 1 - e^(-1/2), or about 0.39, which is consistent with the measured value below." | |
| ] | |
| }, | |
| { | |
| "cell_type": "code", | |
| "execution_count": 10, | |
| "metadata": {}, | |
| "outputs": [ | |
| { | |
| "name": "stdout", | |
| "output_type": "stream", | |
| "text": [ | |
| "0.35\n" | |
| ] | |
| } | |
| ], | |
| "source": [ | |
| "valid_witnesses = 0\n", | |
| "num_challenges = 100\n", | |
| "\n", | |
| "for i in range(num_challenges):\n", | |
| " challenge_nonce = random_nonce()\n", | |
| " witness_hash, witness_nonce = find_closest_hash(witness_hashes, challenge_nonce)\n", | |
| " if is_valid_witness(b\"[email protected]\", witness_nonce, challenge_nonce, difficulty=18):\n", | |
| " valid_witnesses += 1\n", | |
| " \n", | |
| "print(valid_witnesses / num_challenges)" | |
| ] | |
| }, | |
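The observed fraction can be estimated analytically. Modelling the 2^18 stored hashes as uniform in [0, 2^256), the gap between a random challenge nonce and the first stored hash above it is approximately exponentially distributed with mean 2^256 / 2^18, and the acceptance threshold at difficulty 18 is 2^(255 - 18):

```python
import math

# P(first stored hash above the challenge is within the accepted
# distance), with the gap modelled as exponential with mean
# hash_space / num_hashes.
num_hashes = 2 ** 18
threshold = 2 ** (255 - 18)   # max accepted distance at difficulty 18
hash_space = 2 ** 256
expected_fraction = 1 - math.exp(-num_hashes * threshold / hash_space)
print(round(expected_fraction, 3))  # -> 0.393
```

A measurement of 0.35 over only 100 challenges is well within sampling noise of this estimate.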
| { | |
| "cell_type": "markdown", | |
| "metadata": {}, | |
| "source": [ | |
| "Using this scheme, a recipient could now pose a list of challenges to a sender, examine the witnesses, and be satisfied (or not) that the sender has dedicated a certain amount of memory to storing hashes. A nice feature of this is that the precomputed hashes can be reused. All a legitimate sender needs to do is compute them once, for example when he adds the email address of the recipient to his contact list, and from then on he can reuse that list again and again." | |
| ] | |
| }, | |
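The multi-challenge idea can be sketched end to end. The following is a self-contained toy that redefines the helpers so it runs on its own; the challenge count and acceptance threshold (`min_valid`) are arbitrary illustrative choices, not part of any specified protocol:

```python
import bisect
import hashlib
import os

def hashfn(*inputs):
    h = hashlib.sha256()
    for data in inputs:
        h.update(data)
    return h.digest()

def gen_table(seed_nonce, difficulty):
    # sender: precompute and sort 2**difficulty (hash, nonce) pairs
    entries = [
        (hashfn(seed_nonce, nonce), nonce)
        for nonce in (os.urandom(32) for _ in range(2 ** difficulty))
    ]
    entries.sort()
    return entries

def respond(table, challenge_nonce):
    # sender: binary search for the first stored hash above the challenge
    idx = bisect.bisect(table, (challenge_nonce,))
    return table[min(idx, len(table) - 1)]

def accept(seed_nonce, difficulty, table, num_challenges, min_valid):
    # recipient: pose several challenges, recompute each witness hash
    # from its nonce, and count how many are within range
    max_distance = 2 ** (255 - difficulty)
    valid = 0
    for _ in range(num_challenges):
        challenge_nonce = os.urandom(32)
        _claimed_hash, witness_nonce = respond(table, challenge_nonce)
        witness_hash = hashfn(seed_nonce, witness_nonce)
        dist = abs(
            int.from_bytes(witness_hash, 'big')
            - int.from_bytes(challenge_nonce, 'big')
        )
        if dist < max_distance:
            valid += 1
    return valid >= min_valid

table = gen_table(b"[email protected]", difficulty=12)
print(accept(b"[email protected]", 12, table,
             num_challenges=20, min_valid=1))  # True, barring bad luck
```

With ~0.39 probability of success per challenge, demanding at least one valid witness out of twenty fails only with probability around 0.61^20, i.e. essentially never; a real scheme would pick `min_valid` to bound both false accepts and false rejects.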
| { | |
| "cell_type": "markdown", | |
| "metadata": { | |
| "collapsed": true | |
| }, | |
| "source": [ | |
| "## Use Case: Blockchain\n", | |
| "\n", | |
| "By itself a Proof of Work in Space system is not suitable for a blockchain, since there does not appear to be any way to have a steady rate for new blocks to be generated. However, it may be possible to combine a Proof of Work in Time together with a Proof of Work in Space, so that the security provided by mining is based on more diverse resources which are more widely distributed at a low marginal cost to each miner.\n", | |
| "\n", | |
| "To keep things simple, let's assume that two witness hashes are required in order to generate a new block: one from a Proof of Work in Time miner and one from a Proof of Work in Space miner, with a reward going to each miner.\n", | |
| "\n", | |
| "Before going through the steps of the protocol please indulge my use of these aliases.\n", | |
| "\n", | |
| " - TimePoW: Proof of Work in Time\n", | |
| " - SpacePoW: Proof of Work in Space\n", | |
| "\n", | |
| "### Block Generation\n", | |
| "\n", | |
| "The first step in generating a new block is similar to the generation of a new block in existing blockchains such as Bitcoin. A TimePoW miner uses inputs based on the highest block to generate a TimePoW witness hash which must satisfy a certain difficulty rating. Once the miner finds such a witness hash, all that is required to generate a block is a corresponding SpacePoW witness hash.\n", | |
| "\n", | |
| "The TimePoW miner generates a SpacePoW challenge, which might simply be the TimePoW witness hash itself, and broadcasts it together with all of its inputs. Any SpacePoW miners who receive this broadcast do a lookup in their database of hashes, and if they find a witness hash which satisfies the SpacePoW difficulty rating for the next block, they can proceed to generate the block. \n", | |
| "\n", | |
| "\n", | |
| "### Incentives and Witness Hash Inputs\n", | |
| "\n", | |
| "SpacePoW witness hashes must be precomputed, so that challenges can be satisfied as quickly as possible. A few criteria must be satisfied to incentivise participation in this mining scheme.\n", | |
| "\n", | |
| "Since collaboration between two parties is required in order to generate a block, it must be possible to make these hashes public without the risk of third parties obtaining the mining reward. One simple way to do this would be to include the receiving/reward address of each miner as an input to the witness hashes they generate. Another input miners might want to influence is the version bits for protocol upgrades they want to support. The blockchain protocol should define valid blocks to be only those where the rewards are paid out to the same addresses used as inputs to the witness hashes.\n", | |
| "\n", | |
| "In order to prevent unnecessary blockchain reorganization, the blockchain protocol should either ensure that only one of the miners can generate the final block, or ensure that both will generate the same block. Since generation of blocks is time sensitive, it may be preferable for SpacePoW miners to be the ones who generate the final block so that round trips are avoided.\n", | |
| "\n", | |
| "It may be that there are no SpacePoW witness hashes in the whole network which satisfy a given challenge and difficulty. In this case TimePoW miners simply continue until a new witness hash and challenge are produced which does satisfy the difficulty rating.\n", | |
| "\n", | |
| "\n", | |
| "### Conclusion\n", | |
| "\n", | |
| "There is obviously a great deal more work and thought to be done. For example, there might be concerns about daily fluctuations in the number of active SpacePoW nodes: since the marginal cost of participating in block generation as a SpacePoW miner is minimal, many consumer devices might contribute their otherwise unused storage space in order to gain a potential reward. An unfortunate side effect could be that not all of these devices would be online 24/7, as current TimePoW miners are incentivised to be; they must be constantly online in order to earn back their investments in mining hardware. In the worst case, the number of SpacePoW nodes online would fall considerably during the early morning hours, so that block generation would slow down. Alternatively, the difficulty adjustment might be biased fairly low, so that block generation would always be quick, at the price of more blockchain reorganizations when more nodes are online and generating competing blocks.\n", | |
| "\n", | |
| "This is presumably just one of many problems which need to be considered and solved before this kind of combined Proof of Work system might be used in a blockchain. I hope more developers become aware of this alternative possibility so that we can determine if it is viable." | |
| ] | |
| }, | |
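To make the data flow concrete, a combined block might carry fields along these lines. This is purely illustrative; the class and field names are invented here and do not come from any existing protocol:

```python
from dataclasses import dataclass

# Hypothetical header for a block that requires both proofs.
# Each reward address is also an input to the corresponding witness
# hash, so publishing a witness does not let a third party claim the
# reward.
@dataclass
class CombinedBlock:
    prev_block_hash: bytes        # ties the block to the current chain tip
    time_pow_nonce: bytes         # found by grinding (TimePoW)
    time_pow_reward_addr: bytes   # input to the TimePoW witness hash
    space_pow_nonce: bytes        # looked up in a precomputed table (SpacePoW)
    space_pow_reward_addr: bytes  # input to the SpacePoW seed
```

A validity check would then require that the TimePoW witness hash meets the TimePoW difficulty, that the SpacePoW witness hash is close enough to the challenge derived from it, and that the block's payouts go to the two embedded addresses.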
| { | |
| "cell_type": "code", | |
| "execution_count": null, | |
| "metadata": { | |
| "collapsed": true | |
| }, | |
| "outputs": [], | |
| "source": [] | |
| } | |
| ], | |
| "metadata": { | |
| "kernelspec": { | |
| "display_name": "Python 3", | |
| "language": "python", | |
| "name": "python3" | |
| }, | |
| "language_info": { | |
| "codemirror_mode": { | |
| "name": "ipython", | |
| "version": 3 | |
| }, | |
| "file_extension": ".py", | |
| "mimetype": "text/x-python", | |
| "name": "python", | |
| "nbconvert_exporter": "python", | |
| "pygments_lexer": "ipython3", | |
| "version": "3.6.0" | |
| } | |
| }, | |
| "nbformat": 4, | |
| "nbformat_minor": 2 | |
| } |