Each repository in a distributed version control system (DVCS) such as Git or Mercurial is a [directed acyclic graph] (DAG). Each DAG node is identified by a [cryptographic] hash that uniquely identifies (i.e. hashes or checksums) a comment (the commit message), the set of changes (a.k.a. ‘commit’ or ‘changeset’), and the cryptographic hash of each of the directly antecedent ‘parent’ changeset(s). A ‘merge’ changeset has multiple parents, and specifies the changes relative to each parent such that applying them to every parent results in the same content.
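The role of the hash can be sketched as follows. This is a deliberate simplification for illustration only (real Git hashes a canonical object format with headers, trees, and author metadata), but it shows how each node’s identifier transitively covers the comment, the changeset, and every parent hash, and how a merge node simply has multiple parents:

```python
import hashlib

def commit_id(message: str, changes: str, parent_ids: list[str]) -> str:
    """Hash the comment (commit message), the changeset, and every
    parent's hash, so the id transitively covers the whole history.
    (Simplified: real Git hashes a canonical object format, not this.)"""
    h = hashlib.sha1()
    h.update(message.encode())
    h.update(changes.encode())
    for p in sorted(parent_ids):
        h.update(p.encode())
    return h.hexdigest()

root = commit_id("initial", "+hello.txt", [])
a = commit_id("fix typo", "-helo/+hello", [root])
b = commit_id("add readme", "+README", [root])
merge = commit_id("merge a and b", "", [a, b])  # merge node: multiple parents
print(len(merge) == 40 and a != b)  # True
```

Because each id incorporates its parents’ ids, changing any ancestor changes every descendant’s id, which is what makes the DAG tamper-evident.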
The Union Type Quiz
===================
0. Please give your typing approach an intuitive name:
Consistent, Unified Subsumption
1. Please describe the guiding principle of your typing approach in one sentence:
A fast rejection signature is an essential ingredient for asymmetric leverage against distributed denial-of-service in many scenarios.
At a high level of abstraction, denial-of-service attacks taxonomize into either A) network bandwidth flooding; or B) saturated consumption of some resource other than bandwidth[1].
In both cases, the attacker gains leverage by exploiting some asymmetry in the consumption or (uncompensated) cost of the attacked resource.
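A minimal sketch of the fast-rejection asymmetry, assuming a pre-shared key purely for illustration: the defender performs one cheap MAC comparison before committing any expensive resource, so forged requests are rejected at near-zero cost while a legitimate sender pays only one extra hash per request:

```python
import hmac, hashlib

KEY = b"shared-secret"  # assumption: pre-shared key, for illustration only

def make_request(payload: bytes) -> bytes:
    # sender prepends a short authentication tag to the payload
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()[:8]
    return tag + payload

def handle(request: bytes) -> str:
    tag, payload = request[:8], request[8:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()[:8]
    # fast rejection: one cheap hash comparison before any expensive work
    if not hmac.compare_digest(tag, expected):
        return "rejected"
    # ... expensive processing (the resource under attack) would go here ...
    return "accepted"

print(handle(make_request(b"hello")))    # accepted
print(handle(b"\x00" * 8 + b"hello"))    # rejected
```

The leverage is that the attacker must either compute valid tags (paying at least as much as the defender’s check costs) or have every flood request dropped before it touches the scarce resource.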
jl777 wrote:
why not have the way you allocate the memory determined how it behaves?
In a garbage collected (GC) language such as JavaScript, you don't actually ever explicitly allocate and deallocate memory. Thus we avoid bugs such as crashing when a pointer points to memory that has already been freed, or memory leaks due to forgetting to deallocate memory. GC languages can still have semantic memory leaks though, such as forgetting to remove an event handler. Automatic reference counting pointers have the flaw that they leak memory when there is a circular reference (a pointer points to some structure which contains a pointer which points back to the structure that contains the first pointer).
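The circular-reference flaw can be demonstrated in Python, whose runtime combines reference counting with a cycle detector; disabling automatic collection simulates a pure reference-counting runtime:

```python
import gc

class Node:
    def __init__(self):
        self.other = None

gc.disable()            # simulate a pure reference-counting collector
a, b = Node(), Node()
a.other = b             # a references b
b.other = a             # b references a back: a circular reference
del a, b                # both names are gone, yet each refcount is still 1
# Under pure reference counting, the two Nodes above are now leaked forever.
found = gc.collect()    # Python's cycle detector finds and reclaims them
gc.enable()
print(found >= 2)       # True: at least the two cyclic Nodes were collected
```

This is exactly why tracing collectors (or weak references breaking the cycle) are needed on top of plain reference counting.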
So GC is much easier for the programmer than manually tracking memory allocation and deallocation. But it has one disadvantage, which is that it requires up to 5X the memory to achieve the same performance, and the mark-and-sweep pauses can lock up the machine for many seconds. There are ways to improve the GC. The m
So Rust has global reference ownership rules, which allow only one (borrowed) mutable reference to the same object to be active in the current block scope. The objective is to prevent race conditions on the object. But from the discussion we had on the forum, a DAG tree of disjointness is not a general model of imperative programming, and afaics the general case requires dependent typing (which is known to exclude Turing-completeness). Thus I abandoned Rust's global borrowing rules, because they seem like a PITA of false promises that is rarely (20% of the time?) applicable.
But I [proposed a more localized
PoW’s lack of concurrent (aka asynchronous) partial orders, as elaborated in the sub-section Aliasing, doesn’t directly negate resiliency and liveness, because every mining node is eligible to produce the next block. But the resiliency and liveness provided by the availability of a diverse (in geography, performance, policies, cost structure, etc.) plurality of miners competing to produce the next block is ever diminishing, as explained in the sub-section Censorship.
There is no asynchronous (i.e. concurrent) transaction ordering for scaling transaction volume throughput. Only the mining node (possibly a pool) that produces the next block may add transactions during the block period, and all transactions must be propagated to every node. The transaction throughput can theoretically be increased by increasing the block size, although this creates other scaling limitations such as O(n²) propagation to, and validation load on, all nodes― even those
There is no possible solution to the block size dilemma in Satoshi’s proof-of-work design, except for a power vacuum driven monopolistic outcome.
If the block size is constrained, then when transaction volume exceeds the block size a fee market is created in which lower valued transactions will be delayed (some indefinitely), i.e. “crowded out”, because higher valued transactions (with their presumably higher fees) take priority. Above some level of transaction volume for a given block size, some threshold of lower valued transactions will never be added to a block. And the constrained block size that causes this undesirable outcome may be necessary to prevent consensus incentive incompatibilities[45] (i.e. a disincentive to form consensus, thus forking and other chaos) when minted block rewards cease.[46]
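The crowding-out dynamic can be sketched with a toy mempool simulation. The block size and arrival rate below are hypothetical numbers chosen only so that demand exceeds capacity; the point is that the backlog of lowest-fee transactions grows without bound:

```python
import heapq, itertools, random

BLOCK_SIZE = 3           # hypothetical: transaction slots per block
ARRIVALS_PER_BLOCK = 4   # hypothetical: demand exceeds capacity by 1 tx/block

random.seed(1)
mempool = []             # max-heap by fee (heapq is a min-heap, so negate)
tiebreak = itertools.count()

for block in range(50):
    for _ in range(ARRIVALS_PER_BLOCK):
        fee = random.randint(1, 100)
        heapq.heappush(mempool, (-fee, next(tiebreak), fee))
    for _ in range(min(BLOCK_SIZE, len(mempool))):
        heapq.heappop(mempool)   # miner takes the highest-fee transactions

# Every block leaves the lowest-fee arrivals behind; the backlog only grows.
print(len(mempool))   # 50 blocks x (4 arrivals - 3 slots) = 50 crowded out
```

Since the miner always pops the highest fees first, the transactions that remain are precisely the lowest valued ones, and at steady excess demand they wait indefinitely.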
If a) the block size is unbounded, b) the minted block reward is significantly greater than the average transaction fees per block, and c) no mining ca
The majority hashrate (aka “51%”) attack which orphans some or all of the minority’s blocks is not objectively distinguishable from a random result. Some blocks are orphaned and others are not, yet there is no way to objectively prove from the information stored in the blockchain that a 51% attack was present. Clues such as a higher orphan rate might be indicative of a 51% attack, but this can also be caused by network delays, which is what a 51% attack masquerades as to other mining nodes. Tracking down nodes which are participating in the attack is not plausible because IP addresses and even pools can be a Sybil attack.[41] There can’t exist an objective perspective in the Byzantine Generals Problem (aka “BGP”) without a total perspective.[42] Yet a total perspective is not a BGP.
Unless a minority hashrate miner has, or is in a pool that has, at least ¹/₃ of the systemic hashrate, they won’t be able to triangulate to differentiate between random bad luck or
Dictatorship: Exponential or power-law distribution of the control of stake is a power-vacuum trending toward a winner-take-all disequilibrium.
Censorship: Not a permissionless free market to add transaction events and to stand up consensus ordering (i.e. delegate witness) nodes. Eventually the winner-take-all cartel of power-law distributed whales is in control.
Monopolistic: Funding for delegate witnesses is not set in a competitive free market of transaction fees. Instead, eventually the winner-take-all cartel of power-law distributed whales decides both the set of witnesses and the level of funding generated from minting supply, thus effectively deciding how much to pay themselves by debasing the money supply.
Synchronous: No asynchronous concurrency of transaction ordering for enhanced scaling, throughput, resiliency, and liveness. Only the witness that produces the next block may add transactions during the block period, and all transactions must be propagated
The subsequent sections will explain in detail why PoW is posited to be a winner-take-all power vacuum, and the harmful effects anticipated.
Unlike, for example, a farm or factory, which has a diminishing rate of marginal utility of economies-of-scale, i.e. economies-of-scale above which no further increases in pro rata profit are obtained, the marginal utility of additional economies-of-scale in PoW hashrate does not diminish until greater than 50% of the systemic hashrate is consolidated, which is the winner-take-all outcome.
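The claimed discontinuity at 50% can be stated as a toy model. The step function below encodes the assumption of this argument (a majority miner can orphan every minority block and capture all rewards, while any minority share earns only pro rata); it is an illustration of the claim, not an empirical result:

```python
def expected_reward_share(p: float) -> float:
    """Hypothetical model: a miner with hashrate share p earns pro rata
    rewards while p <= 0.5, but once p > 0.5 it can orphan every minority
    block and capture all rewards -- the winner-take-all discontinuity."""
    return 1.0 if p > 0.5 else p

for p in (0.10, 0.30, 0.50, 0.51):
    print(f"share {p:.2f} -> reward share {expected_reward_share(p):.2f}")
```

Below 50% there are no increasing returns to consolidation (reward is linear in hashrate), so the entire marginal incentive to consolidate is concentrated at the jump from 0.50 to just above it.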
Power vacuums are disequilibria that fail because they are incongruent with the fact that small things grow exponentially faster than large things.[34] Most small things don’t grow large enough to become stable large things, e.g. the competing saplings in the forest, because they have more competition and friction. Analogously most of the lower middle class and poor don’t grow wealthy because a higher portion of a lower income is budgeted for food instead of saving