I (instagibbs) was asked to draft a statement; I feel this is a fair summation of the project's direction, though I might be wrong.
Bitcoin Core’s next release will, by default, relay and mine transactions whose `OP_RETURN` outputs exceed 80 bytes, and will allow any number of these outputs.
The long-standing cap, originally a gentle signal that block space should be used sparingly for non-payment proof-of-publication data, has outlived its utility. Readers who want the full policy history should consult Bitcoin Optech’s *Waiting for Confirmation* mempool series.
Consensus rules decide whether a transaction can ever be included in a block. Standardness rules (a.k.a. policy), implemented in Bitcoin Core’s relay code, decide whether it is forwarded across the peer-to-peer network before it reaches a miner. Three considerations motivate those extra checks (a simplified sketch of such a check follows the list).
- Denial-of-service defence. Nodes decline transactions that consume CPU, RAM, or bandwidth out of proportion to the fee they pay, for example the quadratic hashing cost of certain legacy scripts.
- Incentive alignment. Policy nudges wallet authors towards constructions that are both fee-efficient and UTXO-friendly.
- Upgrade safety. Unknown opcodes or version bits remain non-standard until activated by a soft fork, preventing premature use that could hamper future consensus changes.
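To make the datacarrier case concrete, here is a minimal Python sketch of such a standardness check, with hypothetical names; Bitcoin Core's actual implementation is C++ (the `IsStandard` logic in `src/policy/policy.cpp`), where the legacy ceiling appears as `nMaxDatacarrierBytes` (83 bytes, i.e. 80 bytes of payload plus script overhead):

```python
# Simplified sketch of a relay-policy (standardness) check on a
# datacarrier output. Hypothetical helper, not Bitcoin Core's real code.
OP_RETURN = 0x6a
MAX_DATACARRIER_BYTES = 83  # 80 payload bytes + OP_RETURN + push overhead

def is_standard_datacarrier(script: bytes) -> bool:
    """Return True if an OP_RETURN output obeys the legacy size ceiling."""
    if not script or script[0] != OP_RETURN:
        return True  # not a datacarrier output; other policy checks apply
    return len(script) <= MAX_DATACARRIER_BYTES
```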
Standardized `OP_RETURN` outputs embody that philosophy. Users were already embedding arbitrary data in spendable outputs, leaving toxic, unspendable entries in the UTXO set. `OP_RETURN` gave them a provably unspendable output that is not added to the UTXO set; the 80-byte ceiling that accompanied it was a soft deterrent: large enough for a hash or short commitment, too small for a photograph.
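For illustration, a hypothetical helper showing how such an output script is typically built, and why 80 bytes of payload corresponds to an 83-byte script:

```python
# Hypothetical sketch of building a datacarrier output script. A script
# that begins with OP_RETURN fails validation if anyone ever tries to
# spend it, so nodes can safely omit the output from the UTXO set.
OP_RETURN = 0x6a
OP_PUSHDATA1 = 0x4c

def op_return_script(data: bytes) -> bytes:
    """Build a provably unspendable script committing to `data`."""
    assert len(data) <= 255  # OP_PUSHDATA2/4 omitted for brevity
    if len(data) <= 75:                    # small pushes: one length byte
        push = bytes([len(data)]) + data
    else:                                  # larger pushes need OP_PUSHDATA1
        push = bytes([OP_PUSHDATA1, len(data)]) + data
    return bytes([OP_RETURN]) + push

# 80 bytes of payload -> 83 script bytes (OP_RETURN + OP_PUSHDATA1 +
# length byte + payload), which is where the legacy 83-byte relay
# ceiling comes from.
assert len(op_return_script(b"\x00" * 80)) == 83
```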
The modern transaction landscape has rendered the legacy cap ineffective and, in several ways, damaging.
A number of private mining accelerators simply do not enforce these limits, and other centralized services run alternative implementations in order to peer directly with those miners.
Large-data inscriptions are happening regardless, and they can be done in more or less abusive ways; the cap merely channels them into more opaque forms that damage the network.
When the polite avenue is blocked, determined users turn to impolite ones. Some use bare multisig or craft fake output public keys that do enter the UTXO set, exactly the outcome `OP_RETURN` was invented to avoid.
Some have proposed an aggressive blacklist against recognised data-embedding tricks. The project declined for both pragmatic and philosophical reasons. As noted above, a blacklist does not stop the most basic forms of data embedding, and there is no reliable pattern for detecting “bad data”; the result would be a complex game of cat-and-mouse that increases negative externalities and risks confiscating users’ funds.
Blocks remain limited to 4 million weight units; dust outputs are still rejected (the dust arithmetic is sketched after the list below); signature-operation and ancestor/descendant caps still guard mempool growth. The withdrawal of the 80-byte rule yields at least two tangible benefits:
- Cleaner UTXO set. Data now fits in a single, provably unspendable output rather than being disguised in spendable scripts or spread over multiple transactions.
- Consistent default behaviour. Nodes relay the same transactions miners want to see, making fee estimation and compact-block relay more reliable.
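As one example of a limit that remains, the dust rule's arithmetic looks roughly like this (a simplified sketch of the idea; the real computation is `GetDustThreshold` in `src/policy/policy.cpp`):

```python
# Back-of-the-envelope illustration of the dust rule. An output is "dust"
# if its value is below the estimated cost of creating and later spending
# it at the dust relay fee rate (default 3000 sat/kvB, i.e. 3 sat/vB).
DUST_RELAY_FEE = 3  # sat/vB

def dust_threshold(output_size: int, spend_input_size: int) -> int:
    """Smallest non-dust value: fee to create plus fee to later spend."""
    return (output_size + spend_input_size) * DUST_RELAY_FEE

# P2PKH: 34-byte output, ~148-byte input  -> 546 sats, the classic figure
assert dust_threshold(34, 148) == 546
# P2WPKH: 31-byte output, ~67-vbyte input -> 294 sats
assert dust_threshold(31, 67) == 294
```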
Three possible paths were considered:
- Keep the cap. Rejected as ineffective and arbitrary.
- Raise the cap. Still arbitrary; any figure likely to age poorly.
- Delete the cap. Aligns default policy with actual network practice, minimises incentives for harmful workarounds, and simplifies the relay path.
Option 3 earned broad, though perhaps not unanimous, support. Dissenting parties remain free to modify their software, run stricter policy, or propose new resource limits if empirical harm emerges.
The change reaffirms that Bitcoin is governed by transparent, minimal rules rather than editorial preference. By retiring a deterrent that no longer deters, Bitcoin Core keeps the policy surface lean and lets the fee market arbitrate competing demands.
Should a future pattern demonstrably exhaust node resources, targeted protections will be considered as they have been for signature-checking limits, ancestor limits, and dust rules.
@achow101 Thank you for the clarifications. I'd like to follow up on your comment. Maybe you can clarify this, too:
I don't see all outcomes as bad.
The small miners know the propagation filters in advance. If they still choose to produce a block knowing it will be slowed down, then it's their own fault for losing; nothing stopped them from following the propagation filters. They lost, so the filter worked as intended.
Any other small miners who respected the filter gained an edge over the ones who did not.
In such an event, the forfeited share of bitcoin rewards might move toward the large miners. I'm fine with this outcome. The large miners did nothing wrong, and the rewards were initially heading to a bad actor.
The outcome of large miners producing bad blocks is bad, but it will always be bad, with or without the propagation delay. If large miners see an edge in using slow blocks, they do not need the filter to slow their blocks; they can publish them as slowly as they want.
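To put rough numbers on the propagation-delay risk being discussed, here is a back-of-the-envelope sketch (my own simplification, not from the thread: it models block discovery as a Poisson process with a 600-second mean interval and ignores the delayed miner's own hashrate and network topology):

```python
import math

# Rough orphan risk from a propagation delay: with blocks arriving as a
# Poisson process with a 600-second mean interval, the chance that a
# competing block appears within `delay` seconds is 1 - exp(-delay / 600).
def orphan_risk(delay_seconds: float, mean_interval: float = 600.0) -> float:
    return 1.0 - math.exp(-delay_seconds / mean_interval)

for delay in (5, 30, 120):
    print(f"{delay:>4}s delay -> ~{orphan_risk(delay):.1%} chance of a race")
# ~0.8%, ~4.9%, ~18.1% respectively
```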