★ Insight ─────────────────────────────────────
- EigenDA was the trustless intermediary for upload broadcast — the gateway posts the blob, operators pull it and verify the KZG cert
against an L1 anchor before applying it (sketched after this block). Direct gateway broadcast trades that property for latency.
- The 120s commit cadence (Q3) and the daCertHash field on StateCommit (IStateRootCommittable.sol, locked in §6, error selectors
0x822ef683/0xdc4e1d57) were both calibrated to EigenDA economics (~$/blob, 14-day retention cliff). Removing EigenDA from the hot path
unlocks both knobs.
- The compliance constraint in §S.2 ("gateway MUST NOT hold authoritative private data") was load-bearing for the original topology.
Dennis's meeting line — "if bootnode is also we are operating it, then it's not really a difference" — collapses gateway and bootnode
into one Newton-trust footprint, which means the §S.2 invariant needs to be re-derived from a new principle, not just relaxed.
─────────────────────────────────────────────────
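A minimal sketch of that operator-side gate, assuming hypothetical type and function names (StateCommit, DaCert, apply_blob) and the sha3 crate; the KZG verification itself is elided, only the binding of the pulled cert to the L1 anchor is shown:

```rust
// Illustrative sketch only: the operator-side apply gate in the EigenDA topology.
// StateCommit / DaCert / apply_blob are hypothetical names, not the real API.
use sha3::{Digest, Keccak256};

/// The L1-anchored commitment the operator reads before applying a blob.
struct StateCommit {
    da_cert_hash: [u8; 32],
}

/// The DA certificate pulled alongside the blob, in serialized form.
struct DaCert {
    bytes: Vec<u8>,
}

fn keccak(data: &[u8]) -> [u8; 32] {
    let mut hasher = Keccak256::new();
    hasher.update(data);
    hasher.finalize().into()
}

/// Apply the blob only if the pulled cert matches the daCertHash anchored on L1.
/// The gateway cannot forge acceptance without also forging the L1 anchor.
/// In a direct-gateway-broadcast topology this gate disappears and the operator
/// trusts the gateway's transport instead.
fn apply_if_anchored(commit: &StateCommit, cert: &DaCert, blob: &[u8]) -> Result<(), &'static str> {
    if keccak(&cert.bytes) != commit.da_cert_hash {
        return Err("DA cert does not match the L1 anchor; reject the blob");
    }
    apply_blob(blob);
    Ok(())
}

fn apply_blob(_blob: &[u8]) {
    // state-tree application elided
}
```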
I've grounded in docs/BOOTNODE.md, the canonical PDS spec §S.18/§S.19, and the design-decisions Q6/Q9/Q18/Q21. Now I'll walk down
the design tree branch-by-branch. Starting at the root.
---
★ Insight ─────────────────────────────────────
Two contradictions surface immediately on a careful read of the bootnode docs:
1. BOOTNODE.md:3 claims "no new trust root" — but PRIVATE_DATA_STORAGE.md:418-471 introduces opsAddr as a privileged ECDSA signer
for injectSealedSnapshot. That IS a trust root, by definition (see the sketch after this list).
2. Q18 picks "bootnode-first, EigenDA fallback" — but the Sub-A design rationale ("bootnode is purely a latency optimization")
implies the opposite: EigenDA should be primary, bootnode purely accelerates the cold-start cliff.
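A minimal sketch of the point in item 1, assuming a hypothetical acceptance check on the consuming side; the config shape and function names are illustrative, and the real signing scheme and message framing are not specified here:

```rust
// Illustrative only: why a single privileged ECDSA signer is a trust root.
// `ops_key` and `accept_sealed_snapshot` are hypothetical names, not the real API.
use k256::ecdsa::{signature::Verifier, Signature, VerifyingKey};

struct BootnodeConfig {
    /// The one key allowed to drive injectSealedSnapshot-style injection.
    /// Whoever holds it can make every consumer accept a snapshot.
    ops_key: VerifyingKey,
}

fn accept_sealed_snapshot(cfg: &BootnodeConfig, snapshot_bytes: &[u8], sig: &Signature) -> bool {
    // Acceptance collapses to one signature check against one party's key.
    // That is a trust root by construction, whatever the docs call it.
    cfg.ops_key.verify(snapshot_bytes, sig).is_ok()
}
```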
@denniswon
denniswon / gist:c9e73e39e14f8de1230d4253f46f6bda
Created April 24, 2026 14:14
Private Data Storage - Subproject B
⏺ You're right — I was stuck in the wrong substrate model. Let me course-correct.
★ Insight ─────────────────────────────────────
- I was treating "state tree" as the private-data substrate and "EVM contract storage" as the public-data substrate, as if Sub-project A's
fabric didn't apply here. It does apply. The state tree is a content-addressed key-value store with on-chain root commitments; there is nothing
inherently private about it. Encrypted leaves = private data (Sub-project A's SecretsRegistry); plaintext leaves = public data (Sub-project B's
artifacts). The substrate is the same; only leaf encryption differs.
- On-chain, we only ever need the hash + root commitments. For a specific task, NewtonPolicy.policyCodeHash already commits to the Rego bytes —
any fetcher verifies keccak256(bytes) == policyCodeHash (sketched below) and that closes the integrity loop regardless of where the bytes live.
For registry-level questions ("is this plugin still approved?"), the 120s BLS-signed state root + inclusion proof
@denniswon
denniswon / gist:2d46eb407984d546b7a4cd8e230a1326
Created April 22, 2026 20:17
Private Data Storage - Round 2
@denniswon
denniswon / gist:44c9b2f407a0f8cd0790b464698324e0
Created April 17, 2026 20:10
Streaming consensus re-evaluation
❯ Refer to @docs/STREAMING_CONSENSUS.md, which contains our architectural improvement design for improving the performance, latency, and
efficiency of the protocol. Recall from previous memory our discussion, analysis, trade-offs, decision choices, and rationales. We
will go back to this architectural design and critically re-evaluate the architecture and design decisions from first principles,
evaluating against other approaches in terms of performance, latency, reliability, scalability, throughput, security, fault tolerance,
and (least prioritized) decentralization.
⎿  Read docs/STREAMING_CONSENSUS.md (1772 lines)
⎿  Loaded docs/CLAUDE.md
⏺ I'll invoke the brainstorming skill to properly evaluate this design against alternatives from first principles.
@denniswon
denniswon / gist:4ad3289b280121723668ea7f908eb855
Created April 17, 2026 05:17
Protocol Version Compatibility enforcement test
The goal is to prevent developers on the team from accidentally forgetting to keep the Rust code compatible with both old and new
contract versions. There are 3 scenarios: 1) contracts are upgraded first with a breaking change before the Rust update, 2) contracts
with a breaking change are not upgraded yet, but the Rust update deploys, 3) contracts are upgraded and the Rust update is also
deployed, but for Newton Policy / Newton Policy Data / and Policy Clients (which extend the NewtonPolicyClient mixin) there is a grace
period for developers to migrate/update their policy clients to the new newton policy or policy data. This means the Rust update that
pairs with the breaking-change contract upgrade should support both versions during this grace period (until version enforcement
happens). Scenarios 1 and 2 prevent the protocol from literally failing, while 3 prevents external developers' policy clients from
suddenly failing without time to become compatible. /clarify-plan THINK HARD
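A minimal sketch of what that dual-version window could look like on the Rust side; the version type, the concrete version numbers, and the enforcement flag are all assumptions for illustration:

```rust
// Illustrative only: a dual-version compatibility window on the Rust side.
// ContractVersion, the concrete numbers, and `enforcement_started` are assumptions.

#[derive(Clone, Copy, PartialEq)]
struct ContractVersion {
    major: u16,
    minor: u16,
}

const OLD_SUPPORTED: ContractVersion = ContractVersion { major: 1, minor: 4 };
const NEW_SUPPORTED: ContractVersion = ContractVersion { major: 2, minor: 0 };

/// Scenarios 1 and 2: refuse to run against a contract version this binary
/// was never built for, instead of failing mid-protocol.
fn check_core_compat(onchain: ContractVersion) -> Result<(), String> {
    if onchain == OLD_SUPPORTED || onchain == NEW_SUPPORTED {
        Ok(())
    } else {
        Err(format!(
            "unsupported contract version {}.{}",
            onchain.major, onchain.minor
        ))
    }
}

/// Scenario 3: policy clients on the old NewtonPolicyClient interface keep
/// working until enforcement flips, giving external developers time to migrate.
fn accept_policy_client(client_version: ContractVersion, enforcement_started: bool) -> bool {
    if enforcement_started {
        client_version == NEW_SUPPORTED
    } else {
        client_version == OLD_SUPPORTED || client_version == NEW_SUPPORTED
    }
}
```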
⏺ I'll work through this as a design problem. The three s
@denniswon
denniswon / gist:8ca54e7621adf642e5948676c490d9d1
Created April 15, 2026 21:23
Octane Critical Findings Resolution
⏺ Review posted: https://github.com/newt-foundation/newton-prover-avs/pull/541#pullrequestreview-4116329428
Three inline comments, state CHANGES_REQUESTED:
1. Gap 1 (blocker) on ChallengeVerifier.sol:156 — Critical 4 is only half-fixed. The hash-encoding mismatch is resolved, but the
proof-to-canonical-code binding (sub-issue 5b in Octane's report) remains open. raiseAndResolveChallenge and
slashForCrossChainChallenge bind to policy.getEntrypoint() but never to policyCid or equivalent code digest — a malicious challenger
can craft a proof evaluating different Rego code sharing the same entrypoint and slash honest operators.
2. Gap 6 (high) on ChallengeVerifier.sol:513 — after the try/catch in challengeDirectlyVerifiedMismatch, the slashing path decodes
NonSignerStakesAndSignature from untrusted signatureData. Unlike raiseAndResolveChallenge, there's no validateSignatoryRecord bind — a
how do I publish newton-cli?
Searched for 2 patterns, read 1 file, listed 1 directory
⏺ Here's the honest answer: there is currently no path to publish newton-cli — and it's blocked by three distinct issues, some of which
are hard blockers.
Blockers
1. License is not accepted by crates.io (hard blocker)
@denniswon
denniswon / gist:8a55e803f87d96eca546ac7aeea8297a
Created April 14, 2026 14:38
Gateway Scalability & Throughput Analysis
---
Gateway Scalability & Throughput Analysis
End-to-End Flow Timeline
Here's the create_task critical path with approximate timings:
Client Request
├── Phase 0: Request Setup ..................... ~1-5ms
@denniswon
denniswon / gist:06d1c34ba5ca47a4fcfefc4de1ef49bf
Created April 10, 2026 22:33
TEE Integration for Newton Protocol
---
Meeting: TEE Integration for Newton Protocol
---
Problem Statement
Newton Protocol accumulates sensitive data (identity PII, confidential data, policy client secrets) that operators must decrypt and
process during policy evaluation. Two core concerns:
1. Operator data leakage — operators currently decrypt private data locally, meaning a malicious or compromised operator could exfiltrate