@denniswon
denniswon / gist:f1fd0c07b48ad4c1908ae1e7cb0c5195
Created March 26, 2026 22:34
Plan: High-Throughput Batch Transaction Submission
❯ Load Test Report:
Configuration:
- Users: 10
- Spawn rate: 10 users/sec (all spawn immediately)
- Pacing: 15 seconds between requests per user
- Duration: 5 minutes
- Effective throughput: ~10 requests every 15 seconds (~40 req/min)
HTTP Layer — all 200 requests succeeded, median 2.8s, p95 3.5s
On-chain Submit (RPC → NewTaskCreated)
- 141/200 seen (70.5%)
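The report's throughput numbers can be cross-checked with a quick worked example (values taken from the configuration above; this is arithmetic, not the load script itself):

```python
# Sanity-check the load-test arithmetic from the report above.
users = 10          # concurrent users, all spawned immediately
pacing_s = 15       # seconds between requests, per user
duration_s = 5 * 60 # 5-minute run

requests_per_user = duration_s // pacing_s        # 20 requests per user
total_requests = users * requests_per_user        # 200, matching the HTTP stats
per_minute = total_requests / (duration_s / 60)   # 40.0 req/min effective
```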
@denniswon
denniswon / gist:d466df9c38324a9d4b3362bdd74604f7
Created March 25, 2026 17:08
transporter sync issue: initial startup sync failed, continuing with event loop
★ Insight ─────────────────────────────────────
0xc73a136a = InvalidGlobalTableRoot() — this is from the OperatorTableUpdater contract (EigenLayer's
IOperatorTableUpdater interface).
─────────────────────────────────────────────────
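The selector match can be sketched as a lookup. Only the `0xc73a136a` → `InvalidGlobalTableRoot()` entry comes from the analysis above; the decoder itself is an illustrative sketch, not the transporter's code (a real implementation would derive the selector as the first 4 bytes of `keccak256("InvalidGlobalTableRoot()")`, which needs a keccak library):

```python
# Map a revert payload's 4-byte selector to a known custom error.
# The table entry is from the analysis above; the rest is hypothetical.
KNOWN_ERRORS = {
    "0xc73a136a": "InvalidGlobalTableRoot()",  # IOperatorTableUpdater
}

def decode_revert(data_hex: str) -> str:
    selector = data_hex[:10]  # "0x" + first 4 bytes (8 hex chars)
    return KNOWN_ERRORS.get(selector, f"unknown selector {selector}")
```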
The transporter is calling confirmGlobalTableRoot on the destination chain's ECDSAOperatorTableUpdater (Base Sepolia
84532), and the contract is rejecting the Merkle root being submitted.
This typically means one of:
---
Newton Agent NeoBank — Project Status
Phase 0 — Completed
Phase 0 is fully built and deployed to Base Sepolia (stagef). Everything from the original scope shipped:
┌────────────────────────┬──────────────┬───────────────────────────────────────────────────────────────────────────┐
│ Component │ Status │ Key Evidence │
├────────────────────────┼──────────────┼───────────────────────────────────────────────────────────────────────────┤
Now I have the full picture. Here's the deep analysis:
The Architecture Problem
The commenter is pointing at a fundamental tension in the PRD's design. Let me map it out:
Current PRD Architecture (per-account policy storage)
Developer deploys PolicyClient contract (once)
└── Per user account:
@denniswon
denniswon / gist:80496d6d51674f21def9259d1b40c190
Created March 20, 2026 23:29
Private Data Storage Layer — Architecture Proposal
Looking at the Fireflies MCP setup, it's already configured in your system — I can see mcp__claude_ai_Fireflies__* tools are available in
the deferred tools list. That means Fireflies MCP is already connected via Claude's built-in MCP integration (managed by Anthropic), not a
self-hosted server.
Let me verify it's working:
⏺ Fireflies MCP is already set up and working. It's connected via Claude's built-in MCP integration (the mcp__claude_ai_Fireflies__ prefix
means it's a managed integration, not a self-hosted server). You have access to all 10 Fireflies tools:
@denniswon
denniswon / gist:3a69aa0a1860aba3060c7eb226e69e8a
Created March 20, 2026 11:53
Newton Protocol: Comprehensive Overview
---
Vision
Newton Protocol is the Authorization Layer for the Onchain Economy. Just as Visa authorizes payment transactions before settlement —
checking fraud rules, verifying identity, enforcing spend limits in real time — Newton authorizes onchain transactions against
programmable compliance and risk policies before they execute on any blockchain.
The key insight: Newton is not another regulator, not another Visa-like centralized entity. It is credibly neutral infrastructure that
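The Visa analogy's "authorize before execution" step can be illustrated with a toy policy check. Field names and the policy shape are entirely hypothetical; a real engine would also run fraud and identity checks:

```python
# Toy sketch of pre-execution authorization against a spend-limit policy.
# Hypothetical field names; not Newton's actual policy engine.
def authorize(tx: dict, policy: dict) -> bool:
    # Reject the transaction before it reaches the chain if it exceeds
    # the account's configured spend limit.
    return tx["amount_usd"] <= policy["spend_limit_usd"]
```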
@denniswon
denniswon / gist:8875c69ab656278974f6367243d3296d
Created March 19, 2026 19:36
Private Data Storage Layer — Architecture Proposal
Research agent completed with an excellent analysis. Let me synthesize the meeting discussion + research into a comprehensive architecture
proposal and execution roadmap.
★ Insight ─────────────────────────────────────
The research strongly favors redb over MDBX for Newton's use case. Key reasons: (1) pure Rust with zero C/C++ dependencies — critical for
decentralized operator nodes where you can't control build environments, (2) MDBX's Rust bindings are stale (Reth frozen at v0.12.10, closed
as "not planned"), (3) redb's performance gap (~1.7x slower reads than LMDB) is irrelevant for Newton's access pattern (one blob fetch per
task evaluation, not millions/sec). Jacob's intuition was right.
CRDTs are not a storage engine — they're a consistency model. The team's data is content-addressed (no conflicts to resolve), so a simple
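The "content-addressed, so no conflicts" point can be shown in a few lines: when the key is derived from the bytes, concurrent writers of the same blob agree by construction. This is an illustrative stdlib sketch, not Newton's storage code (which the research points at redb):

```python
import hashlib

# Minimal content-addressed store: key = sha256(blob), so writes of the
# same blob are idempotent and there is nothing to merge or resolve.
class ContentStore:
    def __init__(self) -> None:
        self._blobs: dict[str, bytes] = {}

    def put(self, blob: bytes) -> str:
        key = hashlib.sha256(blob).hexdigest()
        self._blobs[key] = blob  # same blob -> same key, always
        return key

    def get(self, key: str) -> bytes:
        return self._blobs[key]
```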
@denniswon
denniswon / gist:70c9b98044d1848a8603138189acd582
Created March 19, 2026 06:30
Dashboard API: Agent Compatibility Gap Analysis
---
Agent Compatibility Gap Analysis
★ Insight ─────────────────────────────────────
The core blocker is simple: every protected endpoint uses get_current_user_id, which only accepts JWTs. There's no path from a gw_
API key to a JWT, and no way to create the initial API key without a human in the loop (setup.py requires manual OTP entry). The
existing get_user_id_from_api_key dependency uses UserKey.secret_key (a different key type), not the gw_ API keys agents would
use.
─────────────────────────────────────────────────
@denniswon
denniswon / gist:01c83e329ef2f42d844683a69dc8c22a
Created March 19, 2026 02:05
newton verifiable credential - identity data ref fetch from DB
So the answer is: yes, operators DO have KMS access. The DataProviderConfig (loaded by the operator's DataProvider) reads
DATA_PROVIDER_KMS_RSA_KEY_ID from the environment, creates an AWS KMS client, and uses it for decrypt_data_kms. This is the pre-HPKE
path where operators decrypt identity data locally.
But the critical problem remains: operators don't have API keys, so the gateway RPC approach I implemented won't work. The gateway
authenticates every RPC call via API key.
★ Insight ─────────────────────────────────────
This means the gateway-side resolution is the correct approach:
@denniswon
denniswon / gist:584283a34ef44f493d297c153bede47e
Created March 19, 2026 00:08
Migrating development environment
MacBook migration is a surprisingly deep problem — it's not just copying files. You have layered state: dotfiles, package managers,
language runtimes, auth tokens (which can't just be copied), MCP server configs, IDE state, and agent memories scattered across
~/.claude/, ~/.config/, keychains, and more. Getting this right upfront saves days of "why doesn't X work on my new machine."
★ Insight ─────────────────────────────────────
Your environment is substantial — 176 Homebrew formulae, 74 casks, 5 language version managers, blockchain toolchains (Foundry, SP1,
RISC0), Claude Code with 9 plugins, 6 MCP servers, and API keys hardcoded in .zshrc. The biggest risk areas are: (1) secrets
management (API keys in plaintext), (2) auth tokens that can't be copied (need re-auth), and (3) the order of installation (version
managers before languages before tools).
─────────────────────────────────────────────────
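Risk (1) above — plaintext API keys in .zshrc — can be surfaced before migration with a small scan so secrets are moved to a secret manager instead of being copied. The pattern is a heuristic sketch, not an exhaustive detector:

```python
import re

# Pre-migration hygiene: flag exported variables in a shell rc file whose
# names look like secrets (heuristic; names matching KEY/TOKEN/SECRET).
SECRET_RE = re.compile(r"export\s+(\w*(?:KEY|TOKEN|SECRET)\w*)=")

def find_plaintext_secrets(rc_text: str) -> list[str]:
    return SECRET_RE.findall(rc_text)
```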