@denniswon
Created March 25, 2026 17:08
transporter sync issue: initial startup sync failed, continuing with event loop
★ Insight ─────────────────────────────────────
0xc73a136a = InvalidGlobalTableRoot() — this is from the OperatorTableUpdater contract (EigenLayer's
IOperatorTableUpdater interface).
─────────────────────────────────────────────────
The transporter is calling confirmGlobalTableRoot on the destination chain's ECDSAOperatorTableUpdater (Base Sepolia, chain ID 84532), and the contract is rejecting the Merkle root being submitted.
This typically means one of:
1. Stale or mismatched operator table state — the source chain's BN254TableCalculator has operator weights/roots that
don't match what the transporter is computing. Per the lessons in lessons.md (NEWT-845), if weights changed on the
source chain without a successful sync, the roots diverge.
2. The root being submitted is zero or invalid — e.g., no operators are configured in the table calculator on the source
chain.
3. Reference block issue — the transporter uses latest_block - 1 for the reference timestamp. If the source chain state
at that block doesn't have a valid operator table, the computed root is invalid.
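The reference-block choice in (3) can be sketched as follows. This is an illustrative model, not the actual transporter code: the function name and the validity check are assumptions; the only behavior taken from the text above is that the transporter anchors on latest_block - 1.

```python
# Hypothetical sketch (names assumed, not the real transporter) of how a
# sync might pick its reference block: one behind the chain head, so the
# snapshot it reads is less likely to sit on a block that gets reorged.
def pick_reference_block(latest_block: int) -> int:
    """Return latest_block - 1 as the operator-table snapshot reference."""
    if latest_block < 1:
        raise ValueError("no prior block available to use as a reference")
    return latest_block - 1

print(pick_reference_block(100))  # 99
```

If the source chain has no valid operator table at that block (cause 3 above), the root computed from this snapshot is invalid regardless of how the block was chosen.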
Let me check how the contract validates this.
Searched for 1 pattern
⏺ The key line is ECDSAOperatorTableUpdater.sol:156:

```solidity
require(_globalTableRoots[referenceTimestamp] == globalTableRoot, InvalidGlobalTableRoot());
```
This reverts during updateOperatorTable — the contract checks that the globalTableRoot passed by the transporter matches
the root previously stored via confirmGlobalTableRoot for that referenceTimestamp.
★ Insight ─────────────────────────────────────
The flow is two-step: (1) confirmGlobalTableRoot(referenceTimestamp, globalTableRoot) stores the root (owner-only), then
(2) updateOperatorTable(...) verifies per-operator Merkle proofs against that stored root. The error means either the
root was never stored for this timestamp, or the transporter is computing a different root than what's on-chain.
─────────────────────────────────────────────────
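The two-step flow above can be modeled in a few lines of Python. This is a sketch of the mechanism described in the insight, not the Solidity implementation: the class and method names are illustrative, and the zero-default mirrors how a never-stored mapping slot reads as zero on-chain.

```python
# Minimal model of the confirm-then-verify flow. Illustrative only:
# the real contract is Solidity; this just shows why a skipped or failed
# confirmGlobalTableRoot makes the later update revert.
class InvalidGlobalTableRoot(Exception):
    """Mirrors the contract's InvalidGlobalTableRoot() custom error."""

ZERO_ROOT = b"\x00" * 32  # an unset mapping slot reads as zero on-chain

class OperatorTableUpdaterModel:
    def __init__(self) -> None:
        self._global_table_roots: dict[int, bytes] = {}

    def confirm_global_table_root(self, reference_timestamp: int, root: bytes) -> None:
        # Step 1 (owner-only on-chain): store the root for this timestamp.
        self._global_table_roots[reference_timestamp] = root

    def update_operator_table(self, reference_timestamp: int, root: bytes) -> None:
        # Step 2: revert unless the submitted root matches the stored one.
        # A timestamp that was never confirmed defaults to zero, so any
        # nonzero root submitted against it also mismatches.
        if self._global_table_roots.get(reference_timestamp, ZERO_ROOT) != root:
            raise InvalidGlobalTableRoot()

updater = OperatorTableUpdaterModel()
updater.confirm_global_table_root(1_700_000_000, b"\x11" * 32)
updater.update_operator_table(1_700_000_000, b"\x11" * 32)   # matches: passes
try:
    updater.update_operator_table(1_700_000_000, b"\x22" * 32)  # stale local root
except InvalidGlobalTableRoot:
    print("reverted: InvalidGlobalTableRoot")
```

Both failure modes named above reduce to the same mismatch: either step 1 never ran for this timestamp (stored root is zero), or it ran with a different root than the transporter now computes.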
Most likely causes on stagef:
1. Operator table changed on source chain without re-syncing the root — if someone ran SeedTableCalculator or
registered/deregistered operators on Sepolia, the source chain's table root changed. The transporter computes the new
root locally but the destination chain still has the old root stored. The confirmGlobalTableRoot call succeeds (stores
new root), but if that step failed or was skipped, updateOperatorTable sees a mismatch.
2. confirmGlobalTableRoot itself is reverting — check if the transporter signer is still the owner of
ECDSAOperatorTableUpdater on Base Sepolia. Per lessons.md, if the owner doesn't match the transporter signer, root
commits fail silently upstream, leaving _globalTableRoots[referenceTimestamp] as zero.
Quick diagnostics to run:
```shell
# Check who owns the ECDSAOperatorTableUpdater on Base Sepolia
cast call <ECDSAOperatorTableUpdater_address> "owner()" --rpc-url <BASE_SEPOLIA_RPC>

# Compare against the transporter's signer address
# (from TRANSPORTER__SIGNER__PRIVATE_KEY in stagef config)
```
If the owner doesn't match the transporter signer, you'll need to call transferOwnership() from the current owner. If
they do match, the source chain table calculator state likely diverged — run make transport-sync network=testnet
private_key=0x... to force a fresh full sync.
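The triage decision above can be written down as a small helper. This is a hypothetical sketch, not real transporter tooling: the function, its inputs, and the messages are all assumptions; it only encodes the branching described in the two paragraphs above.

```python
# Hypothetical triage helper (not real tooling): given the on-chain owner,
# the transporter's signer, and the stored root for the reference timestamp,
# pick the remediation described in the diagnostics above.
ZERO_ROOT = b"\x00" * 32

def diagnose(owner: str, transporter_signer: str, stored_root: bytes) -> str:
    if owner.lower() != transporter_signer.lower():
        # Root commits have been failing: the signer cannot confirm roots.
        return "transferOwnership needed: transporter signer is not the updater owner"
    if stored_root == ZERO_ROOT:
        # Ownership is fine but no root was ever stored for this timestamp.
        return "root never confirmed for this timestamp: re-run the confirm step"
    # Ownership and a stored root exist, so the roots themselves diverged.
    return "table state diverged: force a fresh full transport-sync"

print(diagnose("0xAbC", "0xdef", b"\x11" * 32))
```

Checking ownership first matters: a wrong owner explains a zero stored root on its own, so the ownership fix must land before a re-sync can help.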
@denniswon (Author)

Another question: do we already return the actual underlying error for a failed policy evaluation task? Specifically, something like "Policy evaluation failed: Failed to validate params schema: Schema validation failed: Missing required property 'max_daily_spend' at ''" that pinpoints the exact issue. I thought we already had the gateway aggregate the operators' error responses so their failures could be reported back to the client. Why does the client still only report a high-level generic error? Or is this a frontend bug, where the UI is not using the aggregated operator error responses returned by the gateway?

⏺ Good question. Let me check what the gateway actually returns vs what the client/explorer shows.

Searched for 2 patterns

⏺ The gateway does return operator_errors in the JSON-RPC response. The data is there. Let me check how the explorer/frontend consumes
it.

Searched for 2 patterns

⏺ There it is. Neither the dashboard API nor the indexer reads operator_errors from the gateway response.

★ Insight ─────────────────────────────────────

The data flow for failed tasks is:

  1. Gateway returns CreateTaskResponse with error: "Quorum not reached: ..." AND operator_errors: [{ operator_address, message: "Policy
    evaluation failed: Missing required property 'max_daily_spend'" }]
  2. Indexer writes the task to the database via the optimistic indexer DB write — but it only stores the top-level error field, not
    operator_errors
  3. Dashboard API reads from the database and serves it to the explorer
  4. Explorer frontend displays only the generic error string

The detailed per-operator errors are returned in the JSON-RPC response to the direct caller (SDK, CLI), but they're not persisted to
the indexer database and therefore never make it to the explorer UI. The explorer only sees "Quorum not reached: Unified Quorum NOT
reached for quorums [0] (threshold: 40%)".

─────────────────────────────────────────────────

So to answer your question: the gateway already does the right thing — it returns detailed operator_errors in the RPC response. The
bug is that the indexer/dashboard pipeline drops this information. A client calling the gateway RPC directly (e.g., via the Newton
SDK) would see the full error. The explorer doesn't because the indexer never stores operator_errors.
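The gap can be sketched concretely. This is an illustrative model, not the real indexer: the row shape and field names are assumptions; the only facts taken from above are that the gateway response carries both a top-level error and operator_errors, and that only the former is persisted today.

```python
# Illustrative sketch of the indexer gap (schema and names assumed).
# Today's write keeps only the top-level error; persisting operator_errors
# alongside it would let the explorer show per-operator detail.
def task_row_current(rpc_response: dict) -> dict:
    # What gets stored today: the generic top-level error only.
    return {"task_id": rpc_response["task_id"], "error": rpc_response.get("error")}

def task_row_with_operator_errors(rpc_response: dict) -> dict:
    # A possible fix: also persist the detail the gateway already returns.
    row = task_row_current(rpc_response)
    row["operator_errors"] = rpc_response.get("operator_errors", [])
    return row

resp = {
    "task_id": "0xabc",
    "error": "Quorum not reached: Unified Quorum NOT reached for quorums [0] (threshold: 40%)",
    "operator_errors": [
        {"operator_address": "0xdef",
         "message": "Policy evaluation failed: Missing required property 'max_daily_spend'"},
    ],
}
assert "operator_errors" not in task_row_current(resp)          # detail dropped today
assert task_row_with_operator_errors(resp)["operator_errors"]   # detail preserved
```

Whatever the real schema looks like, the fix is a pure persistence change: the gateway response already contains everything the explorer needs.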

This is a separate issue from PR #479 — it's a dashboard/indexer gap. Want me to file it or look at what the indexer DB write
currently persists?

