Created March 25, 2026 17:08
transporter sync issue: initial startup sync failed, continuing with event loop
★ Insight ─────────────────────────────────────
0xc73a136a = InvalidGlobalTableRoot() — this is from the OperatorTableUpdater contract (EigenLayer's IOperatorTableUpdater interface).
─────────────────────────────────────────────────
The transporter is calling confirmGlobalTableRoot on the destination chain's ECDSAOperatorTableUpdater (Base Sepolia, chain ID 84532), and the contract is rejecting the Merkle root being submitted.
This typically means one of:
1. Stale or mismatched operator table state — the source chain's BN254TableCalculator has operator weights/roots that don't match what the transporter is computing. Per the lessons in lessons.md (NEWT-845), if weights changed on the source chain without a successful sync, the roots diverge.
2. The root being submitted is zero or invalid — e.g., no operators are configured in the table calculator on the source chain.
3. Reference block issue — the transporter uses latest_block - 1 for the reference timestamp. If the source chain state at that block doesn't have a valid operator table, the computed root is invalid.
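The divergence in cause 1 can be made concrete with a toy Merkle computation. This is a hedged illustration only: the real operator table uses keccak256 over ABI-encoded operator info, while this sketch uses sha256 over plain strings, and the operator names and weights are made up. The point it shows is that any weight change on the source chain flips the root.

```python
import hashlib

def leaf(operator: str, weight: int) -> bytes:
    # Hash one (operator, weight) entry. The real table uses keccak256 over
    # ABI-encoded data; sha256 over a string is purely illustrative.
    return hashlib.sha256(f"{operator}:{weight}".encode()).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    # Pairwise-hash up the tree, duplicating the last node on odd levels.
    nodes = leaves[:]
    while len(nodes) > 1:
        if len(nodes) % 2:
            nodes.append(nodes[-1])
        nodes = [hashlib.sha256(nodes[i] + nodes[i + 1]).digest()
                 for i in range(0, len(nodes), 2)]
    return nodes[0]

# Root as of the last successful sync to the destination chain:
synced = merkle_root([leaf("op1", 100), leaf("op2", 200)])

# Source-chain weights change afterwards (e.g. a re-registration):
current = merkle_root([leaf("op1", 100), leaf("op2", 250)])

# The transporter's freshly computed root no longer matches the stored one.
print(synced != current)  # True
```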
Let me check how the contract validates this.
Searched for 1 pattern
⏺ The key line is ECDSAOperatorTableUpdater.sol:156:
require(_globalTableRoots[referenceTimestamp] == globalTableRoot, InvalidGlobalTableRoot());
This reverts during updateOperatorTable — the contract checks that the globalTableRoot passed by the transporter matches the root previously stored via confirmGlobalTableRoot for that referenceTimestamp.
★ Insight ─────────────────────────────────────
The flow is two-step: (1) confirmGlobalTableRoot(referenceTimestamp, globalTableRoot) stores the root (owner-only), then (2) updateOperatorTable(...) verifies per-operator Merkle proofs against that stored root. The error means either the root was never stored for this timestamp, or the transporter is computing a different root than what's on-chain.
─────────────────────────────────────────────────
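The two-step flow can be sketched as control flow. This is not the Solidity contract, just a minimal Python model of the sequencing described above; the owner/signer addresses are hypothetical placeholders.

```python
class OperatorTableUpdater:
    """Toy model of the two-step confirm/update flow -- not the real contract."""

    class InvalidGlobalTableRoot(Exception):
        pass

    def __init__(self, owner: str):
        self.owner = owner
        self._global_table_roots: dict[int, bytes] = {}

    def confirm_global_table_root(self, caller: str, ts: int, root: bytes) -> None:
        # Step 1: owner-only. If the transporter signer is not the owner,
        # this step fails and the slot for `ts` stays empty (zero).
        if caller != self.owner:
            raise PermissionError("caller is not the owner")
        self._global_table_roots[ts] = root

    def update_operator_table(self, ts: int, root: bytes) -> None:
        # Step 2: mirrors the check at ECDSAOperatorTableUpdater.sol:156.
        stored = self._global_table_roots.get(ts, b"\x00" * 32)
        if stored != root:
            raise self.InvalidGlobalTableRoot()
        # ... per-operator Merkle proofs would be verified against `root` here

updater = OperatorTableUpdater(owner="0xTransporterSigner")
updater.confirm_global_table_root("0xTransporterSigner", 1700000000, b"\x01" * 32)
updater.update_operator_table(1700000000, b"\x01" * 32)   # passes
try:
    updater.update_operator_table(1700000001, b"\x01" * 32)  # never confirmed
except OperatorTableUpdater.InvalidGlobalTableRoot:
    print("reverted: InvalidGlobalTableRoot")
```

Both failure modes the error message can hide — root never stored, or a different root stored — surface at the same check in step 2.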
Most likely causes on stagef:
1. Operator table changed on the source chain without re-syncing the root — if someone ran SeedTableCalculator or registered/deregistered operators on Sepolia, the source chain's table root changed. The transporter computes the new root locally, but the destination chain still has the old root stored. Normally the confirmGlobalTableRoot call succeeds and stores the new root; if that step failed or was skipped, updateOperatorTable sees a mismatch.
2. confirmGlobalTableRoot itself is reverting — check whether the transporter signer is still the owner of ECDSAOperatorTableUpdater on Base Sepolia. Per lessons.md, if the owner doesn't match the transporter signer, root commits fail silently upstream, leaving _globalTableRoots[referenceTimestamp] as zero.
Quick diagnostics to run:

```shell
# Check who owns the ECDSAOperatorTableUpdater on Base Sepolia
cast call <ECDSAOperatorTableUpdater_address> "owner()" --rpc-url <BASE_SEPOLIA_RPC>

# Compare against the transporter's signer address
# (from TRANSPORTER__SIGNER__PRIVATE_KEY in stagef config)
```

If the owner doesn't match the transporter signer, you'll need to call transferOwnership() from the current owner. If they do match, the source chain table calculator state likely diverged — run make transport-sync network=testnet private_key=0x... to force a fresh full sync.
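The triage decision above reduces to one comparison. A small sketch, with hypothetical placeholder addresses; note that `cast` prints checksummed (mixed-case) addresses, so the comparison should be case-insensitive.

```python
def diagnose(owner: str, transporter_signer: str) -> str:
    """Toy triage of the two remediation paths described above.
    Addresses are hypothetical placeholders, compared case-insensitively
    because checksummed addresses differ only in letter case."""
    if owner.lower() != transporter_signer.lower():
        return "transfer ownership: call transferOwnership() from the current owner"
    return "state diverged: force a fresh full sync (make transport-sync ...)"

print(diagnose("0xAbCd", "0xabcd"))  # same address -> resync path
print(diagnose("0xAbCd", "0x1234"))  # mismatch -> ownership path
```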
Another question: do we already return the actual error for a failed policy evaluation task? Specifically, things like "Policy evaluation failed: Failed to validate params schema: Schema validation failed: Missing required property 'max_daily_spend' at ''" that pinpoint the exact issue. I thought we already have the gateway aggregate the operator error responses for their failures/errors and report them back to the client. Why does the client still only report a high-level generic error? Or is this a frontend-side bug where it isn't using the operator-aggregated error responses returned by the gateway?
⏺ Good question. Let me check what the gateway actually returns vs what the client/explorer shows.
Searched for 2 patterns
⏺ The gateway does return operator_errors in the JSON-RPC response. The data is there. Let me check how the explorer/frontend consumes
it.
Searched for 2 patterns
⏺ There it is. Neither the dashboard API nor the indexer reads operator_errors from the gateway response.
★ Insight ─────────────────────────────────────
The data flow for failed tasks is:
gateway JSON-RPC response (includes operator_errors, e.g. "Policy evaluation failed: Missing required property 'max_daily_spend'") → indexer (drops operator_errors) → explorer UI
The detailed per-operator errors are returned in the JSON-RPC response to the direct caller (SDK, CLI), but they're not persisted to the indexer database and therefore never make it to the explorer UI. The explorer only sees "Quorum not reached: Unified Quorum NOT reached for quorums [0] (threshold: 40%)".
─────────────────────────────────────────────────
So to answer your question: the gateway already does the right thing — it returns detailed operator_errors in the RPC response. The
bug is that the indexer/dashboard pipeline drops this information. A client calling the gateway RPC directly (e.g., via the Newton
SDK) would see the full error. The explorer doesn't because the indexer never stores operator_errors.
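The gap can be sketched as a before/after on the indexer write. The field names and response shape below are assumptions modeled on the behaviour described above, not the real gateway or indexer schema.

```python
# Hypothetical response shape -- field names are illustrative assumptions.
gateway_response = {
    "result": {
        "status": "failed",
        "error": "Quorum not reached: Unified Quorum NOT reached for quorums [0] (threshold: 40%)",
        "operator_errors": [
            {
                "operator": "0xOp1",
                "error": "Policy evaluation failed: Failed to validate params "
                         "schema: Missing required property 'max_daily_spend'",
            },
        ],
    }
}

def index_task_current(resp: dict) -> dict:
    # What the pipeline effectively does today: only the aggregate error
    # survives, so the explorer can never show per-operator detail.
    r = resp["result"]
    return {"status": r["status"], "error": r["error"]}

def index_task_fixed(resp: dict) -> dict:
    # The fix: persist operator_errors alongside the aggregate error.
    row = index_task_current(resp)
    row["operator_errors"] = resp["result"].get("operator_errors", [])
    return row

print("operator_errors" in index_task_current(gateway_response))  # False
print("operator_errors" in index_task_fixed(gateway_response))    # True
```

Under this reading, no gateway change is needed; the indexer write path just has to carry one more field through to the explorer.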
This is a separate issue from PR #479 — it's a dashboard/indexer gap. Want me to file it or look at what the indexer DB write
currently persists?