📦 Also part of claude-code-skills — a small collection of opinionated Claude Code skills (project bootstrap, persistent session memory, and this multi-model review). Clone the repo for the full set.
A Claude Code skill that orchestrates iterative adversarial review of specs or code between Claude and OpenAI's Codex (GPT-5.5) until both models independently agree the work is production-ready.
Claude and Codex catch different classes of issues. Running them as adversarial reviewers — each given the same artifact plus the other's feedback — surfaces problems either model alone would miss: overlooked edge cases, security gaps, ambiguous requirements, and regressions introduced during remediation. The loop terminates when both models independently reach GO, or after 5 rounds, at which point Claude's position prevails so work is never blocked indefinitely.
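The loop above can be sketched roughly as follows. This is a hypothetical illustration, not the skill's actual implementation: `claude_review`, `codex_review`, and the `("GO"/"NO-GO", feedback)` verdict shape are all placeholder names invented here for clarity.

```python
from typing import Callable, Tuple

# Hypothetical verdict shape: (decision, feedback text).
Verdict = Tuple[str, str]

def run_review_loop(artifact: str,
                    claude_review: Callable[[str, str], Verdict],
                    codex_review: Callable[[str, str], Verdict],
                    max_rounds: int = 5) -> str:
    """Alternate adversarial reviews until both models say GO,
    or fall back to Claude's verdict after max_rounds."""
    claude_feedback = codex_feedback = ""
    for _ in range(max_rounds):
        # Each reviewer sees the artifact plus the other's latest feedback.
        claude_verdict, claude_feedback = claude_review(artifact, codex_feedback)
        codex_verdict, codex_feedback = codex_review(artifact, claude_feedback)
        if claude_verdict == "GO" and codex_verdict == "GO":
            return "GO"  # both models independently agree
    # Round cap reached: Claude's position prevails so work isn't blocked.
    return claude_verdict
```

With stub reviewers, mutual agreement ends the loop early, while a reviewer that never concedes exhausts the 5-round cap and yields Claude's verdict.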