CRITICAL: This is a READ-ONLY analysis task. Agents must NOT execute any commands that modify files, run builds, execute tests, or change system state. Only use file reading, searching, and analysis tools.
I want you to dispatch a team of agents to investigate the entire codebase in parallel and produce a single number between 0 and 100 representing its "agent coverage": how well-equipped the codebase is for autonomous agent development.
Think of this as "code coverage" for testing, but measuring instead how much of the codebase is accessible and understandable to AI agents. A score of 100 means agents have everything they need to autonomously implement well-specified features without human intervention; a score of 0 means agents would be completely blocked despite knowing exactly what to build.
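As a rough illustration of how the final number could be derived, here is a minimal sketch of rolling per-dimension sub-scores into one 0-100 value. The dimension names, weights, and sample scores are hypothetical assumptions for illustration only, not part of the task specification; the agents are free to choose their own rubric.

```python
# Hypothetical sketch: aggregate per-dimension findings into one 0-100 score.
# The dimensions, weights, and sample sub-scores below are illustrative
# assumptions, not prescribed by the task.

DIMENSIONS = {
    # dimension name: (weight, sub-score 0-100 reported by an agent)
    "docs_and_readmes":     (0.25, 80),
    "build_and_test_setup": (0.25, 60),
    "code_navigability":    (0.20, 70),
    "conventions_and_lint": (0.15, 90),
    "env_reproducibility":  (0.15, 40),
}

def agent_coverage(dimensions: dict[str, tuple[float, int]]) -> float:
    """Weighted average of per-dimension sub-scores, clamped to [0, 100]."""
    total_weight = sum(w for w, _ in dimensions.values())
    weighted_sum = sum(w * s for w, s in dimensions.values())
    return max(0.0, min(100.0, weighted_sum / total_weight))

if __name__ == "__main__":
    print(f"agent coverage: {agent_coverage(DIMENSIONS):.0f}/100")
```

A weighted average keeps the result interpretable (each dimension contributes in proportion to its weight), but any aggregation the agents can justify would serve, so long as it lands on a single 0-100 number.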