The strategy should define specific metrics to detect whether the AAG itself becomes a bottleneck:
- Time from ADR submission to initial AAG feedback
- Time from ADR submission to final decision
- Queue depth of pending reviews
- Percentage of decisions requiring escalation beyond AAG
Without these measurements, you won't know if the AAG is helping or hindering velocity until it's too late.
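To make these measurable rather than aspirational, here is a minimal sketch of how the four metrics could be computed from ADR tracking data. The `AdrRecord` shape and its field names are assumptions for illustration, not part of any existing tooling:

```python
# Hypothetical sketch: compute the four bottleneck metrics, assuming each
# ADR is tracked with submission, first-feedback, and decision timestamps
# plus an escalation flag. Requires Python 3.10+ for the "| None" syntax.
from dataclasses import dataclass
from datetime import datetime
from statistics import median


@dataclass
class AdrRecord:
    submitted: datetime
    first_feedback: datetime | None  # None while awaiting initial AAG feedback
    decided: datetime | None         # None while the final decision is pending
    escalated: bool = False          # True if the decision went beyond the AAG


def bottleneck_metrics(adrs: list[AdrRecord]) -> dict[str, float]:
    feedback_days = [
        (a.first_feedback - a.submitted).total_seconds() / 86400
        for a in adrs if a.first_feedback
    ]
    decision_days = [
        (a.decided - a.submitted).total_seconds() / 86400
        for a in adrs if a.decided
    ]
    decided = [a for a in adrs if a.decided]
    return {
        # Median elapsed time from submission to initial AAG feedback
        "median_days_to_first_feedback": median(feedback_days) if feedback_days else 0.0,
        # Median elapsed time from submission to final decision
        "median_days_to_decision": median(decision_days) if decision_days else 0.0,
        # Queue depth: ADRs still awaiting a final decision
        "pending_review_queue_depth": sum(1 for a in adrs if a.decided is None),
        # Share of decided ADRs escalated beyond the AAG
        "escalation_rate": (
            sum(a.escalated for a in decided) / len(decided) if decided else 0.0
        ),
    }
```

Medians are used here rather than means so that one stalled ADR doesn't mask an otherwise healthy turnaround time; tracking the distribution (e.g., p90) as well would catch the long tail.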
Before rolling out to all of Engineering, run a 4-week pilot with just the SRE organization to:
- Test the ADR process and tooling at small scale
- Validate the "2-3 hours per week" time estimate for AAG members
- Identify process friction points and refine the approach
- Gather concrete data on review turnaround times
This pilot would ground any refinements in evidence before broader adoption; the sketch below shows one way to check the time-budget estimate.
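As one way to validate the "2-3 hours per week" estimate during the pilot, a hypothetical sketch assuming AAG members log hours per review. The log format and sample entries are invented for illustration:

```python
# Aggregate logged review time per (member, week) and flag member-weeks
# that exceed the assumed 3-hour budget. The tuples below are invented
# sample data, e.g. as exported from a lightweight time log.
from collections import defaultdict

review_log = [
    ("alice", "2024-W01", 1.5),
    ("alice", "2024-W01", 1.0),
    ("bob",   "2024-W01", 3.5),
    ("alice", "2024-W02", 2.0),
]

hours_per_member_week: dict[tuple[str, str], float] = defaultdict(float)
for member, week, hours in review_log:
    hours_per_member_week[(member, week)] += hours

over_budget = {k: v for k, v in hours_per_member_week.items() if v > 3.0}
print(f"member-weeks over the 3-hour budget: {over_budget}")
```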
Remove metrics like "90% ADR compliance" and "<5% decision escalation rate" unless you can explain why these specific numbers matter. Instead focus on:
- Elapsed-time metrics (as suggested above)
- Compliance metrics, if kept at all, used only to diagnose why non-compliance happens, never as targets
- Actual impact on engineering velocity, not proxy metrics
The current success criteria feel arbitrary, and numeric targets like these invite gaming: teams optimize for the number rather than the outcome it was meant to track.
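To make the diagnostic use of compliance data concrete, a short sketch that tallies non-compliance reasons instead of scoring against a target. It assumes each non-compliant change gets annotated with a reason when discovered; the reason strings are illustrative, not a proposed taxonomy:

```python
# Count non-compliance reasons so the output points at what to fix in the
# process itself, rather than a single pass/fail percentage. The reason
# annotations below are invented examples.
from collections import Counter

non_compliance_reasons = [
    "unaware an ADR was required",
    "deadline pressure",
    "unaware an ADR was required",
    "ADR template too heavy for a small change",
]

for reason, count in Counter(non_compliance_reasons).most_common():
    print(f"{count:>3}  {reason}")
```

A recurring "template too heavy" entry, for example, argues for a lighter-weight ADR format, which a "90% compliance" target would never surface.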