OpenSpec is a lightweight spec-driven development (SDD) framework designed specifically to align humans and AI coding assistants before any code gets written. Think of it as "version control for intent" - you create explicit, reviewable specifications that guide AI assistants to build what you actually want, not what they infer from chat history.
OpenSpec distinguishes itself from alternatives (GitHub's Spec Kit, AWS Kiro) through two key principles:
1. Brownfield-First Design

Most SDD tools assume greenfield (0→1) development. OpenSpec excels at modifying existing codebases (1→n), especially when changes span multiple specifications. The folder structure reflects this:
```
openspec/
├── specs/              # Current truth - living documentation
│   └── auth-session/
│       └── spec.md
└── changes/            # Proposed modifications
    └── add-2fa/
        ├── proposal.md
        ├── tasks.md
        ├── design.md   (optional)
        └── specs/      # Delta showing changes
            └── auth-session/
                └── spec.md
```
2. Single Source of Truth Specification
Unlike fragmented approaches where specs scatter across multiple files, OpenSpec maintains a unified specification per capability. When you archive a change, the deltas merge back into `openspec/specs/`, keeping the truth consolidated. This solves a persistent problem: without a top-level spec, understanding overall system intent requires reading dozens of disconnected files.
Here's the complete cycle people follow:
1. Draft Proposal (`/openspec:proposal` or natural language)

```
You: "Add two-factor authentication to login"
AI:  Creates openspec/changes/add-2fa/ with:
     - proposal.md (why and what)
     - tasks.md (implementation checklist)
     - specs/auth-session/spec.md (delta showing additions)
```

2. Review & Iterate

```
$ openspec list              # Verify change exists
$ openspec validate add-2fa  # Check spec formatting
$ openspec show add-2fa      # Review full proposal

You: "Add acceptance criteria for TOTP vs SMS"
AI:  Updates the spec delta and tasks
```

3. Implement (`/openspec:apply`)

```
AI: Works through tasks.md, marking each complete
    Task 1.1: Add OTP column ✓
    Task 1.2: Create verification endpoint ✓
    Task 2.1: Update login flow ✓
```

4. Archive (`/openspec:archive`)

```
$ openspec archive add-2fa --yes
# Moves change to changes/archive/
# Merges deltas into specs/auth-session/spec.md
# Your source of truth is now updated
```

Individual Developers:
- Use it to avoid "vibe coding" drift where AI goes off-rails mid-implementation
- Generate better prompts than they could write manually
- Build documentation as a side-effect of development
- Switch between Claude Code, Cursor, Codex without losing context
Small Teams:
- Treat specs as PR artifacts - review intent before reviewing code
- New developers browse `openspec/specs/` to understand system capabilities
- Use `openspec/project.md` to encode team conventions once, apply everywhere
Brownfield Projects:
- Start with one new feature to get comfortable
- Gradually build specs as they modify existing code
- Don't try to generate specs for the entire legacy codebase upfront (it's a waste of time)
- Specs accumulate organically through real work
| Aspect | OpenSpec | Spec Kit |
|---|---|---|
| Philosophy | Lightweight, brownfield-first | Comprehensive, greenfield-optimized |
| Verbosity | ~250 lines per change | ~800+ lines per change |
| Structure | Two-folder model (specs + changes) | Specs aligned to git branches |
| Workflow | 3 commands (proposal, apply, archive) | 8+ commands (analyze, specify, plan, tasks, etc.) |
| Git Integration | Manual branching (you control strategy) | Auto-creates branches |
| Best For | Modifying existing systems, quick iteration | Greenfield, enterprise governance, audit trails |
Real developers report OpenSpec is "faster and less noisy" - easier to review, faster to execute, better flow state.
The Problem: Creating change after change without ever archiving. Your openspec/changes/ directory becomes a junk drawer of 47 feature folders, and nobody knows what's actually implemented.
Why It Happens:
- Forgetting to run `openspec archive` after completion
- Treating changes as "throwaway" rather than building blocks
- Fear that archiving will lose information
The Fix:
```bash
# Make archiving part of your definition of done
git commit -m "Implement add-2fa"
openspec archive add-2fa --yes
git add openspec/specs/
git commit -m "Archive add-2fa specs"
```

Best Practice: Archive immediately when the PR merges. Your CI pipeline could even enforce this.
The Problem: Specs that read like code:
```markdown
### Requirement: Login Validation
The system SHALL use bcrypt.compare() with salt rounds of 12
and store the hash in users.password_hash column using
VARCHAR(255) with UTF-8 encoding...
```

Why It's Wrong: Specs describe what the system must do (requirements/behavior), not how it does it (implementation). Over-specified specs constrain the AI unnecessarily and become maintenance burdens when the implementation changes.
The Fix:
```markdown
### Requirement: Secure Password Storage
The system SHALL store passwords using industry-standard
one-way hashing with appropriate salt.

#### Scenario: Password verification
- GIVEN a user provides credentials
- WHEN the password is validated
- THEN the system SHALL verify against the stored hash
- AND SHALL NOT expose timing information
```

Put implementation decisions in design.md if you need to capture them. Keep specs focused on observable behavior.
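If you do want the hashing choice on record, a design.md entry might look like this (an illustrative sketch, not a prescribed format):

```markdown
## Decision: Password Hashing
Use bcrypt with 12 salt rounds; verification relies on bcrypt's
built-in constant-time comparison, satisfying the spec's timing
requirement. Revisit if we migrate to Argon2id.
```

This keeps the spec stable even if the team later swaps hashing libraries.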
The Problem: Going straight from proposal to implementation without iteration:
```bash
/openspec:proposal Add payment processing
/openspec:apply add-payment   # WAIT, DID YOU READ IT?
```

Why It Happens:
- Impatience to see code
- Over-trusting AI's first interpretation
- Not understanding that specs are the value, not just scaffolding
The Reality: The AI's first proposal is rarely complete. It doesn't know:
- Your error handling conventions
- Which edge cases matter to your business
- How this interacts with existing features
- Your team's naming conventions
The Fix:
```bash
/openspec:proposal Add payment processing
openspec show add-payment      # Actually read it
# Iterate 2-5 times, refining the specs
openspec validate add-payment  # Ensure formatting is correct
# THEN implement
/openspec:apply add-payment
```

Best Practice: Treat the proposal phase as collaborative spec writing. The AI drafts, you refine. This 10-minute investment saves hours of rework.
The Problem: Tasks that could apply to any feature:
```markdown
## Tasks
- [ ] Update database schema
- [ ] Add API endpoints
- [ ] Update frontend
- [ ] Write tests
```

Why It's Weak: These don't guide implementation. The AI will make assumptions about what goes where.
The Fix: Be specific about the what and where:
```markdown
## 1. Database Schema Changes
- [ ] 1.1 Add `otp_secret` column to `users` table (VARCHAR(32), encrypted)
- [ ] 1.2 Create `otp_verification_logs` table with user_id, timestamp, success

## 2. Backend Implementation
- [ ] 2.1 Add POST /api/auth/otp/generate endpoint (returns QR code)
- [ ] 2.2 Modify POST /api/auth/login to require OTP when enabled
- [ ] 2.3 Add POST /api/auth/otp/verify endpoint

## 3. Frontend Updates
- [ ] 3.1 Create OTPEnrollment component in src/components/auth/
- [ ] 3.2 Modify LoginForm.tsx to show OTP field conditionally
```

Best Practice: Include file paths, specific function names, and clear acceptance criteria for each task.
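For example, a single task with an explicit acceptance criterion might read like this (the error code shown is hypothetical):

```markdown
- [ ] 2.2 Modify POST /api/auth/login to require OTP when enabled
      - Acceptance: login without OTP returns 401 with error code `otp_required`
```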
The Problem: Every change proposal reinvents conventions:
- One feature uses `camelCase`, the next uses `snake_case`
- Error handling inconsistencies
- Different logging approaches
- Reinventing authentication patterns
The Fix: Populate `openspec/project.md` after initialization:

```markdown
# Project Context

## Tech Stack
- Python 3.11 with FastAPI
- PostgreSQL 15 with SQLAlchemy
- React 18 with TypeScript
- Tailwind CSS for styling

## Conventions

### Error Handling
- Use custom exceptions from `src/exceptions.py`
- All API errors return RFC 7807 Problem Details
- Log errors at ERROR level with request_id

### Database Migrations
- Use Alembic, never raw SQL in code
- Migration files in `alembic/versions/`
- Always include both upgrade and downgrade

### Testing
- Pytest for backend (min 80% coverage)
- React Testing Library for frontend
- Integration tests in `tests/integration/`

## Architecture Constraints
- No direct database calls from React
- All state management through React Query
- Authentication via JWT in httpOnly cookies
```

Best Practice: Update project.md whenever you establish a new pattern. It's your team's AI-readable constitution.
The Problem: Creating specs, archiving them, then never updating them as the system evolves. Six months later, specs describe a system that no longer exists.
Why It Happens:
- "The code is the truth, specs are just nice-to-have"
- Time pressure to move on to next feature
- No CI enforcement that specs stay current
The Fix: When modifying existing functionality:
```bash
# WRONG: Create a new spec
/openspec:proposal Extend login with OAuth

# RIGHT: Modify the existing spec via a delta
/openspec:proposal Extend authentication to support OAuth
# AI should update specs/auth-session/spec.md with new scenarios
```

The delta system is designed for this - it shows how requirements change, not just what's being added.
Best Practice: Make spec accuracy part of your definition of done. If you change auth behavior, the auth spec must be updated before merge.
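One way to back this up mechanically is a pre-merge check that fails when auth code changes without a matching spec update. This is a rough sketch; the paths are illustrative and should be adjusted to your layout.

```bash
#!/bin/bash
# Hypothetical pre-merge guard: auth code changes require auth spec changes.
set -euo pipefail

# Compare the branch against its merge-base with main
code_changed=$(git diff --name-only origin/main... -- src/auth/)
spec_changed=$(git diff --name-only origin/main... -- openspec/specs/auth-session/)

if [ -n "$code_changed" ] && [ -z "$spec_changed" ]; then
  echo "Auth code changed but openspec/specs/auth-session/ was not updated."
  exit 1
fi
```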
The Problem: Creating full OpenSpec workflow for one-line config changes:
```bash
/openspec:proposal Change session timeout from 24h to 48h
# Creates proposal.md, tasks.md, design.md, spec delta...
# For changing a single constant
```

When It's Overkill:
- Simple config changes
- Typo fixes
- Dependency version bumps
- Trivial UI tweaks
The Fix: Use judgment. OpenSpec is for:
- New features with unclear requirements
- Modifications that touch multiple files
- Changes that affect system behavior
- Anything you'd want documented
Best Practice: If the change is simpler to explain than to spec, just make the change. Don't cargo-cult the process.
The Problem: Never running `openspec validate`, so specs have formatting errors, missing required sections, or malformed deltas.
Why It Matters: The AI reads these specs. Malformed specs = confused AI = bad implementations.
The Fix:
```bash
# After any spec modification
openspec validate add-2fa

# Common errors it catches:
# - Missing `### Requirement:` headers
# - Scenarios without GIVEN/WHEN/THEN
# - Invalid delta markers (ADDED/MODIFIED/REMOVED)
# - Malformed task checkboxes
```

Best Practice: Add validation to your pre-commit hooks:

```bash
#!/bin/bash
# .git/hooks/pre-commit
openspec validate --all || exit 1
```
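Hooks in .git/hooks/ aren't version-controlled, so a common workaround (assuming a scripts/ directory, which is not part of OpenSpec itself) is to commit the hook and symlink it into place:

```bash
# Track the hook in the repo, then link it into .git/hooks/
git add scripts/pre-commit
ln -sf ../../scripts/pre-commit .git/hooks/pre-commit
```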
The Problem: Writing deltas like this:

```markdown
# Delta for Auth
Add two-factor authentication support with TOTP.
```

Why It Fails: This isn't a structured delta. Neither the AI nor humans can diff it against the original spec.
The Fix: Use the proper delta markers:
```markdown
# Delta for Auth

## ADDED Requirements

### Requirement: Two-Factor Authentication
The system MUST require a second factor during login.

#### Scenario: OTP required
- WHEN a user submits valid credentials
- THEN an OTP challenge is required
- AND the OTP must be verified within 5 minutes

## MODIFIED Requirements

### Requirement: Session Creation
- The system SHALL issue a JWT on successful login
+ The system SHALL issue a JWT on successful login AND OTP verification
```

Best Practice: Always use `## ADDED`, `## MODIFIED`, and `## REMOVED` sections. This is what gets merged during archiving.
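To make the merge concrete: after archiving, the MODIFIED requirement above replaces the old one in openspec/specs/auth-session/spec.md, so the consolidated spec would read something like this (illustrative):

```markdown
### Requirement: Session Creation
The system SHALL issue a JWT on successful login AND OTP verification.
```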
The Problem: Using OpenSpec inconsistently:
- Some features use it, others don't
- Different team members have different interpretations
- Specs drift from reality because there's no rhythm
The Fix: Establish team norms:
```markdown
## Our OpenSpec Workflow
1. All new features start with `/openspec:proposal`
2. Proposals are reviewed in stand-up before implementation
3. Implementation happens in feature branches
4. Archive happens during merge to main
5. Specs are updated within the same PR as code changes

## When NOT to use OpenSpec
- Config changes
- Dependency updates
- Refactoring without behavior change
- Documentation-only changes
```

Best Practice: Make OpenSpec part of your team's social contract, not just a tool individuals use differently.
Despite its strengths, OpenSpec has limits:
Multi-Repo Systems: OpenSpec lives in one repo. If your change spans 5 microservices, you'll need to coordinate specs across repos manually.
Very Large Changes: If your proposal generates 2,000 lines of spec and 50 tasks, you've probably scoped too broadly. Break it down.
Real-Time Collaboration: OpenSpec is file-based. Multiple people editing the same change simultaneously will hit merge conflicts. Use PR-based workflows.
When AI Doesn't Follow Specs: OpenSpec can't force AI compliance. If your AI consistently ignores specs, the problem might be:
- Specs are too vague (add scenarios)
- Tasks are too large (break them down)
- AI context window is overwhelmed (reduce noise)
OpenSpec works when you internalize this mental model:
- Specs are the product. Code is just one way to verify specs are implemented.
- Iteration is the point. First proposals are drafts. Refine before implementing.
- Deltas tell stories. Good deltas explain what changed and why, not just what's added.
- Archive is not optional. Unarchived changes are technical debt.
- Lightweight beats comprehensive. 10 minutes of good spec > 2 hours of perfect spec.
The teams seeing the most value treat OpenSpec as a thinking tool, not just an AI control mechanism. The act of writing specs forces clarity; the AI just happens to execute better when you're clear.