@chapinb
Created April 9, 2026 18:28
Skill: Intentional Development
name: intentional-development
description: Use when starting a new feature, approaching an unfamiliar problem, or deciding how to structure code -- guides the full workflow from uncertain start to clean implementation

Intentional Development

Overview

Code with purpose. Know why every line is there. Understand the end-to-end path before filling in details. Prove behavior with tests. Design modules so their interface is simpler than their implementation.

This skill governs the workflow. Use superpowers:test-driven-development for Red-Green-Refactor mechanics.

If facing meaningful tradeoffs or an unclear approach, use intentional-design first.

The Workflow

digraph intentional_dev {
    rankdir=TB;

    start [label="New feature or problem", shape=doublecircle];
    known [label="Approach clear?", shape=diamond];
    spike [label="SPIKE\nThrowaway exploration", shape=box, style=filled, fillcolor="#fff0cc"];
    tracer [label="TRACER BULLET\nEnd-to-end slice (real code)", shape=box, style=filled, fillcolor="#cce5ff"];
    acceptance [label="Write acceptance test\n(failing)", shape=box, style=filled, fillcolor="#ffcccc"];
    tdd [label="TDD\nUnit tests + implementation", shape=box, style=filled, fillcolor="#ccffcc"];
    design [label="Design check:\nIs the interface simpler\nthan the implementation?", shape=diamond];
    refactor [label="Redesign module\nboundaries", shape=box];
    done [label="Done criteria met?", shape=diamond];
    complete [label="Complete", shape=doublecircle];

    start -> known;
    known -> spike [label="no"];
    known -> tracer [label="yes"];
    spike -> tracer [label="discard spike,\nstart real code"];
    tracer -> acceptance;
    acceptance -> tdd;
    tdd -> design;
    design -> refactor [label="no"];
    refactor -> tdd;
    design -> done [label="yes"];
    done -> tdd [label="no -- more\nbehavior needed"];
    done -> complete [label="yes"];
}

Phase 1: Spike or Tracer?

Spike -- use when the approach is genuinely unknown:

  • Unfamiliar library, API, or domain
  • Multiple competing approaches with unclear tradeoffs
  • Estimating feasibility

Spikes are throwaway. No tests. No cleanup. When done, delete the spike and start over with real code. Never promote a spike to production.

Tracer bullet -- use when the approach is clear but the end-to-end path is unproven:

  • You know the technology but haven't wired this specific flow
  • You need early feedback on whether the pieces fit together
  • You want a thin but fully working slice to iterate on

Tracers are real code -- tested, clean, production-quality. They just cover a thin slice. The goal is to hit the target (working end-to-end) quickly, then expand.

# Tracer bullet example: user auth flow
# Wire the full path -- login -> token -> protected route -- with the simplest possible data
# before implementing all the edge cases

def test_authenticated_user_can_access_profile():
    user = create_user(email="a@example.com", password="secret")
    token = login(email="a@example.com", password="secret")
    response = get_profile(token=token)
    assert response["email"] == "a@example.com"

This test fails. Now make it pass with the minimal real implementation across all layers.
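A minimal slice that makes the tracer test pass might look like the sketch below. Everything here is an illustrative assumption -- an in-memory user store and an unsigned random token -- not a real auth library; the point is wiring the full login -> token -> protected-route path with the simplest possible internals.

```python
import secrets

# Hypothetical in-memory backing stores -- just enough for the tracer.
_users: dict[str, str] = {}        # email -> password
_sessions: dict[str, str] = {}     # token -> email

def create_user(email: str, password: str) -> dict:
    _users[email] = password
    return {"email": email}

def login(email: str, password: str) -> str:
    if _users.get(email) != password:
        raise ValueError("invalid credentials")
    token = secrets.token_hex(16)  # opaque session token, not a real JWT
    _sessions[token] = email
    return token

def get_profile(token: str) -> dict:
    # KeyError on an unknown token -- acceptable for a tracer; edge cases come later
    return {"email": _sessions[token]}
```

Each of these functions gets replaced by a real implementation as the slice expands, but the end-to-end shape is proven from the first commit.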

Phase 2: Acceptance Test First

Before writing any implementation, write a test that describes what success looks like from the outside. This is your target.

# Acceptance test: describes behavior in terms of what the user/caller experiences
# Not testing internals -- testing outcomes

def test_order_confirmed_email_sent_on_purchase():
    customer = Customer(email="buyer@example.com")
    product = Product(name="Widget", price=10_00, stock=5)

    order = place_order(customer=customer, product=product, quantity=2)

    assert order.status == "confirmed"
    assert len(fake_email_sender.sent) == 1
    assert fake_email_sender.sent[0].recipient == "buyer@example.com"

The acceptance test is your tracer bullet test. It stays failing until the full slice works. Unit tests fill in the gaps.
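The `fake_email_sender` above implies a hand-rolled test double. A minimal sketch, assuming a `send` method is the only surface the production code touches (the names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class SentEmail:
    recipient: str
    subject: str

class FakeEmailSender:
    """Records emails instead of sending them, so tests assert on outcomes."""

    def __init__(self) -> None:
        self.sent: list[SentEmail] = []

    def send(self, recipient: str, subject: str) -> None:
        self.sent.append(SentEmail(recipient=recipient, subject=subject))

fake_email_sender = FakeEmailSender()
```

Because the fake records outcomes rather than verifying calls, the acceptance test stays focused on what the caller experiences.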

Phase 3: TDD for Units

With the acceptance test failing, drive out units using superpowers:test-driven-development.

Key rule: write tests that describe behavior, not implementation.

# Good: tests what it does
def test_order_reduces_product_stock():
    product = Product(name="Widget", price=10_00, stock=5)
    place_order(product=product, quantity=2)
    assert product.stock == 3

# Bad: tests how it does it
def test_order_calls_inventory_service():
    product = Product(name="Widget", price=10_00, stock=5)
    mock_inventory = Mock()
    place_order(product=product, quantity=2, inventory_service=mock_inventory)
    mock_inventory.decrement.assert_called_once_with(product_id=product.id, qty=2)

The bad test breaks on every refactor. The good test survives them.
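To see why, consider a hypothetical unit behind the good test (names mirror the example; this is a sketch, not the document's actual implementation):

```python
from dataclasses import dataclass

@dataclass
class Product:
    name: str
    price: int   # in cents
    stock: int

def place_order(product: Product, quantity: int) -> None:
    # The internals here can be restructured freely -- extracted into an
    # inventory service, inlined, renamed -- and the behavior test still passes.
    if quantity > product.stock:
        raise ValueError("insufficient stock")
    product.stock -= quantity
```

Swap the stock decrement into a collaborator and the behavior test is untouched; the mock-based test has to be rewritten.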

Phase 4: Design Check -- Deep Modules

After each unit is green, apply Ousterhout's test before moving on:

Is the interface simpler than the implementation?

A deep module hides complexity behind a simple surface:

# Deep: simple interface, hides I/O and parsing complexity
def load_config(path: str) -> Config:
    ...

# Shallow: interface as complex as the implementation -- leaks internals
def load_config(path: str, parser: ConfigParser, validator: SchemaValidator,
                env_overrides: dict[str, str]) -> tuple[Config, list[str]]:
    ...

If the interface is as complicated as the implementation, redesign. Move complexity inward. The caller should not know your implementation details.

Information hiding rule: hide the design decisions most likely to change. If your storage engine changes, callers should not need to.
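One way the deep version might be filled in, as a sketch: the JSON format, the validation check, and an `APP_*` environment-override convention are all illustrative assumptions, and all three decisions stay hidden behind the one-argument interface.

```python
import json
import os
from dataclasses import dataclass

@dataclass
class Config:
    values: dict

def load_config(path: str) -> Config:
    with open(path) as f:
        raw = json.load(f)                        # parsing: hidden inside
    if not isinstance(raw, dict):                 # validation: hidden inside
        raise ValueError("config root must be a JSON object")
    for key in list(raw):                         # env overrides: hidden inside
        override = os.environ.get(f"APP_{key.upper()}")
        if override is not None:
            raw[key] = override
    return Config(values=raw)
```

If the file format later changes from JSON to TOML, only this function changes; no caller does.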

Phase 5: Done Criteria

Code is done when it satisfies Kent Beck's four rules, in order:

  1. Passes all tests -- behavior is proven correct
  2. Expresses intent -- names reveal purpose; no reader should puzzle over what a function does
  3. No duplication -- every piece of knowledge exists once
  4. Fewest elements -- remove anything that isn't needed (YAGNI)

If the code passes rules 1 and 2 but has duplication, refactor. If it passes 1-3 but has extra abstractions, remove them.

YAGNI is strict: do not build for hypothetical future needs. Build for the tests you have.
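Rules 2 and 3 in miniature, with an illustrative (hypothetical) discount rule:

```python
# Before: the "10% member discount" is knowledge that exists twice --
# a change to the rule must be made in both places, and eventually won't be.
def invoice_total(prices: list[int]) -> float:
    return sum(prices) * 0.9

def receipt_total(prices: list[int]) -> float:
    return sum(prices) * 0.9

# After: the rule exists once (rule 3) and its name reveals intent (rule 2).
MEMBER_DISCOUNT = 0.10

def discounted_total(prices: list[int]) -> float:
    return sum(prices) * (1 - MEMBER_DISCOUNT)
```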

Programming by Intention

Always know why your code works. Before writing a line, ask:

  • What should this do?
  • What are the preconditions I'm relying on?
  • What will the output look like?

If you can't answer these, you're programming by coincidence. Code that works for unknown reasons is a liability.

# Coincidence: works, but why?
result = sorted(items, key=lambda x: x[1])[0]

# Intention: clear
cheapest = min(items, key=lambda item: item.price)

Never rely on behavior you haven't verified. If you're not sure a library does what you think, write a test to prove it before building on it.
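For example, suppose you are about to rely on Python's `sorted` being stable (it is documented to be, but the same move works for any assumption you are less sure of). Pin the assumption with a test before building on it:

```python
def test_sorted_is_stable():
    # Assumption under test: elements with equal keys keep their input order.
    pairs = [("b", 1), ("a", 1), ("c", 0)]
    result = sorted(pairs, key=lambda p: p[1])
    assert result == [("c", 0), ("b", 1), ("a", 1)]
```

If the assumption is ever wrong -- or changes in a library upgrade -- this test fails loudly instead of your feature failing quietly.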

Orthogonality

Changes to one module should not cascade to others. When a change forces you to edit 4 files, that is a coupling smell.

Design so that:

  • Modules have a single, well-defined responsibility
  • Interfaces are stable; implementations are replaceable
  • Tests can run a module in isolation without complex setup
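The third point often falls out of depending on a narrow interface instead of a concrete implementation. A sketch using `typing.Protocol` (all names hypothetical):

```python
from typing import Protocol

class OrderSource(Protocol):
    """The narrow interface the reporting module actually needs."""
    def orders_for(self, customer_id: int) -> list[int]: ...

def order_count(source: OrderSource, customer_id: int) -> int:
    return len(source.orders_for(customer_id))

class StubOrders:
    """In-test stand-in -- no database, no setup."""
    def orders_for(self, customer_id: int) -> list[int]:
        return [101, 102, 103]
```

`order_count(StubOrders(), customer_id=7)` runs the module in complete isolation; swapping in a real database-backed source changes no reporting code.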

Common Failure Modes

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| "Afraid to refactor" | Insufficient tests | Add tests before touching code |
| Changes cascade everywhere | Modules too coupled / shallow | Apply information hiding; deepen interfaces |
| Test setup requires building half the system | Poor module boundaries | Extract dependency; inject it |
| "We skipped the tracer, went straight to features" | No proven end-to-end path | Stop, wire the skeleton first |
| "This works but I'm not sure why" | Programming by coincidence | Write a test to prove the assumption |
| "We built X but nobody needed it" | YAGNI violation | Delete it; build to failing tests only |
| "The spike became the product" | Promoted throwaway code | Spikes are never production; start over |

Red Flags -- Stop and Reassess

  • Starting unit tests before the end-to-end path exists
  • Promoting spike code to production without rewriting
  • Interface has more parameters than the implementation has logic
  • Test requires mocking internals to pass
  • Adding a feature "in case we need it later"
  • Fixing a bug without first writing a failing test
  • Merging code that passes all tests but you can't explain

REQUIRED: Use superpowers:test-driven-development for Red-Green-Refactor mechanics within each TDD cycle.
