@davidlee
Last active January 27, 2026 23:14
envelope mother

Yeah — this is the real dragon: “I don’t want to write the rules twice,” but UI wants constraints/envelopes, while the domain wants imperative validation.

The way out is to stop thinking of it as “same logic expressed two ways” and instead make one canonical decision procedure and have both outputs fall out of it.

The principle

Don’t build a constraint solver for the UI and a separate validator for the domain. Build a single “evaluator” that emits structured obligations + diagnostics (sketched below).

Then:

  • UI consumes the obligations as envelopes / affordances
  • Domain consumes the same obligations as pass/fail checks
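
To make that concrete, here is a minimal Rust sketch (the discussion later targets Zig, but the shape is language-agnostic). Every name in it (Intent, Report, evaluate, preview, commit) is invented for illustration: one evaluator derives an envelope, and the UI and the domain both read that same report.

```rust
// Minimal sketch of "one evaluator, two consumers".
// All names (Intent, Report, evaluate, ...) are placeholders, not a real API.

#[derive(Debug)]
struct Intent {
    chosen_tick: u32,
}

#[derive(Debug)]
struct Report {
    // The only legality knowledge in the system: an envelope of allowed ticks.
    allowed_ticks: std::ops::RangeInclusive<u32>,
    // User-facing reason codes, usable by both UI hints and domain errors.
    reasons: Vec<&'static str>,
}

// The single evaluator: derives the envelope once.
fn evaluate(intent: &Intent) -> Report {
    let _ = intent; // a real evaluator would look at game state + intent
    Report {
        allowed_ticks: 3..=9,
        reasons: vec!["window limited by cooldown"],
    }
}

// UI consumer: renders the envelope as an affordance.
fn preview(report: &Report) -> String {
    format!("drag handle clamps to ticks {:?}", report.allowed_ticks)
}

// Domain consumer: the same report, read as a pass/fail check.
fn commit(report: &Report, intent: &Intent) -> Result<(), Vec<&'static str>> {
    if report.allowed_ticks.contains(&intent.chosen_tick) {
        Ok(())
    } else {
        Err(report.reasons.clone())
    }
}

fn main() {
    let intent = Intent { chosen_tick: 11 };
    let report = evaluate(&intent);
    println!("{}", preview(&report));
    println!("{:?}", commit(&report, &intent));
}
```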

One canonical shape that usually works

1) Normalize intent → canonical edit ops

The first stage is deterministic rewriting (your “normalized effective edit set” idea):

  • collapse chains, remove no-ops, compute net effect
  • resolve identity via stable tokens

This stage is shared.
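
A hypothetical sketch of this stage, assuming edits are simple tick deltas keyed by a stable target token (EditOp, NetEdit, and the delta representation are all placeholders for whatever the real edit ops are):

```rust
// Hypothetical sketch of the shared normalization stage: collapse a chain of
// raw edits into a net effect per stable target token, dropping no-ops.

use std::collections::BTreeMap;

type TargetToken = u64; // stable identity token, survives reordering/renaming

#[derive(Debug, Clone, Copy)]
struct EditOp {
    target: TargetToken,
    delta_ticks: i32, // "move this action by N ticks"
}

#[derive(Debug)]
struct NetEdit {
    target: TargetToken,
    delta_ticks: i32,
}

fn normalize(ops: &[EditOp]) -> Vec<NetEdit> {
    // Collapse chains: sum all deltas per target.
    let mut net: BTreeMap<TargetToken, i32> = BTreeMap::new();
    for op in ops {
        *net.entry(op.target).or_insert(0) += op.delta_ticks;
    }
    // Remove no-ops: a zero net delta is not an edit at all.
    net.into_iter()
        .filter(|(_, d)| *d != 0)
        .map(|(target, delta_ticks)| NetEdit { target, delta_ticks })
        .collect()
}

fn main() {
    // Drag right 3, left 1, then undo the remaining 2: net effect is nothing.
    let ops = [
        EditOp { target: 7, delta_ticks: 3 },
        EditOp { target: 7, delta_ticks: -1 },
        EditOp { target: 7, delta_ticks: -2 },
        EditOp { target: 9, delta_ticks: 4 },
    ];
    println!("{:?}", normalize(&ops)); // only target 9 survives, with delta 4
}
```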

2) Evaluate with a single pass that produces facts + obligations

Instead of returning only ok/error, return a ValidationReport that contains:

  • facts: derived info (effective schedule, resolved targets, computed costs/EQ impacts)

  • obligations: constraints expressed in one consistent vocabulary

    • intervals/ranges (min/max time)
    • bitmasks / sets (allowed slots/targets)
    • inequalities (cost ≤ budget)
    • incompatibilities (A conflicts with B)
  • diagnostics: user-facing reasons keyed to a stable code

Domain validation is then: every obligation satisfied (and no hard errors). UI affordances are: the same obligations rendered as envelopes.

So the “solver” is trivial: membership checks over envelopes, not a second reasoning system.

How to avoid “constraint language gets too fancy”

Keep the obligation vocabulary intentionally small and UI-shaped:

  • AllowedRange(Tick)
  • AllowedSet(TargetId) / AllowedMask(Slots)
  • AtMost(value) / WithinBudget(pool)
  • Requires(tag) / Forbids(tag)
  • ConflictsWith(token)

If you find yourself wanting SAT/SMT vibes, you’re encoding plans, not constraints. Back up and expose envelopes earlier.
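
For concreteness, here is roughly what that small vocabulary plus the “trivial solver” can look like, sketched in Rust with invented names (Obligation, Choice, ValidationReport); facts are omitted and diagnostics are pared down to a stable code plus a message:

```rust
// A sketch of the small, UI-shaped obligation vocabulary from the list above,
// plus the "trivial solver": plain membership checks. Names are illustrative.

use std::collections::HashSet;

type Tick = u32;
type TargetId = u32;
type Tag = &'static str;

#[derive(Debug)]
enum Obligation {
    AllowedRange { min: Tick, max: Tick },
    AllowedSet(HashSet<TargetId>),
    AtMost { value: u32, limit: u32 },
    Requires(Tag),
    ConflictsWith(Tag),
}

#[derive(Debug)]
struct Choice {
    tick: Tick,
    target: TargetId,
    tags: HashSet<Tag>,
}

// The whole "solver": is the attempted choice inside every envelope?
fn satisfied(ob: &Obligation, choice: &Choice) -> bool {
    match ob {
        Obligation::AllowedRange { min, max } => (*min..=*max).contains(&choice.tick),
        Obligation::AllowedSet(set) => set.contains(&choice.target),
        Obligation::AtMost { value, limit } => value <= limit,
        Obligation::Requires(tag) => choice.tags.contains(tag),
        Obligation::ConflictsWith(tag) => !choice.tags.contains(tag),
    }
}

#[derive(Debug)]
struct ValidationReport {
    obligations: Vec<Obligation>,
    diagnostics: Vec<(&'static str, String)>, // (stable code, user-facing reason)
}

impl ValidationReport {
    fn is_valid(&self, choice: &Choice) -> bool {
        self.obligations.iter().all(|ob| satisfied(ob, choice))
    }
}

fn main() {
    let report = ValidationReport {
        obligations: vec![
            Obligation::AllowedRange { min: 3, max: 9 },
            Obligation::AllowedSet(HashSet::from([1, 4])),
            Obligation::ConflictsWith("stunned"),
        ],
        diagnostics: vec![("E_WINDOW", "must land inside the action window".into())],
    };
    let choice = Choice { tick: 5, target: 4, tags: HashSet::new() };
    println!("valid = {}, report = {:?}", report.is_valid(&choice), report);
}
```

Note that is_valid never reasons about the rules themselves; all of the reasoning happened upstream, when the obligations were derived.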

The key move: make validation constructive

Imperative validators often go:

check X, else fail
check Y, else fail

Constructive evaluators go:

compute the set of legal moves
compute the cost function
compute conflicts
then classify each intent as in/out

That’s the same codepath producing both (see the sketch after this list):

  • UI: legal sets / ranges / reasons
  • domain: accept/reject
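
A hedged sketch of that two-phase shape, with made-up State, Envelopes, and Intent types: phase one derives the legal space with no intent in sight, phase two classifies an intent by membership and simple inequalities.

```rust
// Sketch of the constructive shape: derive the legal space first,
// then classify any intent by membership. State/Intent fields are made up.

#[derive(Debug)]
struct State {
    current_tick: u32,
    cooldown: u32,
    budget: u32,
}

#[derive(Debug)]
struct Envelopes {
    legal_ticks: Vec<u32>,        // the set of legal moves
    cost_per_tick: u32,           // the cost function (a constant, for brevity)
    conflicts: Vec<&'static str>, // conflicting tokens already on the timeline
}

#[derive(Debug)]
struct Intent {
    tick: u32,
    token: &'static str,
}

// Phase 1: constructive. No intent needed; this is what the UI renders.
fn derive_envelopes(state: &State) -> Envelopes {
    let first_legal = state.current_tick + state.cooldown;
    Envelopes {
        legal_ticks: (first_legal..first_legal + 5).collect(),
        cost_per_tick: 2,
        conflicts: vec!["channeling"],
    }
}

// Phase 2: classification. The domain check is membership plus inequalities.
fn classify(env: &Envelopes, state: &State, intent: &Intent) -> Result<(), &'static str> {
    if !env.legal_ticks.contains(&intent.tick) {
        return Err("tick outside legal window");
    }
    if env.cost_per_tick > state.budget {
        return Err("over budget");
    }
    if env.conflicts.contains(&intent.token) {
        return Err("conflicts with an existing action");
    }
    Ok(())
}

fn main() {
    let state = State { current_tick: 10, cooldown: 2, budget: 3 };
    let env = derive_envelopes(&state);
    // UI: render env.legal_ticks as the draggable range, env.conflicts as warnings.
    // Domain: accept/reject is the same data read as a predicate.
    println!("{:?}", classify(&env, &state, &Intent { tick: 13, token: "strike" }));
    println!("{:?}", classify(&env, &state, &Intent { tick: 99, token: "strike" }));
}
```

The UI never calls classify to decide what to render; it reads the envelopes directly, which is what keeps the two consumers from diverging.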

Practical pattern names (so you can “bless” one)

“Derive-then-check”

  1. derive envelopes (allowed ranges/sets) + costs + conflicts
  2. check the attempted choice is inside them

“Single source of legality”

UI never recomputes legality; it only queries envelopes.

“Report, don’t boolean”

Return ValidationReport, not bool.

Where the split is acceptable

Some rules are naturally domain-imperative and not worth surfacing as envelopes (rare, but real):

  • “this fails because the target died between preview and commit”
  • “RNG / hidden info invalidates the assumption”

Treat these as (sketch below):

  • uncertain in UI
  • authoritative at commit
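
One way to encode that split is a three-way answer at preview time and a hard re-check at commit time; the Legality enum below is an assumption of this sketch, not a prescribed API.

```rust
// A small sketch of "uncertain in UI, authoritative at commit".

#[derive(Debug)]
enum Legality {
    Allowed,                 // inside every envelope
    Rejected(&'static str),
    Uncertain(&'static str), // depends on hidden info or state that can change
}

#[derive(Debug)]
struct World {
    target_alive: bool,
}

fn preview(world_snapshot: &World, target_in_range: bool) -> Legality {
    if !target_in_range {
        return Legality::Rejected("target out of range");
    }
    // The target may die between preview and commit: surface as uncertainty,
    // not as a hard yes.
    let _ = world_snapshot;
    Legality::Uncertain("target may not survive until commit")
}

fn commit(world_now: &World, target_in_range: bool) -> Result<(), &'static str> {
    // Authoritative: re-check against the live world, no uncertainty allowed.
    if !target_in_range {
        return Err("target out of range");
    }
    if !world_now.target_alive {
        return Err("target died before commit");
    }
    Ok(())
}

fn main() {
    let snapshot = World { target_alive: true };
    println!("{:?}", preview(&snapshot, true));
    let live = World { target_alive: false };
    println!("{:?}", commit(&live, true));
}
```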

Quick smell tests

You’re duplicating logic if:

  • UI code has to “mirror” a domain rule to decide validity
  • domain validator doesn’t reuse the same derived envelopes the UI uses
  • you have two different “reasons” systems (UI reason vs domain error)

You’re on the right track if:

  • there is exactly one module that knows legality
  • both preview and commit call it
  • UI reads sets/ranges and does membership tests only

If you want a crisp slogan for the church door:

Legality is a derived surface. Compute it once; consume it twice.


Yeah — this pattern absolutely exists, it’s just scattered across three different tribes who all use different words and pretend the others don’t exist 😄 Here’s how to actually find the good stuff.


The actual pattern you’re describing (in adult words)

What you’re circling is usually called some mix of:

  • “Normalize → Evaluate → Report”
  • “Derive-then-check”
  • “Constructive validation”
  • “Single-source legality / rules engine”

…but nobody names it cleanly, so you have to search sideways.


High-signal Martin Fowler–adjacent material

1. Specification Pattern

This is the closest Fowler-endorsed thing.

Search terms

  • martin fowler specification pattern
  • enterprise application architecture specification validation

Why it’s relevant:

  • Specs can be evaluated and composed

  • Same spec can answer:

    • “is this valid?”
    • “why / under what conditions?”
  • Your “obligations” are effectively spec outcomes, not booleans

Key Fowler page:

  • “Specifications” by Eric Evans & Martin Fowler (the paper linked from martinfowler.com)

Caveat:

  • Most examples stop at bool
  • You want a specification that returns structure, not truth (a sketch of that variant follows)
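
A sketch of that variant, assuming a Spec trait whose evaluation returns a structured Outcome (code + detail) rather than a bool; this is an adaptation for illustration, not Fowler and Evans' original formulation.

```rust
// Specification-style checks that return structure instead of true/false.

struct Candidate {
    cost: u32,
    tick: u32,
}

// A structured outcome: satisfied or not, plus a stable code and a reason.
#[derive(Debug)]
struct Outcome {
    satisfied: bool,
    code: &'static str,
    detail: String,
}

trait Spec {
    fn evaluate(&self, c: &Candidate) -> Outcome;
}

struct WithinBudget { budget: u32 }

impl Spec for WithinBudget {
    fn evaluate(&self, c: &Candidate) -> Outcome {
        Outcome {
            satisfied: c.cost <= self.budget,
            code: "E_BUDGET",
            detail: format!("cost {} vs budget {}", c.cost, self.budget),
        }
    }
}

struct InsideWindow { min: u32, max: u32 }

impl Spec for InsideWindow {
    fn evaluate(&self, c: &Candidate) -> Outcome {
        Outcome {
            satisfied: (self.min..=self.max).contains(&c.tick),
            code: "E_WINDOW",
            detail: format!("tick {} vs window {}..={}", c.tick, self.min, self.max),
        }
    }
}

fn main() {
    let specs: Vec<Box<dyn Spec>> = vec![
        Box::new(WithinBudget { budget: 3 }),
        Box::new(InsideWindow { min: 5, max: 9 }),
    ];
    let candidate = Candidate { cost: 4, tick: 6 };
    // The same outcomes drive UI reasons and the domain accept/reject.
    for outcome in specs.iter().map(|s| s.evaluate(&candidate)) {
        println!("{:?}", outcome);
    }
}
```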

2. Command–Query Separation + Derived Read Models

This one is Fowler-adjacent, but it is usually cited by the CQRS folks.

Search terms

  • CQRS derived read model validation
  • cqrs preview validation
  • cqrs command intent validation

What to look for:

  • Read models that explain why a command would succeed/fail
  • “Dry-run” command evaluation
  • Preview / explain APIs

This maps directly to your “preview evaluator”.
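
A minimal dry-run sketch under invented names (Timeline, ScheduleCommand): preview and commit share one side-effect-free evaluation, and only commit applies the effect.

```rust
// Sketch of "dry-run command evaluation": preview and commit share one
// evaluation path; only commit mutates state.

#[derive(Debug)]
struct ScheduleCommand {
    tick: u32,
}

#[derive(Debug)]
struct Evaluation {
    would_succeed: bool,
    reasons: Vec<&'static str>, // the "explain why" part of the read model
}

struct Timeline {
    occupied: Vec<u32>,
}

impl Timeline {
    // Single evaluation path, side-effect free. This *is* the preview API.
    fn evaluate(&self, cmd: &ScheduleCommand) -> Evaluation {
        let mut reasons = Vec::new();
        if self.occupied.contains(&cmd.tick) {
            reasons.push("tick already occupied");
        }
        Evaluation { would_succeed: reasons.is_empty(), reasons }
    }

    // Commit reuses the same evaluation, then applies the effect.
    fn commit(&mut self, cmd: &ScheduleCommand) -> Result<(), Vec<&'static str>> {
        let eval = self.evaluate(cmd);
        if !eval.would_succeed {
            return Err(eval.reasons);
        }
        self.occupied.push(cmd.tick);
        Ok(())
    }
}

fn main() {
    let mut timeline = Timeline { occupied: vec![4] };
    let cmd = ScheduleCommand { tick: 4 };
    println!("preview: {:?}", timeline.evaluate(&cmd)); // dry run, explains why
    println!("commit:  {:?}", timeline.commit(&cmd));   // authoritative
}
```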


The real gold: compilers & language tooling

You’re basically building a tiny compiler pipeline.

3. Normalize → Elaborate → Check

This is straight out of compilers.

Search terms

  • compiler normalization pass
  • elaboration phase compiler
  • constraint generation type checking
  • type checker constraint generation

Why this is exactly your thing:

  • Frontend produces constraints

  • Solver/checker validates them

  • Same constraints power:

    • IDE hints (UI affordances)
    • compiler errors (domain validation)

If you’ve ever wondered why your system smells like a type checker… that’s why.


4. “Explainable validation” / “Why not valid?”

This is niche but very relevant.

Search terms

  • explainable validation rules
  • validation error reporting patterns
  • why is command invalid

This is where people discover:

returning bool was a mistake

…and invent structured diagnostics instead.


Game-dev & interactive systems (your closest cousins)

5. Dry-run / Ghost state / Preview simulation

Game engines quietly solved this ages ago.

Search terms

  • game engine dry run simulation
  • preview state validation game
  • command buffering preview

What to look for:

  • “ghost” worlds
  • scratch evaluators
  • systems that compute possible actions each frame

Your “affordance surface” is literally a move generator.
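
A toy move-generator sketch (Frame and Action are made up): the affordance surface is recomputed from state, the UI renders it directly, and committing an action is a membership test against the same list.

```rust
// A toy "move generator": compute the set of legal actions each frame,
// drive the UI from it, and validate commits by membership.

#[derive(Debug, Clone, Copy, PartialEq)]
enum Action {
    Move { tick: u32 },
    Cancel,
}

struct Frame {
    current_tick: u32,
    can_cancel: bool,
}

// Recomputed every frame (or on every state change): the affordance surface.
fn legal_actions(frame: &Frame) -> Vec<Action> {
    let mut out: Vec<Action> = (frame.current_tick + 1..frame.current_tick + 4)
        .map(|tick| Action::Move { tick })
        .collect();
    if frame.can_cancel {
        out.push(Action::Cancel);
    }
    out
}

fn main() {
    let frame = Frame { current_tick: 10, can_cancel: false };
    let legal = legal_actions(&frame);

    // UI: grey out everything not in `legal`; highlight what is.
    println!("affordances: {:?}", legal);

    // Domain: committing an action is a membership test against the same list.
    let attempted = Action::Move { tick: 20 };
    println!("accepted: {}", legal.contains(&attempted));
}
```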


If you want a single unifying mental model

Search this and read anything decent that comes up:

“constraint generation then checking”

That phrase connects:

  • compilers
  • solvers
  • UI previews
  • domain validation

Your system is not a solver in the SAT sense — it’s constraint generation + trivial membership checking.


The one sentence to keep in your head while reading

Don’t ask “is this allowed?” — derive the space of allowed things, then check membership.

Once you see that pattern, you’ll notice it everywhere — especially in places where people complain that “validation logic is duplicated”.

If you want, next step we can:

  • map this explicitly to a tiny compiler pipeline diagram for your system, or
  • sketch what a ValidationReport type actually looks like in Zig so it doesn’t get baroque.

Either way: you’re not inventing something weird. You’re just rediscovering a pattern from a different angle.
