Yeah — this is the real dragon: “I don’t want to write the rules twice,” but UI wants constraints/envelopes, while the domain wants imperative validation.
The way out is to stop thinking of it as “same logic expressed two ways” and instead make one canonical decision procedure and have both outputs fall out of it.
Don’t build a constraint solver for UI and a validator for domain. Build a single “evaluator” that emits structured obligations + diagnostics.
Then:
- UI consumes the obligations as envelopes / affordances
- Domain consumes the same obligations as pass/fail checks
First stage is deterministic rewriting (your “normalized effective edit set” idea):
- collapse chains, remove no-ops, compute net effect
- resolve identity via stable tokens
This stage is shared.
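A minimal sketch of that shared stage in Zig, assuming a hypothetical `Edit` shape keyed by a stable target token and carrying a plain numeric delta (your real edits are obviously richer):

```zig
const std = @import("std");

// Hypothetical edit shape: a stable identity token plus a net numeric effect.
const Edit = struct {
    target: u32, // stable token resolving "which thing is this about"
    delta: i32, // net change this edit applies
};

/// Collapse chains into a normalized effective edit set, in place.
/// Returns how many edits survive at the front of `edits`.
fn normalize(edits: []Edit) usize {
    var count: usize = 0;
    outer: for (edits) |e| {
        // Fold into an existing entry for the same target (collapse chains).
        for (edits[0..count]) |*kept| {
            if (kept.target == e.target) {
                kept.delta += e.delta;
                continue :outer;
            }
        }
        edits[count] = e;
        count += 1;
    }
    // Remove no-ops: edits whose net effect cancelled out.
    var kept_count: usize = 0;
    for (edits[0..count]) |e| {
        if (e.delta != 0) {
            edits[kept_count] = e;
            kept_count += 1;
        }
    }
    return kept_count;
}

test "chains collapse and no-ops disappear" {
    var edits = [_]Edit{
        .{ .target = 1, .delta = 2 },
        .{ .target = 2, .delta = 3 },
        .{ .target = 1, .delta = -2 }, // cancels the first edit
    };
    const n = normalize(edits[0..]);
    try std.testing.expectEqual(@as(usize, 1), n);
    try std.testing.expectEqual(@as(u32, 2), edits[0].target);
}
```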
Instead of returning only ok/error, return a ValidationReport that contains:
- facts: derived info (effective schedule, resolved targets, computed costs/EQ impacts)
- obligations: constraints expressed in one consistent vocabulary
  - intervals/ranges (min/max time)
  - bitmasks / sets (allowed slots/targets)
  - inequalities (cost ≤ budget)
  - incompatibilities (A conflicts with B)
- diagnostics: user-facing reasons keyed to a stable code
Domain validation is then: all obligations satisfied (and no hard errors). UI affordances are: render obligations as envelopes.
So the “solver” is trivial: membership checks over envelopes, not a second reasoning system.
Keep the obligation vocabulary intentionally small and UI-shaped:
- AllowedRange(Tick)
- AllowedSet(TargetId) / AllowedMask(Slots)
- AtMost(value) / WithinBudget(pool)
- Requires(tag) / Forbids(tag)
- ConflictsWith(token)
If you find yourself wanting SAT/SMT vibes, you’re encoding plans, not constraints. Back up and expose envelopes earlier.
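To keep it concrete, here's roughly what that vocabulary could look like as a Zig tagged union, with a single membership function playing the part of the "trivial solver". The payload types and the `Choice` shape are stand-ins, not a real API:

```zig
const std = @import("std");

// Stand-in domain types; substitute whatever your system actually uses.
const Tick = u32;
const TargetId = u32;
const Tag = u32;
const Token = u32;

const Obligation = union(enum) {
    allowed_range: struct { min: Tick, max: Tick }, // AllowedRange(Tick)
    allowed_set: []const TargetId, // AllowedSet(TargetId)
    allowed_mask: u64, // AllowedMask(Slots)
    at_most: u32, // AtMost(value)
    within_budget: u32, // WithinBudget(pool)
    requires: Tag, // Requires(tag)
    forbids: Tag, // Forbids(tag)
    conflicts_with: Token, // ConflictsWith(token)
};

// The attempted choice, expressed in the same vocabulary. Illustrative shape only.
const Choice = struct {
    tick: Tick,
    target: TargetId,
    slot: u6,
    value: u32,
    cost: u32,
    tags: []const Tag,
    tokens: []const Token,
};

// The entire "solver": membership checks over envelopes, nothing more.
fn satisfies(ob: Obligation, c: Choice) bool {
    return switch (ob) {
        .allowed_range => |r| c.tick >= r.min and c.tick <= r.max,
        .allowed_set => |set| std.mem.indexOfScalar(TargetId, set, c.target) != null,
        .allowed_mask => |mask| (mask >> c.slot) & 1 == 1,
        .at_most => |limit| c.value <= limit,
        .within_budget => |budget| c.cost <= budget,
        .requires => |tag| std.mem.indexOfScalar(Tag, c.tags, tag) != null,
        .forbids => |tag| std.mem.indexOfScalar(Tag, c.tags, tag) == null,
        .conflicts_with => |tok| std.mem.indexOfScalar(Token, c.tokens, tok) == null,
    };
}

test "membership, not reasoning" {
    const ob = Obligation{ .allowed_range = .{ .min = 5, .max = 9 } };
    try std.testing.expect(satisfies(ob, .{
        .tick = 7, .target = 0, .slot = 0, .value = 0, .cost = 0,
        .tags = &.{}, .tokens = &.{},
    }));
}
```

Note there's no search or backtracking anywhere: each obligation is a closed-form test.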
Imperative validators often go:
- check X, else fail
- check Y, else fail
Constructive evaluators go:
- compute the set of legal moves
- compute the cost function
- compute conflicts
- then classify each intent as in/out
That’s the same codepath producing both:
- UI: legal sets / ranges / reasons
- domain: accept/reject
Concretely, validation is two steps:
- derive envelopes (allowed ranges/sets) + costs + conflicts
- check that the attempted choice is inside them
UI never recomputes legality; it only queries envelopes.
Return ValidationReport, not bool.
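Here's a sketch of what that report could look like in Zig, with the obligation vocabulary trimmed to two variants and purely illustrative facts/diagnostics; every name is a placeholder:

```zig
const std = @import("std");

// Illustrative intent: the choice the player/UI is trying to commit.
const Intent = struct {
    tick: u32,
    cost: u32,
};

// Trimmed obligation vocabulary (fuller version sketched earlier).
const Obligation = union(enum) {
    allowed_range: struct { min: u32, max: u32 }, // AllowedRange(Tick)
    within_budget: u32, // WithinBudget(pool): cost must stay <= this

    fn satisfiedBy(self: Obligation, intent: Intent) bool {
        return switch (self) {
            .allowed_range => |r| intent.tick >= r.min and intent.tick <= r.max,
            .within_budget => |budget| intent.cost <= budget,
        };
    }
};

// User-facing reason keyed to a stable code.
const Diagnostic = struct {
    code: []const u8,
    message: []const u8,
};

// Derived facts: computed once, displayed by the UI, trusted by the domain.
const Facts = struct {
    net_cost: u32,
};

const ValidationReport = struct {
    facts: Facts,
    obligations: []const Obligation,
    diagnostics: []const Diagnostic,

    // Domain validation: every obligation satisfied (hard errors would
    // additionally show up as diagnostics).
    fn accepts(self: ValidationReport, intent: Intent) bool {
        for (self.obligations) |ob| {
            if (!ob.satisfiedBy(intent)) return false;
        }
        return true;
    }
};

test "preview and commit consume the same report" {
    const report = ValidationReport{
        .facts = .{ .net_cost = 3 },
        .obligations = &.{
            .{ .allowed_range = .{ .min = 10, .max = 20 } },
            .{ .within_budget = 5 },
        },
        .diagnostics = &.{},
    };
    // UI: render report.obligations as envelopes (slider 10..20, budget bar 5).
    // Domain: membership test at commit.
    try std.testing.expect(report.accepts(.{ .tick = 12, .cost = 3 }));
    try std.testing.expect(!report.accepts(.{ .tick = 25, .cost = 3 }));
}
```

Preview renders `report.obligations` as envelopes; commit calls `accepts` on the same report. Neither path re-derives legality.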
Some rules are naturally domain-imperative and not worth surfacing as envelopes (rare, but real):
- “this fails because the target died between preview and commit”
- “RNG / hidden info invalidates the assumption”
Treat these as:
- uncertain in UI
- authoritative at commit
You’re duplicating logic if:
- UI code has to “mirror” a domain rule to decide validity
- domain validator doesn’t reuse the same derived envelopes the UI uses
- you have two different “reasons” systems (UI reason vs domain error)
You’re on the right track if:
- there is exactly one module that knows legality
- both preview and commit call it
- UI reads sets/ranges and does membership tests only
If you want a crisp slogan for the church door:
Legality is a derived surface. Compute it once; consume it twice.
Yeah — this pattern absolutely exists, it’s just scattered across three different tribes who all use different words and pretend the others don’t exist 😄 Here’s how to actually find the good stuff.
What you’re circling is usually called some mix of:
- “Normalize → Evaluate → Report”
- “Derive-then-check”
- “Constructive validation”
- “Single-source legality / rules engine”
…but nobody names it cleanly, so you have to search sideways.
The Specification pattern is the closest Fowler-endorsed thing.
Search terms
- martin fowler specification pattern
- enterprise application architecture specification validation
Why it’s relevant:
- Specs can be evaluated and composed
- Same spec can answer:
  - “is this valid?”
  - “why / under what conditions?”
- Your “obligations” are effectively spec outcomes, not booleans
Key Fowler page:
- “Specification” (Patterns of Enterprise Application Architecture)
Caveat:
- Most examples stop at bool
- You want a specification that returns structure, not truth
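For example, a spec that returns structure instead of truth might look like this (hypothetical budget rule, illustrative names):

```zig
const std = @import("std");

// Structured outcome instead of a bare bool. Field names are illustrative.
const SpecResult = struct {
    satisfied: bool, // the domain's pass/fail answer
    budget: u32, // the envelope the UI can render (e.g. a budget bar)
    code: []const u8, // stable reason code shared by UI reasons and domain errors
};

// Hypothetical budget specification: it evaluates AND explains.
fn withinBudget(cost: u32, budget: u32) SpecResult {
    const code: []const u8 = if (cost <= budget) "ok" else "cost.over_budget";
    return .{ .satisfied = cost <= budget, .budget = budget, .code = code };
}

test "one spec result drives both consumers" {
    const r = withinBudget(7, 5);
    try std.testing.expect(!r.satisfied); // domain: reject
    try std.testing.expectEqual(@as(u32, 5), r.budget); // UI: render the envelope
    try std.testing.expectEqualStrings("cost.over_budget", r.code); // shared reason
}
```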
CQRS-style preview/dry-run validation is Fowler-adjacent but usually cited via CQRS folks.
Search terms
- CQRS derived read model validation
- cqrs preview validation
- cqrs command intent validation
What to look for:
- Read models that explain why a command would succeed/fail
- “Dry-run” command evaluation
- Preview / explain APIs
This maps directly to your “preview evaluator”.
You’re basically building a tiny compiler pipeline.
This is straight out of compilers.
Search terms
- compiler normalization pass
- elaboration phase compiler
- constraint generation type checking
- type checker constraint generation
Why this is exactly your thing:
- Frontend produces constraints
- Solver/checker validates them
- Same constraints power:
  - IDE hints (UI affordances)
  - compiler errors (domain validation)
If you’ve ever wondered why your system smells like a type checker… that’s why.
Explainable validation (structured error reporting) is niche but very relevant.
Search terms
- explainable validation rules
- validation error reporting patterns
- why is command invalid
This is where people discover that returning bool was a mistake, and invent structured diagnostics instead.
Game engines quietly solved this ages ago.
Search terms
- game engine dry run simulation
- preview state validation game
- command buffering preview
What to look for:
- “ghost” worlds
- scratch evaluators
- systems that compute possible actions each frame
Your “affordance surface” is literally a move generator.
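As a tiny illustration: if AllowedMask(Slots) is just a bitmask, the same mask is both the UI's move list and the domain's membership test. The sketch below assumes a u64 mask and at most 64 slots:

```zig
const std = @import("std");

// UI side: enumerate the legal slots each frame (the "move generator").
fn legalSlots(mask: u64, out: *[64]u6) []u6 {
    var n: usize = 0;
    var slot: u6 = 0;
    while (true) : (slot += 1) {
        if ((mask >> slot) & 1 == 1) {
            out[n] = slot;
            n += 1;
        }
        if (slot == 63) break;
    }
    return out[0..n];
}

// Domain side: the same mask, as a membership test at commit.
fn slotAllowed(mask: u64, slot: u6) bool {
    return (mask >> slot) & 1 == 1;
}

test "UI enumeration and domain check agree" {
    var buf: [64]u6 = undefined;
    const mask: u64 = 0b1010;
    const slots = legalSlots(mask, &buf);
    try std.testing.expectEqual(@as(usize, 2), slots.len);
    try std.testing.expect(slotAllowed(mask, slots[0]));
}
```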
Search this and read anything decent that comes up:
“constraint generation then checking”
That phrase connects:
- compilers
- solvers
- UI previews
- domain validation
Your system is not a solver in the SAT sense — it’s constraint generation + trivial membership checking.
Don’t ask “is this allowed?” — derive the space of allowed things, then check membership.
Once you see that pattern, you’ll notice it everywhere — especially in places where people complain that “validation logic is duplicated”.
If you want, next step we can:
- map this explicitly to a tiny compiler pipeline diagram for your system, or
- sketch what a ValidationReport type actually looks like in Zig so it doesn't get baroque.
Either way: you’re not inventing something weird. You’re just rediscovering a pattern from a different angle.