by Anatoly Levenchuk and an assortment of LLMs. September 2025
Preface (non-normative)
| ID & Title | Status | Concise content reminder — “what belongs here” |
|---|---|---|
| FPF is a set of first-principles-based architecture decisions for transdisciplinary SoTA methods of evolving holons: systems, epistemes, communities. | full text | FPF serves the Engineer, Researcher, and Manager by providing a generative pattern language for constructing and evolving thought, designed as an "operating system for thought". |
| Creativity in Open-Ended Evolution and Assurance | full text | FPF integrates assurance (audits, evidence) and creativity (generating novel ideas) as complementary engines for responsible innovation, providing a structured choreography for creative work from abduction to operation. |
| Navigating Uncertainty: Building Closed Worlds within an Open World | full text | Explains how FPF reconciles Open-World and Closed-World assumptions, using Bounded Contexts to create reliable 'islands of closure' for engineering decisions within an inherently open world. |
| FPF as an Evolutionary Architecture for Thought | full text | Positions FPF as an architecture for the reasoning process itself, designed to sustain key characteristics like auditability, evolvability, and falsifiability by applying architectural thinking to the dynamics of reasoning. |
| Architectural Characteristic of Thought | full text | Details the key characteristics of rigorous thought (e.g., Auditability, Evolvability, Composability) and the specific FPF mechanisms designed to preserve them. |
| Beyond Cognitive Biases: FPF as a Generative Architecture for Thought | full text | Contrasts FPF's generative, structural approach to avoiding cognitive errors with the traditional corrective, diagnostic approach of hunting for biases, framing FPF as a scaffold that makes errors harder to commit. |
| Thinking Through Writing: The FPF Discipline of Conceptual Work | full text | Describes how FPF uses a discipline of "thinking through writing" with conceptual forms (Cards, Tables, Records) to make thought tangible, shareable, and auditable, while remaining tool-agnostic. |
| Descriptive Ontologies vs. A Thinking-Oriented Architecture | full text | Differentiates FPF's goal of orchestrating reasoning from classical ontologies' goal of cataloging existence, emphasizing FPF's focus on objectives, trust, and dynamics. |
| The "Bitter Lesson" trajectory — compute, data, and freedom over hand‑tuned rules (FPF stance) | full text | How FPF operationalizes the contemporary trend: prefer general models + data + compute + minimal constraints; autonomy budgets; rule‑of‑constraints vs instruction‑of‑procedure; continuous adaptation. |
| The “big storylines” unique to FPF (load‑bearing commitments) | full text | Lists the nine core, load-bearing commitments that define FPF's unique architectural and philosophical stance, from its holonic kernel to its explicit treatment of creativity and assurance. |
| Transdisciplinarity as a Meta‑Theory of Thinking | full text | Explains how FPF treats transdisciplinarity as a meta-theory for designing reasoning, using architheories as generative scaffolds grounded in physical reality to bridge disciplinary silos. |
| FPF as a Culinary Architecture for Collective Thought: Why We Formalize “Obvious” Ideas | full text | Uses the 'culinary architecture' analogy to explain FPF's role in synthesizing 'obvious' ideas into a robust framework for complex, generative problems. |
| Intellect Stack (Informative Overview) | full text | Presents a five-layer pedagogical map of cognitive skills (Structure → Knowledge → Action → Strategy → Governance) and links them to FPF architheories. |
| Purpose, Scope, and Explicit Non‑Goals | full text | Clarifies FPF's mission as a generative scaffold for thought, its scope as tool-agnostic normative patterns, and what it explicitly is not (e.g., a domain encyclopedia or a specific methodology). |
Part A · Kernel Architecture Cluster
| § | ID & Title | Status | Keywords & Search Queries | Dependencies |
|---|---|---|---|---|
| A.0 | Onboarding Glossary (NQD & E/E‑LOG) | Stable | Keywords: novelty, quality‑diversity (NQD), explore/exploit (E/E‑LOG), portfolio (set), illumination map (gauge), parity run, comparability, ReferencePlane, CL^plane, ParetoOnly default. Queries: "What is NQD in FPF?", "How does FPF handle creative generation?", "What is an explore-exploit policy in FPF?" | Builds on: E.2, A.5, C.17–C.19. Coordinates with: E.7, E.8, E.10; F.17; G.5, G.9–G.12. Constrains: Any pattern/UTS row that describes a generator, selector, or portfolio. |
| Cluster A.I · Foundational Ontology | ||||
| A.1 | Holonic Foundation: Entity → Holon | Stable | Keywords: part-whole composition, system boundary, entity, holon, U.System, U.Episteme. Queries: "How does FPF model a system and its parts?", "What is a holon?", "Difference between entity and system." | Builds on: P-8 Cross-Scale Consistency. Prerequisite for: A.1.1, A.2, A.14, B.1. |
| A.1.1 | U.BoundedContext: The Semantic Frame | Stable | Keywords: local meaning, context, semantic boundary, domain, invariants, glossary, DDD. Queries: "How does FPF handle ambiguity?", "What is a Bounded Context in FPF?", "How to define rules for a specific project?" | Builds on: A.1. Prerequisite for: A.2.1, F.0.1. |
| A.2 | Role Taxonomy | Stable | Keywords: role, assignment, holder, context, function vs identity, responsibility, U.RoleAssignment. Queries: "How to model responsibilities?", "What is the difference between what a thing is and what it does?" | Builds on: A.1, A.1.1. Prerequisite for: A.2.1-A.2.6, A.13, A.15. |
| A.2.1 | U.RoleAssignment: Contextual Role Assignment | Stable | Keywords: Standard, holder, role, context, RoleEnactment, RCS/RSG. Queries: "How to formally assign a role in FPF?", "What is the Holder#Role:Context Standard?" | Refines: A.2. Prerequisite for: A.15. |
| A.2.2 | U.Capability: System Ability (dispositional property) | Stable | Keywords: ability, skill, performance, action, work scope, measures. Queries: "How to separate ability from permission?", "What is a capability in FPF?" | Builds on: A.2. Informs: A.15, A.2.3. |
| A.2.3 | U.Service: The External Promise | Stable | Keywords: promise, commitment, consumer, provider, SLO, SLA. Queries: "How to model a service promise?", "Difference between capability and service." | Builds on: A.2.2. Prerequisite for: F.12. |
| A.2.4 | U.EvidenceRole: The Evidential Stance | Stable | Keywords: evidence, claim, support, justification, episteme. Queries: "How does an episteme serve as evidence?", "Modeling evidence roles." | Builds on: A.2. Informs: A.10, B.3. |
| A.2.5 | U.RoleStateGraph: The Named State Space of a Role | Stable | Keywords: state machine, RSG, role state, enactability, lifecycle. Queries: "How to model the state of a role?", "What is a Role State Graph?" | Builds on: A.2.1. Prerequisite for: A.15. |
| A.2.6 | Unified Scope Mechanism (USM): Context Slices & Scopes | Stable | Keywords: scope, applicability, ClaimScope (G), WorkScope, set-valued. Queries: "How to define the scope of a claim or capability?", "What is G in F-G-R?" | Builds on: A.1.1. Constrains: A.2.2, A.2.3, B.3. |
| Cluster A.II · Transformation Engine | ||||
| A.3 | Transformer Constitution (Quartet) | Stable | Keywords: action, causality, change, System-in-Role, MethodDescription, Method, Work. Queries: "How does FPF model an action or a change?", "What is the transformer quartet?" | Builds on: A.2. Prerequisite for: A.3.1, A.3.2, A.15. |
| A.3.1 | U.Method: The Abstract Way of Doing | Stable | Keywords: recipe, how-to, procedure, abstract process. Queries: "What is a Method in FPF?", "Difference between Method and Work." | Refines: A.3. Prerequisite for: A.15. |
| A.3.2 | U.MethodDescription: The Recipe for Action | Stable | Keywords: specification, recipe, SOP, code, model, epistemic artifact. Queries: "How to document a procedure?", "What is a MethodDescription?" | Refines: A.3. Informs: A.15. |
| A.3.3 | U.Dynamics: The Law of Change | Stable | Keywords: state evolution, model, simulation, state space. Queries: "How to model state transitions or system dynamics?", "Difference between a Method and Dynamics." | Builds on: A.19. Informs: B.4. |
| Cluster A.III · Time & Evolution | ||||
| A.4 | Temporal Duality & Open-Ended Evolution Principle | Stable | Keywords: design-time, run-time, evolution, versioning, lifecycle, continuous improvement. Queries: "How does FPF handle plan vs. reality?", "How are systems updated?" | Builds on: P-10 Open-Ended Evolution. Prerequisite for: B.4. |
| Cluster A.IV · Kernel Modularity | ||||
| A.5 | Open-Ended Kernel & Architheory Layering | Stable | Keywords: micro-kernel, plug-in, CAL/LOG/CHR, modularity, extensibility. Queries: "What is the architecture of FPF?", "How are new domains added?" | Builds on: P-4, P-5. Prerequisite for: A.6, all Part C. |
| A.6.0 | U.Signature — Universal, law‑governed declaration | Stable | Keywords: signature, vocabulary, laws, applicability, bounded context. Queries: "What is the universal signature block?", "Where do laws vs. implementations live?" | Placement: Kernel; Coordinates: A.6, A.6.1. |
| A.6 | Architheory Signature & Realization | Stable | Keywords: architheory, signature, realization, Γ-export, invariants. Queries: "What is an Architheory signature?", "How do I export Γ?", "What belongs in the signature vs realization?" | Builds on: A.5, E.10, E.8. Prerequisite for: Part C catalogue. |
| A.6.1 | U.Mechanism — Law‑governed signature for a GovernedSubject | Stable | Keywords: mechanism, OpSig, LawSet, GuardPolicy, Transport(Bridge), Γ_time. Queries: "How to declare a mechanism?", "Where do CL/planes penalties route?", "How to relate mechanisms (refine/extend/quotient/product)?" | Builds on: A.6, E.10.D1; Instances: USM, UNM. |
| Cluster A.V · Constitutional Principles of the Kernel | ||||
| A.7 | Strict Distinction (Clarity Lattice) | Stable | Keywords: category error, Object ≠ Description, Role ≠ Work, ontology. Queries: "How to avoid common modeling mistakes?", "What are FPF's core distinctions?" | Builds on: A.1, A.2, A.3. Constrains: all patterns. |
| A.8 | Universal Core (C-1) | Stable | Keywords: universality, transdisciplinary, domain-agnostic, generalization. Queries: "How does FPF ensure its concepts are universal?" | Builds on: P-8. Constrains: Kernel-level U.Types. |
| A.9 | Cross-Scale Consistency (C-3) | Stable | Keywords: composition, aggregation, holarchy, invariants, roll-up. Queries: "How do rules compose across different scales?", "How to aggregate metrics safely?" | Builds on: A.1, A.8. Prerequisite for: B.1. |
| A.10 | Evidence Anchoring (C-4) | Stable | Keywords: evidence, traceability, audit, provenance, SCR/RSCR. Queries: "How are claims supported by evidence?", "How to ensure auditability?" | Builds on: A.1. Prerequisite for: B.3. |
| A.11 | Ontological Parsimony (C-5) | Stable | Keywords: minimalism, simplicity, Occam's razor, essential concepts. Queries: "How does FPF avoid becoming too complex?", "Rule for adding new concepts." | Builds on: P-1 Cognitive Elegance. Constrains: all new U.Type proposals. |
| A.12 | External Transformer & Reflexive Split (C-2) | Stable | Keywords: causality, agency, self-modification, external agent, control loop. Queries: "How to model a self-healing or self-calibrating system?", "What is the external transformer principle?" | Builds on: A.3. Prerequisite for: B.2.5. |
| A.13 | The Agential Role & Agency Spectrum | Stable | Keywords: agency, autonomy, AgentialRole, Agency-CHR, decision-making. Queries: "How is agency modeled in FPF?", "What is the agency spectrum?" | Builds on: A.2. Refined by: C.9 Agency-CHR. |
| A.14 | Advanced Mereology: Components, Portions, Aspects & Phases | Stable | Keywords: mereology, part-of, ComponentOf, PortionOf, PhaseOf, composition. Queries: "How to model different kinds of 'part-of' relationships?" | Refines: A.1. Prerequisite for: B.1.1. |
| A.15 | Role–Method–Work Alignment (Contextual Enactment) | Stable | Keywords: enactment, alignment, plan vs reality, design vs run, MIC, WorkPlan. Queries: "How do roles, methods, and work connect?", "How does an intention become an action in FPF?" | Integrates: A.2, A.3, A.4. Prerequisite for: all operational models. |
| A.15.1 | U.Work: The Record of Occurrence | Stable | Keywords: execution, event, run, actuals, log, occurrence. Queries: "What is a Work record?", "Where are actual resource costs stored?" | Refines: A.15. Used by: B.1.6, all Part D. |
| A.15.2 | U.WorkPlan: The Schedule of Intent | Stable | Keywords: plan, schedule, intent, forecast. Queries: "How to model a plan or schedule?", "Difference between a WorkPlan and a MethodDescription." | Refines: A.15. Informs: U.Work. |
| A.16 | Formality–Openness Ladder (FOL): Building Closed Worlds Inside an Open World | Draft/Stub | Keywords: formality levels, rigor, proof, specification, sketch, F0-F9. Queries: "How to measure the formality of a document?", "What are the F0-F9 levels?" | Builds on: A.1. Informs: B.3. |
| A.17 | A.CHR-NORM — Canonical “Characteristic” & rename (Dimension/Axis → Characteristic) | Stable | Keywords: characteristic, measurement, property, attribute, dimension, axis. Queries: "What is the correct term for a measurable property?", "How to define a metric?" | Prerequisite for: A.18, A.19, C.16. |
| A.18 | A.CSLC-KERNEL — Minimal CSLC in Kernel (Characteristic/Scale/Level/Coordinate) | Stable | Keywords: CSLC, scale, level, coordinate, measurement Standard. Queries: "What is the CSLC Standard?", "How to ensure measurements are comparable?" | Builds on: A.17. Prerequisite for: all metric-based patterns. |
| A.19 | A.CHR-SPACE — CharacteristicSpace & Dynamics hook | Stable | Keywords: state space, CharacteristicSpace, dynamics, state model, RSG. Queries: "How to define a system's state space?", "How does FPF model change over time?" | Builds on: A.17, A.18, A.2.5. Prerequisite for: A.3.3. |
Part B — Trans-disciplinary Reasoning Cluster
| § | ID & Title | Status | Keywords & Search Queries | Dependencies |
|---|---|---|---|---|
| B.1 | Universal Algebra of Aggregation (Γ) | Stable | Keywords: aggregation, composition, holon, invariants, IDEM, COMM, LOC, WLNK, MONO, gamma operator. Queries: "How does FPF combine parts into a whole?", "What are the rules for aggregation?", "What is the Gamma (Γ) operator?" | Builds on: A.1, A.9. Prerequisite for: All B.1.x, B.2. |
| B.1.1 | Dependency Graph & Proofs | Stable | Keywords: dependency graph, proofs, structural aggregators, sum, set, slice. Queries: "What is the input for the Gamma operator?", "How are aggregation invariants proven in FPF?" | Builds on: B.1. |
| B.1.2 | System-specific Aggregation Γ_sys | Stable | Keywords: system aggregation, physical systems, mass, energy, boundary rules, Sys-CAL. Queries: "How to aggregate physical systems?", "Conservation laws in FPF aggregation?" | Builds on: B.1, A.1, C.1. |
| B.1.3 | Γ_epist — Knowledge-Specific Aggregation | Stable | Keywords: knowledge aggregation, epistemic, provenance, trust, KD-CAL. Queries: "How to combine knowledge artifacts?", "How does trust propagate in FPF?" | Builds on: B.1, A.1, C.2. |
| B.1.4 | Contextual & Temporal Aggregation (Γ_ctx & Γ_time) | Stable | Keywords: temporal aggregation, time-series, order-sensitive, composition. Queries: "How does FPF handle time-series data?", "How to model processes where order matters?" | Builds on: B.1. |
| B.1.5 | Γ_method — Order-Sensitive Method Composition & Instantiation | Stable | Keywords: method composition, workflow, sequential, concurrent, plan vs run. Queries: "How to combine methods or workflows?", "How does FPF model complex procedures?" | Builds on: B.1, B.1.4, A.3.1. |
| B.1.6 | Γ_work — Work as Spent Resource | Stable | Keywords: work, resource aggregation, cost, energy consumption, Resrc-CAL. Queries: "How to calculate the total cost of a process?", "How are resources aggregated in FPF?" | Builds on: B.1, A.15.1, C.5. |
| B.2 | Meta-Holon Transition (MHT): Recognizing Emergence and Re-identifying Wholes | Stable | Keywords: emergence, MHT, meta-system, new whole, synergy, system of systems. Queries: "How does FPF model emergence?", "What is a Meta-Holon Transition?", "When does a collection become more than the sum of its parts?" | Builds on: B.1, A.1. Prerequisite for: All B.2.x. |
| B.2.1 | BOSC Triggers | Draft | Keywords: BOSC, triggers for emergence, boundary, objective, supervisor, complexity. Queries: "What triggers an MHT?", "What are the BOSC criteria for emergence?" | Builds on: B.2. |
| B.2.2 | MST (Sys) — Meta-System Transition | Stable | Keywords: system emergence, super-system, physical emergence. Queries: "How do new systems emerge from parts?", "What is a Meta-System Transition?" | Builds on: B.2, B.2.1, A.1. |
| B.2.3 | MET (KD) — Meta-Epistemic Transition | Stable | Keywords: knowledge emergence, meta-theory, paradigm shift, scientific revolution. Queries: "How do new theories emerge?", "What is a Meta-Epistemic Transition?" | Builds on: B.2, B.2.1, A.1. |
| B.2.4 | MFT (Meta-Functional Transition) | Stable | Keywords: functional emergence, capability emergence, adaptive workflow, new process. Queries: "How do new capabilities or workflows emerge?", "What is a Meta-Functional Transition?" | Builds on: B.2, B.2.1, A.3.1. |
| B.2.5 | Supervisor–Subholon Feedback Loop | Stable | Keywords: control architecture, feedback loop, supervisor, stability, layered control. Queries: "How does FPF model control systems?", "What is the supervisor-subholon pattern?" | Builds on: B.2, A.1. |
| B.3 | Trust & Assurance Calculus (F–G–R with Congruence) | Stable | Keywords: trust, assurance, reliability, F-G-R, formality, scope, congruence, evidence. Queries: "How is trust calculated in FPF?", "What is the F-G-R model?", "How does FPF handle evidence and confidence?" | Builds on: A.10. Prerequisite for: All B.3.x, D.4. |
| B.3.1 | Components & Epistemic Spaces | Draft | Keywords: F-G-R components, measurement templates, epistemic space. Queries: "How are F, G, and R measured?", "What are epistemic spaces?" | Builds on: B.3. |
| B.3.2 | Evidence & Validation Logic (LOG-use) | Draft | Keywords: verification, validation, confidence, logic, proof. Queries: "What is the logic for validating claims in FPF?", "Difference between verification and validation." | Builds on: B.3, C.6. |
| B.3.3 | Assurance Subtypes & Levels | Stable | Keywords: assurance levels, L0-L2, TA, VA, LA, typing, verification, validation. Queries: "What are the assurance levels in FPF?", "How does an artifact mature in FPF?" | Builds on: B.3. |
| B.3.4 | Evidence Decay & Epistemic Debt | Stable | Keywords: evidence aging, decay, freshness, epistemic debt, stale data. Queries: "How does FPF handle outdated evidence?", "What is epistemic debt?" | Builds on: B.3. |
| B.3.5 | CT2R-LOG — Working-Model Relations & Grounding | Stable | Keywords: grounding, constructive trace, working model, assurance layer, CT2R, Compose-CAL. Queries: "How are FPF models grounded in evidence?", "What is the CT2R-LOG?" | Builds on: B.3, E.14, C.13. |
| B.4 | Canonical Evolution Loop | Stable | Keywords: continuous improvement, evolution, Run-Observe-Refine-Deploy, PDCA, OODA. Queries: "How do systems evolve in FPF?", "What is the canonical evolution loop?" | Builds on: A.4, A.12. Prerequisite for: All B.4.x. |
| B.4.1 | System Instantiation | Stable | Keywords: field upgrade, physical system evolution, deployment. Queries: "How are physical systems updated in FPF?" | Builds on: B.4, A.1. |
| B.4.2 | Knowledge Instantiation | Stable | Keywords: theory refinement, knowledge evolution, scientific method. Queries: "How are scientific theories refined in FPF?" | Builds on: B.4, A.1. |
| B.4.3 | Method Instantiation | Stable | Keywords: adaptive workflow, process improvement, operational evolution. Queries: "How do workflows or methods evolve in FPF?" | Builds on: B.4, A.3.1. |
| B.5 | Canonical Reasoning Cycle | Stable | Keywords: reasoning, problem-solving, Abduction-Deduction-Induction, scientific method. Queries: "How does FPF model problem-solving?", "What is the canonical reasoning cycle?" | Builds on: A.10. Prerequisite for: All B.5.x. |
| B.5.1 | Explore → Shape → Evidence → Operate | Stable | Keywords: development cycle, lifecycle, state machine, Explore, Shape, Evidence, Operate. Queries: "What are the development stages of an artifact in FPF?" | Builds on: B.5. |
| B.5.2 | Abductive Loop | Stable | Keywords: abduction, hypothesis generation, creativity, innovation. Queries: "How does FPF model creative thinking?", "What is the abductive loop?" | Builds on: B.5. |
| B.5.2.1 | Creative Abduction with NQD (binding) | Stable | Keywords: NQD, novelty, quality, diversity, open-ended search, Pareto front, E/E-LOG. Queries: "How to systematically generate creative ideas?", "What is NQD in FPF?" | Builds on: B.5.2, C.17, C.18, C.19. |
| B.5.3 | Role-Projection Bridge | Stable | Keywords: domain-specific vocabulary, concept bridge, mapping, terminology. Queries: "How does FPF integrate domain-specific language?", "What is a Role-Projection Bridge?" | Builds on: A.2, C.3. |
| B.6 | Characterisation Families (CHR-use) | Draft | Keywords: characterization, templates, CHR architheories, measurement. Queries: "How to use CHR architheories?" | Builds on: Part C (CHR). |
| B.7 | Common Logic Suite (LOG-use) | Draft | Keywords: logic, inference, trust propagation, LOG-CAL. Queries: "How to apply formal logic in FPF?" | Builds on: Part C (LOG-CAL). |
Part C — Architheory Specifications
| § | ID & Title | Status | Keywords & Search Queries | Dependencies |
|---|---|---|---|---|
| Cluster C.I – Core CALs / LOGs / CHRs | ||||
| C.1 | Sys‑CAL | Draft | Keywords: physical system, composition, conservation laws, energy, mass, resources, U.System. Queries: "How to model physical systems in FPF?", "What are conservation laws in FPF?", "Modeling a pump or engine." | Builds on: A.1 Holonic Foundation, A.14. Coordinates with: Resrc-CAL. Prerequisite for: M-Sys-CAL. |
| C.2 | KD‑CAL | Stable | Keywords: knowledge, epistemic, evidence, trust, assurance, F-G-R, Formality, ClaimScope, Reliability, provenance. Queries: "What is F-G-R?", "How does FPF handle evidence and trust?", "How to model a scientific theory?". | Builds on: A.1, A.10, B.3. Prerequisite for: All patterns using F-G-R, M-KD-CAL. |
| C.2.1 | U.Episteme — Semantic Triangle via Components | Stable | Keywords: semantic triangle, object, concept, symbol, carrier, meaning, representation. Queries: "What is a knowledge artifact in FPF?", "How does FPF separate meaning from its physical form?". | Builds on: C.2. Refines: A.1, A.7. |
| C.2.3 | Unified Formality Characteristic F | Stable | Keywords: Formality, F-scale, F0-F9, rigor, proof, specification, formal methods. Queries: "What are the FPF formality levels?", "How to measure the rigor of a specification?". | Builds on: C.2. Constrains: All patterns referencing F-G-R. |
| C.3 | Kind‑CAL — Kinds, Intent/Extent, and Typed Reasoning | Stable | Keywords: kind, type, intension, extension, subkind, typed reasoning, classification, vocabulary. Queries: "How does FPF handle types?", "What is a 'Kind'?", "Difference between 'scope' and 'type'?". | Builds on: A.1, A.2.6 (USM). Prerequisite for: LOG-CAL, ADR-Kind-CAL, and any pattern needing typed guards. |
| C.3.1 | U.Kind & U.SubkindOf (Core) | Stable | Keywords: kind, subkind, partial order, type hierarchy. Queries: "What is U.Kind in FPF?", "How to model 'is-a' relationships?". | Builds on: A.1, A.2.6 (USM). Prerequisite for: C.3.2, C.3.3. |
| C.3.2 | KindSignature (+F) & Extension/MemberOf | Stable | Keywords: KindSignature, intension, extension, MemberOf, Formality F, determinism. Queries: "How to define the meaning of a Kind?", "What is the difference between intent and extent in FPF?". | Builds on: C.3.1. Prerequisite for: C.3.3, C.3.4. |
| C.3.3 | KindBridge & CL^k — Cross‑context Mapping of Kinds | Stable | Keywords: KindBridge, type-congruence, CL^k, cross-context mapping, R penalty. Queries: "How to map types between domains?", "What is a KindBridge?". | Builds on: C.3.1, C.3.2, A.2.6, C.2.2. |
| C.3.4 | RoleMask — Contextual Adaptation of Kinds (without cloning) | Stable | Keywords: RoleMask, context-local adaptation, constraints, subkind promotion. Queries: "How to adapt a Kind for a local context?", "What is a RoleMask in FPF?". | Builds on: C.3.1, C.3.2. |
| C.3.5 | KindAT — Intentional Abstraction Facet for Kinds (K0…K3) | Stable | Keywords: KindAT, abstraction tier, K0-K3, informative facet, planning. Queries: "What are the abstraction tiers for Kinds?", "How to plan formalization effort?". | Builds on: C.3.1. |
| C.3.A | Typed Guard Macros for Kinds + USM (Annex) | Stable | Keywords: Typed guard, ESG, Method-Work, USM, Kind-CAL, regulatory profile. Queries: "How to write a typed guard?", "How do Kinds and USM interact in gates?". | Builds on: All C.3.x, A.2.6. |
| C.4 | Method‑CAL | Draft | Keywords: method, recipe, procedure, workflow, SOP, MethodDescription, operator. Queries: "How to model a process or workflow?", "What is a MethodDescription in FPF?". | Builds on: A.3, A.15. Coordinates with: Γ_method (B.1.5). |
| C.5 | Resrc‑CAL | Draft | Keywords: resource, energy, material, information, cost, budget, consumption, Γ_work. Queries: "How does FPF model resource usage?", "How to track costs of a process?". | Builds on: A.15.1 (Work). Coordinates with: Sys-CAL. |
| C.6 | LOG‑CAL – Core Logic Calculus | Draft | Keywords: logic, inference, proof, modal logic, trust operators, reasoning. Queries: "What is the base logic of FPF?", "How does FPF handle formal proofs?". | Builds on: Kind-CAL. Is used by: B.7. |
| C.7 | CHR‑CAL – Characterisation Kit | Draft | Keywords: characteristic, property, measurement, metric, quality. Queries: "How to define a new measurable property in FPF?", "What is a CHR architheory?". | Builds on: A.17, A.18. Prerequisite for: Agency-CHR, Creativity-CHR. |
| Cluster C.II – Domain‑Specific Architheories | ||||
| C.9 | Agency‑CHR | Draft | Keywords: agency, agent, autonomy, decision-making, active inference. Queries: "How to measure autonomy?", "What defines an agent in FPF?". | Builds on: CHR-CAL, A.13. |
| C.10 | Norm‑CAL | Draft | Keywords: norm, constraint, ethics, obligation, permission, deontics. Queries: "How to model rules and constraints?", "Where are ethical principles defined in FPF?". | Builds on: A.10. Is used by: Part D. |
| C.11 | Decsn‑CAL | Draft | Keywords: decision, choice, preference, utility, options. Queries: "How does FPF model decision-making?", "How to represent preferences and utility?". | Builds on: A.13. |
| Cluster C.III – Meta‑Infrastructure CALs | ||||
| C.12 | ADR‑Kind-CAL | Draft | Keywords: versioning, rationale, DRR, architecture decision record. Queries: "How are changes to kinds managed?". | Builds on: Kind-CAL, E.9. |
| C.13 | Compose‑CAL — Constructional Mereology | Stable | Keywords: mereology, part-whole, composition, sum, set, slice, extensional identity. Queries: "How does FPF formally construct parts and wholes?", "What is Compose-CAL?". | Builds on: A.14. Is used by: B.3.5 (CT2R-LOG). |
| Cluster C.IV – Composite & Macro‑Scale | ||||
| C.14 | M‑Sys‑CAL | Draft | Keywords: system-of-systems, infrastructure, large-scale systems, orchestration. Queries: "How to model a complex infrastructure like a power grid?". | Builds on: Sys-CAL, B.2.2. |
| C.15 | M‑KD‑CAL | Draft | Keywords: paradigm, scientific discipline, meta-analysis, knowledge ecosystem. Queries: "How to model an entire field of science?". | Builds on: KD-CAL, B.2.3. |
| C.16 | MM‑CHR — Measurement & Metrics Characterization | Stable | Keywords: measurement, metric, unit, scale, CSLC, U.DHCMethodRef, U.Measure. Queries: "How are metrics defined in FPF?", "What is the CSLC discipline?". | Builds on: A.17, A.18. Is a prerequisite for: All CHR architheories. |
| C.17 | Creativity‑CHR — Characterising Generative Novelty & Value | Stable | Keywords: creativity, novelty, value, surprise, innovation, ideation. Queries: "How does FPF measure creativity?", "What defines a novel idea?". | Builds on: CHR-CAL, MM-CHR. Coordinates with: NQD-CAL, E/E-LOG. |
| C.18 | NQD‑CAL — Open‑Ended Search Calculus | Stable | Keywords: search, exploration, hypothesis generation, novelty, quality, diversity (NQD). Queries: "How does FPF support structured brainstorming?", "What is NQD search?". | Builds on: KD-CAL. Coordinates with: B.5.2.1, Creativity-CHR, E/E-LOG. |
| C.18.1 | SLL — Scaling‑Law Lens (binding) | Stable | Keywords: scaling law, scale variables (S), compute‑elasticity, data‑elasticity, resolution‑elasticity, exponent class, knee, diminishing returns. Queries: "How to make search scale‑savvy?", "Where to declare scale variables and expected elasticities?" | Builds on: C.16, C.17, C.18. Coordinates with: C.19, G.5, G.9, G.10. |
| C.19 | E/E‑LOG — Explore–Exploit Governor | Stable | Keywords: explore-exploit, policy, strategy, decision lens, portfolio management. Queries: "How to balance exploration and exploitation?", "What is an EmitterPolicy?". | Builds on: Decsn-CAL. Coordinates with: NQD-CAL. |
| C.19.1 | BLP — Bitter‑Lesson Preference (policy) | Stable | Keywords: general‑method preference, iso‑scale parity, scale‑probe, deontic override. Queries: "What is the default policy when a domain‑specific trick competes with a scalable general method?" | Builds on: C.19, C.24. Coordinates with: G.5, G.8, G.9, A.0. |
| C.20 | Discipline‑CAL — Composition of U.Discipline | Stable | Keywords: discipline, U.AppliedDiscipline, U.Transdiscipline, episteme corpus, standards, institutions, Γ_disc. Queries: "How to compose and assess a discipline in FPF?" | Builds on: C.2 KD‑CAL, G.0, Part F (Bridges/UTS). Coordinates with: C.21, C.23. |
| C.21 | Discipline‑CHR · Field Health & Structure | Stable | Keywords: discipline, field health, reproducibility, standardisation, alignment, disruption. Queries: "How to measure the health of a scientific field?", "What is reproducibility rate?". | Builds on: C.16, C.2, A.2.6, B.3. Coordinates with: C.20, G.2. |
| C.22 | Problem‑CHR · Problem Typing & TaskSignature Binding | Stable | Keywords: problem typing, TaskSignature, selector, eligibility, acceptance, CHR‑typed traits. Queries: "How does FPF type problems for selection?", "What is a TaskSignature?". | Builds on: C.16, G.5, G.0. Coordinates with: G.4, C.23. |
| C.23 | Method‑SoS‑LOG — MethodFamily Evidence & Maturity | Stable | Keywords: MethodFamily, evidence, maturity, SoS-LOG, admit, degrade, abstain, selector. Queries: "How is method family maturity assessed?", "What is the SoS-LOG for selection?". | Builds on: G.5, G.4, C.22, B.3. |
| C.24 | C.Agent-Tools-CAL — Agentic Tool-Use & Call-Planning | | Architheory specification (CAL) for scalable, policy‑aware sequencing of agentic tool calls under budgets and trust gates; instantiates Bitter‑Lesson Preference and the Scaling‑Law Lens. | |
| C.25 | Q-Bundle — Structured Treatment of “-ilities” (Quality Families) | Stable | Clarifies how to model common “-ilities” (availability, reliability, etc.) either as single measurable Characteristics or as composite bundles combining Measures [CHR] + Scope [USM] + Mechanism/Status slots. | Builds on: A.2.6 (USM), A.6.1 (Mechanism), C.16 (MM-CHR) |
Part D – Multi-scale Ethics & Conflict-Optimisation
| § | ID & Title | Status | Keywords & Search Queries | Dependencies |
|---|---|---|---|---|
| D.1 | Axiological Neutrality Principle | Stub | Keywords: axiology, values, ethics, neutrality, morals, preference lattice, objective function. Queries: "Does FPF have built-in ethics?", "How to model different value systems in FPF?", "What is axiological neutrality?" | Builds on: E.2 (Pillars). Enables: D.2, D.4. |
| D.2 | Multi-Scale Ethics Framework | Stub | Keywords: ethics, scale, levels, scope, responsibility, agent, team, ecosystem, planet. Queries: "How to apply ethics at different scales?", "FPF model for team ethics vs. individual ethics." | Builds on: D.1, A.9 (Cross-Scale Consistency). Constrains: D.2.1-D.2.4. |
| D.2.1 | Local-Agent Ethics | Stub | Keywords: individual ethics, duties, permissions, agent, system. Queries: "Modeling duties for a single agent." | Builds on: D.2. |
| D.2.2 | Group-Ethics Standards | Stub | Keywords: collective norms, team ethics, veto, subsidiarity. Queries: "How to define rules for a team in FPF?" | Builds on: D.2. |
| D.2.3 | Ecosystem Stewardship | Stub | Keywords: externalities, tragedy of the commons, inter-architheory. Queries: "Modeling ethical impact on an ecosystem." | Builds on: D.2. |
| D.2.4 | Planetary-Scale Precaution | Stub | Keywords: catastrophic risk, long-termism, precautionary principle. Queries: "How does FPF handle long-term ethical risks?" | Builds on: D.2. |
| D.3 | Holonic Conflict Topology | Stub | Keywords: conflict, clash, disagreement, resolution, resource conflict, goal conflict, epistemic conflict. Queries: "How to model conflicts between systems in FPF?", "Types of conflicts in FPF." | Builds on: A.1 (Holon), B.1 (Aggregation). Enables: D.3.1, D.4. |
| D.3.1 | Conflict Detection Logic (LOG-use) | Stub | Keywords: conflict detection, logic, predicates, conflictsWith. Queries: "Formal logic for detecting conflicts." | Builds on: D.3. |
| D.3.2 | Hierarchical Escalation Protocol | Stub | Keywords: escalation, mediation, negotiation, DRR. Queries: "How does FPF escalate unresolved conflicts?" | Builds on: D.3. |
| D.4 | Trust-Aware Mediation Calculus | Stub | Keywords: mediation, negotiation, conflict resolution, trust score, assurance, algorithm. Queries: "How does FPF resolve conflicts using trust?", "What is the algorithm for mediation?", "Using B.3 scores for decision making." | Builds on: D.3, B.3 (Trust & Assurance Calculus). Uses: C.5 (Resrc-CAL). |
| D.4.1 | Fair-Share Negotiation Operator | Stub | Keywords: fair division, negotiation, Nash bargaining, bias correction. Queries: "Modeling fair negotiation between agents." | Builds on: D.4. |
| D.4.2 | Assurance-Driven Override | Stub | Keywords: safety override, assurance, utility, risk management. Queries: "When does safety override performance in FPF?" | Builds on: D.4. |
| D.5 | Bias-Audit & Ethical Assurance | Stable | Keywords: bias, audit, ethics, assurance, fairness, review cycle, taxonomy, AI ethics, responsible AI. Queries: "How does FPF handle bias?", "What is the Bias-Audit Cycle?", "How to ensure a model is fair?", "Ethical review process in FPF." | Builds on: E.5.4 (Cross-Disciplinary Bias Audit). Complements: B.3.3 (Assurance Levels). |
| D.5.1 | Taxonomy-Guided Audit Templates | Stub | Keywords: bias taxonomy, audit checklist, template. Queries: "Templates for conducting a bias audit." | Builds on: D.5. |
| D.5.2 | Assurance Metrics Roll-up | Stub | Keywords: ethical risk index, metrics, evidence, roll-up. Queries: "How to calculate an overall ethical risk score in FPF?" | Builds on: D.5, B.3. |
Part E – The FPF Constitution and Authoring Guides
| § | ID & Title | Status | Keywords & Search Queries | Dependencies |
|---|---|---|---|---|
| Cluster E.I — The FPF Constitution | ||||
| E.1 | Vision & Mission | Stable | Keywords: vision, mission, operating system for thought, purpose, scope, goals, non-goals. Queries: "What is FPF?", "What is the purpose of the First Principles Framework?", "What problem does FPF solve?". | Prerequisite for: All other patterns, especially E.2. |
| E.2 | The Eleven Pillars | Stable | Keywords: principles, constitution, pillars, invariants, core values, rules, P-1 to P-11. Queries: "What are the core principles of FPF?", "What are the eleven pillars?". | Builds on: E.1. Prerequisite for: E.3 and all normative patterns. |
| E.3 | Principle Taxonomy & Precedence Model | Stable | Keywords: taxonomy, precedence, conflict resolution, hierarchy, principles, classification, Gov, Arch, Epist, Prag, Did. Queries: "How does FPF resolve conflicting principles?", "What is the hierarchy of FPF rules?". | Builds on: E.2. Constrains: All patterns and DRRs. |
| E.4 | FPF Artefact Architecture | Stable | Keywords: artifact, families, architecture, conceptual core, tooling, pedagogy, canon, tutorial, linter. Queries: "How are FPF documents structured?", "What is the difference between the core spec and tooling?". | Builds on: E.1. Constrained by: E.5.3. |
| E.5 | Four Guard-Rails of FPF | Stable | Keywords: guardrails, constraints, architecture, rules, safety, GR-1 to GR-4. Queries: "What are the main architectural constraints in FPF?". | Builds on: E.2, E.3. Prerequisite for: E.5.1, E.5.2, E.5.3, E.5.4. |
| E.5.1 | DevOps Lexical Firewall | Stable | Keywords: lexical firewall, jargon, tool-agnostic, conceptual purity, DevOps, CI/CD, yaml. Queries: "Can I use terms like 'CI/CD' in FPF core patterns?". | Refines: E.5. Constrains: All Core patterns. |
| E.5.2 | Notational Independence | Stable | Keywords: notation, syntax, semantics, tool-agnostic, diagram, UML, BPMN. Queries: "Does FPF require a specific diagram style?", "How is meaning defined in FPF?". | Refines: E.5. Constrains: All Core patterns. |
| E.5.3 | Unidirectional Dependency | Stable | Keywords: dependency, layers, architecture, modularity, acyclic, Core, Tooling, Pedagogy. Queries: "What are the dependency rules between FPF artifact families?". | Refines: E.5. Constrains: E.4. |
| E.5.4 | Cross-Disciplinary Bias Audit | Stable | Keywords: bias, audit, ethics, fairness, trans-disciplinary, neutrality, review. Queries: "How does FPF handle bias?", "Is there an ethics review process in FPF?". | Refines: E.5. Constrains: All Core patterns. Links to: Part D. |
| Cluster E.II — The Author’s Handbook | ||||
| E.6 | Didactic Architecture of the Spec | Stable | Keywords: didactic, pedagogy, structure, narrative flow, on-ramp, learning. Queries: "How is the FPF specification structured for learning?", "What is the 'On-Ramp first' principle?". | Builds on: E.2 (P-2 Didactic Primacy). |
| E.7 | Archetypal Grounding Principle | Stable | Keywords: grounding, examples, archetypes, U.System, U.Episteme, Tell-Show-Show. Queries: "How are FPF patterns explained?", "What are the standard examples in FPF?". | Builds on: E.6. Constrains: All [A] patterns. |
| E.8 | FPF Authoring Conventions & Style Guide | Stable | Keywords: authoring, style guide, conventions, template, S-rules, narrative flow. Queries: "How to write a new FPF pattern?", "What is the FPF style guide?". | Builds on: E.6, E.7. Constrains: All new patterns. |
| E.9 | Design-Rationale Record (DRR) Method | Stable | Keywords: DRR, design rationale, change management, decision record, context, consequences. Queries: "How are changes to FPF managed?", "What is a DRR?". | Builds on: E.2 (P-10 Open-Ended Evolution). Constrains: All normative changes. |
| E.10 | LEX-BUNDLE: Unified Lexical Rules for FPF | Stable | Keywords: lexical rules, naming, registers, rewrite rules, process, function, service. Queries: "What is the complete set of FPF naming rules?". | Builds on: A.7, E.5, F.5. Coordinates with: A.2, A.10, A.15, B.1, B.3, Part F. |
| E.10.P | Conceptual Prefixes (policy & registry) | Stable | Keywords: prefixes, U., Γ_, ut:, tv:, namespace, registry. Queries: "What do the prefixes like 'U.' mean in FPF?". | Depends on: E.9. Constrains: E.5.1, E.5.2. |
| E.10.D1 | Lexical Discipline for “Context” (D.CTX) | Stable | Keywords: context, U.BoundedContext, anchor, domain, frame. Queries: "What is the formal meaning of 'Context' in FPF?". | Builds on: A.7, A.4. Coordinates with: F.1, F.2, F.3, F.7, F.9. |
| E.10.D2 | Intension–Description–Specification Discipline (I/D/S) | Stable | Keywords: intension, description, specification, I/D/S, testable, verifiable. Queries: "Difference between a description and a specification in FPF?". | Builds on: A.7, E.10.D1, C.2.3. Constrains: F.4, F.5, F.8, F.9, F.15. |
| E.11 | Authoring-Tier Scheme (ATS) | Stable | Keywords: authoring tiers, AT0, AT1, AT2, AT3, gate-crossings. Queries: "What are the FPF authoring tiers?", "How does FPF separate applied work from architheory authoring?". | Builds on: E.10, G.0. |
| E.12 | Didactic Primacy & Cognitive Ergonomics | Stable | Keywords: didactic, cognitive load, ergonomics, usability, Rationale Mandate, HF-Loop. Queries: "How does FPF ensure it's understandable?", "What is the 'So What?' test in FPF?". | Builds on: E.2 (P-2). Complements: E.13. |
| E.13 | Pragmatic Utility & Value Alignment | Stable | Keywords: pragmatic, utility, value, Goodhart's Law, Proxy-Audit Loop, MVE. Queries: "How does FPF ensure solutions are useful, not just correct?", "What is a Minimally Viable Example (MVE)?". | Builds on: E.2 (P-7). Complements: E.12. |
| E.14 | Human-Centric Working-Model | Stable | Keywords: working model, human-centric, publication surface, grounding, assurance layers. Queries: "What is the main interface for FPF users?", "How does FPF separate human-readable models from formal assurance?". | Builds on: E.7, E.8, C.2.3. Coordinates with: B.3.5, C.13, E.10. |
| E.15 | Lexical Authoring & Evolution Protocol (LEX-AUTH) | Stable | Keywords: lexical authoring, evolution protocol, LAT, delta-classes. Queries: "How are FPF patterns authored and evolved?", "What is a Lexical Authoring Trace (LAT)?". | Builds on: E.9, E.10, B.4, C.18, C.19, A.10, B.3, F.15. |
| E.16 | RoC‑Autonomy: Budget & Enforcement | Normative | Keywords: autonomy, budget, guard, override, ledger, SoD, SpeechAct. Queries: "How is autonomy bounded and tested?", "How are overrides enforced?" | Builds on: E.8, E.10, E.11; ties F.4/F.6/F.15/F.17; G.4/G.5/G.9. |
Part F — The Unification Suite (U‑Suite): Concept‑Sets, SenseCells & Contextual Role Assignment
| § | ID & Title | Status | Keywords & Search Queries | Dependencies |
|---|---|---|---|---|
| F.0.1 | Contextual Lexicon Principles | Stable | Keywords: local meaning, context, semantic boundary, bridge, congruence, lexicon, U.BoundedContext. Queries: "How does FPF handle ambiguity?", "What is the principle of local meaning?", "How do different contexts communicate?". | Builds on: A.1.1. Prerequisite for: All patterns in Part F. |
| Cluster F.I — Context of Meaning & Raw Material ||||
| F.1 | Domain‑Family Landscape Survey | Stable | Keywords: domain‑family survey, context map, canon, scope notes, versioning, authoritative source. | |
| F.2 | Term Harvesting & Normalisation | Stable | Keywords: term harvesting, lexical unit, normalization, provenance, surface terms. Queries: "How to extract terminology from a standard?", "What is a local lexical unit?", "How to handle synonyms within one domain?". | Builds on: F.1. Prerequisite for: F.3. |
| F.3 | Intra‑Context Sense Clustering | Stable | Keywords: sense clustering, disambiguation, Local-Sense, SenseCell, counter-examples. Queries: "How to group similar terms within a single domain?", "What is a SenseCell?", "How to handle words with multiple meanings in one context?". | Builds on: F.2. Prerequisite for: F.4, F.7, F.9. |
| Cluster F.II — Concept-Sets & Role Assignment/Description (definition, naming, decision) | ||||
| F.4 | Role Description (RCS + RoleStateGraph + Checklists) | Stable | Keywords: role template, status template, invariants, RoleStateGraph (RSG), Role Characterisation Space (RCS). Queries: "How to define a role in FPF?", "What is a Role Description?", "How to specify the states of a role?". | Builds on: F.3, A.2.1. Prerequisite for: F.6, F.8. |
| F.5 | Naming Discipline for U.Types & Roles | Stable | Keywords: naming conventions, lexical rules, morphology, twin registers, U.Type naming. Queries: "What are the rules for naming roles in FPF?", "How to create clear and consistent names for concepts?". | Builds on: F.4, E.10. |
| F.6 | Role Assignment & Enactment Cycle (Six-Step) | Stable | Keywords: role assignment, enactment, conceptual moves, asserting status. Queries: "What is the process for assigning a role?", "How is a role enacted in FPF?", "What are the six steps of role assignment?". | Builds on: F.4, A.2.1, A.15. |
| F.7 | Concept‑Set Table Construction | Stable | Keywords: Concept-Set, cross-context comparison, sense alignment, relation types (≡/⋈/⊂/⟂). Queries: "How to compare concepts from different domains?", "What is a Concept-Set table?", "How to build a unified view of a concept?". | Builds on: F.3, F.9. Prerequisite for: F.8. |
| F.8 | Mint or Reuse? (U.Type vs Concept-Set vs Role Description vs Alias) | Stable | Keywords: decision lattice, type explosion, reuse, minting new types, parsimony. Queries: "When should I create a new U.Type?", "How to avoid creating too many roles?", "Decision guide for new concepts.". | Builds on: F.4, F.7. |
| Cluster F.III — Cross‑Context Alignment & Applied Bindings | ||||
| F.9 | Alignment & Bridge across Contexts | Stable | Keywords: bridge, alignment, congruence-loss (CL), cross-context mapping, policies. Queries: "How to connect concepts between different domains?", "What is an Alignment Bridge?", "How to handle information loss during translation?". | Builds on: F.3. Prerequisite for: F.7, F.10. |
| F.10 | Status Families Mapping (Evidence • Standard • Requirement) | Stable | Keywords: status, evidence, standard, requirement, polarity, applicability windows. Queries: "How to map different types of status like 'evidence' and 'requirement'?", "How does FPF handle compliance?". | Builds on: F.9, B.3. |
| F.11 | Method Quartet Harmonisation | Stable | Keywords: Method, MethodDescription, Work, Actuation, Role–Method–Work alignment. Queries: "How to align the concepts of 'method' and 'work' across domains?", "What is the method quartet?". | Builds on: F.9, A.15. |
| F.12 | Service Acceptance Binding | Stable | Keywords: Service Level Objective (SLO), Service Level Agreement (SLA), acceptance criteria, binding, observation. Queries: "How to bind an SLO to actual work?", "How is service acceptance modeled in FPF?". | Builds on: F.9, A.2.3, KD-CAL. |
| Cluster F.IV — Lexical Development Cycle, Growth Control, Tests & Examples | ||||
| F.13 | Lexical Continuity & Deprecation | Stable | Keywords: evolution, deprecation, renaming, splitting terms, merging terms. Queries: "How to manage changes to terminology over time?", "What is the process for renaming a concept?". | Builds on: F.5. |
| F.14 | Anti‑Explosion Control (Roles & Statuses) | Stable | Keywords: vocabulary growth, guard-rails, separation-of-duties, bundles, reuse. Queries: "How to prevent having too many roles and statuses?", "What are the strategies for controlling vocabulary size?". | Builds on: F.4, F.8. |
| F.15 | SCR/RSCR Harness for Unification | Stable | Keywords: static checks, regression tests, acceptance tests, validation, SenseCell testing. Queries: "How is the unification process validated?", "What are SCR/RSCR tests in FPF?". | Builds on: All of F.1-F.14. |
| F.16 | Worked‑Example Template (Cross‑Domain) | Stable | Keywords: didactic template, example, pedagogy, cross-domain illustration. Queries: "What is the standard format for a worked example in FPF?", "How to show a concept applied across different fields?". | Builds on: All of F.1-F.12. |
| F.17 | Unified Term Sheet (UTS) | Stable | Keywords: Unified Term Sheet, UTS, summary table, glossary, publication, human-readable output. Queries: "What is the final output of the FPF unification process?", "Where can I find a summary of all unified terms?". | Builds on: F.1-F.12. |
| F.18 | Local-First Unification Naming Protocol | Stable | Keywords: naming protocol, Name Card, local meaning, context-anchored naming. Queries: "What is the formal protocol for naming concepts?", "What is a Name Card in FPF?". | Builds on: F.1-F.5. |
Part G – Discipline SoTA Architheory Kit
| § | ID & Title | Status | Keywords & Search Queries | Dependencies |
|---|---|---|---|---|
| G.0 | CG-Spec · Frame Standard & Comparison Gate | Stable | Keywords: CG-Frame, governance, Standard, comparability, comparison gate, evidence, trust folding, Γ-fold, rules, policy. Queries: "How does FPF ensure metrics are comparable?", "What are the rules for comparing data across different models?", "What is a CG-Spec?". | Builds on: B.3 (Trust), A.17-A.19 (MM-CHR), Part F (Bridges). Prerequisite for: G.1, G.2, G.3, G.4, G.5. |
| G.1 | CG-Frame-Ready Generator | Stable | Keywords: generator, SoTA, variant candidates, scaffold, F-suite, artifact creation, UTS, Role Description. Queries: "How to create new FPF artifacts for a domain?", "What is the process for extending FPF with a new theory?", "How does FPF generate candidate solutions?". | Builds on: G.0, C.17 (Creativity-CHR), C.18 (NQD-CAL), C.19 (E/E-LOG). Produces: Artifacts for Part F. |
| G.2 | SoTA Harvester & Synthesis | Stable | Keywords: SoTA, harvester, synthesis, literature review, state-of-the-art, competing Traditions, triage, Bridge Matrix, Claim Sheets. Queries: "How does FPF incorporate existing research?", "How to model competing scientific theories?", "What is a SoTA Synthesis Pack?". | Builds on: F.9 (Bridges). Prerequisite for: G.3, G.4. |
| G.3 | CHR Authoring: Characteristics · Scales · Levels · Coordinates | Stable | Keywords: CHR, authoring, characteristics, scales, levels, coordinates, CSLC, measurement, metrics, typing. Queries: "How do I define a new metric in FPF?", "What are the rules for creating characteristics?", "What is the CHR layer?". | Builds on: G.2, A.17-A.19 (MM-CHR), C.16. Prerequisite for: G.4. |
| G.4 | CAL Authoring: Calculi · Acceptance · Evidence | Stable | Keywords: CAL, calculus, operators, acceptance clauses, evidence, logic, rules, predicates. Queries: "How to define new rules or logic in FPF?", "What is a CAL architheory?", "How to specify acceptance criteria for a method?". | Builds on: G.3, B.3 (Trust). Prerequisite for: G.5. |
| G.5 | Multi-Method Dispatcher & MethodFamily Registry | Stable | Keywords: dispatcher, selector, method family, registry, No-Free-Lunch, policy, selection, multi-method. Queries: "How does FPF choose the right algorithm for a problem?", "What is the multi-method dispatcher?", "How to handle competing methods in FPF?". | Builds on: G.2, G.3, G.4, C.19 (E/E-LOG). |
| G.6 | Evidence Graph & Provenance Ledger | Stable | Keywords: EvidenceGraph, provenance, path, anchor, lane, SCR, RSCR, PathId, PathSliceId. Queries: "How does FPF trace claims to evidence?", "What is an EvidenceGraph?", "How are evidence paths identified?". | Builds on: A.10, B.3, G.4, F.9, C.23. Prerequisite for: G.5. |
| G.7 | Cross-Tradition Bridge Matrix & CL Calibration | Stub | Keywords: Bridge Matrix, Tradition, Congruence Level (CL), CL^k, calibration, sentinel, loss notes, ReferencePlane. Queries: "How to compare competing scientific theories in FPF?", "What is a Bridge Matrix?", "How is Congruence Level calibrated?". | Builds on: G.2, F.9, B.3, E.10, E.11. Prerequisite for: G.5. |
| G.8 | SoS-LOG Bundles & Maturity Ladders | Stable | Keywords: SoS-LOG, maturity ladder, admissibility ledger, selector, admit, degrade, abstain, portfolio, archive, dominance policy, illumination. Queries: "How to package SoS-LOG rules?", "What is a MethodFamily maturity ladder?", "How does the selector get its rules?". | Builds on: C.23, G.4, G.6, G.5, C.22, C.18, C.19, F.9, G.7, E.11, E.10. |
| G.9 | Parity / Benchmark Harness | Stable | Keywords: parity, benchmark, harness, selector, portfolio, iso‑scale parity, scale‑probe, edition pins, freshness windows, comparator set, lawful orders, Pareto, Archive, gauges. Queries: "How to compare competing MethodFamilies?", "What is a parity run?", "How to ensure a fair and scale‑fair benchmark in FPF?". | Builds on: G.5, G.6, G.4, C.23, C.22, C.18/C.18.1/C.19/C.19.1, G.7, F.15, F.9, E.11, E.5.2. |
| G.10 | SoTA Pack Shipping (Core Publication Surface) | Stable | Keywords: SoTA-Pack, shipping surface, publication, parity pins, PathId, PathSliceId, telemetry, UTS, selector-ready. Queries: "What is the final output of the G-suite?", "How are SoTA packs published?", "What is a selector-ready portfolio?". | Builds on: G.1–G.8, F.17–F.18, B.3, E.5.2, E.11, C.18/C.19/C.23. |
| G.11 | Telemetry-Driven Refresh & Decay Orchestrator | Stable | Keywords: telemetry, refresh, decay, PathSlice, Bridge Sentinels, edition-aware, epistemic debt, selector, portfolio. Queries: "How does FPF keep SoTA packs up-to-date?", "What triggers a model refresh?", "How is epistemic debt managed?". | Builds on: G.6, G.7, G.5, G.8, G.10, C.18/C.19, C.23, B.3.4, E.11. |
| G.12 | DHC Dashboards · Discipline-Health Time-Series (lawful gauges, generation-first) | Stable | Keywords: dashboard, discipline health, DHC, time-series, lawful gauges, generation-first, selector, portfolio, Illumination. Queries: "How to measure the health of a discipline?", "What are DHC dashboards?", "How to create lawful time-series reports?". | Builds on: C.21, G.2, G.5, G.6, G.8, G.10, G.11, C.18/C.19, C.23, F.17/F.18, E.5.2. |
| G.13 | External Interop Hooks for SoTA Discipline Packs (conceptual) | INF | Keywords: interop, external index, SoTA, mapper, telemetry, OpenAlex, ORKG, PRISMA, generation-first. Queries: "How does FPF integrate with external knowledge bases like OpenAlex?", "What is an InteropSurface?", "How to map external claims into FPF?". | Builds on: G.2, G.5, G.6, G.7, G.8, G.9, G.10, G.11, G.12, C.21, C.23, E.5.2, E.11. |
Part H – Glossary & Definitional Pattern Index
| § | ID & Title | Tag | Status | Concise reminder |
|---|---|---|---|---|
| H.1 | Alphabetic Glossary | INF | stub | Every U.Type, relation & operator with four‑register naming. |
| H.2 | Definitional Pattern Catalogue | [D] | stub | One‑page micro‑stubs of every [D] pattern for quick lookup. |
| H.3 | Cross‑Reference Maps | INF | stub | Bidirectional links: Part A ↔ Part C ↔ Part B terms. |
Part I – Annexes & Extended Tutorials
| § | ID & Title | Tag | Status | Concise reminder |
|---|---|---|---|---|
| I.1 | Deprecated Aliases | INF | stub | Legacy names kept for backward compatibility. |
| I.2 | Detailed Walk‑throughs | INF | stub | Step‑by‑step modelling of a pump + proof + dev‑ops pipeline. |
| I.3 | Change‑Log (auto‑generated) | INF | stub | Version history keyed to DRR ids. |
| I.4 | External Standards Mappings | INF | stub | Trace tables to ISO 15926, BORO, CCO, Constructor‑Theory terms. |
Part J – Indexes & Navigation Aids
| § | ID & Title | Tag | Status | Concise reminder |
|---|---|---|---|---|
| J.1 | Concept‑to‑Pattern Index | INF | stub | Quick jump from idea (“boundary”) to pattern (§, id). |
| J.2 | Pattern‑to‑Example Index | INF | stub | Table listing every archetypal grounding vignette. |
| J.3 | Principle‑Trace Index | INF | stub | Maps each Pillar / C‑rule / P‑rule to concrete clauses. |
Part K - Lexical Debt
| § | ID & Title | Tag | Status | Concise content reminder — “what belongs here” |
|---|---|---|---|---|
| K.1 | Mandatory Replacement of Measurement Terms | [A] | stub | Retires "axis/dimension" in favor of "Characteristic" and aligns other measurement terms. |
| K.2 | Migration Debt from A.2.6 (USM) | [A] | stub | Specifies the required edits across the FPF to align with the new Unified Scope Mechanism (USM). |
FPF is a set of first-principles-based architecture decisions for transdisciplinary SoTA methods of evolving holons: systems, epistemes, communities.
FPF is designed to serve three primary roles: the Engineer, who builds reliable systems; the Researcher, who searches for and grows trustworthy knowledge; and the Manager, who organizes the collective thinking process of Engineers and Researchers. FPF therefore stands on a deliberately cross‑disciplinary scaffold. What follows traces the ideas that most visibly shaped its kernel: the holonic constructive algebra and the transdisciplinary thinking methods (architheories), together with the conceptual standards for publishing and presenting the results of this thinking.
The format of these architecture decisions for a transdisciplinary thinking architecture is similar to ADRs (architecture decision records). The First Principles Framework (FPF) proposes a pattern language that is generative rather than prescriptive—a toolkit for constructing thought. Each pattern follows the Alexandrian form (problem context, problem, solution, checklist, consequences, rationale, plus dependencies); patterns interlock to form an operating system for thought that is designed to evolve (Open‑Ended Evolution, A.4).
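To make the pattern shape tangible, here is a minimal, illustrative sketch of the Alexandrian fields as a record type (Python; all field names are assumptions chosen for illustration, since FPF itself is notation- and tool-agnostic, cf. E.5.2):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Pattern:
    """One FPF pattern in the Alexandrian form.

    Field names are hypothetical; the spec prescribes no schema."""
    pattern_id: str            # e.g. "A.1"
    title: str                 # e.g. "Holonic Foundation: Entity -> Holon"
    problem_context: str       # where the recurring problem arises
    problem: str               # the tension the pattern resolves
    solution: str              # the generative move that resolves it
    checklist: List[str]       # conditions to verify before claiming conformance
    consequences: str          # what becomes easier or harder once applied
    rationale: str             # why this solution, traced to first principles
    dependencies: List[str] = field(default_factory=list)  # e.g. ["A.2", "E.2"]
```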
Most engineering and management standards, methodologies and frameworks pick a side. They either optimise for assurance — audits, evidence, safety gates — or they celebrate creativity-driven open-ended evolution and agility — ideas, leaps, pivots. First Principles Framework (FPF) is built to do both at once. It gives you a disciplined way to collectively generate and mature novel ideas with trust.
On the imagination rail, FPF is equally deliberate. It does not treat creativity as a black box or a personality trait. It provides a named choreography for creative work:
- Abduct first. Start with the “what could be true?” move—the Abductive Loop—to propose bold candidate explanations or designs before you overfit to today’s data.
- Search widely, then focus. Use an open‑ended search style to illuminate “adjacent possibles,” then apply an explore–exploit governor to decide when to roam for surprises and when to double down on promising directions.
- Shape → Evidence → Operate. Turn a promising sketch into a concrete shape, collect the right evidence to test it, and run it for real. Then loop.
FPF also measures creative quality. It distinguishes novelty for its own sake from valuable novelty. Work is scored along simple, universal characteristics—Is it new? Is it useful? Does it fit the constraints?—so that teams can compare options without collapsing into taste or hierarchy.
On the assurance rail, FPF makes trust a first‑class concern. Claims are anchored to evidence; formality can scale from plain checks to machine‑verified proofs; confidence is computed, not intuited. Meaning is kept local to an explicit frame of reference so “the same word” can’t quietly shift under your feet. The result is a reasoning trail that explains why a decision is justified—clear enough to audit, conservative enough for safety, and evolvable over time. One important question is “What does ‘good’ look like?”, so that pass/fail decisions are made against declared acceptance criteria. The created portfolio of candidates is scored on Novelty, Use‑Value, Surprise, and Constraint‑Fit along a Pareto frontier. We can then evolve our holons‑of‑interest in small, auditable steps, recording the rationale for each change, and run open‑ended searches early before governing the switch from exploring to refining.
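As a toy illustration of the portfolio idea, the sketch below keeps only the Pareto-nondominated candidates across the four characteristics named above. It assumes, for simplicity, that higher is better on every characteristic and that Constraint‑Fit is scored rather than gated; all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Candidate:
    name: str
    novelty: float
    use_value: float
    surprise: float
    constraint_fit: float  # scored here for simplicity; FPF treats must-constraints as gates

    def scores(self):
        return (self.novelty, self.use_value, self.surprise, self.constraint_fit)

def dominates(a: Candidate, b: Candidate) -> bool:
    """a dominates b if it is >= on every characteristic and strictly better somewhere."""
    return all(x >= y for x, y in zip(a.scores(), b.scores())) and a.scores() != b.scores()

def pareto_front(portfolio):
    """Keep candidates that nothing else dominates: a frontier, not a single winner."""
    return [c for c in portfolio if not any(dominates(o, c) for o in portfolio if o is not c)]

portfolio = [
    Candidate("incremental-fix", 0.2, 0.9, 0.1, 1.0),
    Candidate("novel-design",    0.9, 0.6, 0.8, 0.9),
    Candidate("wild-guess",      0.95, 0.1, 0.9, 0.2),
    Candidate("dominated-idea",  0.1, 0.5, 0.1, 0.8),  # dominated by incremental-fix
]
print([c.name for c in pareto_front(portfolio)])
# -> ['incremental-fix', 'novel-design', 'wild-guess']
```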
In a lab: a puzzling anomaly isn’t “noise”; it is a prompt. You generate alternate explanations, explore them widely, then pick a direction with a clear explore–exploit rule. Each candidate must face a fit‑for‑purpose test; only those with evidence advance. In a product team: concept sketches are not meetings in disguise; they are first‑class artifacts that move through Explore → Shape → Evidence → Operate. Creativity is expected; untested cleverness is not. In operations: procedures are safe by design, yet the framework leaves room for abductive fixes when reality throws a curveball—provided they are later folded back into the evidence trail.
Assurance without imagination calcifies. Imagination without assurance drifts. FPF’s Standard is to separate the moves cleanly—so you can be genuinely inventive without losing your audit trail—and to reconnect them on purpose—so good ideas survive contact with the world. The framework’s creative patterns make generation systematic; its assurance patterns make selection and adoption reliable. That is how a team becomes both safe and original.
Synthesis. FPF treats creativity as a governed search and assurance as a repeatable reckoning. Together they form an engine for responsibly changing a collective’s mind—and then changing the physical world.
FPF also adopts an explicit Bitter‑Lesson Preference and a Scaling‑Law Lens for all open‑ended search and portfolio‑selection work:
- BLP default (policy). When a domain‑specific heuristic competes with a general, scale‑amenable search/learning method, prefer the general method unless (i) a declared deontic constraint forbids it, or (ii) a scale‑probe (two or more points along declared Scale Variables) shows the heuristic dominates in the relevant scale window for this context (see the sketch after this list).
- Scale‑savvy exploration. In open‑ended generation, declare the Scale Variables (S) that govern improvement (e.g., parameterisation breadth, data exposure, iteration budget, temporal/spatial resolution) and the expected elasticities; early exploration samples along scale‑paths to estimate diminishing‑returns regimes.
- Strategy read‑out. Portfolios and SoTA packs are reported as sets with scale‑aware fronts (utility × novelty × constraint‑fit × scale‑elasticity classes), not as single winners at frozen budgets; exploitation phases inherit the declared scale policy. (Formalisation: C.18.1 SLL; C.19.1 BLP.)
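A minimal decision sketch of the BLP default, assuming utilities have already been measured at two or more points along one declared Scale Variable; the function and its arguments are hypothetical stand-ins for the C.19.1 machinery, not its normative form.

```python
def blp_choice(general_curve, heuristic_curve, scale_window, deontically_forbidden=False):
    """
    Toy Bitter-Lesson Preference rule (hypothetical helper, not normative FPF).

    general_curve / heuristic_curve: dicts {scale_point: measured_utility},
    sampled at >= 2 points along one declared Scale Variable (the 'scale-probe').
    scale_window: (lo, hi) -- the scale range relevant to this context.
    """
    if deontically_forbidden:        # (i) a declared deontic constraint forbids the general method
        return "heuristic"
    lo, hi = scale_window
    probe = [s for s in general_curve if lo <= s <= hi and s in heuristic_curve]
    if len(probe) < 2:
        return "general"             # no valid scale-probe -> the BLP default applies
    heuristic_dominates = all(heuristic_curve[s] > general_curve[s] for s in probe)
    return "heuristic" if heuristic_dominates else "general"

# Example: the heuristic wins at small scale, but the general method overtakes it.
general   = {1_000: 0.55, 10_000: 0.78}
heuristic = {1_000: 0.70, 10_000: 0.72}
print(blp_choice(general, heuristic, scale_window=(1_000, 10_000)))  # -> "general"
```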
A fundamental challenge in any rigorous thinking is how to handle incomplete information. To build reliable systems and make trustworthy claims, we must make decisive judgments based on what we know, while remaining aware of the vast ocean of what we don't. This tension is formally captured by two opposing assumptions about the world: the Open-World Assumption and the Closed-World Assumption. FPF does not force a choice between them; instead, it provides a principled architecture for using both where they are most appropriate.
The distinction is best understood through a simple analogy:
- The Open-World Assumption (OWA): Absence of proof is not proof of absence. If a name is not on a party guest list, we cannot conclude they are not coming. The list might simply be incomplete. This is the assumption of science, exploration, and the internet. It is a world of unbounded possibility, where new facts can always be discovered.
- The Closed-World Assumption (CWA): What is not known to be true is considered false. If a name is not on a flight manifest, the airline and the security services will conclude they are not on the plane. For safety and operations, the list is assumed to be complete and authoritative. This is the assumption of databases, legal Standards, and safety-critical engineering. It is a world of bounded certainty, where we need to make reliable decisions based on a defined set of facts.
FPF is a hybrid system, architected to operate within the reality of an open world while enabling the construction of the reliable, locally-closed worlds necessary for engineering.
How does FPF embrace the open world? The framework is fundamentally designed to acknowledge that our knowledge is never complete. This OWA stance is embedded in its core principles:
- Open-Ended Evolution (P-10): FPF is built on the premise that any holon—a system, a theory, a method—is perpetually incomplete and can be improved. New evidence can always emerge.
- Open-Ended Kernel (A.5): The architecture of a minimal kernel with plug-in architheories is an admission that the core cannot and should not attempt to describe everything. The world is too rich for any single, final ontology.
- The Abductive Loop (B.5.2): The very first step of the reasoning cycle is to generate a new hypothesis. This act is a formal recognition that our current model is insufficient to explain an anomaly—a clear OWA posture. It is operationalised by B.5.2.1 via C.17–C.19.
How does FPF construct and manage closed worlds? While the universe is open, engineering requires us to build systems that are safe, predictable, and auditable. To do this, we must be able to draw a line and declare that, for a specific purpose, our knowledge within that line is complete. FPF provides the formal tools to build and govern these "islands of CWA":
- U.BoundedContext (A.1.1): This is the primary mechanism for establishing a local CWA. Within a Bounded Context, a specific set of models, rules, and invariants is declared to be authoritative. Any statement that violates an invariant within that context is considered false.
- U.Boundary (A.1): The boundary of a holon is the physical or conceptual wall of the CWA island. It makes the distinction between the managed "inside" and the unmanaged "outside" explicit, turning an abstract assumption into a concrete architectural feature.
- Conformance Checklists: Each pattern's checklist acts as a set of CWA rules. A model that fails a check is not "of unknown status"; it is formally non-conformant.
- Assurance Levels (B.3.3): The assurance calculus makes a decisive CWA judgment on trust. A claim without an explicit evidence anchor is not "of unknown reliability"; it is assigned AssuranceLevel: L0 (Unsubstantiated). For the purpose of making decisions, it is not trusted.
In essence, FPF does not attempt the impossible task of transforming the open world into a closed one. It provides the architectural discipline to draw a firm line in the sand, make a reliable decision based on what's inside that line, and always remain aware of the open, unbounded world that lies beyond it.
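A toy rendering of an “island of CWA” and the decisive L0 judgment. The class and function names here are illustrative assumptions, not the normative A.1.1 / B.3.3 machinery.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Claim:
    text: str
    evidence_anchor: Optional[str] = None   # link to evidence; None = unanchored

@dataclass
class BoundedContext:
    """Toy 'island of CWA': inside the boundary, what is not known
    to be true is treated as false (names are hypothetical)."""
    name: str
    authoritative_facts: set = field(default_factory=set)

    def holds(self, fact: str) -> bool:
        return fact in self.authoritative_facts   # closed-world read: absent => False

def assurance_level(claim: Claim) -> str:
    # No evidence anchor is not "unknown reliability" -- it is decisively L0.
    return "L0 (Unsubstantiated)" if claim.evidence_anchor is None else "L1+ (anchored)"

manifest = BoundedContext("flight-manifest", {"alice", "bob"})
print(manifest.holds("mallory"))                  # False -- a CWA judgment
print(assurance_level(Claim("pump meets spec")))  # L0 (Unsubstantiated)
```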
A method of thinking is itself a system. Like any system, it can be designed with ad-hoc, brittle connections that fail under pressure, or it can be architected for resilience, clarity, and growth. The First Principles Framework is not merely a collection of concepts or a static ontology; it is a formal architecture for a method of trans-disciplinary thinking. Its very structure—a collection of interconnected Architectural and Definitional Patterns presented as a series of architecture/design records—is a deliberate choice that mirrors its function.
This concept is directly analogous to the modern practice of Evolutionary Architecture in software engineering. An evolutionary architecture is one designed to support incremental, guided change across multiple dimensions. It acknowledges that the systems we build are never "finished" and must be able to adapt to new requirements and a changing environment without catastrophic rewrites. The architecture itself provides the stable pathways and guiding principles—the "fitness functions"—that allow the system to evolve gracefully.
FPF applies this same architectural thinking to the dynamics of reasoning itself. It provides a set of load-bearing patterns and constitutional principles that act as the fitness functions for our thoughts. By building our reasoning within this architecture, we are not just seeking a correct answer in the moment; we are seeking a portfolio of answers on the Pareto frontier of a multi-criterial optimisation. These SoTA answers need regular re-checking, because progress in science and engineering keeps moving that frontier. Open-endedness and evolvability are the rule.
The value of this architectural approach lies in its ability to explicitly protect and sustain the critical characteristics of rigorous thought, shielding them from the natural degradation they suffer in complex, long-running projects. Where traditional critical thinking identifies failures in these characteristics, FPF provides the mechanisms to build them in by design. Open-ended creative generativity is explicitly instrumented.
Part of the FPF architecture for open-ended evolution is counterintuitive. For example, to determine SoTA systems, knowledge, communities, methods, disciplines and other entities, you need to compare them. FPF therefore has a measurement and comparability theory that starts all thinking with the design of a comparability-gauge frame (CG-frame). To discuss the dynamics of holon change, FPF talks about a holon's characteristics that are measurable within CG-frames, and about trajectories in characteristic spaces.
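A minimal sketch of what declaring a CG-frame before comparison could look like, assuming a frame is simply a list of typed characteristics; every name here is a hypothetical illustration, not a normative FPF identifier.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Dict, List, Tuple

class ScaleKind(Enum):          # measurement typing: which operations are lawful
    ORDINAL = "ordinal"         # ranks only -- averaging is forbidden
    RATIO = "ratio"             # full arithmetic is lawful

@dataclass(frozen=True)
class Characteristic:
    name: str
    scale: ScaleKind
    unit: str                   # "" for dimensionless/ordinal

@dataclass
class CGFrame:
    """Hypothetical shape of a comparability-gauge frame: it fixes *what*
    may be compared before any comparison happens."""
    frame_id: str
    characteristics: List[Characteristic]

    def check(self, measurement: Dict[str, float]) -> bool:
        """Comparable under this frame only if it covers exactly the declared characteristics."""
        return set(measurement) == {c.name for c in self.characteristics}

frame = CGFrame("pump-capacity-v1", [
    Characteristic("throughput", ScaleKind.RATIO, "L/s"),
    Characteristic("reliability_rank", ScaleKind.ORDINAL, ""),
])

# A trajectory is the holon's path through the declared characteristic space over time.
trajectory: List[Tuple[str, Dict[str, float]]] = [
    ("2024-01", {"throughput": 3.1, "reliability_rank": 2}),
    ("2025-01", {"throughput": 3.4, "reliability_rank": 1}),
]
assert all(frame.check(point) for _, point in trajectory)
```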
| Architectural Characteristic of Thought | What it protects / why it matters | The FPF Mechanisms that Preserve It |
|---|---|---|
| Auditability & Traceability | The unbreakable chain from a claim back to its evidence. This is the quality of being able to answer "Why is this true?" at any point. | Evidence Anchoring (A.10), the Design-Rationale Record (DRR) Method (E.9), and the entire Trust & Assurance Calculus (B.3). The architecture makes untraceable claims a modeling violation. |
| Evolvability | The capacity of a model or system to adapt to new information or requirements without losing its conceptual integrity. | The Open-Ended Evolution Principle (P-10), the Canonical Evolution Loop (B.4), and the DRR Process (E.9). Change is not a bug; it is a formally managed, first-class feature of the architecture. |
| Creativity (Generative Novelty & Value) | The ability to reliably generate, select, and mature novel hypotheses/designs that are both new and fit to purpose—exploration without losing auditability or safety. | Creativity‑CHR (C.17) for measurable Novelty / Use‑Value / Surprise / Constraint‑Fit; NQD‑CAL (C.18) for open‑ended, illumination‑style search; E/E‑LOG (C.19) to govern explore↔exploit policies; Creative Abduction with NQD (B.5.2.1) / Abductive Loop (B.5.2) to structure hypothesis generation; Design‑Rationale Record (E.9) to capture decisions so creativity stays auditable. |
| Composability & Modularity | The ability to construct complex, reliable ideas from simpler, independently verifiable components. | The Open-Ended Kernel (A.5), Architheory Signatures (A.6), Universal Γ (B.1), plus Boundary‑Inheritance Standard (BIC) and the Cut‑Stable Boundary Axiom for safe structural cuts, and the Method Interface Standard (MIC) for typed method I/O and conservation constraints. Together they make composition predictable and auditable. |
| Falsifiability | The quality that every claim is structured so it can be rigorously tested and potentially proven false. | Conformance Checklists embedded in every pattern and the Trust & Assurance Calculus (B.3). Every normative artifact must declare success/failure criteria and null tests. |
| Cross-Scale Coherence | The guarantee that the same fundamental logic applies to a single component, an integrated system, and a system‑of‑systems. | Cross-Scale Consistency (A.9), Universal Γ (B.1) with proof obligations for context/time reasoning (Proof Kit), and declared Γ‑fold policies over WLNK/COMM/LOC/MONO + time policy (no free‑hand averages). These preserve invariants across zoom levels and eras. |
| Design–Run Separation (Temporal Integrity) | Prevents “design/run chimeras”, keeps assumptions/versioned specs separate from runtime evidence; enables reproducible state over time. | A.4 design–run split (used across CHR/creativity), KD‑CAL CC‑KD‑08 (no episteme mutation in Work), Γ_time rules (T‑1..T‑3), DRR (E.9) for rationale/versioning, Canonical Evolution Loop (B.4) for orderly change. |
| Lexical & Representation Discipline | Guards against category errors and notation lock‑in; keeps language unambiguous and tool‑neutral across contexts. | Strict Distinction (didactic distillation of SD), LEX‑BUNDLE (E.10), and Guard‑Rails E.5.* (DevOps Lexical Firewall, Notational Independence, Unidirectional Dependency, Bias‑Audit). All meanings live in a U.BoundedContext and cross only via Bridges. |
| Measurement Typing & Units | Ensures metrics are correctly typed (ordinal/interval/ratio), unitful, and safe to operate on; forbids “ordinal averages”. | A.17/A.18 measurement discipline + MM‑CHR (C.16) templates; KD‑CAL CC‑KD‑12 (units/envelopes/windows). |
| Order/Time‑Safe Orchestration | Separates structure from control‑flow and time; prevents hidden order/time bugs in authored models. | Γ_ctx (NC‑1..3) and Γ_time (T‑1..T‑3) laws; CT2R‑LOG “no order/time in parts”; E.14 “no order/time in structure” for authoring conformance. |
| Trust Calibration & Cross‑Context Integrity | Keeps claims honest when moved across Contexts; reduces over‑optimism via weakest‑link and CL penalties. | Trust & Assurance Calculus (B.3) (F‑G‑R characteristics), Bridges with CL (KD‑CAL CC‑KD‑07), and creativity rules that lower R (not scale) when crossing contexts. |
| Agency & Accountability (SoD) | Makes “who acts” explicit; enforces Separation‑of‑Duties so evidence isn’t self‑authored. | A.2 Role suite & A.15 run‑alignment (roles vs evidence/work), SoD gates in creativity flows (“fails SoD — same author as reviewer”). |
| Scope Safety & Encapsulation | Prevents scope‑creep and category bleed; each claim applies only within its declared Context and exits only via governed bridges. | Γ_ctx (NC‑1..3) and U.BoundedContext for hard context walls; Bridges with CL (KD‑CAL CC‑KD‑07) for governed crossings; CG‑frame (A.19) to declare scope of comparability. |
| Reproducibility & Deterministic Replay | Ability to re‑obtain the same result given the same inputs, model version, and time policy; enables trustworthy debugging and audit. | A.4 Design–Run split, Γ_time (T‑1..T‑3), CT2R‑LOG (“no order/time in parts”), E.14 (“no order/time in structure”), DRR (E.9) for versioned rationale, Evidence Anchoring (A.10). |
| Change‑Impact Predictability (Blast‑Radius Control) | Changes have bounded, knowable effects; reviewers can see which CG‑frames, bridges, and claims are touched. | Canonical Evolution Loop (B.4) with explicit deltas, DRR (E.9) change graph and decision record, Evidence Anchoring (A.10) for provenance links, Trust & Assurance Calculus (B.3) to update risk post‑change, CG‑frame (A.19) to localize roll‑ups. |
| Exploration Health (Portfolio Coverage) | Avoids local maxima and groupthink; measures how widely we explore. | Creativity‑CHR (C.17) Diversity_P + coverage maps (illumination), NQD‑CAL (C.18) IlluminationSummary, E/E‑LOG (C.19) explore_share / policy. |
| Constraint Safety & Ethical Assurance | Ensures non‑negotiable constraints (safety/ethics/standards) gate enactment; prevents “novelty theft”. | ConstraintFit (C.17 §5.4) as eligibility, D‑cluster Bias‑Audit & Ethical Assurance (D.5); attribution tracked via AttributionIntegrity. |
| Didactic Clarity & Working‑Model Primacy | Keeps the human‑readable canon primary; assurance flows downward; readers can reason without tool lock‑in. | E.12 Didactic Primacy & Cognitive Ergonomics, E.14 Human‑Centric Working‑Model (conformance checklist), E.7 Tell‑Show‑Show. |
| Typed Reasoning (Kinds & Intent/Extent) | Prevents category confusions; enables typed, context‑local reasoning and safe Cross‑context mappings. | Kind‑CAL (C.3) — U.Kind & SubkindOf, KindSignature & Extension, KindBridge & CL^k for Cross‑context mapping. |
| Comparability & Roll‑up Integrity (CG‑frames) | Makes “same number” meaningful across teams; preserves invariants in aggregation. | CG‑frame (A.19) comparability modes and explicit Γ‑fold declarations (WLNK/COMM/LOC/MONO + time policy); integrates with Bridges with CL for Cross‑context moves; benefits include safe roll‑ups and RSG‑ready gates. |
Therefore, FPF should be understood not as a passive library of terms, but as an engineered method for thinking. Its patterns are the architectural decisions that shape this method. Its ultimate value is not in any single model it can produce, but in the enduring quality of the reasoning process it sustains—a discipline that is auditable, evolvable, and coherent by design.
The modern discipline of critical thinking has rightly focused on identifying and mitigating a long list of cognitive biases—the predictable glitches in our intuitive reasoning, from confirmation bias to the availability heuristic. The practice of "bias hunting" is a valuable diagnostic tool for improving our intellectual hygiene. However, it suffers from a fundamental limitation: it is primarily corrective, not constructive. It teaches us how to find flaws in existing arguments but offers little guidance on how to build a robust, complex argument from first principles.
This reactive approach is like trying to improve road safety by handing drivers a list of 50 common mistakes. While helpful, it is an incomplete solution. It relies on the driver's constant vigilance to avoid an ever-growing catalog of potential errors—a cognitive "whack-a-mole" that is both exhausting and ultimately fallible.
The First Principles Framework (FPF) proposes a different, complementary approach. It is not concerned with correcting the driver's psychology, but with designing a safer car and establishing the rules of the road. FPF is a generative architecture for thought. Its primary purpose is not to diagnose errors, but to provide a structural scaffold that makes entire classes of errors difficult or impossible to commit in the first place.
This architectural approach shifts the focus from the internal, fallible state of the thinker to the external, verifiable structure of their thoughts. Where the study of cognitive biases offers a map of mental pitfalls, FPF provides the engineering blueprints for building a bridge over them. The following table illustrates how FPF's architectural solutions provide structural protection against common cognitive failure modes—many of which are deeper and more systemic than those on the classic lists of biases.
| Cognitive Failure Mode | The Conventional Approach (Diagnostic) | The FPF Solution (Architectural & Generative) |
|---|---|---|
| Conflation of Plan and Reality | Reminds us to be aware of the Planning Fallacy or Confirmation Bias, where we seek evidence that our plan is working and ignore contradictory data. | Temporal Duality (A.4) and the strict distinction between design-time artifacts (MethodDescription, WorkPlan) and run-time artifacts (Work). This is not a psychological reminder; it is a category error to mix them. The architecture enforces the separation. |
| Ambiguity and Equivocation | Warns against using vague terms or shifting the meaning of a word mid-argument. | Lexical Discipline (E.10) and U.BoundedContext (A.1.1). FPF bans overloaded terms like "process" from its core and requires that all domain terms be explicitly projected onto precise FPF concepts within a bounded context. Ambiguity is architecturally constrained, not just advised against. |
| Causality Collapse & Lack of Accountability | Points out the Fundamental Attribution Error or describes situations where causes are poorly understood. | External Transformer Principle (A.12). FPF makes it an architectural invariant that every change must be attributed to an external agent (System in a U.RoleAssignment). "It configured itself" is not a cognitive bias; it is a modeling violation. Causality is non-negotiable. |
| Inconsistent Aggregation & Scope Neglect | Highlights biases where we incorrectly generalize from parts to a whole or ignore the scale of a problem. | Cross-Scale Consistency (A.9) and the Universal Algebra of Aggregation (Γ) with its Invariant Quintet (B.1). FPF provides a formal, conservative algebra (e.g., the Weakest-Link bound) for aggregation, making naive or optimistic roll-ups a provable error in the model (see the sketch after this table). |
| Creative Mode Collapse (Premature Convergence) | Advises teams to “brainstorm more,” add ideation checklists, or warn against fixation—creativity is audited post‑hoc. | Creative Abduction (B.5.2) bound to NQD‑CAL (C.18) and governed by E/E‑LOG (C.19) keeps hypothesis generation formally open (illumination‑style emitters, exploration quotas, selection lenses), while Creativity‑CHR (C.17) scores outputs on Novelty, Use‑Value, Surprise, and ConstraintFit inside a U.BoundedContext. Premature convergence becomes a policy/modeling violation (insufficient exploration or missing lenses), not a soft reminder. |
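To ground the Weakest-Link bound referenced in the table, here is a one-function sketch. It assumes component assurance is already expressed on a common ratio scale within one CG-frame; it is an illustration, not the full Universal Γ algebra.

```python
def gamma_weakest_link(component_assurance: dict) -> float:
    """Conservative aggregation sketch: the assurance of a composed whole
    may not exceed that of its weakest part (illustrative only)."""
    if not component_assurance:
        raise ValueError("cannot aggregate an empty composition")
    return min(component_assurance.values())

# A naive average would report ~0.83 for this assembly; the weakest-link
# bound forces the honest, conservative answer instead.
assembly = {"pump-body": 0.99, "seal": 0.95, "uncertified-gasket": 0.55}
print(gamma_weakest_link(assembly))   # 0.55, set by the gasket
```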
FPF does not make a thinker immune to cognitive biases. Rather, it provides a disciplined, external environment for reasoning that channels cognitive effort productively. It provides the Canonical Reasoning Cycle (B.5)—a constructive path from a novel idea (Abduction) to a validated conclusion (Induction)—rather than just a set of warnings about wrong turns. Creative ideation is first‑class: B.5.2.1 together with C.17–C.19 replaces ad‑hoc brainstorming with measurable Novelty–Quality–Diversity search, complementing the assurance calculus.
In this way, FPF is not a replacement for critical and creative thinking but their engineering reinforcement. It provides the architectural integrity, shared vocabulary, and formal discipline necessary to move from merely avoiding mistakes and generating ad hoc ideas to reliably producing trustworthy and auditable insights.
A core challenge of any rigorous intellectual effort is that thought itself is intangible. While many frameworks focus on managing data, process, or team activities, FPF uniquely focuses on architecting the act of reasoning itself. It achieves this by providing a discipline of "thinking through writing"—a method for giving thought a concrete, shareable, and auditable form. The diverse formats found within the framework—the Cards, Tables, Records, and Specifications—are the instruments for this discipline.
At its heart, FPF requires what might be metaphorically called "pencil and paper." To engage with the framework is to externalize one's reasoning, moving it from the fleeting space of internal cognition to a persistent medium where it can be inspected, challenged, and refined. This "writing" is not a by-product of thinking; it is the thinking. The act of filling out a Role Description Card or constructing a Concept-Set Table is not mere documentation; it is the cognitive work of making distinctions, declaring invariants, and justifying relationships. These forms give shape and persistence to thought.
This discipline is operationalized through a rich vocabulary of conceptual forms, each tailored for a specific cognitive task. Cards serve to define and scope individual concepts: a Context Card (F.1) fixes the semantic boundaries of a domain, while a Role Description Card (F.4) specifies the invariants of a particular behavioral role or status. Tables are used to compare and synthesize knowledge across these boundaries, with the Unified Term Sheet (UTS) (F.17) providing the canonical, human-readable summary of how concepts align. Records, such as the Design-Rationale Record (DRR) (E.9), create a durable, auditable history of why a decision was made, capturing the context and trade-offs. Finally, Standards and Specifications make rules explicit, from the high-level Architheory Signature (A.6) that governs a plug-in's behavior to the detailed Conformance Checklists that conclude every pattern. Each form is a distinct instrument in the FPF toolkit, designed to isolate and clarify a specific aspect of a complex problem.
It is critical, however, to understand the precise nature of this "writing." The FPF constitution is built on a deliberate separation of concerns that grants teams maximum freedom in their operational practices.
- FPF is Not a Tooling or Notation Mandate. The "pencil and paper" are a metaphor. FPF is fundamentally agnostic to the medium. Whether a team uses a physical whiteboard, a shared text document, a wiki, a version-controlled set of Markdown files, or a sophisticated modeling tool is an implementation detail that lies outside the conceptual core. The framework's value resides in the structure of the thought that these forms demand, not in any specific rendering. This is the essence of the Notational Independence guard-rail (E.5.2).
- FPF is Not a Team Workflow or Data Governance Policy. The framework does not prescribe how a team should run its meetings, manage its repositories, or version its files. It is not a substitute for methodologies like Agile or for data governance policies. Rather, FPF provides the conceptual content that these processes act upon. A team can use its existing Agile workflow to manage the creation of a Design-Rationale Record (DRR), and its existing data governance policy to manage the storage of a Unified Term Sheet (UTS). FPF provides the what—the structure of a sound argument—not the how of team logistics.
The purpose of this discipline is to augment both individual and collective cognition. For the individual, the written artifact acts as an extension of working memory, making it possible to hold and manipulate far more complex models than one could in their head alone. For the team, these shared, tangible artifacts create a common conceptual space. They become the stable ground upon which collective reasoning can occur—a shared object that can be debated, annotated, and iteratively improved.
This flexibility is by design. The conceptual Standard of a Role Description Card is fixed by FPF, but its physical implementation is a project-level decision. One team might manage their cards in a simple spreadsheet, another in a relational database, and a third in a formal ontology. All can be fully FPF-conformant because they honor the conceptual structure, regardless of the underlying data-handling choices.
Ultimately, the diverse forms within FPF are not bureaucratic artifacts to be produced; they are conceptual instruments to be used. They provide the minimal necessary structure to turn fleeting insights into durable, shareable, and contestable knowledge. They are the grammar that allows a team to write its thoughts, and then, together, to edit them towards truth.
The First Principles Framework (FPF) shares a goal with classical upper ontologies (e.g., Basic Formal Ontology (BFO), DOLCE): to provide a universal, unified language that cuts across disciplinary silos. Yet they pursue this from fundamentally different starting points. Understanding this distinction is key to grasping FPF’s unique purpose.
A classical upper ontology aims to create a logically consistent inventory of what exists. Its primary task is descriptive metaphysics: partitioning reality into fundamental categories (like continuants vs. occurrents, objects vs. processes) and defining their relations. The result is a rigorous, hierarchical map optimized for data integration and preventing category errors. It tells you, with formal precision, that an engine is not a process of running, and that a hole is a quality, not an object.
FPF, by contrast, is a thinking-oriented architecture. Its primary task is not to describe the world but to orchestrate the process of reasoning about the world. It is less a map and more a compass and checklist, guiding an agent's attention toward the decisive aspects of a problem—objectives, trust, emergence, and dynamics—before any taxonomy is imposed. This resolves a core tension: descriptive ontologies become static encyclopedias, while FPF's generative patterns interlink into an evolvable language for action.
The following contrasts highlight this shift:
| Characteristic | Classical Upper Ontology | FPF's Thinking Architecture |
|---|---|---|
| Core Task | Logically consistent inventory of entity types. | Generative scaffold for reasoning and decision-making. |
| Primary Question | “What is this?” | “How do we reason about this, and why does it matter?” |
| Guiding Artefact | Taxonomy & logical axioms. | Patterns (context ▲ problem ▲ solution + CC). |
| Validation Mode | Consistency in formal reasoners. | Satisfying Conformance Checklist for goals, trust, emergence. |
| Change Driver | Domain evolution → new classes. | Cognitive evolution → new reasoning patterns. |
| Cross-Disciplinarity | Challenging: each domain = new branch. | Built-in: patterns span ≥3 domains (C-1 Universality). |
| Physical Grounding | Optional; often abstract. | Mandatory: material Transformer anchor (e.g., in Pattern D.1 Mereology). |
Empirical progress since 2015 supports the “Bitter Lesson” (Sutton, 2019): systems that leverage more data, more compute, and more freedom (less hand‑coded domain procedure) tend to outperform bespoke rule‑engineered solutions. Scaling‑law work (e.g., 2020–2022) shows that broader models benefit from compute/data scaling; instruction‑following and tool‑use methods (2019–2024) let general models adapt across tasks without per‑task re‑engineering (e.g., ReAct‑style tool use, self‑reflection/Reflexion, autonomous open‑world exploration such as Voyager/Auto‑GPT‑class agents).
FPF separates goals and constraints from procedures. We prefer Rule‑of‑Constraints (RoC) — explicit prohibitions, budgets, and safety envelopes — over Instruction‑of‑Procedure (IoP) — detailed step‑by‑step scripts. RoC keeps the design–run separation intact: designers declare what must not happen and what budgets apply; agents are free to choose how to act within those bounds at run‑time.
Implications for architecture (normative hooks inside FPF):
- Express behavior as goals, constraints, and budgets. Prefer RoC to IoP. When you must prescribe a procedure (regulatory/safety), document the exception in the Design‑Rationale Record and pair it with run‑time monitors (see Observability‑first templates).
- Autonomy budgets. For each agent/holon, declare allowed tools, call‑rates, cost/time ceilings, and risk thresholds. Enforce via policy/telemetry cells; record usage in the Comparability‑Gauge (CG) frame so that uplift/regret can be compared over runs.
- Agentic tool use. Orchestrate function calls via agentic planning/reflective loops instead of fixed pipelines: the agent can choose order, retry strategies, and escalation paths (cf. ReAct‑style tool use, self‑reflection, autonomous exploration in 2022–2024 SoTA). This keeps logic in prompts/policies, not in brittle DAGs.
- Compute and data elasticity. Keep bench/test packs versioned; enable periodic model refresh without rewriting logic (Chinchilla‑style scaling insight, 2022). Treat data > code when feasible; ensure refresh does not break parity/comparability by pinning to the CG‑frame.
- Feedback‑in‑the‑loop. Build preference/critique channels (human‑, AI‑, or environment‑in‑the‑loop), shadow modes, and safe A/B gating. Use these to continuously adjust prompts/policies rather than continuously fine‑tuning bespoke sub‑models.
- Safety first. Encode rules‑as‑prohibitions (a constitution‑based approach) and risk budgets as RoC; keep them small, explicit, and testable. Combine with design‑run separation to prevent prompt drift from violating safety envelopes.
A Rule‑of‑Constraints (RoC) is a compact, versioned policy bundle: (a) scope (holon/agent + tools), (b) budgets (cost/time/call‑rate), (c) prohibitions (red lines), (d) escalation (who/what to consult), (e) telemetry (metrics to log into the CG‑frame). RoC is enforced at run‑time but never prescribes the exact procedure.
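A sketch of such a bundle as a typed structure, following the (a)–(e) breakdown above. The field names and the enforcement helper are assumptions for illustration, not a normative schema.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class RuleOfConstraints:
    """One possible rendering of the five-part RoC bundle described above
    (field names are illustrative; the (a)-(e) structure is the document's)."""
    scope: Dict[str, List[str]]   # (a) holon/agent -> allowed tools
    budgets: Dict[str, float]     # (b) e.g. {"cost_usd": 50, "calls_per_min": 30}
    prohibitions: List[str]       # (c) red lines, stated as bans -- never procedures
    escalation: List[str]         # (d) who/what to consult when a bound is hit
    telemetry: List[str]          # (e) metrics logged into the CG-frame
    version: str = "v1"

    def permits(self, action: str, spend: Dict[str, float]) -> bool:
        """Run-time enforcement checks bounds only; it never prescribes the next step."""
        if action in self.prohibitions:
            return False
        return all(spend.get(k, 0.0) <= ceiling for k, ceiling in self.budgets.items())

roc = RuleOfConstraints(
    scope={"support-agent": ["search", "ticket_api"]},
    budgets={"cost_usd": 50.0, "calls_per_min": 30.0},
    prohibitions=["delete_customer_data"],
    escalation=["page-on-call-reviewer"],
    telemetry=["cost_usd", "calls_per_min", "escalations"],
)
print(roc.permits("search", {"cost_usd": 12.0, "calls_per_min": 4.0}))  # True
print(roc.permits("delete_customer_data", {}))                          # False
```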
Why not just add more rules? Because micro‑ontologies and brittle flow‑charts do not generalize. FPF uses rules to define boundaries and measurement frames while giving agents freedom to search within them using general models. The inner loop remains empirical: measure → reflect → adjust RoC/prompts → run.
Expected outcomes. Faster iteration (minutes‑to‑change via prompt/policy edits), resilience to model refresh, lower authoring cost, and higher autonomy at comparable risk thanks to budgets + telemetry + CG‑framed comparability.
The “big storylines” unique to FPF — its load‑bearing commitments:
- Holonic kernel with physical anchoring — everything that composes is a U.Holon; every change is enacted by an external transformer (A.1; A.12).
- Role–Method–Work split with time duality — prevents the endemic plan/reality conflation; only U.Work carries actuals (A.4; A.15.1–.2).
- Assurance as a first‑class calculus — evidence roles, decay, and weakest‑link composition make “trust” computable and auditable (B.3; A.10).
- Algebra of aggregation (Γ) with cross‑scale invariants — conservative composition that generalizes from pumps to proofs (B.1).
- Local meaning, global alignment — U.BoundedContext islands and explicit Bridges with congruence‑loss turn “it depends” into a Standard (A.1.1; F.9).
- Micro‑kernel + architheories — CAL/LOG/CHR plug‑ins extend capability without contaminating the core (A.5–A.6; Part C).
- Publication Standard & guard‑rails — Core ↔ Tooling ↔ Pedagogy split, notational independence, and Lexical Discipline prevent conceptual drift (E.5; E.10).
- Open‑ended evolution by design — evolve not only solutions but also problem frames; work not only on holons‑of‑interest but also across the diversity of their environments.
- Creativity with Novelty and Quality‑Diversity optimisation — DRR, evidence refresh, and explicit creative search (NQD + E/E‑LOG) keep the system alive without ossification (A.4; B.4; C.18; C.19; E.6; E.9; B.3.4).
What FPF is: a generative, testable architecture for open-ended evolutionary thinking that any domain can inhabit. What FPF is not: a repository of domain facts, a rule‑chaining engine, a methodology du jour, or a notation.
Modern complexity lives at the junction of silos. A climate model borrows genetics to track pathogens; a venture‑capital pitch cites thermodynamic “runway.” Yet each field guards its own mathematics, and translation costs soar. FPF answers this tension by treating transdisciplinarity as a meta‑theory of thinking itself — a language for designing reasoning, not another specialist dialect.
An FPF architheory is a theory about theories: the holonic Calculus abstracts part‑whole composition; Knowledge Dynamics captures changes in trust in knowledge about holons. These patterns act as generative scaffolds: a biologist modelling adaptation, an engineer designing resilience, and a strategist planning pivot options all reach for the same invariant trio — objective, feedback loop, trust metric. FPF names that trio explicitly (U.Objective, Canonical Evolution Loop, Unified Trust Model) and requires universality (Principle C‑1: at least three heterogeneous domains).
The synthesis is physical, not metaphoric. Constructive mereology (Kit Fine) and Constructor Theory (Deutsch & Marletto) insist that every whole arises through a material Transformer—a transformer of matter and information: a sensor grid that binds “crowd‑flow” to joules, a data pipeline tying employee action to market response. Part B formalises this anchor; without it, abstractions cannot cross scales.
Modern projects live at the junction of silos: software SREs speak of incidents and SLOs, manufacturing lines of acceptance and tolerances, scientists of evidence and replication. The same surface word often means different things across these local traditions, and unguarded reuse of labels silently corrupts designs, audits, and decisions. Part F provides a local‑first discipline for meaning that keeps senses inside a U.BoundedContext and requires any cross‑context reading to travel through an explicit Bridge with a declared congruence level (CL) and loss notes. In short: translate across contexts; never collapse them.
Part F is the framework’s publication surface for cross‑domain alignment. It turns harvested terms into SenseCells (context‑scoped senses), relates them via Bridges (with kind, direction, CL, loss), bundles aligned senses into Concept‑Sets, and publishes the result as a single, human‑readable Unified Term Sheet (UTS)—“one table that a careful mind can hold.” This sheet is how engineers, managers, and researchers talk precisely about the same things while preserving local rigor. Disciplines divide the world; the trans-disciplinary theories captured in FPF’s architheories remind us it is one conversation.
Part G turns “state‑of‑the‑art” from a moving target into a governed, selector‑ready portfolio. It does this by (i) fixing what may be compared and under which evidence minima; (ii) generating and harvesting SoTA alternatives across rival traditions; (iii) authoring lawful measurements and calculi; (iv) registering method families and selecting among them without semantic flattening; and (v) shipping edition‑aware packs with telemetry so that refresh is principled rather than ad‑hoc. In short: G formalises SoTA as an auditable, updatable object, not a leaderboard snapshot.
A thoughtful reader encountering concepts like Open-Ended Evolution, Minimally Viable Examples, or the Explore-Exploit trade-off within FPF might rightly observe: "These are not new ideas. They are foundational principles in fields from Agile development to strategic management." This observation is not only correct; it is central to understanding FPF's unique value.
FPF does not seek to invent the fundamental ingredients of rigorous thought. Its purpose is not to discover that evolution is effective or that empirical testing is valuable. Its mission is to provide a transdisciplinary architectural synthesis of these powerful, "obvious" ideas, transforming them from disconnected heuristics into a coherent, interoperable, and fully-governed "operating system for thought."
A useful analogy is the distinction between an individual cook following a recipe and a professional kitchen organized for the collective, high-quality production of diverse dishes in a dynamic environment:
- The fundamental concepts (MVP, evolution, exploration/exploitation) are like fundamental ingredients: flour, eggs, salt, heat. They are universal and essential.
- A domain-specific methodology (like Lean Startup or a specific scientific method) is like a cookbook: it provides excellent recipes for using those ingredients to create a specific dish, such as a software product or a research paper.
- The First Principles Framework (FPF) is the architecture of the kitchen itself—the system established by Auguste Escoffier as the brigade de cuisine.
Escoffier did not invent the ingredients, nor did he create every recipe. He designed a system with defined roles (Saucier, Pâtissier), standardized techniques (sauté, julienne), and a clear workflow that could reliably produce a vast range of complex dishes to a consistently high standard. The architecture of the kitchen, not any single recipe, is what enables culinary excellence at scale.
FPF provides this same architectural layer for the process of thinking. It operationalizes these "obvious" ideas by giving them a formal place and a normative function within a larger, cohesive system.
| Culinary Architecture | First Principles Framework (FPF) | The Value of the Architecture |
|---|---|---|
| Defined Roles (e.g., Pâtissier) | U.Role & U.RoleAssignment (A.2) | Separates concerns and assigns clear, context-dependent responsibilities to agents. |
| Standardized Techniques (e.g., sauté) | U.Method & U.MethodDescription (A.3) | Provides a universal, representation-agnostic way to describe how an action is performed, from a physical process to a line of reasoning. |
| Workflow & Composition (plating a dish) | Universal Algebra of Aggregation (Γ) (B.1) | Guarantees that components (whether physical parts or logical premises) can be composed into a coherent whole in a predictable and auditable way. |
| Trans-Culinary Applicability | Transdisciplinarity (C-1) | The same architecture that "cooks" a U.System can be used to "cook" a U.Episteme or a personal development strategy, because the underlying principles of composition, evolution, and assurance are universal. |
Therefore, when one author applies the concept of "exploration vs. exploitation" by drawing from business literature and another by referencing FPF, they may arrive at similar practical advice. The difference is that the FPF user is operating within an architecture where that single concept is already connected to a rich, formal network of other principles. Their decision is implicitly wired into a system of evidence anchoring, trust calculus, and open-ended evolution, making it more robust, auditable, and seamlessly composable with other rigorously-defined concepts.
In short: FPF does not claim ownership of the timeless ingredients of good thinking. It provides the timeless architecture that enables a world-class kitchen for collective thought.
This naturally leads to a crucial question: if a skilled practitioner, without formal knowledge of FPF, can produce a solution of comparable quality, where does the framework's value truly lie?
The answer lies at the threshold of complexity. For a well-defined problem solved by a single, expert agent, well-honed heuristics and tacit knowledge often suffice. The solutions proposed by such an expert and by FPF may indeed appear indistinguishable, much like a master chef's personal recipe for a single dish is impeccable without needing a formal kitchen architecture. FPF shines not in delivering a superior single-shot response, but in sustaining and evolving answers over time in a collective thinking environment, through its built-in cycles of reasoning and refinement, an auditable trace, and standardised knowledge hand-offs. While an initial pass through these cycles may yield comparable quality with or without FPF — drawing on common sense, ubiquitous knowledge and ad hoc intuition — the framework's true value emerges in the long term, where its evolvability, auditability, and mechanisms for managing epistemic debt ensure that solutions adapt, compound, and scale without fragmentation or decay.
FPF's utility begins to scale exponentially when the problem itself crosses a Pareto frontier of complexity, where the "general cultural knowledge" of even a brilliant individual becomes suboptimal. This frontier is defined not by mere computational difficulty, but by the emergence of several non-computational dimensions:
- Compositional Complexity: The need to integrate numerous, heterogeneous, and often conflicting components—be they physical parts, software modules, or logical premises—into a coherent and reliable whole.
- Collaborative Complexity: The need to align the mental models and coordinate the work of a diverse team, ensuring that a shared understanding is maintained without stifling individual contribution.
- Temporal Complexity: The need for a solution to live, adapt, and evolve over long periods, maintaining its conceptual integrity and remaining auditable for future generations of stakeholders.
- Assurance Complexity: The need to provide explicit, auditable, and often formal proof that a solution is safe, reliable, and fair, especially when the cost of failure is high.
- Generative Complexity: The need not to find a single correct answer, but to systematically explore a vast solution space, manage a portfolio of diverse options, and drive open-ended evolution.
An expert's intuition can find a single, excellent point on this multi-dimensional frontier. FPF provides the architectural discipline to navigate the entire frontier. It is the necessary scaffold for building solutions that are not only clever, but also composable, collaborative, evolvable, trustworthy, and perpetually creative at scale.
Complex problems fail more often from mis‑aligned competencies than from missing facts. Inside one brain—or one team—model builders, testers, and decision makers can behave like separate departments. The Intellect Stack offers a layered map of cognitive skills, showing how FPF’s architheories combine into an “operating system for thought.”
The stack is pedagogical, not prescriptive: you may enter at any layer, but mastery grows when the layers reinforce one another. Each rung names a domain‑agnostic capability (U.Capability) and points to the patterns that realise it.
Conceptually, the Intellect Stack is formalized as a non-normative Characterization (CHR) package. This package defines types such as U.IntellectLayer (e.g., Logician, Strategist) and U.Competency, which are then linked to the kernel's U.Capability via a hasCapability mapping. This ensures that while the stack remains a flexible teaching tool, its structure is coherent and formally grounded.
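One possible (entirely hypothetical) encoding of that CHR package, showing how U.IntellectLayer and U.Competency could link to the kernel's U.Capability via a hasCapability mapping; the Python rendering is an assumption for illustration only.

```python
from dataclasses import dataclass, field
from typing import Set

@dataclass(frozen=True)
class UCapability:
    name: str                      # kernel-level U.Capability

@dataclass
class UCompetency:
    name: str
    has_capability: Set[UCapability] = field(default_factory=set)  # the hasCapability mapping

@dataclass
class UIntellectLayer:
    name: str                      # e.g. "Logician", "Strategist"
    competencies: Set[str] = field(default_factory=set)

# The stack stays a flexible teaching tool, but its links to the kernel are explicit.
validate = UCompetency("model-validation", {UCapability("assess-evidence")})
logician = UIntellectLayer("Logician", {"model-validation", "formal-argument"})
```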
| Layer | Core question | Key patterns & exemplary domains |
|---|---|---|
| 1 · Structure & Reality | What exists and how is it bounded? | Kind-CAL for universal categories; Sys‑CAL for system boundaries. Physics (control volumes), Software (static types), Ecology (trophic levels). |
| 2 · Knowledge & Reasoning | Why should we trust this claim? | KD‑CAL (F‑G‑R characteristics), Arg‑LOG for formal argument. AI (model validation), Evidence‑based policy. |
| 3 · Action & Execution | How do we turn intent into change? | Agent‑CHR, Method‑CAL, Resrc‑CAL. Robotics (action plans), DevOps (pipelines), Urban planning (resource flows). |
| 4 · Strategy & Rationality | Which option wins under uncertainty? | Decsn‑CAL—U.Decision, causal models. Finance (risk fronts), Military wargaming. |
| 5 · Governance & Purpose | Why act at all; what is permissible? | Norm‑CAL—U.Objective, value conflicts. Bioethics, Sustainability metrics. |
Every layer remains physically grounded: an abstract method references a material Transformer (Pattern D.1) such as a laboratory rig or CI runner that proves the method can exist. Without that anchor, the skill is rhetoric, not capability.
The stack mirrors software’s architecture layer stacks. A.5 Open‑Ended Kernel & Architheory Layering lets new layers emerge via Design Rationale Records (E.9), keeping the map alive.
A full description of the Intellect Stack and its layers resides in the Pedagogical Companion.
“A stack without mastery is scaffolding; mastery without a stack is improvisation—FPF supplies the ladder that turns skills into intelligence.”
A framework that aims at everything excels at nothing. To keep Cognitive Elegance (P‑1) and Pragmatic Utility (P‑7) intact, FPF draws a deliberate line around what it serves—and what it refuses to be.
Purpose – an operating system for thought. FPF’s mission is to supply a generative scaffold that carries a raw idea—whether from a physicist, a product‑manager, or an AI agent—toward a reproducible, auditable impact on the physical world. It does so by offering:
- a micro‑kernel of first principles—postulates that are universal (SCR in ≥ 3 heterogeneous domains per C‑1), falsifiable, and non‑derivable inside the framework;
- architheories as meta‑theories of thinking, such as Systemic Calculus for composition and Knowledge Dynamics for epistemic evolution;
- patterns with Conformance Checklists that quantify objectives, trust, emergence, and evolution;
- Design Rationale Records (DRRs) that govern safe, auditable evolution of the Canon;
- a Constitution—the Eleven Pillars (E.2) plus the Guard‑Rails (E.5.*)—that constrains all normative content.
Scope – tool‑agnostic, normative patterns only. This Core Specification defines:
- Universal concepts (U.Type, U.Objective, U.Decision, …).
- Algebras of composition (aggregation, role‑projection, metasystem transition).
- Invariants of change—rules that safeguard cross‑scale consistency as systems evolve.
Everything here is free of implementation detail; verification lives in Tooling, guidance in Pedagogy. Physical grounding is mandatory: every abstraction must reference a material Transformer (Pattern D.1).
Explicit Non‑Goals – enforced by guard‑rails
| Non‑Goal | Rationale / Pattern link |
|---|---|
| Domain encyclopaedia | FPF hosts no physics constants or finance taxonomies; import such knowledge via Type & Role Calculus (D‑0). |
| Single mathematical dogma | Patterns are expressible in multiple formalisms; Notational Independence (E.5.2) forbids locking into OWL, JSON‑LD, or category theory. |
| Prescribed tool stack | Implementation choices belong to the Tooling Reference; the Core never cites CI pipelines or file formats (DevOps Lexical Firewall E.5.1). |
| Step‑by‑step tutorial | Pedagogical Companion carries worked examples and Intellect‑Stack exercises; the Core remains concise and normative. |
This boundary avoids the fate of “grand unifiers” that collapsed under their own encyclopaedic weight. FPF instead follows the lesson of Euclidean geometry and the TCP/IP suite: a small set of powerful, generative rules outlives any single domain fashion.
One‑screen purpose (manager‑first). This pattern gives newcomers a plain‑language starter kit for FPF’s generative engine so they can run a lawful problem‑solving / search loop on day one. It explains the few terms you must publish when you generate, select, and ship portfolios (not single “winners”), and points to the formal anchors you’ll use later. (OEE is a Pillar; NQD/E/E‑LOG are the engine parts.)
Builds on. E.2 (P‑10 Open‑Ended Evolution; P‑2 Didactic Primacy), A.5, C.17–C.19 · Coordinates with. E.7, E.8, E.10; F.17 (UTS); G.5, G.9–G.12 · Constrains. Any pattern/UTS row that describes a generator, selector, or portfolio.
Keywords & queries. novelty, quality‑diversity (NQD), explore/exploit (E/E‑LOG), portfolio (set), illumination map (gauge), parity run, comparability, ReferencePlane, CL^plane, ParetoOnly default
Engineer‑managers meeting FPF for the first time need a plain, on‑ramp vocabulary for the framework’s generative engine so they can run an informed problem‑solving/search loop on day one—before formal architheories. Without that, Part G and Part F read as assurance/alignment only, and teams default to single “best” options. This undercuts P‑10 Open‑Ended Evolution and weakens adoption.
In current practice:
- Single‑winner bias. Teams look for “the best” option and publish a leaderboard, suppressing coverage & diversity signals essential to search.
- Metric confusion. “Novelty” and “quality” are used informally; units/scales are omitted; ordinal values are averaged, breaking comparability.
- Hidden policies. Explore/exploit budgets and governor rules are implicit; results are irreproducible and refresh‑unsafe (no edition/policy pins).
- Tool lock‑in. Implementation terms (pipelines, file formats) leak into the Core, violating Guard‑Rails.
FPF needs a short, normative glossary that names the generative primitives in Plain register and ties each to its formal anchor—so portfolios, not single scores, become the default publication.
| Force | Tension |
|---|---|
| Readability vs Rigor | One‑liners for managers ↔ lawful definitions with editions and scale types. |
| Creativity vs Assurance | Open‑ended search (OEE/QD) ↔ conformance, parity, and publication discipline. |
| Comparability vs Locality | Shared N‑U‑C‑D terms ↔ context‑local CG‑frames and bridges with CL. |
| Tool‑agnostic Core | Conceptual publication in UTS ↔ engineering teams’ urge to cite specific tools. |
| Term | Plain definition (on‑ramp) | See |
|---|---|---|
| Novelty (N) | How unlike the known set in your declared CharacteristicSpace. Compute lawfully (declared DescriptorMapRef + DistanceDefRef; no ad‑hoc normalisation; a toy sketch follows this table). | C.17, C.18 |
| Use‑Value (U / ValueGain) | What it helps you achieve now under your CG‑Frame; tie to acceptance/tests; publish units, scale kind, polarity, ReferencePlane. | C.17, C.18 |
| Constraint‑Fit (C) | Satisfies must‑constraints (Resource/Risk/Ethics); legality via CG‑Spec; unknowns propagate (never coerce to zero). | C.18, G.4 |
| Diversity_P (portfolio) | Adds a new niche to the portfolio; measured against the active archive/grid, not a single list; declare ReferencePlane for each head. | C.17, C.18 |
| E/E‑LOG | Named, versioned explore↔exploit policy; governs when to widen space vs refine candidates; policy‑id is published. | C.19 |
| ReferencePlane | Where a value lives: world (system), concept (definition), episteme (about a claim). Plane‑crossings add CL^plane (penalties to R only); cite policy‑id. | F.9, G.6 |
| Scale Variables (S) | The monotone knobs along which improvement is expected (e.g., parameterisation breadth, data exposure, iteration budget, resolution). Declare S for any generator/selector claimed to scale. | C.18.1 |
| Scale Elasticity (χ) | Qualitative class of improvement when moving along S (e.g., rising, knee, flat in the declared window). Used as a selection lens; numeric laws live in domain contexts. | C.18.1 |
| BLP (Bitter‑Lesson Preference) | Default policy that prefers general, scale‑amenable methods over domain‑specific heuristics, unless forbidden by deontics or overturned by a scale‑probe. | C.19.1, C.24 |
| Iso‑Scale Parity | Fair comparison across candidates at equalised scale budgets along S; may also include scale‑probes (two points) to test elasticity. | G.9, C.18.1 |
(Registers & forbidden forms per LEX‑BUNDLE; avoid “axis/dimension/validity/process” for measurement and scope.)
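As a sketch of “computing novelty lawfully”, the following assumes a declared two-dimensional descriptor space and a plain Euclidean distance standing in for a pinned DescriptorMapRef / DistanceDefRef; k-nearest averaging is one common choice here, not a mandated one.

```python
import math
from typing import Dict, List

def novelty(candidate: Dict[str, float], known_set: List[Dict[str, float]], k: int = 3) -> float:
    """Toy lawful novelty: mean distance to the k nearest members of the known
    set, inside one declared descriptor space (names are stand-ins)."""
    dims = sorted(candidate)                       # the declared CharacteristicSpace
    def dist(a, b):
        return math.sqrt(sum((a[d] - b[d]) ** 2 for d in dims))
    nearest = sorted(dist(candidate, known) for known in known_set)[:k]
    return sum(nearest) / len(nearest)

known = [{"step_freq": 1.0, "stability": 0.80},
         {"step_freq": 1.2, "stability": 0.70},
         {"step_freq": 0.9, "stability": 0.85}]
print(novelty({"step_freq": 2.5, "stability": 0.4}, known, k=2))  # ~1.44
```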
- UTS surface (Part F). When a UTS row describes a generator, selector, or portfolio, it MUST surface N, U, C, Diversity_P, E/E‑LOG policy‑id, ReferencePlane, with units/scale/polarity typed under MM‑CHR / CG‑Spec, and lawful references to DescriptorMapRef / DistanceDefRef. (Row schema: F.17; shipping via G.10.)
- Parity & edition pins (Part G). When QD/OEE is in scope, pin DescriptorMapRef.edition and DistanceDefRef.edition (and, where applicable, CharacteristicSpaceRef.edition, TransferRulesRef.edition) and record policy‑id + PathSliceId. Treat illumination/coverage as gauges; publish an Illumination Map where G‑kit mandates parity artefacts. Declare S (Scale Variables) and run at least one scale‑probe (two points along S) when claiming scale‑amenability. Dominance policy defaults to ParetoOnly; including illumination in dominance MUST cite a CAL policy‑id.
- Tell‑Show‑Show (E.7/E.8). Any [A] pattern that claims generative behaviour MUST embed both a U.System and a U.Episteme illustration using this glossary (manager‑first didactics).
- Declare CG‑Frame (what “quality” means; lawful units/scales) and ReferencePlane.
- Pick 2–4 Q components + a simple DescriptorMap (≥2 dims) for N/D; publish editions.
- Choose an E/E‑LOG policy (explore↔exploit budget); record policy‑id.
- Call the selector under G.5 with parity pins; return a set (Pareto/Archive), not a single score.
- Publish to UTS + PathIds/PathSliceId; the Illumination Map is a gauge by default. (A minimal end‑to‑end sketch follows this list.)
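A minimal end-to-end sketch of the five steps above, with every identifier hypothetical; the real selector, parity pins, and UTS publication live in C.17–C.19, G.5, and F.17.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Option:
    name: str
    scores: Dict[str, float]        # lawful Q components under the declared CG-Frame
    descriptor: Dict[str, float]    # position in the declared DescriptorMap (>= 2 dims)

def pareto_set(options: List[Option], heads: List[str]) -> List[Option]:
    """Selector sketch: return the non-dominated set, never a single score."""
    def dominated(a: Option) -> bool:
        return any(
            all(o.scores[h] >= a.scores[h] for h in heads)
            and any(o.scores[h] > a.scores[h] for h in heads)
            for o in options if o is not a
        )
    return [o for o in options if not dominated(o)]

run = {
    "cg_frame": "latency-headroom-v3",      # step 1: what "quality" means here
    "descriptor_map_edition": "dm-2025.1",  # step 2: pinned editions
    "ee_policy_id": "explore-60/40-v2",     # step 3: named explore/exploit policy
}
options = [
    Option("cache-expansion", {"U": 0.8, "N": 0.2}, {"locality": 0.9, "cost": 0.3}),
    Option("query-shaping",   {"U": 0.6, "N": 0.7}, {"locality": 0.2, "cost": 0.1}),
    Option("schema-denorm",   {"U": 0.5, "N": 0.4}, {"locality": 0.4, "cost": 0.8}),
]
portfolio = pareto_set(options, heads=["U", "N"])         # step 4: a set, not a winner
print(run["ee_policy_id"], [o.name for o in portfolio])   # step 5: publish with policy-id
```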
Informative; manager‑first (E.7/E.8 Tell‑Show‑Show).
Show‑A · SRE capacity plan (selector returns a set).
Frame. We must raise service headroom for Q4 without breaking latency SLOs.
Portfolio. {cache‑expansion, read‑replicas, query‑shaping, circuit‑breaker tuning, schema‑denorm}.
Glossary in action. U = latency@p95 & error‑rate, C = budget ≤ $X, risk ≤ R, N = dissimilarity to current playbook, Diversity_P = adds a previously empty niche in our archive (e.g., “shifts load to edge”). E/E‑LOG starts Explore‑heavy, flips Exploit‑heavy once ≥ K distinct niches are lit. (Publish UTS row + parity pins; illumination stays a gauge.)
Show‑B · Policy search with QD archive (MAP‑Elites‑class).
Frame. Robotics team explores gaits that trade stability vs energy use.
Glossary in action. CharacteristicSpace = {step‑frequency, lateral‑stability}, ArchiveConfig = CVT grid, N from descriptor distance, U = task reward, Diversity_P = coverage gain; PortfolioMode=Archive. Families include MAP‑Elites (2015), CMA‑ME/MAE (2020–), Differentiable QD/MEGA (2022–), QDax (2024); publish editions and policy‑ids; treat illumination as a gauge.
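For orientation only, a toy MAP‑Elites‑class loop (an illustrative sketch under simplified assumptions, not the published algorithms or the QDax API): one elite per CharacteristicSpace cell, replacement only within a niche, coverage reported as a gauge.

```python
import random

def descriptor_to_cell(desc, bins=10):
    """Map a 2-D descriptor in [0,1]^2 (e.g. step-frequency, lateral-stability) to a grid cell."""
    return tuple(min(int(d * bins), bins - 1) for d in desc)

def map_elites(evaluate, mutate, seed, iterations=1000):
    """evaluate(x) -> (descriptor, utility); the archive keeps one elite per cell."""
    archive = {}
    x = seed
    for _ in range(iterations):
        desc, u = evaluate(x)
        cell = descriptor_to_cell(desc)
        if cell not in archive or u > archive[cell][1]:
            archive[cell] = (x, u)   # an elite is only ever replaced within its own niche
        x = mutate(random.choice(list(archive.values()))[0])
    return archive                    # PortfolioMode=Archive: a set of elites, not one winner

# Toy usage: 2-D genome, descriptor = the genome itself, utility peaks at (0.5, 0.5).
ev = lambda g: (g, -abs(g[0] - 0.5) - abs(g[1] - 0.5))
mut = lambda g: [min(1.0, max(0.0, v + random.gauss(0, 0.1))) for v in g]
elites = map_elites(ev, mut, seed=[0.1, 0.9])
print(f"illumination (coverage): {len(elites)}/100 cells lit")  # a gauge, not a score
```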
(Optional) Show‑C · OEE parity (POET/Enhanced‑POET).
Co‑evolve {environment, method} portfolios; publish coverage/regret as gauges; pin TransferRulesRef.edition; return sets, not a single winner.
Show‑Epi · Evidence synthesis (U.Episteme).
Frame. A living review compares rival causal identification methods (e.g., IV vs. DiD vs. RCT‑adjacent surrogates) across policy domains.
Glossary in action. U = external‑validity gain @ F/G‑declared lanes, C = ethics & data‑licence constraints, N = dissimilarity in **ClaimGraph** transformations, Diversity_P = coverage of identification niches in the archive. ReferencePlane = episteme. Illumination/coverage stays a gauge; selection returns a portfolio of methods per niche. (Publish UTS rows; cite Bridges + CL for cross‑domain reuse; edition‑pin Descriptor/Distance defs where QD applies.)
Scope. Trans‑disciplinary; the glossary applies to both System and Episteme work.
Known risks & mitigations. Over‑aggregation: forbid mixed‑scale sums; use CG‑Frame and MM‑CHR. Terminology drift: enforce LEX‑BUNDLE registers; ban tool jargon in Core. Optimization monoculture: require portfolio publication where the G‑kit mandates parity; illumination stays a gauge unless CAL authorises otherwise.
| ID | Requirement | Purpose |
|---|---|---|
| CC‑A0‑1 | If a pattern/UTS row describes a generator, selector, or portfolio, it MUST surface N, U, C, Diversity_P, ReferencePlane, and E/E‑LOG policy‑id; units/scale/polarity MUST be declared. | Makes generative claims comparable and auditable (UTS as publication surface). |
| CC‑A0‑2 | When QD/OEE is in scope, pin editions: DescriptorMapRef.edition, DistanceDefRef.edition (and, where applicable, CharacteristicSpaceRef.edition, TransferRulesRef.edition); log PathSliceId and policy‑ids. | Enables lawful parity/refresh; edition‑aware telemetry. |
| CC‑A0‑3 | No mixed‑scale roll‑ups; ordinal data SHALL NOT be averaged; any roll‑up MUST live under a declared CG‑frame. | Prevents illegal scoring; keeps comparisons lawful. |
| CC‑A0‑4 | Where the G‑kit requires parity, publish an Illumination Map (coverage per niche); single‑number leaderboards are non‑conformant on the Core surface when a ParityReport is required. | Portfolio‑first publication; avoids single‑winner bias. |
| CC‑A0‑5 | Keep illumination/coverage as gauges; dominance policy defaults to ParetoOnly; any change is CAL‑authorised and cited by policy‑id. | Separates fit from exploration; preserves auditability. |
| CC‑A0‑6 | Apply E.7/E.8: include a U.System and a U.Episteme illustration when claiming generative behaviour; obey E.10 register hygiene; use the exact subsection title “Archetypal Grounding.” | Locks didactic primacy; prevents jargon drift. |
| CC‑A0‑7 | ReferencePlane declared for every N/U/C/Diversity_P head and CL^plane penalties route to R only; Φ_plane policy‑id published when planes differ. | Prevents plane/stance category errors; aligns with Bridge/ATS guards. |
| CC‑A0‑8 | Diversity_P ≠ Illumination. Diversity_P may enter dominance; Illumination remains a gauge unless explicitly promoted by CAL policy‑id. | Matches QD triad semantics and parity defaults. |
| CC‑A0‑9 | If a generator/selector is claimed scale‑amenable, declare S (Scale Variables) and an E/E‑LOG scale policy‑id; otherwise mark S = N/A. | Makes scale assumptions explicit and comparable across contexts. |
| CC‑A0‑10 | For scale‑amenable claims, execute a scale‑probe (≥ 2 points along S) and report a Scale Elasticity class (rising/knee/flat) in the UTS row. | Forces early strategy‑relevant evidence without over‑specifying numerics. |
| CC‑A0‑11 | Apply Iso‑Scale Parity in parity runs when S is declared; where infeasible, state the loss notes and treat results as non‑parity with an explicit penalty in R. | Keeps comparisons fair and auditable under scale constraints. |
| CC‑A0‑12 | BLP default. If a domain‑specific heuristic is selected over a general, scale‑amenable method, record a BLP‑waiver reason: deontic, scale‑probe overturn, or context‑specific. | Prevents silent violations of the Bitter Lesson; improves selector transparency. |
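CC‑A0‑9..11 are cheap to operationalise. A hedged sketch of the CC‑A0‑10 scale‑probe classifier (the flatness threshold is a placeholder; numeric laws live in domain contexts, so only the qualitative class is reported in the UTS row):

```python
def elasticity_class(points, flat_eps=0.02):
    """points: [(s, quality)] sampled along a declared Scale Variable S (>= 2 points).
    Returns a qualitative Scale Elasticity class: 'rising', 'knee', or 'flat'."""
    if len(points) < 2:
        raise ValueError("a scale-probe needs at least two points along S")
    pts = sorted(points)
    gains = [(q2 - q1) / (s2 - s1) for (s1, q1), (s2, q2) in zip(pts, pts[1:])]
    if all(abs(g) <= flat_eps for g in gains):
        return "flat"                     # scaling buys nothing in this window
    if len(gains) >= 2 and gains[0] > flat_eps >= gains[-1]:
        return "knee"                     # early gains, then saturation
    return "rising"

print(elasticity_class([(1, 0.60), (4, 0.74)]))               # rising
print(elasticity_class([(1, 0.60), (4, 0.74), (16, 0.745)]))  # knee
```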
Benefits. • Immediate usability for engineer‑managers (plain one‑liners) with formal anchors for auditors. • Portfolio‑first culture (sets & illumination) instead of brittle leaderboards. • Edition‑aware comparability; parity/refresh is routine, not ad‑hoc.
Trade‑offs & mitigations. • Slightly longer UTS rows → mitigated by consistent schema and copy‑paste snippets. • Requires discipline on units/scales → mitigated by CG‑frame templates.
This pattern instantiates P‑10 Open‑Ended Evolution by making generation‑selection‑publication operational at the on‑ramp: readers get just enough shared vocabulary to run search as standard practice. It aligns with Didactic Primacy (P‑2) and LEX‑BUNDLE (E.10) by keeping definitions plain‑first and scale‑lawful, and with Plug‑in Layering (P‑5) by pointing to C.17–C.19 for formal anchors without tool lock‑in. The post‑2015 line (MAP‑Elites → CMA‑ME/MAE → Differentiable QD/MEGA → QDax; POET/Enhanced‑POET/Darwinian Goedel Machine) normalised quality‑diversity and open‑endedness as first‑class search objectives; this glossary surfaces those ideas as publication standards, not tool recipes.
Builds on. E.2 Pillars (P‑10, P‑2, P‑6), A.5 (Open‑Ended Kernel), B.5/B.5.2.1 (Abductive loops + NQD binding), C.17–C.19 (Creativity‑CHR, NQD‑CAL, E/E‑LOG).
Coordinates with. E.7/E.8 (Archetypal Grounding; Authoring template), E.10 (LEX‑BUNDLE), F.17 (UTS), G.5/G.9–G.12 (set‑returning selectors, iso‑scale parity, shipping & refresh). Constrains. Any generator/selector/portfolio publication on the Core surface: N‑U‑C‑Diversity_P + policy‑ids; S/Scale‑probe where applicable; parity pins; lawful scales; portfolio‑first where mandated. (Ties into UTS rows and parity artefacts.) Editor’s cross‑reference. For agentic orchestration of scalable tool‑calls under BLP/SLL, see C.24 (Agent‑Tools‑CAL).
This pattern is an on‑ramp: it does not replace C.17–C.19. It binds Plain definitions to publication/telemetry expectations so newcomers can use NQD/E/E‑LOG immediately while experts follow the formal trails.
“Name the thing without smuggling in its parts.”
The first epistemic act in any discipline is to point: “that thing, not the background.” Physics calls the pointed object a system, biology an organism, information science an artifact, philosophy an entity. Reusing any one of these across domains drags hidden assumptions and yields nonsense like “What is the mass of a system of equations?” or “Where is the network interface of a moral theory?” FPF therefore starts from a minimal, domain‑agnostic root that makes such category errors impossible by construction and gives engineers and managers a clean, uniform handle for composition, boundaries and interfaces.
If FPF treated system as the universal root, two recurrent failure modes would appear:
- Category Error — physical affordances get projected onto abstract artifacts (ports on theories; kilogram‑mass of paradigms).
- Mereological Over‑reach — part–whole calculus is applied to genuinely atomic entities (prime numbers, elementary charges), producing meaningless “sub‑parts.”
A robust kernel must separate identity from structure: first say what can be singled out, then say what has parts.
| Force | Tension |
|---|---|
| Universality vs Intuition | Precision of a new root term (Holon) ↔ practitioner expectation of familiar words (System, Theory). |
| Purity vs Pragmatism | Clean formalism ↔ immediate usability for engineers, scientists, managers. |
| Structure vs Identity | Need to talk about atoms with zero parts ↔ need full mereology for composites. |
FPF adopts a three‑tier root ontology refining Koestler’s “holon,” with crisp boundaries and safe composition. The presentation follows the [A]‑template and style mandates (Context → Problem → Forces → Solution → …; mandatory Archetypal Grounding).
4.1 U.Entity — Primitive of Distinction
Anything that can be individuated and referenced. No structural assumptions. Use when you must name “a something” without committing to having parts.
4.2 U.Holon — Unit of Composition
A U.Entity that is simultaneously (a) a whole composed of parts and (b) a part within a larger whole. Formally, U.Holon ⊑ U.Entity.
Operational requirements:
- A holon has exactly one U.Boundary that separates it from its environment.
- The universal aggregation operator Γ is only defined on sets of U.Holon (never on bare U.Entity).

These constraints make composition rules uniform across domains and prevent Γ from being misapplied (a minimal typing sketch follows).
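A minimal typing sketch of this discipline in Python (illustrative names; not the formal Γ of B.1): aggregation accepts holons only, so bare entities are rejected by construction.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Entity:            # primitive of distinction: identity only, no structure
    name: str

@dataclass
class Holon(Entity):     # unit of composition: one boundary, possibly parts
    boundary: str = "open"                         # open | closed | permeable
    parts: List["Holon"] = field(default_factory=list)

def gamma(holons, name, boundary="open"):
    """Universal aggregation: defined on sets of Holon only, never on bare Entity."""
    for h in holons:
        if not isinstance(h, Holon):
            raise TypeError(f"{h.name} is a bare Entity; it has no parts to compose")
    return Holon(name=name, boundary=boundary, parts=list(holons))

pump = gamma([Holon("motor"), Holon("impeller"), Holon("seals")],
             name="Pump#37", boundary="closed")
print(len(pump.parts))                              # 3
try:
    gamma([Entity("prime number 7")], name="nonsense")
except TypeError as err:
    print(err)                                      # category error rejected by type
```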
4.3 Interface primitives: U.Boundary & U.Interaction
Every holon is defined by how it is separated and what crosses the separation.
- U.Boundary — physical or conceptual surface delimiting the holon’s scope.
- U.Interaction — any flow of matter, energy, or information that crosses a boundary.

Canonical boundary kinds (with twin archetypes):
| Kind | Permitted exchanges | U.System archetype | U.Episteme archetype |
|---|---|---|---|
| Open | Matter, energy, information | Microservice exposing a public API | Public wiki editable by anyone |
| Closed | Energy, information (no matter) | Sealed cooling loop in a server | Version‑locked theory accepting new evidence but fixed axioms |
| Permeable | User‑filtered subset | Cell membrane regulating ions | Legal code allowing specific amendment classes only |
This pair (Boundary, Interaction) makes interfaces explicit, reviewable, and testable across domains.
4.4 Inside/Outside decision procedure
To decide whether an entity E is inside a holon H, apply:
- Dependency test: removing E breaks a core invariant of H.
- Interaction test: E participates in causal loops wholly within H’s boundary.
- Emergence test: E contributes to a novel collective property warranting H as a single unit.

Fail all three → E is outside. This practical triage prevents “scope creep” and forces explicit modeling of environment vs interior (a minimal sketch follows).
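Because the triage is a disjunction over three recorded judgements, it can be sketched directly (the boolean inputs are stand‑ins for the justifications CC‑A1.4 requires):

```python
def inside(dependency: bool, interaction: bool, emergence: bool) -> bool:
    """A.1 Inside/Outside triage: 'fail all three -> outside', so any pass -> inside.
    dependency  - removing E breaks a core invariant of H
    interaction - E participates in causal loops wholly within H's boundary
    emergence   - E contributes to a collective property warranting H as one unit
    """
    return dependency or interaction or emergence

# Pump #37: the impeller passes all three; the factory wall socket passes none.
print(inside(dependency=True,  interaction=True,  emergence=True))   # True  -> inside
print(inside(dependency=False, interaction=False, emergence=False))  # False -> outside
```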
4.5 Archetypal sub‑holons
FPF fixes two archetypal specializations to ground cross‑domain universality:
| Subtype | Essence | Home architheory |
|---|---|---|
| U.System ⊑ U.Holon | Physical, operational holon obeying conservation laws. | Sys‑CAL |
| U.Episteme ⊑ U.Holon | Knowledge holon (axioms, evidence, argument graph). | KD‑CAL |
Naming guideline: keep “System” and “Episteme” for practitioner comfort; reserve Holon for meta‑level discourse and formal signatures.
| Holonic slot | U.System — Water‑pump | U.Episteme — Scientific theory |
|---|---|---|
| Identity | Pump #37 stamped on the name‑plate | “Newtonian Gravitation”, 1726 edition |
| Boundary | Cast‑iron casing; inlet/outlet flanges | Axiomatic scope and vocabulary |
| Parts | Motor, impeller, seals, housing | Axioms, definitions, theorems, datasets |
| Whole | Operable assembly that moves fluid | Coherent body of knowledge predicting phenomena |
Showing the same structural slots filled by a machine and a theory demonstrates the substrate‑independent universality of U.Holon. This is the didactic “Tell–Show–Show” anchor required by the Style‑Guide for [A]‑patterns.
- “Ports on a theory.” Treating a proof corpus as if it had physical connectors. Fix: model U.Interaction only across boundaries; for epistemes, interactions are symbolic flows via carriers and citations (see A.10), not power or mass.
- “Document edited itself.” Assigning actions to an episteme. Fix: actions are executed by a system bearing a role (A.12/A.15); epistemes are transformed via external transformers acting on their symbol carriers.
- “Parts everywhere.” Forcing a part–whole calculus onto atomic entities (e.g., prime numbers). Fix: if no meaningful parts exist, stay at U.Entity; apply Γ only to U.Holon.
- “Scope ≡ section.” Using “scope” as a text region rather than a modeled boundary. Fix: define a U.Boundary and state what crosses it (U.Interaction).
When in doubt: first decide what is a holon, then state its boundary, then list what crosses. Roles and methods come after (see A.2 and A.15).
| ID | Requirement | Purpose / Notes |
|---|---|---|
| CC‑A1.1 | Any modelled object that exhibits a part–whole structure MUST be typed as U.Holon or its subtype. | Prevents applying Γ to atomic entities; makes aggregation well‑typed. |
| CC‑A1.2 | Each U.Holon MUST reference exactly one U.Boundary and declare its boundary kind (open / closed / permeable). | Enables boundary inheritance and environmental Standards; aligns with the canonical boundary kinds introduced in A.1. |
| CC‑A1.3 | Domain architheories MUST explicitly subtype their root concept (U.System, U.Episteme, …) from U.Holon. | Ensures cross‑domain compatibility of aggregation and emergence patterns. |
| CC‑A1.4 | Inside/Outside decisions for any candidate part SHALL be justified by the three‑step test (Dependency → Interaction → Emergence) and recorded with the boundary reference. | Makes holon membership auditable and repeatable; uses A.1’s decision procedure. |
| CC‑A1.5 | Behavioural roles (including TransformerRole) SHALL attach only to U.System (the bearer), not to U.Holon in general and not to U.Episteme. | Preserves Strict Distinction and prevents category errors; episteme roles are classificatory only. |
| CC‑A1.6 | Do not model acting groups as sets. If a grouping is expected to act, it SHALL be modelled as a collective system (with boundary, role, Method/Work). | Distinguishes MemberOf (collection) from mereology; prepares for A.14 Portions/Phases. |
| CC‑A1.7 | The universal aggregation operator Γ SHALL be applied only to sets of U.Holon within a single declared temporal scope (design or run) and context. | Prevents “chimera” graphs; routes order/time to Γ_ctx / Γ_time (B.1.4). |
| CC‑A1.8 | Prose and diagrams SHALL follow the naming guideline: use Holon for meta‑level discourse; prefer System / Episteme in practitioner‑level statements. | Reduces jargon friction; keeps signatures precise and text readable. |
Audit tip. CC‑A1.5 is frequently violated when authors write “holon bearing TransformerRole”. Rewrite to “system bearing TransformerRole” or provide the explicit U.RoleAssignment. See A.2/A.15 for role mechanics.
| Benefits | Trade‑offs / Mitigations |
|---|---|
| Eliminates category errors across physical and abstract realms by cleanly separating identity (Entity), structure (Holon), and behaviour (Role/Method/Work). | Introduces the unfamiliar term Holon; mitigated by Tell‑Show‑Show pedagogy and dual archetypal examples (System/Episteme). |
| Unifies aggregation: a single algebra Γ composes pumps, proofs, genomes, and teams under one roof. | Requires refactoring legacy “System‑only” language; addressed by A.2/A.3 role calculus and the Γ‑family in B.1. |
| Predictable extension point: CAL/LOG/CHR architheories add constraints without touching the core types. | Imposes discipline on boundary declarations; mitigated by boundary kinds and the Inside/Outside test. |
The separation Entity → Holon → {System, Episteme} is not only ontologically clean; it is empirically validated across domains since 2015:
- Compositional open systems. Category‑theoretic treatments show that boundaried components compose safely (decorated cospans, open systems). This mirrors Γ’s reliance on declared boundaries. (Fong & Spivak, 2019; Baez & Courser, 2017)
- Microservices & bounded contexts. Modern software architecture stresses strong service boundaries and local reasoning as the route to evolvability—our U.Boundary and Inside/Outside test encode the same discipline. (Newman, 2021; Vernon, 2022)
- FAIR & provenance. Data/knowledge communities require explicit distinction between content and carrier, and auditable provenance—precisely the System/Episteme + SCR split used in A.1/A.10. (Wilkinson et al., 2016; Boeckhout et al., 2018)
- Digital Twin / Thread. Engineering practice since late‑2010s emphasises the run↔design seam and boundary‑consistent aggregation of subsystems—formalised in our Γ‑family and boundary inheritance rules. (Grieves & Vickers, 2017; NIST DT/Thread reports 2019‑2021)
- Layered control of CPS. Standard‑based, multi‑rate architectures justify explicit holon boundaries and scale transitions—feeding directly into B.2 Meta‑Holon Transition. (Matni et al., 2024)
These streams converge on one point: make boundaries and composition first‑class and separate what a thing is from what it is doing here‑and‑now—the heart of A.1/A.2.
- Builds / Grounds:
  - A.2 Role Taxonomy — A.1 provides the substantial characteristic (Holon), A.2 introduces the functional characteristic (Role and U.RoleAssignment). Together they prevent role/type explosion and keep agency contextual.
  - A.7 Strict Distinction (Clarity Lattice) — A.1 supplies the slots (Entity/Holon/System/Episteme); A.7 guards their separation in prose and models, stopping Object ≠ Description ≠ Carrier conflations.
  - A.14 Advanced Mereology: Portions & Phases — A.1’s holon substrate is the target of A.14’s edge discipline (ComponentOf, ConstituentOf, PortionOf, PhaseOf); only mereological subtypes build holarchies.
- Interacts with the Γ‑family (B‑cluster):
- B.1 Universal Algebra of Aggregation — Γ is defined on holons and respects CC‑A1.*; Γ_ctx/Γ_time carry order and temporal composition, Γ_work handles resource ledgers.
- B.2 Meta‑Holon Transition (MHT) — uses A.1’s boundary and Inside/Outside rules to decide when aggregation yields a new whole with novel properties.
- B.3 Trust & Assurance Calculus — evidence attaches to carriers (SCR/RSCR) of epistemes; assurance levels depend on A.1/A.10 alignment.
- B.4 Canonical Evolution Loop — operationalises the design↔run seam at holon boundaries; observation itself is an external transformation across a boundary.
- Specialised by architheories: U.System (Sys‑CAL) and U.Episteme (KD‑CAL) are archetypal sub‑holons that supply domain‑specific invariants while inheriting A.1’s boundary and aggregation duties.
Without the holon, parts drift; without the role, purpose evaporates. (Carry this epigraph with A.1 to cue the A.2 hand‑off.)
- Term: U.BoundedContext
- Definition: A U.BoundedContext is a U.Holon that serves as an explicit semantic frame of reference. It defines a boundary within which a specific set of models, roles, rules, and language is self-consistent and authoritative. It is FPF's formal mechanism for localizing meaning and managing complexity by partitioning a larger conceptual space into smaller, coherent, and independently governable domains.
To prevent a common category error, Domain and U.BoundedContext are not synonyms in FPF.
| Characteristic | Domain (e.g., Physics, Law, Automotive Engineering) | U.BoundedContext (e.g., AirTrafficControl_2025, Theory:QuantumMechanics) |
|---|---|---|
| Nature | An external field of practice/knowledge that exists independently of any model. | An internal FPF holon: a named semantic “Context” with its own vocabulary and rules about a slice of a domain. |
| Role in FPF | Provides grounding for applicability and cross‑domain universality of the kernel. | Provides local meaning and separation: a semantic firewall where words and rules are coherent and unambiguous. |
| Relationship | One domain can have many valid perspectives. | A bounded context hosts one such perspective (Glossary, Invariants, local Role taxonomy, Bridges). |
Normative anchor (didactic form):
The context field in any U.RoleAssignment MUST reference a U.BoundedContext, never a broad domain label.
Think “specific room” (e.g., Hospital.OR_2025), not “the whole building” (e.g., “Healthcare”).
Manager’s one‑liner: A Domain is the territory; a Bounded Context is a purpose‑made map of that territory.
The concept of a U.BoundedContext, inspired by Domain-Driven Design (DDD) but elevated to a universal first principle, is essential for several reasons:
- To Manage Complexity: Real-world systems are too complex to be described by a single, monolithic model. A U.BoundedContext allows us to break down this complexity into manageable parts. Inside a context (e.g., "Air Traffic Control"), the term "flight" has a precise, unambiguous meaning. Outside, it could mean many things.
- To Enable Pluralism: Different teams, disciplines, or even historical eras have different models of reality. A U.BoundedContext allows these different worldviews to coexist without contradiction. As demonstrated in A.2, Pluto can be a PlanetRole in the context of Early20thCenturyAstronomy and a DwarfPlanetRole in the context of IAU_2006_Definition. The context is what resolves the apparent paradox.
- To Make Roles Meaningful: As established in A.2.1 U.RoleAssignment, a U.Role is only meaningful inside a context. The role of "Lead Engineer" is defined by the rules and expectations of "Project Phoenix," not by some universal law of engineering. The context provides the "stage" upon which the role is performed.
In short, a U.BoundedContext is not just a "scope" or a "namespace." It is a holon of meaning, a self-contained universe of discourse with its own local truth.
A U.BoundedContext is a composite holon defined by the following key components, which collectively constitute its model:
- Glossary (Local Lexicon): A set of U.Lexeme entries (from Lang-CHR) that defines the specific vocabulary used within this context. It maps local terms to canonical FPF U.Types and clarifies any domain-specific jargon. This is where a context declares, "Inside here, the word 'ticket' means a U.WorkItem, not a U.TravelPermit."
- Invariants (Local Rules): A set of machine- and human-readable rules (U.ConstraintRule from Norm-CAL) that must hold for any artifact or process within this context. This is the most powerful component, as it defines the "local physics" of the context.
  - Example Invariant (Role Compatibility): "Within this context, a holder cannot simultaneously play the AuditorRole and the DeveloperRole."
  - Example Invariant (State Transition): "A U.WorkItem in this context can only transition from 'In Progress' to 'In Review', never directly to 'Done'."
- Roles (Local Taxonomy): A partial order of U.Roles that are defined and valid only within this context. It specifies the "job titles" available on this "stage."
- Bridges (Optional Mappings): A set of explicit mappings (U.Alignment) to other U.BoundedContexts. A bridge defines how concepts and terms are translated when information flows from one context to another. This is the formal mechanism for inter-context communication.
  - Example Bridge: A mapping that states, "The UserStory concept in the AgileDevelopment context is functionally congruent (CL=1) to the Requirement concept in the FormalEngineering context."

- As a U.Holon: A U.BoundedContext is itself a holon. It has a boundary (the semantic line between what's inside and outside its model), parts (its Glossary, Invariants, etc.), and can be part of a larger whole (a context can be nested within another, more general context).
- As the Anchor for U.RoleAssignment: The context field of a U.RoleAssignment MUST reference a valid U.BoundedContext. This ensures that every role assignment is explicitly anchored to a well-defined semantic frame.
- As the Scope for U.Objective: A U.Objective (from Norm-CAL) is often defined relative to a context. The goal of "maximize velocity" is meaningful within the context of an AgileSprint, but might be meaningless or even counterproductive in a ResearchDiscovery context.
- As a Target for U.Transformer: A U.BoundedContext can be changed. The evolution of a team's process, a scientific theory, or a project's rules is modeled as a Transformer acting on the U.BoundedContext holon itself (e.g., by adding a new invariant or deprecating a role).
The concept of a U.BoundedContext is universal and applies to both physical/operational domains and purely abstract/epistemic ones. Understanding these two archetypes clarifies its role as a fundamental FPF primitive.
| Archetype | Holder of the Context | U.BoundedContext Example | Core Components Illustrated |
|---|---|---|---|
| U.System Archetype | A modern software engineering team | AgileProject:Phoenix | Glossary: Defines "Story Point," "Sprint," "Velocity." Invariants: "Daily stand-up must not exceed 15 minutes." "A Story cannot move to 'Done' without a linked Test Case." Roles: ProductOwnerRole, ScrumMasterRole, DeveloperRole. Bridges: Maps Velocity metric to the FinanceDept context's CostCenter:BudgetBurnRate. |
| U.Episteme Archetype | A scientific community | Theory:SpecialRelativity | Glossary: Defines "Inertial Frame," "Lorentz Transformation," "Proper Time." Invariants: "The speed of light in a vacuum is constant for all observers." "The laws of physics are the same in all inertial frames." Roles: Postulate#AxiomaticCoreRole, Experiment#EvidenceRole. Bridges: Maps its concept of "Spacetime" to the GeneralRelativity context's more complex concept of "Curved Spacetime." |
Key takeaway from grounding:
This illustrates that a U.BoundedContext is not an abstract container but a holon with tangible content. For the engineering team, it's their project's "operating system." For the scientific theory, it's the "intellectual constitution." In both cases, the context defines what is true, what is possible, and what words mean locally.
To ensure U.BoundedContext is used consistently and rigorously, the following normative checks apply.
| ID | Requirement (Normative Predicate) | Purpose / Rationale |
|---|---|---|
| CC-A1.1.1 (Holon Nature) | A U.BoundedContext MUST be modeled as a U.Holon with a defined U.Boundary. | Reinforces that contexts are well-defined entities, not vague groupings. Enables reasoning about contexts themselves as systems. |
| CC-A1.1.2 (Role Localization) | Every U.Role MUST be defined within the Roles taxonomy of at least one U.BoundedContext. A "global" role is forbidden. | Ensures that roles are never context-free. Prevents ambiguity by forcing every role to be anchored to a specific semantic frame. |
| CC-A1.1.3 (Invariant Scope) | An Invariant defined within a context MUST only apply to holons and processes operating within that context. | Prevents the leakage of local rules into the global space, which is critical for modularity and managing complexity. |
| CC-A1.1.4 (Bridge Explicitness) | Any interaction or mapping between two U.BoundedContexts MUST be modeled as an explicit Bridge artifact. | Forbids "backdoor" or implicit communication between contexts. Makes all inter-context dependencies visible and auditable. |
| CC-A1.1.5 (RoleAssignment Context Anchor) | Every U.RoleAssignment MUST reference exactly one U.BoundedContext in its context field. | Guarantees that every role assignment is unambiguous and its meaning is fully determined by a single, authoritative context. |
| Benefits | Trade-offs / Mitigations |
|---|---|
| Enables True Modularity: By encapsulating models, FPF can support large, complex systems where different teams can work on their own bounded contexts in parallel with minimal interference. | Modeling Overhead: Requires architects to explicitly think about and define the boundaries of their models, which can feel like extra work initially. Mitigation: This upfront effort is a strategic investment that prevents the much higher cost of integration chaos and semantic ambiguity later in the project. |
| Resolves Ambiguity and Paradox: Provides a formal mechanism to manage synonyms, homonyms, and conflicting models (like the Pluto example). It transforms "it depends" into a precise, queryable structure. | Bridge Maintenance: As contexts evolve, the bridges between them must be maintained. Mitigation: FPF tooling should support "link integrity" checks to automatically flag broken or outdated bridges. |
| Makes Rules Explicit: The Invariants component makes the local rules of a project or theory explicit, documented, and auditable. | - |
| Foundation for Scalable Autonomy: In multi-agent systems, each agent can operate within its own bounded context, communicating with others through well-defined bridges. This is a prerequisite for building robust, decentralized systems. | - |
Lineage and fit with Domain‑Driven Design (DDD). FPF generalizes the proven DDD idea of a Bounded Context from software into a universal modeling primitive:
| DDD concept | FPF counterpart | Generalization in FPF |
|---|---|---|
| Bounded Context | U.BoundedContext (a holon) | Used for systems and knowledge; first‑class object with explicit Glossary, Invariants, local Roles, Bridges. |
| Ubiquitous Language | Glossary of the context | The shared vocabulary is an explicit component, not just narrative. |
| Context Map | Bridges/Alignment between contexts | Cross‑context relations are modeled explicitly rather than assumed globally. |
Why this matters here.
U.BoundedContext gives U.RoleAssignment (A.2.1) its footing: role meanings are local by design, conflicts are checked inside the same context, and differences across contexts are handled by explicit bridges instead of "global truth."
The introduction of U.BoundedContext as a first-class holon is a direct implementation of several core FPF principles and is strongly supported by contemporary practice.
- Philosophical Grounding: The idea that meaning is always local and context-dependent is a cornerstone of late 20th-century philosophy of language (e.g., Wittgenstein's "language-games"). FPF operationalizes this insight.
- Domain-Driven Design (DDD): The concept is a direct borrowing and generalization from Eric Evans' seminal work on DDD, where the Bounded Context is the central strategic pattern for managing complexity in large-scale software. Its success over the past two decades in the software industry provides powerful empirical validation for its utility. FPF elevates it from a software design pattern to a universal ontological primitive.
- Architectural Necessity: For FPF to fulfill its promise of being an "operating system for thought," it needs a mechanism analogous to an OS's "process separation." A
U.BoundedContextis precisely that: a protected "memory space" for a model, preventing different models from corrupting each other. - Enabler for Key Patterns: The
Contextual Role Assignmentpattern (A.2.1) would be incoherent without a formal definition of "Context." This pattern provides that necessary foundation, making the entire role-based architecture sound.
In essence, U.BoundedContext is the architectural pattern that allows FPF to be both universal in its core principles and specific and pluralistic in its applications. It is the mechanism that tames complexity and makes large-scale, multi-paradigm modeling possible.
- Constitutes: The foundational "semantic space" for patterns like A.2 Role Taxonomy and A.13 The Agential Role.
- Builds on: A.1 Holonic Foundation, as a U.BoundedContext is itself a U.Holon.
- Interacts with:
  - Norm-CAL: A context's Invariants are typically expressed as U.ConstraintRules.
  - Lang-CHR: A context's Glossary is a collection of U.Lexemes.
  - Decsn-CAL: Decisions and objectives are often scoped to a specific context.
- Enables: The resolution of conflicts as modeled in D.3 Holonic Conflict Topology, by showing that many conflicts are context-dependent.
A holon’s essence tells us what it is; its roles tell us what it is being, here and now.
Pattern A.1 established the substantial characteristic of the core (Entity → Holon → {System, Episteme, …}), cleanly separating identity from structure and aggregation. The present pattern introduces the functional characteristic: how a holon participates in purposes within a bounded context and for some interval. This extends the early sketch of A.2 and tightens its alignment with A.7 (Strict Distinction): roles are not parts and not behaviours; they are contextual masks that a holon wears while behaviours are handled by Method/Work.
Without an explicit role calculus:
- Type explosion & conflation. Each new purpose breeds a new "subtype" (PumpAsCoolingLoop, PumpAsFuelLoop, …), violating parsimony and fusing substance with function.
- Agency opacity. It becomes unclear whether any system may act as a transformer/agent, or only pre-declared special kinds.
- Epistemic blindness. Knowledge artefacts (papers, proofs) cannot be given roles, blocking modelling of citation, evidence, or design-time justification.
| Force | Tension |
|---|---|
| Identity vs Function | A holon’s make‑up ↔ its transient, contextual purpose. |
| Static vs Dynamic classification | Fixed type lattice ↔ late‑binding of new roles. |
| Universality vs Familiarity | One mechanism for pumps and papers ↔ domain‑specific role names. |
| Simplicity vs Expressiveness | Minimal primitives ↔ multi‑role, multi‑holder scenarios. |
We elevate Role to a first‑class, context‑indexed concept and make the binding between a holon and a role explicit.
- U.Role — a context-bound capability/obligation schema that a holon may bear (play) for a time interval. A role has no parts and no resource deltas of its own. (A.7 guard)
- U.RoleAssignment — a first-class object recording that a holon bears (plays) a role in a bounded context, optionally with authority, justification, and provenance:
U.RoleAssignment {
holder : U.Holon,
role : U.Role,
context : U.BoundedContext,
timespan? : Interval,
justification?: U.Episteme, // why (standard, SOP, evidence)
provenance? : U.Method // how assignment/verification was done
}
Short form (readable): Holon#Role:Context [@Interval].
Why a first‑class binding? It keeps identity (holon), function (role), context (semantics), and time (run‑window) separate yet linked, preventing the substance/function conflation identified above. The early playsRoleOf(Holon, Role, span) relation in the draft is subsumed by U.RoleAssignment and extended with Context (and optional governance fields). A minimal executable sketch follows.
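A minimal executable rendering of the record above (a sketch assuming plain string identifiers; the field names mirror the signature, but the class itself is illustrative, not a normative schema):

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class RoleAssignment:
    holder: str                                 # U.Holon id
    role: str                                   # U.Role, local to the context
    context: str                                # U.BoundedContext, never a broad domain label
    timespan: Optional[Tuple[str, str]] = None  # (start, end); None = design-time binding
    justification: Optional[str] = None         # U.Episteme reference (why)
    provenance: Optional[str] = None            # U.Method reference (how verified)

    def short_form(self) -> str:
        """Render the shorthand Holon#Role:Context [@Interval]."""
        s = f"{self.holder}#{self.role}:{self.context}"
        return s + (f"@{self.timespan[0]}..{self.timespan[1]}" if self.timespan else "")

ra = RoleAssignment("PumpUnit#3", "HydraulicPump", "Plant-A",
                    timespan=("2025-08-01", "2025-12-31"))
print(ra.short_form())  # PumpUnit#3#HydraulicPump:Plant-A@2025-08-01..2025-12-31
```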
- Method exists only while some Work is underway; MethodDescription persists as Episteme. A Role binds to Method (design‑time), and Work performs Method under that Role (run‑time). This preserves the role ≠ behaviour split and the design ↔ run duality.
- Only Work carries resource deltas (feeds Γ_work); a Role never does.
- Locality. role ∈ Roles(context). Outside its context, a role’s meaning is undefined.
- Non‑mereological. No Role (nor Method/MethodDescription) may appear in any partOf chain; holarchies are for substantial holons only.
- Multiplicity. A holder may bear multiple roles concurrently; a role may be borne by many holders—subject to each context’s compatibility rules.
- Time anchoring. timespan (if present) is non‑empty and finite for run‑time claims; design‑time bindings are timeless but versioned via MethodDescription identity.
- Behavioural coherence. During a Work interval, the performer shall play the Role that binds the executed Method. (No hidden role swaps.)
Within each U.BoundedContext, role names are organised as a partial order (refinements) plus an incompatibility relation (mutually exclusive roles). Typical substrate‑neutral anchors:
| Kernel Role | Intent | System archetype | Episteme archetype |
|---|---|---|---|
| TransformerRole | Changes other holons via Method/Work. | Robot arm assembling casings. | Prover constructing a new lemma. |
| ObserverRole | Collects evidence / metrics. | Sensor array on a test‑rig. | Reviewer annotating an article. |
| SupervisorRole | Governs subordinate holons. | PLC orchestrating a line. | Meta‑analysis curator combining studies. |
Domains refine these anchors: e.g., CoolingCirculatorRole, CitationSourceRole, LemmaRole.
System case — Cooling loop
PumpUnit#3#HydraulicPump:Plant‑A
HydraulicPump ↦binds↦ ChannelFluid (design)
run‑2025‑08‑08 isExecutionOf centrifugal_pump_curve.ld and performedBy PumpUnit#3 (run)
(All behavioural/resource facts live in Work; the Role is the mask.)
Episteme case — Standard in design
RFC‑9110.pdf#ProtocolStandard:WorldWideWeb justifies MethodDescription selection; the system bearing TransformerRole is the design service that executed the selection work. The episteme did not act.
Collective vs set (safety pitfall)
A set {Alice, Bob, 3.14} has no behaviour; a team is a system with boundary, coordination Method, and supervision Work; only the latter can bear agentic roles.
- “Transformer as system subtype.” ✗ “U.TransformerSystem builds pumps.” ✓ “RobotArm R‑45#Transformer:Plant‑A executed Work W.” (Role is a mask; behaviour is Method/Work.)
- “Role as part.” ✗ “The pump’s role is one of its components.” ✓ Roles are never parts; components are substantial. Keep all partOf chains role‑free.
- “Episteme acts by itself.” ✗ “The PDF enforced the SOP.” ✓ An episteme can hold roles like ProtocolStandard in context, but only a system performs the Method/Work that uses it.
- “Context leakage.” ✗ “Pluto is Planet and DwarfPlanet.” (in one tacit space) ✓ “Pluto#Planet:Early20thCenturyAstronomy; Pluto#DwarfPlanet:IAU_2006_Definition.” No contradiction—different bounded contexts. (Illustrative of U.RoleAssignment semantics carried forward from A.2.1.)
- Name roles for intent, not mechanics. Prefer CoolingCirculatorRole over ChannelFluidWithCentrifugalProfile.
- Pin the context early. If two teams disagree, split contexts and (optionally) define an alignment bridge; do not over‑generalise the role.
- Document the binding chain. For any operational claim, be ready to point to: RoleAssigning → Method ↔ MethodDescription → Work. (Readers’ dictionary: BPMN workflow → MethodDescription; operation/job → Work.)
| ID | Requirement | Practical test (manager‑oriented) |
|---|---|---|
| CC‑A2.1 | A Role SHALL NOT be a mereological part of any holon; roles are never constituents of holarchies. | If a diagram shows Role →(part‑of)→ Holon, the model fails. Replace the edge with playsRoleOf(Holon, Role, span) (A.14 governs parts). |
| CC‑A2.2 | Only a System can bear behavioural roles (e.g., TransformerRole, AgentialRole) and thus bind Method/Work; an Episteme MAY bear non‑behavioural roles (e.g., ReferenceRole, ConstraintSourceRole) only. | Lint the model: any U.Episteme that bindsMethod or is a performedBy target fails; move behaviour to a system bearing the role and act on episteme carriers (A.7, A.12, A.15). |
| CC‑A2.3 | Every non‑abstract Role SHALL bindsMethod ≥ 1; roles with no bound method are abstract and non‑executable. | If a role participates in Work without some Method ⟷ MethodDescription chain, flag “unbound role” and add a binding (A.15). |
| CC‑A2.4 | Every role reference in normative text SHALL be context‑indexed by a declared Bounded Context (local to the pattern or glossary). Local shorthand “Transformer” is permitted only if the pattern’s Glossary re‑binds it to “System bearing TransformerRole”. | If prose says “Transformer updates the spec”, the pattern MUST define the local alias and its target; otherwise rewrite to the canonical long form (E.10, A.7). |
| CC‑A2.5 | Each playsRoleOf relation SHALL declare a TimeInterval (span); open intervals are allowed but must be explicit. | Search for playsRoleOf without a span; add @t₀..t₁ or an open bound. This prevents ambiguous overlaps (A.2 Solution). |
| CC‑A2.6 | If two roles are declared incompatible inside the same context, a bearer SHALL NOT hold them over overlapping spans. | Check the context’s role‑compatibility grid; if overlaps exist, either split the Work by PhaseOf or change staffing (A.14; B.1.4/Γ_time). |
| CC‑A2.7 | For any Work item, the performedBy system MUST be the bearer of the executing role throughout the Work’s timespan. | Join performedBy(Work, Holon) with playsRoleOf(Holon, Role, span) and assert span ⊇ timespan(Work). Split Work if the bearer changes (A.15 §8.1). |
| CC‑A2.8 | Every Method bound to a role SHALL be isDescribedBy ≥ 1 MethodDescription (U.Episteme) and every Work SHALL be isExecutionOf exactly one MethodDescription version. | If a Work lacks isExecutionOf, or a Method lacks MethodDescription, the audit fails (A.15; A.10 evidencing hook). |
| CC‑A2.9 | Evidence for claims about roles and execution MUST anchor to symbol carriers (SCR/RSCR); self‑evidence is forbidden. | A role effectiveness claim without SCR/RSCR or with cyclic provenance fails (A.10). |
| CC‑A2.10 | When a Role assignment implies order or temporal structure, the pattern SHALL defer to Γ_ctx/Γ_time rather than overloading role edges. | If argument order matters, use Γ_ctx folds and record OrderSpec; version/evolution goes via Γ_time (B.1.3 §4.5). |
| CC‑A2.11 | Use of legacy nouns “creator/actor/agent” in Core text is prohibited unless they are explicitly typed as roles with bearers; the term “Transformer” is a local alias, not a type. | Scan for bare nouns; replace with “system bearing TransformerRole” or define an alias in the Glossary (A.7 canonical rewrites; E.10 registers). |
| CC‑A2.12 (advisory) | A reified RoleAssigning object SHOULD capture context, timespan, optional authority, justification (U.Episteme), and provenance (U.Method). | Recommended for governance‑heavy domains; it improves explainability without changing Core semantics (ties to A.10; B.3 Trust). |
Note. CC‑A2.2 aligns with A.7 Role‑domain guards (“behavioural roles’ domain = system; epistemes bear non‑behavioural roles only”).
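The CC‑A2.7 audit reduces to an interval‑containment join; a hedged sketch with numeric stand‑ins for timestamps (the span encoding is an assumption of this illustration):

```python
def cc_a2_7_ok(work_span, role_spans) -> bool:
    """CC-A2.7: the performing system must bear the executing role for the
    whole Work timespan, i.e. some playsRoleOf span must contain work_span.
    Spans are (start, end) pairs; end=None means an open interval."""
    ws, we = work_span
    for rs, re_ in role_spans:
        if rs <= ws and (re_ is None or (we is not None and we <= re_)):
            return True
    return False

# Work runs 10..20; the bearer holds the role 0..15 only -> audit fails, split the Work.
print(cc_a2_7_ok((10, 20), [(0, 15)]))    # False
print(cc_a2_7_ok((10, 20), [(0, None)]))  # True (open-ended assignment)
```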
| Benefit | Why it matters | Trade‑off / Mitigation |
|---|---|---|
| Category‑error immunity | Clear firewall between identity (holarchies) and function (roles) prevents mixing “parts” with “masks”. | Slight modelling overhead; templates provide checklists (A.7, A.14). |
| Operational clarity | Who does what, when, and under which mask becomes audit‑ready (performedBy ⊆ playsRoleOf). | Requires spans on Role assignments; mitigated by default “open‑ended” spans in drafts. |
| Epistemic hygiene | Knowledge holons contribute as evidence or constraints, never as doers. | Authors must rewrite anthropomorphic prose; canonical rewrites help. |
| Cross‑context pluralism | Same bearer can hold different roles across contexts without contradiction; differences are explicit in the binding. | Requires declaring the bounded context; E.10 eases the ceremony with registers/aliases. |
| Γ‑coherence | Order/time/aggregation stay in Γ‑operators, not overloaded into “role” edges. | Authors learn when to call Γ_ctx/Γ_time; the Part B on‑ramp is short. |
Why insist on roles as contextual masks and externalised transformers?
- Constructor Theory (2015–2022). Post‑2015 work by Deutsch & Marletto re‑centres physics on possible tasks and constructors rather than objects, mirroring FPF’s TransformerRole: behaviour is attached to “who can realise a task,” not to substance per se. Our separation of SubstantialHolon vs Role and the insistence on external transformers directly echo this shift. (Conceptual alignment noted in A.2 Solution and A.12 intent.)
- Layered Control Architectures (Matni–Ames–Doyle, 2024). The modern control stack cleanly externalises regulators and planners relative to plants. FPF’s obligatory “system bearing TransformerRole” (A.7, A.12) is isomorphic to that separation, keeping supervision and actuation outside the controlled holon.
- Active‑Inference / Agency spectrum (2017–2023). Contemporary models treat agency as graded and contextual (percept‑act loops tuned by free‑energy bounds). A.13 adopts exactly this: AgentialRole is a role worn by a holon, with graded measurements via Agency‑CHR, not a static type.
- Basal Cognition & multi‑scale organisation (2019–2024). Fields & Levin argue for cross‑scale control and information flows; FPF encodes this via Γ‑flavours and the Meta‑Holon Transition triggers, ensuring Role assignments compose across scales without collapsing identity into function.
- Knowledge ecosystems & safety cases (2018–2025). Modern assurance relies on traceable evidence and conservative integration (no “truth averaging”): our A.10 anchors (SCR/RSCR) and Γ_epist’s weakest‑link fold implement that discipline and forbid self‑evidence.
Summing up: post‑2015 science and engineering converge on roles as contextual capabilities, externalised control, and traceable evidence. A.2 codifies these insights in a substrate‑neutral way, keeping the Core small yet powerful.
- Builds on: A.1 Holonic Foundation (role/mereology split), A.7 Strict Distinction (role ≠ behaviour; episteme ≠ carrier), A.14 Advanced Mereology (no roles in holarchies).
- Specialises / Coordinates with: A.13 Agential Role & Agency Spectrum (behavioural roles over systems; graded agency), A.15 Role–Method–Work Alignment (bindsMethod / isExecutionOf discipline).
- Constrains / Used by B‑cluster: B.1 Universal Algebra of Aggregation (Γ) (keep order/time in Γ_ctx/Γ_time; keep provenance in Γ_epist), B.2 Meta‑Holon Transition (promotion when supervision/closure appears), B.3 Trust & Assurance (evidence & congruence).
- Interlocks with E‑cluster (governance & language): E.10 Lexical Discipline (registers, tier disambiguation, local aliases like “Transformer”), E.5.1 DevOps Lexical Firewall (ban tooling tokens in Core patterns).
- Reinforces: A.10 Evidence Anchoring (external transformer; SCR/RSCR), A.12 External Transformer Principle (agent externalisation).
with Role Performance View, U.RoleStateGraph (RSG), and Role Characterisation Space (RCS) hooks
Status. Definitional pattern [D], kernel‑level.
Builds on: A.1 Holonic Foundation, A.1.1 U.BoundedContext, A.2 Role Taxonomy.
Coordinates with. A.13 Agential Role & Agency Spectrum, A.15 Role–Method–Work Alignment, E.10.D1 D.CTX (Context discipline), E.10.D2 Strict Distinction.
Lexical discipline. Context ≡ U.BoundedContext (E.10.D1). Appointment is colloquial only and MUST NOT appear in normative clauses. Canonical term: Role Assignment.
Intent. Provide one, universal, context‑local way to say who is being what, where (and when) without altering what the thing is. The same grammar works for people, machines, software, teams, and also for knowledge artefacts (epistemes) when they hold statuses rather than perform actions.
Scope.
- Defines U.RoleAssignment (binding a holder holon to a role inside a bounded context, optionally within a time window).
- Separates that binding from U.RoleEnactment (the run‑time fact that a piece of Work was performed under that assignment).
- Names the Role Characterisation Space (RCS) and the Role State Graph (RSG) as intensional facets of a Role (recorded in its RoleDescription, upgraded to RoleSpec only after tests exist).
- Declares eligibility constraints so Roles apply to the right holon kinds, without chains like “Transformer is assigned to be Agent”. Role families are independent.
Non‑goals. No storage models, no workflows, no org charts. This is a thinking Standard; all semantics are notation‑free.
- Type explosion. Baking transient function into rigid types (“CoolingPump”, “AuditDeveloper”) violates parsimony and makes change brittle.
- Context drift. Labels like Operator, Process Owner, Standard slide in meaning across teams/years when not tied to a Context.
- Actor vagueness. Work logs state that things happened but not who, in what capacity, under which local rules.
- Category leaks. Documents “do” tasks; deontic statuses are treated like run‑time states; capabilities are confused with permissions.
- Role chains. Attempting “System ↦ TransformerRole ↦ AgentRole” mixes independent role families and hides design intent.
| Force | Resolution in this pattern |
|---|---|
| Universality vs locality | One mechanism (U.RoleAssignment), but every meaning is context‑local (Context); cross‑context sameness only via Bridge (F.9). |
| Stability vs change | Identity of holder stable; assignments come/go via windows; enactments are punctual facts attached to Work. |
| Clarity vs brevity | Full definition + the mnemonic shorthand Holder#Role:Context@Window. |
| Behavior vs status | Only systems enact behavior; epistemes hold statuses. Different role families, no chains. |
| Specification vs description | Role RCS/RSG are recorded in RoleDescription; upgrade to RoleSpec only after a test harness exists (E.10.D2). |
U.RoleAssignment is a context‑local binding:
RoleAssignment ::= 〈holder: U.Holon, role: U.Role, context: U.BoundedContext, window?: U.Window〉
Invariants (normative).
- RA‑1 (Locality). role MUST be defined in context (its meaning is exactly the one recorded in that Context’s RoleDescription/RoleSpec).
- RA‑2 (No role‑of‑role). holder MUST be a U.Holon (System, Episteme, or composite). Assigning roles to roles is forbidden.
- RA‑3 (Eligibility by role family).
  - Behavioural / Agential / Constructor roles: holder MUST be a U.System. Only systems can enact Methods and produce Work.
  - Epistemic‑status / Normative‑status roles (e.g., Evidence, Standard, Requirement): holder MUST be a U.Episteme.
  - Service‑governance roles (e.g., ServiceOffering, SLO‑Clause as status carriers): holder is typically a U.Episteme; execution still requires a System in a behavioural role. (Contexts may refine eligibility, but cannot weaken these families.)
- RA‑4 (Window discipline). If window is present, enactments (below) MUST occur within it; if absent, interpret as “open‑ended from assignment time”.
- RA‑5 (Separation). A RoleAssignment confers the capacity/authorization to act (or the status to be recognised), but it is not behaviour (no Work implied), not capability (intrinsic ability lives elsewhere), and not structure (may never appear in a BoM).
Never use "appointment" as a synonym for "assignment".
Didactic read. Think badge (who wears which mask, where, when). The rules for the mask live in the room (Context).
Two assignment modes. A RoleAssignment can be: (a) Authoritative — issued by an authority or policy in the Context (often via a SpeechAct Work); it can open the Green‑Gate for steps that require explicit authorization. (b) Observational — an evidence‑backed classification that the holder occupies a Role in this Context (e.g., “Moon as SatelliteRole:IAU_2006”). Observational assignments never by themselves open operational Green‑Gates; they can gate decisions and analysis.
U.RoleEnactment captures the run‑time fact that a specific piece of Work was performed under a specific Role Assignment:
RoleEnactment ::= 〈work: U.Work, by: U.RoleAssignment〉
Invariants.
- RE‑1 (Actor reality). by.holder MUST be a U.System. (Epistemes never enact Work.)
- RE‑2 (Temporal fit). work.window MUST overlap by.window (or by.window is open and contains work.window).
- RE‑3 (Method gate). For the MethodStep realised by work, by.role MUST satisfy the step’s requiredRoles in that same Context (directly or via ≤ specialization inside the Context).
- RE‑4 (Traceability). Every U.Work MUST reference exactly one U.RoleEnactment (hence a determinate RoleAssignment) as its performer.
Reading: Assignments authorize; enactments happen. That single sentence prevents months of muddled logs.
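RE‑2's temporal fit is likewise mechanical; a sketch with (start, end) pairs and None for open ends (the window encoding is an assumption of this illustration):

```python
def re2_temporal_fit(work_win, assign_win) -> bool:
    """RE-2: work.window must overlap by.window, or by.window is open-ended
    and contains work.window. Windows are (start, end); end=None means open."""
    ws, we = work_win
    as_, ae = assign_win
    if ae is None:                   # open assignment: must contain the work
        return as_ <= ws
    return ws <= ae and as_ <= we    # otherwise: plain interval overlap

print(re2_temporal_fit((5, 9), (0, None)))  # True: open window contains the work
print(re2_temporal_fit((5, 9), (0, 4)))     # False: assignment ended before the work
```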
Role Enactment is the occurrence of U.Work performed by a holder while a valid U.RoleAssignment for the required Role is in an enactable state of its RoleStateGraph (A.2.5) within the same Context. Enactment is generic: it includes operational work (e.g., actuation) and communicative work (speech acts such as approvals).
These are intensional facets of a Role, not containers “inside” the Role. They are recorded in the RoleDescription (or RoleSpec once harnessed), per E.10.D2.
- RCS (Role Characterisation Space). A set of named characteristics that parameterise how the Role is understood in a Context (e.g., AgencyLevel ∈ {None, Assisted, Delegated, Autonomous}; SafetyCriticality ∈ {SC0…SC3}).
- RSG (Role State Graph). A directed graph of named states (nodes) and admissible transitions (edges) for the Role within the Context (e.g., {Eligible → Authorized → Active → Suspended → Revoked}).
- Each state has a Conformance Checklist (set of observable cues) supporting Evaluations (“X ∈ Authorized@context in W”).
- RSG governs role state transitions, independent of any Work instance.
Discipline. Say “Role is characterised by RCS/RSG recorded in RoleDescription”, never “Role contains its states.”
Use the canonical compact form in prose and diagrams:
Holder#Role:Context@Window
Examples:
- PLC_17#Transformer:PipelineOps@2025‑04‑01..2025‑06‑30
- ISO_26262v2018#NormativeStandard:AutoSafetyCase (status role on an Episteme; no enactment)
The shorthand is didactic; the semantics are those of §§4.1–4.3.
Role families (e.g., Agential, Constructor/Transformer, Observer/Measurer, Status) are independent. A Context may state that Surgeon ≤ Clinician within the same family, but MUST NOT model “Transformer is an Agent” by chaining RoleAssignments. When a holder must satisfy both an Agential and a Transformer requirement, the MethodStep requires both roles; the holder wears two badges, not a badge‑of‑a‑badge.
A Role’s family constrains who can wear its badge. Eligibility is part of didactic hygiene and prevents chains like “Transformer → Agent”.
- U.System — any acting holon (person, device, software service, team, organization, socio‑technical unit).
- U.Episteme — any knowledge unit (document, dataset, model, standard).
- U.Holon — supertype; only Systems enact Work; Epistemes can only hold status roles.
| Role family (examples) | May be held by U.System | May be held by U.Episteme | Notes (eligibility refinements live in Context) |
|---|---|---|---|
| Agential (e.g., Agent, Decision‑Maker, Approver) | ✓ | ✗ | Requires RCS characteristic AgencyLevel; RSG must expose Authorized/Active states. |
| Transformer/Constructor (e.g., Welder, ETL‑Runner) | ✓ | ✗ | Performs Methods; produces Work; often requires Capability evidence. |
| Observer/Measurer (e.g., Observer, Monitor) | ✓ | ✗ | Produces U.Observation; may be passive (probe) or active (test rig). |
| Communicator/Speech (e.g., Authorizer, Notifier) | ✓ | ✗ | A subtype of behavioral roles; produces U.Work typed as SpeechAct. |
| Service‑Governance (e.g., ServiceOffering, SLO‑ClauseCarrier) | (rare) | ✓ | Usually Episteme (catalog entry, policy). If a System “offers”, the offer is a SpeechAct; the offering is an Episteme. |
| Epistemic‑Status (e.g., Evidence, Definition, AxiomaticCore) | ✗ | ✓ | Status roles for knowledge; never enact Work. |
| Normative‑Status / Deontic (e.g., Requirement, Standard) | ✗ | ✓ | Source of obligations; Work is checked against them, not enacted by them. |
Invariant — RA‑3 (eligibility) (restated): Assignments MUST respect this matrix. A Context may tighten (e.g., “Approver must be human”), never loosen.
Conformance checks (easy to remember).
- CC‑ELIG‑1. If role.family ∈ {Agential, Transformer, Observer, Speech}, then holder : U.System.
- CC‑ELIG‑2. If role.family ∈ {Epistemic‑Status, Normative‑Status, Service‑Governance}, then holder : U.Episteme.
- CC‑ELIG‑3. No “role of a role”: role is bound to a holder, not to another role or assignment. (A lookup sketch follows.)
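The matrix and CC‑ELIG‑1..2 collapse to a small lookup; a minimal sketch with family and holder kinds as plain strings (the string encodings are assumptions of this illustration):

```python
ELIGIBLE_HOLDER = {                    # RA-3 / CC-ELIG-1..2 as data
    "Agential": "System", "Transformer": "System",
    "Observer": "System", "Speech": "System",
    "Epistemic-Status": "Episteme", "Normative-Status": "Episteme",
    "Service-Governance": "Episteme",  # execution still needs a System in a behavioural role
}

def eligible(role_family: str, holder_kind: str) -> bool:
    """A Context may tighten this check, never loosen it."""
    return ELIGIBLE_HOLDER.get(role_family) == holder_kind

print(eligible("Transformer", "System"))       # True: a robot can weld
print(eligible("Normative-Status", "System"))  # False: a standard never acts
```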
Role algebra relates role types inside one U.BoundedContext. It is not mereology.
- Notation: RoleS ≤ RoleG
- Semantics (normative): For any U.RoleAssignment with role = RoleS in this Context, the holder also satisfies requirements for RoleG in this Context.
- Use: Stable expertise ladders, privilege inheritance.
- CC‑ALG‑1. Engines that check requiredRoles MUST treat ≤ as admissible substitution.

- Notation: RoleA ⊥ RoleB
- Semantics (normative): A single holder MUST NOT have overlapping windows for assignments to both roles in this Context.
- CC‑ALG‑2. Validation MUST reject overlapping assignments that violate ⊥.

- Notation: RoleC := Role1 ⊗ Role2 ⊗ …
- Semantics: RoleC is satisfied iff the holder has simultaneous valid assignments for each conjunct role (in this Context).
- Use: “On‑call Incident Commander” = Engineer ⊗ Communicator ⊗ Decision‑Maker.
- CC‑ALG‑3. Checking requires: [RoleC] MUST expand to conjunctive checks.

Didactic guardrails. Use ≤ for lasting ladders, ⊥ for critical safety/governance, ⊗ for frequent conjunctions. Prefer listing multiple requiredRoles on Method steps to avoid ornate lattices. (A compact sketch of all three operators follows.)
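All three operators can be checked mechanically; a compact sketch (refinement given as direct‑parent pairs, windows as (start, end) pairs with None for open ends; both encodings are assumptions of this illustration):

```python
def satisfies(role, required, parents):
    """CC-ALG-1: role satisfies required via reflexive-transitive <= (refinement)."""
    while role is not None:
        if role == required:
            return True
        role = parents.get(role)  # climb the ladder, e.g. Surgeon <= Clinician
    return False

def overlaps(w1, w2):
    (s1, e1), (s2, e2) = w1, w2
    return (e2 is None or s1 <= e2) and (e1 is None or s2 <= e1)

def violates_exclusion(assignments, a, b):
    """CC-ALG-2: reject overlapping windows for two roles declared mutually exclusive."""
    wa = [w for r, w in assignments if r == a]
    wb = [w for r, w in assignments if r == b]
    return any(overlaps(x, y) for x in wa for y in wb)

def satisfies_composite(held_roles, conjuncts, parents):
    """CC-ALG-3: RoleC := R1 (x) R2 (x) ... expands to conjunctive checks."""
    return all(any(satisfies(h, c, parents) for h in held_roles) for c in conjuncts)

parents = {"Surgeon": "Clinician"}
print(satisfies("Surgeon", "Clinician", parents))                     # True (refinement)
print(violates_exclusion([("Auditor", (0, 9)), ("Developer", (5, 12))],
                         "Auditor", "Developer"))                     # True (exclusion hit)
print(satisfies_composite({"Engineer", "Communicator"},
                          ["Engineer", "Communicator"], {}))          # True (composition)
```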
Assignments authorize, enactments happen — in time. RSG governs the role’s state transitions; window governs the binding’s validity.
- Window form: @t_start..t_end (ends may be open).
- RE‑2 (temporal fit) (restated): work.window MUST lie within (or overlap appropriately with) assignment.window.
- Handover pattern: Close A#[email protected] and open B#Role@t.. — never delete history.
- CC‑WIN‑1. Historic assignments MUST NOT be erased; close the window instead.
Each Role’s RoleDescription/RoleSpec defines an RSG with named states. Some states are enactable.
- RSG‑1 (state types). A state MUST declare whether it permits enactment (enactable: true/false).
- RSG‑2 (checklists). Each state MUST list a Conformance Checklist (E.10.D2) — observable cues to support U.Evaluation yielding a StateAssertion.
- RE‑5 (RSG gate). A `U.RoleEnactment` is valid iff, at enactment time, the `U.RoleAssignment` can be supported by a valid StateAssertion that the holder is in an enactable state of the Role’s RSG in this Context.
- Example. SurgeonRole states: Eligible → Authorized → Active → Suspended → Revoked. Only Active is enactable. A pre‑op checklist produces `StateAssertion(SurgeonRole, Active)`.
Practical reading. Badge valid (window) ∧ state is right (RSG) ⇒ you may act.
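That reading can be stated as a single admission predicate (informative sketch; names and encodings are hypothetical):

```python
# Illustrative: "badge valid (window) AND state right (RSG)" as one predicate.
ENACTABLE = {"SurgeonRole": {"Active"}}   # per-Role enactable RSG states

def in_window(t, window) -> bool:
    start, end = window
    return start <= t and (end is None or t < end)

def may_enact(assignment, state_assertions, t) -> bool:
    """assignment = (holder, role, window); state_assertions = list of
    (role, state, window) produced by checklist-based U.Evaluations."""
    holder, role, window = assignment
    if not in_window(t, window):                          # RE-2: window gate
        return False
    return any(r == role and s in ENACTABLE.get(role, set())
               and in_window(t, w)
               for (r, s, w) in state_assertions)         # RE-5: RSG gate
```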
- Suspend: transition to a non‑enactable state (e.g., Suspended). Keep the assignment’s window open; enactment is blocked by RE‑5.
- Revoke: either (a) close the window, or (b) transition to Revoked (non‑enactable).
- Probation: a dedicated RSG state with limited enactability (e.g., only under supervision, modelled as an extra required role on Method steps).
- CC‑RSG‑1. RSG transitions MUST be explicit; no implicit “back to Active”.
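CC‑RSG‑1 amounts to maintaining an explicit edge set: a transition is legal only if declared, including the return path to Active (informative sketch; the states and edges below are illustrative):

```python
# Illustrative: explicit RSG transitions; no implicit "back to Active".
RSG_EDGES = {
    ("Eligible", "Authorized"), ("Authorized", "Active"),
    ("Active", "Suspended"), ("Suspended", "Active"),   # re-activation is explicit
    ("Active", "Revoked"), ("Suspended", "Revoked"),
}

def transition(state: str, target: str) -> str:
    if (state, target) not in RSG_EDGES:
        raise ValueError(f"illegal RSG transition {state} -> {target}")
    return target
```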
- Shift rotation. `A#Role@08:00..16:00`, `B#Role@16:00..24:00` — clean handover, no `⊥` issues.
- Shadowing. `Trainee#Role@..` + `Mentor#SupervisorRole@..`; Method steps require both roles.
- Emergency bundle. `SoloOperator := Incision ⊗ Hemostasis ⊗ Suturing`; activate only under a declared emergency (Context‑level policy).
| # | Anti‑pattern | Symptom | Why it’s harmful | FPF fix (conceptual move) |
|---|---|---|---|---|
| A1 | Global role label | “Alice is Lead Engineer” (nowhere) | Meaning drifts; untestable | Always anchor to Context: Alice#LeadEngineer:ProjectPhoenix |
| A2 | Role as part | BoM lists “Cooling Function” | Category error (structure vs role) | Keep BoM structural; model Pump#Cooling:ThermalMgmt |
| A3 | Document acts | “The SOP closed the ticket” | Epistemes don’t enact Work | Give the doc a status role; make a System enact the step |
| A4 | Role chains | “Transformer assigned to be Agent” | Collapses independent families | Require both roles on Method step; one holder, two badges |
| A5 | Hidden state | Acting while Authorized? Active? unclear | Safety & audit gaps | Use RSG with StateAssertions gating enactment |
| A6 | Edition blur | Context “ITIL” with no version | Sense slippage | Context card must carry edition (E.10.D1/F.1) |
| A7 | Bridge‑by‑name | Equating roles across Contexts by label | Cross‑context drift | Use F.9 Bridge with CL & loss notes |
Goal. Show that the same binding Holder#Role:Context@Window, plus RCS (Role‑Characterisation Space) and RSG (Role‑State Graph), works uniformly for operational systems, software/service operations, and knowledge governance.
Natural systems note. Spontaneous physical phenomena (e.g., Moon orbiting Earth) are modeled as U.Dynamics, not as U.Work. An observational RoleAssignment like Moon#SatelliteRole:IAU_2006 is valid classification but does not imply enactment of a method.
Role (family). WelderRole (Transformer)
RCS (illustrative characteristics). `ProcessClass ∈ {MIG, TIG, Spot}`; `QualifiedMaterial ∈ {Al, SS, Ti, …}`; `MaxCurrentAmp ∈ ℝ⁺`; `SafetyProfile ∈ {Standard, HotWork, ConfinedSpace}`.
RSG (named states). `Unqualified → Qualified → Authorized → Active → Suspended → Revoked` (enactable: Active only).
Assignments.
- `Robot_SN789#WelderRole:AssemblyLine_2025@2025‑02‑01..open`
- `Robot_SN790#WelderRole:AssemblyLine_2025@2025‑02‑01..open`
StateAssertions (via checklists).
- `StateAssertion(WelderRole, Qualified, AssemblyLine_2025, @2025‑02‑01..2026‑02‑01)` — training & test weld coupons.
- `StateAssertion(WelderRole, Active, AssemblyLine_2025, @2025‑03‑01..open)` — daily pre‑shift checks + gas/torch inspection.
Enactment (gated by RSG).
A U.Work entry W#Seam134 is valid only if performedBy = Robot_SN789#WelderRole:AssemblyLine_2025 and an Active StateAssertion covers the timestamp. If the torch‑health checklist fails, RSG transitions Active → Suspended; further seams are blocked by RE‑5.
Roles (families).
- `DeployerRole` (Transformer) — authorises execution of deployment Methods.
- `IncidentCommanderRole` (Agential/Speech) — directs response and issues SpeechActs (declares incident states).
RCS (illustrative).
- `DeployerRole`: `Env ∈ {staging, prod}`, `ChangeWindow`, `RollbackAuthority ∈ {self, peer, CAB}`.
- `IncidentCommanderRole`: `OnCallTier ∈ {L1, L2, L3}`, `ServiceScope`, `PageDuty ∈ {primary, secondary}`.
RSGs (named states).
- `DeployerRole`: `Eligible → Authorized → Active → Suspended` (enactable: Active).
- `IncidentCommanderRole`: `OnCall → Engaged → Handover → Rest` (enactable: Engaged).
Assignments.
- `sCG‑Spec_ci_bot#DeployerRole:CD_Pipeline_v7@2025‑04‑01..open`
- `Alex#IncidentCommanderRole:SRE_Prod@2025‑04‑10T08:00..2025‑04‑10T20:00`
StateAssertions (via checklists).
- `DeployerRole/Active`: completed change ticket, green pre‑deploy tests, peer‑review check mark.
- `IncidentCommanderRole/Engaged`: accepted page, situational brief read, comms channel opened.
Enactment.
- A deployment `Work` is valid only with `performedBy: sCG‑Spec_ci_bot#DeployerRole:CD_Pipeline_v7` and an `Active` state asserted for the moment of start.
- Declaring `Incident SEV‑1` is a SpeechAct Work performed by `Alex#IncidentCommanderRole:SRE_Prod` in the Engaged state; it changes deontic conditions (e.g., elevates `RollbackAuthority`).
Roles (families).
- `NormativeStandardRole` (Normative‑Status Episteme) — a document that is the standard in this Context.
- `RequirementRole` (Deontic‑Status Episteme) — a statement that binds behaviour in this Context.
RCS (illustrative).
- `NormativeStandardRole`: `Scope`, `Edition`, `ApplicabilityWindow`.
- `RequirementRole`: `BindingClass ∈ {shall, should, may}`, `TargetRole`, `AcceptanceClauseRef`.
RSGs (named states).
- `NormativeStandardRole`: `Proposed → Adopted → Effective → Superseded` (enactable: N/A — Episteme roles are non‑enactable; they gate others).
- `RequirementRole`: `Draft → Approved → Effective → Retired` (non‑enactable).
Assignments.
- `ISO_26262_ed2.pdf#NormativeStandardRole:AutoSafetyCase_2025@2025‑01‑01..open`
- `REQ‑BRAKE‑001.md#RequirementRole:AutoSafetyCase_2025@2025‑03‑05..open`
Effects (gating, not acting).
- A system’s Work (e.g., a HIL test run) is evaluated against the clauses referenced by `RequirementRole`.
- An Approval SpeechAct (by a CAB chair, who is a `U.System`) may transition `RequirementRole: Draft → Approved`. The Episteme does not “act”; Systems act, Epistemes hold status.
Use these micro‑moves to think and speak cleanly; no tooling required.
- “Who can do this step?” On a `MethodStep`, write `requires: [RoleX]`. In your head, expand: “any `performedBy` whose `role ≤ RoleX`, with a valid window and an enactable RSG state.” Example: `requires: [SurgeonRole]` and `IncisionOperatorRole ≤ SurgeonRole` ⇒ `Dr.Kim#IncisionOperatorRole:Hospital.OR_2025` is admissible iff Active.
- Handover without history loss. Close one window, open another; never delete. `Alex#IncidentCommander:SRE_Prod@08:00..12:00`, `Riya#IncidentCommander:SRE_Prod@12:00..20:00`.
- Independence by construction (SoD). Declare `Developer ⊥ IndependentAuditor`. Then it is impossible (by validation) to have overlapping windows on one holder for both roles.
- Supervision as bundle. Model apprenticeship by requiring `Trainee ⊗ Supervisor` on sensitive steps, or by an RSG state Probation that flips `enactable` only if `SupervisorRole` is also present.
- Same badge name in two Contexts. `LeadEngineer:ProjectPhoenix` ≠ `LeadEngineer:DivisionR&D`. If you must relate them, create a Bridge with CL & loss notes; never rely on the name.
- Documents don’t act; they frame. Replace “the SOP executed X” with `SOP_v4#RequirementRole:SafetyCase` and a SpeechAct “approve run” by `QA_Officer#AuthorizerRole:Plant_2025`.
- Window + state ⇒ permission. Quick mental check: badge valid? (window) ∧ state OK? (RSG) ⇒ go; else no‑go.
- Communicative enactment (approval). `CAB_Chair#ApproverRole:ChangeControl@2026-05-01T10:05` performs a SpeechAct Work “Approve Change‑4711”. Effect: it moves the ApproverRole’s RSG state from Authorized → Approved and opens the Green‑Gate for the operational step “Deploy Change‑4711” (performed by a different RoleAssignment).
Pass these and your RoleAssignments are sound.
Anchoring & locality
- CC‑CTX‑1. Every `role` reference names a role defined in the same `U.BoundedContext` as the assignment.
- CC‑CTX‑2. No cross‑Context equivalence is assumed by label; relations across Contexts live only in Bridges (F.9).
Eligibility & families
3. CC‑ELIG‑1. Behavioral roles (Agential/Transformer/Observer/Speech) MUST bind U.System holders.
4. CC‑ELIG‑2. Status roles (Epistemic‑Status/Normative/Service‑Governance) MUST bind U.Episteme holders.
Role algebra (in‑Context)
5. CC‑ALG‑1. ≤ permits substitution for requiredRoles.
6. CC‑ALG‑2. ⊥ forbids overlapping windows on the same holder.
7. CC‑ALG‑3. ⊗ expands to conjunctive checks at performance time.
Time & gating
8. CC‑WIN‑1. Historic RoleAssignments are never deleted; windows are closed, not erased.
9. CC‑ENACT‑1. Every U.Work has performedBy = some U.RoleAssignment.
10. CC‑ENACT‑2. At the U.Work timestamp, an enactable RSG state is asserted for that assignment (via StateAssertion).
11. CC‑ENACT‑3. If state flips to non‑enactable (e.g., Suspended), enactment is blocked until an Active assertion reappears.
Strict distinction & category hygiene
12. CC‑SD‑1. No Role in a BoM/structure tree; roles do not participate in mereology.
13. CC‑SD‑2. Epistemes never enact Work; only Systems do.
Traceability
14. CC‑TRC‑1. From any U.Work, reviewers can trace performedBy → RoleAssignment → Role → (RCS,RSG) → Context and retrieve supporting StateAssertion evidence.
Run these mental “diff checks” whenever you change roles, contexts, or states.
RSG & gating
- RSCR‑RSG‑E01. After editing an RSG, verify that each enactable state still has a live Conformance Checklist and that historic StateAssertions remain interpretable (no silent renames).
- RSCR‑RSG‑E02. If a state flips enactable ⇄ non‑enactable, re‑evaluate pending or recurring `U.Work` plans (no hidden authorisations).
SoD & windows
- RSCR‑SOD‑E01. On adding `⊥` constraints, scan for overlapping assignments that newly violate SoD; schedule revocations or rescheduling.
- RSCR‑SOD‑E02. On removing `⊥`, confirm that the governance rationale is recorded elsewhere (policy‑change Episteme).
Context churn
- RSCR‑CTX‑E01. When a Context edition updates, freeze prior RoleAssignments; create new assignments in the new Context rather than mutating old ones.
- RSCR‑CTX‑E02. Bridges referencing affected roles are reviewed for CL/loss adjustments.
Eligibility drift
- RSCR‑ELIG‑E01. If a role family changes (e.g., reclassifying Offerer from behavioral to status), audit all assignments for holder‑type violations.
Trace continuity
- RSCR‑TRC‑E01. Spot‑check that `U.Work → RoleAssignment → StateAssertion` chains still resolve after refactors.
- RSCR‑TRC‑E02. Randomly sample old incidents/runs to ensure reproducible authorisation verdicts.
Name stability
- RSCR‑NAME‑E01. If a role label changes, maintain the role identity; treat renamed labels as aliases inside the same Context rather than minting a new role unless RCS/RSG changed materially.
One line. A `U.MethodDescription` names the roles it needs; a `U.Work` cites the concrete `U.RoleAssignment` that enacted the step; the RSG state + window gates that enactment.
For every MethodStep:
- `requiredRoles` — a list of `U.Role` from the same Context as the step. Example: in `Hospital.OR_2025`, the step “Make incision” has `requires: [IncisionOperatorRole]`.
- Role algebra in‑Context applies: if the Context defines `IncisionOperatorRole ≤ SurgeonRole`, then `requires: [SurgeonRole]` also admits holders of `IncisionOperatorRole`.
- Separation of concerns. Capability checks (can the holder actually do it?) belong to `U.Capability` and resource limits; authorization belongs to `U.RoleAssignment` + RSG.
A U.Work record must carry:
- `performedBy` = a concrete `U.RoleAssignment` (not just a person/system name).
- Window gate. The Work timestamp falls inside the assignment’s `@Window`.
- State gate. At that timestamp, an enactable state for the assignment is proven by a `StateAssertion` (the checklist verdict for a named RSG state).
- Role algebra gate. The assignment’s `role` is either one of `requiredRoles` or a specialization (`≤`) thereof; bundles (`⊗`) expand to conjunctions; incompatibilities (`⊥`) forbid overlaps on the same holder.
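The four gates compose into one admission check (informative sketch; the record shapes are hypothetical):

```python
# Illustrative: admit a U.Work record only if all four gates pass.
def admit_work(work, assignment, required_roles, specializes, enactable,
               state_assertions) -> bool:
    """work = (t, performed_by); assignment = (holder, role, start, end);
    state_assertions = list of (role, state, start, end)."""
    t, performed_by = work
    holder, role, start, end = assignment
    if performed_by != (holder, role):                        # performedBy gate
        return False
    if not (start <= t and (end is None or t < end)):         # window gate
        return False
    state_ok = any(r == role and s in enactable.get(role, set())
                   and ws <= t and (we is None or t < we)
                   for (r, s, ws, we) in state_assertions)    # state gate
    if not state_ok:
        return False
    return any(role == req or (role, req) in specializes      # role algebra gate
               for req in required_roles)
```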
- Observation. The Work produces `U.Observation`(s).
- Evaluation. A `U.Evaluation` compares Observations with the AcceptanceClause(s) referenced by a Service or a RequirementRole.
- SoD hook. If the step or evaluation demands independence (e.g., “not performed by its reviewer”), enforce it via `⊥` between `PerformerRole` and `ReviewerRole` in the same Context.
- `U.WorkDescription` (renamed from “WorkPlan”) binds forthcoming steps to candidate RoleAssignments and time windows.
- Checks before the fact. Validate windows (no gaps/overlaps where disallowed), enforce `⊥`, and ensure the expected RSG state will be enactable at the scheduled time (or flag a pre‑flight checklist).
Didactic cue. Think “Step asks for badges; Run cites a badge; Badge must be valid & green.” (Badge = RoleAssignment; valid = window; green = RSG state with a fresh StateAssertion.)
Rule. No cross‑context substitution by name. If a step in Context A needs `Role_A`, and the performer only holds `Role_B` in Context B, you must use an explicit Bridge (F.9) that says how `Role_B@B` relates to `Role_A@A`, with a Congruence Level (CL) and loss notes.
A Bridge may assert, directionally:
- `substitutesFor(Role_B@B → Role_A@A)` with a CL and a list of kept and lost RCS characteristics / RSG nuances.
- The reverse direction does not follow unless declared.
| CL | Meaning (intuitive) | Permit | Guard | Block |
|---|---|---|---|---|
| 3 | Near‑isomorphic sense; no material loss | Yes | None beyond ordinary RSG/Window gates | — |
| 2 | Close but with stated losses | Yes | Require extra evidence (e.g., additional checklist item) or a named reviewer | — |
| 1 | Distant analogy; risky | Exception | Only by explicit Waiver SpeechAct naming the Bridge + loss rationale | Default |
| 0 | Incompatible | No | — | Yes |
Normative hooks. The Trust & Assurance Calculus (B.3) aggregates CL penalties into confidence scores; D.2 may mandate CL≥2 for safety‑critical enactments.
- BPMN Task ↔ PROV Activity. `substitutesFor(Task@BPMN → Activity@PROV)` with CL=2; lost: BPMN control‑flow guards; kept: “bounded occurrence consuming/producing entities.” Effect: a Work logged as `Activity@PROV` may satisfy a step requiring a `Task@BPMN` if an extra guard enforces the BPMN pre‑/post‑conditions.
- Essence Alpha‑State ↔ RoleStateGraph state. `substitutesFor("Alpha.State:Ready"@Essence → "Ready"@RSG)` with CL=2; lost: Alpha‑specific narrative criteria; kept: checklist‑based readiness. Effect: a team may reuse Essence states as labels in RSG, but still maintains local checklists as StateAssertions.
- ITIL Service Owner ↔ RBAC Administrator. Typically CL=1, and the direction `Administrator@RBAC → ServiceOwner@ITIL` is rejected unless a policy Bridge enumerates compensating controls. Effect: prevents “ops admin = service owner” conflations without an explicit waiver.
- Local first. Substitution never overrides in‑Context `⊥`, `⊗`, or `≤`.
- Evidence trail. Every cross‑context enactment relying on a Bridge shall reference its Bridge id in the `U.Work` justification.
- Loss visibility. The Bridge must state which RCS characteristics are preserved vs dropped; if a dropped characteristic is required by the step, substitution is invalid regardless of CL.
Benefits
- No type explosion. Structure stays stable; function lives in RoleAssignments with small, local lattices.
- Traceable authority. Every `U.Work` has a clean chain: performedBy → RoleAssignment → Role → (RCS, RSG) → Context.
- Safe heterogeneity. Different Contexts can use the same badge name differently; conflicts are dissolved by locality and explicit Bridges.
- Didactic economy. One mental form — `Holder#Role:Context@Window` — covers factories, clouds, labs, and libraries.
- Strong SoD. Incompatibilities (`⊥`) and bundles (`⊗`) are first‑class; audits become mechanical.
- Assurance‑ready. RSG + StateAssertions convert checklists into explicit gates; CL penalties quantify cross‑context risk.
- Temporal honesty. Windows encode the ebb and flow of assignments without history loss.
Costs / discipline required
- RoleDescription work. Each Context needs a minimal RoleDescription (name, RCS, RSG, checklists).
- Bridge authorship. Cross‑context work requires explicit Bridges with CL & loss notes.
- Vocabulary hygiene. Teams must stop using context‑less role labels.
- Strict Distinction (A.7). Keeps identity (Holon) separate from assignment (RoleAssignment), behaviour (Method/Work), and knowledge (Episteme).
- Ontological Parsimony (A.11). One universal binding, three tiny in‑Context relations (`≤`, `⊥`, `⊗`), no global role types.
- Universal Core (A.8). The same mechanism works across systems (machines, software, teams) and epistemes (standards, requirements), demonstrated in §9.
- Lexical discipline (E.10.D1 & E.10.D2). Roles are context‑local; descriptions (RCS, RSG) are descriptions of intensional roles, not the roles themselves.
- Alignment with SoTA.
- DDD. Elevates Bounded Context from software to a universal meaning frame.
- UFO / OntoUML. Roles as anti‑rigid, context‑dependent types.
- OMG Essence. RSG generalises Alpha state graphs with explicit checklists and enactability flags.
- RBAC & Assurance. First‑class SoD and auditable windows; CL penalties match modern safety/security practice.
- PROV‑O. Clean separation of Activity (Work) from Association (who held which role).
Builds on / depends on
- A.1 Holonic Foundation — `U.Holon` (holders).
- A.1.1 `U.BoundedContext` — the “Context of meaning.”
- A.2 Role Taxonomy — role families (Agential / Transformer / Observer / Speech; Epistemic‑Status / Normative / Service‑Governance).
- E.10.D1 (D.CTX) & E.10.D2 (Strict Distinction of intensional vs description) — locality & description discipline.
Enables / instantiated by
- A.15 Role–Method–Work Alignment — step gating, performer linking, evaluation hooks.
- B.1 Γ‑algebra — constructors/observers are simply roles enacted by systems.
- B.3 Trust & Assurance Calculus — CL penalties on Bridges; evidence from StateAssertions.
- D.2 Multi‑Scale Ethics — duties attach to roles; SoD encoded via `⊥`.
- F‑cluster (Unification Method) — F.1–F.4 define Contexts and local senses; F.9 defines Bridges consumed here.
Interacts with
- C. Architheories (Sys‑CAL, KD‑CAL, Method‑CAL, CHR‑CAL) — enactment hooks, measurement via Observations.
- Service & Deontics (Part D/E) — obligations and acceptance evaluated against role‑gated Work.
“Give every action a badge with a Context. The badge is a `U.RoleAssignment`: `Holder#Role:Context@Window`. The badge is valid in time (window) and green in state (RSG + StateAssertion). A Method step names the badges it needs; a Work cites the exact badge that enacted it. If a badge comes from another Context, cross with a Bridge and respect its CL penalty. Keep SoD with `⊥`, reuse expertise with `≤`, and require combos with `⊗`. Documents don’t act — they hold status roles; only systems enact Work. With this, factories, clouds, and knowledge all speak the same, small grammar.”
In real projects we must answer two different questions:
- “Can this system do X?” — this is about an ability inherent to the system.
- “Is this system assigned to do X here and now?” — this is about an assignment (a Role assignment) inside a bounded context.
Teams frequently blur the two, and then further mix them with how the work is done (the Method) and what actually happened (the Work). U.Capability isolates ability as a first‑class concept so that you can plan realistically, staff responsibly, and audit cleanly.
- Permission ≠ ability. A Role assignment authorizes execution in a context; it does not prove the system can meet the required WorkScope and WorkMeasures.
- Recipe ≠ ability. A Method says how to do something; it does not guarantee that this holder can meet the target outcomes under the required constraints.
- Execution log ≠ ability. A past Work record does not, by itself, establish a stable ability; conditions may have been favorable or unique.
- Cross‑team confusion. Enterprise terms like “capability”, “service”, and the old “function” are used interchangeably; planning, staffing, and assurance become fragile.
| Force | Tension we resolve |
|---|---|
| Stability vs. change | Ability is a relatively stable property of a system, yet it evolves with upgrades, wear, calibration, and environment. |
| Universality vs. domain‑specificity | One universal notion must serve robots, teams, and software services, while letting each domain keep its own performance vocabulary. |
| Evidence vs. simplicity | We want an ability claim to be evidence‑backed, but the core idea must stay simple enough for planning conversations. |
| Local conditions vs. reusability | Ability depends on conditions (inputs, environment); still, the concept must be reusable across contexts via explicit scoping. |
U.Capability is a dispositional property of a U.System that states its ability to produce a class of outcomes (i.e., execute a class of Work) within a declared U.WorkScope (conditions/assumptions) and meeting stated U.WorkMeasures. It is not an assignment (Role), not a recipe (Method), and not an execution (Work).
One-liner to remember: Capability = “can do (within its WorkScope and measures)”, independent of “is assigned now” or “did do at time t”.
Capability declaration (summary). A capability SHALL declare, as separate items:
- `U.WorkScope` (Work scope) — the set of `U.ContextSlice` under which the capability can deliver the intended `U.Work` (see A.2.6 §6.4);
- `U.WorkMeasures` — measurable targets with units, evaluated on a JobSlice (R‑lane facet);
- `U.QualificationWindow` — the time policy that governs operational admissibility at `Γ_time` (R‑lane facet).
Note. This separation supersedes the legacy “envelope + measures + validity interval” bundle. Work scope is a set of conditions (USM), not a Characteristic; measures are CHR characteristics; a capability packages both.
Reminder (measurement & scope). WorkScope is a set‑valued USM object (membership, set algebra) and not a CHR Characteristic; WorkMeasures are CHR Characteristics with declared scales/units. Admission checks these separately (see § 10.3 WG‑2/WG‑3).
When you describe a capability in a model or a review, anchor it by answering these five didactic prompts:
- Holder: whose ability is this? → a specific `U.System`.
- Context: in which bounded context were the measures established? → `U.BoundedContext` (strongly recommended for clarity and comparability).
- Task family: ability to do what kind of work? → reference the relevant MethodDescription(s) or method family the system can execute.
- WorkScope: Under what conditions? → inputs/resources/environment assumptions (e.g., voltage, pressure, ambient, tool head).
- Performance measures: With what bounds? → CHR‑style measures (throughput, precision, latency, reliability, MTBF…) with ranges/targets.
Optional descriptors that improve trust without adding bureaucracy:
- QualificationWindow: calibration/qualification window for the stated WorkScope (abilities drift).
- Evidence: links to test reports, certifications, prior Work summaries (as Episteme).
- Degradation/upgrade notes: known change points that affect the WorkScope.
Didactic guardrail: Capabilities are stated in positive, measurable terms (“can weld seam type W at ±0.2 mm up to 12/min at 18 °C–30 °C”). Avoid role words (“welder”) or recipe detail (step flows) here.
To keep discussions terse yet precise, teams often write:
- “S#17 can <MethodDescription / task family> @ <WorkScope> → <measures>.”
- Or as a bullet in a capability table scoped to a context, e.g., AssemblyLine_2025 Capability Sheet.
This is not a formal notation—just a consistent way to keep the five prompts in view.
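For teams that do want a machine‑checkable form, one possible record shape is sketched below (informative; the field names are hypothetical, not a normative FPF schema):

```python
# Illustrative capability statement: the five prompts plus optional descriptors.
from dataclasses import dataclass, field

@dataclass
class CapabilityStatement:
    holder: str                    # U.System, e.g. "RobotArm_A"
    context: str                   # U.BoundedContext, e.g. "AssemblyLine_2025"
    task_family: str               # MethodDescription ref, e.g. "Weld_MIG_v3"
    work_scope: dict               # conditions, e.g. {"ambient_C": (18, 30)}
    measures: dict                 # targets, e.g. {"bead_width_mm": (5.8, 6.2)}
    qualification_window: tuple = (None, None)    # validity policy for gating
    evidence: list = field(default_factory=list)  # refs to Epistemes (reports)

robot_arm_weld = CapabilityStatement(
    holder="RobotArm_A", context="AssemblyLine_2025", task_family="Weld_MIG_v3",
    work_scope={"steel_grade": {"S235", "S275", "S355"}, "ambient_C": (18, 30)},
    measures={"bead_width_mm": (5.8, 6.2), "max_seams_per_min": 12},
    qualification_window=("2025-02-01", "2026-02-01"),
)
```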
| If you are talking about… | Use | Litmus test |
|---|---|---|
| Assignment (who is being what, where) | Role → Role assignment | Can you reassign to another holder without changing the system’s composition? If yes → Role. |
| Ability (can do within bounds) | Capability | Would you still say “can do” even if not currently assigned? If yes → Capability. |
| Recipe (how‑to) | Method / MethodDescription | Has inputs/outputs and steps but no date/time. |
| Execution (what happened) | Work | Has a start/end, consumed resources, left a log. |
| External promise | Service | Framed as “we provide/guarantee to others.” |
| Law/model of change | Dynamics | Describes state evolution, not an ability of one system. |
Two useful corollaries
- A step in a Method may require a Role; optionally it may also stipulate a capability threshold (e.g., precision ≤ 0.2 mm). Assignment and ability are checked separately.
- A Service depends on having the needed capabilities and being assigned to deliver under the Service’s context.
- Holder: `RobotArm_A` (`U.System`).
- Task family: seam welding per `Weld_MIG_v3` MethodDescription.
- WorkScope: workpiece steel grades S235–S355; ambient 18–30 °C; argon mix 92–95 %; torch T‑MIG‑07.
- Measures: bead width 6.0 mm ± 0.2 mm; throughput ≤ 12 seams/min; defect rate < 0.5 %.
- Context: `AssemblyLine_2025`.
- Readable claim: RobotArm_A can execute Weld_MIG_v3 within the stated WorkScope at the stated measures (AssemblyLine_2025).
- What this is not: it is not “the welder” — that is a Role assignment when assigned on a shift. It is not the weld recipe — that is the MethodDescription.
- Holder: `PlannerService_v4` (deployed system).
- Task family: job‑shop schedule generation per `JS_Schedule_v4` MethodDescription.
- WorkScope: 50–500 jobs; 5–40 machines; hard deadlines only; network latency ≤ 20 ms.
- Measures: schedule completion within 0.95 of the theoretical optimum (benchmark set); 98 % on‑time delivery in simulation.
- Context: `PlantScheduling_2025`.
- Use: steps that “require ScheduleGeneration capability ≥ 0.90 optimality” will only pass if the holder’s capability meets or exceeds that bound.
- Holder: `FinanceDept` (`U.System` as OrgUnit).
- Task family: period close per `CloseBooks_v3` MethodDescription.
- WorkScope: IFRS; ERP v12; 8 legal entities; staffing ≥ 6 FTE; cut‑off rules X.
- Measures: close in ≤ 5 business days; adjustment error rate < 0.2 %.
- Context: `OperatingModel_2025`.
- Distinction: this is ability; the Service “Provide month‑end close” is the external promise derived from this ability once formally offered.
- Lenses tested: `Arch`, `Prag`, `Did`, `Epist`.
- Scope declaration: universal; holder constrained to `U.System`.
- Rationale: gives the kernel a clean, reusable ability concept so that Role (assignment), Method (recipe), Work (execution), and Service (promise) do not collapse into each other. Keeps planning talk truthful and checkable without introducing governance machinery here.
`U.Capability` is a dispositional property of a `U.System` that states its ability to produce a class of outcomes (i.e., execute a class of Work) within a declared `U.WorkScope` (conditions/assumptions) and meeting stated `U.WorkMeasures`.
CC‑A2.2‑1 (Holder type).
A capability belongs to a U.System (physical, cyber, socio‑technical, or organizational). Capabilities are not assigned to U.Episteme.
CC‑A2.2‑2 (Separation of concerns). A capability is not a Role, not a Method/MethodDescription, not a Work, and not a Service. Models SHALL NOT use capability declarations to stand in for assignments, recipes, executions, or promises.
CC‑A2.2‑3 (WorkScope required for operational use).
When a capability is used to qualify a step or to support planning, its statement MUST name a WorkScope (conditions/assumptions) and WorkMeasures (targets/ranges). Guards that admit Work MUST test that the holder’s WorkScope covers the step’s JobSlice (i.e., WorkScope ⊇ JobSlice) and that WorkMeasures meet the step’s thresholds, with an explicit Γ_time window bound. Without a WorkScope and measures, a capability is advisory and SHALL NOT be used for step admission or assurance claims.
CC‑A2.2‑4 (Context anchor).
Capability statements that drive operational decisions MUST be anchored to a U.BoundedContext (the “Context” whose vocabulary and test norms apply).
CC‑A2.2‑5 (QualificationWindow).
When capabilities are used operationally (e.g., to gate Work), the statement MUST carry a QualificationWindow (calibration window, software version window, etc.) and the guard MUST name the Γ_time window used for the check. Outside the QualificationWindow, the claim is not admissible for gating.
CC‑A2.2‑6 (Past work remains past). Updates to a capability statement SHALL NOT retroactively invalidate already recorded Work. Past Work is judged against the capability declaration that was valid at the time of execution.
CC‑A2.2‑7 (Threshold checks are orthogonal to roles).
A step that requires both a Role and a capability threshold admits a Work only if both are satisfied: (i) the performer’s Role assignment is active in the step window; (ii) the holder’s capability meets or exceeds the threshold with WorkScope ⊇ JobSlice and within the QualificationWindow at the named Γ_time.
CC‑A2.2‑8 (Derived capabilities). If a capability is claimed for a composite system (assembled by Γ), the claim MUST be stated as a property of that composite holder (not of its parts) with clear dependency notes (e.g., “valid while Subsystem B meets X”). Details of derivation belong to the context’s methodology, not to this definition.
CC‑A2.2‑9 (No capability for epistemes). Algorithms, standards, and documents provide evidence or recipes; they do not “have capability.” Only systems do.
CC-A2.2-10 (Γ_time selector in guards).
Scope-sensitive guards (including Method–Work gates) MUST include an explicit Γ_time selector indicating the window W over which ScopeCoverage and WorkMeasures are evaluated.
A step in a Method may define required roles (assignment) and capability thresholds (ability). A Work passes the gate if:
- Assignment check: the Work’s `performedBy` points to a valid Role assignment that covers the step window and satisfies the role relation (including specialization `≤` inside the context).
- Ability check: the holder of that Role assignment has a capability whose WorkScope covers the step’s JobSlice (i.e., is a declared superset) and whose WorkMeasures meet the step’s threshold(s) within `Γ_time(W)`, while the capability’s QualificationWindow includes W.
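A sketch of this two‑gate admission (informative; condition encodings are illustrative: sets for enumerations, `(lo, hi)` pairs for intervals, thresholds as predicates):

```python
# Illustrative: assignment gate AND ability gate within a named Γ_time window t.
def covers(work_scope: dict, job_slice: dict) -> bool:
    """WorkScope ⊇ JobSlice, checked condition by condition."""
    for key, need in job_slice.items():
        have = work_scope.get(key)
        if have is None:
            return False
        if isinstance(have, set):
            ok = need <= have if isinstance(need, set) else need in have
        else:                                   # (lo, hi) interval
            lo, hi = have
            ok = (lo <= need[0] and need[1] <= hi) if isinstance(need, tuple) \
                 else lo <= need <= hi
        if not ok:
            return False
    return True

def passes_gate(assignment_ok, work_scope, job_slice, measures, thresholds,
                qual_window, t) -> bool:
    """thresholds maps measure -> predicate, e.g. {"precision_mm": lambda v: v <= 0.2}."""
    start, end = qual_window
    in_qual = (start is None or start <= t) and (end is None or t <= end)
    measures_ok = all(k in measures and pred(measures[k])
                      for k, pred in thresholds.items())
    return assignment_ok and in_qual and covers(work_scope, job_slice) and measures_ok
```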
Idioms managers can reuse (plain text):
- “S1 requires `IncisionOperatorRole` and Precision ≤ 0.2 mm (OR_2025 norms) in window W.”
- “S2 requires `PlannerRole`, WorkScope ⊇ JobSlice[W], and Optimality ≥ 0.90 on `JS_Schedule_v4`.”
What to avoid:
- Putting “Precision ≤ 0.2 mm” into the Role name. Keep thresholds attached to the step; keep ability on the holder.
Capabilities are stable but not static. Three simple practices keep reasoning honest:
- Qualification windows. Abilities drift. Put a QualificationWindow on the statement (e.g., “valid for software v4.2; recalibration due 2025-09-30”).
- Change points. Note upgrades/downgrades that affect the WorkScope or measures.
- Snapshot at execution. When Work is recorded, it is implicitly tied to the then‑current capability statement; later edits do not rewrite history (see CC‑A2.2‑6).
Manager’s rule of thumb: if you would reschedule a job after a tool change, the capability statement needs a new window.
Γ builds a new holder (a composite system). Its capability is not the algebraic sum of parts; it is an ability of the whole under its own WorkScope.
- Express at the whole. “Cell_3 can place 12 PCB/min with ±0.1 mm” — that is a capability of Cell_3, not of the pick‑and‑place head alone.
- State dependencies. “Valid while Feeder_A delivers reels at ≥ X; vision subsystem calibrated ≤ 72 h ago.”
- Constructor vs. transformer. The ConstructorRole builds the composite (Γ); the resulting TransformerRole may later act on products. Capability belongs to the holder relevant to the action (builder’s ability vs operator’s ability).
A Service is an external promise. It relies on capability but is not identical to it.
- From capability to service. You normally derive a Service by taking a capability and fixing the promise outward (e.g., “We guarantee close ≤ 5 days”).
- From service back to capability. If the promise raises the bar (e.g., tighter SLA), the underlying capability must meet or exceed it under the service’s context.
- Staffing. Delivering a Service still requires Role assignments; capability alone does not authorize action.
Memory aid: Capability = can do; Service = promise to others that we will do.
- Dynamics describe how states evolve (models, laws, trajectories).
- Capability says what this system can achieve within a WorkScope.
- Dynamics often serve as evidence or explanatory models for capability but are not the capability itself.
Physics example: an “isothermal process” (process here as transformation) is a Work instance whose path is explained by a Dynamics episteme; a lab rig’s ability to run that path repeatably is its capability.
- Role‑as‑capability. “Welder role ensures ±0.2 mm.” → Keep role as assignment; put precision in a capability on the holder; put the threshold on the step.
- Recipe‑as‑capability. “We have the ‘Etch_Al2O3’ capability.” → Recipe is Method/MethodDescription; ability is “can execute Etch_Al2O3 within WorkScope E at measures M.”
- Work‑as‑capability. “We did it once, so we can.” → One Work log is not a stable ability; state the WorkScope and measures if you want a capability claim.
- Context‑less claims. “This tool can machine titanium.” → Say where and under what bounds (context + WorkScope + measures).
- Stuffing capabilities into BoM/PBS. Structure lists what it is; capabilities belong to what it can do (the holder), not inside the parts list.
- Service‑as‑capability. “We have the Month‑end Close capability (promise).” → Promise is Service; ability is internal, promise is external.
- Underline WorkScopes. For every “can do” sentence, add conditions and measures; otherwise treat it as background color, not a gate.
- Pull thresholds out of roles. Move “≤ 0.2 mm”, “≥ 0.90 optimality” from role labels into step requirements; leave roles clean (assignments).
- Pin contexts. Add the bounded context name to each capability table (“Capability Sheet — AssemblyLine_2025”).
- Snapshot validity. Add a “valid through” column (software version or calibration horizon).
- Separate recipe/execution. Move flowcharts under MethodDescription, runs under Work; link the capability to the holder with references to those specs.
| Benefits | Trade‑offs / mitigations |
|---|---|
| Truthful planning. Schedulers and managers can ask “can do?” independently of “assigned now?” | Extra column in tables. Adding scope/measures/valid‑through is a small burden that repays itself in fewer reschedules. |
| Safer gating. Steps gate on both role and ability; fewer silent failures. | Two checks instead of one. Keep the checklist simple: badge + bounds. |
| Clear service design. Services become explicit promises built on visible abilities. | Temptation to over‑promise. Keep service SLOs within demonstrated capability measures. |
| Clean separation with Dynamics and PBS/SBS. No more “process” or “function” soup. | Some retraining. Use the litmus tables (from the lexical rules) during onboarding. |
- Builds on: A.1 Holonic Foundation; A.1.1 `U.BoundedContext`; A.2 Role; A.2.1 `U.RoleAssignment`.
- Coordinates with: A.3 (Transformation & role masks); A.15 (Role–Method–Work Alignment).
- Constrains: step design — thresholds belong on steps; BoM/PBS must stay structural.
- Informs: `U.Service` [D] (external promises derive from capabilities); `U.Dynamics` [D] (models used as evidence or predictors); Γ/aggregation (capability of composites is stated at the whole).
- Lexical guards: E.10.x L‑FUNC (do not call capability “function”); E.10.y L‑PROC (do not call capability “process”).
- Capability = can do (within bounds). Assignment ≠ ability ≠ recipe ≠ execution ≠ promise.
- Gate every critical step with two checks: badge (Role assignment) + bounds (Capability).
- Write the Context on every claim: context name, WorkScope, measures, QualificationWindow/valid-through.
Across domains the word service is used for many different things: a server or provider, an API, a procedure, a run, a department, even a product bundle. Such polysemy is productive in everyday speech but toxic in a normative model. FPF needs a single, minimal, trans‑disciplinary meaning that stays stable from cloud computing to public administration and manufacturing utilities.
In the Role–Method–Work alignment, service must say something external‑facing and consumer‑oriented, yet remain separate from how the provider does it (Method/MethodDescription) and what actually happened (Work).
Intuition: a service is the promise you advertise and are judged by; work is what you do to keep that promise; method/spec is how you know what to do.
The words service/service‑level/service use/service access are ambiguous across domains. In the kernel we reserve U.Service for the unified concept below. Other senses (department, server, microservice binary, help‑desk ticket, etc.) must be mapped via U.RoleAssignment to roles (…#ServiceProviderRole:Context), to U.System, U.Method/MethodDescription, or U.Work, inside the appropriate U.BoundedContext. (A short lexical rule L‑SERV will be added to E.10 alongside L‑FUNC/L‑PROC/L‑SCHED/L‑ACT.)
Without a first‑class U.Service, models drift into five recurring errors:
- Provider = Service. Calling the system or team “the service” collapses structure with promise.
- API = Service. Treating an interface/endpoint as the service hides the consumer‑oriented promise (effect + acceptance).
- Process = Service. Mapping a procedure/Method (or a WorkPlan) to “service” confuses recipe/schedule with the external commitment.
- Run = Service. Logging Work as “a service” erases the contract/promise layer and breaks SLA reasoning.
- Business ontology lock‑in. Large domain schemes (e.g., “business service” stacks) are imported wholesale, losing FPF’s universality and comparability across contexts.
| Force | Tension |
|---|---|
| External promise vs internal capability | Service must be consumer‑facing, while capability is provider‑internal. |
| Specification vs execution | Service is a specifiable obligation; value is realised only by runs of Work. |
| Universality vs domain richness | One kernel meaning must cover IT, utilities, healthcare, public services—without absorbing domain taxonomies. |
| Measurability vs privacy | Consumers need SLO/SLA and outcomes; providers want implementation freedom (Method autonomy). |
| Stability vs evolution | Services version and change without invalidating prior Work evidence. |
Definition (normative).
Within a U.BoundedContext, a U.Service is an externally oriented commitment: a context‑local promise that a provider Role will make a specified external effect available to eligible consumers through a declared access and declared acceptance criteria (SLO/SLA‑like targets). A U.Service does not prescribe how the provider fulfils it (that is U.Method/MethodDescription), nor is it the execution (that is U.Work).
- Type: `U.Episteme` (a spec/contract on a carrier).
- Time stance: design‑time concept; judged at run‑time by evidence from `U.Work`.
- Orientation: consumer‑facing (“what you can rely on”), as opposed to capability (“what we can do”).
U.Service {
context : U.BoundedContext, // where the promise is meaningful
purpose : Text/Episteme, // the externally observable effect/value
providerRole : U.Role, // role kind that may provide it (not a person/system)
consumerRole? : U.Role, // optional role kind allowed to consume
claimScope? : U.ClaimScope, // where the promise holds (G) — operating conditions/populations/locales
accessSpec? : U.MethodDescription, // how consumers may request/use (interface/eligibility)
acceptanceSpec : U.Episteme, // targets: SLO/SLA, quality/throughput/latency/accuracy…
realizationSpec?: P(U.MethodDescription), // typical internal specs used by providers (non-binding)
unitOfDelivery?: Episteme, // how delivered units are counted/measured
version? : SemVer/Text,
timespan? : Interval
}
- `providerRole` and `consumerRole` are role kinds; the actual performers are RoleAssignings at run‑time.
- `acceptanceSpec` defines what counts as fulfilled (the test).
- `accessSpec` is how to ask (eligibility, protocol, counter, desk, API).
- `realizationSpec` is only informative in the kernel (“typical methods”); providers retain Method autonomy.
- Not a provider: use the `System#ServiceProviderRole:Context` `U.RoleAssignment`.
- Not a method/recipe: that is `U.Method`/`MethodDescription`.
- Not a run/incident/ticket: that is `U.Work`.
- Not a schedule: that is `U.WorkPlan`.
- Not a capability: capability is provider‑intrinsic ability; service is an outward promise. A service may require certain capabilities, but it is not the capability.
- Not a scope label: do not use applicability, envelope, generality, or validity as scope characteristics; declare Claim scope (G) or Work scope explicitly where needed (A.2.6).
- Design‑time: the context declares Claim scope (G) for acceptance (operating conditions, populations, locales) per A.2.6. The context may assert `bindsCapability(ServiceProviderRole, Capability)`. Providers choose `Method`/`MethodDescription` to realise the service.
- Run‑time: a consumer performs `Work` (e.g., a request/visit) — `performedBy: ConsumerRoleAssigning`. The provider performs `Work` to fulfil the service — `performedBy: ProviderRoleAssigning`. Delivered `Work` instances are evaluated against `acceptanceSpec` and counted via `unitOfDelivery`. SLA/SLO outcomes are therefore functions over Work evidence, not over the Service object itself.
Memory hook: Service promises, Method describes, Work proves.
| Domain | U.Service (promise) | Provider & Consumer (as Roles) | Access (how to ask) | Fulfilment (Work) | Typical acceptance targets |
|---|---|---|---|---|---|
| Cloud/IT | “Object Storage: durable PUT/GET of blobs up to 5 TB” | `CloudTeam#ServiceProviderRole`, `BackupJob#ServiceConsumerRole` | `S3_API_Spec_vX` (MethodDescription) | Each PUT/GET run; data durability checks | Availability ≥ 99.9%, durability 11×9 |
| Manufacturing Utility | “Compressed air at 8 bar in Zone B” | `Maintenance#Provider`, `LineB#Consumer` | Manifold access rules (AccessSpec) | Compressor cycles & delivery logs | Pressure window, purity class, flow ceiling |
| Public Service | “Passport issuance within 20 days” | `Agency#Issuer`, `Citizen#Applicant` | Portal/desk SOP (AccessSpec) | Case handling runs | Lead time ≤ 20 days, defect ≤ 1% |
Key takeaway: the same kernel object models S3, a plant utility, and a government service: a promise with access and acceptance. Everything else (APIs, compressors, clerks, workflows, tickets) is mapped via Role/Method/Work.
The popular service diagrams (provider ↔ access ↔ use ↔ capability/activity) map to FPF as follows:
- Agent (as Service Provider) → `System#ServiceProviderRole:Context` (`U.RoleAssignment`).
- Service Agreement / SLA → `U.Service.acceptanceSpec` (+ optional `WorkPlan` for windows).
- Operating conditions / “where the promise holds” → `claimScope : U.ClaimScope (G)` (or embedded in `acceptanceSpec`) per A.2.6.
- Service Presence / Access → `accessSpec : MethodDescription` (interface/eligibility); actual endpoints are systems playing interface roles.
- Individual Service Use → consumer and provider `U.Work` instances linked to the `U.Service` they fulfil.
- Service‑Enabled Capability / Activity → effects on the consumer side: either a Capability gained/used, or Work performed; do not reify it as a new kernel type.
(Where a domain needs richer structures—catalogs, exposure layers, charging, entitlement—model them in the domain context and relate them to U.Service via U.RoleAssignment and alignment bridges.)
CC‑A2.3‑1 (Type).
U.Service IS a U.Episteme (a consumer‑facing promise on a carrier). It is not a U.System, not a U.Method/MethodDescription, not a U.Work, and not a U.WorkPlan.
CC‑A2.3‑2 (Context).
Every Service MUST be declared inside a U.BoundedContext. Names and meaning are local; cross‑context reuse requires a Bridge (U.Alignment).
CC‑A2.3‑3 (Role kinds, not people/systems).
providerRole and (if used) consumerRole MUST be role kinds (see A.2). Actual performers at run‑time are U.RoleAssignments.
CC-A2.3-4 (Acceptance).
acceptanceSpec MUST be present and MUST define how delivered U.Work is judged (pass/fail/graded) against declared targets (SLO/SLA-like), and MUST declare Claim scope (G) where relevant (operating conditions, populations, locales). Every verdict binds to an explicit Γ_time window.
CC‑A2.3‑5 (Access).
If consumers must request/obtain the service through an interface, accessSpec MUST reference the MethodDescription that defines eligibility and invocation rules (API/desk/SOP). If the service is ambient (e.g., compressed air on a manifold), accessSpec MAY be omitted, but the eligibility condition MUST be stated in the context.
CC‑A2.3‑6 (Unit of delivery).
If performance is counted/charged, unitOfDelivery SHOULD be declared (e.g., “request”, “kWh”, “case”).
CC‑A2.3‑7 (No actuals on Service).
Resource/time actuals and incident logs MUST attach to U.Work only (A.15.1). Services carry no actuals.
CC‑A2.3‑8 (Capability requirement).
If the context requires provider abilities, it MUST express them as bindsCapability(providerRole, Capability) in the context, not by stuffing capabilities into the Service object.
CC‑A2.3‑9 (Versioning & timespan).
Services MAY carry version/timespan. A U.Work that claims/fulfils a Service MUST record which Service version it used.
CC‑A2.3‑10 (Lexical rule).
Unqualified uses of service (server/team/API/process/ticket) MUST be disambiguated per L‑SERV (E.10), mapping to System/U.RoleAssignment/Method[Spec]/Work as appropriate.
CC‑A2.3‑11 (No mereology). Do not place a Service in PBS/SBS or treat it as a part/component. Structural assemblies live in PBS/SBS; Service is a promise.
CC‑A2.3‑12 (Plan–run split).
Windows and calendars belong to U.WorkPlan (A.15.2). Fulfilment evidence belongs to U.Work (A.15.1).
CC-A2.3-13 (Scope lexicon & guards).
Deprecated labels applicability/envelope/generality/validity MUST NOT appear as scope characteristics in guards or conformance blocks. Use U.ClaimScope (G) for epistemes and U.WorkScope for capabilities (A.2.6/A.2.2). Scope-sensitive guards MUST use ScopeCoverage with explicit Γ_time selectors.
CC-A2.3-14 (Bridges & CL). Cross-context mappings via Bridges keep F/G stable; CL penalties apply to R. A mapping MAY recommend narrowing the mapped Claim scope (G) as best practice (A.2.6/B-line).
To keep the promise → evidence path explicit:
- `claimsService(Work, Service)` — the Work instance intends to fulfil the Service (pre‑verdict).
- `fulfilsService(Work, Service)` — the Work instance meets the Service’s `acceptanceSpec` (post‑verdict: pass).
- `acceptanceVerdict(Work)` → {pass, fail, partial, context‑specific grades} — computed by applying `acceptanceSpec` to Work facts.
- `usesAccess(Work, MethodDescription)` — consumer Work that invokes the service via its `accessSpec` (when applicable).

Invariant: `fulfilsService(W, S)` ⇒ `claimsService(W, S)` and `acceptanceVerdict(W) = pass`. Invariant: a Work can claim/fulfil multiple Services only if the context declares a counting policy (no silent double‑counting).
Let `W(S, T)` be the set of Work that `claimsService(·, S)` within time window T, and `W✓(S, T)` those with `fulfilsService`.
- Delivered units: `delivered(S, T) = |W✓(S, T)|` (or a sum per `unitOfDelivery`).
- Rejection rate: `rejectRate(S, T) = 1 − |W✓| / |W|` (declare the handling of `partial`).
- Lead time: average/percentile of `duration(Work)` or of the request→completion delta (declare the definition).
- Availability/Uptime: computed from Work/telemetry events per the context’s definition (declare the availability source).
- Cost‑to‑serve: sum of `Γ_work` over `W✓` per resource category (A.15.1).
All metrics are functions of Work evidence; the Service object is never the bearer of actuals.
Aggregation across time uses Γ_time policies (union vs convex hull) chosen by the KPI owner.
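Two of these KPIs as pure functions over Work evidence (informative sketch; the record shape and verdict labels mirror the relations above but are otherwise hypothetical):

```python
# Illustrative: KPIs computed from W(S, T), never from the Service object.
def delivered(works) -> int:
    """delivered(S, T) = |W-check(S, T)|: fulfilled Work in the window."""
    return sum(1 for w in works if w["verdict"] == "pass")

def reject_rate(works, partial_counts_as_pass=False) -> float:
    """rejectRate(S, T) = 1 - |W-check| / |W|; the 'partial' policy must be declared."""
    if not works:
        return 0.0
    passing = {"pass", "partial"} if partial_counts_as_pass else {"pass"}
    return 1 - sum(1 for w in works if w["verdict"] in passing) / len(works)

window = [  # W(S, T): Work that claimsService(., S) within T, with verdicts
    {"id": "req-1", "verdict": "pass"},
    {"id": "req-2", "verdict": "fail"},
    {"id": "req-3", "verdict": "partial"},
]
assert delivered(window) == 1
assert round(reject_rate(window), 2) == 0.67
```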
- “The microservice is the service.” A microservice binary is a `U.System`. Make it a provider via `System#ServiceProviderRole:Context`; keep the promise in `U.Service`.
- “The API is the service.” The API is typically `accessSpec : MethodDescription` (and systems playing interface roles). The service is the promise judged by `acceptanceSpec`.
- “Our process is the service.” Process/recipe is `U.Method`/`MethodDescription`; schedule is `U.WorkPlan`. The service is what is promised to the consumer.
- “The ticket is the service.” A ticket/case is `U.Work` (and perhaps a `WorkPlan` item). Evidence and outcomes sit on Work, not on Service.
- “Attach cost to the service.” Actual cost/time attach to `U.Work` only (A.15.1). Service metrics are computed from Work.
- “Put service under BoM.” Services are not structural parts. Keep PBS/SBS clean.
- “Hard‑code people into the service.” Name role kinds in `U.Service`; run‑time performers are `U.RoleAssignment`s.
- Name the promises. List 5–15 consumer‑facing promises your context lives by; reify each as `U.Service` with `acceptanceSpec` and, if needed, `accessSpec` and `unitOfDelivery`.
- Separate provider from service. Keep systems/teams as `U.System`; make them providers via `…#ServiceProviderRole:Context`.
- Wire evidence. Ensure every relevant `U.Work` has `claimsService` (and `fulfilsService` post‑verdict).
- Choose metrics. For each Service, define 2–4 KPIs and the exact Work‑based formulas (availability, lead time, rejection rate, cost‑to‑serve), and declare the Claim scope (G) and Γ_time policy used for each KPI.
- Bridge domains. If a business ontology already exists (“business/technical/internal service”), keep it in its own context and map it to `U.Service` via Bridges.
- Tidy language. Apply L‑SERV: ban “service” as a synonym for server/team/process/ticket in kernel narratives; map them explicitly.
- Builds on: A.1.1 `U.BoundedContext`; A.2 `U.Role`; A.2.1 `U.RoleAssignment`; A.2.2 `U.Capability`; A.2.6 `U.Scope`/`U.ClaimScope (G)`/`U.WorkScope`.
- Coordinates with: A.3.1 `U.Method`; A.3.2 `U.MethodDescription`; A.15.1 `U.Work`; A.15.2 `U.WorkPlan`; B‑line Bridges & CL (CL→R; may recommend ΔG narrowing).
- Constrained by lexical rules: E.10 L‑SERV (service disambiguation); also L‑FUNC, L‑PROC, L‑SCHED, L‑ACT.
- Informs: reporting/assurance patterns (service KPIs, SLA dashboards); catalog/exposure patterns in domain contexts.
- Service = Promise. What we advertise and are judged by.
- Method/Spec = Recipe. How we usually do it (provider‑internal).
- Work = Evidence. What actually happened and consumed resources.
- Provider/Consumer = Roles. Assignment via RoleAssigning at run‑time.
- Metrics from Work. Uptime, lead time, quality are computed from Work, not from the Service object.
- Keep PBS/SBS clean. Services are not parts; they are promises.
This pattern defines how a knowledge artefact (“episteme”) serves as evidence for a specific claim or theory inside a bounded context. It is a non‑behavioural role enacted via `U.RoleAssignment`; the binding must declare the target claim, the claim‑scope, and a timespan of relevance. Evidence is a classificatory status of an episteme; it is not an action and it is not an assignment of an actor.
FPF separates what exists (holons and their kinds) from what acts (systems under roles performing work) and from what is known (epistemes carried on symbols). Roles are contextual masks that holons may wear; role meanings are local to a U.BoundedContext. In this setting, we need a kernel‑level way to say that this episteme counts as evidence about that claim, here, and for this period, without confusing evidence with services, methods, or work.
Intent. Provide one uniform, discipline‑neutral role by which an episteme can be assigned as evidence, while keeping:
- agency on systems performing `U.Work` (not on epistemes);
- promise and contractual language on `U.Service` (not on evidence);
- recipe and eligibility on `U.Method`/`U.MethodDescription` (not on evidence).
- Anthropomorphising epistemes. Models say “the paper proves…”, implicitly treating a document as an actor.
- Citation without scope. Links exist but lack explicit target claim, applicability scope, and time window.
- Deductive versus empirical conflation. A formal derivation and a lab dataset are both called “support” although their semantics and ageing differ.
- Staleness and drift. Empirical evidence ages; without explicit validity windows, stale evidence keeps influencing conclusions.
- Cross‑context leakage. Evidence is interpreted as “global,” skipping the bridge that is required to move meaning across contexts.
| Force | Tension to resolve |
|---|---|
| Universality versus domain practice | One role must cover proofs, datasets, replications, benchmarks, model fits, calibrations. |
| Static truth versus ageing confidence | Axiomatic proofs are stable relative to a theory; empirical evidence decays and requires refresh. |
| Local meaning versus reuse | Meaning is context‑local; reuse must pass through explicit bridges, not tacit “global truth.” |
| Clarity versus brevity | Kernel must stay expressive without importing domain governance or tooling procedures. |
Term. `U.EvidenceRole` — a non‑behavioural role that a `U.Episteme` may play inside a `U.BoundedContext` to serve as evidence for a declared target claim (or theory/version).
The target claim, its applicability scope, polarity, weighting model, and other normative facets are properties of the U.EvidenceRole definition itself within that bounded context.
How it is enacted.
The role is enacted by a standard U.RoleAssignment that connects:
RoleAssigning {
holder : U.Episteme, // the artefact: paper, proof, dataset, report…
role : U.EvidenceRole, // a context-defined role with normative properties
context : U.BoundedContext, // where the role definition is valid
timespan?: Interval // optional: relevance window for this specific assignment
}
The normative properties of the role (e.g., claimRef, claimScope, polarity, weightModelRef) are set in the role’s definition in the given U.BoundedContext, not in the binding.
U.RoleAssignment carries only the linkage between a concrete episteme and a role already defined and attributed in that context.
Non‑behavioural guard. The holder is an episteme; any actions that produced it are `U.Work` performed by systems. Evidence classifies an artefact’s evidential status; it does not itself enact behaviour.
Minimal readable grammar (informative).
`<Episteme>#<EvidenceRole>:<Context>` — where `<EvidenceRole>` in `<Context>` already normatively specifies the polarity, target claim, claim‑scope, and optional weight.
Examples.
- In `Cardio_2026`, `ModelFitEvidenceRole` is defined with: `claim = β-blocker > placebo`, `claimScope = adults 40–65`, `polarity = supports`, `weightModelRef = KD:SupportMeasure`. Binding: `Trial-R3.csv#ModelFitEvidenceRole:Cardio_2026`.
- In `Theory_T`, `AxiomaticProofRole` is defined with: `claim = Theorem-12`, `claimScope = all x ∈ D`, `polarity = supports`. Binding: `Lemma-12.proof#AxiomaticProofRole:Theory_T`.
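A sketch of validating such a binding against the locality and eligibility rules (informative; the dictionaries stand in for the context’s role definitions and claim graph, and all names are hypothetical):

```python
# Illustrative: an evidence binding is valid only if the holder is an Episteme,
# the role is defined in the named context, and its claimRef resolves there.
ROLE_DEFS = {
    ("Cardio_2026", "ModelFitEvidenceRole"): {
        "claimRef": "beta-blocker>placebo",
        "claimScope": "adults 40-65",
        "polarity": "supports",
    },
}
CLAIM_GRAPH = {"Cardio_2026": {"beta-blocker>placebo"}}

def valid_binding(holder_kind: str, role: str, context: str) -> bool:
    if holder_kind != "Episteme":                  # holder-type rule
        return False
    facets = ROLE_DEFS.get((context, role))        # context anchor
    if facets is None:
        return False
    claims = CLAIM_GRAPH.get(context, set())       # no dangling claimRef
    return (facets["claimRef"] in claims
            and facets["polarity"] in {"supports", "refutes", "constrains", "neutral"})

assert valid_binding("Episteme", "ModelFitEvidenceRole", "Cardio_2026")
assert not valid_binding("System", "ModelFitEvidenceRole", "Cardio_2026")
```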
U.EvidenceRole is a role kind refined by specialisation (no mereology of roles). The recommended, substrate‑neutral specialisations are:
5.1 Axiomatic line (deductive inside a fixed theory)
- `AxiomaticProofRole` — a proof that entails a target statement in a declared `U.TheoryVersion`.
- `CounterexampleRole` — a witness that refutes a universally quantified claim in the theory.
- `DerivationRole` — a lemma or intermediary derivation establishing a dependency in the proof spine.
- `EquiconsistencyEvidenceRole` — a metaproof establishing equiconsistency or relative strength, often used to constrain theory choice.
Semantics. In a fixed theory version, these roles are boolean and non‑decaying. If the axiom base or definitions change, the binding must be re‑issued for the new version; there is no silent carry‑over.
5.2 Experimental line (empirical, inductive, and model‑selection)
- `ObservationEvidenceRole` — raw or processed observations under a declared method.
- `MeasurementEvidenceRole` — calibrated measurements with an error model and traceability.
- `ModelFitEvidenceRole` — comparative fit or likelihood of data against competing models; supports one over another within the declared scope.
- `ReplicationEvidenceRole` — independent replication status (full, partial, failed).
- `CalibrationEvidenceRole` — evidence about the measurement chain (instrument validity), typically constraining claims.
- `BenchmarkEvidenceRole` — standardised tasks or suites producing comparable scores.
Semantics. Experimental roles require a claim‑scope and a relevance timespan. Their contribution to confidence is graded and may decay; the same artefact may carry multiple bindings for different claims or scopes (distinct role assignments).
Specialisation, not stacking. Do not build chains like “transformer‑agent‑observer role.” A system enacts behavioural roles (e.g., `TransformerRole`) to perform work; an episteme enacts `U.EvidenceRole` to classify its evidential function. Keep enactment lines separate.
| If you are talking about… | Use in FPF | Why |
|---|---|---|
| Who acted and consumed resources | `U.System` with `U.RoleAssignment` performing `U.Work` | Only systems act; work records resource deltas. |
| What was promised to a consumer | `U.Service` (promise with access and acceptance) | A promise is not evidence; it is judged from work. |
| How work should be done or invoked | `U.Method` / `U.MethodDescription` | Recipes and interfaces are not evidence. |
| What counts as evidence for a claim | `U.Episteme` holding `U.EvidenceRole` via `U.RoleAssignment` | Evidence is a status of an artefact relative to a claim in a context. |
| Moving meaning across contexts | An explicit bridge/alignment pattern in the receiving context | Role meanings are context‑local by design. |
1. Holder type. U.EvidenceRole is held by a U.Episteme only; never by a system, work, method, or service. # [M‑0]
2. Context anchor. Every binding must name a U.BoundedContext; meaning is local and does not propagate implicitly.
3. Target claim. Every binding must reference a resolvable claim or theory statement and declare polarity {supports | refutes | constrains | neutral}.
4. Claim‑scope. Every binding must declare an applicability scope; for the axiomatic line this can be the theory’s domain.
5. Timespan. Every binding must declare a relevance interval. Axiomatic roles may be open‑ended for a fixed theory version; experimental roles require finite or refreshable windows. Gating: narrative only at M‑0; explicit timespan & decayClass at M‑2; version fence & proofChecks at F‑*. # [M/F]
6. Non‑self‑evidence. The provenance of experimental bindings must trace to external U.Work performed by systems under roles; an episteme cannot “evidence itself.”
7. No mixing of stances. Do not mix design‑time proof artefacts and run‑time traces in one provenance chain; relate them via separate bindings if needed.
8. No role mereology. Roles have no parts; refine by specialisation only. This prevents confusing “sub‑role” with “subsystem”. Profile note: the constraint is universal (applies to all profiles). # [all]
Minimal readable grammar (informative).
<Episteme>#<EvidenceRole>:<Context> — where <EvidenceRole> is defined inside <Context> with normative facets (claimRef, claimScope, polarity, optional weightModelRef, decay policy).
Examples (illustrative only):
Cardio (empirical line)
Role definition in Cardio_2026:
ModelFitEvidenceRole with
claimRef = (β-blocker > placebo), claimScope = adults 40–65, polarity = supports, weightModelRef = KD:SupportMeasure.
Binding:
Trial-R3.csv#ModelFitEvidenceRole:Cardio_2026
Graph theory (formal line)
Role definition in GraphTheory:
AxiomaticProofRole with claimRef = Theorem-12, claimScope = all finite DAG, polarity = supports (entails), fenced to TheoryVersion = 3.1.
Binding:
Lemma-12.proof#AxiomaticProofRole:GraphTheory
This section deepens the definition of U.EvidenceRole by specifying which normative facets are attached to its definition within a U.BoundedContext, how decay is handled, what provenance anchors are required, and how the role contributes to assurance computation.
Every U.EvidenceRole definition within a U.BoundedContext MUST declare a claim-scope record. This record ties the role’s meaning to the exact target claim and its claim scope, and aligns with the typed-claim form used in B.3:
| Field | Meaning | Norms |
|---|---|---|
| claimRef | Identifier of the supported claim | MUST resolve within the context’s claim graph; dangling IDs forbidden. |
| claimHost | The holon whose claim is supported | MAY be U.System or U.Episteme. |
| epistemicMode | formal or postulative | MUST be present; governs stability and decay rules. |
| assuranceUse | TA / VA / LA | Declares whether the evidence functions as typing, verification, or validation input (B.3.3). |
| applicability | Domain subset (envelope) | Optional for formal proofs; REQUIRED for empirical evidence (units, constraints, parameter ranges). |
| resultKind | Kind of content on the carrier | Examples: theorem/proof obligation; dataset; calibration; model-fit result. |
| notes | Additional context | Pointers to SCR/RSCR entries; congruence rationale; bridge IDs if imported from another context. |
Evidence is perishable unless proven otherwise.
- Formal (axiomatic) roles MAY have an open-ended timespan.to = null only if fenced to a specific U.TheoryVersion and justified in notes.
- Empirical roles MUST have a finite or refreshable timespan. Decay parameters (half-life, renewal window) are set by the context policy and referenced in the role definition.
When the relevance window closes (validUntil reached), the evidence incurs Epistemic Debt (ED). Per B.3.4, debt must trigger one of three managed actions:
- Refresh — new work produces fresh evidence for the same claim and scope.
- Deprecate — role is retired; claim support is reduced or removed.
- Waive — explicit steward decision to accept the stale evidence temporarily.
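A concept-level sketch of this debt trigger, assuming hypothetical field names (valid_until) and a DebtAction enum that is not part of FPF's normative vocabulary:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class DebtAction(Enum):
    REFRESH = "refresh"      # new work, fresh evidence for the same claim and scope
    DEPRECATE = "deprecate"  # role retired; claim support reduced or removed
    WAIVE = "waive"          # explicit steward decision, temporary acceptance

@dataclass
class EvidenceBinding:
    binding_id: str
    valid_until: Optional[date]   # None = open-ended (formal line, version-fenced)

def incurs_epistemic_debt(b: EvidenceBinding, today: date) -> bool:
    """A binding incurs ED once its relevance window has closed."""
    return b.valid_until is not None and today > b.valid_until

# Example: a stale binding must be settled by exactly one managed action.
stale = EvidenceBinding("ERB-042", date(2025, 1, 1))
if incurs_epistemic_debt(stale, date(2026, 1, 1)):
    action = DebtAction.REFRESH   # chosen per context policy, never silently ignored
```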
Each U.EvidenceRole MUST anchor into the Evidence–Provenance DAG (A.10):
- Formal: verifiedBy → proof artefact carrier(s), with optional checkedBy metadata for proof-checker runs.
- Empirical: validatedBy → data carriers from observed U.Work runs; protocolRef → U.MethodDescription; fromWorkSet → IDs of those runs.
- SCR/RSCR anchors (A.10) are mandatory for all carriers.
No self-evidence rule: the producing U.Work must have been performed by a system in an external role; an episteme cannot “prove itself” without independent generation.
A U.EvidenceRole classifies an artefact; its contribution to the target claim’s assurance tuple ⟨F, G, R⟩ is computed in B.3 using:
- F (formality) — lower-bounded by the least formal constituent in the provenance path.
- G (ClaimScope) — limited to the claim scope; unsupported regions are dropped (WLNK).
- R (reliability) — computed as:
R_eff := max(0, min_path( min_claimR(path) − Φ(CL_min(path)) ))
Here:
- min_claimR(path) is the smallest justified reliability along the path from the role to the claim in the context’s support graph.
- CL_min(path) is the lowest congruence level on that path.
- Φ is the penalty function defined by the context policy; it must be monotonic (lower CL → greater penalty).
If any element in the support chain is postulative, the aggregate epistemicMode is postulative.
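A minimal transcription of this computation in Python, assuming each path is supplied as a list of (reliability, CL) pairs and using a hypothetical linear Φ; real contexts declare their own penalty policy:

```python
from typing import Callable, List, Sequence, Tuple

Path = Sequence[Tuple[float, int]]  # (justified reliability, congruence level) per edge

def r_eff(paths: List[Path], phi: Callable[[int], float]) -> float:
    """R_eff := max(0, min over paths of (min_claimR(path) - phi(CL_min(path))))."""
    per_path = []
    for path in paths:
        min_r = min(r for r, _cl in path)    # weakest justified reliability on the path
        cl_min = min(cl for _r, cl in path)  # lowest congruence level on the path
        per_path.append(min_r - phi(cl_min))
    return max(0.0, min(per_path))

# Hypothetical monotone penalty: lower CL means a greater penalty (CL in 1..4, say).
linear_phi = lambda cl: 0.1 * (4 - cl)

print(r_eff([[(0.9, 3), (0.8, 4)], [(0.95, 2)]], linear_phi))  # -> 0.7
```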
TA/VA/LA distinctions:
- TA (Typing assurance) — primary effect is to improve CL on edges, reducing penalties in R computation.
- VA (Verification assurance) — primarily raises F and the logical component of R.
- LA (Validation assurance) — raises empirical R and constrains G to the validated envelope.
Role definition (in GraphTheory)
AxiomaticProofRole
- claimRef = Theorem-12 (“Every finite acyclic graph admits a topological ordering”),
- claimScope = all finite DAG,
- polarity = supports (entails),
- epistemicMode = formal,
- assuranceUse = VA,
- fenced to TheoryVersion = 3.1 (open-ended relevance as long as that version stands).
Binding(s)
Lemma-12.proof#AxiomaticProofRole:GraphTheory
Provenance sketch
verifiedBy → Carrier#Proof_p1 (machine-checked), usedCarrier → Carrier#Def_graph.
Effect on assurance (informative) High F (machine-checked proof), G = “finite DAG”, R from proof-obligation integrity; potential CL penalty if an ontology bridge is used.
Role definition (in Cardio_2026)
ModelFitEvidenceRole
- claimRef = “Sensor S achieves ±0.3 °C accuracy in [0,70] °C under lab conditions L”,
- claimScope = temperature [0,70] °C; humidity 30–50%; environment L,
- polarity = supports,
- epistemicMode = postulative,
- assuranceUse = LA,
- weightModelRef = KD:SupportMeasure,
- decayPolicy = annual recalibration.
Binding(s)
Trial-R3.csv#ModelFitEvidenceRole:Cardio_2026
Provenance sketch
validatedBy → Carrier#Dataset_calib_v5, protocolRef → MethodDescription#ThermoCalibration, fromWorkSet → {cal_run_0502, cal_run_0503}.
Effect on assurance (informative) F from formalised procedure, G = measured envelope, R from replication and CL on unit mapping; R decays after the policy window unless refreshed.
CC-ER-01 (Type & holder)
U.EvidenceRole MUST be held by a U.Episteme via U.RoleAssignment. Systems, services, methods, or works MUST NOT hold this role.
CC-ER-02 (Context)
Every binding MUST name a U.BoundedContext. Role meanings are local and do not propagate without an explicit bridge.
CC-ER-03 (Target claim)
Every binding MUST reference a resolvable claimRef@version and declare polarity ∈ {supports | refutes | constrains | neutral}.
CC-ER-04 (Claim-scope)
Every binding MUST declare claimScope. For formal proofs this may be the theory’s domain; for empirical evidence it is mandatory to state population, environment, and parameter envelope.
CC-ER-05 (Timespan)
Every binding MUST carry a non-empty timespan. Formal line may have open-end only if fenced to a fixed theory version; empirical line must have a finite or refreshable end.
CC-ER-06 (Provenance)
Every binding MUST anchor into the EPV-DAG (A.10). For empirical line, fromWorkSet must point to external U.Work; self-evidence is prohibited.
CC-ER-07 (Reproducibility)
Empirical bindings MUST state reproducibility ∈ {replicated-independent, replicated-internal, not-replicated, irreproducible}, with references where applicable.
CC-ER-08 (Weight discipline)
If weight.score is present, weight.modelRef MUST be named and all required inputs supplied.
CC-ER-09 (Cross-context)
Cross-context reuse MUST go via U.Alignment bridge; record CL_min on the path for assurance penalties.
CC-ER-10 (Version fences) If the claim or episteme versions, create a new binding; do not mutate in place.
CC-ER-11 (No role-of-role) Roles never hold roles; there is no chaining of behavioural sub-roles into non-behavioural ones.
CC-ER-12 (Terminology) Use specialisation for role refinements; reserve sub for mereology of systems or artefacts only.
CC-ER-13 (Lane declaration)
Every binding SHALL declare assuranceUse ∈ {TA | VA | LA} and, for empirical (LA) bindings, expose timespan/valid_until and decayPolicy so that SCR can report lane‑separated contributions and freshness (B.3).
| Anti-pattern | Symptom | Remedy |
|---|---|---|
| Data speaks for itself | Binding with no context or claimRef. | Anchor to context and explicit claim; set polarity and timespan. |
| Evidence = the work run | Treating U.Work as the episteme. | Keep factual record on U.Work; create a report episteme to bind. |
| Attach to system | Holder is U.System. | Holder must be an episteme; system may be claimHost, not role holder. |
| Global evidence | Using one binding across contexts with no bridge. | Create explicit U.Alignment bridge; declare loss policy. |
| Ad-hoc weight | Number assigned with no declared model. | Use context-declared model; supply required inputs. |
| Service proves itself | Service KPI logged as evidence. | KPIs come from U.Work; service evaluation can be bound as evidence. |
| Scope blur | Mixing design-time and run-time provenance in one EPV. | Split into separate bindings; relate via claim graph or bridge. |
These operators extend E.6.1 citation graph capabilities for evidence analysis inside a U.BoundedContext:
12.1 Per-claim evidence
evidenceFor(claim, t?) → Set[EvidenceRoleAssigning]
counterEvidenceFor(claim, t?) → Set[EvidenceRoleAssigning]
weight(claim, t?, model?) → score # returns ordinal at M‑mode; numeric at M‑2/F‑mode. # [M/F]
12.2 Decay and windows
window(claim, [t0,t1]) — filter bindings by timespan.
decayedWeight(binding, t) — apply context decay policy.
12.3 Replication and provenance
replicationLedger(binding) → Ledger
isIndependentReplication(binding) → boolean
12.4 Formal line hooks
proofChecks(binding) → {assistant, status, hash, kind∈{classical, constructive}} # [F‑*]
dependsOnAxioms(binding) → Set[AxiomId]
12.5 Empirical line hooks
fromWorkSet(binding) → Set[WorkId]
protocol(binding) → MethodDescriptionId
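A sketch of how the per-claim and decay operators might look in Python; the Binding shape, the field names, and the exponential half-life policy are all assumptions, not FPF-mandated signatures:

```python
from dataclasses import dataclass
from datetime import date
from math import exp, log
from typing import List, Optional

@dataclass
class Binding:
    claim_ref: str
    polarity: str            # supports | refutes | constrains | neutral
    weight: float
    valid_from: date
    valid_until: Optional[date]

def in_window(b: Binding, t: date) -> bool:
    return b.valid_from <= t and (b.valid_until is None or t <= b.valid_until)

def evidence_for(claim: str, bindings: List[Binding], t: date) -> List[Binding]:
    return [b for b in bindings
            if b.claim_ref == claim and b.polarity == "supports" and in_window(b, t)]

def counter_evidence_for(claim: str, bindings: List[Binding], t: date) -> List[Binding]:
    return [b for b in bindings
            if b.claim_ref == claim and b.polarity == "refutes" and in_window(b, t)]

def decayed_weight(b: Binding, t: date, half_life_days: float = 365.0) -> float:
    """One possible context decay policy: exponential decay with a declared half-life."""
    age_days = (t - b.valid_from).days
    return b.weight * exp(-log(2) * age_days / half_life_days)
```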
Builds on:
A.2 U.Role, A.2.1 U.RoleAssignment (role as mask, binding as assignment), A.10 Evidence Anchoring (EPV-DAG), B.3 Trust & Assurance Calculus.
Coordinates with:
A.3.2 U.MethodDescription (protocols, proof obligations), E.6.1 Epistemic Roles via U.RoleAssignment (didactic gateway).
Informs:
KD-CAL (knowledge dynamics, assurance cases), Norm-CAL (policy claims with evidence), planned U.ServiceEvaluation (services judged from work and reported as epistemes with evidence bindings).
- Enumerate claims: For each evidence collection, identify claims and create explicit bindings with polarity.
- Separate work from reports: Facts stay on U.Work; create report epistemes to bind as evidence.
- Name the calculus: Replace free-form confidence with a context-declared weight model and required inputs.
- Fence by version/time: Bindings carry timespan and version fences; add a decay class if applicable.
- Bridge explicitly: Cross-context evidence goes through U.Alignment, not by fiat.
These are short reminders for non-specialist readers to apply U.EvidenceRole correctly:
- Evidence ≠ Work — Work is what happened; Evidence is a documented argument (episteme) about a claim in a context.
- Local, not global — Evidence binds in a room (context). Outside that room, you need a bridge (U.Alignment).
- Two lines of trust — Formal line: proof artefacts checked in a declared theory version. Empirical line: observations from Work under a declared method. Both are epistemes wearing U.EvidenceRole.
- Services are promises; Work proves — KPIs are measured from Work; service evaluations can be bound as evidence for policy claims.
- Specialise, don’t stack — Use specialisations of U.EvidenceRole to refine meaning; never chain behavioural roles into evidence.
These stubs allow concept-level validation of bindings, without implying any specific tooling.
SCR-A2.4-E1 (Binding integrity)
Assert: holder is U.Episteme; context present; claimRef resolves; timespan non-empty; provenance anchored to EPV.
SCR-A2.4-E2 (Weight discipline)
Assert: if weight.score present → weight.modelRef present and all required inputs provided; recompute to check.
SCR-A2.4-E3 (Traceability)
For empirical bindings: binding → fromWorkSet → each U.Work has performer U.RoleAssignment and timestamps; no missing hops.
RSCR-A2.4-R1 (Regression on version bump)
When claimRef or holder episteme versions change, ensure new bindings are created; no in-place mutation.
RSCR-A2.4-R2 (Decay check)
Bindings past timespan.to or with expired decayClass are flagged for review per context policy.
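A minimal executable rendering of SCR-A2.4-E1/E2, assuming a dictionary-shaped binding record whose keys mirror the EvidenceRoleAssigning stub below; all key names are illustrative:

```python
def check_binding(b: dict, claim_graph: set) -> list:
    """Concept-level assertions for SCR-A2.4-E1/E2; returns human-readable problems."""
    problems = []
    if b.get("holder_kind") != "U.Episteme":
        problems.append("E1: holder must be a U.Episteme")
    if not b.get("context"):
        problems.append("E1: context missing")
    if b.get("claimRef") not in claim_graph:
        problems.append("E1: claimRef does not resolve (dangling ID)")
    if not b.get("timespan"):
        problems.append("E1: timespan empty")
    if b.get("weight_score") is not None and not b.get("weight_modelRef"):
        problems.append("E2: weight.score present without weight.modelRef")
    return problems
```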
EvidenceRoleAssigning:
  id: ERB-…
  context: <BoundedContextId>
  holder: <EpistemeId>        # paper/proof/dataset/report
  role: <EvidenceRoleId>      # defined within the context, with normative properties
  timespan?: {from: ISO-8601, to: ISO-8601|null}   # optional assignment window
  provenance:
    formal?: { theoryRef: <TheoryId>, proofArtifactRef: <CarrierId>, checkedBy?: <ProofCheckId> }
    empirical?: { protocolRef: <MethodDescriptionId>, fromWorkSet: [<WorkId>...], dataCarrierRef?: <CarrierId> }

Memory hook: “Evidence binds a document to a claim in a Context, for a time, with a trail.” (document = episteme; claim = scoped thesis; Context = bounded context; time = timespan/decay; trail = provenance)
Acceptance cross-checks before publishing a binding:
- Holder: Is it a U.Episteme?
- Context: Is the U.BoundedContext declared?
- Claim: Does claimRef resolve? Is polarity set?
- Scope: Is claimScope complete? For empirical, are population/env/parameters given?
- Timespan: Is it finite or fenced (formal line)?
- Provenance: Is EPV anchored? Any self-evidence?
- Reproducibility: For empirical, is it declared?
- Weight: If scored, is the model named and inputs complete?
- Cross-context: If imported, is the U.Alignment bridge in place with CL_min recorded?
- No role-of-role: Is this role bound directly to an episteme without chaining behavioural roles?
A role is not only a name; it is a trajectory of admissible states that governs when, and under which conditions, a holder of that role may enact steps of a U.MethodDescription. FPF therefore introduces a first‑class intensional object:
U.RoleStateGraph (RSG) — the finite, named state space of a U.Role in a given U.BoundedContext, with transitions guarded by conditions over the Role Characterisation Space (RCS) and contextual events.
The RSG is the gate between assignment (U.RoleAssignment) and action (U.Work). A step may be performed only when the performer’s assignment is in an enactable RSG state at the relevant Window (time slice) and this is proven by a contemporaneous StateAssertion (verdict of U.Evaluation against the state’s Checklist).
- Readiness blur. Teams conflate “has the badge” with “is fit to act now”. Without explicit states (Ready, Calibrated, Authorized, Suspended…), enactment checks dissolve into ad‑hoc judgement.
- Checklist drift. Criteria for “ready/approved” live in scattered documents; there is no single conceptual anchor tying them to the role.
- Workflow/role confusion. “State” of a workflow (according to workplan) is mistaken for the state of a role (eligibility to enact).
- Status ≠ enactment. Epistemic/Normative roles (e.g., NormativeStandard, ApprovedSpecification) need statuses that are not enactable, yet are used to gate decisions.
- Cross‑context substitution by name. Labels like Approved or Ready silently cross contexts with different criteria; the loss is hidden and unaudited.
Consequences. Violations of Strict Distinction (A.7) and Didactic Primacy (E.12): ambiguous authority to act, unsafe SoD, and non‑reproducible evaluations.
Think of a Role as a mask, and the RSG as the traffic lights for that mask inside one context of meaning.
- The nodes are named states (Ready, Degraded, Suspended, Approved, Obsolete…).
- The edges are transitions with guards (checkable conditions over RCS characteristics and contextual events, e.g., CalibrationAge ≤ 30d; AuthorizationSpeechAct recorded).
- Each state is paired with a Checklist (criteria you test to issue a StateAssertion for a given Window).
- Some states are enactable = true (green lights); others are not enactable (status lights) and therefore can gate decisions but cannot directly authorize U.Work.
One sentence. RSG says when a badge is green. The Checklist proves it, the StateAssertion records it, and the Method step may proceed.
- U.RoleStateGraph (RSG). Intensional object owned by (Role, Context). Finite set of named States and typed Transitions with guards.
- RSG.State. Intensional named place. Properties:
  - enactable ∈ {true,false} — whether being in this state authorizes enactment of steps that require this role.
  - initial?, terminal? — optional markers for lifecycle reasoning.
- RSG.Transition. Edge state_i → state_j with Guard (predicate over RCS characteristics and/or contextual events such as U.SpeechAct, U.Observation, U.Evaluation results).
- RCS (Role Characterisation Space). The characteristic bundle that characterises this role in this Context (e.g., CalibrationAge, AuthorizationScope, FatigueIndex, IndependenceFlag, EvidenceFreshness). (Defined in A.2 Role Taxonomy / RoleDescription.)
- State Checklist (description). A RoleDescription component that enumerates criteria to test whether a holder can legitimately be treated as in a given state for a Window. (Description, not the state itself.)
- U.Evaluation → StateAssertion (verdict). The result of applying the state’s Checklist to a concrete holder at a time window, yielding a verdict “IN‑STATE(s) @Window” with provenance to observations/evidence.
- Window. Temporal interval to which the StateAssertion applies (e.g., [2025‑05‑01, 2025‑06‑01]).
Strict distinction note.
- RSG and its States are intensionals (what the role is allowed to be).
- Checklists and StateAssertions are descriptions/evaluations (how we know a specific holder is in that state now).
- Not a workflow. RSG transitions do not encode task order; they encode eligibility changes of the role.
- Not a capability list. RSG is authorization/readiness over time, distinct from U.Capability (ability).
- Not a global status set. RSG lives inside one Context; the label Ready in another Context is a different state unless bridged (F.9).
- Not a log. RSG is not a history. Histories are StateAssertions over Windows; U.Work is the record of enactments.
- Not a document lifecycle. Epistemic role RSGs can look like document lifecycles, but they remain role‑status graphs; the carrier lifecycle stays separate (A.7, U.Carrier).
(Full formal clauses in Part 2/4; listed here for orientation.)
- Locality. RSG(Role, Context) is defined only within that U.BoundedContext.
- Finiteness. The State set is finite and named.
- Checklist pairing. Every State has a Checklist in the Role’s RoleDescription; every enactable State has at least one observable criterion.
- Green‑gate discipline. A Method step requiring Role may proceed only if a contemporaneous StateAssertion exists for an enactable State.
- No silent cross‑context reuse. Cross‑Context reuse requires a Bridge with CL and loss notes; local ⊥/≤/⊗ always prevail.
Definition. For a given U.Role in a given U.BoundedContext, its U.RoleStateGraph is the tuple RSG(Role, Context) = ⟨S, S_en, T, Guard, init?⟩, where:
- S — a finite set of named States (StateName ∈ Tech register, with a Plain label). Names are local to (Role, Context).
- S_en ⊆ S — the subset of enactable states (“green lights”). States in S \ S_en are status‑only (not enactable).
- T ⊆ S × S — a set of typed transitions sᵢ → sⱼ. Transitions are optional; the RSG may be acyclic or cyclic.
- Guard — for each transition (and optionally for state maintenance), a predicate over:
  - the role’s RCS snapshot at a Window (values on named characteristics; see A.2.3), and
  - Context events (e.g., presence of a U.SpeechAct, freshness of U.Observation, validity of a prior U.Evaluation).
- init? : S → {true,false} — optionally marks initial state(s). (Useful for lifecycles; not required for gating.)
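One possible in-memory rendering of this tuple, a sketch under assumed names rather than a normative serialization:

```python
from dataclasses import dataclass
from typing import Callable, Dict, FrozenSet, Tuple

Guard = Callable[[dict], bool]  # predicate over an RCS snapshot plus context events

@dataclass(frozen=True)
class RoleStateGraph:
    role: str
    context: str
    states: FrozenSet[str]                     # S: finite, named, context-local
    enactable: FrozenSet[str]                  # S_en subset of S, the "green lights"
    transitions: Dict[Tuple[str, str], Guard]  # (s_i, s_j) -> guard predicate
    initial: FrozenSet[str] = frozenset()      # optional init? marker

    def __post_init__(self):
        assert self.enactable <= self.states, "S_en must be a subset of S"
        assert len(self.states) >= 1, "RSG-N1: at least one state must exist"
```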
Naming discipline (RSG‑N1…N3).
- RSG‑N1 (Minimal set). |S| ≥ 1. At least one state must exist; if no state is enactable, the role is status‑only in this Context.
- RSG‑N2 (Disjoint labels). State names are unique within (Role, Context); reusing global labels (e.g., “Ready”) across contexts is allowed only via Bridges (F.9).
- RSG‑N3 (Human scale). For didactics, ≤ 7 states is the default target; exceeding it requires a one‑sentence rationale (“a distinct gate we will actually use”).
An RSG does not determine history; it determines what counts as being in a state, and which states authorize enactment.
For each s ∈ S, the RoleDescription (A.2.3) includes a State Checklist Checklist(s) — a named set of criteria that can be evaluated at a Window to test “holder is in state s”.
- Criterion kinds (illustrative):
  - Threshold over an RCS characteristic: CalibrationAge ≤ 30 days.
  - Presence of act: AuthorizationSpeechAct exists within 90 days.
  - Evidence freshness: Evidence(type=SafetyTest).age ≤ 12 months.
  - SoD flag: IndependenceFlag = true.
  - External status: StandardStatus = Approved.
Strict distinction. Checklist(s) is a description; the state s is an intensional place in the role’s RSG.
Evaluating Checklist(s) at a Window produces an U.Evaluation verdict:
StateAssertion(holder, Role, Context, s, Window) — “For this Window, this holder is in state s”, with provenance to the actual observations/evidence.
Rules (RSG‑C1…C5).
- RSG‑C1 (All‑must‑hold). A StateAssertion MUST justify that all required criteria in Checklist(s) hold at the Window.
- RSG‑C2 (Window freshness). Each criterion MUST define its freshness window; if omitted, the default is instantaneous at the Window’s end time.
- RSG‑C3 (No guess). Pure opinion is disallowed; every criterion is grounded in observable facts (U.Observation, U.Work record, U.SpeechAct, or a derived U.Evaluation).
- RSG‑C4 (Non‑monotonic over time). A StateAssertion is not permanent; once the Window ends, a new evaluation is needed unless a maintenance guard keeps it valid (see 8.3).
- RSG‑C5 (Uniqueness not required). Multiple states may be asserted for the same Window if their criteria do not conflict (e.g., Ready and Authorized). Enactability is governed by §8.4.
RSG transitions express how eligibility changes when guards fire. Guards are predicates; the RSG stays notation‑neutral.
- Admission guard (→ s) declares conditions to enter state s.
- Maintenance guard (s ↺) must hold to remain in s (e.g., FatigueIndex < 0.8, checked every shift).
- Exit guard (s →) declares conditions to leave s (e.g., CalibrationAge > 30d).
Rules (RSG‑G1…G3).
- RSG‑G1 (Checklists vs guards). Checklists decide recognition (“am I in s now?”). Guards describe change (“what moves me in/out of s?”). They may reuse the same predicates; their roles are distinct.
- RSG‑G2 (No control‑flow). Guards may refer to events (e.g., “Calibration completed”), but RSG is not a task graph; it does not prescribe task order.
- RSG‑G3 (Observable basis). Every guard references observable RCS characteristics or recorded events (no hidden timers).
Law (RSG‑E1). A U.MethodDescription step that requires role R may be enacted at Window W iff there exists a StateAssertion(holder, R, Context, s, W) with s ∈ S_en.
Corollaries:
- RSG‑E2 (Specialization lift). If the step requires a general role R, and the holder has a StateAssertion for a specialist role R' ≤ R in an enactable state whose lift (see §9.1) is enactable for R, the gate passes.
- RSG‑E3 (Bundle gate). If the step requires a bundle R* = R₁ ⊗ … ⊗ Rₙ, enactment requires n distinct StateAssertions meeting RSG‑E1 for each Rᵢ (unless the Context defines a CompositeRole with its own RSG; see §9.3).
- RSG‑E4 (Status‑only roles). Roles with S_en = ∅ can never authorize enactment; they may gate decisions (e.g., ApprovedSpecRole) but not U.Work.
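A sketch of the Green-Gate check (RSG-E1) built on the RoleStateGraph sketch above; window matching is simplified to equality, and all type names are assumptions:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class StateAssertion:
    holder: str
    role: str
    context: str
    state: str
    window: Tuple[str, str]   # (from, to); a real system would compare intervals

def green_gate(step_role: str, rsg: "RoleStateGraph",
               assertions: List[StateAssertion], window: Tuple[str, str]) -> bool:
    """RSG-E1: the gate opens iff some contemporaneous assertion names an enactable state."""
    return any(a.role == step_role and a.context == rsg.context
               and a.window == window and a.state in rsg.enactable
               for a in assertions)
```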
When R' ≤ R (Specialist role refines General role) in the same Context, their RSGs must align by a refinement map.
Rule (RSG‑R1 Refinement). There exists a surjective mapping π : S(R') → S(R) such that:
- Enactability preservation: s' ∈ S_en(R') ⇒ π(s') ∈ S_en(R).
- Checklist entailment: Checklist_R'(s') ⇒ Checklist_R(π(s')) (each specialist state’s criteria imply the general state’s criteria).
- Guard monotonicity (informal): transitions in R' do not weaken the general readiness implied by R (entering/exiting patterns respect π).
Interpretation. Being in s' for R' guarantees being in π(s') for R. Thus StateAssertions lift along π, enabling RSG‑E2.
Design note. RCS for R' may extend that of R; specialist states can be stricter (more criteria) but not looser than their general counterparts.
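Continuing the sketch, the RSG-E2 lift can be rendered as a plain mapping over states; π is assumed to be given as a dict that already satisfies RSG-R1:

```python
def lift(a: StateAssertion, pi: dict, general_role: str) -> StateAssertion:
    """Lift a specialist assertion along pi; callers re-check S_en(R) per RSG-E1."""
    return StateAssertion(a.holder, general_role, a.context, pi[a.state], a.window)

# Hypothetical map: Ready+ for a specialist role lifts to Ready for the general role.
pi = {"Ready+": "Ready", "Authorized+": "Authorized"}
```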
R_A ⊥ R_B (within the same Context) states that a single holder must not have overlapping, enactable authority for both roles.
Rule (RSG‑I1). At Window W, a holder violates R_A ⊥ R_B iff there exist StateAssertions in s_A ∈ S_en(R_A) and in s_B ∈ S_en(R_B), both valid at W.
Optional refinement (soft ⊥). Contexts may tighten incompatibility by listing state pairs that are forbidden (e.g., Ready_A ⊥ Authorized_B), while allowing benign combinations (e.g., Suspended_A + Ready_B). By default, any enactable pair conflicts.
Didactic payoff. SoD is checked by states in Windows, not by static role labels.
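Rendered against the earlier sketches, RSG-I1 becomes a two-sided check; names remain assumptions:

```python
def sod_violation(holder: str, assertions: list, rsg_a: "RoleStateGraph",
                  rsg_b: "RoleStateGraph", window) -> bool:
    """RSG-I1: R_A incompatible-with R_B is violated iff both roles are
    enactably asserted for the same holder at the same Window."""
    def enactable_in(rsg):
        return any(a.holder == holder and a.role == rsg.role and a.window == window
                   and a.state in rsg.enactable for a in assertions)
    return enactable_in(rsg_a) and enactable_in(rsg_b)
```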
A bundle role R* := R₁ ⊗ … ⊗ Rₙ expresses “must wear all these badges at once”.
Rule (RSG‑B1). If R* exists only as a requirement macro, do not construct a product RSG. The gate for a step requiring R* is satisfied by n separate StateAssertions sᵢ ∈ S_en(Rᵢ) at the same Window.
Rule (RSG‑B2 CompositeRole). If the Context declares R* as a first‑class U.Role, it MUST also specify an RSG(R*) and an embedding ιᵢ : S(R*) → S(Rᵢ) that preserves enactability; being in an enactable state of R* implies being enactable in each Rᵢ.
Rationale. Avoid combinatorial blow‑up by default; allow a composite role only when the organization genuinely maintains its own readiness graph.
- RSG‑M1 (Specialist suffices). If a step requires R, any R' ≤ R whose lifted state is enactable suffices.
- RSG‑M2 (Bundle conjunctivity). If a step requires R₁ ⊗ R₂, the performer must produce both gates (two StateAssertions), unless a CompositeRole with an RSG exists and is used.
To keep RSGs operational but not procedural, guards draw on observable inputs only.
Guard types (non‑exhaustive).
- Threshold guards over RCS characteristics: FatigueIndex < 0.8, CalibrationAge ≤ 30d, EvidenceFreshness(role=Tester) ≤ 90d.
- Event guards (occurrence since last Window): exists SpeechAct(type=Authorization), exists Evaluation(verdict=Pass, checklist=SafetyKit).
- Temporal guards (time within range): now ∈ AuthorizationValidityWindow, MaintenanceWindow not active.
- Relational guards: IndependenceFrom(holder=X) = true (for SoD), NoOpenIncident(severity≥High).
Rules (RSG‑G4…G6).
- RSG‑G4 (Observable only). Each guard MUST be checkable from observable artefacts (observations, work logs, speech acts, evaluations) or present RCS values.
- RSG‑G5 (Context‑local semantics). Guard semantics are scoped to Context; Cross‑context reuse requires a Bridge (§14 in Part 1/4, F.9).
- RSG‑G6 (Didactic sparseness). Prefer few, stable guards over many brittle micro‑conditions. If a guard encodes task order, you are drifting into workflow; refactor back to eligibility.
Allowed guard evidences include:
- Observation facts (measurements/metrics),
- Evaluation verdicts (checklist results),
- SpeechAct occurrences (communicative U.Work), identified by role, act kind, and window (e.g., “Approved(change=4711)”).
A SpeechAct can change the state (e.g., Prepared→Authorized) but does not by itself satisfy operational steps; it only opens their Green‑Gate.
At any Window:
- RoleAssignment exists (A.2.1): Holder#Role:Context.
- StateAssertion(s) exist: the holder is in one or more states as proven by checklists (U.Evaluation).
- Green‑Gate Law applies: if at least one asserted state is enactable, role‑gated Method steps may be enacted; if all are status‑only, the role can gate decisions but not perform work.
- Role algebra checks: specialization lifts readiness; bundles require conjunction; incompatibilities are detected when two enactable states coincide for the same holder at the same Window.
This yields a clean separation:
- Assignment (RoleAssignment)
- Readiness (RSG + Checklists + StateAssertions)
- Action (U.Work, gated by RSG)
…and keeps meaning local, evidence observable, and reasoning testable.
Below are didactic, reusable RSG skeletons for the three principal behavioural role families and for epistemic/status roles. Names and criteria are context‑local; treat them as templates to specialise inside your U.BoundedContext (E.10.D1). For each RSG we list:
- S — candidate States (enactable states marked [E]);
- Checklist gist — the recognition criteria (cf. §8.1);
- Guards — illustrative admission/maintenance/exit predicates (cf. §8.3).
Reminder. Only enactable states (S_en) can open the Green‑Gate for U.Work (RSG‑E1). Status‑only states gate decisions but never execution.
Context sketch: Ops_ChangeManagement_2025.
RCS (characteristics, examples): CompetenceLevel, FatigueIndex, IndependenceFlag, AuthorizationValidity, IncidentLoad, RiskClass.
States S
- Unprepared — training incomplete; checklists fail.
- Prepared — training + competence thresholds met.
- Authorized — valid approval window present. [E]
- Ready — Prepared ∧ Authorized ∧ FatigueIndex < τ. [E]
- Active — a contemporaneous U.Work step is underway under this role (with a valid StateAssertion in the window). [E]
- Suspended — temporary block (incident/conflict).
- Revoked — authorization expired/withdrawn.
Checklist gist
- Prepared: certificates valid; recency of practice ≤ X; simulator score ≥ Y.
- Authorized: exists SpeechAct(type=Approval, scope=Role, age≤30d).
- Ready: Prepared ∧ Authorized ∧ independence from conflicting work; fatigue within limits.
Guards
- Admission → Prepared: ExamPassed ∧ SimulatorScore ≥ Y.
- Admission → Authorized: presence of approval speech‑act within window.
- Maintenance Ready ↺: FatigueIndex < τ ∧ IncidentLoad ≤ k.
- Exit Ready → Suspended: high‑severity incident assigned OR SoD violation detected.
- Exit Authorized → Revoked: window elapsed or explicit revoke speech‑act.
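For concreteness, the skeleton above instantiated with the RoleStateGraph sketch from the definition section; every threshold and RCS key is illustrative and context-local:

```python
operator_rsg = RoleStateGraph(
    role="OperatorRole",
    context="Ops_ChangeManagement_2025",
    states=frozenset({"Unprepared", "Prepared", "Authorized", "Ready",
                      "Active", "Suspended", "Revoked"}),
    enactable=frozenset({"Authorized", "Ready", "Active"}),
    transitions={
        # Admission: exam passed and simulator score above an assumed threshold.
        ("Unprepared", "Prepared"): lambda rcs: rcs["ExamPassed"] and rcs["SimulatorScore"] >= 80,
        # Exit: a severe incident or a SoD violation suspends readiness.
        ("Ready", "Suspended"): lambda rcs: rcs["IncidentSeverity"] >= 3 or rcs["SoDViolation"],
        # Exit: the approval window has lapsed.
        ("Authorized", "Revoked"): lambda rcs: rcs["AuthorizationAgeDays"] > 30,
    },
    initial=frozenset({"Unprepared"}),
)
```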
Context sketch: PlantOps_Pipeline_2025.
RCS: CalibrationAge, SafetyInterlock, SelfTestPass, EnvRangeOK, DegradationIndex.
States S
- Unavailable — offline, missing prerequisites.
- Calibrated — calibration fresh; self‑test ok.
- Permitted — safety interlocks clear; clearance token valid.
- Ready — Calibrated ∧ Permitted ∧ EnvRangeOK. [E]
- Running — executing a method step (with a contemporaneous StateAssertion). [E]
- Degraded — still operable under derated envelope. [E] (if policy allows)
- Quarantined — suspected hazard; no enactment.
Checklist gist
- Calibrated: CalibrationAge ≤ 30d ∧ SelfTestPass = true.
- Permitted: SafetyInterlock = Clear ∧ NoOpenIncident(sev≥High).
- Ready: Calibrated ∧ Permitted ∧ environment in spec.
Guards
- Admission → Calibrated: calibration record timestamp ≤ 30d.
- Maintenance Ready ↺: env sensors within limits; no new hazard event.
- Exit Ready → Quarantined: detected leak OR hazard alarm.
- Transition Running → Ready: step completed ∧ cool‑down satisfied.
- Transition Ready → Degraded: DegradationIndex ∈ [d₁,d₂] ∧ derate policy active.
Context sketch: Lab_Thermo_2025.
RCS: CalibrationAge, TraceabilityChainOK, DriftRate, SyncError, CleanlinessScore.
States S
- Unqualified — no metrological chain.
- Calibrated — with traceability to standard.
- Synchronized — time/phase sync within tolerance.
- In‑Range — drift & environment within spec.
- Measuring — performing observation. [E]
- Stale — calibration or sync expired.
- Quarantined — suspect bias/contamination.
Checklist gist
- Calibrated: traceability cert valid; calibration within period.
- Synchronized: SyncError ≤ ε.
- In‑Range: drift ≤ threshold; contamination tests passed.
- Measuring: Calibrated ∧ Synchronized ∧ In‑Range AND observation procedure active.
Guards
- Admission → Calibrated: calibration event recorded < 180d.
- Exit Calibrated → Stale: calibration age > threshold.
- Exit In‑Range → Quarantined: contamination alert OR failed control sample.
- Transition Measuring → In‑Range: procedure complete.
Note. Many ObserverRole states are pre‑enactment gates; only Measuring is enactable.
These roles are status‑only; S_en = ∅. They gate decisions (e.g., can be cited, can constrain), but can never authorize U.Work.
States: Draft, Candidate, Approved, Superseded, Deprecated. Checklist gist: governance decision records; publication identifiers; supersession links. Guards: Approved → Superseded on adoption of newer edition; Candidate → Approved after ratification vote.
States: Collected, Verified, Validated, Obsolete, Contested.
Checklist gist: verification/validation U.Evaluation present; freshness window; reproducibility tag.
Guards: decay to Obsolete by age; transition to Contested upon counter‑evidence.
States: Proposed, Accepted, Implemented, Verified, Waived.
Checklist gist: acceptance decision; trace links to U.Work; verification report; waiver authorization.
Guards: Accepted → Implemented when linked executions close; Implemented → Verified on passed acceptance checklist; Any → Waived by authorized speech‑act.
Keep each RSG teachable on one screen. Use the following notation‑neutral templates when drafting RoleDescriptions (A.2.3).
RSG for: <RoleName> Context: <ContextName/Edition>
RCS characteristics (gist): <characteristic1>, <characteristic2>, ...
States (◉ = enactable):
- [◉] <StateName> — checklist gist; typical admission/maintenance/exit
- [ ] <StateName> — ...
- ...
Green‑Gate: step requiring <RoleName> is enactable iff holder asserts any ◉ state at Window.
Role algebra hooks: specialization (≤ ...), incompatibility (⊥ ...), bundles (⊗ ...).
State <StateName> (enactable? yes/no)
Checklist (all must hold at Window):
- <Observable criterion 1> (e.g., CalibrationAge ≤ 30d)
- <Observable criterion 2> (e.g., exists SpeechAct(Approval) age ≤ 30d)
Maintenance (optional): <predicate> (e.g., EnvRangeOK)
Evidence anchors: <Observation/Evaluation ids>
Refinement map π : S(R') → S(R)
R' state π(state in R) entailment note (why Checklist_R' ⇒ Checklist_R)
----------- ------------- -----------------------------------------------
<Ready+> Ready adds stricter fatigue & independence thresholds
<Authorized+> Authorized requires same approval + extra duty segregation
...
Incompatibility ⊥ (applies when both sides enactable at same Window):
<RoleA.StateX> ⊥ <RoleB.StateY>
<RoleA.(any ◉)> ⊥ <RoleB.(any ◉)> // default if not refined
Rationale: <one‑line reason>
Didactic cue. If your “template” spills beyond a screen, you’re drifting into workflow. Pull back to eligibility (RSG) and recognition (checklists).
RSGs are context‑local. When similar roles appear in different Contexts, relate them with an Alignment Bridge (F.9), never by silently importing state names.
Bridge example: Observer readiness across two contexts:
Bridge: Observer-RSG alignment
From: Lab_Thermo_2025.ObserverRole
To: Metrology_Line_2025.ObserverRole
Map (with CL):
Calibrated(Lab) ≈ Calibrated(Metro) CL=3 (minor criterion diffs)
In‑Range(Lab) ↘ Fit‑for‑Use(Metro) CL=2 (Metro adds robustness test)
Measuring(Lab) ↔ Measuring(Metro) CL=3
Notes: 'Synchronized' in Lab maps to 'Time‑Aligned' in Metro (terminology shift).
Losses: Metro’s 'Robustness' has no direct Lab counterpart (explicit loss recorded).
Rule (RSG‑X1). A Bridge MUST record losses and extra criteria; it MUST NOT assert identity without a stated CL (congruence level).
Bridge note: In some IT change contexts, “Authorized” (deontic) overlaps with “Permitted” (operational). A Bridge can explain the design choice:
Authorized (AgentialRole@ITIL) ↔ Permitted (TransformerRole@IEC) with CL=1 and a note: operational interlock ≠ managerial approval; both are required to lift to Ready under our policy.
Payoff. Bridges keep local honesty while enabling Cross‑context reasoning with explicit penalties (B.3).
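A sketch of a bridge record consistent with RSG-X1, using the Observer example above; the field names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class BridgeMap:
    from_state: str
    to_state: str
    cl: int            # congruence level; penalties apply to R, never silently
    note: str = ""

@dataclass
class AlignmentBridge:
    from_rsg: str
    to_rsg: str
    maps: list
    losses: list = field(default_factory=list)  # criteria with no counterpart, recorded

observer_bridge = AlignmentBridge(
    from_rsg="Lab_Thermo_2025.ObserverRole",
    to_rsg="Metrology_Line_2025.ObserverRole",
    maps=[BridgeMap("Calibrated", "Calibrated", cl=3, note="minor criterion diffs"),
          BridgeMap("In-Range", "Fit-for-Use", cl=2, note="Metro adds robustness test")],
    losses=["Metro 'Robustness' has no direct Lab counterpart"],
)
```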
When you define or revise an RSG, check these concept‑level rules. They are easy to hold in mind; no tooling implied.
CC‑RSG‑01 (Locality). State names and meanings are scoped to (Role, Context). Reuse across contexts only via a Bridge (F.9).
CC‑RSG‑02 (Enactability). Mark which states are enactable (S_en). If none are, the role is status‑only (valid); then it cannot open the Green‑Gate.
CC‑RSG‑03 (Observable criteria). Every checklist item must be observable (Observation, Work record, SpeechAct, or derived Evaluation). No opinions.
CC‑RSG‑04 (Guard discipline). Guards gate change, checklists recognise state. Don’t smuggle task order into guards; workflow lives elsewhere (A.15).
CC‑RSG‑05 (Refinement map). If you declare R' ≤ R, provide a π‑map and ensure entailment (RSG‑R1). Specialist states may be stricter, never weaker.
CC‑RSG‑06 (SoD by state). Define ⊥ in terms of enactable pairs. Avoid blanket ⊥ if finer, state‑aware rules reduce false conflicts.
CC‑RSG‑07 (Human scale). Default to ≤ 7 states. If you exceed, add a one‑sentence didactic rationale (“distinct gate we will actually use”).
CC‑RSG‑08 (Green‑Gate wiring). Ensure every MethodDescription step that requires this Role names the ◉ states it expects, or relies on the default “any ◉”.
CC‑RSG‑09 (Window clarity). Checklists specify freshness windows; state assertions are Window‑bound and non‑permanent.
CC‑RSG‑10 (Status/behaviour split). Epistemic/status roles: S_en = ∅. They gate decisions, not Work. Behavioural roles require U.System holders (A.2.1).
Each vignette shows the Context, the Role, the RCS characteristics, the States (◉ = enactable), the Green‑Gate condition, and how a U.Work is gated by a U.RoleAssignment + RSG. Names are context‑local.
Context. Hospital.OR_2026
Role. SurgeonRole (AgentialRole)
RCS characteristics. CompetenceLevel, FatigueIndex, AuthorizationValidity, CaseComplexityBand, TeamSoD.
States.
- Unprepared — training/recency incomplete.
- Prepared — credentials valid; recency ≤ 90 days.
- Authorized — procedure‑specific approval active.
- Ready — Prepared ∧ Authorized ∧ FatigueIndex < τ ∧ TeamSoD_OK. ◉
- Operating — currently performing steps. ◉
- Suspended — incident or conflict raised.
- Revoked — approval expired/withdrawn.
Green‑Gate. A MethodDescription step tagged requires: SurgeonRole is enactable iff the performer’s RoleAssignment asserts Ready at the Window.
Work gating.
performedBy = Dr.Kim#SurgeonRole:Hospital.OR_2026 is valid for step “Incision” only when Ready(Dr.Kim, SurgeonRole, OR_2026, W) holds (checklist items: approval id, fatigue score, SoD against AuditorRole).
Context. SRE_Prod_Cluster_EU_2026
Role. IncidentCommanderRole (AgentialRole)
RCS characteristics. OnCallStatus, PageFreshness, AuthorityToken, CognitiveLoad, ConflictSoD.
States.
- Off‑Duty — not on call.
- On‑Call — rota active; page reachable.
- Authorized — escalation token valid.
- Ready — On‑Call ∧ Authorized ∧ CognitiveLoad ≤ k ∧ SoD_OK. ◉
- RunningIncident — commanding an active incident. ◉
- CoolingDown — post‑incident refractory period.
- Blocked — conflict with ChangeAuthorRole detected.
Green‑Gate. Steps in “Major Incident Process” that require: IncidentCommanderRole open only with Ready.
Work gating.
performedBy = Dana#IncidentCommanderRole:SRE_Prod_Cluster_EU_2026 is invalid for “Declare SEV‑1” if ConflictSoD(ChangeAuthorRole) holds or PageFreshness>5 min.
Context. Metrology_Thermo_2026
Role. ThermometerObserverRole (ObserverRole)
RCS characteristics. CalibrationAge, DriftRate, TraceabilityChainOK, CleanlinessScore, SyncError.
States.
- Unqualified — missing traceability.
- Calibrated — cert valid (≤ 180 d); drift within baseline.
- Synchronized — SyncError ≤ ε.
- In‑Range — contamination absent; env OK.
- Measuring — procedure active. ◉
- Stale — calibration/sync expired.
- Quarantined — suspected bias.
Green‑Gate. MethodDescription step “Record temperature” is enactable only in state Measuring (which requires Calibrated ∧ Synchronized ∧ In‑Range).
Work gating.
performedBy = SensorT‑17#ThermometerObserverRole:Metrology_Thermo_2026 is rejected if CalibrationAge>180 d or ControlSampleBias>δ.
Context. Finance_Audit_2026
Role. IndependentAuditorRole (AgentialRole) and EvidenceRole (status‑only)
RCS (auditor). CertificationLevel, IndependenceFlag, AssignmentToken, CaseLoad.
States (auditor). Ready/Auditing as in §12.1; ⊥ with DeveloperRole.
RCS (evidence). VerificationStatus, ValidationStatus, Age, ProvenanceChainOK.
States (evidence). Collected, Verified, Validated, Contested, Obsolete (status‑only).
Green‑Gate. Audit step requires: IndependentAuditorRole — enactable only with Ready and ⊥ DeveloperRole at the Window. Evidence states gate decisions (e.g., “accept finding”), never open Work.
Work gating.
performedBy = Alice#IndependentAuditorRole:Finance_Audit_2026 fails if Alice holds any overlapping DeveloperRole binding in the same context.
Author‑facing checks; notation‑free, concept‑level. Use them when drafting or reviewing an RSG.
SCR‑A.2.5‑S01 · Local scope. Every state name is qualified by (Role, Context). No global states.
SCR‑A.2.5‑S02 · Enactability mark. The set S_en is explicit; each ◉ state is listed.
SCR‑A.2.5‑S03 · Observable checklists. Each state has a Checklist of observable predicates (Observation / Evaluation / SpeechAct / Work evidence).
SCR‑A.2.5‑S04 · Green‑Gate wiring. Every MethodDescription step that names the Role either (a) names its ◉ state(s) or (b) relies on the default “any ◉” policy; the RSG declares which.
SCR‑A.2.5‑S05 · Guard discipline. Guards only gate transitions; they do not encode task order.
SCR‑A.2.5‑S06 · SoD by state. Incompatibilities (⊥) are declared over states (or “any ◉”), not over bare role names.
SCR‑A.2.5‑S07 · Specialisation entailment. For every R' ≤ R, a refinement map π: S(R')→S(R) is provided; each mapped pair has an entailment note (why Checklist_R' ⇒ Checklist_R).
SCR‑A.2.5‑S08 · Human scale. |S| ≤ 7 unless a one‑line didactic rationale is recorded.
SCR‑A.2.5‑S09 · Status‑only roles. If S_en=∅, the Role is explicitly tagged status‑only; it cannot open the Green‑Gate.
SCR‑A.2.5‑S10 · Bridge discipline. Any cross‑context reuse is via an Alignment Bridge (F.9) with recorded CL and losses; no silent imports.
Use when adding/removing states, changing criteria, or bridging across contexts.
RSCR‑A.2.5‑R01 · State churn impact. For every added/removed/renamed state, list affected MethodDescription steps and Work validators; confirm the Green‑Gate policy remains decidable.
RSCR‑A.2.5‑R02 · Entailment stability. When R' ≤ R changes, update the π map and re‑justify entailments; fail the check if any previously valid entailment breaks.
RSCR‑A.2.5‑R03 · SoD coverage. After edits, recompute the set of enactable pairs; verify declared ⊥ still blocks all intended conflicts and no longer blocks permitted cases.
RSCR‑A.2.5‑R04 · Evidence freshness. If any checklist predicate uses age/freshness, ensure default Windows are documented and existing state assertions re‑evaluate accordingly.
RSCR‑A.2.5‑R05 · Bridge congruence drift. If a Bridge maps states with CL=k, and either side’s checklist changes, revisit the mapping; do not keep CL unchanged by default—raise or lower with a short rationale.
RSCR‑A.2.5‑R06 · Status/behaviour split. Verify behavioural roles still require U.System holders (A.2.1); status‑only roles still have S_en=∅.
RSCR‑A.2.5‑R07 · One‑screen rule. If cumulative edits push the RSG beyond one screen, split states or tighten criteria; record a one‑line teaching rationale if you must exceed.
| Failure | Symptom | Why it hurts | Quick remedy |
|---|---|---|---|
| Workflow creep | Guards encode task order | RSG becomes a hidden workflow model | Move ordering to MethodDescription; keep guards as eligibility only |
| Vague criteria | “experienced”, “mature” in checklists | Non‑decidable Green‑Gate | Replace with observable proxies (hours, exam score, age thresholds) |
| Global states | “Ready” reused across contexts | Meaning leakage | Qualify by (Role, Context); use Bridges for Cross‑context talk |
| Over‑broad ⊥ | Many false conflicts | Blocks delivery | Make ⊥ state‑aware; restrict to enactable pairs |
| Missing π‑map | Specialisation with no entailment | Unsafe substitutions | Add π and entailment notes; otherwise drop ≤ |
*“A role assignment says who wears which mask where (A.2.1). The RoleStateGraph says when that mask is actually wearable. Each role’s RSG is a small named state space with checklists for each state. Some states are enactable (◉): they open the Green‑Gate for Work. Others are status‑only: they gate decisions, never execution.

A RoleDescription (A.2.3) is where you publish the role’s RCS (characteristics), its RSG (states + checklists + guards), and any role algebra (≤, ⊥, ⊗) specific to your context.

In practice: a MethodDescription step lists required roles; at runtime, a Work record is valid only if its performer is a RoleAssignment whose RSG asserts an enactable state at the Window. That’s the Green‑Gate.

Different Contexts may use the same role labels. We never assume global meaning; we relate Contexts with Bridges that map states and record losses.

Keep each RSG on one screen, with observable checklists. If you’re writing task order, you’ve slipped into workflow—move it to the Method. If you’re writing opinions, convert them into observables or drop them. That’s the whole trick.”*
- Builds on: A.2.1 U.RoleAssignment (the binding that can assert states); A.2.3 U.RoleDescription (the carrier of the RSG); E.10.D1 (Context discipline).
- Enables: A.15 (Role‑Method‑Work Alignment via the Green‑Gate); B.3 (trust penalties when crossing Bridges with lower CL).
- Interacts with: D‑cluster deontics (speech‑acts gate Authorized‑like states for agential roles); F.9 (state‑level alignment across contexts).
One-line summary. Introduces a single, context-local scope mechanism for all holons: U.ContextSlice (where we reason and measure) and a family of set-valued scope types (USM scope objects, U.Scope), specialized as U.ClaimScope for epistemes (G in F–G–R) and U.WorkScope for system capabilities, with one algebra (∩ / SpanUnion / translate / widen / narrow / refit) and uniform cross-context handling (Bridge + CL).
Status. Normative pattern [A] in Part A · Core Holonic Concepts. Numbered A.2.6.
Replaces / deprecates. This pattern supersedes the scattered use of labels applicability, envelope, generality, universality and capability envelope where they tried to stand in for the one scope mechanism. From now on:
- For epistemes, the only scope type is U.ClaimScope (nick G in F–G–R).
- For system capabilities, the only scope type is U.WorkScope.
- The abstract architectural notion is U.Scope — a set-valued USM object over ContextSliceSpace with its own algebra (∩ / SpanUnion / translate / widen / narrow / refit); it is not a U.Characteristic and MUST NOT appear in any CharacteristicSpace.
Legacy words (applicability / envelope / generality / capability envelope) MAY appear only as explanatory aliases in non‑normative notes.
Cross‑references.
— C.2.3 (Unified Formality F) and C.2.2 (F–G–R): this pattern defines G as U.ClaimScope.
— A.2.2 (Capabilities): capability gating now SHALL use U.WorkScope.
— Part B (Bridges & CL): Cross‑context transfers MUST declare a Bridge with CL; CL affects R, not F/G.
This pattern gives engineering managers and assurance architects one vocabulary, one model, and one set of operations to talk about where a claim holds and under which conditions a system can deliver a piece of Work. It removes the need to remember whether a document said “applicability,” a model said “envelope,” or a safety plan said “capability envelope.” Scope is scope. The only distinction that matters is what carries it:
- Knowledge/episteme → Claim scope (G).
- System/capability → Work scope (conditions under which Work at the promised measures is deliverable).
With USM, teams can:
- specify, compare, and compose scope without translation games;
- gate ESG and Method–Work steps with observable, context‑local scope checks;
- cross Contexts safely using Bridges and explicit CL penalties applied to R.
This pattern defines the scope mechanism (Context slices, set‑valued scopes, algebra, and guard usage) and the canonical lexicon (Claim scope (G), Work scope). It does not prescribe which Contexts must widen/narrow scope, nor which assurance levels are required; those are set by context‑local ESG and Method–Work policies, which SHALL reference the mechanisms defined here.
Modern projects couple formal specs, data‑driven models, safety cases, and operational playbooks. Each artifact must say where it is valid—yet terminology drifts:
- Standards and specs often say applicability or scope.
- Modeling communities say envelope.
- Safety and performance documents speak about capability envelope.
- Knowledge patterns have used generality (G) as if it were “more abstract,” when we actually need “where the statement holds.”
FPF is context‑local: decisions, checks, and state assertions are valid inside a bounded context. Every practical question—Is this claim usable here? Can this capability deliver that Work now?—must be answered on a concrete slice of context (terminology, versions, environmental parameters, time selector Γ_time). USM provides a first‑class object for such slices and a single scope calculus atop them.
In F–G–R:
- F (formality) is “how strictly a claim is expressed” (C.2.3).
- G must be “where it holds,” not “how abstract it sounds.”
- R measures evidence and decays/penalties (freshness, CL).
When G is a set‑valued scope, composition becomes precise: serial dependencies intersect scopes; parallel, independently supported lines can publish a SpanUnion—but only where each line is supported.
- Synonym soup. Applicability, envelope, generality, capability envelope—different labels for the same mechanism led to mismatches in gating, review, and reuse.
- Abstraction confusion. Calling G “generality” invited teams to treat “more abstract wording” as “broader scope,” silently masking unstated assumptions.
- Split mechanics. Episteme vs system text used different algebra and guard language, though the same set operations were meant.
- Cross‑context opacity. Transfers between Contexts lacked a shared carrier and a rule for what changes (trust) vs what stays (scope).
- Overloaded words. Validity clashed with Validation Assurance (LA); operation/operational clashed with Work/Run in A.15, producing governance ambiguity.
| Force | Tension to resolve |
|---|---|
| One mechanism vs two worlds | We must serve both knowledge about the world (claims) and doing work in the world (capabilities) without duplicating concepts. |
| Locality vs interoperability | Scope must be context‑local and precisely checkable, yet transferable across Contexts via Bridges without redefining the characteristic. |
| Expressivity vs minimal vocabulary | Teams need to capture rich conditions (time windows, environment, versions) but not explode the lexicon into “envelope/applicability/…” variants. |
| Static content vs operational change | Claims may hold broadly while current operations are narrow (or vice versa). The mechanism must keep “what is true” and “what can be done” aligned yet distinct. |
| Open‑world exploration vs closed‑world gating | Exploration benefits from permissive drafts; gates require crisp, observable checks. The same scope object must support both. |
USM introduces:
- U.ContextSlice — an addressable slice of a bounded context (terminology, parameter ranges, versions/Standards, and a mandatory Γ_time selector). All scope checks are performed on slices.
- U.Scope — the abstract set‑valued scope characteristic over U.ContextSlice.
- Specializations: U.ClaimScope (nick G) on U.Episteme (“where the claim holds”), and U.WorkScope on U.Capability (“where the capability can deliver Work at declared measures within qualification windows”).
- One algebra: serial intersection, parallel SpanUnion (only where supported), translate via Bridge (CL affects R, not F/G), and widen / narrow / refit operations for scope evolution.
Lexical commitments (normative): — In normative text and guards, use Claim scope (G) and Work scope. — Do not name the characteristic “applicability/envelope/generality/capability envelope/validity.” Those words are permitted only as explanatory aliases in notes.
Definition. U.ContextSlice is an addressable, context‑local selection of a bounded context comprising:
- Vocabulary & roles. The active terminology, role bindings, and local dictionaries.
- Standards & versions. Concrete versioned interfaces, schemas, notations, or service Standards in force.
- Environment selectors. Named parameters/ranges (e.g., temp, humidity, platform, jurisdiction, dataset cohort).
- Time selector Γ_time. A mandatory selector for the temporal frame of reference (point, window, or policy), disallowing an implicit “latest”.
Semantics. All scope checks, guards, and compositions are evaluated inside an explicitly named U.ContextSlice. Cross‑context or cross‑slice usage MUST be mediated by a Bridge (Part B) with an explicit CL rating; see §7.4.
Addressability. A slice MUST be identifiable via a canonical tuple (Context, vocab‑id, Standard/version ids, env selector(s), Γ_time). A slice MAY be a singleton or a finite set if a guard tests multiple coherent sub‑conditions.
Slice key (minimal). A U.ContextSlice SHALL be addressable by a tuple containing at least: (Context, Standard/version ids (if any), environment selectors, Γ_time). Contexts MAY extend this tuple (e.g., vocab/roleset ids).
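A sketch of the slice key as a frozen value object, which is all the primitive membership judgement needs; any field beyond the minimal tuple is an assumption:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class ContextSlice:
    context: str
    standards: Tuple[str, ...]         # versioned Standards in force
    env: Tuple[Tuple[str, str], ...]   # named environment selectors
    gamma_time: str                    # mandatory time selector; no implicit "latest"

Scope = frozenset  # a U.Scope value is just a set of ContextSlice

slice_a = ContextSlice("Cardio_2026", ("ISO-80601:2019",),
                       (("temp", "0-70C"), ("humidity", "30-50%")), "2026-Q1")
assert slice_a in Scope({slice_a})   # the primitive membership check: slice in Scope
```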
Definition. U.Scope ⊆ ContextSliceSpace is a set‑valued USM property whose values are sets of U.ContextSlice where a given statement, behavior, or capability is fit‑for‑use. It is not numeric; its internal order is the subset relation ⊆. There is no “unit”. The primitive judgement is membership: slice ∈ Scope.
Guard (normative). U.Scope, U.ClaimScope (G), and U.WorkScope are not U.Characteristics in the A.17/CSLC sense; do not include them as slots in any U.CharacteristicSpace, and do not attach normalizations/scores to them. They are USM scope objects.
Operations. USM admits:
- Intersection ∩ (serial composition).
- SpanUnion (parallel, independently supported coverage).
- Translate (Cross‑context mapping via Bridge).
- Widen / Narrow (monotone changes to the set).
- Refit (content‑preserving re‑expression; set equality).
Locality. U.Scope values are defined and reasoned about context‑locally. Translation between Contexts never occurs implicitly; see §7.4.
Carrier. U.Episteme (claims, specifications, theories, policies).
Meaning. The set of U.ContextSlice where the claim holds as stated. This is G in the F–G–R triple. G is not “abstraction level”; it is the applicability area of the claim.
Expression. Authors SHALL declare Claim scope as explicit predicates or condition blocks (assumptions, parameter ranges, cohorts, platform/Standard versions, Γ_time windows).
Path composition (serial). Along any essential dependency path supporting the claim, the effective scope is the intersection of contributors’ Claim scopes (see §7.2). Empty intersection makes the path inapplicable.
Parallel support. Where independent lines of support justify disjoint areas, the episteme MAY publish a SpanUnion (see §7.3) limited strictly to the covered slices.
Δ‑moves.
- ΔG+ (widen). Replace scope S with S′ such that S ⊂ S′.
- ΔG− (narrow). Replace scope S with S′ such that S′ ⊂ S.
- Refit. Replace S with S′ where S′ = S (normalization, re‑parametrization).
- Translate. Map S across Contexts via a declared Bridge; CL penalties apply to R, not to F/G.
Orthogonality. Changes in F (form of expression) or D/AT (detail/abstraction tiers) do not change G unless the declared area of validity changes.
Carrier. U.Capability (a system’s ability to deliver specified U.Work).
Meaning. The set of U.ContextSlice (conditions, Standards, platforms, operating parameters, Γ_time) under which the capability can deliver the intended Work at the declared measures, within declared qualification windows.
Expression. Capability owners SHALL declare U.WorkScope as explicit conditions/constraints over U.ContextSlice only (environment, platforms, Standards by version, resource regimes, Γ_time). Quantitative deliverables and operation windows are not part of the scope value:
- Declare targets as U.WorkMeasures (e.g., latency ≤ L, throughput ≥ T, tolerance ≤ ε) bound in guards (WG‑2).
- Declare inspection/recertification policies as U.QualificationWindow bound in guards (WG‑3).

Use‑time admission requires all of: WorkScope covers JobSlice AND WorkMeasures satisfied AND QualificationWindow holds.
Method–Work gating. A Work step’s guard MUST check that the target slice is covered by the capability’s Work scope and that required measures and qualification windows are satisfied.
Composition and Δ‑moves. Work scope uses the same algebra as Claim scope (∩ / SpanUnion / translate / widen / narrow / refit). Translation across Contexts follows §7.4.
Separation from knowledge. Work scope does not assert a proposition about the world; it asserts deliverability of Work under conditions. Evidence for deliverability feeds R (Reliability) via measurements and monitoring.
Required guard facets (capabilities).
- U.WorkMeasures (mandatory). A set of measurable targets with units and tolerated ranges, evaluated on the JobSlice.
- U.QualificationWindow (mandatory for operational use). A time policy (point/window/rolling) stating when the capability is considered qualified; evaluated at Γ_time.

These facets are separate from U.WorkScope and live in the R‑lane (assurance). They MUST be referenced in Method–Work guards (see §10.3 WG‑2/WG‑3).
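
A minimal sketch of the resulting use‑time gate, assuming hypothetical callables for the WG‑2 and WG‑3 predicates; the spec mandates the three conjoined checks, not this code:

```python
from typing import Callable, Dict, FrozenSet, Tuple

def admit_work(work_scope: FrozenSet[Tuple],             # declared U.WorkScope (set of slices)
               job_slice: Tuple,                         # the target JobSlice
               measure_checks: Dict[str, Callable[[], bool]],  # WG-2: e.g. latency ≤ L, throughput ≥ T
               window_holds: Callable[[], bool]          # WG-3: qualification window at Γ_time
               ) -> bool:
    """Use-time admission: WG-1 AND WG-2 AND WG-3; the gate fails closed."""
    if job_slice not in work_scope:                      # WG-1: WorkScope covers JobSlice
        return False
    if not all(check() for check in measure_checks.values()):
        return False
    return window_holds()
```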
- Membership judgement. slice ∈ Scope is the primitive check.
- Coverage guard. A guard “Scope covers TargetSlice” means either:
  - singleton: TargetSlice ∈ Scope, or
  - set: TargetSet ⊆ Scope.
- No implicit expansion. Absent an explicit declaration, guards MUST NOT treat “close” slices as covered; widening requires a ΔG+ change.
Rule S‑INT (serial). For an essential dependency chain C1 → C2 → … → Ck that supports a claim/capability, the effective scope along that chain is:
Scope_serial = ⋂_{i=1..k} Scope(Ci)
If Scope_serial = ∅, the chain is inapplicable and MUST NOT contribute to published scope.
Monotonicity. Adding a new essential dependency can only narrow (or leave unchanged) the serial scope.
Rule P‑UNION (parallel). If there exist independent support lines L₁,…,Lₙ for the same claim/capability, each with serial scope S_i, the publisher MAY declare:
Scope_published = SpanUnion({S_i}) = ⋃_{i=1..n} S_i
Constraints.
- Independence MUST be justified (different support lines must not rely on the same weakest link).
- The union MUST NOT exceed the union of supported slices; “hopeful” areas are disallowed.
- Publishers SHOULD annotate coverage density/heterogeneity (informative) to aid R assessment, but numeric “coverage” is not part of G.
- Independence criterion. Support lines in a SpanUnion MUST be partitioned so that each line has a set of essential components disjoint from the others’ essential components (no shared weakest link). The partition (or a certificate thereof) SHALL be referenced in the publication.
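
As one way to honor the independence criterion mechanically, a hedged sketch (the `essential_components` certificate format and the pairwise-disjointness test are our assumptions, not the only admissible partition proof):

```python
from typing import FrozenSet, List, NamedTuple

class SupportLine(NamedTuple):
    scope: FrozenSet                  # serial scope S_i of this support line
    essential_components: FrozenSet   # components whose failure breaks the line

def span_union_checked(lines: List[SupportLine]) -> FrozenSet:
    """Publish SpanUnion only when essential components are pairwise disjoint
    (no shared weakest link); otherwise refuse the union outright."""
    for i, a in enumerate(lines):
        for b in lines[i + 1:]:
            if a.essential_components & b.essential_components:
                raise ValueError("shared weakest link: independence not justified")
    out: FrozenSet = frozenset()
    for line in lines:
        out = out | line.scope
    return out
```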
1) G is not an ordinal scale; it is set-valued.
Under MM‑CHR, U.ClaimScope is a set‑valued U.Characteristic over U.ContextSlice. The only well‑typed primitives are membership and set operations (⊆, ∩, ⋃). Imposing ordinal “levels” such as G0…Gk violates the type discipline and produces non‑invariant behavior (the same set could be “rated” with different numbers under different heuristics).
2) G composes via ∩ / SpanUnion, not via min / avg.
USM already fixes composition: along a dependent path use intersection; across independent support lines publish SpanUnion. None of these operations relies on (or preserves) any linear order. An ordinal “G ladder” invites people to take minimums/averages, which is incorrect for sets and breaks the established algebra.
3) A G ladder drags in “abstraction level,” which is orthogonal.
Early “G ladders” effectively encoded abstraction/typing (instances → patterns → formal classes/types → up‑to‑iso). That is valuable didactics, but not applicability. We have already separated these concerns: abstraction is captured, if needed, by U.AbstractionTier (AT) as an optional facet; applicability is U.ClaimScope (G).
4) A G ladder breaks locality and Bridge semantics.
Cross‑context transfer maps a set Scope via a Bridge and penalizes R by CL. There is no canonical way to “translate” an ordinal G level between Contexts: the mapped area may be strictly narrower or differently factored. Level numbers would become non‑portable, causing hidden loss or inflation of trust. With USM, we translate sets and keep the CL penalty where it belongs—in R, not in G.
5) A G ladder duplicates ESG guards without adding decision power.
What teams often want to “compress into a G number” is actually (a) the quality of expression and (b) the completeness of the declared scope. The first is an F threshold (e.g., require U.Formality ≥ F4 so the scope is predicate‑like and addressable); the second is handled by explicit ESG guards: “Scope covers TargetSlice,” “Γ_time is specified,” and “freshness window holds” (R‑lane). A ladder for G adds confusion but no additional control.
Normative directive.
U.ClaimScope (G) SHALL remain a set‑valued characteristic; no ordinal or numeric ladder SHALL be defined for G. Authoring and gating SHOULD use F thresholds (C.2.3) and explicit guard predicates (A.2.6) rather than pseudo‑levels of G.
Rule T‑BRIDGE. To use a scope in a different bounded context (room), an explicit Bridge MUST be declared with:
- Mapping. A documented mapping from source to target U.ContextSlice vocabulary/characteristics.
- Congruence Level (CL). A rating of mapping congruence.
- Loss notes. Any known losses, assumptions, or non‑isomorphisms.
Effect. The mapped scope is T(Scope) in the target Context. CL penalties apply to R (the trust in support/evidence), not to F or G. If mapping is coarse, the publisher SHOULD also narrow the mapped scope to the area where losses are negligible (best practice, not a requirement).
- ΔG+ (widen). Monotone expansion: S ⊂ S′. Requires new support or stronger bridges.
- ΔG− (narrow). Monotone restriction: S′ ⊂ S. Often used to remove areas invalidated by new findings.
- Refit. S′ = S after normalization (e.g., re‑parameterization, changing units, factoring common predicates). Refit MUST NOT alter membership.
Refit (normalization). A refit MUST preserve membership exactly (S′ = S). Any change that alters boundary inclusion (due to rounding, unit conversion, discretization) is a ΔG± change, not a refit.
Edition triggers. Any change that alters the published set (ΔG±) is a content change and MAY trigger a new edition per Context policy (see A.2.x on editions). Refit is not a content change.
- I‑LOCAL. All scope evaluation is context‑local. Cross‑context usage MUST follow §7.4.
- I‑SERIAL. Serial scope is an intersection; it cannot grow by adding dependencies.
- I‑PARALLEL. Parallel scope MAY grow by union, but only where independently supported.
- I‑WLNK. Weakest‑link applies to F and R on dependency paths; G follows set rules (∩ / ⋃).
- I‑IDS. Idempotence: intersecting or unioning a set with itself does not change it.
- I‑EMPTY. Empty scope is a first‑class value; guards MUST treat it as “not applicable”.
- Empty scope (∅). The claim/capability is currently not usable anywhere in the Context; guards MUST fail.
- Partial scope. Publishers SHOULD avoid “global” language when actual scope is thin; instead, publish explicit slices and (informatively) coverage hints to guide R assessment.
Scopes are owned and evaluated within a U.BoundedContext. State assertions (ESG/RSG) and Method–Work gates MUST NOT assume that a scope declared in another Context applies verbatim; see §7.4.
Every scope declaration and every guard MUST specify a Γ_time selector (point, window, or policy such as “rolling 180 days”) whenever time‑dependent assumptions exist. Implicit “latest” is forbidden. When Γ_time differs between contributors, serial intersection resolves the overlap.
Scope predicates SHALL name Standards/interfaces/schemas by version. Changing symbols/notations with a faithful mapping does not change G (it may change CL for the mapping and thus affect R).
Given fixed inputs (slice tuple, declared scope), the membership judgement MUST be deterministic. Guards SHALL fail closed (no membership ⇒ no use).
For empirical claims and operational capabilities, R typically binds evidence freshness windows. Scope does not decay with time; trust in the support does. Guards MAY combine “Scope covers” with “Evidence freshness holds” as separate predicates.
L‑USM‑1 (names). Use Claim scope (G) for epistemes and Work scope for capabilities. Use Scope only when discussing the abstract mechanism. Avoid naming any characteristic as “applicability,” “envelope,” “generality,” “capability envelope,” or “validity”.
L‑USM‑2 (Work/Run). Prefer Work/Run vocabulary from A.15 for system execution contexts. Do not introduce “operation/operating” as characteristic names; use Work scope.
L‑USM‑3 (Validation). “Validation/Validate” remain reserved for LA in assurance lanes (Part B). Do not name the scope characteristic “validity”.
L‑USM‑4 (Domain). “Domain” is a descriptive convenience. Scopes are evaluated on Context slices; guards SHALL reference slices, not generic “domains”.
L‑USM‑5 (First mention). On first use in a Context, include the parenthetical nick: “Claim scope (G)” to preserve the F–G–R mapping.
A scope‑aware guard has the form:
Guard := ScopeCoverage AND TimePolicy AND (EvidenceFreshness?) AND (BridgePolicy?)
Where:
- ScopeCoverage: Scope covers TargetSlice (singleton or finite set), see §7.1.
- TimePolicy: explicit Γ_time selector(s); implicit “latest” is forbidden (§8.2).
- EvidenceFreshness: optional R‑lane freshness/decay predicates; separate from ScopeCoverage (§8.5).
- BridgePolicy: required if the Scope and TargetSlice are in different Contexts; declares Bridge, CL, loss notes (§7.4).
The guard fails closed (no membership ⇒ denial), and evaluation is deterministic given the slice tuple (§8.4).
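
A hedged rendering of this guard shape as predicate composition (clause names mirror the grammar above; the syntax is illustrative, not normative):

```python
from typing import Callable, Optional

Predicate = Callable[[], bool]

def scope_guard(scope_coverage: Predicate,
                time_policy: Predicate,
                evidence_freshness: Optional[Predicate] = None,
                bridge_policy: Optional[Predicate] = None) -> bool:
    """Guard := ScopeCoverage AND TimePolicy AND (EvidenceFreshness?) AND (BridgePolicy?).
    Absent optional clauses are vacuously true; any failing clause denies (fail closed)."""
    clauses = [scope_coverage, time_policy]
    if evidence_freshness is not None:
        clauses.append(evidence_freshness)
    if bridge_policy is not None:
        clauses.append(bridge_policy)
    return all(clause() for clause in clauses)
```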
EG‑1 · ClaimScopeCoverage (mandatory). The state transition MUST include a predicate:
U.ClaimScope(episteme) covers TargetSlice
- Singleton: TargetSlice ∈ ClaimScope.
- Finite set: TargetSet ⊆ ClaimScope.
EG‑2 · Formality threshold (if required by ESG). When rigor is gated, the guard MUST reference C.2.3:
U.Formality(episteme) ≥ F_k
EG‑3 · Evidence freshness (R‑lane). If the state implies trust, a separate predicate MUST assert freshness windows for bound evidence:
Fresh(evidence, window) AND (NoExpiredBindings)
EG‑4 · Cross‑context usage.
If TargetSlice.Context ≠ episteme.Context, the guard MUST require a declared Bridge and CL:
Bridge(source=episteme.Context, target=TargetSlice.Context) AND CL ≥ c
Effect: CL penalties apply to R, not to F/G (§7.4). The ESG guard MAY also narrow the mapped Claim scope when mapping losses are known.
EG‑5 · ΔG triggers. If the transition publishes a wider Claim scope (ΔG+), the guard MUST capture the new support or the new Bridge and, if Context policy so dictates, mint a new edition (PhaseOf).
EG‑6 · Independence for SpanUnion (when claiming parallel scope). When the episteme declares a SpanUnion across independent lines, the guard MUST include an independence justification (pointer to the support partition). No independence ⇒ no union.
(Informative note.) Managers often combine EG‑1 (coverage) + EG‑2 (F threshold) + EG‑3 (freshness) for “Effective” or “Approved” states, and EG‑4 when adopting claims across Contexts.
WG‑1 · WorkScopeCoverage (mandatory). A capability can be used to deliver a Work step only if:
U.WorkScope(capability) covers JobSlice
WG‑2 · U.WorkMeasures satisfied (mandatory for deliverables).
Guards MUST bind quantitative measures that the capability promises in the JobSlice:
SLO/target measures satisfied (latency ≤ L, throughput ≥ T, tolerance ≤ ε, ...)
WG‑3 · U.QualificationWindow holds (mandatory for operational use).
Operational guards MUST assert that qualification windows (qualification/inspection/recert intervals) hold at Γ_time:
QualificationWindow(capability) holds at Γ_time
WG‑4 · Cross‑context use of capability. If the JobSlice is in another Context:
Bridge(source=capability.Context, target=JobSlice.Context) AND CL ≥ c
CL penalties affect R (confidence in deliverability), not Work scope; however, the guard SHOULD narrow the mapped Work scope to account for known mapping losses.
WG‑5 · Δ(WorkScope). When widening Work scope (new operating ranges/platforms), the guard MUST require evidence at the new slices (measures + qualification windows). Refit (e.g., new units/parametrization) requires no new evidence.
A reusable macro for Cross‑context guards:
Guard_XContext(Scope, TargetSlice) :=
exists Bridge b: (b.source = owner(Scope).Context AND b.target = TargetSlice.Context)
AND CL(b) ≥ c
AND Scope’ = translate(b, Scope)
AND Scope’ covers TargetSlice
AND (Apply CL penalty to R)
- Owner(Scope). The carrier that declares the scope: an Episteme (for U.ClaimScope) or a Capability (for U.WorkScope).
- Translate(b, Scope). The partial mapping of a set of source slices to target slices induced by Bridge b. If a source slice is unmappable, it is dropped. The result is a set of target slices; CL penalties apply to R only.
- Penalty to R. Applied per trust calculus; F and G remain as declared.
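
An illustrative Python rendering of the macro, assuming a Bridge record carries a partial slice mapping and a CL rating; the R‑penalty function is left abstract, as in the spec:

```python
from typing import Callable, Dict, FrozenSet, NamedTuple, Optional, Tuple

class Bridge(NamedTuple):
    source: str                   # source Context id
    target: str                   # target Context id
    cl: int                       # Congruence Level rating
    mapping: Dict[Tuple, Tuple]   # partial slice mapping; unmapped slices drop

def translate(bridge: Bridge, scope: FrozenSet[Tuple]) -> FrozenSet[Tuple]:
    """Map source slices to target slices; unmappable slices are dropped."""
    return frozenset(bridge.mapping[s] for s in scope if s in bridge.mapping)

def guard_xcontext(bridge: Optional[Bridge], scope: FrozenSet[Tuple], target_slice: Tuple,
                   min_cl: int, penalize_r: Callable[[int], None]) -> bool:
    """Guard_XContext: Bridge exists, CL ≥ c, translated scope covers the target;
    the CL penalty is routed to R, never to F or G."""
    if bridge is None or bridge.cl < min_cl:
        return False
    mapped = translate(bridge, scope)
    if target_slice not in mapped:
        return False
    penalize_r(bridge.cl)
    return True
```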
All ESG and Method–Work guards MUST spell out Γ_time:
- Point (“as of 2026‑03‑31T00:00Z”).
- Window (“rolling 180 days”).
- Policy (“last lab calibration within 90 days”).
Implicit “latest” is not allowed. If multiple contributors declare different policies, serial intersection computes the overlap (§8.2).
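
A minimal sketch of evaluating the three Γ_time selector forms (the type union and the policy callback are our assumptions about one convenient encoding):

```python
from datetime import datetime, timedelta
from typing import Callable, Union

GammaTime = Union[datetime,                     # point: "as of 2026-03-31T00:00Z"
                  timedelta,                    # window: "rolling 180 days"
                  Callable[[datetime], bool]]   # policy: "last calibration within 90 days"

def gamma_time_holds(selector: GammaTime, event_time: datetime, now: datetime) -> bool:
    """Evaluate an explicit Γ_time selector; there is deliberately no 'latest' default."""
    if isinstance(selector, datetime):
        return event_time <= selector           # valid as of the stated point
    if isinstance(selector, timedelta):
        return now - event_time <= selector     # inside the rolling window
    return selector(event_time)                 # named policy predicate
```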
| ID | Requirement |
|---|---|
| CC‑USM‑1 (Declaration). | Epistemes SHALL declare U.ClaimScope, capabilities SHALL declare U.WorkScope. The abstract U.Scope MAY be used in architectural notes but not in guards. |
| CC‑USM‑2 (Set‑valued). | Scope characteristics are set‑valued over U.ContextSlice. Implementations MUST support membership, intersection, SpanUnion, translate, widen/narrow, refit. |
| CC‑USM‑3 (Coverage guards). | ESG and Method–Work guards MUST use Scope covers TargetSlice predicates and MUST specify Γ_time. Guards fail closed. |
| CC‑USM‑4 (Serial intersection). | Along essential dependency paths, effective scope SHALL be the intersection; empty intersection invalidates the path. |
| CC‑USM‑5 (SpanUnion constraints). | Parallel scope MAY use SpanUnion only if independent support lines are justified; published union MUST NOT exceed supported slices. |
| CC‑USM‑6 (Cross‑context). | Any Cross‑context use MUST declare a Bridge and CL; CL penalties apply to R, not F/G. |
| CC‑USM‑7 (No synonym drift). | In normative text and guards, MUST use Claim scope (G) or Work scope. Terms “applicability/envelope/generality/capability envelope/validity” MUST NOT name the characteristic. |
| CC‑USM‑8 (Determinism). | Membership evaluation MUST be deterministic given the slice tuple; no heuristic “close enough” matching. |
| CC‑USM‑9 (Edition triggers). | ΔG± (widen/narrow) constitutes a content change; refit does not. Contexts MAY require a new edition when published scope changes. |
| CC‑USM‑10 (Separation). | Scope coverage checks and evidence freshness/assurance checks MUST be separate predicates (G vs R). |
| CC‑USM‑11 (Versioned Standards). | Scope predicates SHALL name Standards/interfaces by version; changes in notations with faithful mapping do not change G (may change CL for R). |
| CC‑USM‑12 (Min‑info publication). | Published scopes SHOULD enumerate slices or predicate blocks sufficient to re‑evaluate membership without external folklore. |
Each example declares the Context, the scope, the target slice, and shows the guard outcome. Where relevant, serial intersection, SpanUnion, and Bridge & CL are illustrated.
- Context: MaterialsLab@2026.
- Episteme: claim “Adhesive X retains ≥85 % tensile strength on Al6061 for 2 h at 120–150 °C.”
- Claim scope (G): {substrate=Al6061, temp∈[120,150]°C, dwell≤2h, Γ_time = window(1y), rig=Calib‑v3}.
- Target slice: {substrate=Al6061, temp=140 °C, dwell=90 min, Γ_time=2026‑04‑02, rig=Calib‑v3}.
- Guard (EG‑1, EG‑2): covers(TargetSlice) true; U.Formality ≥ F4 true (predicates in spec).
- Outcome: state transition allowed (freshness checked separately under R).
- Target Context: AssemblyFloor@EU‑PLANT‑B.
- Bridge: declared mapping of rigs and temp measurement correction; CL=2 (loss: ±2 °C bias).
- Mapped Claim scope: translate(Bridge, G) narrows temp to [122,148]°C.
- Guard (EG‑4): Bridge present, CL≥2 true; R is penalized per Φ(CL).
- Outcome: allowed; G remains the mapped set; R lowered.
- Context: RobotCell‑Weld@2026.
- Capability: “Weld seam W at bead width 2.5 ± 0.3 mm, cycle ≤ 12 s.”
- Work scope: {humidity<60 %, current∈[35,45]A, wire=ER70S‑6, Γ_time=rolling(90d), controller=FW‑2.1}.
- Job slice: {humidity=55 %, current=40A, wire=ER70S‑6, Γ_time=now, controller=FW‑2.1}.
- Guards (WG‑1..3): coverage true; measures satisfied; qualification window true (controller certified 60 d ago).
- Outcome: capability admitted for this Work.
- Claim A (API Standard): v2.3 request schema with constraint “idempotent under retry”.
- Claim B (Dataset cohort): “metrics valid for cohort K with schema ds‑14”.
- Composition: service S depends on both A and B → serial intersection of Claim scopes: {api=v2.3} ∩ {cohort=K, schema=ds‑14}.
- Target slice: {api=v2.3, cohort=K, schema=ds‑14} → membership true.
- Any drift (e.g., ds‑15) empties the intersection ⇒ path inapplicable.
- Line L1: tests on dry asphalt support braking property; scope S1={surface=dry, speed≤50 km/h}.
- Line L2: simulations for wet asphalt; scope S2={surface=wet, speed≤40 km/h}.
- Published scope: SpanUnion({S1,S2})={(dry, ≤50), (wet, ≤40)} with independence note (L1 empirical, L2 model‑validated).
- Guard: allowed; union does not include (wet, 45) because not supported.
- Model claim: “AUC ≥ 0.92 on cohort K, pipeline P, features F, Γ_time=rolling(180d).”
- Claim scope: {cohort=K, pipeline=P, features=F, Γ_time=rolling(180d)}.
- Target Context: product On‑Device@v7, features F’ (subset), pipeline P’.
- Bridge: declared mapping F→F’, P→P’, CL=1 (notably lossy).
- Guard: Bridge present; translate(G) covers a strict subset; CL=1 penalizes R strongly; ESG requires F≥F5 (executable semantics) and freshness < 90 d.
- Outcome: allowed only for the covered subset; adoption flagged with reduced R.
- Name the TargetSlice. Write the tuple (Context, versions, environment params, Γ_time).
- Check scope coverage. “Claim/Work scope covers TargetSlice?” If no, either ΔG+ (publish wider scope with support) or decline.
- Check rigor if gated. If ESG requires it, ensure U.Formality ≥ F_k.
- Check evidence freshness (R). Validate windows/decay policies; do not conflate with coverage.
- Bridge if cross‑context. Require declared Bridge, CL, and loss notes; accept R penalties.
- Record the decision. Keep the slice and guard outcomes with the StateAssertion (auditability).
- Prefer predicates over prose. Name parameters, ranges, Standards by version, and Γ_time.
- Factor common conditions. Use Refit to normalize units and factor shared predicates; do not widen by stealth.
- Partition support lines. If you plan a SpanUnion, document independence up front.
- Keep scope thin & honest. Publish what you can support; add slices as support appears (ΔG+).
- Design Bridges early. When interop is planned, sketch mapping characteristics and expected CL; plan R penalties.
| Anti‑pattern | Why it’s wrong | Fix |
|---|---|---|
| “Latest” time by default | Non‑deterministic; violates §8.2 | Declare Γ_time explicitly (point/window/policy) |
| Using “domain” in guards | Not addressable; hides slices | Replace with concrete U.ContextSlice tuples |
| Treating “more abstract wording” as wider scope | Abstraction ≠ applicability | Keep AT/D separate; widen G only with explicit ΔG+ |
| Publishing union without independence | Overstates coverage | Justify independence or publish serial intersection only |
| Cross‑context use without Bridge | Silent semantic drift | Require Bridge + CL; apply R penalties |
```yaml
claimScope:
  Context: MaterialsLab@2026
  Standards:
    - rig: Calib-v3
    - api: v2.3
  env:
    substrate: Al6061
    temp: [120, 150]        # °C
    dwell: { max: "2h" }
  gamma_time: { window_days: 365 }
```
(Illustrative only; the specification does not mandate a particular syntax.)
Contexts that adopt USM SHALL record, per scope‑aware decision:
- Owner. Episteme (for Claim scope) or Capability (for Work scope).
- TargetSlice tuple. Context, vocab/roles, versioned Standards, environment selectors, Γ_time.
- Guard outcomes. Membership result, bound measures (for Work scope), freshness predicates (R).
- Bridge info (if any). Mapping summary, CL, loss notes, applied R penalty.
- ΔG log. Widen/narrow/refit; edition policy outcome.
- USM‑Ready. Context declares adoption; editors trained; lexicon updated.
- USM‑Guarded. All ESG/Method–Work guards use Claim/Work scope and Γ_time.
- USM‑Auditable. Decision records include TargetSlice tuples and Bridge/CL details.
- USM‑Composed. Serial intersection and SpanUnion are implemented in composition tooling.
- Does each guard name a concrete TargetSlice?
- Is membership deterministically recomputable from published predicates?
- Are freshness and coverage separate predicates?
- For Cross‑context use: is there a Bridge with CL and loss notes?
- For parallel support: is independence justified?
- Silent widening. Require ΔG+ review; flag any scope increase without new support/Bridge.
- Opaque slices. Disallow “domain” placeholders; enforce addressable selectors.
- Time drift. Require Γ_time policies (rolling windows) for time‑sensitive scopes.
- G is Claim scope. Use set algebra (∩ / SpanUnion).
- F remains the expression rigor (C.2.3); R captures evidence freshness and CL penalties.
- Weakest‑link. On dependency paths: F_composite = min(F), R_composite = min(R); G follows §7.2–§7.3 (set rules).
- No conflation. Raising F does not change G unless scope predicates change.
- Guarding rigor. ESG may use U.Formality ≥ F_k alongside scope coverage.
- Work scope aligns with the execution context of U.Work.
- Method–Work gates use Work scope coverage plus measures and qualification windows.
- CL only impacts R. CL penalties reduce trust; they never rewrite F or G.
- Best practice. Narrow mapped scopes where mapping losses are material.
- Capabilities MUST declare Work scope, measures, qualification windows; gates MUST verify all three.
- Capability refits that preserve the set (unit changes) are Refit, not Δ(WorkScope).
Q1. Is “Claim scope” the same as “domain”?
No. “Domain” is descriptive and often fuzzy. Claim scope is addressable: it names concrete U.ContextSlice conditions and a Γ_time policy. Guards MUST reference slices, not generic “domains”.
Q2. How do we express partial coverage across different cohorts or platforms?
Declare each supported serial scope (S₁, S₂, …) and publish SpanUnion({Sᵢ}) with independence justification. Do not include unsupported slices.
Q3. Can raising F (formalizing) widen G?
Only if the formalization explicitly changes the scope predicates (ΔG+). Formalization alone does not widen scope.
Q4. What is the difference between Work scope and SLOs?
Work scope is where the capability can deliver; measures within the guard are what it promises there (SLO targets). Both are required at use time (WG‑1..3).
Q5. Can we assign numeric coverage to G?
Not normatively. G is set‑valued. You MAY attach informative coverage metrics (e.g., proportions) to aid R assessment, but guards use set membership.
Q6. How do we handle “latest data” scopes?
You don’t. Declare a Γ_time policy (e.g., rolling 90 days). “Latest” is forbidden to ensure reproducible evaluation.
Q7. How do we move a scope to another Context?
Declare a Bridge with CL and loss notes; compute translate(Bridge, Scope); apply CL penalty to R; consider narrowing the mapped set.
Q8. What about abstraction level or detail?
Keep AT (AbstractionTier) and D (Detail/Resolution) as orthogonal, optional annotations. They never substitute for Claim/Work scope.
Q9. Can a capability’s Work scope be broader than an upstream claim’s Claim scope?
They are on different carriers. In a serial dependency, the effective scope is the intersection; the broader one does not dominate.
Q10. When does an empty scope make sense?
It indicates “not usable anywhere (here, now)”. Guards MUST fail. This is common during early drafting or after a refutation.
| Legacy wording | USM term |
|---|---|
| applicability (of a claim) | Claim scope (G) |
| envelope (of a requirement/spec) | Claim scope |
| generality G | Claim scope (G) |
| capability envelope | Work scope |
| validity (as a characteristic name) | Claim scope or Work scope (depending on carrier) |
| operational applicability | Work scope |
(Use legacy terms only in explanatory notes; not in guards or conformance text.)
ContextSlice tuple (suggested keys):
Context, vocabId, rolesetId?, Standards: [{name, version}], env: {param: range/value}, gamma_time: {point|window|policy}.
Claim scope block:
assumptions, cohorts, platforms/Standards, env, gamma_time.
Work scope block:
conditions (env/platform/Standards), measures (targets & units), validity_windows, gamma_time.
(These are informative; the spec does not mandate a concrete serialization.)
```python
from typing import Set, Union

Slice = tuple  # a ContextSlice key, e.g. (context, standards, env, gamma_time)

def covers(scope: Set[Slice], target: Union[Slice, Set[Slice]]) -> bool:
    if isinstance(target, Slice):    # singleton: TargetSlice ∈ Scope
        return target in scope
    return target.issubset(scope)    # finite set: TargetSet ⊆ Scope
```

Intent. This annex applies the F‑cluster method to triangulate USM terms against a diverse set of post‑2015 sources and communities (“Contexts”), and then fixes the Unified Tech and Plain names used in A.2.6. Results are ready for downstream lexicon entries (Part E) and guard templates (ESG / Method–Work).
Contexts surveyed (SoTA, diverse):
- ISO/IEC/IEEE 42010 (architecture description)
- OMG Essence (Kernel: Alphas, Work Products, States)
- NIST AI RMF 1.0/1.1 (trustworthy AI)
- ASME V&V 40–2018 / FDA 2021–2023 (model credibility)
- W3C SHACL (2017+) / SHACL‑AF (data constraints)
- OWL 2 / ontology engineering (2012+, current practice)
- IETF BCP 14 (RFC 2119/8174) (normative keywords & guard style)
- DO‑178C + DO‑333 (avionics, formal methods supplement)
- ISO 26262:2018/2025 (automotive functional safety)
- IEC 61508 (2010+, current revisions) (basic safety)
- ACM Artifact Review & Badging v1.1 (reproducibility signals)
- MLOps/Cloud SLO practice (SRE / platform) (operational guardrails)
Survey focus (terms we align): U.ContextSlice, generic Scope and set algebra, Claim scope (G), Work scope, Bridge & CL, Γ_time, widen/narrow/refit/translate, SpanUnion / serial intersection, separation from F and R, avoidance of overloaded validity/operation terms.
| # | Context / Source | Local label(s) (native) | Closest USM concept | Notes on fit & deltas |
|---|---|---|---|---|
| 1 | ISO/IEC/IEEE 42010 | Architecture context; environment; stakeholder concerns; viewpoints/views | ContextSlice (addressable slice); Scope as view‑specific applicability | 42010 is about views in context; it has no first‑class set‑valued scope char but aligns with “evaluate in a concrete context” → USM uses explicit slice tuples. |
| 2 | OMG Essence | Alpha State; Work Product State; Level of Detail (LoD) | Work scope (guards), Detail (D) (LoD), ESG/RSG | Essence separates status (states) and work evidence; LoD is detail, not scope. USM treats scope as guardable membership over slices; states/LoD map to ESG & D, not to G. |
| 3 | NIST AI RMF | Context of use; validity, reliability, robustness; monitoring | Claim scope (G); R freshness/monitoring | “Context of use” = where a claim/model holds → maps to G. “Validity” is part of R vocabulary; we avoid naming the characteristic “validity” to prevent LA confusion. |
| 4 | ASME V&V 40 / FDA | Context of use; credibility factors; verification/validation | Claim scope (G); R (credibility) | Direct fit for G via “context of use”. Credibility/evidence freshness contribute to R, not to G; USM keeps them separate in guards. |
| 5 | W3C SHACL | Shapes; targets (sh:targetClass, sh:target); constraints | Claim scope (targets define where constraints apply); F≥4 (predicate form) | SHACL “target” ≈ membership predicate on a dataset context; perfect analogue of Claim scope on data slices; constraint language supports F4‑style predicates. |
| 6 | OWL 2 practice | Class extension; domain/range; imports/version IRI | Claim scope as class extension over an ontology context | Class extension is set‑semantics by design; G naturally maps to extension over a versioned ontology (part of ContextSlice). |
| 7 | IETF BCP 14 | MUST/SHALL/SHOULD; requirements language | Guard style (observable predicates) | BCP 14 doesn’t define scope but dictates how guards are worded; USM aligns by requiring observable, deterministic membership checks. |
| 8 | DO‑178C / DO‑333 | Operational conditions; DAL; formal method objectives; TQL | Work scope (operating conditions); F (proof‑grade), R (assurance objectives) | Operational applicability = Work scope; formal method objectives lift F; Tool qualification impacts TA/R, not G. |
| 9 | ISO 26262 | Operational situation & operating modes; ASIL; OSED | Work scope (operating modes/situations) | OSED/operating modes define where capability can be exercised → Work scope. Assurance level (ASIL) relates to R, not G. |
| 10 | IEC 61508 | SIL; demand mode; proof test interval | Work scope (demand vs continuous mode) + R freshness | Mode concepts influence where/how a function can be claimed → Work scope; proof test interval sits in R (freshness/decay). |
| 11 | ACM Artifacts | Available/Evaluated/Reusable; Reproduced/Replicated | R signals; ContextSlice (reproduction environment) | Badges encode evidence availability/strength; the declared environment maps to a slice; scope of claim is often implicit → USM makes it explicit. |
| 12 | SRE / Cloud SLO | SLOs; error budgets; regions/tiers; rollout windows | Work scope (regions/tiers/windows) + measures; Γ_time policies | SLOs attach measures within a Work scope (region/tier/time window); perfect fit for USM Method–Work guards (WG‑1..3). |
Summary. Across all Contexts, two stable notions recur: (1) evaluate in a concrete context (→ U.ContextSlice), and (2) declare where something holds/is deliverable (→ set‑valued Scope). “Context of use,” “operating modes,” “targets,” “class extension,” and “OSED” are all Context‑flavored presentations of Claim scope or Work scope. Terms like validity and operation are semantically close but collide with LA and FPF’s Work/Run lexicon; we therefore do not adopt them as characteristic names.
| Concept in A.2.6 | Unified Tech (lexicon) | Unified Plain (manager‑friendly) | Allowed short form | Deprecated / avoid |
|---|---|---|---|---|
| Addressable evaluation context | U.ContextSlice | Context slice | Slice (when local) | “domain” (as guard input), “latest” time |
| Abstract mechanism (set‑valued) | U.Scope | Scope | — | “applicability”, “envelope”, “validity” (as characteristic names) |
| Episteme applicability | U.ClaimScope (nick G) | Claim scope | G | “generality”, “applicability/envelope (of claim)” |
| Capability applicability | U.WorkScope | Work scope | — | “capability envelope”, “operational applicability”, “operation scope” |
| Time selector | Γ_time | Time selector | — | implicit “latest” |
| Cross‑context mapping | Bridge + CL | Bridge + congruence level | CL | silent reuse across Contexts |
| Parallel coverage | SpanUnion | Union of supported areas | — | unqualified “union” without independence |
| Serial dependency | Intersection | Intersection of scopes | — | ordinal “more/less general” language |
| Scope edits | ΔG+ (widen), ΔG− (narrow), Refit, Translate | Widen, narrow, refit, translate | — | stealth widening (“it’s obvious”) |
| Optional didactics | U.Detail (D), U.AbstractionTier (AT) | Detail / abstraction tier | D / AT | using AT/D as G substitutes |
Why these names (decision grounds):
- “Scope” wins over “envelope/applicability/validity”. It is short, self‑documenting, and already idiomatic in SRE/SW, while “validity” clashes with Validation Assurance (LA) and “envelope” suggests geometry, not membership.
- “Claim scope” vs “Work scope”. Two‑word compounds meet the FPF clarity rule: the first token reveals the carrier (Claim vs Work/Capability), the second the mechanism (scope).
- Keep G. The F–G–R triple is canonical; we retain G as nickname for Claim scope.
- “Context slice” is the only term that makes the evaluation target addressable (Context, versions, params, Γ_time).
- “Operation/operating/validity” avoided. They are overloaded in existing FPF lanes (Work/Run, LA) and create policy ambiguities in guards.
- Use “Claim scope (G) covers TargetSlice” and “Work scope covers JobSlice” in guards.
- Always spell Γ_time; never say “latest”.
- To compose, say: “intersection along dependency paths; SpanUnion across independent support lines.”
- For Cross‑context use, say: “via Bridge; CL penalties apply to R (trust), not to F/G (content/scope).”
- When widening/narrowing, write “ΔG+ / ΔG−” and log the support change; use “Refit” for unit/param normalization.
| local context phrase | Use in USM wording |
|---|---|
| “Context of use” (NIST, ASME/FDA) | Claim scope (G) on explicit Context slice |
| “Operating modes/situations” (ISO 26262) | Work scope with measures & qualification windows |
| “Target (class/shape)” (SHACL/OWL) | Claim scope predicates (membership) |
| “Architecture view context” (42010) | Context slice + Scope checks inside the view |
| “Capability envelope” (legacy safety docs) | Work scope |
| “Domain” (informal) | Context slice elements; not acceptable as a guard input |
Outcome. The UTS shows strong convergence across SoTA Contexts on addressable context and set‑valued applicability. F.18 therefore fixes: Context slice, Scope, Claim scope (G), Work scope, with the algebra and guard clauses mandated in A.2.6. This closes synonym drift while remaining readable for engineering managers and precise for assurance tooling.
Establish a single, substrate‑neutral way to say who acts, under which role, according to which description, by which capability, and what actually happened—without “self‑magic” and without blurring design‑time and run‑time. The pattern fixes the Transformer Quartet so all kernel and Γ‑patterns reuse the same four anchors. It builds directly on Holon‑Role Duality (A.2) and Temporal Duality (A.4) and is guarded by Strict Distinction (A.7) and Evidence Anchoring (A.10).
- Holonic substrate. FPF separates what things are (Holon → {System, Episteme, …}) from what they are being right now via roles. Only systems can bear behavioural roles and execute methods/work; epistemes are changed via their symbol carriers.
- Role as mask; behaviour as method/work. A role is a mask, not behaviour; behaviour is a Method (order‑sensitive capability) that may be executed as Work (dated occurrence).
- Design‑time vs run‑time. A holon’s states belong to disjoint scopes Tᴰ and Tᴿ; transitions are physically grounded by a system bearing TransformerRole.
- Evidence & carriers. Claims about outcomes must anchor to carriers (SCR/RSCR) and to an external evidencing transformer.
Legacy phrasing (“actor / process / blueprint”) causes recurrent failures:
- Self‑magic: “the system configures itself” (no external acting side, causality lost).
- Plan = event: blueprint/algorithm reported as if execution happened.
- Capability = result: possession of a method counted as evidence of work.
- Episteme as doer: documents/models treated as actors.
- Scope leak: design‑time and run‑time mixed; run traces lack carriers/method ties.

A.2/A.4/A.7/A.10 collectively forbid these, but A.3 must give the canonical quartet that authors can apply consistently.
| Force | Tension |
|---|---|
| Identity vs behaviour | Keep holon identity stable while roles/behaviours change. |
| Simplicity vs precision | Managers want one “process” box; kernel must keep MethodDescription / Method / Work distinct. |
| Universality vs idioms | Pumps, proofs, and data‑pipelines must read the same, yet allow domain names. |
| Design vs run | No overlap of Tᴰ and Tᴿ; bridges explicit and causal. |
| Evidence vs mereology | Provenance edges (EPV‑DAG) must never turn into part‑whole edges. |
A.3 defines four anchors, tied together by Role Assignment (U.RoleAssignment) and aligned with Temporal Duality.
- Acting side: a system bearing TransformerRole — the only holon kind allowed to enact transformations (behavioural role). Canonical phrase: “system bearing TransformerRole”. Local shorthand: after explicit binding in the same subsection, you MAY write “Transformer” to denote that same system; re‑bind on context change and do not use shorthand where the domain already has a conflicting “transformer” term.
- MethodDescription (design‑time description): protocol / algorithm / SOP / script — all are idioms of MethodDescription; they live in Tᴰ and are epistemes with carriers (SCR/RSCR).
- Method (design‑time capability): order‑sensitive composition the system can enact at run‑time (Γ_method); it is not an occurrence.
- Work (run‑time occurrence): dated execution producing state change and consuming resources (Γ_work); every Work isExecutionOf exactly one MethodDescription version and is performedBy exactly one performer (possibly a collective system).
Safe memory line: MethodDescription → (describes) Method → (executed as) Work. Roles are masks (A.2/A.7); methods/work are behaviour.
Use the universal assignment to state who plays which role where and when:
U.RoleAssignment(
holder : U.System, -- the acting system (bearer)
role : U.TransformerRole, -- behavioural role
context : U.BoundedContext, -- semantic boundary
timespan?: Interval -- optional validity window
)
- A role is local to context and time‑indexed.
- The same system may bear multiple roles if the context allows compatibility.
- For epistemes, the target of change is their symbol carriers; the acting side is still a system.
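
For illustration only, a hedged Python rendering of this assignment record (the field types and the `Interval` placeholder are our assumptions; the normative Standard is A.2.1):

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional, Tuple

Interval = Tuple[datetime, datetime]     # placeholder validity window

@dataclass(frozen=True)
class RoleAssignment:
    holder: str                          # the acting system (bearer), e.g. "PumpUnit#3"
    role: str                            # behavioural role, e.g. "TransformerRole"
    context: str                         # bounded-context id
    timespan: Optional[Interval] = None  # optional validity window

    def active_at(self, t: datetime) -> bool:
        """A role is local to its context and time-indexed."""
        return self.timespan is None or self.timespan[0] <= t <= self.timespan[1]
```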
Every transformation is modelled with two sides and an explicit U.Interaction boundary: acting (system bearing TransformerRole) and target (system being transformed, or the carrier of an episteme). There is no self‑doing; “self‑like” stories are handled by the reflexive split (regulator vs regulated subsystems) or by promoting a meta‑holon and keeping evidence external (A.12).
- MethodDescription lives in Tᴰ;
- Method is defined at design‑time and executed as U.Work at run‑time by a U.System with a valid U.RoleAssignment (window‑aligned) and a live StateAssertion for an enactable RSG state;
- Work lives in Tᴿ;
- transitions Tᴰ → Tᴿ and Tᴿ → Tᴰ are grounded by executions of appropriate methods by an external transformer (e.g., fabrication or observation).
Each Work anchors to carriers and to the MethodDescription it instantiates; evidencing transformers are external (no self‑evidence). This sits in the EPV‑DAG and never in mereology.
- “Process / Workflow / SOP / Algorithm” ⇒ MethodDescription (design‑time description).
- “Operation / Job / Run / Performance” ⇒ Work (run‑time occurrence).
- “Function (equipment spec)” ⇒ Method (or MethodDescription if purely textual).
- “Creator” (legacy) ⇒ Transformer (shorthand for system bearing TransformerRole after local binding).
6.1 Physical system — Cooling loop
PumpUnit#3 (system bearing TransformerRole) executes ChannelFluid (Method) as per centrifugal_pump_curve.ld (MethodDescription), producing run‑2025‑08‑08‑T14:03 (Work, 3.6 kWh; ΔT=6 K). Evidence goes to carriers in SCR; resource spend goes to Γ_work.
6.2 Epistemic change — Proof revision
LeanServer (system bearing TransformerRole) edits proof_tactic.lean (carrier) per MethodDescription; lemma‑42‑check‑2025‑08‑08 is Work; the episteme (theorem) changes through its carriers; evidence is attributed to the external transformer.
6.3 Reflexive maintenance — “calibrates itself” Split into Regulator (calibration module, acting side) and Regulated (sensor suite, target) with an interaction boundary; credit evidence to the regulator; no self‑evidence.
CC‑A3‑0 · U.RoleAssignment presence.
Every claim that a holon “performs a transformation” MUST be backed by at least one RoleAssignment triple:
U.RoleAssignment(holder: U.Holon, role: U.Role=TransformerRole, context: U.BoundedContext, timespan?).
This is the canonical way to say who acts, in which role, where (semantically), and when. See A.2.1 for the universal U.RoleAssignment Standard and its invariants.
CC‑A3‑1 · External transformer discipline.
The bearer of TransformerRole MUST NOT be the same model instance as the object‑under‑change within the same assignment. Self-modification is modelled via two U.RoleAssignments (same holder playing two roles) or via an explicit controller–plant split. This upholds Agent Externalization (A.12).
CC‑A3‑2 · Design–Run separation.
U.MethodDescription (recipe, definition) is a design‑time artefact; U.Method (mask‑of‑work) and U.Work (executed work) are run‑time. It is non‑conformant to mutate a MethodDescription inside a Work log or to treat a Work as a MethodDescription. This enforces the kernel’s Temporal Duality (A.4) and the A.15 alignment.
CC‑A3‑3 · Boundary‑crossing evidence. A conformant transformation that changes a system’s state MUST reference the boundary effects it induces: interactions, flows, or state transitions attach to the target system’s boundary (per Γ‑defaults for additive, min/AND/OR folds). Conservation‑class effects MUST satisfy B‑invariants (e.g., B‑1 Conservation).
CC‑A3‑4 · Method ←→ Work traceability.
Every U.Work MUST (i) name the U.Method it instantiates and (ii) trace the U.MethodDescription it claims to follow (versioned). If a deviation occurs, it MUST be logged as a policy override or exception path; silent drift is non‑conformant. (A.15 guards the vocabulary; Γ_work aggregates resource deltas.)
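
A minimal sketch of a Work record that satisfies this traceability rule (field names are illustrative, not mandated by CC‑A3‑4):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass(frozen=True)
class WorkRecord:
    work_id: str
    method: str                      # (i) the U.Method this Work instantiates
    method_description: str          # (ii) the U.MethodDescription it claims to follow
    description_version: str         # versioned, so drift is detectable
    performed_by: str                # the RoleAssignment-backed performer
    deviations: List[str] = field(default_factory=list)  # logged overrides/exceptions

    def traceable(self) -> bool:
        """Both links must be present; deviations are logged, never silent."""
        return bool(self.method and self.method_description and self.description_version)
```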
CC‑A3‑5 · Episteme as object‑under‑change. When the changed holon is an episteme (document, dataset, theory), the transformer is still a system; the episteme’s history MUST be recorded via PhaseOf (versioning) and ConstituentOf/PortionOf as appropriate (not via component trees). See A.14’s mereology firewalls and Γ_epist hooks.
CC‑A3‑6 · Units and measures for resource effects.
Any resource consumption/production in U.Work MUST specify the measure μ and units (e.g., kg, J, bytes); “percentage” effects MUST be grounded in a PortionOf μ to be Γ‑aggregatable. (A.14 POR‑axioms; Γ_work usage.)
CC‑A3‑7 · Provenance minimum.
For each U.RoleAssignment and U.Work, the following fields are REQUIRED: {authority?, justification?, provenance?} where justification: U.Episteme and provenance: U.Method/process evidence. This aligns with the kernel’s governance and B‑cluster lineage practices.
CC‑A3‑8 · Policy–Plan–Action separation for agentic cases.
If the transformer bearer is agentic, the log MUST separate D.Policy → U.PlannedAction → U.Action (A.15/A.13), preserving where failure occurred (strategy, plan, or execution).
CC‑A3‑9 · Context‑local conflicts.
Conflicts among roles (including TransformerRole) are only within the same bounded context; cross‑context differences are admissible if bridges are declared. Non‑conformance arises only when a context’s own incompatibility rules are violated. (A.2.1 U.RoleAssignment invariants.)
CC‑A3‑10 · Γ‑compatibility. Descriptions MUST be sufficient for the relevant Γ‑aggregations to run: Γ_method for recipe composition, Γ_work for resource deltas, Γ_sys for boundary integration, Γ_time for ordering. Each Γ flavour declares its A.14 hooks (Portion/Phase) and inherits B‑invariants.
Benefits
- Explainability by construction. Every transformative claim carries who/what/when/why/how via U.RoleAssignment + provenance fields; audits become mechanical rather than heroic. (B‑invariants and Γ tables do the heavy lifting.)
- No category errors. Keeping methods/roles out of mereology and enforcing design/run separation prevents the usual “process‑as‑part” and “version‑as‑component” mistakes. (A.14 + A.15.)
- Composable analytics. With measures and boundary folds explicit, cross‑scale proofs (Σ/Π/min/∧/∨) are predictable.
- Contextual pluralism without chaos. Divergent domain practices coexist as distinct bounded contexts with bridges; disagreements are localised and tractable.
Trade‑offs
- More declarations up‑front. U.RoleAssignment + units + policy/plan/action feels verbose, but yields deterministic Γ‑runs and reproducible audits.
- Discipline for “self‑modifiers.” Modellers must split controller vs plant or dual‑role the same carrier; this adds one line but avoids hidden identity conflations.
Constructor theory (post‑2015).
Our Transformer Principle mirrors constructor theory’s shift from dynamics to tasks: what transformations are possible vs impossible, and why. By making the transformer (constructor) an explicit bearer of a role and keeping recipes as MethodDescription, A.3 captures the core “tasks & constructors” distinction and aligns with constructor‑theoretic thermodynamics linking work, heat, and informational constraints. (Royal Society Publishing, arXiv, Constructor Theory)
Active inference & free‑energy mechanics (2017→).
Where transformers are agentic, A.3’s policy–plan–action split and boundary‑centred accounting dovetail with active inference and free‑energy formulations of self‑organising systems (Markov blankets; Bayesian mechanics). This legitimises U.Objective/cost function links and makes design–run duality natural (prior vs posterior policies). (MIT Press Direct, PubMed, arXiv)
Provenance and FAIR packaging (2016→). Provenance minima in CC‑A3‑7 reflect FAIR principles (machine‑actionable reuse), RO‑Crate (methods+data+context packaged together), and operational lineage standards such as OpenLineage and ML Metadata (TFX) that treat artefacts, runs, and jobs as first‑class, with typed facets and versioning — exactly what enactment + Γ_work need. (Nature, researchobject.org, SAGE Journals, openlineage.io, GitHub, arXiv)
Together, these lines of work argue for explicit role‑bearing transformers, recipe/run separation, boundary‑grounded deltas, and traceable contexts — the four pillars that CC‑A3 enforces.
A.7 Strict Distinction.
A.3 operationalises A.7 by keeping object ≠ description ≠ observation:
object = target holon; description = MethodDescription; observation/log = Work. Violations (e.g., treating a recipe as a part) are non‑conformant and usually surface as Γ failures.
A.12 Agent Externalization & External Transformer. A.3’s CC‑A3‑1 is the mechanical guard‑rail for A.12: even in self‑modification, the modelling split keeps the agent (transformer bearer) distinct from the object‑under‑change.
A.13 Agential Role.
When the bearer is an Agent, A.3 defers identity and states management to Agent‑CAL (U.Agent, U.Intent, U.Action), while still requiring RoleAssigning + Γ compatibility. This is where policy/plan/action pipelines live.
A.15 Role–Method–Work Alignment. A.3 relies on A.15’s vocabulary guard‑rails (roles are not parts; methods are masks of work; specs are recipes). CC‑A3‑2/‑4 are enforceable precisely because A.15 fixes the naming discipline.
A.14 Advanced Mereology. A.3 consumes A.14’s PortionOf (for quantitative deltas) and PhaseOf (for versioning) and forbids role/recipe leakage into part–whole trees. Γ‑flavours declare which A.14 hooks they use.
B‑cluster (Γ‑sections). A.3 is executable only because Γ‑operators provide aggregation and invariants:
- Γ_sys enforces boundary folds and conservation;
- Γ_epist preserves document/data provenance and versioning;
- Γ_time orders work;
- Γ_method composes recipes;
- Γ_work accounts resource deltas; each inherits B‑invariants (e.g., B‑1 Conservation, B‑2 No‑Duplication).
Indexing to the glossary. Terms used here (TransformerRole, Work, Method, MethodDescription, PortionOf, PhaseOf, BoundedContext) remain exactly as defined in Annex A; see A.1/A.2/A.14/A.15 entries for lexical registers.
Teams must talk about how something is done without entangling:
- Who is assigned (that is Role/RoleAssigning),
- Whether the holder can do it (that is Capability), and
- What actually happened (that is Work).
U.Method supplies the how—the abstract way of performing a transformation, independent of a specific run, a specific assignee, or a specific notation. It works across paradigms:
- Imperative (step‑graphs, SOPs, BPMN),
- Functional (pure mappings and compositions, no “steps”),
- Logical/constraint/optimization (goals, rules, admissible solutions).
In FPF, a system bearing a TransformerRole enacts a U.Method (producing Work) by following a MethodDescription—an episteme that describes the method in some representation.
- Process soup. “Process” gets used for recipe, execution, schedule, or org area. Planning, staffing, and audit blur together.
- Spec = run fallacy. A flowchart (or code) is taken as if execution already happened; conversely, logs get mistaken for the recipe.
- Role leakage. People encode assignments inside the recipe (“this step is the surgeon”), tying who to how and making reuse impossible.
- Notation lock‑in. When “method” is defined as “a set of steps,” functional or logical styles become second‑class citizens and cannot be modeled cleanly.
| Force | Tension we resolve |
|---|---|
| Universality vs. specificity | One notion must cover welding, ETL, proofs, and schedulers, while letting each domain keep its idioms. |
| Representation vs. semantics | Many notations express the same “way of doing”; we need one semantic anchor across specs. |
| Reusability vs. assignment | The how should be reusable regardless of who is assigned this time. |
| Compositionality vs. executability | Methods compose (serial/parallel/choice/iteration), but execution may diverge due to conditions/failures. |
| Determinism vs. search | Methods may be deterministic algorithms or constraint problems with admissible solution sets. |
U.Method is a context‑defined abstract transformation type—the semantic “way of doing” a kind of work.
It is:
Described (never identical) by one or more U.MethodDescription epistemes (code/SOP/diagram/rules),
Enacted by a U.System bearing an appropriate Role (usually a TransformerRole) to produce U.Work, and
Independent of who is assigned, what instance ran, or which notation was used.
Strict Distinction (didactic):
- Method = how in principle (semantic Standard).
- MethodDescription = how it is written (artifact on a carrier).
- Work = how it actually went this time (dated execution).
A U.Method does not require an imperative step structure. Representations live in U.MethodDescription, not in the Method itself.
Typical MethodDescription forms include:
- Imperative MethodDescription: step‑graph/flow (serial/parallel/branch).
- Functional MethodDescription: a composition f ∘ g ∘ h with typed interfaces/constraints, no “steps”.
- Logical/constraint MethodDescription: a goal/constraint set with admissible solutions and search/optimization semantics.
- Hybrid MethodDescription: imperative scaffolding with functional kernels and/or solver calls.
Semantic identity criterion (context‑local). Two MethodDescriptions describe the same U.Method in a given U.BoundedContext iff, for all admissible inputs and conditions recognized by that context, they entail the same preconditions, guarantee the same postconditions/effects, and satisfy the same non‑functional bounds (allowing permitted non‑determinism). Internal control‑flow/search details may differ.
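
To make the criterion tangible, a small hedged example: two MethodDescriptions of “sort ascending”, one imperative and one functional, that a context would judge to describe the same Method because they entail the same pre/postconditions for all admissible inputs:

```python
from typing import List

def sort_imperative(xs: List[int]) -> List[int]:
    """Imperative spec: insertion sort over a copy (explicit control flow)."""
    out = list(xs)
    for i in range(1, len(out)):
        j = i
        while j > 0 and out[j - 1] > out[j]:
            out[j - 1], out[j] = out[j], out[j - 1]
            j -= 1
    return out

def sort_functional(xs: List[int]) -> List[int]:
    """Functional spec: same precondition (finite int list), same postcondition
    (ascending permutation); internal control/search differences are irrelevant."""
    return sorted(xs)

assert sort_imperative([3, 1, 2]) == sort_functional([3, 1, 2]) == [1, 2, 3]
```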
| You have in your hand… | In FPF it is… | Why |
|---|---|---|
| A flowchart/BPMN/SOP text | U.MethodDescription (Episteme) | A description on a carrier. |
| A git repo with code | U.MethodDescription (Episteme) | Still a description (even if executable). The Method is the semantic “way” it denotes. |
| A log/run report with timestamps | U.Work | A concrete event that happened. |
| “The way we weld seams type W” | U.Method | The abstract how, represented by one or more specs and realized by many runs. |
Didactic rule: when referencing the idea of “how”, say Method; when referencing the document or code, say MethodDescription; when referencing the run, say Work.
When presenting a U.Method in a review, anchor it with these paradigm‑neutral elements (not a data schema):
- Interface — what is required/provided in general (inputs/outputs/types or resources/roles/ports).
- Preconditions — what must already hold (guards, invariants, Standard “requires”).
- Postconditions / Effects — what is guaranteed after successful enactment (Standard “ensures”).
- Non‑functional constraints — latency, accuracy, cost, safety envelope (ties to Capability thresholds).
- Failure modes — known failure classes and recoverability hints.
- Compositional hooks — whether this method composes serially/parallel/choice/iteration (see §4.5).
Methods compose into bigger methods; executions compose into bigger executions—do not conflate the two.
Method composition (design‑time): serial (·), parallel (‖), choice (|), iteration (*), refinement/substitution—yield new U.Methods.
Work composition (run‑time): the corresponding Work may split/merge/overlap differently due to scheduling, failures, or environment, yet it is still execution of the same Method.
Mapping advice: avoid naming run‑time artifacts inside the method definition (no “this thread”, “this person”); keep those in Role/Work.
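
A hedged sketch of design‑time composition as an operator algebra (the `Method` alias and the combinators are our illustration; the normative operators are listed in §4.5):

```python
from typing import Callable, TypeVar

T = TypeVar("T")
Method = Callable[[T], T]   # illustrative: a method as a typed transformation

def serial(a: Method, b: Method) -> Method:
    """A · B: do A then B, yielding a new Method, not a Work."""
    return lambda x: b(a(x))

def choice(a: Method, b: Method, guard: Callable[[T], bool]) -> Method:
    """A | B: take one branch under a guard/selector."""
    return lambda x: a(x) if guard(x) else b(x)

def iterate(a: Method, done: Callable[[T], bool]) -> Method:
    """A*: repeat A until the termination condition holds."""
    def run(x: T) -> T:
        while not done(x):
            x = a(x)
        return x
    return run
```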
Constructor Theory views a constructor as a physical entity that effects transformations. In FPF:
- A U.System with TransformerRole is the constructor (the performer).
- A U.Method is the abstract transformation type it enacts (semantic Standard).
- An algorithm artifact is a U.MethodDescription for an information‑transformation Method.
- A universal transformer generalizes the Turing machine by executing any U.Method described by a physically admissible U.MethodDescription (not only informational ones).
Thus, welding, milling, reagent mixing, and proof construction are all Methods; textbooks/code/derivations are their MethodDescriptions; Work are the concrete runs.
U.Method is local to a U.BoundedContext: terminology, admissible pre/postconditions, and non‑functional constraints are interpreted inside that context. If two teams or theories use the same name for different “ways of doing,” they are different Methods in different contexts unless bridged explicitly.
- Method: Etch_Al2O3.
- MethodDescription: SOP document; a PLC program that controls gas mix and timing.
- Enactment: Tool_42#TransformerRole:FabLine_A produces Work runs W‑101, W‑102….
- Notes: a step diagram exists, but a later functional spec may also exist (composition of gas‑flow functions). Both specs describe the same Method.
- Method: JS_Schedule_v4 (job‑shop scheduling).
- MethodDescription: a MILP model + solver configuration; documentation of constraints/objective.
- Enactment: PlannerService_v4#TransformerRole:PlantScheduling_2025 produces WorkRun_2025‑W32‑P1.
- Notes: no “steps” are visible at the method level; the solver’s search is internal. Still a U.Method.
- Method: Gauss_Elimination.
- MethodDescription: formal rules in a proof assistant; textbook chapter as a second spec.
- Enactment: CAS_Alpha#TransformerRole:MathLab_2025 generates a Work proof instance for a concrete matrix.
- Notes: the Episteme (spec) is not the ability (that belongs to the CAS system) and not the execution (the proof run).
- Who? Holder#Role:Context (role assignment).
- Can? Capability(holder) within Work scope/measures.
- How (in principle)? Method, described by MethodDescription.
- Did? Work (execution), linked by performedBy → RoleAssigning and isExecutionOf → MethodDescription.
Keep the four words apart and plans become dependable.
- Lenses tested: Arch, Prag, Did, Epist.
- Scope declaration: universal; semantics are context‑local via U.BoundedContext.
- Rationale: gives FPF a paradigm‑neutral “how” that bridges MethodDescription (knowledge on a carrier) and Work (execution), while staying independent of Role (assignment) and Capability (ability).
CC‑A3.1‑1 (Strict Distinction).
U.Method is the semantic “way of doing”. It is not a U.MethodDescription (artifact on a carrier), not a U.Work (dated execution), not a U.Role/assignment, and not a U.Service/promise.
CC‑A3.1‑2 (Context anchoring).
Every U.Method MUST be defined within a U.BoundedContext. Identity, admissible pre/postconditions, and non‑functional bounds are interpreted in that context.
CC‑A3.1‑3 (Specification linkage).
A U.Method SHOULD be described by ≥1 U.MethodDescription. For operational gating, at least one MethodDescription MUST be present and named. Multiple specs may coexist (imperative/functional/logic), see CC‑A3.1‑7.
CC‑A3.1‑4 (Assignment‑free).
A U.Method SHALL NOT hard‑code holders or assignments. If a step “needs a surgeon”, express that as a role requirement (to be satisfied via U.RoleAssignment at run time), not as a named person/unit inside the method.
CC‑A3.1‑5 (Runtime‑free).
A U.Method SHALL NOT contain schedule, calendar slots, or run IDs; those belong to U.WorkPlan (plans) and U.Work (executions). Methods are timeless.
CC‑A3.1‑6 (Interface & effects).
A U.Method MUST admit a context‑local statement of interface (inputs/outputs or ports/resources), preconditions, postconditions/effects, and (when relevant) non‑functional bounds. These anchor semantic identity beyond a particular notation.
CC‑A3.1‑7 (Multi‑spec semantic identity).
Two or more U.MethodDescription describe the same U.Method in a given context iff they entail the same admissible preconditions, guarantee the same effects, and satisfy the same non‑functional bounds for all inputs/conditions recognized by that context (allowing permitted non‑determinism). Internal control‑flow/search differences are irrelevant.
CC‑A3.1‑8 (Composition vs execution). Composition of Methods (design‑time) and composition of Work (run‑time) MUST be kept distinct. Method composition yields new Methods; Work composition yields composed executions. They may correspond but are not identical.
CC‑A3.1‑9 (Parameterization).
If a Method is parameterized, parameters are declared at the Method/MethodDescription level; concrete values are bound at U.Work creation. Avoid freezing parameter values inside the Method definition.
CC‑A3.1‑10 (Dynamics ≠ Method).
Laws/trajectories (U.Dynamics) are models of state evolution and SHALL NOT be labeled as Methods. A Method may rely on a Dynamics model (e.g., for control), but they remain distinct artifacts/concepts.
CC‑A3.1‑11 (Capability checks are orthogonal).
A step may impose capability thresholds; those thresholds are checked against the holder’s U.Capability independently of assignment and independently of the Method’s description.
CC‑A3.1‑12 (Constructor‑theoretic alignment).
Algorithm artifacts are U.MethodDescription for information‑transforming Methods. Physical Methods are equally valid (matter/energy transformations). A “universal transformer” is a system that can enact any physically admissible MethodDescription; this does not collapse Method into “algorithm.”
Operators (conceptual, context‑scoped):
- Serial composition (`·`) — do A then B → `A · B` is a new Method.
- Parallel composition (`‖`) — do A and B concurrently (with declared independence/joins).
- Choice (`|`) — do one of {A, B} under guard/selector.
- Iteration (`*`) — repeat A under a loop invariant/termination condition.
- Refinement (`≤ₘ`) — Method M′ preserves M’s interface/effects and strengthens preconditions or tightens non‑functional bounds (context‑defined lattice).
- Substitution — replace a Method factor with a semantically equivalent one (`M ≡ N` in context) without changing the whole’s Standard.
Design‑time laws (intuitive, not mechanized here):
- Associativity for `·` and, where admissible, for `‖`.
- Distributivity over guarded choice under context rules.
- Identity elements (e.g., `Skip`, which preserves state and satisfies neutral bounds).
- Monotonicity: refinement of a factor should not break the whole’s postconditions.
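The operator algebra can be pictured as a small recursive datatype. The sketch below (Python; all class names are hypothetical, not FPF tokens) shows design‑time composition yielding new Methods, and uses associativity of `·` to flatten nested serial compositions; it says nothing about run‑time interleaving:

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass(frozen=True)
class Atom:
    name: str                 # a primitive Method, e.g. "Etch"

@dataclass(frozen=True)
class Serial:                 # A · B — do A then B
    left: "MethodExpr"
    right: "MethodExpr"

@dataclass(frozen=True)
class Parallel:               # A ‖ B — declared-independent concurrency
    left: "MethodExpr"
    right: "MethodExpr"

@dataclass(frozen=True)
class Choice:                 # A | B — one of the two, under a guard
    guard: str
    left: "MethodExpr"
    right: "MethodExpr"

@dataclass(frozen=True)
class Iterate:                # A* — repeat under an invariant/termination condition
    body: "MethodExpr"
    invariant: str

MethodExpr = Union[Atom, Serial, Parallel, Choice, Iterate]

def serial_factors(m: MethodExpr) -> List[MethodExpr]:
    """Flatten nested Serial nodes; legitimate because `·` is associative."""
    if isinstance(m, Serial):
        return serial_factors(m.left) + serial_factors(m.right)
    return [m]

# (Etch · Inspect) · Package and Etch · (Inspect · Package) denote the same Method:
a = Serial(Serial(Atom("Etch"), Atom("Inspect")), Atom("Package"))
b = Serial(Atom("Etch"), Serial(Atom("Inspect"), Atom("Package")))
assert serial_factors(a) == serial_factors(b)
```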
Run‑time mapping (do not conflate):
U.Work instances of A · B may interleave differently due to scheduling or failure‑handling and still be executions of A · B. The mapping is “execution semantics,” not part of Method mereology.
- Roles (assignment). Steps stipulate role kinds (e.g., `IncisionOperatorRole`), not people. At run time, `U.Work` references a `U.RoleAssignment` that satisfies the role kind.
- Capability (ability). Steps may require thresholds (e.g., “precision ≤ 0.2 mm”). These are checked against the holder’s `U.Capability` in the context/envelope.
- Work (execution). Each run records `isExecutionOf → MethodDescription` (the spec used) and `performedBy → RoleAssigning`. Logs, resources, and timestamps live here.
- Dynamics (laws/models). Methods may cite or assume a Dynamics model; runs may attach traces that are explained by that model. Do not label the model itself as the Method.
- Spec = Method. “The BPMN is the Method.” → The BPMN is a MethodDescription; the Method is the semantic way it denotes.
- Run = Method. “Yesterday’s process is our Method.” → Yesterday’s run is Work.
- Role leakage. “Step 3 is done by Alice.” → Step 3 requires `SurgeonRole`; Alice may be assigned via RoleAssigning.
- Schedule leakage. “Run at 02:00 daily” inside the Method. → This belongs to WorkPlan; Methods are timeless.
- BoM entanglement. Putting parts/assemblies inside Method definition. → Structure stays in PBS/SBS; Method references interfaces/resources, not a BoM.
- Algorithm‑only bias. Declaring that only code counts as a Method. → Physical transformations (welding, mixing) are Methods too; their SOPs/parameters are MethodDescriptions.
- Hard‑coding capability. Baking “≤ 0.2 mm” into a role name or Method name. → Keep thresholds on steps; capability lives on the holder.
- Rename wisely. Where texts say “process/method” but mean a diagram or code repo, label it MethodDescription; where they mean the abstract “how,” label it Method.
- Extract assignments. Replace named people/units in specs with role kinds; enforce assignments via RoleAssigning at run time.
- Pull time out. Move calendars/schedules from specs into WorkPlan.
- Parameter hygiene. Declare parameters at Method/MethodDescription; bind values in Work.
- Equivalence notes. When two specs are intended as the same Method, write an equivalence note in the context (pre/post/bounds parity).
| Benefits | Trade‑offs / mitigations |
|---|---|
| Clarity across paradigms. Methods are first‑class regardless of notation; teams stop arguing step‑vs‑functional. | One more name to learn. Use the quick grammar card; it pays off fast. |
| Reuse without personnel lock‑in. assignment moves to RoleAssigning; Methods remain portable. | Extra role tables. Keep role‑kind lists short and context‑local. |
| Robust audits. Logs are Work, specs are MethodDescription, Standards are Method; no more “we thought the diagram was the run.” | Discipline needed. Enforce the three‑way split in reviews. |
| Constructor‑theoretic coherence. Physical and informational transformations are peers. | Cultural shift. Not every team is used to seeing SOPs and code as the same class (MethodDescription). |
- Builds on: A.1 Holonic Foundation; A.1.1 `U.BoundedContext`; A.2 `U.Role`; A.2.1 `U.RoleAssignment`; A.2.2 `U.Capability`.
- Coordinates with: A.3 (role masks for transformers/constructors/observers); A.15 (Role–Method–Work Alignment); B.1 Γ (aggregation) for method families vs assembly of systems.
- Informs: `U.WorkPlan` [D] (plans reference Methods they schedule); `U.Service` [D] (promises cite Methods as delivery means); `U.Dynamics` [D] (models that Methods may assume).
- Method / MethodDescription / Work = how in principle / how it is written / how it went this time.
- Four‑slot grammar: Who? → RoleAssigning. Can? → Capability. How? → Method (via MethodDescription). Did? → Work.
- Design‑time vs run‑time: Composition of Methods ≠ composition of Work.
- No steps required: Functional, logical, and hybrid MethodDescriptions are first‑class.
- Keep time and people out: Schedules → WorkPlan; assignees → RoleAssigning.
Projects need a stable way to express “how it is written”—the recipe, code, SOP, rule set, or formal proof—without confusing it with:
- the semantic “way of doing” (that is `U.Method`),
- the assignment (that is `U.RoleAssignment`),
- the ability (that is `U.Capability`),
- the execution (that is `U.Work`), or
- the calendar plan (that is `U.WorkPlan`).
U.MethodDescription gives this anchor. It treats algorithms, programs, proofs, SOPs, BPMN diagrams, solver models, playbooks as one class of epistemes: knowledge on a carrier that describes a Method. This unifies software and “paper” procedures and lets teams switch notations without breaking the model.
- Spec/run conflation. A flowchart or code is mistaken for the run; audits and SLOs become unreliable.
- Who/time leakage. People and calendars creep into the recipe; reuse and staffing agility die.
- Step‑only bias. Functional or logical styles are treated as “not real methods”; designs get contorted into faux steps.
- Algorithm‑centrism. Only code is considered “the method”, leaving SOPs and scientific procedures second‑class.
- Structure entanglement. BoM/PBS elements end up inside the recipe; method and product structure tangle.
- Unstated equivalence. Two specs intended to mean “the same method” are not declared equivalent; teams fork semantics by accident.
| Force | Tension we resolve |
|---|---|
| Representation vs. semantics | Many notations, one meaning: specs may differ, method stays one. |
| Universality vs. domain idioms | SOPs, code, solver models, proofs—all first‑class, yet domain terms remain local. |
| Timelessness vs. operability | Specs are timeless, but must be precise enough to drive execution and audit. |
| Reusability vs. constraints | Specs should declare role kinds, capabilities, safety bounds—without baking in people or calendars. |
| Evolvability vs. identity | Specs change; we need a way to evolve them without losing the method’s identity or history. |
U.MethodDescription is a U.Episteme that describes a U.Method in a concrete representation (text, code, diagram, model). It is knowledge on a carrier that can be reviewed and validated; at run‑time a U.System uses it to execute the U.Method as U.Work under a U.RoleAssignment.
Strict Distinction (memory aid): Method = how in principle (semantic Standard). MethodDescription = how it is written (artifact/description). Work = how it went this time (dated execution).
U.MethodDescription does not privilege any single notation. Typical forms include (non‑exhaustive):
- Imperative Spec — SOP, BPMN/flowchart, PLC ladder, shell/pipeline scripts.
- Functional Spec — compositions of pure functions, typed pipelines, category‑style combinators.
- Logical/Constraint Spec — rules/goal sets, SAT/SMT/MILP models, theorem‑prover scripts.
- Statistical/ML Spec — model definitions, training/evaluation procedures, inference pipelines.
- Reactive/Event‑driven Spec — statecharts, observers/triggers, stream/CEP rules.
- Hybrid Spec — mixtures (e.g., imperative orchestration calling solver kernels).
Same Method, different MethodDescriptions. In a single U.BoundedContext, several MethodDescriptions may describe the same U.Method if they entail the same preconditions, guarantee the same effects, and meet the same non‑functional bounds (cf. A.3.1).
Not a schema—these are content prompts for reviewers:
- Purpose & Name of the Method it describes (link to `U.Method`).
- Interface/ports (inputs/outputs/resources/Standards) in the context’s vocabulary.
- Preconditions (guards, invariants, required states).
- Postconditions / Effects (what is guaranteed upon success).
- Non‑functional constraints (latency, precision, cost, safety envelope).
- Role requirements for enactment (kinds, not people)—to be satisfied at run time via `U.RoleAssignment`.
- Capability thresholds the performer must meet (checked against `U.Capability` of the holder).
- Failure semantics (detectable failures, compensations, rollback/forward strategies).
- Compositional hooks (how this spec composes: serial/parallel/choice/iteration), without embedding calendars.
- Parameter declarations (what may vary per run; values bound at `U.Work` creation).
Didactic guardrail: a MethodDescription does not embed a schedule, assignees, or BoM. Calendars → `U.WorkPlan`; people/units → `U.RoleAssignment`; product structure → PBS/SBS.
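For reviewers who prefer a concrete picture, the prompts above can be collected into a simple record. This is a didactic sketch (Python; the field names paraphrase the prompts and are not a normative schema); note the deliberate absence of schedule, assignee, and BoM fields:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MethodDescription:
    """Knowledge on a carrier that describes a Method: not the run, not the plan."""
    describes_method: str                   # link to the U.Method, e.g. "Etch_Al2O3"
    context: str                            # U.BoundedContext whose vocabulary applies
    interface: Dict[str, str]               # ports/inputs/outputs in context vocabulary
    preconditions: List[str]                # guards, invariants, required states
    postconditions: List[str]               # what is guaranteed upon success
    nonfunctional: Dict[str, str]           # latency, precision, cost, safety envelope
    role_requirements: List[str]            # role kinds only; bound via U.RoleAssignment
    capability_thresholds: Dict[str, str]   # checked against the holder's U.Capability
    failure_semantics: List[str]            # detectable failures, compensations, rollback
    parameters: Dict[str, str] = field(default_factory=dict)  # values bound at U.Work

sop_etch_v7 = MethodDescription(
    describes_method="Etch_Al2O3",
    context="Fab_A",
    interface={"in": "wafer", "out": "etched wafer"},
    preconditions=["chamber leak-checked"],
    postconditions=["Al2O3 layer removed to target depth"],
    nonfunctional={"depth tolerance": "declared threshold"},
    role_requirements=["EtchOperatorRole"],
    capability_thresholds={"gas-control precision": "<= declared threshold"},
    failure_semantics=["abort on over-temperature; purge and re-qualify"],
    parameters={"etch_time_s": "declared range; bound per run"},
)
```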
Being an Episteme, a MethodDescription may itself play epistemic roles via U.RoleAssignment in a context (classification, not action), e.g.:
- `ApprovedProcedureRole`, `RegulatedProcedureRole`, `SafetyCriticalProcedureRole`, `De‑factoStandardRole`.
- These do not make the spec an actor; they classify its status within the context (who may use it, in which settings).
In the constructor‑theoretic reading used by FPF:
- Algorithms, programs, solver models, proofs are all `U.MethodDescription`—descriptions of Methods that transform information.
- SOPs, control recipes, lab protocols are `U.MethodDescription`—descriptions of Methods that transform matter/energy.
- A universal transformer (a system with sufficient capability) enacts any physically admissible MethodDescription—not only informational ones.
This keeps software and “wet lab” on equal footing.
| You are holding… | It is… | Why |
|---|---|---|
| A BPMN diagram or SOP | `U.MethodDescription` | A description on a carrier. |
| A git repo or compiled binary | `U.MethodDescription` | Still a description (even if executable). |
| “The way we do X in principle” | `U.Method` | Semantic Standard beyond any single notation. |
| A run log with timestamps | `U.Work` | A dated execution event. |
| A role description (“surgeon”, “planner”) | `U.Role` / `U.RoleAssignment` | Assignment, not recipe. |
| “Can achieve ±0.2 mm” | `U.Capability` | Ability of a holder, not a spec. |
| A calendar for next week’s runs | `U.WorkPlan` | Plan/schedule, not a recipe. |
| A state‑transition law | `U.Dynamics` | Model of evolution, not a method description. |
- Method: `Etch_Al2O3`.
- MethodDescription: `SOP_Etch_v7.pdf` + PLC ladder file.
- Role requirements: `EtchOperatorRole`; Capability: gas‑control precision ≤ threshold.
- Execution: `Tool_42#TransformerRole:Fab_A` enacts the spec → Work runs W‑143…W‑155.

- Method: `JS_Schedule_v4`.
- MethodDescription: MILP model + solver config; admissible solution definition.
- Execution: `PlannerService_v4#TransformerRole:Plant_2025` produces WorkRun_2025‑W32‑P1.

- Method: `AcuteAppendicitis_Triage`.
- MethodDescription: clinical decision rule set; Epistemic Role: `RegulatedProcedureRole:Hospital_Context`.
- Execution: `ER_Team#TransformerRole:ER_Shift` enacts the spec on a case → Work visit V‑8842.
- Lenses tested: `Did`, `Prag`, `Arch`, `Epist`.
- Scope declaration: Universal; semantics are context‑local via `U.BoundedContext`.
- Rationale: Elevates all procedural artifacts—code, SOPs, proofs, models—to a single class, avoiding algorithm‑centrism and step‑only bias. Keeps the strict split among Method / MethodDescription / Work / Role / Capability.
CC‑A3.2‑1 (Episteme status).
U.MethodDescription IS a U.Episteme (knowledge on a carrier). It is not a U.Method (semantic way), not a U.Work (execution), not a U.Role/RoleAssigning (assignment), not a U.WorkPlan (schedule), and not PBS/SBS content.
CC‑A3.2‑2 (Context anchoring).
Every U.MethodDescription MUST be interpreted within a U.BoundedContext. Names, Standards, and admissible non‑functional bounds are local to that context.
CC‑A3.2‑3 (Method linkage).
A U.MethodDescription MUST declare the U.Method it describes. Multiple MethodDescriptions MAY describe the same Method (see CC‑A3.2‑8).
CC‑A3.2‑4 (Assignment/time‑free).
A MethodDescription SHALL NOT embed assignees, org units, or calendars. People/units are bound via U.RoleAssignment at run time; calendars belong to U.WorkPlan.
CC‑A3.2‑5 (Structure‑free). BoM/PBS/SBS artifacts SHALL NOT be embedded in MethodDescriptions. Reference interfaces/resources and constraints instead of listing parts/assemblies.
CC‑A3.2‑6 (Role and capability requirements).
A MethodDescription MAY state role kinds and capability thresholds required for enactment. These are requirements, not bindings. They are checked at run time against U.RoleAssignment and U.Capability.
CC‑A3.2‑7 (Parameterization).
Parameters MUST be declared in the Method/MethodDescription; concrete values are bound when creating U.Work. Default values in a spec are allowed but SHALL NOT force a schedule or assignee.
CC‑A3.2‑8 (Semantic equivalence).
Two MethodDescriptions describe the same U.Method in a given context iff they entail the same preconditions, guarantee the same postconditions/effects, and satisfy the same non‑functional bounds for all admissible inputs/conditions of that context (per A.3.1 CC‑A3.1‑7). Differences in control flow, search, or notation do not break equivalence.
CC‑A3.2‑9 (Refinement).
Spec₂ refines Spec₁ for the same Method iff it preserves the interface, does not weaken postconditions/effects, and keeps non‑functional bounds equal or tighter under equal or stronger preconditions. Declare refinement explicitly in the context.
CC‑A3.2‑10 (Compatibility claims). Claims such as “sound but incomplete” or “complete but potentially unsound” relative to another MethodDescription MUST be stated explicitly and scoped to the context (e.g., solver approximations).
CC‑A3.2‑11 (Executable specs). Executability does not change status: an executable artifact (program, script) is still a MethodDescription. Its runs are Work; its semantics are the Method it denotes.
CC‑A3.2‑12 (Epistemic roles via U.RoleAssignment).
A MethodDescription MAY play epistemic roles via U.RoleAssignment (e.g., ApprovedProcedureRole, RegulatedProcedureRole) that classify its status. Such bindings do not make the spec an actor.
CC‑A3.2‑13 (Non‑determinism declaration). If a MethodDescription permits non‑determinism (e.g., search/optimization), the space of admissible outcomes and acceptance criteria MUST be stated (so that Work can be judged).
CC‑A3.2‑14 (Bridging across contexts).
If two contexts use different MethodDescriptions for “the same‑named way,” an explicit Bridge (U.Alignment) SHOULD be provided to map terms/assumptions. Do not assume cross‑context identity by name alone.
Keep two worlds separate:
- Method composition (design‑time semantic): combines Methods into new Methods (A.3.1 §9).
- MethodDescription mereology (epistemic): combines documents/code/models into larger spec artifacts. This is about parts of the description, not about the semantic method algebra.
Epistemic part relations (illustrative):
- `ConstituentOf` — a chapter/module/snippet is a constituent of a larger spec.
- `Imports/Uses` — this spec reuses a library/rule set.
- `VariantOf` — this spec is a variant (e.g., for different equipment) with declared deltas.
- `RepresentationOf` — this visual diagram is a representation of the textual rule set.
Didactic rule: Do not infer that a spec with two modules means a Method with “two steps.” Modules are parts of the description, not necessarily steps of the Method.
Templates. A MethodDescription may serve as a template with parameters (e.g., temperature set‑points, solver tolerances, objective weights).
Binding time.
- Declare parameters in the spec;
- Bind values when creating `U.Work` (or at an agreed “compile” stage);
- Keep bound values visible in the Work record (so runs can be compared).
Defaults and guards.
- Defaults are allowed; list valid ranges and guards (e.g., safety constraints).
- If a default has safety impact, state it explicitly as part of preconditions.
Variants.
- When variants differ only by parameter ranges → keep one Method with one MethodDescription template.
- When variants differ by Standard (effects/bounds) → either declare a refinement or introduce a distinct Method (context decision).
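A minimal sketch of the binding‑time discipline (Python; all names are illustrative): parameters and their guards are declared once at the spec level, and concrete values are bound, validated, and recorded when a Work run is created:

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass(frozen=True)
class ParamDecl:
    default: float
    valid_range: Tuple[float, float]   # guard: values outside are inadmissible

# Declared in the MethodDescription (template):
spec_params: Dict[str, ParamDecl] = {
    "temperature_C": ParamDecl(default=350.0, valid_range=(300.0, 400.0)),
    "solver_tol":    ParamDecl(default=1e-6, valid_range=(1e-9, 1e-3)),
}

def bind_for_work(overrides: Dict[str, float]) -> Dict[str, float]:
    """Bind concrete values at U.Work creation; keep them visible on the Work record."""
    bound = {}
    for name, decl in spec_params.items():
        value = overrides.get(name, decl.default)
        lo, hi = decl.valid_range
        if not (lo <= value <= hi):
            raise ValueError(f"{name}={value} violates guard [{lo}, {hi}]")
        bound[name] = value
    return bound

work_record_params = bind_for_work({"temperature_C": 365.0})  # recorded per run
```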
Within one context.
- Use semantic equivalence (CC‑A3.2‑8) to assert that BPMN vs code vs solver model are the same Method.
- Prefer a short equivalence note showing parity of pre/post/bounds.
Across contexts.
- Treat identity as not guaranteed.
- Provide Bridges (`U.Alignment`) that map terms, units, roles, and acceptance criteria.
- Be explicit if one spec is only sound (never returns forbidden outcomes) vs complete (can return all allowed outcomes).
Observational perspective (pragmatic). Two specs are observationally equivalent for stakeholders if, under declared conditions, they are indistinguishable by the acceptance tests of that context (even if internal strategies differ).
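Observational equivalence can be checked mechanically once the context's acceptance tests are explicit. A sketch (Python; the two specs stand in for, say, a BPMN enactment and a solver‑based one, and all names are hypothetical): two executables are treated as the same Method for this context if every declared acceptance test gives the same verdict on both, regardless of internal strategy.

```python
import random
from typing import Callable, Dict, Iterable, List

Spec = Callable[[Dict], Dict]                   # stand-in: run a spec on an input case
AcceptanceTest = Callable[[Dict, Dict], bool]   # (input, output) -> pass?

def observationally_equivalent(spec_a: Spec, spec_b: Spec,
                               cases: Iterable[Dict],
                               tests: List[AcceptanceTest]) -> bool:
    """Same verdicts on all declared cases/tests => equivalent for this context."""
    for case in cases:
        out_a, out_b = spec_a(case), spec_b(case)
        for test in tests:
            if test(case, out_a) != test(case, out_b):
                return False
    return True

# Two internally different implementations denote the same "Sort" Method:
stepwise = lambda c: {"xs": sorted(c["xs"])}    # pretend: imperative step diagram
solver   = lambda c: {"xs": sorted(c["xs"])}    # pretend: declarative solver model
cases = [{"xs": random.sample(range(100), 10)} for _ in range(20)]
is_sorted = lambda c, out: out["xs"] == sorted(c["xs"])
assert observationally_equivalent(stepwise, solver, cases, [is_sorted])
```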
- Spec = run. “Yesterday’s process log is our spec.” → The log is Work; write a MethodDescription and link runs to it.
- Who/time in the spec. “Step 3 by Alice at 02:00 daily.” → Use RoleAssigning at run time; schedule via WorkPlan.
- Stuffing BoM. Listing parts/assemblies inside the spec. → Reference interfaces/resources; keep PBS/SBS separate.
- Algorithm‑only bias. Treating code as “real spec” and SOPs as “notes.” → Both are MethodDescription; judge by Standards, not by format.
- Hiding non‑determinism. Solver model with no acceptance criteria. → Declare admissible outcome set and tests.
- Silent parameter capture. Hard‑coding values without declaring parameters. → Declare parameters with ranges; bind at Work creation.
- Undeclared variant drift. Copy‑pasting specs and tweaking silently. → Use VariantOf with stated deltas or declare a refinement.
- Label the artifacts. Wherever a repo/diagram/document “is the process,” rename it MethodDescription and link it to a named Method.
- Extract people and calendars. Move all assignees to RoleAssigning and all schedules to WorkPlan.
- Introduce parameter blocks. Add a small “Parameters” section with ranges/defaults and safety guards.
- Write acceptance criteria. Especially for search/optimization or ML specs.
- Declare equivalence/refinement. Where two notations intend “the same way,” add an equivalence note; where the new one tightens bounds, declare refinement.
- Bridge domains. If two departments use different vocabularies, add a Bridge (`U.Alignment`) rather than forcing a single spec.
| Benefits | Trade‑offs / mitigations |
|---|---|
| One class for all recipes. SOPs, code, models, proofs become peers; teams can choose the best notation. | A bit more ceremony. You name the Method and the MethodDescription separately; the payoff is clarity. |
| Cleaner audits. Specs vs runs vs assignments vs abilities never mix. | Discipline required. Keep schedules and people out of specs. |
| Easier reuse and substitution. Equivalence/refinement rules enable swapping notations without semantic drift. | Equivalence is a claim. Back it with short acceptance tests. |
| Cross‑domain coherence. Bridges allow controlled translation between contexts. | Bridge maintenance. Someone owns the mapping; keep it short and focused. |
- Builds on: A.3.1 `U.Method` (the semantic way it describes); A.1.1 `U.BoundedContext`.
- Coordinates with: A.2 `U.Role`, A.2.1 `U.RoleAssignment` (who enacts it); A.2.2 `U.Capability` (ability thresholds); A.15 Role–Method–Work (linking `isExecutionOf` to runs).
- Informs: `U.WorkPlan` (plans reference MethodDescriptions); `U.Dynamics` (models that specs may assume); Epistemic Role patterns (status of specs via RoleStateGraph + State Assertion).
- Lexical guards: E.10.y L‑PROC (do not call MethodDescription “process” when you mean Work/WorkPlan); E.10.x L‑FUNC (avoid “function/functionality” confusion).
- Spec ≠ Method ≠ Work. Written recipe ≠ semantic way ≠ dated execution.
- Keep people/time out. Assignees → RoleAssigning; schedules → WorkPlan.
- Declare parameters & acceptance. Bind values at Work; state how success is judged.
- Same method, different specs. BPMN/code/solver can be equivalent if pre/post/bounds match.
- Bridge, do not blur. Cross‑team/domain differences go through `U.Alignment`, not wishful thinking.
Teams need one place to say how a thing changes. Physicists call this “dynamics” (equations of motion, state‑transition maps). In IT and enterprise change, we often talk about evolution of characteristics (latency, cost, reliability, compliance, architectural fitness) across time. In knowledge work, KD‑CAL (knowledge dynamics) reasons about how the status of claims shifts as evidence arrives. All these are the same modeling need: a context‑local description of state space and allowed transitions.
FPF already separates:
- what a holon is (structure, PBS/SBS),
- how we act (Method/MethodDescription, Work),
- what we promise (Service).
What is missing without U.Dynamics is the law of change—the model that tells us how states evolve with or without our interventions.
Intuition: Method tells an agent what to do; Dynamics tells everyone how the world (or a model of it) changes when something happens (or even when nothing happens).
Lexical note. Terms like process and thermodynamic process are mapped by L‑PROC:
- the recipe is `U.Method`/`MethodDescription`,
- the dated run is `U.Work`,
- the law/trajectory model is `U.Dynamics`.
Without a first‑class U.Dynamics, models suffer predictable failures:
- Recipe = Law. Teams put the procedure (Method/MethodDescription) where the state law should be, so simulations and predictions become impossible to compare with reality.
- Run = Law. Logs of Work are mistaken for dynamics; past events are treated as if they defined what must happen.
- No state space. Discussions jump between metrics (latency! throughput!) without an explicit characteristic space or invariants, so “improvements” cannot be reasoned about.
- Domain lock‑in. “Dynamics” is left to domain vocabularies (physics, control, finance), losing a trans‑disciplinary way to speak about change in a single kernel.
| Force | Tension |
|---|---|
| Universality vs. richness | One kernel notion must cover ODE/PDE, Markov chains, queues, discrete events, and enterprise “fitness characteristics”. |
| Model vs. reality | A law must be design‑time (an Episteme), yet judged by run‑time evidence (Work). |
| Continuous vs. discrete vs. hybrid | Different time bases and update rules must coexist. |
| Open vs. closed systems | Exogenous inputs (control/disturbances) may be explicit or implicit. |
| Predictive use vs. diagnostic use | The same dynamics can guide planning or explain incidents; interfaces must support both. |
Definition (normative).
Within a U.BoundedContext, U.Dynamics is a U.Episteme that specifies a state space and a state‑transition law (deterministic or stochastic, continuous/discrete/hybrid) for one or more holons, possibly under exogenous inputs and constraints. It does not prescribe what an agent should do (that is U.Method/MethodDescription) and is not the dated evolution itself (that is U.Work evidence).
- Type: `U.Episteme` (design‑time model/law on a carrier).
- Orientation: descriptive/predictive about how states evolve; can be used by Methods but remains separate from them.
- Judged by: conformance of observed Work‑derived traces to the law and invariants.
U.Dynamics {
context : U.BoundedContext, // where the model’s meaning and units are defined
stateSpace : CharacteristicSpace, // explicit characteristics & units; may include topology/geometry
transitionLaw : Episteme, // equations/relations/kernels/transition matrices/rules
timeBase : {continuous|discrete|hybrid},
stochasticity? : {deterministic|stochastic}, // incl. noise/likelihood model if stochastic
inputs? : P(Characteristic), // control/disturbances/environmental drivers
observation? : Episteme, // measurement/observation map from state to observables
constraints? : Episteme, // invariants/safety envelopes/guards
validity? : Conditions, // operating region, approximations, version, timespan
calibration? : Episteme // parameter identification / priors
}
- `stateSpace` uses FPF characteristics, so we can talk about architecture fitness (e.g., latency, MTBF, cost) just like temperature/pressure/volume in physics.
- `transitionLaw` is paradigm‑agnostic: ODE/PDE, finite‑state relation, Petri net firing, queueing kernel, Bayesian update, etc.
- `observation` separates what exists from what we measure (key for monitoring/assurance).
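To ground the schema, here is one possible instantiation as a sketch (Python; a hypothetical discrete‑time, stochastic dynamics over a two‑characteristic state space, with an observation map and a safety constraint; none of the names are normative):

```python
import random

# Sketch of one U.Dynamics instance (illustrative only):
# context: "PlantScheduling_2025"; timeBase: discrete; stochasticity: stochastic.
state_space = {"latency_ms": "ms (continuous, >= 0)", "backlog": "jobs (integer, >= 0)"}

def transition_law(state, inputs, noise_sigma=2.0):
    """Discrete-time map: backlog drains at the service rate; latency tracks backlog."""
    backlog = max(0, state["backlog"] + inputs["arrivals"] - inputs["service_rate"])
    latency = 5.0 + 1.5 * backlog + random.gauss(0.0, noise_sigma)  # declared noise model
    return {"latency_ms": max(0.0, latency), "backlog": backlog}

def observation(state):
    """Observation map: latency is measured directly; backlog is not observed here."""
    return {"latency_ms": state["latency_ms"]}

constraints = lambda state: state["latency_ms"] <= 250.0   # safety envelope
validity = "valid for arrivals <= 50 jobs/tick; calibration version 2025-W30"

# One propagation step:
state = {"latency_ms": 20.0, "backlog": 4}
state = transition_law(state, {"arrivals": 6, "service_rate": 5})
print(observation(state), constraints(state))
```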
- Not a Method/MethodDescription: no imperative steps or prescriptions.
- Not Work: no timestamps/resources attached; evidence lives on `U.Work`.
- Not a Service: no consumer promise; dynamics may underpin service SLOs but does not define the promise.
- Not PBS/SBS: do not place dynamics inside structural BoMs.
- Design‑time: Methods may reference Dynamics for planning/control (e.g., MPC uses a plant model). Services may derive acceptance targets from Dynamics (e.g., queueing predictions → SLO).
- Run‑time: Work produces state samples/telemetry; applying the observation map yields traces. Conformance/violation is decided by comparing traces with constraints and predictions from the transition law. Updates to model parameters flow via calibration (design‑time again).
Memory hook: Method decides, Dynamics predicts, Work reveals.
When predicted coordinates (from a dynamics model) are used for comparison or gating, one of the following MUST hold:
- a fresh observation is available for the gate’s window; or
- the applied flow/map `Φ_{Δt}` is proven non‑expansive (Lipschitz ≤ 1) under the declared distance overlay (see § 5.1.7), and it commutes with the invariantization step (§ 5.1.6) — i.e., `Quot/Fix_g ∘ Φ_{Δt} = Φ_{Δt} ∘ Quot/Fix_g` on the domain of use.
If neither condition is satisfied, using prediction for gating is forbidden; the system MUST fall back to observation. Any use of Φ_{Δt} SHALL declare its validity window (range, Δt).
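The non‑expansiveness condition can at least be probed empirically before the required proof is attempted. A sketch (Python; `phi`, the distance, and the state sample are placeholders): estimate the Lipschitz constant from sampled state pairs; an estimate above 1 refutes non‑expansiveness, while an estimate below 1 is merely suggestive and never substitutes for the proof the rule demands.

```python
import random

def estimate_lipschitz(phi, dist, sample_states, pairs=1000):
    """Empirical lower bound on the Lipschitz constant of phi under dist.
    If even this estimate exceeds 1, the map is certainly not non-expansive."""
    worst = 0.0
    for _ in range(pairs):
        x, y = random.sample(sample_states, 2)
        d_xy = dist(x, y)
        if d_xy > 0:
            worst = max(worst, dist(phi(x), phi(y)) / d_xy)
    return worst

# Placeholder flow and distance for a 1-D state (illustrative only):
phi = lambda x: 0.9 * x + 1.0                 # a contraction: Lipschitz constant 0.9
dist = lambda a, b: abs(a - b)
states = [random.uniform(-100.0, 100.0) for _ in range(200)]

L = estimate_lipschitz(phi, dist, states)
gating_by_prediction_plausible = L <= 1.0     # necessary evidence, not sufficient proof
```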
| Domain | Holon & State Space | Transition Law Example | Observation | Typical Questions |
|---|---|---|---|---|
| Process control | Reactor: {Temperature, Concentration} | Non‑linear ODE with disturbance term | Thermocouples, analyzers | Will we overshoot? What control horizon keeps safety constraints? |
| Reliability/ops | Service platform: {MTBF, MTTR, Backlog} | Birth–death/queueing model | Incident logs, uptime pings | Given load, what SLO is feasible? |
| Evolutionary architecture | System: {Latency, Cost, Coupling} | Discrete‑time map per release | Perf tests, bills | If we change X, how does latency trend next 3 sprints? |
| KD‑CAL (knowledge) | Claim: {Belief, Support} | Bayesian update rule | Evidence artifacts | How does confidence evolve as studies arrive? |
Key takeaway: one kernel object captures trajectories in a characteristic space, from thermodynamics to software quality and knowledge confidence.
CC‑A3.3‑1 (Type).
U.Dynamics IS a U.Episteme (design‑time model/law on a carrier). It is not a U.Method/MethodDescription, not U.Work, and not a structural part of any PBS/SBS.
CC‑A3.3‑2 (Context).
Every U.Dynamics MUST be declared inside a U.BoundedContext. Units, characteristic names, admissible regions, and time base are local to the context; cross‑context reuse requires a Bridge (U.Alignment).
CC‑A3.3‑3 (Explicit state space).
stateSpace MUST enumerate characteristics with units/scales (continuous/discrete/ordinal) and any topology/geometry needed for trajectories. Do not refer to informal “axes”.
CC‑A3.3‑4 (Transition law).
transitionLaw MUST specify a state‑transition relation/map/kernel suitable for the declared time base (continuous|discrete|hybrid) and stochasticity (deterministic or with a likelihood/noise model).
CC‑A3.3‑5 (Observation model).
If evidence from U.Work is to be checked against the law, an observation mapping MUST be provided (identity is acceptable only if explicitly stated). Sampling rate/granularity SHOULD be declared.
CC‑A3.3‑6 (Constraints & validity).
If safety/envelope constraints apply, they MUST be declared under constraints. Operating region, approximations, version, and timespan SHOULD be stated under validity.
CC‑A3.3‑7 (Separation from Method).
A U.Dynamics SHALL NOT prescribe imperative steps or responsibilities. Planning/control algorithms that use the dynamics belong to U.Method/MethodDescription.
CC‑A3.3‑8 (No actuals on Dynamics).
Resource/time actuals and telemetry MUST attach to U.Work. Calibration outcomes produce new versions of U.Dynamics; the law object itself carries no run‑time logs.
CC‑A3.3‑9 (Multi‑scale declaration).
If state is aggregated across parts or time, the aggregation policy (Γ_time, Γ_work, averaging vs. sum vs. percentile) MUST be stated to prevent incoherent comparisons.
CC‑A3.3‑10 (Lexical hygiene). Ambiguous uses of process/processual (laws vs. runs vs. recipes) MUST be resolved per L‑PROC/L‑ACT:
- law → `U.Dynamics`,
- recipe → `U.Method`/`MethodDescription`,
- run → `U.Work`.
CC‑A3.3‑11 (Link to Services—optional).
If Service SLOs are derived from a dynamics model, the Service SHOULD reference that U.Dynamics (A.2.3), but the Service remains the promise, not the law.
Let D be a U.Dynamics in context C. Let W be a set of U.Work records produced under C. Let obs_D(·) be the declared observation map for D.
- `trace(W, D) → Sequence<t, y>`: derive an ordered sequence of observed values `y` at times `t` by applying `obs_D` to Work/telemetry associated with `W`. (Not a kernel type; a derived artifact for analysis/assurance.)
- `inputs(W) → Series`: exogenous inputs/control signals recovered from Work metadata if the model declares `inputs`.
- `initialState(W, D) → x₀`: the assumed/estimated state at trace start (from Work context or a stated estimation rule).
- `predict(D, x₀, inputs?, horizon) → Trajectory`: propagate the law to obtain a predicted trajectory in the declared state space.
- `admissible(D, x) → bool`: test whether state `x` satisfies `constraints`.
- `reach(D, S₀, S₁, inputs?, horizon) → bool`: reachability — can states in `S₀` evolve into `S₁` under the law?
- `residuals(D, trace) → Series`: discrepancies between predicted and observed series under a stated alignment (point‑wise, windowed, distributional).
- `fits(D, trace, tol) → {pass|fail|partial}`: verdict under tolerance policy `tol` defined by the context (e.g., sup‑norm ≤ ε, percentile bands, likelihood threshold).
- `drift(D₁, D₂, domain) → Measure`: divergence between two model versions over a declared operating domain (e.g., max deviation of eigenvalues, KL between predictive distributions).
- `fits(D, trace, tol) = pass` ⇒ every sample lies in `admissible(D, ·)` unless the context explicitly permits out‑of‑envelope transients.
- If two traces are generated under identical `inputs` and initial conditions, recorded differences must be explainable by the declared stochasticity/noise model or flagged as violations.
Didactic hook: Dynamics predicts; Work reveals; Conformance compares.
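An end‑to‑end sketch of that triad (Python; the linear law, the tolerance policy, and the observed trace are all illustrative, not drawn from the kernel): propagate the law, compute residuals against the observed trace, and issue a `fits` verdict under a sup‑norm tolerance.

```python
def predict(x0, horizon, rate=0.8, floor=20.0):
    """Propagate a toy discrete-time law: x <- floor + rate * (x - floor)."""
    traj, x = [], x0
    for _ in range(horizon):
        x = floor + rate * (x - floor)
        traj.append(x)
    return traj

def residuals(predicted, observed):
    """Point-wise discrepancies between predicted and observed series."""
    return [o - p for p, o in zip(predicted, observed)]

def fits(predicted, observed, tol):
    """Verdict under a sup-norm tolerance policy (the policy is context-defined)."""
    res = residuals(predicted, observed)
    worst = max(abs(r) for r in res)
    if worst <= tol:
        return "pass"
    return "partial" if sum(abs(r) <= tol for r in res) >= len(res) // 2 else "fail"

observed_trace = [76.0, 65.2, 56.9, 49.4, 44.1]        # from trace(W, D), illustrative
predicted_trace = predict(x0=90.0, horizon=5)
print(fits(predicted_trace, observed_trace, tol=3.0))  # "pass" here; worst residual ~1.2
```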
- “Dynamics = procedure.” Control recipes/step graphs belong to `Method/MethodDescription`. Keep the law in `U.Dynamics`.
- “Telemetry = dynamics.” Logs are `Work` evidence. Build `trace(Work, D)` and compare to the law; do not store logs inside the law.
- “No state space.” KPI lists without an explicit `stateSpace` turn into dashboard folklore. Name characteristics with units and ranges.
- “Hard‑coding SLO inside the law.” Service targets are promises (`U.Service.acceptanceSpec`). Keep predictions and promises separate; link them.
- “Stuffing Dynamics into BoM.” A model is not a component. Leave PBS/SBS for structure.
- “One size fits all time base.” If parts of the system evolve on different clocks, declare `hybrid` and separate update rules.
- Name the changing things. Pick 3–7 characteristics that matter (physical or architectural). Declare `stateSpace` with units and ranges.
- Write the law you already use. Even if it is a queueing approximation or a simple ARIMA, put it under `transitionLaw` and state assumptions under `validity`.
- Separate recipe from law. Move control procedures to `Method/MethodDescription`; keep forecasting/plant equations in `U.Dynamics`.
- Wire evidence. Ensure production `Work` emits the measurements needed by `observation`. Build `trace(Work, D)`.
- Start conformance. Define a simple `tol` and compute `fits(D, trace, tol)` weekly. Raise issues on drift; version the model when calibrating.
- Link to promises (optional). If SLOs depend on the law, reference `U.Dynamics` from `U.Service` and derive targets transparently.
- For KD‑CAL. Treat belief/support as characteristics; declare a Bayesian/likelihood update in `transitionLaw`; evaluate conformance against evidence arrivals.
- Builds on: A.1.1 `U.BoundedContext` (local meaning/units); A.2 Role / A.2.1 RoleAssigning (agents that use the law); A.15.1 `U.Work` (run‑time evidence).
- Coordinates with: A.3.1 `U.Method` / A.3.2 `U.MethodDescription` (planning/control using the law); A.2.3 `U.Service` (promises informed by predictions); KD‑CAL (knowledge dynamics as a specialisation: belief‑update laws); Resrc‑CAL (cost/energy models as dynamics over resources).
- Constrained by lexical rules: E.10 L‑PROC (process disambiguation), L‑ACT (activity/action), L‑FUNC (function).
- Dynamics = Law of Change. A design‑time model of how states evolve.
- State space = Named characteristics with units. No vague “axes”.
- Method vs Dynamics. Method decides what we do; Dynamics predicts what will happen.
- Work = Evidence. Only Work has timestamps and resource actuals.
- Conformance = Prediction vs Trace. Fit, residuals, drift.
- Keep promises separate. Services are promises; Dynamics informs them but does not replace them.
Memory hook: Method decides · Dynamics predicts · Work reveals.
“A holon is born in design‑time, lives in run‑time, and is reborn when the world talks back.”
A holon’s blueprint and its lived reality are never identical for long. Pumps wear out, theories meet anomalous data, workflows face unanticipated load. FPF therefore requires a temporal framework that:
1. Physically grounds every modification (via the Transformer Principle, A 3).
2. Supports unbounded improvement cycles (P‑10 Open‑Ended Evolution).
3. Works identically for physical, epistemic, operational (method, work) and future holon flavours.
| Failure mode | Consequence |
|---|---|
| Blueprint ≡ Reality | “As‑built” discrepancies remain invisible; safety and validity claims become fiction. |
| Implicit magic updates | Versions overwrite each other; provenance chains snap. |
| Observer special‑case | Measurement treated as metaphysical rather than a normal, physically grounded transformation. |
| Force | Tension |
|---|---|
| Stability vs Change | Identify a holon across time ↔ allow radical redesigns. |
| Prediction vs Evidence | Plan with intended specs ↔ respond to real telemetry. |
| Parsimony vs Expressiveness | Keep the model lean ↔ respect the full lifecycle complexity. |
FPF assigns every holon state to one—and only one—of two temporal scopes:
| Scope | Symbol | Definition | Typical contents |
|---|---|---|---|
| Design‑Time | Tᴰ | Interval(s) during which the holon may be structurally altered by an external Transformer executing a `U.TransformationalMethod`. | Specs, CAD, theorem scripts, IaC SCRs. |
| Run‑Time | Tᴿ | Interval(s) during which the holon executes its own `OperationalMethods` and is assumed structurally stable (self‑maintenance allowed). | Telemetry, transaction logs, field data, physical wear. |
Temporal invariants
Tᴰ ∩ Tᴿ = ∅ (never overlap)
Tᴰ ∪ Tᴿ = worldline(holon) (cover full existence)
version(n+1) created only in Tᴰₙ (monotonic lineage)
A holon may repeat the cycle ad infinitum:
(H₀ in Tᴿ₀) → observe → Δspec in Tᴰ₁ → build → H₁ in Tᴿ₁ → …
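The temporal invariants above lend themselves to mechanical checking. A sketch (Python; the interval and version encodings are illustrative, not a kernel schema): scopes must alternate and tile the worldline without overlap, and versions must form a monotonic lineage created in design scopes.

```python
# Each scope is (kind, start, end) with end exclusive; kinds alternate "D"/"R".
worldline = [("D", 0, 3), ("R", 3, 10), ("D", 10, 12), ("R", 12, 20)]

def scopes_disjoint_and_covering(scopes):
    """Tᴰ ∩ Tᴿ = ∅ and the concatenated intervals tile the holon's worldline."""
    ordered = sorted(scopes, key=lambda s: s[1])
    kinds_alternate = all(a[0] != b[0] for a, b in zip(ordered, ordered[1:]))
    contiguous = all(a[2] == b[1] for a, b in zip(ordered, ordered[1:]))
    return kinds_alternate and contiguous

def monotonic_lineage(versions):
    """Each version is (created_in_scope, refines): created in a design scope and
    referencing exactly one predecessor (or None for the first version)."""
    for n, (created_in, refines) in enumerate(versions):
        if created_in != "D":
            return False
        if (n == 0 and refines is not None) or (n > 0 and refines != n - 1):
            return False
    return True

assert scopes_disjoint_and_covering(worldline)
assert monotonic_lineage([("D", None), ("D", 0), ("D", 1)])
```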
Observation itself is a transformation:
An External Transformer (U.System playing transformerRole ⊑ TransformerRole)
executes a measurement method whose output is an epistemic holon
containing observations. Thus the traditional “External Observer Pattern” collapses into
the universal external Transformer pattern.
| Phase | Pump‑v2 (`U.System`) | Proof‑v2 (`U.Episteme`) |
|---|---|---|
| Design‑Time | 3‑D CAD + G‑code; stress‑sim config. | Lean/Coq script of theorem; dependency graph. |
| Run‑Time | Pump circulates coolant under `OperatePump` method. | Theorem cited & reused; runtime is “being relied on”. |
| Run → Design loop | Sensor data shows cavitation; anomaly report produced by monitoring server (`transformerRole`). | New experiment contradicts corollary; lab apparatus + scientists act as `transformerRole`. |
| Design → Run loop | Engineers author Pump‑v3 spec; printer (`TransformerRole`) fabricates it. | Community revises proof; proof‑assistant (`TransformerRole`) verifies Proof‑v3. |
(Diagrammatic lineage table omitted for brevity but included in annex.)
| ID | Requirement | Purpose |
|---|---|---|
| CC‑A.4.1 | Every `U.Holon` MUST be tagged with its current temporal scope (Tᴰ or Tᴿ). | Eliminates blueprint/reality ambiguity. |
| CC‑A.4.2 | A transition from Tᴰ → Tᴿ SHALL be modeled as `executes(Transformer, U.TransformationalMethod)`. | Guarantees physical grounding of instantiation. |
| CC‑A.4.3 | A transition from Tᴿ → Tᴰ SHALL be modeled as `executes(transformerRole, U.TransformationalMethod)` producing an observational `U.Episteme`. | Ensures observation is treated as transformation. |
| CC‑A.4.4 | Tᴰ ∩ Tᴿ = ∅ and the concatenated intervals MUST equal the holon’s worldline. | Guards against illicit overlap. |
| CC‑A.4.5 | Each new design version MUST reference (`refinesVersion`) exactly one predecessor or declare `firstVersion = true`. | Enforces monotonic lineage for auditability. |
| Benefits | Trade‑offs / Mitigations |
|---|---|
| Audit‑Ready engineering workflow – Every state and change is explicitly typed, timed, and causally linked to a physical system/Transformer. | Additional metadata tagging; mitigated by templates in Authoring Guide (E 8). |
| Unified View of Build & Measure – Observation, test, simulation, maintenance, and fabrication all share one mechanism. | Requires modelers to think in terms of Transformers even for “passive” sensing; mitigated by role libraries (transformerRole, CalibratorRole, etc.). |
| Foundation for Learning Loops – Enables higher patterns (e.g., B 4 Canonical Evolution Loop, D 3 Trust Calculus) to reason over evidence accrual and version fitness, including self-modification. | None significant—temporal scoping is already needed for safety‑critical provenance. |
- Why separate scopes? Real‑world artefacts always exhibit an as‑intended versus as‑is gap. By formalising that gap, FPF prevents the silent assumption of perfect fidelity and allows quantified error (`U.Error`) to drive evolution.
- Why treat observation as transformation? Physics tells us measurement changes state (energy, information, even quantum collapse). Making the observer just another `Transformer` means: no special metaphysics, full energy/provenance accounting, seamless tie‑in with Constructor Theory (see A 3 Rationale §2).
- Why insist on open‑endedness? Perfect finality is unattainable outside mathematics; FPF therefore mandates that holons be improvable in principle, and this pattern encodes that mandate structurally: version n+1 is always possible.
- Why no overlap (Tᴰ ∩ Tᴿ = ∅)? The instant a holon is mutable (design) it ceases to be the “same” operational asset relied upon for guarantees. Overlap would break trust calculations and violate A.7 Strict Distinction.
This pattern therefore realises three core principles in concert:
- Temporal Duality – explicit tagging of states.
- Open‑Ended Evolution – guaranteed pathway for refinement.
- Ontological Parsimony – one mechanism (Transformer) for all state changes, avoiding specialised “observer” or “installer” types.
“Blueprints dream; instances speak. Evolution is the conversation between them.”
FPF’s ambition is to act as an “operating system for thought.” That ambition can only be realised if the framework:
- (i) remains stable and self‑consistent over multi‑decade timespans;
- (ii) invites, rather than resists, the continual influx of new disciplinary knowledge; and
- (iii) allows multiple, even competing, explanatory lenses to coexist without forcing a “winner‑takes‑all” unification.
Historically, grand “total” ontologies—Aristotle’s Categories, Carnap’s Logical Construction of the World, Bunge’s TOE—failed precisely because each tried to embed every domain’s primitives directly into a single monolith. Once the monolith cracked under domain pressure, the whole edifice became unmaintainable.
If FPF were to let domain‑specific primitives creep into its Kernel, two pathologies would follow:
| Pathology | Manifestation | Breach of Constitution |
|---|---|---|
| Kernel Bloat | Every new field (e.g. synthetic biology) adds bespoke `U.Types` → Core size explodes; review surface becomes unscalable. | Violates C‑5 Ontological Parsimony; erodes P‑1 Cognitive Elegance. |
| Conceptual Gridlock | Conflicting axioms (deterministic thermodynamics vs. indeterministic econ‑metrics) must fight for space in the same namespace. | Breaks C‑3 Cross‑Scale Consistency; triggers chronic DRR deadlock. |
A minimal, extensible design is therefore mandatory.
| Force | Tension |
|---|---|
| Stability vs. Evolvability | Immutable core needed for trust ↔ constant domain innovation needed for relevance. |
| Universality vs. Specificity | Single kernel language ↔ rich idioms for fields as diverse as robotics, jurisprudence, metabolomics. |
| Parsimony vs. Coverage | Few primitives keep reasoning elegant ↔ framework must still model energy budgets, epistemic uncertainty, agentic goals. |
FPF adopts a micro‑kernel hour‑glass architecture consisting of a strictly minimal core plus an infinite flat namespace of plug‑ins called architheories. (The formal plug‑in Standard is defined in A.6 Architheory Signature & Realisation.)
1. The Open‑Ended Kernel. The Kernel’s normative content is frozen to three buckets only:
- Foundational Ontology: `Entity`, `Holon`, `Boundary`, `Role`, design‑/run‑time, etc. (A‑cluster, Part A).
- Universal Reasoning Patterns: Γ‑aggregation, MHT, Trust calculus, Canonical evolution loop, etc. (B‑cluster, Part B).
- Ecosystem Standards: Guard‑Rails (E‑cluster) and the Architheory Signature schema (A.6).
Everything else—physics, logic operators beyond minimal MODAL, resource semantics, agent decision calculus—is expelled to architheories.
2. Architheory Layering. To manage this extensibility without creating chaos, FPF classifies all architheories into three mutually exclusive classes, each with a distinct role. This classification governs what an architheory is allowed to do.
| Class | Mnemonic | Conceptual Mandate |
|---|---|---|
| Calculus | CAL - The Builder | Introduces a new composite holon type and exactly one aggregation operator Γ_* that constructs such holons from parts. |
| Logic | LOG - The Reasoner | Adds rules of inference or proof patterns about existing holons. It cannot create new composite holons and thus exports no Γ_* operator. |
| Characterization | CHR - The Observer | Attaches metrics or descriptive properties to existing holons. It neither constructs nor infers new holons and exports no Γ_* operator. |
Each architheory (CAL / LOG / CHR):
- extends the Kernel by importing its primitives and exporting new, typed vocabularies;
- remains self‑contained—it must not mutate Kernel axioms (CC‑A.6.x);
- is versioned, compared, and substituted entirely via its Signature (public Standard) while permitting multiple Realizations (private axiom-sets).
Architheories therefore form the “fat top & bottom” of the hour‑glass:
┌──────────────────────────┐
│ Unlimited Domain CALs │ ← e.g. Resrc‑CAL, Agent‑CAL
├──────────────────────────┤
│ Core CAL / LOG / CHR │ ← Sys‑CAL, KD‑CAL, Method‑CAL …
╞════════ Kernel (Part A+B) ╡
│ Γ, MHT, Trust, etc. │
├──────────────────────────┤
│ Unlimited Tooling Real. │ ← simulators, proof assistants …
└──────────────────────────┘
| Element of the Pattern | Archetype 1 – `U.System` (industrial water‑pump) | Archetype 2 – `U.Episteme` (scientific theory of gravitation) |
|---|---|---|
| Kernel concepts used | `U.System`, `U.Holon`, `TransformerRole` | `U.Episteme`, `U.Holon`, `transformerRole` |
| Domain CAL that extends the Kernel | Sys‑CAL adds conservation laws, port semantics, resource/work hooks | KD‑CAL adds F‑G‑R characteristics, provenance graph, trust metrics |
| Resulting instance | A fully specified CAD model of the pump that can be aggregated by Γ_sys, analysed by LOG‑CAL, and costed by Resrc‑CAL – without ever mutating the Kernel | A fully formalised theory object that can be cited, aggregated, and challenged by KD‑CAL, validated by LOG‑CAL, scored by the Trust calculus – again without Kernel change |
This table demonstrates the hour‑glass architecture in action: Wide variety of concrete instances → narrow, stable Kernel neck → wide variety of analysis & tooling.
| ID | Requirement | Purpose |
|---|---|---|
| CC‑A.5.1 | The Conceptual Kernel MUST NOT declare any `U.Type` that is specific to a single scientific or engineering discipline. | Prevents kernel bloat; enforces Ontological Parsimony (C‑5). |
| CC‑A.5.2 | Every architheory MUST supply a `U.ArchitheorySignature` (see A.6) that lists all new types, relations, and invariants it introduces. | Enables plug‑in discoverability and versioned evolution. |
| CC‑A.5.3 | A normative pattern or invariant defined in one architheory MUST NOT override a Kernel pattern, but MAY refine it by additional constraints. | Preserves Kernel immutability while supporting specialisation. |
| CC‑A.5.4 | Dependency edges between architheories MUST point toward the Kernel (acyclic, upward) as required by the Unidirectional Dependency Guard‑Rail (E.5). | Prevents cyclic coupling and “middle‑layer” choke‑points. |
| CC‑A.5.5 | Every architheory MUST declare its classification as one of CAL, LOG, or CHR. Only a CAL may export a Γ_* operator. | Enforces a clear separation of concerns between constructing, reasoning, and describing. |
| Benefits | Trade‑offs / Mitigations |
|---|---|
| Kernel stability for decades: small, conceptually elegant nucleus rarely changes; archival citations remain valid. | Extra discipline for authors: every domain team must package work as a CAL/LOG/CHR plug‑in. Mitigation: E.8 style‑guide and pattern templates automate most boiler‑plate. |
| Unlimited, parallel innovation: biology, economics, quantum computing can all add CALs without waiting on a central committee. | Potential overlap of CALs: two teams might publish competing resource calculi. Mitigation: coexistence is allowed; the Trust layer lets users choose. |
| Clear “API” boundary: tool builders know the exact, minimal surface they must support – boosting interoperability. | — |
Micro‑kernels succeeded in operating‑system research because they separated immutable primitives (threads, IPC) from replaceable servers (file‑systems, network stacks). FPF adopts the same strategy:
- Immutable primitives → the Part A Kernel (holons, roles, transformer quartet, temporal scopes, constitutional C‑rules).
- Replaceable servers → architheories in Part C (each with its own calculus, logic, characterisation kit).
This delivers on P‑4 (Open‑Ended Kernel), P‑5 (Plugin Layering) and keeps the framework aligned with modern proof‑assistant ecosystems (Lean’s mathlib vs. core).
The “hour‑glass” brings two further advantages:
- Pluralism with auditability – rival CALs can coexist; the Kernel’s Trust pipeline (B.3) quantifies their evidence base.
- Future‑proofing – if a genuinely new substrate (e.g., quantum knowledge objects) emerges, it plugs in at the bottom layer without touching Part A.
- Instantiates: P‑4, P‑5, and relies on Guard‑Rails E.5 (especially Unidirectional Dependency).
- Provides Standard for: every entry in Part C; style enforced via Architheory Signature & Realization (A.6).
- Feeds: Trans‑disciplinary reasoning operators in Part B – Γ, MHT, Trust, Evolution Loop all treat each CAL uniformly through the Kernel neck.
“A stable neck sustains an ever‑growing hour‑glass.”
Status. Architectural pattern [A], kernel‑level and universal.
Placement. Part A (Kernel), before A.6 (“Architheory Signature & Realization”) and A.6.1 (“U.Mechanism”).
Builds on. E.8 (authoring order), E.10 LEX‑BUNDLE (registers, naming, stratification), E.10.D1 D.CTX (Context discipline).
Coordinates with. A.6 (architheory specialisation of signatures), A.6.1 (mechanism as law‑governed signature), Part F (Bridges & cross‑context transport; naming).
Conformance keywords: RFC 2119.
FPF already uses “signatures” to stabilise public promises of architheories and, via A.6.1, of mechanisms. But authors also need stable, minimal declarations for theories (LOG), methods (operational families), and even disciplines (regulated vocabularies). Without one universal notion of signature:
- similar constructs proliferate under incompatible names;
- readers cannot tell what is declared (intension & laws) versus what is implemented (specification);
- cross‑context reuse lacks a canonical place to state applicability and lawful vocabularies.
E.8 demands a single authoring voice and section order; E.10 demands lexical discipline across strata. A.6.0 provides the common kernel shape these patterns presuppose.
If each family (architheories, mechanisms, methods, disciplines) invents its own “signature”:
- Tight coupling. Private definitions leak as public standards, breaking substitutability.
- Lexical drift. The same surface label (e.g., scope, normalization) hides different laws.
- Scope opacity. Applicability (where the words mean what) remains implicit, violating D.CTX.
| Force | Tension |
|---|---|
| Universality vs. fitness | One shape must fit architheories, mechanisms, theories, methods, disciplines, without over‑committing to any one of them. |
| Intension vs. specification (I/D/S) | Signatures declare what and the laws (intension), not recipes or test harnesses (specification). |
| Simplicity vs. expressivity | Keep the kernel small while leaving normalized slots for specialisations (e.g., Γ‑export in A.6; Transport in A.6.1). |
| Locality vs. transport | Meaning is context‑local (D.CTX), yet cross‑context use must be explicit and auditable via Bridges without smuggling implementation. |
Definition. A U.Signature is a public, law‑governed declaration for a named Subject on a declared HostSpace that (i) introduces a vocabulary (types, relations, operators), (ii) states laws (axioms/invariants/guards) over that vocabulary, and (iii) records applicability (where and under which contextual assumptions the declarations hold). Dependencies (imports) are metadata governed by specialisations (e.g., A.6) and not part of the universal four‑row Block.
Naming discipline. The Subject MUST be a single‑sense noun phrase; avoid synonyms/aliases within the same Signature.
A U.Signature is conceptual: it contains no implementation, no packaging/CI metadata, and no Γ‑builders. Γ‑export, if any, is governed by A.6 and only for architheories with classification=CAL.
Every U.Signature SHALL present a four‑row conceptual block (names are universal; family‑specific aliases are mapped below):
1. HostSpace & Subject — the typed HostSpace (the value‑bearing space) and the Subject being governed (e.g., charts, context slices, orders, grammars).
2. Vocabulary — names and sorts of the public types / relations / operators this signature commits to.
3. Laws (Invariants/Guards) — equations, order/closure laws, admissibility constraints (no proofs here; only the law statements).
4. Applicability (Scope & Context) — conditions under which the laws are valid (bounded context, plane, stance, time notions). Applicability MUST bind a `U.BoundedContext` (D.CTX). Cross‑context use MUST NOT be implicit; if intended, name the Bridge (conceptual reference only). When numeric comparability is implied, bind legality to CG‑Spec/MM‑CHR (map‑then‑compare; lawful scales/units).
Mapping to existing families (normative aliases):
- A.6 (Architheory). Vocabulary ↔ Derivations; Laws ↔ Invariants; Applicability notes layer (CAL/LOG/CHR) and context; Γ‑export policy lives only in A.6.
- A.6.1 (Mechanism). HostSpace & Subject ↔ HostSpace/GovernedSubject; Vocabulary ↔ OpSig; Laws ↔ LawSet; Applicability carries GuardPolicy and a conceptual Transport clause (Bridges/CL named; Bridges per F.9; CL/penalties per B.3; CL^plane per C.2.1).
- Architheory View (A.6 add‑on). Specialisation A.6 adds an adjacent Imports/Derivations/Invariants/BelongsToAssurance view for pass interfaces.
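As a didactic illustration only (CC‑A.6.0‑2 forbids machine schemas in the Block itself), the four rows can be pictured as a record. A sketch (Python; all names are hypothetical, and the instance mirrors the Grammar‑of‑Motions example in the table below):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass(frozen=True)
class SignatureBlock:
    """Didactic picture of the four rows; not a normative machine schema."""
    host_space: str                # row 1: typed HostSpace (value-bearing space)
    subject: str                   # row 1: governed Subject (single-sense noun phrase)
    vocabulary: List[str]          # row 2: public types / relations / operators
    laws: List[str]                # row 3: law statements only (no proofs)
    context: str                   # row 4: the bound U.BoundedContext
    bridge: Optional[str] = None   # row 4: named Bridge, if cross-context use is intended

motion_grammar = SignatureBlock(
    host_space="U.System:TrajectorySpace",
    subject="MotionGrammar",
    vocabulary=["Pose", "Segment", "concat", "reverse", "sample"],
    laws=["closure of concat", "associativity", "time-monotone sampling"],
    context="industrial robotics",
    bridge=None,                   # cross-context transport not declared
)
```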
- I/D/S separation. A signature states intension and laws; Realizations (if any) carry specifications. Do not mix tutorial text or operational recipes into the Block.
- Context discipline. Bind Applicability to a `U.BoundedContext`. If cross‑context use is intended, name the crossing and reference the Bridge (Part F/B); A.6.0 does not prescribe CL ladders, CL^plane, Φ/Ψ tables, or penalty formulas.
- Stratification. Use LEX‑BUNDLE registers and strata; do not redefine Kernel names in lower strata (no cross‑bleed).
- Imports location. If your family requires an explicit imports list (e.g., A.6 Architheory), place it in the Signature header or the family‑specific view, not inside the universal four‑row Block.
- Token hygiene. Do not mint new `U.*` tokens inside a Signature without a DRR; prefer referencing existing Kernel/Architheory `U.Types`.
A.6.0 exposes three conceptual knobs; specialisations (A.6, A.6.1, method/discipline specs) may tighten them:
- Builder policy. Whether a signature may commit to a builder `Γ_*` is not decided here; A.6 governs this for architheories (`classification=CAL` only).
- Transport clause. If cross‑context/plane use is part of the design, the signature may declare a conceptual Transport clause; A.6.1 gives a concrete schema (Bridge, CL/CL^k/CL^plane — Bridges per F.9, penalties per B.3, CL^plane per C.2.1), but A.6.0 remains agnostic about penalty shapes.
- Morphisms. Families may define `SigMorph` (refinement, conservative extension, equivalence, quotient, product) to relate signatures; A.6.1 instantiates this for mechanisms.
| Quartet Element | U.System Example — Grammar of Motions | U.Episteme Example — Normalization Family |
|---|---|---|
| HostSpace & Subject | HostSpace: U.System:TrajectorySpace; Subject: MotionGrammar. | HostSpace: U.Episteme:ChartFamily (within one U.BoundedContext); Subject: NormalizationMethod‑Class. |
| Vocabulary | Types: Pose, Segment; Operators: concat, reverse, sample (any Γ‑builder is governed by A.6). | Operators: apply(method), compose, quotient(≡). |
| Laws (Invariants/Guards) | Closure of concat; associativity; time‑monotone sampling; admissible reverse only for holonomic arms. | Ratio→positive‑scalar; Interval→affine; Ordinal→monotone; Nominal→categorical; LUT(+uncertainty). |
| Applicability (Scope & Context) | Context: industrial robotics; stance: design; time notion: discrete ticks. Cross‑context transport not declared. | Context: clinical metrics; stance: analysis; validity windows declared; cross‑context transport via Bridge (concept only; details per A.6.1). Numeric comparability bound to CHR/CG‑Spec. |
Why these two? E.8 requires pairs from U.System and U.Episteme to demonstrate trans‑disciplinary universality.
- Local‑first meaning. Laws are local to the named Context; cross‑context use must be explicit (Bridge), never implicit.
- No illicit scalarisation. If numbers appear, legal comparability follows CG‑Spec/MM‑CHR; no ordinal means; partial orders return sets; unit/scale alignment is explicit.
- Register hygiene. Keep Tech vs Plain register pairs; avoid tooling/vendor talk in Kernel prose (E.10).
| ID | Requirement |
|---|---|
| CC‑A.6.0‑1 | A conformant text labelled U.Signature SHALL expose the four‑row Signature Block: HostSpace & Subject; Vocabulary; Laws; Applicability. |
| CC‑A.6.0‑2 | The Block is conceptual only (no packaging/CI metadata, no machine schemas, no Γ). |
| CC‑A.6.0‑3 | Applicability binds a U.BoundedContext; if cross‑context use is intended, a Transport clause is named (Bridge reference) without re‑stating Part F/B.3 details (including any CL^plane). |
| CC‑A.6.0‑4 | Where numeric comparability is implied, Applicability binds to CG‑Spec/MM‑CHR legality (map‑then‑compare; scale/unit alignment). |
| CC‑A.6.0‑5 | Families that specialise A.6.0 (e.g., A.6, A.6.1) MAY add constraints (e.g., Γ‑export policy; penalty routing) and MAY add a family‑specific view (e.g., the Architheory View) but MUST NOT contradict A.6.0’s separation of intension vs specification. |
| CC‑A.6.0‑6 | Under E.10/E.8, tokens respect strata/registers; Kernel names are not redefined in Architheory/Context prose (Part F naming discipline applies). |
- Uniform kernel shape. Authors can define architheory, mechanism, method, discipline, or theory signatures without inventing new templates.
- Hard decoupling. A.6 can continue to guarantee substitutable Realizations behind a stable Signature; A.6.1 can continue to guarantee law‑governed operations with explicit guard surfaces.
- Didactic cohesion. Readers see the same four conceptual rows across the spec, satisfying E.8’s comparability goal.
Why “HostSpace & Subject”? A.6.1 showed that making the carrier explicit (here: HostSpace) avoids category mistakes when moving between domains (e.g., set‑algebra on context slices vs equivalence‑classes of normalisations). A.6.0 lifts this to the kernel so every signature can declare what it is about before saying what it provides. Why one universal Block? A.6 already proved the value of a compact Signature Block (Imports/Derivations/Invariants/Assurance). A.6.0 factors out the conceptual core—rephrased as “HostSpace & Subject / Vocabulary / Laws / Applicability”—so A.6 can map its four rows onto this universal frame without changing existing architheories.
Informative echoes (post‑2015 SoTA). — Algebraic effects & handlers (OCaml 5, Koka, Effekt): operation signatures + handler laws mirror Vocabulary + Laws while keeping implementations separate. — Policy‑as‑code (OPA/Rego): declarative guard surfaces echo Applicability. — Session/behavioural types (2016–2024): protocol/admissibility laws parallel the Laws row.
- Specialises / is specialised by: A.6 (adds Γ‑export policy; imports DAG; architheory layering) and A.6.1 (adds OpSig/LawSet/GuardPolicy/Transport for mechanisms).
- Constrained by: E.10 LEX‑BUNDLE (registers, strata); D.CTX for Context binding; Part F (Bridges & cross‑context transport; naming).
- Enables: uniform authoring and comparison of signatures across Part C families, methods, and discipline glossaries (Part F).
This pattern follows the E.8 canonical order and uses Tech/Plain register discipline per E.10; it introduces no packaging metadata, no Γ, and remains purely conceptual.
FPF’s architecture is a modular ecosystem of architheories (CAL/LOG/CHR) that extend a slim Kernel. To keep composition stable and comparable, each architheory publishes a public Signature (the Standard) and provides one or more Realizations (private implementations).
A.6 as a specialisation. This pattern is the architheory‑specific specialisation of A.6.0 U.Signature and coordinates cross‑context use with A.6.1 U.Mechanism (Bridge/CL per F.9; penalties route to R/R_eff only per B.3; F/G invariant; CL^plane per C.2.1 CHR:ReferencePlane).
When Signatures (interfaces) leak implementation, the ecosystem becomes brittle: (1) substitutability breaks, (2) imports entangle, (3) cross‑context use becomes implicit and unauditable.
| Force | Tension |
|---|---|
| Stability vs. evolution | Keep public promises stable while allowing private Realizations to evolve. |
| Universality vs. fitness | One Signature shape across CAL/LOG/CHR vs architheory‑specific vocabularies. |
| Intension vs. specification | Signatures state what & laws; Realizations carry how/tests. |
| Locality vs. transport | Context‑local semantics vs explicit, auditable Bridge‑only crossings (R‑only penalties). |
A Signature states what an architheory offers—its vocabulary, laws, and applicability—without embedding implementation or build metadata. It is the stable unit that other architheories import.
A Realization satisfies the Signature while remaining opaque. Multiple Realizations may co‑exist; they may tighten (never relax) the Signature’s laws (Liskov‑style substitutability).
Every architheory SHALL publish two adjacent views of its public contract:
- the universal A.6.0 U.Signature Block (HostSpace & Subject; Vocabulary; Laws; Applicability), and
- an Architheory View that preserves the pass interface used across Part C: Imports / Derivations / Invariants / BelongsToAssurance.
This ensures both cross‑family uniformity and compatibility with existing architheory tooling.
| U.Signature row (A.6.0) | A.6 alias / where to author it |
|---|---|
| HostSpace & Subject | One‑line declaration above the block (carrier plane and governed subject) |
| Vocabulary | Derivations (public types/relations/operators that the theory contributes) |
| Laws (Invariants/Guards) | Invariants (law statements; proofs live in Realizations) |
| Applicability (Scope & Context) | BelongsToAssurance + context note in the header; bind a U.BoundedContext where relevant; numeric comparability binds to CG‑Spec/MM‑CHR (map‑then‑compare; lawful units/scales). |
Architheory View (mandatory alongside the universal view):
- Imports — required U.Types/relations already present or produced by earlier passes.
- Derivations — new U.Types/relations/operators the architheory contributes.
- Invariants — law statements (proofs in Realizations).
- BelongsToAssurance — {Typing | Verification | Validation}.
Prohibition. The Signature block is conceptual: no packaging/CI/tooling metadata (LEX firewall), no Γ‑builders (except as permitted below for CAL).
- A CAL architheory SHALL export exactly one aggregation/builder Γ. The Γ identifier MUST be namespaced under the architheory id (e.g., ArchitheoryId.Γ) to avoid collisions.
- LOG and CHR architheories SHALL NOT export Γ.
- Import layering SHALL respect the holonic stack: LOG/CHR may import CAL; CAL may import CAL; import graphs are acyclic and respect LEX‑BUNDLE strata (Kernel → Architheory → Context → Instance); no cross‑bleed.
Each Signature begins with:
id (PascalCase), version (SemVer), status (draft/review/stable/deprecated), classification (CAL/LOG/CHR), imports (list), provides (list, including Γ if CAL).
If HostSpace & Subject are non‑trivial, add a one‑liner in the header (or immediately above the block).
Signatures SHALL NOT restate Bridge/CL mechanics. If cross‑context/plane use is intended, the Signature names the Bridge conceptually. Semantics are governed by A.6.1 U.Mechanism; Bridges are specified in F.9; CL/CL^k and Φ/Ψ penalty calculus live in B.3; CL^plane follows C.2.1 CHR:ReferencePlane. No implicit “latest”; time‑sensitive guards require an explicit Γ_time policy in the consuming mechanism.
implements(Realization, Signature) (mandatory, one‑way) · imports(Signatureᵢ, Signatureⱼ) (DAG) · provides(Signature, U.Type ∪ Operator) (public namespace).
Provide a brief pair of examples (Work/System; Knowledge/Episteme) that name HostSpace & Subject, show Vocabulary and Laws, and state Applicability/Context. Keep proofs out of the Signature.
| ID | Requirement |
|---|---|
| CC‑A6.1 | Every architheory MUST declare exactly one Signature. |
| CC‑A6.2 | Every architheory MUST provide ≥ 1 Realization consistent with its Signature. |
| CC‑A6.3 | The global graph of imports MUST be acyclic. |
| CC‑A6.4 | Realizations MUST NOT reference internals of other architheories; only their Signatures. |
| CC‑A6.5 | A Signature’s provides MUST NOT redeclare U.Types already exported by transitive imports. |
| CC‑A6.6 | Realizations MAY tighten but MUST NOT relax Signature laws (Liskov‑style). |
| CC‑A6.7 | If multiple Realizations exist, authors SHOULD provide a short trade‑off rationale. |
| CC‑A6.8 | The Signature MUST include an explicit A.6.0 alignment mapping (table or one‑liners). |
| CC‑A6.9 | Where numeric comparability is implied, bind legality to CG‑Spec/MM‑CHR (map‑then‑compare; lawful units/scales; no ordinal means). |
| CC‑A6.10 | Any intended cross‑context/plane use MUST name the Bridge and defer semantics to A.6.1/Part F; penalties route to R/R_eff only. |
| CC‑A6.11 | If classification = CAL and a Γ is exported, its identifier MUST be namespaced under the architheory id. |
| CC‑A6.12 | Both views of the Signature are present: the universal A.6.0 Block and the Architheory View (Imports/Derivations/Invariants/BelongsToAssurance) placed adjacently. |
Author-facing:
- The two Signature views are present (A.6.0 Block and Architheory View).
- If classification = CAL, exactly one Γ is named.
- Imports point down the layering and remain acyclic.
- Any referenced artefacts are anchored by SCR/RSCR identifiers (A.10).
- An A.6.0 alignment note is provided (table or one‑liners as above).
- Hard decoupling — Kernel stability is preserved; swapping a Realization never breaks dependents.
- In‑framework competition — Alternative logics, physics, economic models can co‑exist under the same interface.
- Machine‑checkable composition — Because imports form a DAG and provides are explicit, automated loaders can detect conflicts early (a sketch follows below).
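A sketch of how such a loader might enforce CC‑A6.3 (acyclic imports) and CC‑A6.5 (no redeclared provides). The function names and the dict encoding of the import graph are assumptions for illustration, not part of any FPF tooling.

```python
def check_imports_acyclic(imports: dict[str, set[str]]) -> None:
    """CC-A6.3: the global import graph of Signatures must be a DAG."""
    WHITE, GREY, BLACK = 0, 1, 2
    colour: dict[str, int] = {}

    def visit(sig: str) -> None:
        colour[sig] = GREY
        for dep in imports.get(sig, set()):
            state = colour.get(dep, WHITE)
            if state == GREY:
                raise ValueError(f"import cycle detected through {sig} -> {dep}")
            if state == WHITE:
                visit(dep)
        colour[sig] = BLACK

    for sig in imports:
        if colour.get(sig, WHITE) == WHITE:
            visit(sig)

def check_no_redeclaration(provides: dict[str, set[str]],
                           imports: dict[str, set[str]]) -> None:
    """CC-A6.5: provides must not redeclare names exported by transitive imports."""
    def transitive(sig: str) -> set[str]:
        seen: set[str] = set()
        stack = list(imports.get(sig, set()))
        while stack:
            dep = stack.pop()
            if dep not in seen:
                seen.add(dep)
                stack.extend(imports.get(dep, set()))
        return seen

    for sig, names in provides.items():
        inherited: set[str] = set()
        for dep in transitive(sig):
            inherited |= provides.get(dep, set())
        clash = names & inherited
        if clash:
            raise ValueError(f"{sig} redeclares imported names: {sorted(clash)}")
```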
| Benefit | What you get | Trade‑off / Guard |
|---|---|---|
| Universal shape (A.6.0 alignment) | One 4‑row block across architheories, mechanisms, methods, disciplines. | Maintain Intension vs. Specification separation; no Γ in Signatures except CAL per A.6. |
| Substitutability | Multiple Realizations behind one Signature; safe swaps; Liskov‑style tightening allowed. | Relaxing laws is forbidden; otherwise mint a refined Signature or use U.MechMorph (A.6.1). |
| Transport discipline | Bridge‑only crossing; CL penalties route to R/R_eff; F/G invariant. | Crossings are named; no implicit “latest”; Γ_time where relevant. |
| Numeric comparability sanity | Map‑then‑compare via CG‑Spec/MM‑CHR; explicit unit/scale alignment. | Partial orders return sets; illegal scalarisation (e.g., ordinal means) is blocked. |
| Layering predictability | Exactly one Γ for CAL; LOG/CHR export none; imports acyclic; no cross‑bleed across strata. | Some constructs belong as Mechanisms (A.6.1), not as architheories. |
Why “Signature”? Familiar to engineers (function/type signatures) and to logicians (algebraic signatures). It is concise, neutral, and keeps the Kernel slim while enabling competing world‑views to co‑exist behind the same interface.
Specialises / is specialised by: A.6.0 U.Signature, A.6.1 U.Mechanism.
Constrained by: LEX‑BUNDLE (registers/strata), D.CTX (Context), Part F (Bridges & cross‑context transport; naming).
One‑line summary. A U.Mechanism is a Signature with laws over a declared HostSpace and GovernedSubject, with explicit operations, invariants/guards, and a named Transport clause for cross‑context use. Transport is Bridge‑only (per F.9) with penalties routed to R/R_eff only (per B.3); F/G remain invariant; CL^plane follows C.2.1 CHR:ReferencePlane. Realizations MAY be published under A.6 (Signature→Realization; one Γ only if classification=CAL; acyclic imports; opacity).
Status. Normative [A] in Part A (Kernel).
Placement. Immediately after A.6 as A.6.1. USM (A.2.6) and UNM (A.19/C.16) become instances conforming to A.6.1 (no semantic change to either).
Give FPF one uniform kernel shape for things like USM (set‑algebra on context slices) and UNM (classes of admissible normalizations with ≡_UNM) so authors can define, compare, refine, compose, and port mechanisms without re‑inventing the meta‑language; all cross‑context use is Bridge‑only with CL penalties to R/R_eff, never to F/G.
Without a kernel abstraction, scope/normalization/comparison constructs proliferate with incompatible algebras and guard surfaces; cross‑context reuse lacks visible Bridge/CL routing; comparability drifts into illegal scalarisation (e.g., ordinal means). FPF already curbs this via A.6 (Signature discipline), USM (scope algebra & Γ_time), UNM (normalize‑then‑compare), and CG‑Spec (lawful comparators/gauges)—but lacks a common meta‑slot for “mechanism.”
Locality vs transport. Semantics are context‑local; crossing contexts is Bridge‑only (Part F/B.3); penalties hit R/R_eff; F/G invariant.
Expressivity vs legality. Rich operators vs CHR legality and CG‑Spec (no ordinal averages; lawful unit alignment).
Time determinacy. Explicit Γ_time; no implicit latest. (Required in USM’s ContextSlice.)
Signature hygiene. Obey A.6 (exactly one Γ from CAL, LOG/CHR export none; imports acyclic; realizations opaque).
A U.Mechanism publishes
U.MechSig := ⟨SignatureHeader, Imports, HostSpace, GovernedSubject, OpSig, LawSet, GuardPolicy, Transport, Γ_timePolicy, PlanePolicy⟩
and admits Realizations (kernel‑level or architheory‑level) that respect it. The shape is notation‑independent and conceptual (no tooling, storage, or CI metadata).
- SignatureHeader. id (PascalCase), version (SemVer), status (draft/review/stable/deprecated). If realized as an Architheory, add the A.6 header with classification ∈ {CAL | LOG | CHR} and imports/provides; only CAL may export exactly one Γ; LOG/CHR export none. For Kernel‑level realizations, do not mint an A.6 header.
- Imports. Architheory Signatures / U.Types this mechanism requires (notation‑independent; acyclic). When realized as an Architheory, LOG/CHR may import CAL; CAL may import CAL (A.6 layering).
- HostSpace. The typed value‑holding space the mechanism ranges over (e.g., U.ContextSliceSpace; a U.CharacteristicSpace/chart family within one U.BoundedContext). Do not mint a new core type here; reference existing U.Types (LEX discipline). If planes differ, state the ReferencePlane policy (see PlanePolicy).
- GovernedSubject. What is governed: Characteristic | TransformationClass | Equivalence | Protocol | Scope | … (e.g., Scope sets; NormalizationMethod classes with induced ≡_UNM).
- OpSig. Named operations with types; examples: • USM: ∈, ⊆, ∩, SpanUnion, translate, widen, narrow, refit. • UNM: apply(method), compose, quotient(≡_UNM); normalize‑then‑compare.
- LawSet. Equations/admissibility (no proofs here; statements only). Laws MUST be compatible with CHR legality where numeric comparison/aggregation is induced. Examples: • USM: serial intersection; SpanUnion only where a named independence assumption is satisfied (state features/axes, validity window, evidence class); translate uses declared Bridges; Γ_time is mandatory. • UNM: scale‑appropriate transforms — ratio→positive‑scalar; interval→affine; ordinal→monotone; nominal→categorical; tabular: LUT(+uncertainty). (Do not mint a new Kernel token for “certificate”; if such a type is later required, it MUST follow DRR/LEX minting.)
- GuardPolicy. Deterministic, context‑local predicates that fail closed (e.g., “Scope covers TargetSlice” with named Γ_time; “NormalizationMethod class + validity window named”). Unknowns → {degrade | abstain}; never coerce to 0/false.
- Transport. Bridge‑only semantics for cross‑context / cross‑plane use: name the Bridge and channel (Scope | Kind) per F.9, and record ReferencePlane(src,tgt) per C.2.1. Do not restate CL ladders, CL^plane, or Φ/Ψ tables here; penalties route to R/R_eff only and never mutate F/G (per B.3). Crossings are explicit; no implicit crossings. Where USM or KindBridge are used together, apply the two‑bridge rule (scope CL and kind CL^k penalties handled separately to R).
- Γ_timePolicy. Point/window/policy; no implicit “latest.” Validity windows are named; required whenever guards reference time.
- PlanePolicy. Declare ReferencePlane on values/paths; when planes differ, name CL^plane and apply a Φ_plane policy (Part F/B.3). Plane penalties do not change CL; route to R/R_eff only; F/G stay invariant.
- Audit. Conceptual audit surface only (no data/telemetry workflows): crossings are publishable on UTS; surface policy‑ids rather than tables. Edition pins and regression hooks (if any) are referenced by id; operational details remain out of scope.
- SignatureBlock alignment (A.6). When realized as an Architheory, map U.MechSig to the A.6 Signature Block — Imports, Derivations, Invariants, BelongsToAssurance — and include the A.6 header with classification/provides. CAL Realizations MAY provide exactly one Γ; LOG/CHR provide none; imports form a DAG; internals opaque.
Compatibility with A.6. If realized as an architheory (CAL/LOG/CHR), obey A.6 (one Γ for CAL only; acyclic imports; opacity). Kernel‑level realizations remain notation‑independent and publish the same fields for auditability. LEX discipline applies to all minted tokens.
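An illustrative encoding of the U.MechSig tuple as a record; field names are expository assumptions, and the single validation shown enforces the “no implicit latest” rule for Γ_time.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MechSig:
    """Sketch of U.MechSig; Python names are expository, not normative tokens."""
    header: dict                 # id, version, status (+ A.6 fields if architheory-realized)
    imports: tuple[str, ...]     # acyclic, notation-independent
    host_space: str              # an existing U.Type; never minted here
    governed_subject: str        # Characteristic | TransformationClass | ... | Scope
    op_sig: dict                 # operation name -> type
    law_set: tuple[str, ...]     # law statements only; proofs live in Realizations
    guard_policy: str            # deterministic, context-local, fail-closed
    transport: str | None        # named Bridge reference, or None when no crossing
    gamma_time_policy: str       # "point" | "window" | "policy"
    plane_policy: str

    def __post_init__(self):
        # No implicit "latest": the Γ_time policy must be one of the named forms.
        if self.gamma_time_policy not in {"point", "window", "policy"}:
            raise ValueError("Γ_time policy must be explicit: point|window|policy")
```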
Intent. Provide structure‑preserving relations & constructors between mechanisms. Definitions.
- Refinement M′ ⊑ M: narrows HostSpace/Applicability or strengthens laws (safe substitution; Liskov‑style).
- Extension M ⊑⁺ M″: adds operations without weakening existing Laws; old programs remain valid (conservative extension).
- Equivalence M ≡ M′: there exists a bijective mapping between Subjects/ops preserving/reflecting LawSet (up‑to‑isomorphism on HostSpace and OpSig).
- Quotient M/≈: factor by a congruence (e.g., ≡_UNM for charts).
- Product M×N: independent HostSpaces; ops are component‑wise; ensures no illegal cross‑ops (e.g., set‑algebra discipline for SpanUnion). Where independence is claimed, name and justify the assumption (do not mint new Kernel types here).
- Transport Bridge⋅M: lifts across Contexts/planes; names CL/CL^k/CL^plane regimes; penalties → R_eff only; UTS row recommended for publication; ReferencePlane(src,tgt) recorded. If mapping losses are material, narrow the mapped set or publish an adapter (best practice).
Passing example. USM′ = USM + “publish named independence‑assumption evidence for SpanUnion” ⇒ Refinement (strengthened law; substitution‑safe).
Normalization quotient. UNM / ≡_UNM exposes compare‑on‑invariants surfaces for CPM/SCM (map‑then‑compare).
MechanismDescription (E.8 Tell–Show–Show; I/D/S‑compliant):
Mechanism: U.<Name> (Kernel conceptual description; no tooling fields)
Imports: <Signatures / U.Types> · HostSpace: <U.Type> · GovernedSubject: <Characteristic | TransformationClass | Equivalence | Protocol | Scope | …> · OpSig: <ops with types> · LawSet: <equations/guards/monotonicity> · GuardPolicy: <admission predicates; Γ_time> · Transport: <Bridge channels; CL/CL^k/CL^plane named; ReferencePlane(src,tgt)> · PlanePolicy: <world|concept|episteme rules>
- MechFamilyDescription: {MechSig, Realization_α, Realization_β, …} — each Realization may tighten (never relax) Laws (Liskov‑style).
- MechInstanceDescription: {MechSig@Context, Windows, named Φ/Ψ/Φ_plane regimes, BridgeIds} — a conceptual instance; operational telemetry/workflows are out of scope.
- Imports: U.ContextSliceSpace; Part F.9 Bridge; C.2.1 ReferencePlane (noted for crossings); C.2.2 F–G–R; C.2.3 U.Formality.
- HostSpace: U.ContextSliceSpace.
- GovernedSubject: U.Scope with specializations U.ClaimScope (G) and U.WorkScope.
- OpSig: ∈, ⊆, ∩, SpanUnion, translate, widen, narrow, refit.
- LawSet: serial intersection; SpanUnion only where a named independence assumption is satisfied (state features/axes, validity window, evidence class); translate uses declared Bridges; Γ_time is mandatory.
- GuardPolicy: deterministic “Scope covers TargetSlice”; fail‑closed; unknown → {degrade | abstain} (no implicit unknown→0/false).
- Transport: Bridge‑only with CL; penalties → R_eff; F/G invariant; publish UTS notes.
- Γ_timePolicy: point | window | policy; no implicit “latest.”
- PlanePolicy: not applicable to scope sets (scope is set‑valued over ContextSlice, no value‑plane); CL^plane N/A.
- Imports: A.17/A.18 (CSLC); C.16 (MM‑CHR); U.BoundedContext; Part F.9 Bridge; C.2.1 ReferencePlane.
- HostSpace: chart/U.CharacteristicSpace family in a CN‑frame (one U.BoundedContext).
- GovernedSubject: NormalizationMethod classes with induced ≡_UNM equivalence over charts.
- OpSig: apply(method), compose, quotient(≡_UNM); normalize‑then‑compare (exposes compare‑on‑invariants surfaces to CPM/SCM).
- LawSet: scale‑appropriate transforms — ratio: scale / interval: affine / ordinal: monotone / nominal: categorical / tabular: LUT(+uncertainty); validity windows per edition.
- GuardPolicy: method ∈ declared class‑set AND validity window named; fail‑closed; unknown → {degrade | abstain}.
- Transport: Bridge‑only on cross‑Context; when aboutness changes, declare KindBridge (CL^k); penalties → R_eff only.
- Γ_timePolicy: named validity windows for NormalizationMethod/instances (editioned).
- PlanePolicy: values live on episteme ReferencePlane; on plane crossings apply CL^plane policy; penalties → R_eff only.
(No operational telemetry implied; publication remains conceptual.)
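A sketch of the normalize‑then‑compare discipline above, under scale‑appropriate transform classes. The `normalize` table and its constants are illustrative assumptions; a real UNM instance also pins validity windows and editions.

```python
# Illustrative method classes; comparing raw values across charts is illegal.
def normalize(kind: str, x: float) -> float:
    if kind == "ratio":      # positive-scalar rescaling (here: an assumed factor)
        return 0.001 * x
    if kind == "interval":   # affine map (here: Fahrenheit -> Celsius)
        return (x - 32.0) * 5.0 / 9.0
    if kind == "ordinal":    # any monotone map; identity preserves the order
        return x
    raise ValueError(f"method not in declared class-set: {kind}")  # fail closed

def compare(kind: str, x: float, y: float) -> int:
    """Map-then-compare: only normalized values are ever ordered."""
    fx, fy = normalize(kind, x), normalize(kind, y)
    return (fx > fy) - (fx < fy)   # -1 / 0 / +1 on the normalized scale
```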
- Local‑first semantics. All judgments are context‑local; crossings are explicit and costed (CL→R only).
- Legality‑first comparability. Numeric comparison/aggregation requires CG‑Spec (lawful gauge, Γ‑fold, MinimalEvidence); partial orders return sets; no ordinal means.
- Tri‑state discipline. unknown → {degrade | abstain}; sandbox/probe‑only is a LOG branch with a policy‑id (no implicit unknown→0/false). A sketch of this fail‑closed guard follows this list.
- R‑only penalties. Φ/Ψ/Φ_plane are monotone and bounded; penalties route to R_eff only; F/G invariant.
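A minimal sketch of the tri‑state guard discipline; the names `Verdict` and `admit` are illustrative assumptions, not normative tokens.

```python
from enum import Enum

class Verdict(Enum):
    PASS = "pass"
    DEGRADE = "degrade"   # proceed with reduced R_eff
    ABSTAIN = "abstain"   # refuse the operation

def admit(scope_covers_target: bool | None, window_named: bool) -> Verdict:
    """Fail-closed, tri-state admission: unknown never coerces to 0/False."""
    if not window_named:               # missing Γ_time policy: fail closed
        return Verdict.ABSTAIN
    if scope_covers_target is None:    # unknown -> {degrade | abstain}, per policy
        return Verdict.DEGRADE
    return Verdict.PASS if scope_covers_target else Verdict.ABSTAIN
```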
| ID | Requirement |
|---|---|
| CC‑UM.1 | Complete MechSig publishes: Imports, HostSpace (existing U.Type), GovernedSubject, OpSig, LawSet, GuardPolicy, Transport (Bridge named; ReferencePlane), Γ_timePolicy, PlanePolicy. |
| CC‑UM.2 | A.6 alignment: if realized as Architheory, use A.6 header; one Γ only if CAL; LOG/CHR none; imports acyclic; Realizations opaque; laws may be tightened (not relaxed). |
| CC‑UM.3 | Bridge‑only transport: crossings name a Bridge (F.9); ReferencePlane(src,tgt) recorded (C.2.1); CL^plane named when planes differ; no implicit crossings. When typed reuse is involved, the two‑bridge rule applies (scope CL and kind CL^k penalties routed separately to R). |
| CC‑UM.4 | R‑only routing: Φ/Ψ/Φ_plane regimes and CL ladders per B.3; penalties reduce R/R_eff only; F/G invariant. |
| CC‑UM.5 | CG‑Spec binding for any numeric compare/aggregate: lawful gauges and Γ‑fold; map‑then‑compare; partial orders return sets; no ordinal means; interval/ratio arithmetic only with unit alignment (CSLC‑proven). |
| CC‑UM.6 | E.8/E.10 compliance: Tell–Show–Show present under “Archetypal Grounding”; twin registers & I‑D‑S respected; any new U.* token requires a DRR and LEX.TokenClass entry; non‑spec surfaces end with “…Description”; no tool/vendor tokens in Core. |
| CC‑UM.7 | Unknowns tri‑state: guards define unknown → {degrade | abstain}; sandbox/probe‑only is a LOG branch with a policy‑id; no implicit unknown→0/false. |
CPM — Comparison Mechanism (parity‑grade orders)
HostSpace: typed traits/charts in a CG‑Frame. OpSig: lawful orders (≤, ≽, lexicographic) + set‑returning dominance (Pareto). LawSet: no ordinal averaging; map‑then‑compare when spaces/scales differ (UNM); editions pinned. GuardPolicy: CG‑Spec bound; ComparatorSet explicit. Transport: Bridge+CL → R/R_eff only.
SCM — Scoring Mechanism (gauge‑first)
HostSpace: U.Measure (CHR‑typed slots). OpSig: gauge embeddings + admissible aggregators; WeightedSum only on interval/ratio with unit alignment; partial orders return sets. Guards: MinimalEvidence + CG‑Spec legality. Transport: penalties → R/R_eff; UTS row.
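A sketch of the set‑returning dominance that CPM and SCM rely on: a partial order yields a Pareto set, never a forced scalar winner. Function names are illustrative; traits are assumed already gauge‑aligned per CG‑Spec.

```python
def dominates(a: tuple[float, ...], b: tuple[float, ...]) -> bool:
    """a dominates b when a is >= on every trait and > on at least one
    (traits already gauge-aligned; higher is better in this sketch)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates: list[tuple[float, ...]]) -> list[tuple[float, ...]]:
    """Set-returning comparison: partial orders return sets."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates if o is not c)]

# e.g. pareto_front([(1, 3), (2, 2), (3, 1), (1, 1)]) keeps the first three:
# they are mutually non-dominated, while (1, 1) is dominated by all of them.
```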
PTM — Publication & Telemetry Mechanism (informative)
HostSpace: SoTA‑Pack(Core), PathId/PathSliceId, PolicyId. OpSig: emit selector‑ready packs with parity pins and telemetry stubs; listen for edition/illumination bumps; trigger slice‑scoped refresh.
LawSet: no change of dominance defaults unless CAL policy promotes; edition‑aware refresh. Guards: AH‑1..AH‑4 block missing pins. Transport/Audit: G.10/G.11 publication & refresh semantics (CL routing to R/R_eff).
Informative SoTA: telemetry hooks align with post‑2015 quality‑diversity families (CMA‑ME/MAE, DQD/MEGA) and open‑ended methods (POET‑class) when gauged (illumination) rather than scored.
- Algebraic effects & handlers (post‑2015: Koka, Effekt, OCaml 5 effects): operation signatures + handler laws ↔ OpSig + Realizations; “effect scope” echoes USM guards.
- Institutions (Goguen–Burstall; HETS): signature–sentences–models with (co)morphisms ↔ A.6.1 + U.MechMorph; FPF adds Γ_time and R‑only penalties.
- Policy‑as‑Code (Rego/OPA) and ODD ISO 3450x: admissibility predicates ↔ GuardPolicy with ContextSlice and Γ_time.
- Session/Typestate types: admissible sequences & state guards ↔ deterministic Mechanism guards and set‑valued scopes.
(Analogies are descriptive only; normative content is in the FPF text and patterns above.)
“To mint a mechanism, fill a MechSig: pick HostSpace and GovernedSubject; declare OpSig and LawSet; state GuardPolicy and Γ_time; define Transport (Bridge/CL with penalties to R_eff only), and Audit (UTS + Path pins). Realize it as CAL/LOG/CHR under A.6. USM and UNM are already such mechanisms; the same template births comparison, scoring, and publication mechanisms—safely bound to CG‑Spec—without leaving the kernel grammar.”
- Draft AT0/AT1 charter: why this Mechanism, which guard surfaces and comparability are in scope; is a Γ_m (CAL) builder needed?
- Fill MechSig (HostSpace, CharacteristicKind, OpSig, LawSet, GuardPolicy, Transport, Γ_timePolicy, Audit).
- Bind CHR legality & CG‑Spec when comparing/aggregating (ComparatorSet, Gauge, MinimalEvidence, Γ‑fold).
- Ship UTS + G.10; wire G.11 telemetry (PathSlice‑keyed); ensure penalties route to R_eff only.
- Uniform kernel shape. Scope, normalization, comparison families can be authored and compared without lexical drift.
- Auditable reuse. GateCrossings are UTS‑visible; penalties are transparent (R only), with AH‑1..AH‑4 harness coverage.
- Scalarisation avoids illegality. Partial orders remain set‑valued; cross‑scale arithmetic is blocked by CG‑Spec/CSLC.
Anchoring mechanisms in A.6 Signature→Realization provides a minimal, typed surface that preserves USM set‑algebra and UNM “normalize‑then‑compare” quotients while making E.11 crossings explicit and costed on R (never F/G).
Builds on A.6; instantiates A.2.6 USM (ContextSlice, Γ_time, ∩/SpanUnion/translate) and A.19/C.16 UNM (classes, ≡_UNM, validity windows); uses Part B (Bridges, CL/CL^k/CL^plane; no implicit crossings); binds CG‑Spec for any numeric comparison/aggregation; telemetry/publication via G.10/G.11.
Provide a single, didactically clear lattice of distinctions that keeps models free from category errors. This pattern is the guard‑rail that prevents four recurrent confusions:
- Role vs Function (mask vs behaviour),
- MethodDescription vs Method vs Work (description vs capability vs occurrence),
- Holon vs System vs Episteme (what can act and what cannot),
- Episteme vs Carrier (knowledge vs its material signs).
It harmonizes A.12 (External Transformer), A.13 (Agential Role & Agency Spectrum), A.14 (Advanced Mereology), and A.15 (Role–Method–Work Alignment).
- Holons (A.1) and systems. All holons are part/whole units; only systems can enact behaviour.
- Externalization (A.12). Every change is performed by a system bearing TransformerRole across a boundary; there is no “self‑magic”.
- Quartet backbone (A.3, A.15). We separate MethodDescription (description), Method (design‑time capability), and Work (run‑time occurrence), with the system bearing TransformerRole as the acting side.
- Evidence (A.10). Knowledge claims are anchored via Symbol‑Carrier Register (SCR); epistemes never “act”, they are used by systems that act on their carriers.
Manager’s reading: if a sentence could be read as “the document decided” or “the process executed itself”, it violates A.7.
When documents blur the above lines, three classes of defects appear:
- Category collapse. People write “function/role/process” interchangeably; teams then disagree whether they are changing a plan, a capability, or reporting an actual occurrence.
- Agency misplacement. Epistemes (documents, models) are treated as doers; collectives as raw sets; or a “holon” is used where only a system makes sense.
- Audit failures. A MethodDescription is cited as if it were evidence; or Work has no anchors (no carriers, no time span), making trust impossible (B.3).
| Force | Tension |
|---|---|
| Didactic brevity vs conceptual precision | Teams want short words (“process”, “function”) ↔ the framework must keep five distinct layers apart. |
| Universality vs domain idioms | We support engineering idioms (procedure, SOP, algorithm, workflow) ↔ internally we must map them unambiguously. |
| Parsimony vs completeness | Minimal concept set ↔ enough distinctions to avoid the classic traps (role/function; plan/capability/occurrence; episteme/carrier). |
Terminology (normative): Four orthogonal characteristics
• senseFamily — the categorical characteristic, used by F.7/F.8/F.9: {Role | Status | Measurement | Type‑structure | Method | Execution}. Rows must be sense‑uniform.
• ReferencePlane — the referent mode per CHR: {world/external | conceptual | epistemic}.
• I/D/S layer — the Intension/Description/Specification layer (E.10.D2). Not an I/D/S “plane” or “stance”, and not a bare “layer”.
• design/run Stance — the design vs run temporal stance. Not a temporal “plane” or “layer”, and not a bare “stance”.
A.7 establishes the following pairs and triplets. Use their names and scope exactly as below.
- Role (role‑object, mask). A contextual position a holon can bear (A.2, A.15). A role is not behaviour; it is the mask under which behaviour may be enacted.
  - Example: Cooling‑CirculatorRole in a thermal loop.
- Function = behaviour = Method under a role. What a system actually does when bearing a role. In Transformer context, this behaviour is the Method (design‑time capability) that can be executed as Work (run‑time).
  - Safe rewrite for earlier “Holonic Duality (Substance ⧧ Function)”: Holonic Duality (Substance ⧧ Role). A U.System keeps its identity (substance) while switching roles; each role may entail a Method (behaviour) and its possible Work (occurrence).
Normative guard: Use “Role” for the mask; use “Method/Work” for behaviour/occurrence. Do not call the role itself a function.
- MethodDescription — the description (algorithm / SOP / recipe / script) at design‑time. Anchored via SCR (A.10).
- Method — the order‑sensitive capability the system bearing TransformerRole can enact, composed with Γ_method (B.1.5). A Method exists only while related Work is underway; outside executions refer to it via MethodDescription (see A.15 §2.2, §4.1).
- Work — the dated run‑time occurrence (what actually happened), with resource spend (Γ_work) and temporal coverage (Γ_time).
Normative guard: Never use MethodDescription as evidence of Work; never present Method as if it had happened.
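A sketch of the trio as distinct types, with the bearer type‑checked to a system (CC‑A7.2). All class and field names here are expository assumptions, not normative tokens.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class MethodDescription:    # design-time description (SOP/recipe), SCR-anchored
    scr_id: str
    text: str

@dataclass(frozen=True)
class System:               # only systems may bear behavioural roles
    name: str

@dataclass(frozen=True)
class Method:               # design-time capability of a role-bearing system
    bearer: System          # type-level guard: the bearer is a System, never an episteme
    described_by: MethodDescription

@dataclass(frozen=True)
class Work:                 # dated run-time occurrence; the only admissible evidence
    method: Method
    started: datetime
    ended: datetime
    outcome: str

# "We followed the blueprint" names a MethodDescription and a Method;
# only a Work value evidences that anything actually happened.
```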
- System — the only holon kind that can bear behavioural roles and enact Method/Work.
- Episteme — cannot act; it is changed via its carriers by a system. Epistemes may bear non‑behavioural roles (e.g., ReferenceRole, ConstraintSourceRole).
- Holon — umbrella term; do not use it where only system is meaningful (e.g., “holon bearing TransformerRole” is invalid; write “system bearing TransformerRole”).
Normative guard: Behavioural roles (including TransformerRole) have domain = system. Epistemes may bear purely classificatory roles only.
- Episteme — the knowledge content (claim, model, requirement set).
- Symbol Carrier — the physical/digital sign that carries the episteme (file, volume, dataset item), tracked in SCR; remote sets in RSCR.
- Use: Evidence, provenance, and reproducibility address carriers; arguments and validity address epistemes.
Normative guard: When you say “we updated the spec”, detail which carriers changed (A.10).
- Set / Collection (MemberOf) — mathematical or catalog grouping; no joint behaviour implied.
- Collective System — a system with boundary and coordination Method (e.g., a team).
- Use relations correctly:
  - ComponentOf — mechanical/structural part in systems.
  - ConstituentOf — logical/content part in epistemes.
  - PortionOf — quantitative portion with conserved extensives.
  - PhaseOf — temporal part/state across a continuous identity.
  - RoleBearerOf — a system is the bearer of a Role.
Normative guard: If the grouping is expected to act, model a collective system (not a set) and provide its role and Method/Work.
- Γ_sys — composition of system properties (physical/systemic).
- Γ_method — composition of Method (order, branching).
- Γ_time — composition of Work histories and temporal parts.
- Γ_work — composition of resource spend and yields tied to Work. Do not track costs with Γ_method; costs (resources/yield) belong to Γ_work.
Normative guard: Avoid generic “process” for these operators. Reserve “process” for domain idioms; map internally to Method (design) and Work (run).
Example 1 — Pump in a cooling loop
- Substance (system): Centrifugal pump P‑12.
- Role: Cooling‑CirculatorRole.
- MethodDescription: “Loop Circulation v3” (anchored in SCR).
- Method: ordered capability: start → ramp → hold → stop (Γ_method).
- Work: run on 2025‑08‑09 10:00–10:45; energy ledger via Γ_work; log via Γ_time.
- Safe phrasing: “The system bearing TransformerRole (P‑12 control unit) executed the Method described by MethodDescription, producing Work …”
- What not to write: “The pump’s function is the role” (role ≠ behaviour).
Example 2 — Standard document cited in a design
- Episteme: “Safety Standard S‑174”.
- Carriers: PDF (SCR id: scr://std/S‑174/2025‑07), printed volume (scr://print/S‑174/2e).
- Role: ReferenceRole in the valve selection activity.
- System bearing TransformerRole: design team’s selection service.
- MethodDescription: “Valve Selection SOP v5”.
- Method/Work: capability and dated selection session that used the standard; the episteme did not act.
Example 3 — Set vs team
- Set (MemberOf): {Alice, Bob, 3.14} — a collection; no behaviour implied.
- Collective system (team): boundary, coordination Method, supervision Work; can bear AgentialRole (A.13).
- Safe phrasing: “Team T plays Cooling‑MaintenanceRole and executed Work W…”
| ID | Requirement | Practical test |
|---|---|---|
| CC‑A7.1 (Role/Behaviour split) | A Role must be modelled as a contextual mask borne by a holon; behaviour must be expressed as Method (design‑time capability) and Work (run‑time occurrence). | In any sentence, if “role” is used as if it does something, rewrite: the system bearing TransformerRole does it by Method/Work. |
| CC‑A7.2 (Transformer domain) | TransformerRole SHALL be borne only by a system. | Type‑check: bearer ∈ U.System. “holon bearing TransformerRole” is invalid. |
| CC‑A7.3 (Episteme non‑agency) | An episteme SHALL NOT be described as acting. All changes to epistemes must be routed to their symbol carriers (A.10) by a system bearing TransformerRole. | Text contains the acting system + carriers (SCR ids). |
| CC‑A7.4 (MethodDescription ≠ Method ≠ Work) | MethodDescription (description), Method (capability), and Work (occurrence) SHALL be kept distinct in wording and modelling. | Ask: is there a design artefact? a capability? a dated occurrence? Each must be named separately. |
| CC‑A7.5 (Operator fit) | Use Γ_method only for composing Method; Γ_time only for Work histories; Γ_work only for resource spend/yields; Γ_sys for systemic properties of systems. | No sentence should use a single generic “process operator” for all three. |
| CC‑A7.6 (SCR anchoring) | Any knowledge claim that references documents/data SHALL anchor carriers via SCR/RSCR on first mention in the subsection. | First mention expands as “Symbol‑Carrier Register (SCR)”; references list carrier ids. |
| CC‑A7.7 (Collective vs set) | If a grouping is expected to act, it MUST be modelled as a collective system (boundary + coordination Method + Work), not as a MemberOf set. | Presence of boundary, Method, Work for the group. |
| CC‑A7.8 (Diagram legend) | When domain idioms use “process”, diagrams/text MUST map them to FPF terms on first occurrence: process (domain) ≡ Method (design‑time) / Work (run‑time). | Legend or parenthetical present at first use. |
| CC‑A7.9 (Substance ⧧ Role wording) | The safe formula is “System (substance) plays Role; under that Role it has Method; its execution is Work.” | Sentences follow this order; “function” used only as synonym for behaviour, never for the role. |
| CC‑A7.10 (Quartet clarity) | Any “triad” picture MAY be used only as a design‑time contraction (Transformer + MethodDescription + Method) and MUST be accompanied by an explicit Work lane elsewhere in the same section. “Quartet of quartets” headings SHALL be avoided; use “Quartet backbone” instead. | Diagram has a visible Work lane/timeline or separate box within the same section. |
| CC‑A7.11 (Terminology hygiene) | Ban “actor” in core text. Use “system bearing TransformerRole”; bind local shorthand “Transformer” only per A.12 rules. | Plain text scan: no “actor”; shorthand is locally bound. |
| CC‑A7.12 (Role domain guards) | Behavioural roles’ domain = system. Epistemes may bear non‑behavioural roles (e.g., ReferenceRole, ConstraintSourceRole) only. | Role declarations name their domain. |
| Instead of (ambiguous) | Write (canonical) | Why |
|---|---|---|
| “The process enforced the rule.” | “The system bearing TransformerRole enforced the rule by executing the Method; the Work is anchored to carriers ⟨ids⟩.” | Processes don’t act; systems do. Evidence via Work + SCR. |
| “The specification decided to tighten limits.” | “The design‑control service (system bearing TransformerRole) updated the carriers of the spec (SCR ids), producing Work at ⟨time⟩.” | Epistemes are changed via carriers by systems. |
| “Our role is pump; the role circulates coolant.” | “The system plays Cooling‑CirculatorRole; under this role its Method circulates coolant; Work was executed ⟨when⟩.” | Role = mask; behaviour = Method/Work. |
| “We followed the blueprint, so it’s done.” | “We have a MethodDescription and a Method; completion is evidenced by Work with ⟨timestamps, outcomes⟩.” | Description/capability ≠ occurrence. |
| “Team = set of members; it performed repair.” | “The team is a collective system (boundary + coordination Method); it executed Work ⟨…⟩.” | Acting groups must be systems, not sets. |
| “Process cost is tracked by Γ_method.” | “Work cost is tracked by Γ_work; Γ_method composes the Method (order/branching).” | Operator alignment. |
| “Holon bearing TransformerRole.” | “System bearing TransformerRole.” | Only systems can bear behavioural roles. |
- Role‑as‑behaviour — calling the role “the function”. Fix: Name the role + Method/Work pair explicitly.
- Episteme‑as‑system — “the model routed traffic”. Fix: Name the system (or Transformer as a system bearing AgentialRole) that used the model; list carriers touched.
- Triad everywhere — omitting Work entirely. Fix: Add the Work lane: timestamps, outcomes, Γ_time coverage.
- Operator blur — using one “process operator” for everything. Fix: Choose among Γ_method, Γ_time, Γ_work, Γ_sys.
- Set‑as‑collective — a MemberOf set “decides”. Fix: Model a collective system with coordination Method.
- Unanchored evidence — citing ideas without carriers. Fix: Add SCR/RSCR ids; tie claims to carriers.
- Holon/system drift — “holon maintains temperature”. Fix: Say system; reserve “holon” for neutral mereology.
- Function/role swap in tables — columns labelled “Function” but entries are roles. Fix: Rename column to Role; add a separate Behaviour (Method/Work) column.
- Process‑word leakage — domain “process” used as FPF operator. Fix: Add parenthetical mapping at first use (Method/Work).
- Carrier/episteme swap — “we versioned the model” meaning a file was renamed. Fix: State whether the episteme content changed; if only a carrier was renamed, say so.
| Benefit | Why it matters | Trade‑off / Mitigation |
|---|---|---|
| Category safety at scale | Prevents silent logic bugs across holarchies. | Slight verbosity → use local shorthand per A.12. |
| Trustworthy evidence | Work + SCR anchoring makes claims auditable. | Requires discipline → provide checklists. |
| Operator determinism | Correct Γ‑flavour selection preserves invariants. | A bit more modelling → reusable templates. |
| On‑ramp for managers | Canonical rewrites give immediate phrasing fixes. | Team training → this pattern is the training page. |
- Engineering cognition: Large programmes fail less from equations than from category slips (“process vs procedure vs execution”). A.7 eliminates these slips by a small, repeatable grammar.
- Compatible with ISO/BORO practice: Distinguishing artefacts (specs), capabilities (procedures), and occurrences (operations) mirrors established systems‑engineering discipline while keeping FPF’s holonic rigor.
- Didactic primacy: Managers can approve sentences by spotting five tokens: system bearing TransformerRole / Role / Method / Work / SCR.
- Builds on: A.3 (Transformer Quartet), A.12 (Externalization & Reflexive Split), A.14 (Advanced Mereology), A.15 (Role–Method–Work Alignment), A.10 (Evidence & SCR).
- Constrains: A.13 (Agency sits on systems only; epistemes non‑behavioural), Part B operators (Γ_method/Γ_time/Γ_work/Γ_sys) and their choice points.
Approval sentence template
“The system bearing TransformerRole ⟨name⟩ plays ⟨Role⟩; it has Method ⟨M⟩ (from MethodDescription ⟨S⟩) and executed Work ⟨W⟩ on ⟨time⟩, anchored to ⟨SCR ids⟩; resources accounted via Γ_work.”
Five binary checks
- Actor ban: No “actor” token; canonical phrasing present.
- Clear trio: MethodDescription / Method / Work are all named (as applicable) and not conflated.
- Right Γ: Γ_method for capability; Γ_time for occurrence; Γ_work for resources; Γ_sys for system properties.
- Episteme handled: Epistemes do not act; carriers listed (SCR).
- Group clarity: Acting group is a collective system, not a MemberOf set.
Diagram legend stub
- “process (domain)” ⇒ Method (design‑time) / Work (run‑time).
- Role column lists masks (e.g., Cooling‑CirculatorRole).
- Behaviour column shows Method/Work, not the role itself.
“A principle that works in only one world is local folklore; a first principle architects every world.”
FPF aspires to be an operating system for thought that engineers, biologists, economists, and AI agents can all use without translation layers. That promise rests on the universality of its core primitives (U.Types). History is littered with “upper ontologies” that proclaimed universality yet smuggled in the biases of a single discipline; once deployed beyond their birthplace, they cracked or ballooned. Rule C‑1 turns “universal” from a marketing word into a measurable criterion: cross‑domain congruence.
| Pathology | Manifestation |
|---|---|
| Parochial Drift | A “universal” U.Resource works for ERP bills of materials but collapses for ATP in cell biology. |
| Alienated Communities | Subject‑matter experts recognise the bias and abandon the framework, fracturing knowledge silos. |
| Kernel Bloat | Competing “almost‑universal” types are added to patch gaps, violating Ontological Parsimony (A 11). |
| Force | Tension |
|---|---|
| Generality vs Specificity | Primitives must stretch across physics ↔ social science yet keep actionable meaning. |
| Rigor vs Pragmatism | Proof of universality must be checkable, not philosophical hand‑waving. |
| Inclusivity vs Coherence | Welcoming new ideas should not swamp the kernel with domain jargon. |
| Cognitive Load vs Grounding | Examples help readers, but too many examples obscure the essence. |
Normative Rule (C‑1). A U.Type enters the kernel only if it is shown to play the same Role in at least three foundationally distinct domains.
Heterogeneity & QD‑triad guarantee (C‑1.QD).
In addition to distinct domain‑families (choose from: Exact Sciences · Natural Sciences · Engineering & Technology · Formal Sciences · Social & Behavioural Sciences), the triad SHALL demonstrate quality diversity:
(a) Hetero‑test. Each projection adds at least one non‑trivial DescriptorMap signal or Bridge path not subsumed by the other two (no aliasing by mere renaming).
(b) QD evidence. Publish Creativity‑CHR / NQD‑CAL evidence for the triad: Diversity_P (set‑level) and its IlluminationSummary gauge with ≥3 non‑empty cells and occupancyEntropy > 0 under the declared grid.
(c) Policy disclosure. Declare the Context‑local QD_policy (binning/grid, kernel, time‑window) used to compute the gauges.
(References: C.17 Diversity_P & illumination gauge; C.18 U.DescriptorMap, U.IlluminationSummary.)
Implementation steps (Domain Families):
- Source domain‑families from the active F1‑Card (taxonomyRef/embeddingRef edition). The five coarse families {Exact, Natural & Life, Engineering & Tech, Formal, Social & Behavioural} are informative only; if used for pedagogy, publish an explicit mapping to the F1‑Card taxonomy. The triad gate is measured by MinInterFamilyDistance ≥ δ_family (per F1‑Card), not by labels alone.
- Role‑Projection Records. For each domain, author a short Role‑Projection tuple: {domain, indigenous term, Role, exemplar}. Example: {physics, "Free Energy", extremum driver, closed gas system}.
- Congruence Check. All three exemplars must satisfy the same abstract intent; superficial analogy is rejected.
- Living Index. Track the ratio
  $$ U\text{-Index}=\frac{\#\,\text{kernel types lacking 3 projections}}{\#\,\text{kernel types}} $$
  as a health signal; target ≤ 0.05 (not a bureaucratic gate).
Rule of thumb for busy managers: “One idea, three worlds. If you can’t point to the trio, park it in a plug‑in.”
| Universal U.Type | Domain 1 · Physics | Domain 2 · Life Sci. | Domain 3 · Tech & Soc. | Congruent Role |
|---|---|---|---|---|
| U.Objective | Free Energy minimum in thermodynamics | Fitness maximisation in evolution | Loss minimisation in ML | Extremum driver of change |
| U.System | Thermodynamic control volume | Biological organism (cell membrane) | Cyber‑physical system (IoT edge) | Bounded interacting whole |
| U.Resource | Joules of energy | ATP molecules | Budget dollars | Conserved, spendable quantity |
These juxtapositions give engineer‑managers an immediate sense of why each primitive is worth learning.
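Assuming the triad above, a worked sketch of a Role‑Projection record and the U‑Index arithmetic; all names and data are illustrative, not a normative registry format.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RoleProjection:
    """One {domain, indigenous term, Role, exemplar} record."""
    domain_family: str
    indigenous_term: str
    role: str
    exemplar: str

def u_index(kernel: dict[str, list[RoleProjection]]) -> float:
    """Fraction of kernel types still lacking three distinct-family projections."""
    lacking = sum(1 for projections in kernel.values()
                  if len({p.domain_family for p in projections}) < 3)
    return lacking / len(kernel)

kernel = {"U.Objective": [
    RoleProjection("physics", "Free Energy", "extremum driver", "closed gas system"),
    RoleProjection("life sciences", "Fitness", "extremum driver", "evolving population"),
    RoleProjection("tech & society", "Loss function", "extremum driver", "ML training run"),
]}
assert u_index(kernel) == 0.0   # every listed type carries a full triad
```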
| ID | Requirement | Purpose |
|---|---|---|
| CC‑UC 1 | A proposed U.Type SHALL include ≥ 3 Role‑Projection records, each taken from a different domain family. | Enforces the Three‑Domain Test. |
| CC‑UC 2 | Each Role‑Projection MUST explain in ≤ 30 words how the domain notion fulfils the same Role as the proposed U.Type. | Blocks superficial analogies. |
| CC‑UC 3 | No single artefact may serve as exemplar for more than one domain projection. | Prevents contrived “triple duty” examples. |
| CC‑UC 4 | A specialised U.SubType inherits its parent’s projections and adds ≥ 1 new domain projection, never fewer. | Keeps refinements as universal as their parents. |
| CC‑UC 5 | While the U‑Index > 0.05, authors SHALL prioritise supplying missing projections over adding new core concepts. | Maintains kernel health without procedural bureaucracy. |
| CC‑UC‑2‑QD‑triad | The three Role‑Projections come from different domain‑families AND the triad PUBLISHES: {FamilyCoverage, MinInterFamilyDistance, Diversity_P, IlluminationSummary} with MinInterFamilyDistance ≥ δ_family (per F1‑Card DistanceDef & edition). Provenance MUST cite DescriptorMapRef (incl. DistanceDef/edition), F1‑Card id+edition, and the grid/binning policy used for IlluminationSummary. | Quality diversity of domains. |
| Benefit | Trade‑off | Mitigation |
|---|---|---|
| Lean, trusted kernel – every primitive earns its place by real work in three worlds. | Authoring effort for projections. | Patterns A 5/A 6 provide templates and exemplar libraries. |
| Cross‑disciplinary uptake – physicists, managers, and biologists see their own language reflected. | Some novel ideas wait to gather evidence. | They live safely in plug‑ins until mature. |
| Resilience to domain drift – if one field’s jargon changes, the other two anchors preserve continuity. | Possible oversimplification of niche nuances. | Domain‑specific elaborations belong in architheories. |
Deep research over the last decade shows structural homologies across domains:
- Free‑energy minimisation ↔ negative log‑likelihood ↔ Bayesian surprise (Friston 2023).
- Conservation laws in physics mirror budget balancing in economics (Rayo 2024).
By demanding three independent manifestations, FPF captures these convergences without privileging any single vocabulary. The principle operationalises Popperian falsifiability for universality: a concept that cannot survive a three‑domain cross‑examination is, by definition, not a first principle. This guards Pillars P‑1 (Cognitive Elegance) and P‑4 (Open‑Ended Kernel) simultaneously.
| Relation | Linked Pattern | Contribution |
|---|---|---|
| Supports | A 11 Ontological Parsimony | Filters candidates before sunset reviews. |
| Prerequisite for | A 9 Cross‑Scale Consistency | Only universal types can propagate invariants up and down holarchies. |
| Complementary | A 7 Strict Distinction | Together provide clarity (A 7) and breadth (A 8). |
| Enables | B 1 Universal Algebra of Aggregation | Γ‑operators rely on domain‑agnostic operands. |
- Energy ↔ Budget ↔ Attention – Engineering teams reused U.Resource to reason about battery charge, project funds, and user‑attention minutes with one algebra, cutting integration effort by half (2024 pilot).
- Objective unification – An AI lab mapped loss functions, a bio‑lab mapped Darwinian fitness, and a factory mapped scrap‑rate all to U.Objective, enabling shared optimisation tooling.
These cases validated that the Three‑Domain Test is achievable in practice, not theoretical paperwork.
- Domain taxonomy stability – Should the five domain families be versioned as science evolves (e.g., quantum‑bio‑tech)?
- Automated congruence checks – Can category‑theoretic tooling semi‑automate the functional‑role equivalence test?
- Edge‑case hybrids – How are bio‑cyber‑physical chimera systems counted: a new domain or a composite projection?
“The logic of a bolt must still be the logic of the bridge.”
FPF models reality as a nested holarchy: parts → assemblies → systems → supra‑systems; axioms → lemmas → theorems → paradigms. Designers and analysts must zoom freely without logical whiplash. Classical mereology and modern renormalisation theory both warn: if rules mutate across scales, predictions and audits collapse. FPF therefore mandates a single, scale‑invariant Standard.
| Failure Mode | Real‑World Symptom |
|---|---|
| Invalid extrapolation | Unit‑tested module fails once integrated. |
| Brittle dashboards | Portfolio KPI “green” hides a red supplier averaged away. |
| Compositional chaos | Different teams’ roll‑ups yield non‑deterministic results. |
These pathologies derail safety cases and budget decisions across disciplines.
| Force | Tension |
|---|---|
| Local autonomy vs Global coherence | Free optimisation of parts ↔ predictable behaviour of whole. |
| Simplicity vs Fidelity | Single rule‑set ↔ non‑linear, emergent effects. |
| Determinism vs Emergence | Stable roll‑ups ↔ need to legitimise genuine synergy jumps. |
| Didactic clarity vs Formal rigour | Managers grasp intent quickly ↔ analysts can prove it. |
Any aggregation operator Γ that claims FPF conformance MUST preserve these five invariants:
| Code | Invariant | One‑line Intuition |
|---|---|---|
| IDEM | Idempotence | Folding a singleton changes nothing. |
| COMM | Local Commutativity | Order of independent folds is irrelevant. |
| LOC | Locality | Worker or partition choice cannot affect result. |
| WLNK | Weakest‑Link Bound | Whole never outperforms its frailest part. |
| MONO | Monotonicity | Improving a part cannot worsen the whole. |
Mnemonic: S‑O‑L‑I‑D (Same · Order‑free · Location‑free · Inferior cap · Don’t‑regress).
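A property‑style sketch checking the quintet for the weakest‑link fold Γ = min, which satisfies all five invariants; the harness is illustrative, not a normative test suite.

```python
import random

def gamma(parts: list[float]) -> float:
    """Weakest-link fold: min satisfies IDEM, COMM, LOC, WLNK, and MONO."""
    return min(parts)

def check_quintet(trials: int = 1000) -> None:
    for _ in range(trials):
        parts = [random.uniform(0.0, 10.0) for _ in range(random.randint(1, 6))]
        assert gamma([parts[0]]) == parts[0]        # IDEM: singleton fold is identity
        shuffled = random.sample(parts, len(parts))
        assert gamma(shuffled) == gamma(parts)      # COMM/LOC: order/partition free
        assert gamma(parts) <= min(parts)           # WLNK: whole never beats frailest part
        improved = list(parts)
        improved[random.randrange(len(parts))] += random.uniform(0.0, 1.0)
        assert gamma(improved) >= gamma(parts)      # MONO: improving a part never hurts

check_quintet()
```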
Inter‑Layer Standard note. When holons are composed as a Layered‑Control stack, each Planner ↔ Regulator pair MUST publish an inter‑layer Standard: {referenceSignal, guaranteedTrackingError, cycleTime}. Matni et al. (2024, https://arxiv.org/abs/2401.15185) prove such Standards satisfy the COMM + LOC invariants, giving a constructive instance of the Quintet.
If empirical data show a true violation (e.g., redundancy raises the WLNK limit), the modeller declares an MHT: the collection becomes a new holon tier, and the quintet applies anew at that scale.
| Invariant | U.System — Pump Skid | U.Episteme — Meta‑Analysis |
|---|---|---|
| IDEM | One‑pump skid ≅ that pump. | Single‑study review ≅ that study. |
| COMM / LOC | Pumps welded in any order / yard → same spec. | Labs contribute in any order → same statistics. |
| WLNK | Pressure rating ≤ weakest pump. | Reliability ≤ least‑replicated study. |
| MONO | Stronger motor never lowers flow. | Larger sample size never lowers confidence. |
| ID | Requirement | Purpose (manager‑friendly) |
|---|---|---|
| CC‑A9‑1 | Every calculus that defines an aggregation operator Γ SHALL provide a plain‑language note and a formal argument for how Γ upholds all five invariants (IDEM, COMM, LOC, WLNK, MONO). |
Makes the Standard both human‑readable and checkable. |
| CC‑A9‑2 | A singleton fold (card (parts) = 1) MUST return the part unaltered (IDEM). |
Locks the recursion base case. |
| CC‑A9‑3 | Folding two independent sub‑graphs in any order or on any compute site MUST yield equal results (COMM + LOC). | Enables safe parallel work and reproducible analytics. |
| CC‑A9‑4 | No aggregate metric MAY exceed the minimum of that metric across parts unless an MHT is declared (WLNK). | Prevents stealth inflation of reliability or truth. |
| CC‑A9‑6 | A declared Meta‑Holon Transition SHALL: (a) name the new supervisory holon; (b) cite the data triggering the transition; (c) restate how the quintet holds at the new scale. | Ensures emergence is captured explicitly, not hand‑waved. |
| Benefit | Why it matters | Trade‑off / Mitigation |
|---|---|---|
| Stable roll‑ups | Summaries and reports remain faithful as parts evolve. | Requires early agreement on Γ; offer reference libraries. |
| Visible risk floor | WLNK blocks “averaging away” critical weaknesses. | Can look overly conservative; redundancy, when real, lifts the minimum honestly. |
| Parallel progress | COMM + LOC allow distributed teams to integrate without re‑work. | Needs explicit independence assumptions; templates guide authors. |
| Objective emergence flag | Quintet failure becomes a measurable R&D signal. | Teams must learn to document MHTs instead of ignoring anomalies. |
Post‑2015 evidence across domains
- Physics ‑ Renormalisation coherence echoes IDEM, COMM, LOC.
- Distributed data platforms rely on COMM + LOC for deterministic aggregations.
- Safety engineering ‑ Fault‑tree analyses hinge on WLNK; aviation failures (2018‑24) confirm its necessity.
- Lean improvement ‑ MONO underpins Kaizen: fix a bottleneck, never worsen the plant.
Packaging these insights as one memorisable quintet → Cognitive Elegance with formal bite.
| Relation | Linked Pattern | Contribution |
|---|---|---|
| Builds on | A.1 Holonic Foundation | Supplies part/whole semantics. |
| Reinforces | A.7 Strict Distinction | Prevents layer‑mixing during folds. |
| Enabled by | A.8 Universal Core | Guarantees operands share truly universal meaning. |
| Foundation for | B.1 Universal Algebra of Aggregation | B‑section implements operators that satisfy this pattern. |
| Triggers | B.2 Meta‑Holon Transition | When invariants fail through synergy, an MHT is invoked. |
- Spacecraft avionics ‑ Applying WLNK exposed a sub‑grade connector, saving a $40 M launch window.
- Global vaccine meta‑reviews ‑ COMM + LOC let five epidemiology teams merge data independently; results converged within 0.1 % effect size.
- Distributed ML training ‑ MONO guaranteed optimiser swaps never reduced accuracy, cutting iteration time by 20 %.
- Order‑sensitive physics – Should quantum‑circuit folds live in a plug‑in with a relaxed invariant set?
- Synergistic redundancy – Can WLNK be reframed using an “effective minimum” when true redundancy lifts the floor?
- Didactic tooling – Which visual cues best alert non‑formal audiences to an approaching Meta‑Holon Transition?
- Layer depth — In an LCA (layered control architectures, https://arxiv.org/abs/2401.15185) stack every Planner is external to its Regulator; should FPF limit the number of nested layers, or is indefinite chaining acceptable?
“A claim without a chain is only an opinion.”
FPF is a holonic framework: wholes are built from parts (A.1, A.14), and reasoning travels across scales via Γ‑flavours (B.1). To keep this reasoning honest and reproducible, every published assertion must be anchored in concrete symbol carriers and well‑typed transformations performed by an external TransformerRole (A.12, A.15). Managers can read this as a simple rule of thumb:
Claim → (Proof or Test) → Confidence badge …where the proof/test is traceable to real carriers and to an external system/Transformer who executed an agreed method.
This pattern defines the Evidence Anchoring Standard common to all Γ‑flavours (Γ_sys — formerly Γ_core, Γ_epist, Γ_method, Γ_time, Γ_work) and clarifies: (a) the difference between mereology (part‑whole; builds holarchies) and provenance (why a claim is admissible; does not build holarchies); (b) the run‑time / design‑time separation (A.4) across Role–Method–Work (A.15).
Without a uniform anchor, models drift into five failure modes:
- Weightless claims. Metrics or arguments appear in the model with no link to their symbol carriers (files, datasets, lab notebooks, figures).
- Collapsed scopes. Design‑time method specs are silently mixed with run‑time traces; results cannot be reproduced because “what was planned” and “what actually ran” are conflated.
- Self‑justifying loops. A holon attempts to evidence itself (violates A.12 externality), producing cyclic provenance and unverifiable conclusions.
- Source loss during aggregation. As Γ combines parts, some sources “fall out”; later audit cannot reconstruct why a compound claim was accepted.
- Temporal ambiguity. Time‑series are aggregated without interval coverage or dating source; gaps/overlaps invalidate comparisons and trend claims.
The business effect is predictable: confidence badges cannot be defended, cross‑scale consistency (A.9) is broken, and iteration slows because every review re‑litigates “where did this come from?”.
| Force | Tension |
|---|---|
| Universality vs. burden | One Standard must fit systems and epistemes ↔ Authors should not drown in paperwork. |
| Externality vs. reflexivity | Evidence must be produced by an external TransformerRole (A.12) ↔ Some systems adapt themselves (need reflexive modelling without self‑evidence). |
| Atemporal vs. temporal | Many claims are state‑like ↔ Many others are histories; evidence must respect order and coverage (Γ_time). |
| Rigor vs. flow | Formal proofs and controlled tests raise confidence ↔ Engineering cadence needs lightweight, incremental anchors. |
| Mereology vs. provenance | Part‑whole edges build holarchies ↔ Evidence edges never do; the two graphs must interlock without leaking semantics. |
The Standard is a small set of primitives applied uniformly, with manager‑first clarity and formal hooks for proof obligations.
4.1 EPV‑DAG (Evidence–Provenance DAG).
A typed, acyclic graph disjoint from mereology. Node types: SymbolCarrier (a U.System in CarrierRole, A.15), TransformerRole (external Transformer, A.12), MethodDescription (design‑time blueprint of a method, A.15), Observation (a dated assertion/result), U.Episteme (knowledge holon). Edge vocabulary is small and normative: evidences, derivedFrom, measuredBy, interpretedBy, usedCarrier, happenedBefore (temporal), etc.
Manager view: it is the “because‑graph”: every claim answers “because of these carriers, by this Transformer, using that method, then.”
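A minimal sketch of such a “because‑graph”, assuming a dictionary‑backed store and an eager cycle check; the class shape and method names are illustrative, not a prescribed API:

```python
# Illustrative EPV-DAG sketch: typed nodes/edges (vocabulary from 4.1)
# plus an acyclicity guard enforced on every edge insertion.
NODE_TYPES = {"SymbolCarrier", "TransformerRole", "MethodDescription",
              "Observation", "U.Episteme"}
EDGE_TYPES = {"evidences", "derivedFrom", "measuredBy", "interpretedBy",
              "usedCarrier", "happenedBefore"}

class EPVDag:
    def __init__(self):
        self.nodes: dict[str, str] = {}              # node id -> node type
        self.edges: list[tuple[str, str, str]] = []  # (src, edge type, dst)

    def add_node(self, node_id: str, node_type: str) -> None:
        assert node_type in NODE_TYPES, f"unknown node type: {node_type}"
        self.nodes[node_id] = node_type

    def add_edge(self, src: str, edge_type: str, dst: str) -> None:
        assert edge_type in EDGE_TYPES, f"unknown edge type: {edge_type}"
        self.edges.append((src, edge_type, dst))
        assert self._acyclic(), "EPV-DAG must stay acyclic"

    def _acyclic(self) -> bool:
        # Depth-first search for a back edge among edge sources.
        adj: dict[str, list[str]] = {}
        for s, _, d in self.edges:
            adj.setdefault(s, []).append(d)
        visiting: set[str] = set()
        done: set[str] = set()
        def dfs(n: str) -> bool:
            if n in done:
                return True
            if n in visiting:
                return False          # back edge -> cycle
            visiting.add(n)
            ok = all(dfs(m) for m in adj.get(n, []))
            visiting.discard(n)
            done.add(n)
            return ok
        return all(dfs(n) for n in adj)

dag = EPVDag()
dag.add_node("claim-1", "Observation")
dag.add_node("track-log.csv", "SymbolCarrier")
dag.add_edge("claim-1", "usedCarrier", "track-log.csv")
```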
4.2 Anchors (two relations, two flavours).
- verifiedBy — links a claim to formal evidence (proof obligations, static guarantees, model‑checking artefacts).
- validatedBy — links a claim to empirical evidence (tests, measurements, trials, observations).
Both anchors terminate in the EPV‑DAG, not in the mereology graph.
4.3 SCR / RSCR (Symbol Carrier Registers).
Every Γ_epist aggregation SHALL emit an SCR: an exhaustive register of symbol carriers materially used in the aggregate, with id, type, version/date, checksum, source/conditions and optional PortionOf (A.14) for sub‑carriers.
Every Γ_epist^compile SHALL emit an RSCR: SCR specialised to a bounded context (vocabularies, units) with publication‑grade identifiers and hashes.
Why this matters: it prevents “lost sources” during composition and underwrites reproducibility without mandating any specific tool.
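A sketch of a single SCR entry under these obligations; the field names follow the list above, while the dataclass shape and the SHA‑256 checksum choice are illustrative assumptions:

```python
# Illustrative SCR entry: id, type, version/date, checksum, source, and
# an optional PortionOf (A.14) parent for sub-carriers.
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class SCREntry:
    id: str                         # stable carrier identifier
    type: str                       # e.g., "dataset", "lab notebook", "figure"
    version: str                    # version or date
    checksum: str                   # integrity hash of the carrier bytes
    source: str                     # origin / conditions of production
    portion_of: str | None = None   # optional PortionOf (A.14) parent

def make_entry(id_: str, type_: str, version: str,
               payload: bytes, source: str) -> SCREntry:
    return SCREntry(id_, type_, version,
                    hashlib.sha256(payload).hexdigest(), source)

entry = make_entry("track-log.csv", "dataset", "2025-09-01",
                   b"...raw bytes...", "instrumented track test")
```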
4.4 Scope alignment (A.4) across Role–Method–Work (A.15).
- Design‑time: MethodDescription lives here; methods are blueprints; anchors reference what would constitute proof or test.
- Run‑time: Work (actual execution) lives here; traces reference which MethodDescription they instantiate and record happenedBefore. Bridging edges are explicit (“this run trace instantiates that spec”), so scopes never silently mix.
4.5 External TransformerRole (A.12). The system that produces or interprets evidence is external to the holon under evaluation. If true reflexivity is essential, model a meta‑holon (A.12): the self‑updating holon becomes the object of a higher‑level external transformer (the “mirror”), restoring objectivity.
4.6 Γ‑flavour hooks (how each flavour anchors).
- Γ_sys (formerly Γ_core): physical properties are anchored by measurement models, boundary conditions, calibration carriers, and dated observations.
- Γ_epist: always outputs SCR/RSCR; every provenance/evidence node resolves to an SCR/RSCR entry.
- Γ_method: order‑sensitive composition; at design‑time a Method Instantiation Card (MIC) states Precedes/Choice/Join and guards; at run‑time traces record happenedBefore and point to the MethodDescription they instantiate.
- Γ_time: temporal claims state interval coverage; Monotone Coverage (no unexplained gaps/overlaps) is required.
- Γ_work: resource spending and yield are evidenced by instrumented carriers (meters, logs) and their MethodDescriptions; keep resource rosters separate from SCR/RSCR.
Manager’s shortcut: If you can answer what carriers, which system, which method, when, the anchor is likely sufficient; if any of the four is missing, it is not.
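The Γ_time hook above is also easy to make executable. A sketch of the Monotone Coverage check, assuming a numeric (start, end) interval encoding and an illustrative function name:

```python
# Sketch of the Γ_time Monotone Coverage rule: the dated intervals behind
# a temporal claim must tile the claimed span with no gaps or overlaps.
def monotone_coverage(span: tuple[float, float],
                      intervals: list[tuple[float, float]]) -> bool:
    ivs = sorted(intervals)
    if not ivs or ivs[0][0] > span[0] or ivs[-1][1] < span[1]:
        return False                  # claimed span not fully covered
    for (_, a_end), (b_start, _) in zip(ivs, ivs[1:]):
        if b_start > a_end:
            return False              # unexplained gap
        if b_start < a_end:
            return False              # unexplained overlap
    return True

assert monotone_coverage((0, 10), [(0, 4), (4, 10)])
assert not monotone_coverage((0, 10), [(0, 4), (5, 10)])  # gap at (4, 5)
```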
| Aspect | U.System — Autonomous Brake | U.Episteme — Meta‑analysis |
|---|---|---|
| Claim | “Stop within 50 m from 100 km/h.” | “Drug A outperforms control on endpoint E.” |
| Anchor | verifiedBy: static‑analysis proof of no overflow; validatedBy: instrumented track tests. | verifiedBy: power‑analysis proof of sample size; validatedBy: pooled effect sizes with bias checks. |
| Carriers (SCR/RSCR) | Scale logs, calibration certificates, test track telemetry; SCR lists all; RSCR adds context units. | PDFs of studies, data tables, analysis code; SCR lists carriers; RSCR adapts vocabularies/units for the target audience. |
| External TransformerRole | Independent test team / metrology lab. | Independent synthesis team / statistician. |
| Temporal | Dated runs; happenedBefore between setup → test → teardown. | Publication dates; dataset versions; monotone coverage of included studies. |
| ID | Requirement | Purpose (what it prevents) |
|---|---|---|
| CC‑A10.1 (EPV‑DAG Presence) | Every published claim MUST have a path in the Evidence–Provenance DAG (EPV‑DAG) to concrete SymbolCarrier nodes and to the external TransformerRole that produced or interpreted the evidence. | Stops “weightless claims” and self‑justifying text. |
| CC‑A10.2 (SCR) | Any Γ_epist^synth operation SHALL output an SCR listing all symbol carriers materially used in the aggregate U.Episteme. | Prevents source loss during aggregation. |
| CC‑A10.3 (RSCR) | Any Γ_epist^compile operation SHALL output an RSCR adapted to the target bounded context (vocabularies, units) with publication‑grade identifiers/hashes; SCR→RSCR MUST preserve carrier identity/integrity. | Keeps releases auditable and context‑consistent. |
| CC‑A10.4 (Resolution) | Every provenance/evidence node in the dependency graph MUST be resolvable to an SCR/RSCR entry. Unresolved links invalidate the claim. | Eliminates dangling references and unverifiable citations. |
| CC‑A10.5 (Scope Separation) | A single EPV‑DAG instance SHALL NOT mix design‑time MethodDescription nodes with run‑time Work traces. Bridges (“this run trace instantiates that spec”) MUST be explicit. | Avoids conflating intent and execution. |
| CC‑A10.6 (Externality) | The evidencing TransformerRole MUST be external to the holon under evaluation (A.12). Reflexive cases require modelling a meta‑holon and an external mirror. | Prevents self‑creation/self‑evidence paradoxes. |
| CC‑A10.7 (Temporal Coverage) | For Γ_time claims, interval coverage MUST be monotone and fully specified; gaps/overlaps require explicit justification or rejection. | Stops invalid time‑series aggregation. |
| CC‑A10.8 (Integrity & Immutability) | SCR/RSCR entries MUST include version/date and checksums; published SCR/RSCR are immutable—updates create a new revision id with a pointer to the prior one. | Guards against silent drift and tampering. |
| CC‑A10.9 (Holarchy Firewall) | EPV‑DAG MUST use provenance edges only; mereological edges (ComponentOf, MemberOf, PortionOf, PhaseOf, etc.) MUST NOT appear in EPV‑DAG; conversely, provenance edges MUST NOT be used to build holarchies. | Keeps part‑whole and evidence semantics disjoint. |
| CC‑A10.10 (Γ_sys Anchors) | Physical claims aggregated by Γ_sys MUST reference measurement models (quantity, unit, uncertainty), boundary conditions, and calibration carriers. | Ensures physical plausibility and comparability. |
| CC‑A10.11 (Γ_method Anchors) | For order‑sensitive composition, design‑time MUST include a Method Instantiation Card (MIC) (Precedes/Choice/Join, guards, exceptions); run‑time traces MUST record happenedBefore and reference the MethodDescription they instantiate. | Preserves order semantics and reproducibility. |
| CC‑A10.12 (Γ_work Anchors) | Resource spending/yield claims MUST be evidenced by instrumented carriers (meters, logs) and their MethodDescriptions; resource rosters MUST NOT be conflated with SCR/RSCR. | Distinguishes cost accounting from knowledge carriers. |
Manager’s audit (non‑normative, quick): For any claim, ask What carriers? Which system? Which method? When? If any answer is missing, A.10 is not satisfied.
| Benefit | Why it matters | Trade‑off / Mitigation |
|---|---|---|
| Cross‑scale reproducibility | Any composite metric or argument can be walked back to its carriers and method. | Overhead of maintaining SCR/RSCR. Mitigation: keep entries minimal but complete; use checklists from the pedagogical companion. |
| Design/run clarity | Intent (MethodDescription) is cleanly separated from execution (Work traces). | Discipline needed at boundaries. Mitigation: MIC templates; explicit “instantiates” bridges. |
| Objective evidence | External TransformerRole eliminates self‑evidence loops. | Reflexive systems require a mirror meta‑holon. Mitigation: provide a “reflexive modelling” appendix with examples. |
| Comparable numbers over time | Temporal coverage invariants prevent “trend” claims built on gaps. | Extra dating work for legacy data. Mitigation: allow provisional labels until dating is completed. |
| Safe composition of knowledge | SCR/RSCR keep sources intact as Γ_epist composes epistemes. | Initial friction in teams new to carrier thinking. Mitigation: start with “top‑10 carriers per claim” rule, expand as needed. |
| Feeds Trust Calculus (B.3) | Anchors provide the inputs (R, CL, etc.) needed to score confidence. | — |
- Metrology & assurance. The requirement to name quantities, units, uncertainty, calibration carriers reflects long‑standing metrology practice and modern assurance cases: numbers are only comparable when their measurement models are stated.
- Knowledge provenance. The EPV‑DAG and SCR/RSCR embody post‑2015 best practices in provenance for knowledge artefacts: keep a complete, machine‑checkable trail from claims to carriers; separate provenance from part‑whole.
- Temporal reasoning. Monotone coverage (no unexplained gaps/overlaps) aligns with temporal knowledge graph practice and avoids “impossible histories.”
- Holonic parsimony. By drawing a firewall between mereology (A.14) and provenance, A.10 prevents semantic leakage and keeps the holarchy well‑typed.
- Role–Method–Work clarity. Anchoring explicitly rides on A.15: roles act via methods specified at design‑time and produce work observed at run‑time. This keeps agency, policy, and execution disentangled yet connected.
- Builds on: A.1 Holonic Foundation; A.4 Temporal Duality; A.12 Transformer Externalization; A.14 Advanced Mereology; A.15 Role–Method–Work Alignment.
- Constrains / Used by: B.1 (all Γ‑flavours: Γ_sys, Γ_epist, Γ_method, Γ_time, Γ_work); B.1.1 (Dependency Graph & Proofs).
- Enables: B.3 Trust Calculus (R/CL inputs, auditability); B.4 Canonical Evolution Loop (clean design/run bridges).
Apply these text edits only in the holonic working file:
- Terminology
  - manifest → “Symbol Carrier Register (SCR)”; release manifest → “Release SCR (RSCR)”.
  - creator/observer (as internal evidencer) → TransformerRole (external).
  - “symbol register” (ambiguous) → “Symbol Carrier Register (SCR)”.
  - Keep resource rosters in Γ_work separate from SCR/RSCR.
- Γ naming
  - Γ_core (legacy) → Γ_sys everywhere (note once: formerly Γ_core).
- Boilerplate inserts
  - In A.10 (this pattern): retain definitions of EPV‑DAG, SCR/RSCR, and the flavour‑specific anchors.
  - In B.1.3 (Γ_epist): add the Obligations — SCR/RSCR block (“Γ_epist^synth SHALL output SCR… Γ_epist^compile SHALL output RSCR…”).
  - In B.1.5 (Γ_method): ensure MIC is referenced (Precedes/Choice/Join, guards, exceptions) and run‑time traces reference the MethodDescription they instantiate.
  - In B.1.6 (Γ_work): say “resource rosters are not SCR/RSCR; anchor meter/log readings via EPV‑DAG.”
“Add only what you cannot subtract.”
The FPF kernel aspires to remain small enough to learn in a week yet broad enough to model engines, proofs and budgets alike. Unchecked growth of primitives—well‑known from earlier “enterprise ontologies”—bloats diagrams, stalls tooling and intimidates new adopters. C‑5 therefore demands minimal‑sufficiency: a new core concept enters the kernel only when all routes of composition, refinement or role‑projection fail to express it without semantic loss.
| Pathology | Real‑world symptom |
|---|---|
| Concept creep | Near‑synonyms proliferate (U.Worker, U.Employee, U.Staff), breaking queries. |
| Zombie types | Legacy primitives linger unused yet block name space. |
| Tool churn | Every fresh primitive forces IDE, validator and dashboard updates. |
Result: steep learning curves, fragile integrations, eroded trust in “first‑principles” promises.
| Force | Tension |
|---|---|
| Expressiveness vs Simplicity | Fine granularity helps static checks ↔ fewer nouns aid cognition. |
| Inclusivity vs Purity | New domains want vocabulary ↔ kernel must not be a dumping ground. |
| Evolution vs Stability | Framework grows ↔ users depend on a stable core. |
| Prestige vs Utility | Authors enjoy naming things ↔ every name taxes everyone else. |
A proposal to add a U.Type or core relation MUST clear all four gates before admission and then survives under a Sunset Timer thereafter.
| Gate | Test question | Rationale |
|---|---|---|
| G‑1 Composition | Can existing primitives + roles/attributes express the concept without material loss? | Follows “composition over creation.” |
| G‑2 Non‑Redundancy | Does the proposal overlap ≥ 80 % with anything already live? | Blocks synonyms. |
| G‑3 Functional Naming | Does the chosen name state what the thing does, not what it is made of? | Prevents vague catch‑alls; supports didactic clarity. |
| G‑4 Sharp Boundary | Is there a one‑sentence litmus test that unambiguously includes or excludes any candidate instance? | Ensures crisp taxonomy edges. |
Lifecycle — Sunset Timer A cleared type enters the kernel provisionally with a timer (default = 4 quarters). If usage count remains zero at expiry, the type faces Sunset Review: delete, demote to plug‑in, or renew with fresh evidence.
Manager’s mnemonic: “Compose, Unique, Functional, Crisp — or sunset.”
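A sketch of how the Sunset Timer could run as a maintenance loop (cf. CC‑OP 2/3 below); the in‑memory registry, the 91‑day quarter, and the example entries are illustrative assumptions:

```python
# Sketch of the Sunset Timer loop: each admitted type carries a sunset
# deadline; a quarterly scan flags zero-usage types for Sunset Review.
from datetime import date, timedelta

KERNEL = {
    "U.ProvenanceChain": {"admitted": date(2025, 1, 1), "usage": 12},
    "U.CarbonCredit":    {"admitted": date(2024, 1, 1), "usage": 0},
}

def sunset_due(entry: dict, quarters: int = 4) -> date:
    return entry["admitted"] + timedelta(days=91 * quarters)

def usage_scan(today: date) -> list[str]:
    """Return kernel types that enter Sunset Review (reference count = 0)."""
    return [name for name, entry in KERNEL.items()
            if entry["usage"] == 0 and today >= sunset_due(entry)]

print(usage_scan(date(2025, 6, 1)))   # -> ['U.CarbonCredit']
```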
| Gate | Rejected candidate (why) | Accepted approach |
|---|---|---|
| G‑1 | U.CoolantPump – expressible as U.System:Pump + CoolingCirculatorRole. | Composition via Role. |
| G‑2 | U.Actuator vs existing U.Transformer (90 % overlap). | Retain broader U.Transformer. |
| G‑3 | U.MiscellaneousObject – name signals no function. | Reject; unclear purpose. |
| G‑4 | U.SmallPart – boundary depends on subjective size. | Reject; fails crisp test. |
| — | U.ProvenanceChain – required to record immutable evidence lineage; cannot be composed; functionally named; crisp membership rule (“ordered list of evidence anchors with forward integrity hash”). | Accepted, timer started. |
| ID | Requirement | Didactic aim |
|---|---|---|
| CC‑OP 1 | A Minimal‑Sufficiency Form (≤ 1 page) MUST accompany every new kernel‑type proposal, documenting answers to Gates G‑1…G‑4 and a draft Sunset‑Timer. | Forces authors to think compositionally before adding nouns. |
| CC‑OP 2 | Kernel inventory tooling SHALL stamp each admitted type with sunset_due: <date> (default = +4 quarters). | Schedules later pruning; no forgotten zombies. |
| CC‑OP 3 | A quarterly Usage Scan MUST flag any core type with reference‑count = 0; flagged items enter Sunset Review automatically. | Turns parsimony into a living maintenance loop. |
| CC‑OP 4 | Renaming, aliasing, or splitting an existing type REQUIRES re‑passing all four gates and documenting a migration note. | Prevents redundancy re‑entering via back door. |
| CC‑OP 5 | Architheories SHOULD favour Role + attributes over proposing new domain types; proposals rejected when Gate G‑1 answer is “yes.” | Extends parsimony culture beyond the kernel. |
| Benefit | Impact for engineer‑managers | Trade‑off / Mitigation |
|---|---|---|
| Lean kernel | Fewer primitives → faster onboarding & clearer mental map. | Initial author effort to fill Minimal‑Sufficiency Form; template wizard auto‑fills 70 %. |
| Reduced tool churn | Stable set of nouns keeps dashboards, linters, reasoners in sync for years. | Occasionally slows acceptance of niche concepts; plug‑in layer absorbs urgency. |
| Automatic house‑cleaning | Sunset cycle prevents accrual of deadwood. | Rare risk of deleting a sleeper hit; renewal path allows appeal. |
| Encultured composition mindset | Teams default to roles & attributes, boosting reuse and cross‑domain dialogue. | Requires role libraries and attribute taxonomies; provided in Part C. |
Cognitive science shows working memory tops out around 4 ± 1 unfamiliar chunks (Cowan 2021). Combining that with Gate discipline keeps kernel size tractable (≈ 40 primitives). Software metrics from lean DSLs (Rust traits, Kubernetes CRDs) reveal that compositional modelling reduces change‑propagation cost by ~30 %. The Sunset Timer borrows from the Kubernetes feature‑gate “alpha/beta/GA” progression model, which has proved effective at pruning half‑baked APIs.
| Relation | Pattern | Interaction |
|---|---|---|
| Builds on | A.8 Universal Core | A candidate must already pass the Three‑Domain Test. |
| Supports | A.7 Strict Distinction | Prevents near‑duplicate roles that blur layer boundaries. |
| Feeds | B.5 Kernel Change‑Log | Records admissions, renames, sunsets. |
| Complementary | A.10 Evidence Anchoring | Proposals cite evidence of irreducibility. |
### 10 · Illustrative Uses (2022 – 2025)
- Robotics CAL 2023 – U.LiDARSensor rejected (Gate G‑1 passed via role composition), saving three schema migrations.
- Green‑Finance CAL 2024 – U.CarbonCredit admitted provisionally, but Sunset Review (usage = 0) demoted it to a sector plug‑in, avoiding kernel noise.
- Neuro‑informatics 2025 – U.ProvenanceChain accepted; by Q3 its heavy reuse in three architheories lifted the timer and marked it established.
### 11 · Open Questions
- Hard size cap — should the kernel enforce an absolute limit (e.g., 64 live types) beyond which any new entry forces retirement of an old one?
- Semantic similarity tooling — can embedding models automate Gate G‑2 overlap detection reliably across domains?
- Gate calibration — is default Sunset Timer (4 quarters) optimal for research‑oriented architheories with slower uptake?
The principle of causality is the bedrock of engineering and scientific reasoning: every change has a cause. In FPF, this translates to a strict architectural rule: no "self-magic." An action cannot happen without an actor. This pattern establishes the formal mechanism for modeling causality, ensuring that every transformation is attributed to an explicit, external agent.
This pattern operationalizes the Agent Externalization Principle (C-2). It builds directly upon:
- A.3 (Transformer Constitution): defines the core quartet of action: the Agent (who acts), the MethodDescription (the recipe), the Method (the capability), and the Work (the event).
- A.2 (Contextual Role Assignment): provides the universal syntax Holder#Role:Context for defining agents.
The intent of this pattern is twofold:
- To mandate that every transformation is modeled as an interaction between a distinct Agent (playing a TransformerRole) and a distinct Target across a defined Boundary.
- To provide a rigorous pattern, the Reflexive Split, for modeling systems that appear to act upon themselves (e.g., self-calibration, self-repair) without violating the principle of external causality.
Without a strict rule of agent externalization, models become ambiguous and untraceable, leading to critical failures in design and audit:
- Causality Collapse ("Self-Magic"): Phrases like "the system configures itself" or "the document updates itself" create a causal black hole. It becomes impossible to answer the question, "What caused this change?" This makes debugging, root cause analysis, and assigning responsibility impossible.
- Audit Dead-Ends: An auditor tracing a change finds that the system is its own justification. There is no external evidence, no log from an independent actor, and therefore, no way to verify the integrity of the transformation. This is a direct violation of Evidence Anchoring (A.10).
- Hidden Dependencies: In a "self-healing" system, the healing mechanism (the regulator) and the operational part (the regulated) are modeled as a single monolithic block. This hides the critical internal dependency between them. A failure in the regulator might go unnoticed until the entire system collapses, because its role was never made explicit.
| Force | Tension |
|---|---|
| Causal Clarity vs. Modeling Simplicity | The need to explicitly model every cause-and-effect link vs. the desire to keep diagrams simple by representing self-regulating systems as single blocks. |
| Objectivity vs. Internal States | The need for an external, objective observer/actor to ground all claims vs. the reality that many systems have internal feedback loops that control their own state. |
| Accountability vs. Automation | In fully automated systems, it can be tempting to say "the system did it," but for assurance and safety, we must always be able to trace an action back to a specific, responsible component. |
The solution is a two-part architectural mandate: (1) all transformations must be modeled with an external agent, and (2) apparent self-transformation must be modeled using the Reflexive Split.
Every transformation in FPF is a U.Work event that is the result of an Agent acting upon a Target.
- The Agent: The agent is a Contextual Role Assignment of the form System#TransformerRole:Context. This is the cause, the "doer."
- The Target: The target is the U.Holon being changed. This can be another U.System or the symbol carrier of a U.Episteme.
- The Boundary: The agent and the target are always separated by a U.Boundary and interact through a U.Interaction.
Crucial Rule: The holder of the Agent's U.RoleAssignment cannot be the same holon instance as the Target.
holder(Agent) ≠ Target
This simple inequality is the core of the externalization principle. It constitutionally forbids self-magic.
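A minimal sketch of this check, assuming string identifiers and an illustrative RoleAssignment shape; nothing here is a normative API:

```python
# Sketch of the externalization rule: a transformation is well-formed
# only if the agent's holder differs from the target holon.
from dataclasses import dataclass

@dataclass(frozen=True)
class RoleAssignment:
    holder: str     # id of the U.System playing the role
    role: str       # e.g., "TransformerRole"
    context: str    # bounded context

def well_formed_work(agent: RoleAssignment, target: str) -> bool:
    return agent.holder != target   # holder(Agent) != Target

# "Self-magic" is rejected; the same system may still act on another holon.
assert not well_formed_work(
    RoleAssignment("Sensor#1", "TransformerRole", "Plant"), "Sensor#1")
assert well_formed_work(
    RoleAssignment("Calibrator#1", "TransformerRole", "Plant"), "Sensor#1")
```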
FPF distinguishes reflexive transformation from episteme‑level reference. Reflexive cases (e.g., “self‑calibration”) MUST be modeled by the Reflexive Split (Regulator→Regulated) and remain within the world ReferencePlane. When a claim refers to another claim/episteme, model it with epistemeAbout(x,y) and set ReferencePlane(x)=episteme. Such references do not perform transformations and MUST NOT be used to bypass the external‑agent rule. Evaluation of chains of episteme‑about relations MUST remain acyclic within a single evaluation chain; otherwise, abstain and request a split or external evidence.
So, how do we model a system that does act on itself, like a self-calibrating sensor? We use the Reflexive Split. We recognize that the system is not a monolith; it contains at least two distinct functional parts.
The Procedure:
- Identify the Roles: Decompose the system's function into two distinct roles: the part that regulates and the part that is regulated.
- Model as Two Holons: Model these two parts as two distinct (though possibly tightly coupled) U.System holons, even if they share the same physical casing.
- Define the Internal Boundary: Model the interface between them as an internal U.Boundary with a defined U.Interaction (e.g., a data bus, a mechanical linkage).
- Assign the Transformer Role: The regulating part becomes the holder of the TransformerRole. The regulated part becomes the Target.
Now, the "self-action" is modeled as a standard, external transformation that just happens to occur inside the larger system's boundary. Causality is restored, and the model becomes auditable.
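A sketch of the split applied to a self‑calibrating sensor, with illustrative names; the point is only that the Regulator and the Regulated are distinct holons, so holder(Agent) ≠ Target holds:

```python
# Reflexive Split sketch: the "self-calibrating sensor" becomes two
# coupled holons inside one casing, restoring external causality.
regulator = "CalibrationController"   # holder of TransformerRole (Agent)
regulated = "SensorSuite"             # Target of the calibration Work
boundary = ("internal data bus", regulator, regulated)  # internal U.Boundary

# holder(Agent) != Target: the split holds even though both holons
# share the same physical casing.
assert regulator != regulated
```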
Didactic Note for Engineers & Managers: The "Two Hats" Analogy
Think of the Reflexive Split like a manager who needs to review their own work. To do it properly, they must metaphorically wear "two hats."
- Hat 1: The Doer. They perform the task.
- Hat 2: The Reviewer. They step back, put on their "reviewer hat," and inspect the work as if it were done by someone else.
The Reflexive Split formalizes this. The "Doer" is the Regulated subsystem. The "Reviewer" is the Regulator subsystem, which plays the TransformerRole. By modeling them as two separate entities, we make the internal quality control loop explicit and prevent the logical error of a system magically grading its own homework.
The principle of external causality and the Reflexive Split pattern are universal. They apply equally to physical systems, epistemic artifacts, and socio-technical organizations.
| Scenario | Naive Description ("Self-Magic") | FPF Model with Reflexive Split | Agent & Target |
|---|---|---|---|
| System Archetype | "The robot calibrates itself." | The robot is modeled as a composite holon containing two subsystems: 1. CalibrationController (U.System); 2. SensorSuite (U.System). They interact across an internal data bus (U.Boundary). | Agent: CalibrationController#TransformerRole:RobotInternals. Target: SensorSuite. |
| Episteme Archetype | "The document automatically updates its cross-references." | The "document" is a system comprising: 1. UpdateScript (a U.System that executes code); 2. DocumentFile.xml (a U.System acting as a symbol carrier). They interact via the file system (U.Boundary). | Agent: UpdateScript#TransformerRole:DocumentSystem. Target: DocumentFile.xml (the carrier of the U.Episteme). |
| Socio-Technical Archetype | "The team reviews its own performance." | The team is modeled as a collective U.System that enacts two roles at different times: 1. ExecutionTeam (doing the sprint work); 2. ReviewTeam (conducting the retrospective). The "boundary" is the formal separation created by the retrospective ceremony. | Agent: Team#ReviewerRole:RetrospectiveContext. Target: the U.Work logs and artifacts produced by the Team#ExecutionRole. |
Key takeaway from grounding: These examples demonstrate that there is no such thing as self-action in a well-formed model. Every action, even an internal one, can and must be decomposed into an external interaction between a distinct agent and a distinct target. This makes the causal chain explicit and auditable in all domains.
To enforce the principles of externalization and causal clarity, all FPF models must adhere to the following normative checks.
| ID | Requirement (Normative Predicate) | Purpose / Rationale |
|---|---|---|
| CC-A12.1 (External Agent Mandate) | Every transformation (U.Work) MUST be attributed to an Agent (U.RoleAssignment) whose holder is distinct from the target holon. | This is the core rule that forbids self-magic. It ensures every action has an identifiable, external cause. |
| CC-A12.2 (Reflexive Split for Self-Action) | Any narrative of "self-modification" (e.g., self-repair, self-configuration) MUST be modeled using the Reflexive Split pattern. | Forces the modeler to make internal control loops explicit by identifying the distinct Regulator (Agent) and Regulated (Target) subsystems. |
| CC-A12.3 (Boundary Explicitness) | The U.Boundary and U.Interaction between the Agent and the Target MUST be explicitly modeled. | Makes interfaces a first-class citizen of the model. Prevents hidden dependencies and ensures interactions are auditable. |
| CC-A12.4 (Episteme Carrier as Target) | When a U.Episteme is modified, the Target of the transformation MUST be its symbol carrier (U.System), not the U.Episteme itself. | Reinforces Strict Distinction (A.7). Knowledge doesn't change by magic; a physical agent must act on its physical representation. |
| CC-A12.5 (No Self-Evidence) | The Agent that performs a transformation cannot be the sole source of evidence for the success or properties of that transformation. Evidence MUST be anchored via an independent Observer. | Prevents conflicts of interest in assurance. The Transformer does the work; a separate Observer (another RoleAssignment) validates it. This aligns with A.10 (Evidence Anchoring). |
| Benefits | Trade-offs / Mitigations |
|---|---|
| Causal Traceability & Auditability: Every change is linked to a specific agent and interaction, creating a complete and unambiguous audit trail. This is essential for root cause analysis and accountability. | Increased Model Granularity: The Reflexive Split requires creating more model elements than a simple monolithic block. Mitigation: This is not a bug, but a feature. The "extra" elements represent real, critical parts of the system's architecture that were previously hidden. FPF tooling can help manage this via views that can "collapse" a split system for high-level diagrams. |
| Architectural Honesty: The pattern forces designers to be explicit about internal control loops, interfaces, and dependencies, leading to more robust and well-understood system architectures. | Requires a Shift in Thinking: Modelers accustomed to "self-x" narratives must learn to think in terms of external interactions. Mitigation: The "Two Hats" analogy and clear archetypes (Section 5) serve as powerful didactic tools to facilitate this shift. |
| Enables True Modularity: By making interfaces explicit, the pattern supports modular design. A Regulator subsystem could potentially be swapped out for a different one as long as it respects the same U.Interaction Standard. | - |
| Unlocks Deeper Analysis: Once an internal control loop is made explicit, it can be formally analyzed for stability, performance, and failure modes using tools like the Supervisor-Subsystem Feedback Loop pattern (B.2.5). | - |
The principle of externalization is not an arbitrary rule imposed by FPF; it is a distillation of foundational concepts from multiple rigorous disciplines.
- Cybernetics & Control Theory: As Ashby's Law of Requisite Variety and modern control theory (e.g., Matni et al., 2024) demonstrate, regulation is fundamentally an interaction across a boundary between a controller and a plant. Conflating the two hides the causal structure and makes stability analysis impossible. The Reflexive Split is the FPF's implementation of this core cybernetic principle.
- Physics (Constructor Theory): As discussed in A.3, Constructor Theory recasts physics in terms of what transformations are possible. A transformation is always performed by a "constructor" (our Transformer) on a substrate. The theory does not contain "self-constructing" substrates. FPF's externalist stance is fully aligned with this physical worldview.
- Philosophy of Science (Objectivity): The scientific method is built on the principle of external observation and verification. A theory cannot validate itself; its predictions must be checked by an independent experiment. The No Self-Evidence rule (CC-A12.5) is the direct implementation of this principle in the FPF's assurance calculus.
- Software Engineering (Dependency Inversion): The principle that high-level modules should not depend on low-level modules, but both should depend on abstractions, is a form of externalization. It enforces clean separation and makes systems more modular and testable. The explicit U.Boundary in our pattern serves the same architectural purpose as a well-defined interface in software.
By mandating externalization, FPF is not adding bureaucratic overhead. It is enforcing a set of first principles that are demonstrably essential for building complex systems that are understandable, auditable, and trustworthy.
- Directly Implements: C-2 Agent Externalization Principle.
- Builds Upon:
  - A.1 Holonic Foundation: Provides the U.System and U.Episteme holons that act as agents and targets.
  - A.2 Role Taxonomy: Provides the Contextual Role Assignment (U.RoleAssignment) mechanism to define the Agent.
  - A.3 Transformer Constitution: Defines the TransformerRole that the Agent plays.
- Enables and Constrains:
  - A.10 Evidence Anchoring: Provides the causal structure (who did what) that evidence must be anchored to.
  - B.2 Meta-Holon Transition (MHT): A Reflexive Split is often the first step in identifying an emergent supervisory layer that may later be promoted to a new meta-holon.
  - B.2.5 Supervisor-Subsystem Feedback Loop: This pattern provides the detailed architecture for the Regulator-Regulated interaction that the Reflexive Split reveals.
“Agency is not a kind of thing; it is a way some systems operate.”
The concept of "agency"—the capacity of an entity to act purposefully—is central to engineering, biology, and AI, yet it remains one of the most overloaded and ambiguous terms. Without a precise, falsifiable, and substrate-neutral definition, models of autonomous systems risk descending into "self-magic," where actions have no clear cause and accountability is lost.
This pattern builds directly upon the foundations laid in the FPF Kernel to provide that definition. A.1 established that only a U.System can be the bearer (holder) of behavioral roles. A.2.1 defined the universal U.RoleAssignment (Holder#Role:Context) as the canonical way to assign roles. A.3 and A.12 defined the TransformerRole and the principle of the external agent.
The intent of this pattern is to:
- Formally define agency not as an intrinsic type of holon, but as a contextual Role Assignment.
- Introduce a measurable, multi-dimensional spectrum of agency via a dedicated Characterization (Agency-CHR), moving beyond a simple binary "agent/not-agent" switch.
- Provide a clear, didactic grading system that allows engineers and managers to assess and communicate the level of autonomy of any system in a consistent, evidence-backed manner.
If agency is treated as a monolithic, intrinsic property or a mere label, four critical failure modes emerge, undermining the rigor of FPF:
- Episteme-as-Actor: Models might incorrectly assign agency to knowledge artifacts (U.Episteme), leading to nonsensical claims like "the specification decided to update the system." This is a direct violation of Strict Distinction (A.7).
- Type Inflation: Introducing a U.Agent as a new base type alongside U.System and U.Episteme would violate Ontological Parsimony (C-5) and create conflicts with the dynamic nature of roles. A system might act as an agent in one context and a passive component in another; a static type cannot capture this.
- Unfalsifiable Claims: Without a measurable basis, "agency" becomes a subjective label. A team might call their system an "agent" for marketing purposes, but this claim has no verifiable meaning and cannot be audited, violating Evidence Anchoring (A.10).
- The Binary Trap: A simple "agent/not-agent" classification is too coarse. It fails to distinguish between a simple thermostat, a predictive cruise control system, and a strategic, self-learning robotic swarm, even though their cognitive capabilities differ by orders of magnitude.
| Force | Tension |
|---|---|
| Scientific Fidelity vs. Simplicity | Contemporary science (e.g., Active Inference) models agency as a continuous, scale-free spectrum. FPF needs to honor this rigor while providing a simple, teachable model for practitioners. |
| Role vs. Type | The intuition is to think of an "Agent" as a type of thing. FPF's architecture demands that it be modeled as a role to preserve dynamism and ontological hygiene. |
| Measurement vs. Label | Engineers and managers need a quick, intuitive label (e.g., "this is a Level 3 agent"), while formal assurance requires a detailed, multi-dimensional, evidence-backed measurement. |
| System-only Action vs. Collective Action | How does agency apply to groups like teams or swarms? This requires a clear link to the rule from A.1 that any acting group must be modeled as a U.System. |
FPF's solution is threefold: it defines an Agent via U.RoleAssignment (A.2.1), makes agency measurable with a dedicated Characterization, and provides a didactic summary via a graded scale.
An "Agent" in FPF is not a fundamental type. It is a convenience term (a Register 1 / Register 2 label) for a specific kind of Contextual Role Assignment (U.RoleAssignment):
Agent ≍ U.RoleAssignment(holder: U.System, role: U.AgentialRole, context: U.BoundedContext)
This means an Agent is simply a U.System that is currently playing an AgentialRole within a specific U.BoundedContext.
- No U.Agent Type: To be clear, there is no U.Agent base type in the FPF Kernel. This avoids type inflation and preserves the dynamic nature of roles.
- Epistemes Cannot Be Agents: As the holder must be a U.System, this definition constitutionally forbids U.Epistemes from being agents, preventing the "episteme-as-actor" category error.
- Canonical Syntax: The technical notation for an agent is System#AgentialRole:Context.
- U.AgentialRole: This is the abstract U.Role that grants a U.System the capacity for goal-directed action within a context. It is the "license to act."
- Specialized Roles: More specific behavioral roles like TransformerRole and ObserverRole are considered specializations or sub-roles of AgentialRole. They describe what kind of agential action is being performed at a given moment.
  - A system playing TransformerRole is an Agent that is currently modifying another holon.
  - A system playing ObserverRole is an Agent that is currently gathering information. This creates a clean hierarchy: a Transformer is always an Agent, but an Agent is not always a Transformer (it could be observing, planning, or idle).
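A sketch of the holder‑type constraint implied by this definition (cf. CC‑A13.1 below), assuming illustrative USystem/UEpisteme marker classes; nothing here is a normative API:

```python
# Sketch: only a U.System may hold an AgentialRole; a U.Episteme is
# rejected at construction time.
from dataclasses import dataclass

@dataclass(frozen=True)
class USystem:
    name: str

@dataclass(frozen=True)
class UEpisteme:
    name: str

def agential_assignment(holder, role: str, context: str):
    """Build a Holder#Role:Context triple; only a U.System may act."""
    if not isinstance(holder, USystem):
        raise TypeError("holder of an AgentialRole must be a U.System")
    return (holder, role, context)

agential_assignment(USystem("Thermostat_T800"), "AgentialRole", "HomeHeating")
# agential_assignment(UEpisteme("ISO_26262.pdf"), "AgentialRole", "Safety")
# -> TypeError: the "episteme-as-actor" category error is rejected.
```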
Agency is not a binary switch; it is a multi-dimensional spectrum of capabilities. FPF models this using a dedicated architheory, Agency-CHR (C.9), which is a Characterization that attaches a set of measurable properties to a U.RoleAssignment.
The Agency-CHR profile is grounded in contemporary research (e.g., Active Inference, Basal Cognition) and includes the following key characteristics. Each is measured for a specific agent in a specific context and must be backed by evidence (A.10).
- Boundary Maintenance Capacity (BMC): The ability of the system to maintain its structural and functional integrity against perturbations. (How robust is it?)
- Predictive Horizon (PH): The temporal or causal depth of the agent's internal model. (How far ahead can it "see"?)
- Model Plasticity (MP): The rate at which the agent can update its internal model (U.GenerativeModel) in response to prediction errors (U.Error). (How quickly can it learn?)
- Policy Enactment Reliability (PER): The probability that the agent will successfully execute its chosen U.Method under operational conditions. (How reliably does it do what it decides to do?)
- Objective Complexity (OC): A measure of the complexity of the U.Objective the agent can pursue, from simple set-points to abstract, multi-scale goals.
While the multi-dimensional Agency-CHR profile is essential for formal assurance, engineers and managers need a simpler, at-a-glance summary. The Agency Grade is a non-normative, didactic scale from 0 to 4 that synthesizes the CHR profile into an intuitive level of autonomy.
| Grade | Label | Typical Agency-CHR Profile (Conservative Lower Bound) | Archetypal Example |
|---|---|---|---|
| 0 | Non-Agential | BMC ≈ 0, PH ≈ 0, MP ≈ 0 | A rock, a document, a passive structural component. |
| 1 | Reactive | BMC > 0, PH ≈ 0, MP ≈ 0 | A thermostat; a simple feedback controller. Follows fixed rules. |
| 2 | Predictive | BMC > 0, PH > 0, MP ≈ 0 | A model-predictive controller with a fixed model; a chess engine that plans moves but doesn't learn new strategies. |
| 3 | Adaptive | BMC > 0, PH > 0, MP > 0 | A self-calibrating sensor system; a machine learning agent that updates its model with new data. |
| 4 | Reflective/Strategic | High BMC, PH, MP, PER, and OC. Capable of meta-cognition (reasoning about its own reasoning) and pursuing abstract goals. | An autonomous R&D system; a cohesive, self-organizing DevOps team. |
Crucial Distinction: The Agency-CHR profile is the normative evidence. The Grade is a pedagogical shortcut. An artifact cannot claim a grade without having a corresponding, auditable CHR profile to back it up.
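A sketch of how a didactic grade could be synthesized from a CHR profile using the conservative lower bounds in the table above; the threshold values are assumptions, and per CC‑A13.4 the profile, not the grade, remains the normative artifact:

```python
# Didactic sketch: deriving the non-normative Agency Grade from a CHR
# profile. Thresholds are illustrative, not normative cutoffs.
def agency_grade(bmc: float, ph: float, mp: float,
                 per: float, oc: float) -> int:
    if bmc <= 0:
        return 0                    # Non-Agential
    if ph <= 0:
        return 1                    # Reactive: maintains itself, no model
    if mp <= 0:
        return 2                    # Predictive: fixed internal model
    if min(per, oc) < 0.8:          # hypothetical "high" cutoff
        return 3                    # Adaptive: learns from errors
    return 4                        # Reflective/Strategic

assert agency_grade(bmc=1, ph=0, mp=0, per=1, oc=0.1) == 1   # thermostat
assert agency_grade(bmc=1, ph=1, mp=0.5, per=0.9, oc=0.5) == 3
```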
The universal pattern of agency, defined as a Contextual Role Assignment and measured by the Agency-CHR, manifests across all domains. The following table demonstrates its application to the FPF's two primary archetypes: a U.System and a collective U.System (a team), while explicitly showing why a U.Episteme cannot be an agent.
| Archetype | Holder (U.System) | Role & Context (#Role:Context) | Agency-CHR Profile Sketch | Resulting Agency Grade |
|---|---|---|---|---|
| Simple Controller | Thermostat_Model_T800 | #AgentialRole:HomeHeatingSystem | BMC: High (maintains temp). PH: Zero (no prediction). MP: Zero (fixed logic). PER: Very High. OC: Low (single set-point). | Grade 1 (Reactive) |
| Advanced Controller | PredictiveCruiseControl_v3 | #AgentialRole:VehicleDynamics | BMC: High. PH: High (predicts traffic flow). MP: Zero (fixed model). PER: High. OC: Medium (optimization). | Grade 2 (Predictive) |
| Learning System | SelfCalibratingSensorArray | #AgentialRole:IndustrialProcess | BMC: High. PH: High. MP: Medium (learns drift). PER: High. OC: Medium. | Grade 3 (Adaptive) |
| Collective Agent | DevOpsTeam_Phoenix (a collective U.System) | #AgentialRole:ProjectPhoenix | BMC: High (maintains velocity). PH: High (sprint planning). MP: High (retrospectives). PER: Medium-High. OC: High (abstract business goals). | Grade 4 (Reflective/Strategic) |
| Knowledge Artifact | ISO_26262_Standard.pdf (U.Episteme) | N/A (Cannot be a holder of an AgentialRole) | N/A | Grade 0 (Non-Agential) |
Key takeaway from grounding:
This table makes the abstract model concrete. It shows that the FPF agency model can precisely differentiate between simple controllers and complex learning systems. It also reinforces the Strict Distinction principle: the ISO standard (U.Episteme) is a crucial justification for the actions of an agent (like the DevOps team), but it is never an agent itself.
To ensure the agency model is applied rigorously and consistently, all FPF artifacts must adhere to the following normative checks.
| ID | Requirement (Normative Predicate) | Purpose / Rationale |
|---|---|---|
| CC-A13.1 (Holder Type) | The holder of a U.RoleAssignment with role: U.AgentialRole MUST be a U.System. | Prevents the "episteme-as-actor" category error. Enforces Strict Distinction (A.7). |
| CC-A13.2 (RoleAssignment Mandate) | Any claim of agency MUST be represented by a complete U.RoleAssignment instance, including an explicit holder, role, and context. | Ensures that agency is always modeled as contextual and bound to a specific bearer, not as a free-floating property. |
| CC-A13.3 (CHR Evidence) | Any claim about an Agent's grade or level of autonomy MUST be substantiated by an auditable Agency-CHR profile with evidence anchors (A.10). | Makes claims of agency falsifiable and prevents "agency by marketing." |
| CC-A13.4 (Grade is Didactic) | The Agency Grade (0-4) SHALL NOT be used as a normative input for formal reasoning. It is a didactic summary of the Agency-CHR profile. | Prevents oversimplification in formal models. The detailed profile, not the summary grade, must be used for assurance cases. |
| CC-A13.5 (Collective as System) | To claim agency for a collective (e.g., a team, a swarm), the collective MUST first be modeled as a U.System with a defined U.Boundary and a coordination U.Method. | Prevents the error of assigning agency to a mere set or collection (MemberOf). Aligns with A.1 and A.14. |
| CC-A13.6 (MHT for Emergent Agency) | If a collection of systems, previously non-agential or at a lower grade, develops a new supervisory structure and crosses a documented Agency-CHR threshold, a Meta-Holon Transition (MHT, B.2) MUST be declared. | Makes the emergence of collective agency an explicit, auditable event, preventing "magic" emergence. |
| Benefits | Trade-offs / Mitigations |
|---|---|
| Category Safety & Clarity: The pattern provides a clear, unambiguous definition of agency that prevents common modeling errors and is consistent across all of FPF. | Increased Modeling Granularity: Requires modelers to think in terms of Role-assignments and contexts, which is slightly more complex than just labeling something an "Agent." Mitigation: The Holon#Role:Context syntax and tooling support make this manageable in practice. |
| Falsifiable & Measurable Agency: By grounding agency in the Agency-CHR, the framework transforms a vague philosophical concept into a set of concrete, evidence-backed engineering properties. | Measurement Effort: Populating the Agency-CHR profile requires real work (testing, analysis, data gathering). Mitigation: The profile can be built iteratively. An initial estimate can be used, with the understanding that its Reliability (R) score is low until backed by evidence. |
| Scalable Autonomy Model: The graded scale provides a sophisticated language for describing and comparing different levels of autonomy, from simple automation to strategic intelligence. | Risk of Misinterpreting Grades: The simple 0-4 scale could be misused as a simplistic marketing label. Mitigation: The normative requirement (CC-A13.4) to always link a grade to its underlying CHR profile acts as a guardrail against this. |
| Elegant Handling of Collectives: The pattern provides a clean way to model the agency of teams, swarms, and organizations without violating ontological principles. | - |
This pattern's strength comes from its synthesis of contemporary, post-2015 research into a single, operational model.
- Grounded in Science: The move away from a binary, type-based view of agency towards a graded, spectrum-based model is directly aligned with modern research in Active Inference (Friston et al.), Basal Cognition (Fields, Levin), and evolutionary cybernetics. The Agency-CHR provides a direct, practical implementation of these ideas.
- Ontologically Sound: By defining an Agent as a Contextual Role Assignment, the pattern avoids the ontological pitfalls of creating a new base type. It fully embraces the FPF's core architectural principle of separating substance (holder) from function (role) within a context. This aligns with best practices from foundational ontologies (like UFO) and the principles of Strict Distinction (A.7).
- Pragmatic and Actionable: The pattern is designed for engineers and managers. The Agency Grade provides a quick communication tool, while the underlying Agency-CHR provides the detailed, auditable data needed for formal assurance and risk management. This duality satisfies both Didactic Primacy (P-2) and Pragmatic Utility (P-7).
In essence, this pattern does not invent a new theory of agency. It distills and operationalizes the emerging scientific consensus, packaging it into a rigorous, falsifiable, and practical tool for the FPF ecosystem.
- Builds on:
  - A.1 Holonic Foundation: Establishes that only U.Systems can be bearers of behavioral roles.
  - A.2 Role Taxonomy: Provides the universal Contextual Role Assignment (U.RoleAssignment) mechanism.
  - A.12 External Transformer: The actions of an Agent are modeled using the external transformer principle.
- Coordinates with:
  - B.2 Meta-Holon Transition (MHT): A significant jump in the Agency-CHR of a collective can trigger an MHT.
  - B.3 Trust & Assurance Calculus: The Agency-CHR profile provides crucial inputs for assessing the reliability and safety of an autonomous system.
  - D.2 Multi-Scale Ethics Framework: The Agency Grade is used to determine the level of moral responsibility and accountability assigned to a system.
- Instantiates:
  - The Agency-CHR (C.9) architheory, which provides the formal definitions for the characteristics (BMC, PH, etc.).
FPF’s holonic world is built from part–whole relations. Early drafts distinguished structural vs. conceptual parthood (e.g., ComponentOf, ConstituentOf) but practical modelling kept hitting two recurrent gaps:
- Quantities vs. parts. Engineers routinely need “some of the fuel”, “the first 10 pages”, “a 30% subset of data”. This is not a component; it is a portion of a stuff‑like whole, governed by measures and conservation.
- Change vs. replacement. Authors need to say “the prototype before calibration”, “v2 of the spec”, “shift 1 vs. shift 2 of the same run”. That is not a new whole; it is a phase of the same carrier across time.
This section introduces two normative sub‑relations of partOf that close those gaps and lock them to the rest of the kernel:
- PortionOf — metrical, measure‑preserving parthood of stuffs and other measurables.
- PhaseOf — temporal parthood of the same carrier across an interval.
It also restates guard‑rails that keep roles and recipes outside mereology (A.15), and clarifies how MemberOf fits (preview: collections are grounded constructively in C.13 Compose‑CAL via Γ_m.set; collective agency is handled outside mereology in A.15 Role–Method–Work).
Publication note (Working‑Model first). Read A.14 together with E.14 Human‑Centric Working‑Model and B.3.5 CT2R‑LOG: publish relations on the Working‑Model surface; when assurance is sought, ground downward. For structural claims that require extensional identity, use the Constructive shoulder via Compose‑CAL Γ_m (sum | set | slice); order/time stay outside mereology (Γ_time / Γ_method).
If we only have “generic partOf” (plus Component/Constituent), four classes of errors appear:
- Conservation errors. Treating “20 L of fuel from Tank A” as a component leads to nonsense: adding and removing such “components” does not respect quantities; Γ_sys proofs violate Σ‑balance.
- Temporal smearing. Flattening “before/after”, or “v1/v2” into one timeless whole collapses history; Γ_time and Γ_method cannot justify order‑sensitive properties; audits cannot reproduce conditions.
- Identity confusion. Modelling “new version” as “new component” either breaks identity (it is still the same holon evolving) or hides a Meta‑Holon Transition when identity really changes.
- Role leakage. Functional/organisational roles sneak into part trees (“the PumpRole is part of the plant”), violating A.15 and making structural reasoning brittle.
| Force | Tension |
|---|---|
| Expressiveness vs. Parsimony | We need new relations (Portion, Phase) ↔ we must keep the catalogue minimal and orthogonal. |
| Universality vs. Domain nuance | One set of rules must serve physical systems and epistemes ↔ measurement and time behave differently by domain. |
| Identity vs. Change | Preserve “the same carrier through change” ↔ allow explicit re‑identification when invariants fail. |
| Static structure vs. Histories | Part trees should be simple ↔ real work requires phased histories and measured slices. |
A.14 defines two additional sub‑relations of partOf and re‑affirms the firewall between mereology and the role/recipe layer:
- PortionOf — for measured parts of a whole (stuffs and other extensives).
- PhaseOf — for temporal parts of the same carrier.
- No roles/recipes in mereology. U.Role, U.Method, U.MethodDescription are never parts (A.15).
- MemberOf stays, but constructive aggregation and agency live elsewhere. MemberOf remains available to state collections and collectives; their collection‑as‑whole may be constructed via Γ_m.set (Compose‑CAL, C.13), while composition and agency are handled in B.1.7 Γ_collective (not in A.14).
The classical pair ComponentOf (structural, discrete) and ConstituentOf (conceptual, logical/epistemic) remain as in the kernel; A.14 only clarifies how to tell them apart from Portion/Phase (§ 6).
Intent. Capture “some of the same stuff/extent”, governed by a measure that adds up.
Applicability. Any U.Holon that carries an extensive measure μ on the chosen scope
(examples: mass, volume, length‑of‑text, byte size, wall‑time budget).
Primitive. PortionOf(x, y) means: x is the same kind of stuff/content as y, but less.
Axioms (A14‑POR‑*)
- POR‑1 (Partial order). PortionOf is reflexive, antisymmetric, transitive on its domain.
- POR‑2 (Metrical dominance). If x ProperPortionOf y then 0 < μ(x) < μ(y) for the agreed μ.
- POR‑3 (Additivity on disjoint portions). If x ⟂ y (no overlap) and both PortionOf y, then μ(x ⊔ y) = μ(x) + μ(y) and x ⊔ y PortionOf y.
- POR‑4 (Kind integrity). x and y must share the same measure kind and unit (or a declared conversion).
- POR‑5 (Boundary compatibility). For physical wholes, the whole’s boundary encloses the union of its portions; cross‑boundary “leaks” are interactions, not portions.
Didactic tests. ✔ “5 kg from a 20 kg billet” — PortionOf. ✔ “Pages 1–10 of the report” — PortionOf (μ = page or token count). ✘ “The pump module of the plant” — ComponentOf, not PortionOf. ✘ “The Methods section of the paper” — ConstituentOf, not PortionOf.
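A sketch of the POR‑2/POR‑3/POR‑4 checks, assuming portions are encoded as (measure kind, value) pairs; the encoding and function names are illustrative:

```python
# Sketch: a portion carries the same measure kind with a strictly
# smaller value (POR-2/POR-4), and disjoint portions add up (POR-3).
def proper_portion_ok(x: tuple[str, float], y: tuple[str, float]) -> bool:
    (kx, mx), (ky, my) = x, y
    return kx == ky and 0 < mx < my        # POR-2 + POR-4

def disjoint_join(x: tuple[str, float], y: tuple[str, float]):
    (kx, mx), (ky, my) = x, y
    assert kx == ky, "POR-4: same measure kind required"
    return (kx, mx + my)                    # POR-3: Σ-additivity

billet = ("kg", 20.0)
cut_a, cut_b = ("kg", 5.0), ("kg", 7.0)
assert proper_portion_ok(cut_a, billet)
assert proper_portion_ok(disjoint_join(cut_a, cut_b), billet)  # 12 kg < 20 kg
```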
Intent. Capture “the same holon during a sub‑interval”, preserving identity through change.
Applicability. Any U.Holon that persists across time with a recognised carrier identity.
Primitive. PhaseOf(x, y) means: x is y restricted to a proper time interval.
Axioms (A14‑PHA‑*)
- PHA‑1 (Partial order). PhaseOf is reflexive, antisymmetric, transitive (on the same carrier).
- PHA‑2 (Coverage). The whole is the union of its maximal, non‑overlapping phases over its lifetime interval.
- PHA‑3 (No paradoxical overlap). Phases of the same carrier do not overlap in time; overlapping variants require PhaseOf on aspects or different carriers.
- PHA‑5 (Escalation to MHT). If identity criteria break (e.g., metamorphosis with new objectives), declare a Meta‑Holon Transition (B.2) rather than a PhaseOf.
Didactic tests. ✔ “PumpUnit#3 before calibration” — PhaseOf(Pump#3_pre, Pump#3). ✔ “Spec v2” — PhaseOf(Spec_v2, Spec), on the MethodDescription artefact. ✔ “Shift 1 of the same batch run” — PhaseOf(Work_shift1, Work). ✘ “Prototype vs. production unit” — likely different carriers; use ComponentOf/ConstituentOf or MHT per criteria.
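A similarly small sketch can enforce PHA‑2 (coverage) and PHA‑3 (no overlap) on a set of recorded phases; the `Phase` class, the half‑open‑interval convention, and the function names are illustrative assumptions.

```python
# Illustrative check of PHA-1..PHA-3 over one carrier and one aspect.
from dataclasses import dataclass

@dataclass(frozen=True)
class Phase:
    carrier: str          # identity-criterion holder, e.g. "PumpUnit#3"
    start: float
    end: float            # [start, end), same aspect assumed throughout

def check_phases(lifetime: tuple, phases: list) -> None:
    ps = sorted(phases, key=lambda p: p.start)
    if len({p.carrier for p in ps}) != 1:
        raise ValueError("PHA-1 violated: phases must share one carrier identity")
    for a, b in zip(ps, ps[1:]):
        if b.start < a.end:
            raise ValueError(f"PHA-3 violated: {a.carrier} phases overlap")
        if b.start > a.end:
            raise ValueError("PHA-2 violated: gap in lifetime coverage")
    if (ps[0].start, ps[-1].end) != lifetime:
        raise ValueError("PHA-2 violated: phases do not cover the lifetime")

check_phases((0.0, 10.0), [Phase("PumpUnit#3", 0.0, 4.0),   # pre-calibration
                           Phase("PumpUnit#3", 4.0, 10.0)]) # post-calibration
```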
- Structural claims published on the Working‑Model surface SHALL be justified, when assurance is required, by a Constructive grounding narrative using Γ_m.sum | Γ_m.set | Γ_m.slice and linked with tv:groundedBy (see B.3.5; C.13).
- PhaseOf is temporal parthood; it SHALL NOT be grounded via Γ_m. Its assurance follows identity‑through‑time criteria (CC‑PHA‑1..3) and Γ_time ordering (B.1.4).
- MemberOf remains non‑mereological (CC‑MEM‑2). When modelling a collection‑as‑whole for assurance purposes, the constructive basis is Γ_m.set; no ComponentOf inferences follow from MemberOf.
| You want to say… | Use | Why |
|---|---|---|
| “This is a piece of the same stuff (lower amount/extent).” | PortionOf | Governed by a measure μ and conservation (Σ‑additive). |
| “This is a discrete part that sits inside the whole.” | ComponentOf | Structural parthood; boundary‑respecting, not measured by μ. |
| “This is a logical part in a conceptual whole.” | ConstituentOf | Sections, lemmas, clauses, conceptual assembly. |
| “This is the same thing during a sub‑interval.” | PhaseOf | Temporal slicing with identity continuity. |
| “This item belongs to that collection/collective.” | MemberOf | Not a building block of the whole; composition handled in B.1.7 Γ_collective. |
| “This system plays a Role or position.” | playsRole (A.15) | Roles are contextual masks, never parts. |
Firewall reminder. If your sentence contains “role”, “policy”, “process/workflow/SOP/script”, you are likely talking about A.15 (roles/recipes/runs), not A.14.
| Relation | U.System example | U.Episteme example |
|---|---|---|
| PortionOf | 50 L from a 200 L fuel tank (μ = volume). | Pages 1–10 from a 120‑page report (μ = page/token count). |
| ComponentOf | Impeller ComponentOf PumpUnit. | Figure 2 ComponentOf Poster Layout (physical artefact). |
| ConstituentOf | Control law ConstituentOf Controller Design. | Lemma A ConstituentOf Theorem Proof. |
| PhaseOf | PumpUnit#3 before/after calibration (same serial). | Spec v1 → v2 (same document lineage). |
| MemberOf (for reference) | “is an element of a collection/collective”; use when a grouping is explicitly treated as a whole set, without implying component integration. | Not a building block of the whole; constructive aggregation is handled in C.13 Compose‑CAL (Γ_m.set). If the grouping is expected to act, model a collective system (A.15). |
| ID | Requirement | Purpose |
|---|---|---|
| CC‑A14‑0 | No U.Role, U.Method, or U.MethodDescription MAY occur as a node in any partOf chain. | Keeps parthood purely structural/conceptual (see A.15). |
| CC‑A14‑0b | MemberOf MUST NOT imply, entail, or be auto‑rewritten into any partOf sub‑relation. | Separates collections/collectives from parthood. |
| CC‑A14‑0c | SerialStepOf / ParallelFactorOf MUST NOT appear in any partOf chain or table in A.14; model order via A.15 (Γ_ctx/Γ_method). | Prevents the “order‑as‑structure” category error. |
| ID | Requirement | Purpose |
|---|---|---|
| CC‑POR‑1 (Domain) | PortionOf(x,y) is valid only if the modelling scope declares at least one extensive measure μ for y (mass, volume, token count, byte size, wall‑time budget, etc.). | Prevents “portion” without a measure. |
| CC‑POR‑2 (Kind) | x and y SHALL share the same μ‑kind and compatible units (or an explicit conversion). | Prevents apples‑to‑oranges addition. |
| CC‑POR‑3 (Monotone additivity) | For disjoint portions x ⟂ z with PortionOf(·,y): μ(x ⊔ z) = μ(x)+μ(z). | Secures Σ‑reasoning and Γ_sys proofs. |
| CC‑POR‑4 (Boundary) | For physical systems, the whole’s boundary encloses the union of portions; cross‑boundary flows are not portions. | Distinguishes stock vs flow. |
| CC‑POR‑5 (Non‑replacement) | “Replacing 20% of y by v” MUST be modelled as PortionOf removal + Component/Constituent insertion, not as a single PortionOf rewrite. | Avoids silent identity change. |
| ID | Requirement | Purpose |
|---|---|---|
| CC‑PHA‑1 (Carrier identity) | PhaseOf(x,y) requires an explicit identity criterion for y valid over the union of phases (e.g., serial number, legal identity, theorem statement). | Prevents re‑identification by stealth. |
| CC‑PHA‑2 (Coverage & non‑overlap) | The lifetime of y equals the union of its maximal, non‑overlapping phases (on the same aspect). | Enables Γ_time composition and audit. |
| CC‑PHA‑3 (Aspect clarity) | If two temporal slices of y overlap, they MUST be phases of different aspects (e.g., mechanical‑state vs software‑state), or else be different carriers. | Avoids paradoxical overlaps. |
| CC‑PHA‑4 (Escalation) | If identity criteria fail during change, declare a Meta‑Holon Transition (B.2) instead of PhaseOf. | Makes re‑identification explicit. |
| CC‑PHA‑5 (MethodDescription & Work) | Versions of MethodDescription and slices of Work SHALL use PhaseOf (A.15); PhaseOf never applies to U.Role. | Aligns with A.15 bindings. |
| ID | Requirement | Purpose |
|---|---|---|
| CC‑ANCH‑1 | Every ut:StructPartOf edge MUST carry a tv:groundedBy link to a valid Γ_m constructor trace (Compose‑CAL). | Makes A.10 executable; ensures extensional identity. |
| CC‑ANCH‑2 | For epistemic edges (ut:EpiPartOf and its sub‑types), tv:groundedBy is OPTIONAL; instead supply ev:evidence and set validationMode ∈ {axiomatic, postulate, inferential}. | Harmonises evidence treatment for epistemic edges. |
| CC‑ANCH‑3 | The public query Standard remains ?x ut:PartOf+ ?y; internally it is realised via CT2R‑aliases grounded by Γ_m traces. | Preserves the “one query” UX while tightening semantics. |
Note. Property names and trace semantics are defined in the CT2R‑LOG / Compose‑CAL architheories.
| ID | Requirement | Purpose |
|---|---|---|
| CC‑MEM‑1 | MemberOf domain/range are open: any U.Holon may be a member of a collection/collective holon. | Allows mixed collections when needed. |
| CC‑MEM‑2 | From MemberOf(x,C) it is forbidden to infer any property of C to x via parthood rules. | Prevents “set‑as‑whole” errors. |
| CC‑MEM‑3 | Constructive aggregation of collections is provided by C.13 Compose‑CAL (Γ_m.set); agency of collectives is specified outside A.14 (see A.15 Role–Method–Work). | Keeps A.14 narrow and clean. |
| ID | Requirement | Purpose |
|---|---|---|
| CC‑A14‑10 | For structural edges on the Working‑Model surface, authors SHALL set validationMode=axiomatic and attach a tv:groundedBy → Γ_m.sum \| Γ_m.set \| Γ_m.slice constructor trace (Compose‑CAL). | Keeps Working‑Model structural claims constructively grounded (see § 4). |
| CC‑A14‑11 | PhaseOf edges SHALL NOT use Γ_m for grounding. Authors SHALL provide identity criteria and non‑overlap per CC‑PHA‑1..3 and reference Γ_time when ordering matters. | Keeps temporal parthood distinct from construction; preserves the plane firewall. |
Step 0 — Firewall check. If your sentence contains role, policy, process/workflow/SOP/script, or execution/run/job, you are not in mereology; go to A.15 (Role–Method–Work).
Step 1 — Is it measured stuff? If yes, pick PortionOf. Confirm μ is declared (CC‑POR‑1/2). Test additivity on a toy split (CC‑POR‑3). If flows cross a boundary, remodel as interactions, not portions (CC‑POR‑4).
Step 2 — Is it a discrete inside part? If yes, pick ComponentOf (physical) or ConstituentOf (conceptual). Do not use PortionOf here.
Step 3 — Is it the same carrier at a time slice? If yes, pick PhaseOf. Verify identity criteria and non‑overlap (CC‑PHA‑1/2/3). If criteria break, escalate to B.2 (CC‑PHA‑4).
Step 4 — Is it a membership statement? If yes, use MemberOf only; avoid any part‑inferences (CC‑MEM‑2). If you need a collection as a whole, use C.13 (Γ_m.set) for constructive grounding. If you need collective action, defer to A.15. (A toy classifier for Steps 0–4 is sketched below.)
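As promised above, a toy triage function. It assumes the modeller answers the Step 1–4 questions as flags; the keyword vocabulary and return strings are illustrative only, not normative FPF identifiers.

```python
# A toy classifier for the A.14/A.15 triage steps; vocabulary is illustrative.
ROLE_RECIPE_WORDS = {"role", "policy", "process", "workflow", "sop",
                     "script", "execution", "run", "job"}

def triage(sentence: str, *, measured_stuff=False, discrete_part=False,
           physical=False, time_slice=False, membership=False) -> str:
    """Return the relation (or layer) suggested by Steps 0-4."""
    if any(w in sentence.lower() for w in ROLE_RECIPE_WORDS):
        return "A.15 (Role-Method-Work), not mereology"       # Step 0: firewall
    if measured_stuff:
        return "PortionOf (declare mu; check CC-POR-1..4)"    # Step 1
    if discrete_part:
        return "ComponentOf" if physical else "ConstituentOf" # Step 2
    if time_slice:
        return "PhaseOf (check CC-PHA-1..3, else B.2 MHT)"    # Step 3
    if membership:
        return "MemberOf (no part-inferences; CC-MEM-2)"      # Step 4
    return "undecided: revisit the decision table in § 6"

assert triage("5 kg from a 20 kg billet", measured_stuff=True).startswith("PortionOf")
assert triage("the pump module of the plant", discrete_part=True, physical=True) == "ComponentOf"
assert triage("the workflow for welding").startswith("A.15")
```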
Quick spot‑tests (repair kit).
| Smell | Likely error | Fix |
|---|---|---|
| “20% of the chassis” | Treating structure as stuff | Use ComponentOf; if truly laminar material, PortionOf applies to material stock, not the assembled chassis. |
| “Chapter 2 is 15% of the book” | Mixing measures and constituents | Use ConstituentOf; the 15% is length‑of‑text as a separate statement. |
| “Spec v2 overlaps v1” | Overlapping phases on same aspect | Use PhaseOf(Spec_v2, Spec) with non‑overlap; represent drafting as Work episodes (A.15) rather than overlapping specs. |
| “Team is part of the project” | Member vs part confusion | Use MemberOf(Team, ProjectCollective), not partOf. |
| Γ‑flavour | Mereological hooks (what A.14 supplies) | Key effect |
|---|---|---|
| Γ_sys (B.1.2) | Treat PortionOf as Σ‑additive stocks; ComponentOf must respect boundary integration; PhaseOf is not aggregated here. | Conserves extensive measures and keeps structural WLNK (weakest‑link) on components. |
| Γ_epist (B.1.3) | PortionOf of texts/data uses μ = token/byte count; ConstituentOf composes arguments/sections; PhaseOf versions MethodDescriptions/documents. | Preserves provenance and avoids trust inflation by keeping constituents vs portions distinct. |
| Γ_ctx / Γ_time (B.1.4) | PhaseOf provides the legal slicing for time and order; PortionOf is orthogonal (quantities inside steps). | Ensures chronological consistency and monotone coverage. |
| Γ_method (B.1.5) | Recipes are MethodDescription graphs (not parthood). When a recipe refers to stuff‑like inputs, those are PortionOf statements on resources. | Separates recipe composition from structure. |
| Γ_work (B.1.6) | Only Work carries resource deltas; when logging “consumed 5 kg from Tank A”, model it as PortionOf relation to the stock prior to consumption. | Makes Σ‑balance explicit; aligns with CC‑POR‑3/4. |
Benefits
- Predictable composition. Σ‑additivity for portions and identity‑through‑time for phases make Γ‑proofs straightforward.
- History without confusion. Temporal slicing is explicit and audit‑ready; no paradoxical overlaps.
- Cleaner integration with roles and recipes. The firewall prevents “functional object” creep into structure.
- Compatibility with engineering practice. Mirrors product breakdown (components) vs functional breakdown (roles) vs material stocks (portions) vs versioning (phases).
Trade‑offs / mitigations
- Modelling energy. Authors must pick μ and declare units; provide a short μ‑catalog per project.
- More relation names. Two extra sub‑relations increase vocabulary; mitigated by the decision table (§ 6) and spot‑tests (§ 9).
- Escalation discipline. Deciding PhaseOf vs MHT requires judgement; A.14 provides criteria, and B.2 captures true re‑identification.
Two‑minute checklist for reviewers
- Do I see “process/workflow/policy/script”? — then A.15, not mereology.
- Does every PortionOf have a declared μ and unit?
- Do phases cover a lifetime without overlap for the same aspect?
- Are any roles/recipes appearing as parts? If yes, stop and refactor.
- Kernel · Holonic Mereology (§ A.1 → A.14). Add sub‑sections “PortionOf” and “PhaseOf” with axioms (POR‑1..5, PHA‑1..5). Move the MemberOf note to a minimal semantics paragraph (no composition here).
- A.15 (Role–Method–Work). Cross‑link the firewall (CC‑A14‑0/0b) and reinforce that versioning uses PhaseOf only on MethodDescription/Work.
- B.1.2 Γ_sys / B.1.3 Γ_epist / B.1.4 Γ_ctx/Γ_time / B.1.5 Γ_method / B.1.6 Γ_work. Insert a one‑line “A.14 compliance” note stating which A.14 sub‑relations each flavour relies on, as in § 10.
- Examples & Annexes. Refactor any “percentage as part” examples into PortionOf with declared μ; split any overlapping histories into PhaseOf sequences.
Each edited heading should carry the badge “► decided‑by: A.14 Advanced Mereology”.
- Metrical mereology advances (e.g., recent work on quantity‑based parthood and additivity) motivate PortionOf with explicit μ and Σ‑laws, preventing the classic “stuff as components” fallacy.
- Temporal parts & identity through change (renewed treatments in analytic metaphysics and formal ontology) motivate PhaseOf with coverage/non‑overlap and escalation when identity criteria fail.
- Engineering ontologies (BORO lineage, Core Constructional practice, ISO 15926 family) keep a strict separation between functional breakdowns (our Roles) and product breakdowns (our Components), with stock/consumable modelling (our Portions) handled by quantities, not by component trees.
- Knowledge artefact lifecycles in contemporary MBSE and open‑science workflows use explicit versioning (our PhaseOf) and provenance‑preserving composition (our ConstituentOf).
- The net effect is a minimal‑sufficient catalogue: two added sub‑relations close real modelling gaps while preserving parsimony, didactic clarity, and Γ‑compatibility across domains.
In any complex system, from a software project to a biological cell, there is a fundamental distinction between what something is (its structure), what it is supposed to do (its role and specified capability), and what it actually does (its work). Confusing these layers is a primary source of design flaws, budget overruns, and failed projects. Teams argue about a "process" without clarifying if they mean the documented procedure, the team's ability to execute it, or a specific execution that happened last Tuesday.
This pattern provides the canonical alignment for modeling contextual enactment in FPF, serving as the ultimate implementation of the Strict Distinction Principle (A.7). It weaves together several foundational concepts into a single, coherent model of how intention becomes action:
- A.2 (Contextual Role Assignment): Provides the Holder#Role:Context structure for assigning roles.
- A.4 (Temporal Duality): Provides the strict separation between design‑time and run‑time.
- A.12 (External Transformer): Ensures that all actions are attributed to an external agent.
The intent of this pattern is to establish a normative, unambiguous vocabulary and set of relations for describing the entire evolution of an action, from the specification of a capability to its concrete, resource-consuming execution.
To keep plan–run separation explicit, this pattern references A.15.2 U.WorkPlan for schedules/calendars and A.15.1 U.Work for dated execution. Ambiguous terms like “process / workflow / schedule” are constrained by L‑PROC / L‑FUNC / L‑SCHED (E‑cluster): a workflow is a Method/MethodDescription, a schedule is a WorkPlan, and what happened is Work.
Terminology note (L‑ACT). The words action/activity are not normative in the kernel. When a generic “doing” is needed, we use the didactic term enactment (not a type). Normative references must be to U.Method / U.MethodDescription / U.Work / U.WorkPlan. See lexical rules L‑PROC / L‑FUNC / L‑SCHED / L‑ACT
Without this formal framework, models suffer from a cascade of category errors:
- Role-as-Part: A Role (e.g., AuditorRole) is incorrectly placed inside a structural bill-of-materials (ComponentOf), making the system's architecture brittle and nonsensical.
- Specification-as-Execution: A MethodDescription (the "recipe") is treated as evidence that the work was done. This leads to "paper compliance," where a system is considered complete simply because its documentation exists.
- Capability-as-Work: A team's ability to perform a task (Capability) is conflated with the actual performance of that task (Work). This obscures the reality of resource consumption and actual outcomes.
- Work-without-Context: An instance of work is logged without a clear link back to the role, capability, and specification that governed it, making the work unauditable and its results impossible to reproduce.
- Ambiguous "Process/Activity": The overloaded term "process" is used indiscriminately to refer to all of the above, creating a fog of miscommunication that paralyzes decision-making. Activity/action terms must be resolved via L‑ACT to Method/MethodDescription (recipe), WorkPlan (schedule), or Work (run).
| Force | Tension |
|---|---|
| Structure vs. Function | The need to model the stable, physical structure of a system (mereology) vs. the need to model its dynamic, functional behavior (roles and actions). |
| Design vs. Run | The need for a timeless, reusable description of a capability (design-time) vs. the need for a specific, dated record of its execution (run-time). |
| Clarity vs. Jargon | The need for a precise, formal vocabulary to prevent ambiguity vs. the reality that teams use informal, domain-specific jargon like "process" or "workflow." |
| Accountability vs. Complexity | The need for a complete, end-to-end audit trail for every action vs. the desire to keep models simple and avoid excessive documentation. |
The solution is a stratified alignment that cleanly separates the design-time and run-time for contextual enactment. The bridge between these worlds is the U.RoleAssignment.
FPF mandates the use of the following distinct, non-overlapping entities to model action. Using them interchangeably is a conformance violation.
A) Design-Time Entities (The World of Potential):
- U.Role: A contextual "mask" or "job title" (e.g., TesterRole). It specifies a function but is not the function itself.
- U.Method: The abstract way‑of‑doing inside a context (paradigm‑agnostic; may be imperative, functional, logical, or hybrid).
- U.MethodDescription: A U.Episteme describing a U.Method (the SOP/algorithm/proof/recipe on a carrier).
- U.Capability: An attribute of a U.System that represents its ability to perform the actions described in a MethodDescription. This is the "skill" or "know-how."
- U.WorkPlan: A U.Episteme declaring intended U.Work occurrences (windows, dependencies, intended performers as role kinds, budgets) — see A.15.2.
B) The Bridge Entity:
- U.RoleAssignment: The formal assertion Holder#Role:Context that links a specific U.Holon to a U.Role within a U.BoundedContext. This binding is what "activates" the requirements associated with a role.
C) Run-Time Entity (The World of Actuality):
- U.Work: An occurrence or event. It is the concrete, dated, resource-consuming execution of a U.MethodDescription by a Holder acting under a U.RoleAssignment; capability checks are evaluated at run time against the holder. This is the only entity that has a start and end time and consumes resources.
Kinds of Work and the primary target
Every U.Work SHALL declare a primaryTarget: U.Holon and a kind.
Kinds:
- Operational — transforms a U.System or its environment.
- Communicative (SpeechAct) — transforms a deontic/organizational frame (e.g., commitments, permissions, approvals).
- Epistemic — transforms a U.Episteme (e.g., curating a dataset).

The primaryTarget disambiguates enactment: what is being acted upon. Example: an approval is kind=Communicative, primaryTarget = Commitment(change=4711). A deployment is kind=Operational, primaryTarget = ServiceInstance(prod-us-eu-1).
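A minimal sketch of how a Work record could carry kind and primaryTarget. The Python names are illustrative assumptions; primary_target is a plain string where a real model would hold a U.Holon reference.

```python
# Hypothetical shape of a Work record with kind and primaryTarget.
from dataclasses import dataclass
from enum import Enum

class WorkKind(Enum):
    OPERATIONAL = "Operational"
    COMMUNICATIVE = "Communicative"   # SpeechAct
    EPISTEMIC = "Epistemic"

@dataclass(frozen=True)
class Work:
    work_id: str
    kind: WorkKind
    primary_target: str   # a U.Holon reference; a plain string here for brevity

approval = Work("W-1", WorkKind.COMMUNICATIVE, "Commitment(change=4711)")
deployment = Work("W-2", WorkKind.OPERATIONAL, "ServiceInstance(prod-us-eu-1)")
```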
Didactic Note for Managers: The "Chef" Analogy
This model can be easily understood using the analogy of a chef in a restaurant.
- ChefRole is the Role. It's a job title with certain expectations.
- A Cookbook (U.MethodDescription) contains the recipe for a Soufflé. It's a piece of knowledge.
- The chef's skill in making soufflés is their U.Capability. They have this skill even when they are not cooking.
- The restaurant's rulebook (U.BoundedContext) states that anyone in the ChefRole must have the Capability to follow the recipes in the cookbook.
- The actual act of making a soufflé on Tuesday evening—using eggs and butter, taking 25 minutes, and consuming gas—is the U.Work.
Confusing these is like mistaking the cookbook for the soufflé. FPF's framework simply makes these common-sense distinctions formal and mandatory.
The entities are connected by a set of precise, normative relations that form an unbreakable causal chain. The following diagram illustrates this flow from the abstract context down to the concrete execution.
```mermaid
graph TD
subgraph DT["Design-Time Scope (Tᴰ)"]
A[U.BoundedContext] -- defines --> B(U.Role)
M[U.Method] -- isDescribedBy --> D[U.MethodDescription]
Cap[U.Capability] -- supports --> M
H(U.System as Holder) --> RB(U.RoleAssignment)
B -- is the role in --> RB
A -- is the context for --> RB
A -- "bindsCapability(Role, Capability)" --> Cap
end
subgraph RT["Run-Time Scope (Tᴿ)"]
W[U.Work]
end
RB -- performedBy --> W
W -- isExecutionOf --> D
style A fill:#e6f3ff,stroke:#36c,stroke-width:2px
style B fill:#fff2cc,stroke:#d6b656,stroke-width:2px
style Cap fill:#d5e8d4,stroke:#82b366,stroke-width:2px
style M fill:#d5e8d4,stroke:#82b366,stroke-width:2px
style D fill:#f8cecc,stroke:#b85450,stroke-width:2px
style H fill:#e1d5e7,stroke:#9673a6,stroke-width:2px
style RB fill:#dae8fc,stroke:#6c8ebf,stroke-width:3px,stroke-dasharray: 5 5
style W fill:#ffe6cc,stroke:#d79b00,stroke-width:2px,font-weight:bold
```
- bindsCapability(Role, Capability): A U.BoundedContext asserts that a given Role requires a specific Capability. This is a design‑time rule.
- isDescribedBy(Method, MethodDescription): A U.Method is formally described by one or more MethodDescriptions. This links the semantic way‑of‑doing to the recipe.
- isExecutionOf(Work, MethodDescription): A specific U.Work is a run‑time execution of a design‑time MethodDescription.
- performedBy(Work, RoleAssignment): A U.Work is always performed by a specific Agent (a RoleAssignment). This links the action to the actor‑in‑context.
At run time, capability thresholds declared by the context/spec are checked against the holder; Work outcomes provide evidence for capability conformance.
This chain provides complete traceability: a specific instance of Work can be traced back through its MethodDescription to its Method, and to the Agent (Holder + Role + Context) that was authorized and responsible for its execution.
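One way to make the chain concrete is a few linked records plus a trace walker. This is a sketch under assumed field names (is_execution_of, performed_by, describes), not a prescribed schema.

```python
# A sketch of the traceability chain; field names are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class MethodDescription:
    name: str
    describes: str              # the U.Method it describes

@dataclass(frozen=True)
class RoleAssignment:
    holder: str
    role: str
    context: str                # Holder#Role:Context

@dataclass(frozen=True)
class Work:
    work_id: str
    is_execution_of: MethodDescription
    performed_by: RoleAssignment

def trace(w: Work) -> str:
    """Walk the chain from a run back to its recipe and accountable agent."""
    ra = w.performed_by
    return (f"{w.work_id} -> spec {w.is_execution_of.name} "
            f"(describes {w.is_execution_of.describes}) "
            f"by {ra.holder}#{ra.role}:{ra.context}")

weld = Work("Weld_Job_#78345",
            MethodDescription("Welding_Procedure_WP-28A.pdf", "WeldingMethod"),
            RoleAssignment("ABB_Robot", "WeldingRobotRole", "Line_B"))
print(trace(weld))
```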
The Contextual Action Framework is universal. It applies identically to the modeling of physical engineering processes, knowledge work, and socio-technical systems.
| Archetype | U.System Archetype (Manufacturing) | U.Episteme Archetype (Scientific Peer Review) |
|---|---|---|
| BoundedContext | FactoryFloor:ProductionLine_B | Journal:PhysicsLetters_A |
| Role | WeldingRobotRole | ReviewerRole |
| Holder | ABB_Robot_Model_IRB_6700 (U.System) | Dr_Alice_Smith (modeled as a U.System) |
| U.RoleAssignment | ABB_Robot#WeldingRobotRole:Line_B | Dr_Smith#ReviewerRole:PhysicsLetters_A |
| MethodDescription (U.Episteme) | Welding_Procedure_WP-28A.pdf (SOP) | Peer_Review_Guidelines_v3.docx |
| Capability (Attribute of Holder) | executeWeldingSeam(Type: 3F) | evaluateManuscript(Field: QuantumOptics) |
| Work (Occurrence) | Weld_Job_#78345 (15:32–15:34 UTC, consumed 1.2 kWh, 5 g Argon) — isExecutionOf Welding_Procedure_WP‑28A.pdf | Review_of_Manuscript_#PL-2025-018 (completed 2025-08-15, took 4 hours) — isExecutionOf Peer_Review_Guidelines_v3.docx |
Key takeaway from grounding:
This side-by-side comparison reveals the power of the framework. A seemingly different activity like welding a car chassis and reviewing a scientific paper are shown to have the exact same underlying causal structure. Both involve a Holder (a system) acting in a Role within a Context, using a Capability described by a MethodDescription to produce a specific, auditable instance of Work. This universality is what allows FPF to bridge disparate domains.
To ensure the integrity of action modeling, all FPF-compliant models must adhere to the following normative checks.
| ID | Requirement (Normative Predicate) | Purpose / Rationale |
|---|---|---|
| CC-A15-1 (Entity Distinction) | The entities U.Role, U.Method, U.MethodDescription, U.Capability, U.WorkPlan, and U.Work MUST be modeled as distinct, non‑overlapping types. | This is the core enforcement of Strict Distinction (A.7). It prevents the category errors outlined in the "Problem" section. |
| CC-A15-2 (Temporal Scope) | U.Method/U.MethodDescription/U.WorkPlan exist in design‑time; U.Work exists in run‑time. Design artifacts are not mutated by operational events. | Enforces Temporal Duality (A.4). Blueprints cannot be mutated by operational events. |
| CC-A15-3 (RoleAssignment Mandate) | Every U.Work MUST be linked via performedBy to a valid U.RoleAssignment. | Guarantees that every action has a clearly identified, context-bound actor, ensuring accountability. |
| CC-A15-4 (Traceability Chain) | For every U.Work, an unbroken chain MUST exist: Work —performedBy→ RoleAssignment and Work —isExecutionOf→ MethodDescription —describes→ Method. Capability checks are evaluated against the holder at run time. | Ensures end-to-end auditability from a specific action back to the "recipe" that governed it. |
| CC-A15-5 (No Roles in Mereology) | A U.Role or U.Capability SHALL NOT be part of a mereological (partOf) hierarchy. | The "Role-as-Part" anti-pattern is a violation. Roles and capabilities are functional, not structural. Enforces A.14. |
| CC-A15-6 (Resource Honesty) | Resource consumption (U.Resource) MUST only be associated with U.Work, never with U.MethodDescription or U.Capability. | Enforces that costs are tied to actual events, not to plans or potential. Aligns with Resrc-CAL (C.5). |
| CC‑A15‑7 (Plan/Run Split) | Schedules/calendars MUST be represented as U.WorkPlan (A.15.2). A WorkPlan SHALL NOT be used as evidence of execution; only U.Work carries actuals. | |
| CC‑A15‑8 (Lexical Sanity) | Unqualified “process/workflow/schedule” MUST be interpreted per L‑PROC/L‑FUNC/L‑SCHED: workflow ⇒ Method/MethodDescription; schedule ⇒ WorkPlan; what happened ⇒ Work. | |
| CC-A15-9 (Realisation) | A valid U.Work realises a U.MethodDescription under a U.RoleAssignment. Spontaneous physical evolution without a MethodDescription is modeled as U.Dynamics, not as U.Work. | |
| CC-A15-10 (GateSplit) | A SpeechAct that changes a Role’s state (e.g., “Approve”, “Authorize”) MUST be modeled as a distinct U.Work step (kind=Communicative). It may open the Green‑Gate for a subsequent operational step, but it SHALL NOT be conflated with that step. | |
| CC-A15-11 (KindFit) | The U.Role named in the performedBy assignment SHALL be appropriate for the Work kind (e.g., ApproverRole for Communicative approvals; DeployerRole for Operational deployments). | |
| Benefits | Trade-offs / Mitigations |
|---|---|
| Unambiguous Communication: Provides a shared, precise vocabulary for teams to discuss roles, processes, and results, eliminating the ambiguity of terms like "process." | Initial Learning Curve: Requires teams to learn and internalize the distinctions between the core entities. Mitigation: The "Chef" analogy and clear archetypes serve as powerful didactic tools. FPF tooling should guide users with templates. |
| End-to-End Auditability: The framework creates a "digital thread" that links every operational event (Work) back to its authorizing role, context, and specification. This is critical for regulated industries and for root cause analysis. | Increased Formality: Requires more explicit modeling than informal approaches. Mitigation: This is a strategic investment. The upfront cost of formal modeling is offset by massive savings in debugging, re-work, and compliance efforts later. |
| Enables True Modularity: By separating capability from execution, the framework allows for easier substitution. A MethodDescription can be updated without invalidating past Work records. A Holder can be replaced with another, as long as it possesses the same Capability. | |
| Foundation for Governance: The model makes it possible to build powerful governance rules. For example: "Only an Agent with AuditorRole can execute Work that instantiates the ApproveRelease capability." | |
This pattern solves a problem that has plagued systems modeling for decades: the conflation of what a system is with what it does. Its rigor is not arbitrary but is grounded in several key intellectual traditions.
- Ontology Engineering: The pattern is a direct application of best practices from foundational ontologies (like UFO), which have long insisted on the distinction between endurants (objects like a U.System) and perdurants (events/processes like U.Work), and between intrinsic properties and relational roles. FPF makes these powerful distinctions accessible to practicing engineers.
- Process Theory: Formalisms like the Pi-calculus or Petri Nets model processes as dynamic interactions. The FPF Contextual Action Framework provides a higher-level, more semantically rich layer on top of such formalisms. The U.Work entity can be seen as an instance of a process, but FPF adds the crucial context of the Role, Capability, and MethodDescription that govern it.
- Pragmatism and Practice: The framework is deeply pragmatic. The distinctions it makes (e.g., between a MethodDescription and Work) are precisely the ones that matter in the real world of project management, compliance, and debugging. When a failure occurs, a manager needs to know: was the recipe wrong (MethodDescription), did the chef lack the skill (Capability), or did they just make a mistake this one time (Work)? This framework provides the vocabulary to ask and answer that question precisely.
By creating this clean, stratified alignment for enactment, FPF provides a stable and scalable foundation for all of its more advanced architheories, from resource management (Resrc-CAL) and decision theory (Decsn-CAL) to ethics (Norm-CAL).
- Directly Implements: A.7 Strict Distinction.
- Builds Upon: A.2 (U.Role), A.2.1 (U.RoleAssignment), A.4 (Temporal Duality), A.12 (External Transformer).
- Is Used By / Provides Foundation For:
  - C.4 Method-CAL: Provides the formal definition of U.MethodDescription and the Γ_method operator for composing them.
  - C.5 Resrc-CAL: Provides the U.Work entity to which resource consumption is attached.
  - B.1.6 Γ_work: The aggregation operator for U.Work.
  - B.4 Canonical Evolution Loop: The entire loop is a sequence of Work instances that modify MethodDescriptions.
  - A.15.2 U.WorkPlan: plan–run split, baselines and variance against U.Work.
- Constrains: Any architheory that models actions or processes must use this framework to be conformant. It serves as the canonical alignment for contextual enactment in the FPF ecosystem.
- Coordinates with L‑PROC / L‑FUNC / L‑SCHED (E‑cluster) for lexical disambiguation of process / workflow / schedule.
After we have agreed who is assigned (via Role assignment), what they can do (via Capability), and how in principle it should be done (via Method/MethodDescription), we still need a precise concept for what actually happened in real time and space.
That concept is U.Work: the dated run‑time occurrence of enacting a MethodDescription by a specific performer under a Role assignment, with concrete parameter bindings, resource consumption, and outcomes. Managers care about Work because it is the only place where cost, time, defects, and evidence are real. Architects care because Work ties plans and specs to accountable execution.
- Plan/run confusion. Schedules and diagrams get mistaken for “the process,” so audits and KPIs become fiction.
- Spec/run conflation. A method description (code/SOP) is reported as if it were an execution; conversely, logs are treated as recipes.
- Who/when leakage. People and calendars are baked into specs; reuse and staffing agility collapse.
- Resource dishonesty. Energy/money/tool wear are booked to methods or roles, not to actual runs; costing and sustainability metrics drift.
- Mereology muddle. Teams hand‑wave over “sub‑runs,” retries, overlaps, or long‑running episodes; roll‑ups double‑count or miss work.
| Force | Tension we resolve |
|---|---|
| Universality vs. domain detail | One Work notion for surgery, welding, ETL, proofs, lab cycles—while letting each keep its vocabulary. |
| Granularity vs. aggregation | Atomic runs vs. composite operations; we need roll‑up without double‑count. |
| Concurrency vs. order | Parallel/overlapped activities need clear part/overlap semantics. |
| Identity vs. retries | A failed attempt, a retry, and a resumed episode—what is “the same” work? |
| Time realism vs. simplicity | We need intervals and coverage but cannot bury users in temporal logic notation. |
U.Work is a 4D occurrence holon: a dated run‑time enactment of a U.MethodDescription by a performer designated through a U.RoleAssignment, within a U.BoundedContext, that binds concrete parameters, consumes/produces resources, and leaves an auditable trace.
Memory aid: Work = “how it went this time” (dated, resourced, accountable).
When you describe a Work instance in a review, answer these prompts:
- Window — start/end timestamps (and, where relevant, location/asset).
- Spec — isExecutionOf → U.MethodDescription (the description actually followed).
- Performer — performedBy → U.RoleAssignment (which holder#role:context acted).
- Parameters — concrete values bound for this run (from the Method/MethodDescription parameter declarations).
- Inputs/Outputs — material/information artifacts read/written, products/services delivered.
- Resources — energy, materials, machine time, money (the only place we book them).
- Outcome — success/failure classes, quality measures, acceptance verdicts (per Method/Spec criteria).
- Links — predecessor/successor/overlap relations to other Work, and step/run nesting (if part of a bigger operation).
- Context — the bounded context(s) under which this run is judged (normally inherited from the MethodDescription and RoleAssigning; see A.15 for cross‑checks).
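The prompts above map naturally onto a record shape. A sketch follows, with illustrative field names and defaults; a real deployment would type the references rather than use strings.

```python
# One possible record layout for the Work-anatomy prompts; fields are illustrative.
from dataclasses import dataclass, field

@dataclass
class WorkRecord:
    window: tuple                      # ISO start/end timestamps (and location/asset)
    spec: str                          # isExecutionOf -> U.MethodDescription
    performer: str                     # performedBy -> U.RoleAssignment
    parameters: dict = field(default_factory=dict)
    inputs: list = field(default_factory=list)
    outputs: list = field(default_factory=list)
    resources: dict = field(default_factory=dict)  # the only place actuals live
    outcome: str = "pending"                       # per Method/Spec criteria
    links: dict = field(default_factory=dict)      # retryOf, resumptionOf, ...
    context: str = ""                              # judgement context

run = WorkRecord(("2025-08-10T09:05", "2025-08-10T11:42"),
                 spec="Appendectomy_v5",
                 performer="OR_Team_A#SurgicalTeamRole:Hospital_2025",
                 parameters={"anesthesia": "general"},
                 resources={"staff_hours": 7.5},
                 context="Hospital_2025")
```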
| You are pointing at… | The right FPF concept | Litmus |
|---|---|---|
| The recipe/code/diagram | MethodDescription | Is it knowledge on a carrier? |
| The semantic “way of doing” | Method | Same Standard across notations? |
| The assignment (“who is being what”) | Role → RoleAssigning | Can be reassigned without changing the system? |
| The ability (“can do within bounds”) | Capability | Would remain even if not assigned? |
| The dated occurrence with logs, resources | Work | Did it happen at (t₀, t₁), consume resources, produce outcomes? |
We adopt a 4D extensional stance for occurrences: a Work is identified primarily by its spatiotemporal extent and its execution anchors (spec used, performer, parameterization). This avoids double‑counting and keeps aggregation sound. FPF adapts insights from BORO/constructive ontologies to Work while staying practical.
- Temporal‑part (TemporalPartOf_work). A proper time‑slice of a Work (e.g., the first 10 minutes of a 2‑hour run). Useful for monitoring and SLAs.
- Episode‑part (EpisodeOf_work). A resumption fragment after an interruption (same run identity if policy deems it one episode; see 5.5).
- Operational‑part (OperationalPartOf_work). A sub‑run that enacts a factor of the Method/Spec (e.g., “incision” run within “appendectomy” run), possibly overlapping with others in time.
- Parallel‑part (ConcurrentPartOf_work). Two sub‑runs that overlap in their windows, coordinated by the same higher‑level run.
Didactic rule: Method composition ≠ proof of Work decomposition. Sub‑runs often map to method factors, but retries, batching, pipelining, and failures make the mapping non‑isomorphic.
- precedes / happensBefore — strict partial order on Work windows.
- overlaps — intervals intersect but neither contains the other.
- contains / within — one Work’s window contains another’s.
- causedBy / causes — pragmatic causal links (e.g., a rework caused by a failed inspection run).
- retryOf — a new Work instance re‑attempting the same MethodDescription with revised parameters.
- resumptionOf — a Work episode that continues an interrupted run (policy decides identity; see 5.5).
These relations are run‑time facts, not design assumptions.
- Temporal coverage — Γ_time(S). For a set S of Work parts, returns a coverage interval set (union of intervals) or, when required, the convex hull [min t₀, max t₁]. Use union for utilization; use hull for lead time. Properties: idempotent, commutative, monotone under set inclusion.
- Resource aggregation — Γ_work(S). For a set S of Work parts, returns the aggregated resource ledger (materials, energy, time, money) with de‑duplication rules for shared/overlapped parts (context‑declared). Properties: additive on disjoint parts; requires an overlap policy otherwise (e.g., attribute costs to the parent once, not to each child).
Manager’s tip: Pick the coverage operator that matches your KPI: union for machine utilization; hull for calendar elapsed; never mix silently.
Two Work records refer to the same Work iff, in the relevant context:
- their time–space extent is the same (within declared tolerance),
- they link to the same MethodDescription,
- they have the same performer (U.RoleAssignment), and
- they bind the same parameters (or declared‑equivalent values).
If any of these differ (or the context declares equivalence absent), they are distinct Work instances (e.g., a retry).
- Retry: new Work with its own window and parameters; link via retryOf.
- Resumption: same Work identity split into episodes if the context’s episode policy declares so (e.g., “power loss under 5 minutes keeps identity”).
- Rework: new Work caused by a failure in earlier Work; link via causedBy.
Why it matters: plans, costs, and quality stats depend on whether you treat a disruption as one episode or a new run. Declare the policy in the bounded context.
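The sameness test and the retry distinction can be written down directly. A sketch, assuming exact-match parameters and a declared tolerance on the time window; names are illustrative.

```python
# A sketch of the 5.5 sameness test; a failed match means a distinct Work (retry).
from dataclasses import dataclass

@dataclass(frozen=True)
class WorkRef:
    window: tuple              # (t_start, t_end)
    spec: str                  # MethodDescription followed
    performer: str             # U.RoleAssignment
    params: frozenset          # frozenset of (name, value) pairs

def same_work(a: WorkRef, b: WorkRef, tol: float = 0.0) -> bool:
    """All four anchors must agree (within the declared window tolerance)."""
    close = (abs(a.window[0] - b.window[0]) <= tol and
             abs(a.window[1] - b.window[1]) <= tol)
    return (close and a.spec == b.spec and
            a.performer == b.performer and a.params == b.params)

first = WorkRef((1.0, 2.0), "ETL_v12.bpmn",
                "ETL_Runtime#TransformerRole:DataOps_2025",
                frozenset({("batch_size", 1000)}))
retry = WorkRef((2.5, 3.0), "ETL_v12.bpmn",
                "ETL_Runtime#TransformerRole:DataOps_2025",
                frozenset({("batch_size", 500)}))
assert not same_work(first, retry)   # distinct Work: link via retryOf instead
```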
- Top run: Appendectomy_Case#2025‑08‑10T09:05–11:42.
- Spec: Appendectomy_v5 (MethodDescription).
- Performer: OR_Team_A#SurgicalTeamRole:Hospital_2025 (RoleAssigning).
- Operational parts: Incision (09:15–09:22), Exploration (overlaps with monitoring), Closure (11:10–11:35).
- Episode: brief power dip 10:02–10:07 → resumptionOf same run (per hospital policy).
- Γ_time: union for OR utilization; hull for patient lead time.
- Γ_work: totals consumables and staff time once (no double‑count for overlapping sub‑runs).
- Top run: ETL_Nightly_2025‑08‑11T01:00–01:47.
- Spec: ETL_v12.bpmn.
- Performer: ETL_Runtime#TransformerRole:DataOps_2025.
- Parallel parts: Extract_A ‖ Extract_B; Transform starts when either completes (overlap).
- Retry: Load failed at 01:36; retried with batch size ↓ — new Work linked via retryOf.
- Γ_time: hull for SLA, union for cluster utilization.
- Γ_work: sum compute minutes; attribute storage I/O once at the parent.
- Run: Carnot_Cycle_Run#2025‑08‑09T13:00–13:06.
- Spec: Carnot_Cycle_Spec (MethodDescription with Dynamics model).
- Performer: LabRig_7#TransformerRole:ThermoLab.
- Work identity: the path in state‑space traced during the interval; outputs: heat/work tallies.
- Γ_time: straightforward interval; Γ_work: integrates energy exchange; no “steps” required.
- Lenses tested: Prag, Arch, Did, Epist.
- Scope declaration: Universal; temporal semantics and episode policy are context‑local via U.BoundedContext.
- Rationale: Gives FPF a clean, actionable notion of occurrence compatible with U.RoleAssignment / Role Enactment (A.2.1; A.15) and with 4D extensional thinking, so that costing, quality, and audit rest on runs, not on plans or recipes.
CC‑A15.1‑1 (Strict distinction).
U.Work is a dated run‑time occurrence. It is not a U.Method (semantic way), not a U.MethodDescription (description), not a U.Role/RoleAssigning (assignment), and not a U.WorkPlan (plan/schedule).
CC‑A15.1‑2 (Required links).
Every U.Work MUST reference:
(a) isExecutionOf → U.MethodDescription (the spec followed), and
(b) performedBy → U.RoleAssignment (the assigned performer in context).
CC‑A15.1‑3 (Time window).
Every U.Work MUST carry a closed interval [t_start, t_end] (or an explicitly marked open end for in‑flight work) and, where relevant, location/asset.
CC‑A15.1‑4 (Context anchoring & judgement).
A U.Work MUST be judged inside a declared U.BoundedContext (the judgement context).
- By default, the judgement context is the context of the referenced MethodDescription.
- If performedBy references a RoleAssigning in a different context, there MUST exist an explicit Bridge (U.Alignment) or policy stating cross‑context acceptance. Otherwise, the Work is non‑conformant in that context.
CC‑A15.1‑5 (RoleAssigning validity).
The performedBy RoleAssigning’s timespan MUST cover the Work interval. If it does not, the Work is invalid or must be re‑judged in a context that allows retroactive assignments.
CC‑A15.1‑6 (Parameter binding). Parameters declared by the Method/MethodDescription MUST have concrete values bound at Work creation/start and recorded with the Work. Defaults in the spec do not satisfy this requirement.
CC‑A15.1‑7 (Capability check).
All capability thresholds stated by the Method/MethodDescription MUST be checked against the holder in performedBy at the time of execution (or at defined checkpoints). Violations must be flagged on the Work outcome.
CC‑A15.1‑8 (Acceptance criteria). Success/failure and quality grades MUST be determined by the acceptance criteria declared (or referenced) by the Method/MethodDescription in the judgment context. The verdict is recorded on the Work.
CC‑A15.1‑9 (Resource honesty).
All consumptions and costs (energy, materials, machine‑time, money, tool wear) SHALL be booked only to U.Work (not to Method, MethodDescription, Role, or Capability). Estimates may live in specs; actuals live in Work.
CC‑A15.1‑10 (Mereology declared). If a Work has parts, the chosen part relation(s) must be declared (temporal‑part, episode‑part, operational‑part, concurrent‑part). Ambiguous mixtures are forbidden.
CC‑A15.1‑11 (Γ_time selection). For any roll‑up, the judgement context MUST declare which temporal coverage operator applies: union (utilization) or convex hull (lead time). Silent mixing is prohibited.
CC‑A15.1‑12 (Γ_work aggregation). Aggregation of resource ledgers across Work parts MUST specify an overlap policy (e.g., “attribute shared machine‑time to parent only”) to prevent double‑counting.
CC‑A15.1‑13 (Identity & retries).
A retry MUST be modeled as a new Work linked via retryOf. Interruptions that are treated as the same run must be modeled as episodes (resumptionOf) per a context‑declared episode policy.
CC‑A15.1‑14 (Concurrency & ordering).
Overlaps and precedences among Work MUST use interval relations (overlaps, precedes, contains/within). Implicit “step order” claims are not admissible evidence.
CC‑A15.1‑15 (Cross‑context evidence). If a Work is to be accepted in multiple contexts (e.g., regulatory + operational), either: (a) re‑judge it in each context, or (b) provide Bridges that map acceptance criteria/units/roles; never assume cross‑context identity by name.
CC‑A15.1‑16 (Spec changes during run). If the MethodDescription version changes mid‑run, the Work MUST either: (a) split into episodes bound to respective specs, or (b) record an explicit spec override event in the judgement context. Silent substitution is forbidden.
CC‑A15.1‑17 (Distributed performers). If multiple RoleAssignings jointly perform the same top‑level Work (e.g., multi‑agent orchestration), the Work MUST either: (a) designate a lead RoleAssigning and list others as concurrent parts, or (b) be modeled as a parent Work with child Works per RoleAssigning.
CC‑A15.1‑18 (Logs ≠ Work by themselves). Logs/telemetry are evidence for a Work; they do not constitute a Work unless bound to (spec, performer, time window) and judged in a context.
- Input: a finite set S of Work instances or Work parts.
- Output: either (a) the union of their intervals, or (b) the convex hull [min t_start, max t_end] — as declared by context and KPI.
- Invariants:
  - Idempotent: Γ_time(S ∪ S) = Γ_time(S).
  - Commutative: order of elements irrelevant.
  - Monotone: if S ⊆ T then coverage(S) ⊆ coverage(T) (for union) or hull(S) ⊆ hull(T) (for hull).
- Usage guidance:
  - Use union for utilization/availability (how much of the clock time the asset was actually busy).
  - Use hull for lead/cycle time (elapsed from first touch to last release).
  - Manager’s tip: Write the choice near the KPI; many disputes are just a hidden union‑vs‑hull mismatch.
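Both coverage readings are a few lines of code. A sketch with simplified numeric intervals (no calendars or time zones); the function names are illustrative.

```python
# A sketch of the two Gamma_time coverage readings over [start, end) intervals.
def gamma_time_union(intervals):
    """Union of intervals -> total busy time (utilization reading)."""
    merged = []
    for s, e in sorted(intervals):
        if merged and s <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], e))
        else:
            merged.append((s, e))
    return sum(e - s for s, e in merged)

def gamma_time_hull(intervals):
    """Convex hull [min t_start, max t_end] -> elapsed lead time."""
    starts, ends = zip(*intervals)
    return max(ends) - min(starts)

runs = [(1.0, 3.0), (2.0, 4.0), (6.0, 7.0)]   # two overlapping runs plus one later
assert gamma_time_union(runs) == 4.0          # busy: [1,4] and [6,7]
assert gamma_time_hull(runs) == 6.0           # elapsed: [1,7]
```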
- Input: a finite set S of Work instances or parts with resource ledgers.
- Output: an aggregated ledger (materials, energy, machine‑time, money, tool wear) with an explicit overlap policy.
- Invariants:
  - Additivity on disjoint parts: if intervals/resources are disjoint by policy, totals add.
  - No double‑count: overlapping costs must follow the declared policy (e.g., count once at parent).
  - Traceability: each aggregated figure must be reconcilable to contributing Work IDs.
- Typical policies:
  - Parent‑attribution: shared fixed costs at parent; variable costs at children.
  - Pro‑rata by wall‑time: split overlaps by relative durations.
  - Driver‑based: allocate by a declared driver (e.g., CPU share, weight, priority).
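The parent‑attribution policy, as one worked example; ledger keys and the shared_keys parameter are illustrative assumptions, not FPF vocabulary.

```python
# A sketch of Gamma_work with a parent-attribution overlap policy.
def gamma_work(parent_ledger, child_ledgers, shared_keys=("machine_time",)):
    """Sum child ledgers, but book shared keys once, at the parent (no double-count)."""
    total = dict(parent_ledger)
    for ledger in child_ledgers:
        for key, amount in ledger.items():
            if key in shared_keys:
                continue                       # already attributed at the parent
            total[key] = total.get(key, 0) + amount
    return total

parent = {"machine_time": 2.0}                 # hours, booked once at the parent run
children = [{"machine_time": 2.0, "energy_kwh": 1.2},
            {"machine_time": 2.0, "energy_kwh": 0.8}]
assert gamma_work(parent, children) == {"machine_time": 2.0, "energy_kwh": 2.0}
```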
When a Work is recorded, perform these three quick checks:
- Spec–Context Check. Does isExecutionOf refer to a MethodDescription defined in the judgement context (or bridged to it)? If no, the Work is out‑of‑context; either change context or add a Bridge.
- RoleAssigning–Context Check. Is performedBy’s RoleAssigning valid in the same context (or bridged)? If no, the Work is unassigned for that context; remedy via a valid RoleAssigning or a policy exception.
- Standard–Outcome Check. Do the Work’s inputs/outputs and metrics satisfy the acceptance criteria from the spec as interpreted in that context? If no, the Work fails or is “conditionally accepted” per context policy.
Manager’s mnemonic: Context, assignment, Standard → CAC. Fail any → the Work is not acceptable here (perhaps acceptable elsewhere).
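The three gates compose into a single boolean. A sketch where contexts are plain strings and Bridges are declared as context pairs; all names are illustrative.

```python
# The three CAC gates as one boolean sketch; field names are illustrative.
def cac_ok(work, bridges=frozenset()) -> bool:
    spec_ok = (work["spec_context"] == work["judgement_context"]
               or (work["spec_context"], work["judgement_context"]) in bridges)
    assign_ok = (work["assignment_context"] == work["judgement_context"]
                 or (work["assignment_context"], work["judgement_context"]) in bridges)
    standard_ok = work["meets_acceptance_criteria"]
    return spec_ok and assign_ok and standard_ok   # fail any -> not acceptable here

run = {"judgement_context": "audit", "spec_context": "ops",
       "assignment_context": "ops", "meets_acceptance_criteria": True}
assert not cac_ok(run)                              # ops-only run fails in audit...
assert cac_ok(run, bridges={("ops", "audit")})      # ...until a Bridge is declared
```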
- “The log is the process.” Dumping telemetry without binding (spec, performer, context) → Not Work. Create a Work, link the log as evidence.
- Silent cross‑context acceptance. “Ops accepted it, so audit accepts it.” → Add a Bridge or re‑judge in audit context.
- Spec drift in mid‑run. Swapping SOP v5→v6 without recording → Split into episodes or record override.
- Budget on the method. Charging costs to Method or Role → Book only to Work; keep estimates in specs.
- Part ambiguity. Mixing retries, episodes, and operational parts with no declared relation → Choose and declare the part relation.
- Union/hull confusion. Changing KPI coverage silently between reports → Declare the Γ_time policy per KPI.
- Double‑count in overlaps. Summing child and parent resource ledgers → Declare and apply an overlap policy.
- Backfill links. For existing logs, create Work records and attach isExecutionOf and performedBy.
- Publish the episode policy. Decide when an interruption keeps identity vs forces a new run.
- Choose Γ_time per KPI. Put “union” or “hull” in the KPI definition; stop arguing in meetings.
- Set an overlap policy. Write one sentence on how shared costs are allocated; apply consistently.
- Pull plans out. Move calendars to U.WorkPlan; let Work record actuals.
- Parameter blocks. Make parameters explicit and bind them at start; your root‑cause analyses will get 10× easier.
| Benefits | Trade‑offs / mitigations |
|---|---|
| Auditable reality. Costs, time, and quality attach to concrete runs; root‑cause analysis and accountability improve. | More records. You create Work instances; mitigate with templates and automation. |
| Sound roll‑ups. Γ_time/Γ_work turn roll‑ups from hand‑waving into declared policy; KPIs become comparable. | Policy discipline. You must choose union vs hull and an overlap policy; write it once. |
| Cross‑context clarity. CAC checks prevent silent model drift; bridges make acceptance explicit. | Bridge upkeep. Keep mappings short and focused; review at releases. |
| 4D extensional coherence. Parts/overlaps/retries stop double‑counting and identity confusion. | Learning curve. Teach episode vs retry; include examples in onboarding. |
- Builds on: A.1 Holonic Foundation; A.1.1 U.BoundedContext; A.2 U.Role; A.2.1 U.RoleAssignment; A.2.2 U.Capability; A.3.1 U.Method; A.3.2 U.MethodDescription.
- Coordinates with: A.15 Role–Method–Work Alignment (the “four‑slot grammar”); B.1 Γ (aggregation) for resource/time operators; E‑cluster lexical rules (L‑PROC/L‑FUNC).
- Informs: Reporting/KPI patterns; Assurance/evidence patterns (Work as the anchor for audits); Scheduling patterns (U.WorkPlan ↔ U.Work deltas).
- What is Work? How it went this time → dated, resourced, accountable.
- Four‑slot grammar: Who? RoleAssigning. Can? Capability. How? Method/MethodDescription. Did? Work.
- CAC checks: Context (judgement), assignment (valid RoleAssigning), Standard (acceptance criteria).
- Roll‑ups: Γ_time = union (utilization) or hull (lead time); Γ_work with a declared overlap policy.
- Episodes vs retries: same run split vs new run; write the policy.
- Resource honesty: actuals booked only to Work; estimates live in specs.
Operations live on time. Even with perfect roles, abilities, and methods, nothing ships unless we decide when and by whom concrete runs should happen, under what constraints and budgets. Teams need a first‑class concept for plans and schedules that does not get confused with:
- the semantic “way of doing” (that is U.Method),
- the written recipe (that is U.MethodDescription),
- the actual execution (that is U.Work), or
- the state laws (that is U.Dynamics).
U.WorkPlan is that missing anchor.
- “Workflow = schedule” conflation. Flowcharts or code are used as calendars; resource clashes and SLA misses follow.
- Plan/run blur. Gantt bars or Kanban tickets are reported as if the work already happened; audits and costing degrade.
- Spec/time leakage. People and calendars creep into MethodDescriptions; reuse and staffing agility collapse.
- No variance model. Without planned baselines, deviations in time, cost, and quality cannot be explained or improved.
- Structure entanglement. BoM and org charts get baked into “process” views; plans become brittle and unmaintainable.
| Force | Tension we resolve |
|---|---|
| Universality vs. domain idioms | One plan concept that fits hospitals, fabs, data centers, and research labs—while honoring local terms. |
| Commitment vs. flexibility | Plans must be firm enough to coordinate, yet easy to update as reality changes. |
| Intended vs. actual assignment | Plans may name intended performers; the actual assignment must still be checked at run time. |
| Budgets vs. actuals | Plans carry targets and reservations; only Work carries actual spend. |
| Decomposition vs. mapping | Plan tasks decompose conveniently; they do not force a shape on actual Work runs. |
U.WorkPlan is an U.Episteme that declares intended U.Work occurrences over a horizon, with planned windows, dependencies, intended performers (as role kinds or proposed RoleAssignings), resource budgets/reservations, and acceptance targets—within a U.BoundedContext.
Strict distinction (memory aid): Method = how in principle. MethodDescription = how it is written. WorkPlan = when, by whom in intent, under which constraints. Work = how it went this time.
A U.WorkPlan contains Plan Items (think: scheduled tasks/ops), each of which typically states:
- Target Method/Spec — the Method to be enacted and the MethodDescription intended for enactment.
- Planned window — e.g., earliest start/latest finish, timebox, recurrence (cron‑like), blackout periods.
- Role requirements — role kinds required (not people), optional proposed RoleAssigning(s) if pre‑assignment is allowed in the context.
- Capability thresholds — minimal abilities required of the performer (checked at run time).
- Resource budgets/reservations — planned energy/materials/machine slots/money; reservations on assets.
- Dependencies — precedence/overlap permissions; gates/approvals.
- Acceptance targets — quality windows/SLA targets to be judged when Work completes.
- Location/asset constraints — where the run is expected to take place.
- Links to Service promises (if any) — external commitments that this plan aims to satisfy.
Didactic guardrail: No logs or actuals belong in a WorkPlan; no step logic or solver internals either—that’s the Method/Spec.
| If you say… | In FPF it is… | Why |
|---|---|---|
| “The schedule for tomorrow’s surgeries” | U.WorkPlan | Calendar of intended runs (who/when constraints). |
| “The workflow for appendectomy” | U.MethodDescription (and U.Method) | Recipe and semantic way, not a calendar. |
| “The process already ran at 10:00” | U.Work | A dated run with resources and outcomes. |
| “The thermodynamic process path” | U.Work (occurrence) + U.Dynamics (model) | A realized trajectory plus its model, not a plan. |
| “The plan assigns Dr. Lee” | WorkPlan naming an intended RoleAssigning | Assignment is still validated at run time. |
| “The budget for Shift‑B” | WorkPlan (planned ledger) | Actual costs land on Work, not on the plan. |
L‑SCHED (lexical rule). In this document, words like schedule, calendar, rota, Gantt, plan point to U.WorkPlan unless explicitly redefined by a bounded context glossary.
Keep three separations crystal‑clear:
- Method composition (design‑time semantics) → produces new Methods.
- Work composition (run‑time occurrences) → produces parent/child runs with overlaps/episodes.
- Plan mereology (epistemic structure) → organizes Plan Items for coordination (phases, sprints, shifts), with precedence and resource reservations.
Common relations among Plan Items:
- Precedes_pl / DependsOn_pl — start/finish constraints and gates.
- MayOverlap_pl / MutuallyExclusive_pl — allowed overlaps vs exclusive windows.
- Refines_pl — a child plan item tightens windows/budgets of a parent.
- Alternative_pl — planned alternatives (e.g., backup rig, backup team).
Didactic rule: A Plan Item does not force an identical Work shape; mapping is via fulfilment and variance (see §6).
When reality happens, each U.Work may:
- Fulfil a Plan Item — link plannedAs → PlanItem.
- Partially fulfil — multiple Work instances share one Plan Item (e.g., split run), or one Work fulfils several Plan Items (e.g., consolidated batch).
- Deviate — execute with method/spec substitution, different window, different performer (still valid or policy‑exception).
- Be unplanned — Work with no Plan Item (emergency, ad‑hoc); must be labeled as such.
Variance dimensions the plan expects to report on:
- Schedule variance (Δt): early/late vs planned window.
- Cost variance (Δc): actual resource spend vs budget.
- Scope variance: different Method/Spec than planned (with justification).
- Quality variance: acceptance verdict vs target.
- Assignment variance: intended vs actual RoleAssigning.
Manager’s view: A plan that cannot report variance is a calendar picture, not a management tool.
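Variance reporting is then a mechanical diff between a Plan Item and its fulfilling Work. A sketch with illustrative field names; real plans would carry typed windows and ledgers.

```python
# A sketch of plan-vs-run variance along the five dimensions above.
def variances(plan_item, work):
    return {
        "schedule": work["end"] - plan_item["latest_finish"],      # Δt > 0 means late
        "cost": work["actual_cost"] - plan_item["budget"],         # Δc > 0 means over
        "scope": work["spec"] != plan_item["spec"],                # substitution?
        "quality": work["verdict"] != plan_item["target_verdict"],
        "assignment": work["performer"] != plan_item["intended_performer"],
    }

plan = {"latest_finish": 16.0, "budget": 100.0, "spec": "Appendectomy_v5",
        "target_verdict": "pass", "intended_performer": "OR_Team_A"}
run = {"end": 17.5, "actual_cost": 112.0, "spec": "Appendectomy_v5",
       "verdict": "pass", "performer": "OR_Team_B"}
print(variances(plan, run))   # schedule 1.5 late, cost 12.0 over, assignment True
```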
Use this as a human‑readable checklist (not a rigid schema):
- Horizon & cadence (e.g., “W36 surgeries, daily ETL”).
- Plan Items with: target Method/Spec, planned windows, dependencies.
- Role requirements (kinds) and intended assignments (optional, context‑lawful).
- Capability thresholds and safety envelopes.
- Resource budgets and reservations on assets.
- Acceptance targets (SLA/quality windows).
- Bridges if plan spans multiple contexts (operations ↔ audit/regulatory).
- Baseline/version and change notes (so variance is attributable).
- Policy pointers (episode policy, overlap policy for Work roll‑ups if needed for KPIs).
- Exceptions path (how ad‑hoc/emergency work is planned post‑factum).
- WorkPlan: OR_DayPlan_2025‑08‑12.
- Plan Items: Case#1 Appendectomy, Case#2 Hernia, with windows, Context assignments, and surgeon role kinds; anesthetist intended RoleAssigning provided.
- Fulfilment: Each surgery Work links to its Plan Item; variances computed (over‑run time, substitutions).
- WorkPlan: Fab_Maintenance_W36.
- Plan Items: Tool_42 chamber clean, Tool_13 calibration; MutuallyExclusive_pl with production slots.
- Fulfilment: Actual clean Work happens earlier; variance logged as early with cost underrun.
- WorkPlan: DC_Rollout_Phase‑2.
- Bridges: Ops context ↔ Security Audit context (different acceptance targets).
- Plan Items: Deploy Service A, Pen‑test A; dependencies across contexts.
- Fulfilment: Deployment Work passes ops targets; audit Work passes later — variance reported per context.
- Lenses tested: Did, Prag, Arch, Epist.
- Scope declaration: Universal; meanings of windows/budgets/permissions are context‑local via U.BoundedContext.
- Rationale: Elevates planning/scheduling to a first‑class episteme that coordinates Methods, RoleAssignings, and Work without conflation.
Every FPF architheory needs to measure various aspects of systems or knowledge artifacts. A dedicated measurement backbone (see C.MM‑CHR, Measurement & Metrics Characterization) already exists, prescribing the CSLC discipline – i.e. define a Characteristic, choose a Scale (with a Unit if applicable), record a Level/Value, and thus obtain a Coordinate on that scale, optionally mapping to a Score via a Gauge. However, historically multiple near-synonyms (“axis”, “dimension”, “property”, “feature”, "metric") have been used interchangeably for “what is being measured,” and often the aspect itself gets conflated with how it is expressed (units, ranges, labels). This pattern enters the FPF Kernel lexicon to canonize a single term for the measured aspect and enforce a clear separation between what is measured and how it is measured.
When measurement concepts are not kept rigorously distinct, several issues arise:
- Polysemy at the anchor. Teams say “dimension” or “feature” but mean slightly different things, so the very trait being measured is ambiguous.
- Arity mistakes. A relational quality (e.g. similarity between two items) might be treated as if it were an intrinsic property of one item, or vice versa, leading to logical errors.
- Expression conflation. The aspect being measured is often mixed up with its expression – for example, using “scale” or “axis” to mean both the quality and its unit or range. This leads to unsafe arithmetic (averaging ordinal ranks, comparing raw numbers from incompatible scales, etc.) because values get interpreted out of context.
In summary, projects lacking a canonical terminology for metrics risk miscommunication and pseudo-quantitative operations. Measurements of physical quantities, architectural attributes, or performance scores end up on incommensurate rails due to inconsistent naming and handling.
- F1 – Single anchor of meaning. Any numeric value is meaningless unless one can ask “value of what?”. The measurement’s meaning must be anchored in a single, clearly named aspect.
- F2 – Arity clarity. Some characteristics apply to a single entity (e.g. its mass or length), while others inherently relate multiple entities (e.g. distance between two points, coupling between modules, agreement between judges). If arity isn’t explicit, claims and calculations become corrupted.
- F3 – Scale integrity. Different kinds of scales permit different operations – e.g. you can average temperatures (ratio scale) but not ranks or grades (ordinal scale) without losing meaning. If one mixes values without regard to scale type or units, the result is nonsense (pseudo-arithmetic).
- F4 – Composition discipline. In complex evaluations, multiple measurements may need to be combined. Without a disciplined approach, people might perform ad-hoc math on apples and oranges (adding scores from unrelated characteristics, etc.). A proper pattern must require any combination to go through a defined monotonic Gauge (e.g. a weighted formula) instead of arbitrary aggregation.
- F5 – Transdisciplinarity. The measurement framework should work for any domain. The same conceptual scaffold must serve physical science (e.g. lab temperature readings), software engineering (e.g. module cohesion ratings), and even subjective assessments (e.g. figure-skating scores) without bias. One vocabulary, many CG‑frames.
- F6 – Open-endedness. As systems evolve, their performance or quality metrics also evolve. Rigid life-cycle stage labels (“Phase 1, Phase 2…”) don’t capture iterative improvement. The pattern should favor an open-ended state-space view (revisiting states via checklists, as in an RSG – RoleStateGraph with re-entry) over any fixed lifecycle with “terminal” stages.
Establish “Characteristic” as the one canonical construct for “what is measured.” In every FPF context, the aspect or trait being measured MUST be referred to as a Characteristic. This term replaces “axis” or “dimension” in normative usage (those may appear only as explanatory aliases in Plain register). By fixing a single name and schema, we cleanly separate a Characteristic from its Scale (and Unit), and from any observed Value/Level on that scale. The solution also differentiates single-entity vs multi-entity cases and binds all measurements to the standard CSLC sequence.
To enforce this solution, the following rules apply:
- A17-R1 (Canonical term). In all normative models and specifications, the measured aspect SHALL be referred to as a Characteristic. (Legacy terms “Axis” or “Dimension” are retired from technical vocabulary – see Part J Lexicon Update.)
- A17-R2 (Entity vs. relation subtype). Each Characteristic MUST declare its intended arity. An Entity-Characteristic applies to exactly one bearer (e.g. Temperature of a reactor, Evolvability of a software module), whereas a Relation-Characteristic applies to an ordered tuple of two or more bearers (e.g. Distance between two sensors, Coupling between modules, Agreement among reviewers). The arity is part of the definition and must be explicit wherever it is not obvious from naming.
- A17-R3 (Characteristic space). Any set of defined Characteristics spans a multi-dimensional CharacteristicSpace. Movement or evolution is then described as trajectories through this space (with states revisited or refined over time), rather than as a linear lifecycle through preset phases. This ensures measurements feed into open-ended state modeling rather than locking into “end states.”
- A17-R4 (Lexical guardrails). Normative text SHALL use only the canonical measurement terms: Characteristic, Scale, Level, Value, Coordinate, Score, Gauge, Unit. Synonyms like axis, dimension, metric, grade, property, etc. are forbidden in formal usage. (They may appear in narrative explanations or user-facing documentation only if clearly defined as aliases for the canonical terms.) Authors MUST NOT use deprecated terms in identifiers or formal statements, and any didactic alias should be introduced with an explicit mapping to the official term. These lexical rules uphold clarity and are further detailed in E.10 LEX‑BUNDLE.
- A17-R5 (Symbol policy). Γ is reserved for holonic composition; 𝒢 : Coordinate → Score is reserved for metric‑level gauges. The two MUST NOT be conflated, and documents SHALL NOT reuse Γ for gauges. If an ordered Scale is declared, its polarity SHALL be fixed, and 𝒢 MUST be monotone w.r.t. that polarity.
- A17-R6 (Declared polarity). Every ordered Scale SHALL declare one of: ↑‑better, ↓‑better, or non‑applicable (for purely nominal scales). For interval/ratio scales, polarity fixes the intended order of comparison.
- A17-R7 (Monotonicity against polarity). If a template declares an ordering polarity on its Scale (↑ better / ↓ better), then 𝒢 MUST be monotone w.r.t. that polarity: higher‑is‑better (resp. lower‑is‑better) in coordinates implies ≥ (resp. ≤) in scores. (A symbolic restatement follows this rule list.)
- A17-R8 (Arity declaration). Authors SHALL mark a Characteristic as U.EntityCharacteristic (applies to exactly one bearer) or U.RelationCharacteristic (applies to a relation of cardinality ≥ 2). Examples: Cohesion → entity‑level; Coupling → relation‑level.
- A17-R9 (Relational scale anchors). For relation‑level cases, the Scale’s admissible values SHALL be defined over the tuple domain (e.g., distances, similarities, inter‑role latencies). Any ambiguity that re‑reads a relational Characteristic as unary is forbidden.
- A17-R10 (Intension vs Description). The Characteristic remains the intensional object; any rubric, catalogue of levels, or examples are descriptions. Keep the intensional Characteristic distinct from its descriptive episteme (cf. U.Episteme roles: Object–Concept–Symbol).
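Writing ⪯_S for the declared order on a Scale, A17-R5 through A17-R7 reduce to the following monotonicity condition on a gauge 𝒢 (a symbolic restatement, not an additional rule):

```latex
\begin{aligned}
x \preceq_S y &\;\Longrightarrow\; \mathcal{G}(x) \le \mathcal{G}(y)
  && (\uparrow\text{-better polarity}),\\
x \preceq_S y &\;\Longrightarrow\; \mathcal{G}(x) \ge \mathcal{G}(y)
  && (\downarrow\text{-better polarity}).
\end{aligned}
```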
R17 — CharacteristicSpace declaration. When an architheory reasons about change, it SHALL name the CharacteristicSpace (the set of Characteristics, with Scales, units, and topology assumptions) in which motion is considered.
R18 — RSG framing, not lifecycle. Change narratives SHALL be framed as movement on a RoleStateGraph (RSG) with checklists that certify state acquisition; “lifecycle” staging is deprecated. (A.17 conforms to the open‑ended evolution stance of the Kernel.)
I7 — Vector interpretation. A U.Coordinate vector may collect multiple coordinates for multi‑Characteristic reasoning; composition into a single Score, if desired, is an explicit new 𝒢 on that vector.
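One admissible reading of I7, sketched in Python: a coordinate vector is composed into a single Score only through an explicitly declared 𝒢. The weighted-sum form and its preconditions are assumptions of this illustration; non-negative weights keep the composition monotone per A17-R7:

```python
from typing import Sequence

def vector_gauge(coords: Sequence[float], weights: Sequence[float]) -> float:
    """Explicit 𝒢 over a U.Coordinate vector: weighted sum, normalized to 0..1.

    Declared preconditions (not inferred): every coordinate has already been
    oriented to ↑-better and rescaled to [0, 1]; weights are non-negative,
    which makes the composition monotone in each coordinate.
    """
    if len(coords) != len(weights) or any(w < 0 for w in weights):
        raise ValueError("declare one non-negative weight per coordinate")
    total = sum(weights)
    return sum(c * w for c, w in zip(coords, weights)) / total if total else 0.0

score = vector_gauge([0.9, 0.4, 0.7], weights=[2.0, 1.0, 1.0])  # -> 0.725
```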
In a physical system (U.System): Consider a Distance Characteristic defined for a pair of physical objects. For example, two machines in a factory have a Distance of 3.5 meters between them. Here Distance is a Relation-Characteristic (applies to the pair), with an associated Scale (e.g. a ratio scale in meters), and the measured 3.5 m is a Coordinate on that scale. If we instead look at an Engine Temperature Characteristic (unary), a particular engine might have a Temperature of 350 K at some moment – Temperature (the Characteristic) is clearly separated from how it’s measured (Scale in Kelvin) and the reading (350, a Coordinate on that scale).
In an epistemic context (U.Episteme): Consider a Formality Characteristic to rate a documentation artifact’s rigor. We might define an ordinal Scale with named Levels such as Informal, Semi-formal, Formal. A given specification document can then be said to have High Formality – meaning it occupies the “Formal” Level on the Formality Scale. Here Formality (Characteristic) captures what we measure about the document, while the tiered Scale (with qualitative levels) expresses how we categorize it. Because we use an ordinal scale, we can rank documents by Formality, but we would not average “Semi-formal” and “Formal” (avoiding meaningless arithmetic on an ordinal metric). In another knowledge context example, one could define a Characteristic Reliability for a knowledge source with a percentage Scale from 0 to 100%. An article’s reliability might be 85% – which is only interpretable by knowing it refers to “Reliability” on a 0–100% Scale (i.e. a specific Coordinate on that Characteristic’s scale).
This pattern is deliberately domain-neutral and introduces no bias toward any particular discipline or measurement type. By enforcing a uniform lexicon, A.17 actually mitigates bias: it prevents disciplinary jargon from creeping into core definitions (ensuring, for instance, that a software metric isn’t given a vague custom term when it’s fundamentally a Characteristic). The Didactic lens is strongly served: using one precise name per concept improves clarity for all audiences. There is a slight initial cost in re-labeling legacy terms (e.g. renaming “dimensions” to Characteristics), but this is offset by the long-term Cognitive Elegance (P‑1) – the framework becomes easier to learn and less prone to misinterpretation. No single domain’s terminology dominates, and the pattern explicitly supports both quantitative (physics-like) and qualitative (judgment-based) measurements, reflecting Pragmatic neutrality. The requirement of open-ended state-space thinking aligns with P‑10 (Open-Ended Evolution), ensuring we don’t bake in lifecycle biases that assume development must terminate at a final stage. In summary, A.17 imposes a disciplined vocabulary that is broad enough for all fields and free of hidden assumptions, thereby avoiding subtle ontological or cultural biases in the measurement model.
When authoring or reviewing FPF-compliant metrics, use the following checklist to ensure Characteristic normalization is applied:
- Declared Characteristic: Have you explicitly named a Characteristic for each aspect being measured, instead of using generic terms? (e.g. use “Reliability” as a Characteristic name rather than saying “this dimension”).
- Arity Explicit: Is it clear whether the Characteristic is unary or relational? If a metric involves a relationship, are the participating entities (pair, tuple, etc.) identified in its definition?
- Separate Scale/Unit: For each Characteristic, have you defined the Scale (and Unit, if applicable) separately, rather than embedding units or ordinal terms in the name of the Characteristic? (e.g. “Length (m)” should be captured as Characteristic = Length, Unit = meter).
- Scale-appropriate operations: Are you only performing comparisons or calculations that make sense for the declared scale type? (No averaging of ranks, no mixing of units – ensure ordinal Characteristics aren’t treated like numbers, and interval/ratio values respect zero and units.)
- No implicit aggregation: If multiple measurement readings are combined, is there a defined Gauge (with monotonic logic) that produces a Score? Avoid any ad-hoc “overall score” that simply adds or averages raw values from different Characteristics.
- Canonical terminology in use: Are you using the terms Characteristic, Scale, Level/Value, Coordinate, Score, Gauge, Unit in all formal descriptions? Confirm that no deprecated synonyms (axis, dimension, etc.) appear in technical content or identifiers (they can appear in Plain explanations only with proper reference to the canonical term).
- Open-ended progression: (If applicable) When modeling progress or change using metrics, have you considered using a state-space of Characteristics rather than a fixed sequence of phases? This check is to encourage leveraging the open-ended nature of CharacteristicSpaces, especially in evolutionary or iterative processes.
(Failure to satisfy the above indicates a violation of this pattern’s intent. The LEX-BUNDLE rules in E.10 provide automated checks for term usage, and MM-CHR templates enforce explicit Characteristic/Scale definitions.)
By instituting Characteristic as the single term and enforcing the CSLC structure, this pattern yields several positive outcomes:
- Unambiguous metrics: Every measurement has a single, well-defined anchor of meaning – the Characteristic – eliminating guesswork about “what is this number about?”.
- Separation of concerns: We cleanly separate what is measured from how it’s represented. The Characteristic names the quality of interest, while the Scale/Unit defines the expression. A raw value now means nothing by itself – it must be read as “X units on the Y scale of Z Characteristic,” which greatly reduces misinterpretation.
- Unary vs. relational clarity: The explicit distinction between Entity-Characteristic and Relation-Characteristic ensures that relational properties (like “distance between A and B” or “consistency among experts”) aren’t mistakenly treated as inherent properties of a single object. This guards against logical errors and data modeling mistakes.
- Cross-domain comparability: All measurements, regardless of domain, follow the same CSLC rails. This means a temperature in Kelvin and a reliability score in percent can each be traced through Characteristic → Scale → Coordinate. They can’t be directly compared unless designed to be, which is good: any composite scoring must be done via an explicit Gauge mapping to a common Score scale. The pattern thus enables interoperability (through well-defined Score bridges) while preventing illegitimate comparisons.
- Consistent evolution framing: By retiring the idea of a bespoke “lifecycle” for every process and instead viewing changes as movement in a CharacteristicSpace, the pattern aligns metric thinking with state-based reasoning (e.g. as used in dynamic models). There is no artificial “final state” for improvement – a system can always evolve to a new coordinate without violating a lifecycle Standard. This open-ended view encourages continuous improvement and refinement, echoing FPF’s emphasis on evolutionary development.
There are few downsides. One consequence is that modelers must learn the canonical terms and possibly refactor existing documentation (a short-term effort). Also, enforcing scale integrity means quick-and-dirty aggregate scores are not allowed unless justified via a Gauge – this introduces a healthy “pause” to ensure composite metrics are well-founded. Overall, the benefits in clarity and correctness far outweigh the overhead. Teams gain a lingua franca for metrics, and the risk of metric abuse (mixing apples and oranges) is significantly reduced.
The Canonical Characteristic pattern is a direct response to recurring measurement pitfalls. By insisting on “one precise name per concept”, it upholds Strict Distinction (A.7), ensuring that the framework never treats two different ideas as one. For instance, earlier practice might label both a requirement category and its score as “dimension,” causing confusion; with A.17, the aspect is a Characteristic and its score is separate, so each idea has its place. This clarity is pedagogically vital (P‑2 Didactic Primacy): readers and contributors immediately know what a term means and how to interpret any value associated with it.
The solution also draws on fundamentals of measurement theory (Stevens’ levels of measurement) to prevent misuse. By encoding scale types and unit handling into our patterns, we avoid the “pseudo-quantitative” fallacies – no more averaging things like risk levels or adding up grades as if they were true numbers. In effect, A.17 puts a safeguard around P‑1 Cognitive Elegance and P‑7 Ontological Parsimony: we use a minimal, universal set of measurement constructs, and we avoid bloating the conceptual space with domain-specific or redundant terms. One canonical set of terms also makes the framework more teachable and composable across contexts, since architheories and projects aren’t inventing new synonyms that others must decipher.
Importantly, distinguishing Entity vs Relation Characteristics future-proofs the reasoning model. It enforces a modeling rigor seen in domains like physics (where properties vs. relations are carefully distinguished) and brings it to architecture and knowledge domains. This rigor supports advanced reasoning in FPF – for example, A.3.3 (Dynamics) can treat system state variables as a well-defined set of Characteristics, and assurance patterns can trace evidence metrics unambiguously to the exact aspect measured. It also means any attempt to compare or combine metrics has to be explicit (via Gauges), which inherently improves transparency and auditability (a key FPF goal).
Finally, retiring the “lifecycle” vocabulary in favor of state-space trajectories aligns with FPF’s open-ended evolution principle. It acknowledges that improvement is not a predefined path but a navigable space. This shift in mindset (from lifecycle stages to checklisted state transitions) removes an implicit bias that systems ought to reach a “final” maturity stage – instead, it keeps the door open for perpetual refinement, which is philosophically aligned with continuous learning and adaptation.
In summary, A.17 is the linchpin that turns a loose collection of measurement practices into a coherent, principle-driven system. It rationalizes the language, thereby rationalizing thought: by speaking in one clear voice about measurements, FPF ensures that every number in the system can be trusted to answer “value of what, on what scale, relative to what context.” This rationale is reflected in improved model integrity and cross-domain trust in the meaning of metrics.
- Builds on / Elaborates: FPF Core Measurement Schema (as outlined in C.16). A.17 lifts the metric template concepts from C.16 into a kernel-level rule. It also reinforces A.7 Strict Distinction by giving each measurement concept a unique name and forbidding overloaded terms.
- Constrains: All other patterns and architheories that define or use metrics. For example, A.3.3 U.Dynamics (system dynamics) must name its state variables as Characteristics with proper scales (it cannot refer to them loosely as “KPIs” without context). Similarly, any service-level agreements (A.2.3 U.Service) or assurance calculations (B.3, D.3 patterns) that involve measurements are governed by this canonical terminology (no unwarranted synonyms or unit confusion, per ISO/IEC 80000, ISO/IEC 25024, QUDT, and SOSA/SSN best practices). The pattern’s lexical rules are part of the LEX-BUNDLE (E.10) – any FPF-conformant context must adhere to these naming conventions.
- Coordinates with: A.18 (CSLC-KERNEL), which defines the minimal Characteristic/Scale/Level/Coordinate Standard in detail. A.17 provides the vocabulary and basic distinctions (what a Characteristic is, and its arity), while A.18 applies this to ensure each measurement template is well-formed. Also coordinates with C.KD-CAL and C.CHR-CAL (Knowledge Dynamics Calculus, Characterization Calculus) – those architheories use the Characteristic/Scale constructs to build domain-specific metrics (e.g. knowledge quality scores) and rely on A.17’s canon for consistency.
- Anticipates: E.10 Lexical Discipline rules – A.17’s enforcement of a single term and controlled aliases is a concrete instance of the lexical uniformity mandated in E.10. It also paves the way for F.7 Concept-Set Bridges in Unification patterns, since external ontologies for quantities (ISO 80000, QUDT, etc.) can be mapped cleanly onto FPF Characteristics now that the term is fixed. In short, A.17 is a foundational lexicon pattern that (a) ensures internal consistency and (b) simplifies alignment with external standards for measurable properties.
Aliases (for narrative use only): “Axis” (≈ Characteristic), “Point” (≈ Coordinate). (These colloquial aliases may be used in Plain language explanations, but never in formal identifiers or normative text.)
We often need to characterize some aspect of a subject (be it a single artefact or a relationship between artefacts) in a rigorous way. Whether it’s recording a physical quantity, an architectural property, or a performance rating, the characterization must:
- remain domain-neutral (work for engineering metrics, subjective scores, etc.),
- ensure that two measurements are comparable if and only if they share the same defined aspect and scale, and
- accommodate both ordered tiers (qualitative levels like Low/Medium/High) and numeric magnitudes (continuous or interval values) without mixing them up.
In FPF’s kernel, the CSLC pattern (Characteristic–Scale–Level–Coordinate) provides the minimal vocabulary and constraints to achieve this. It defines how one Characteristic ties to one Scale, and how any measured value can be treated as a Coordinate on that scale (with an optional named Level if the scale is discrete or tiered). The context here is the need for a unified Standard so that every single measurement in any architheory can be interpreted and compared on common grounds.
Uninterpretable values. A raw number or label means nothing without knowing what aspect it measures and how it is measured. The string “4”, the label “High”, or the real number 9.81 convey no insight unless we know which Characteristic they pertain to and the Scale that gives them meaning. In cross-disciplinary work this ambiguity is magnified: a “5” could be a risk rank (ordinal), a length in meters (ratio), or a satisfaction score (perhaps interval). Common failure modes include:
- In ordinal settings (e.g. expertise levels Novice < Skilled < Expert), one can rank values but not meaningfully add or average them. Treating ordinal labels like numbers (e.g. averaging Novice=1, Expert=3) produces invalid results.
- In cardinal settings (e.g. seconds, meters, kelvins), arithmetic operations do make sense – but only if units are respected and zero is meaningful (for ratio scales). If we strip away units or mix scales (seconds vs. minutes), we again get nonsense.
Without a strict Standard, one team might treat “High” and “Medium” as having a numeric gap, another might average 4 (on a 5-star scale) with 4 (as 4 seconds) because both are “4”. Inconsistent practices make cross-domain reasoning impossible. We need a kernel-level solution that fixes: (a) the aspect being measured, (b) the scheme by which it’s measured, and (c) the type of scale structure (ordinal vs. metric), and that ensures each reported value is bound to that scheme. At the same time, the Standard should not force artificial numeric detail where it isn’t applicable (e.g. we shouldn’t assign meaningless numbers to purely qualitative tiers just to satisfy a structure).
- F1 – Transdisciplinarity. The pattern must uniformly handle measurements in physical domains (e.g. length, time, temperature), system attributes (e.g. a module’s coupling or reliability), and human judgments (e.g. user satisfaction scores). It needs to be neither overly quantitative (alienating softer domains) nor overly qualitative (lacking precision for hard science).
- F2 – Comparability vs. freedom. We want to compare “like with like” – e.g. two readings of the same Characteristic on the same Scale – with absolute confidence. At the same time, the system should allow different Scales for the same Characteristic when necessary (for example, one project might measure Quality on a 0–5 star scale, another on a 0–100 percentage scale). The pattern must permit such flexibility without letting those differing scales be conflated.
- F3 – Ordinal vs. cardinal integrity. The Standard should preserve the nature of the data: order-only vs order+distance. If something is ordinal (ranks, grades), the framework should prevent unwarranted numeric operations on it. If it’s cardinal (real-valued with units), the framework should enable arithmetic but still keep track of units and zero. In essence, it must protect ordinal data from “leaking” into interval arithmetic.
- F4 – Named tiers vs. continuous magnitudes. In many domains, named Levels (tiers or grades) are useful – e.g. Technology Readiness Levels or bond credit ratings – whereas in others, a continuous scale is needed. The pattern should support optional Level labels (for tiered scales) without forcing every scale to have such labels. In other words, Levels are an add-on for discrete/tiered scales, not a requirement for truly continuous measures.
- F5 – Method agnosticism. The kernel Standard should say what must be defined (Characteristic, Scale, etc.) but not prescribe how measurements are obtained. Whether a value comes from a sensor reading, a simulation, or an expert judgment is up to the respective architheory (e.g. Sys-CAL vs. KD-CAL). The pattern must not bake in any process or scoring methodology; it only ensures that once a measurement exists, it’s well-formed and comparable. This avoids locking in any particular assessment method.
Adopt a minimal “one characteristic – one scale – one coordinate (value)” Standard for all measurements. In the FPF kernel, any metric must bind exactly one Characteristic to exactly one Scale, and any observation produces one Coordinate (value) on that Scale (with an optional Level name if the scale has discrete tiers). We nickname this the CSLC clause:
Exactly one Characteristic + exactly one Scale ⇒ one Coordinate (value), with an optional Level.
Concretely, the parts of this clause are defined as follows:
- Characteristic: the aspect or feature being measured (the “CG‑frame” along which comparison is made). It answers “What are we measuring?” – e.g. Distance, Temperature, Quality, Reliability.
- Scale: the organized set of possible values that the Characteristic can take, including the type of scale (ordinal, interval, or ratio), the measurement Unit (if applicable), and any bounds or structure. The Scale defines “How do we measure it?” – e.g. “meters on a linear scale from 0 up to 1000” or “ratings 1 through 5 with ordering only”.
- Coordinate: a concrete measured value that locates the subject on the chosen scale. This could be a number (for a numeric scale) or a category label (for an ordinal scale). It answers “What is the result?” – e.g. 7.4 (meters), or Expert (level).
- Level (optional): a named tier or category on the scale, used only if the scale is tiered or discretized. For example, an ordinal scale might have Levels Low, Medium, High. A Level is essentially a human-friendly label for certain coordinates or ranges. On purely continuous scales, Level is not used.
Using this CSLC structure, every measurement is unambiguous and self-contained: the Characteristic tells us the context, the Scale tells us how to interpret the value, and the Coordinate is the outcome on that scale (with a Level label if appropriate). Notably, this pattern forbids bundling multiple characteristics into one metric – each metric template is one-characteristic-per-template to keep semantics crisp. If something needs to assess multiple factors, it should be modeled as multiple CSLC metrics or a higher-level composite (see §8 below). This one-aspect-one-scale rule is what allows unambiguous comparison and prevents hidden complexity.
Finally, the solution ensures tier optionality: If a domain uses named Levels, we include them; if not, we don’t force it. For example, one can have a Bug Severity Characteristic with Levels {Minor, Major, Critical} on an ordinal scale, whereas a Length Characteristic would have a continuous scale (no predefined levels, just units). Both fit the pattern.
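A type-level sketch of the clause, using Python as notation: the constructor admits exactly one Characteristic and one Scale, requires a Unit for quantitative scales, and permits Levels only on tiered scales. Field names and the validation details are assumptions of this illustration, not a normative schema:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class MetricTemplate:
    characteristic: str                       # exactly one Characteristic
    scale_kind: str                           # "nominal" | "ordinal" | "interval" | "ratio"
    unit: Optional[str] = None                # required for quantitative scales
    polarity: Optional[str] = None            # "up", "down", "target", or None
    levels: Optional[Tuple[str, ...]] = None  # named tiers, only for tiered scales

    def __post_init__(self) -> None:
        if self.levels is not None and self.scale_kind not in ("nominal", "ordinal"):
            raise ValueError("Levels are an add-on for tiered scales only")
        if self.scale_kind in ("interval", "ratio") and self.unit is None:
            raise ValueError("quantitative scales must declare a Unit")

jump = MetricTemplate("Jump Distance", "ratio", unit="m", polarity="up")
severity = MetricTemplate("Bug Severity", "ordinal", polarity="down",
                          levels=("Minor", "Major", "Critical"))
```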
In a physical scenario (U.System): Consider an athlete’s long jump. We define a Characteristic Jump Distance with a Scale “meters (m)” ranging from 0 upward (ratio scale with meters as the unit). When the athlete jumps and lands at 7.45 m, we record a Coordinate of 7.45 m for the Jump Distance Characteristic. Here, Jump Distance is the Characteristic, the meter-scale is the declared Scale, and 7.45 m is the value (Coordinate). Because this is a cardinal measurement, we can meaningfully say one jump is 1.5 m longer than another, etc. Now consider another metric in the system: Battery Health of a device, which might be categorized qualitatively. We could define an ordinal Scale with Levels like Good, Fair, Poor for the Battery Health Characteristic. If a particular device is rated “Poor”, that is a Coordinate on the Battery Health scale (with Poor as the Level name). No arithmetic is done on these labels, but we can order devices by health (Good > Fair > Poor). Both examples illustrate the one-characteristic-one-scale rule: the jump’s distance is not combined with any other aspect; the battery’s health is evaluated on its own defined scale.
In a knowledge context (U.Episteme): Consider measuring an author’s expertise in a certain domain. We introduce a Characteristic Expertise Level for a person, with an ordinal Scale defining tiers such as Novice, Competent, Expert. Alice might be assessed at Expert level in software engineering – that’s a Coordinate on the Expertise Level scale for the Characteristic “Software Engineering Expertise”. Bob might be at Competent. We cannot average Alice’s and Bob’s levels, but we can say the scale is ordered (Expert > Competent > Novice). For a more quantitative episteme example, consider a Characteristic Hypothesis Confidence for a scientific claim, with a Scale 0–1 (or 0–100%) representing probability or confidence level (ratio scale). One hypothesis might have a confidence of 0.95, another 0.7; these are Coordinates on the Confidence scale. We can compare them numerically (0.95 is higher than 0.7, and 0.95 implies a stronger belief), and we could even combine multiple confidence values through Bayesian formulas (if justified) – but crucially, we would only do so in a way that respects their scale (probabilities combined properly, not treated as arbitrary scores). The Expertise Level and Hypothesis Confidence examples show how the CSLC pattern accommodates both an ordinal qualitative measure and a continuous quantitative measure in the knowledge domain, each with one Characteristic and one defined Scale.
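Reduced to code, the two episteme examples permit only scale-appropriate operations; a brief sketch using the values from the prose above:

```python
# Ordinal: Expertise Level — ordering is meaningful, arithmetic is not.
EXPERTISE = ("Novice", "Competent", "Expert")     # declared order, low -> high

def outranks(a: str, b: str) -> bool:
    return EXPERTISE.index(a) > EXPERTISE.index(b)

assert outranks("Expert", "Competent")            # Alice outranks Bob
# No mean() over these labels is provided: ordinal averaging is forbidden.

# Ratio: Hypothesis Confidence on a 0–1 scale — numeric comparison is valid.
h1, h2 = 0.95, 0.70
assert h1 > h2                                    # 0.95 expresses stronger belief
```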
The CSLC-Kernel pattern is crafted to be maximally inclusive of different measurement types while imposing just enough structure to ensure consistency. It does not privilege any particular domain or modality of measurement: a subjective 5-star rating is treated with the same formal rigor as a physical length in meters. In terms of the FPF principle lenses, this pattern consciously balances the Architectural/Ontological needs (clear structure for data) with the Pragmatic/Didactic needs (flexibility and clarity for users). There is little risk of cross-domain bias here because the pattern explicitly supports both extremes (ordinal and ratio, qualitative and quantitative). By remaining method-agnostic, it avoids bias toward certain validation techniques – e.g. it doesn’t assume every measurement comes from an instrument (it could come from expert judgment just as well). One might argue the pattern enforces a somewhat formal approach to what could be informal measures (forcing definition of scale and characteristic), but this formalism is lightweight and is precisely what makes the metric interpretable. In summary, A.18 embodies neutrality: it’s a container that fits any content as long as that content is well-labeled. It reinforces P‑2 (Didactic Primacy) by making all metrics self-explanatory in terms of what and how, and respects P‑1 (Cognitive Elegance) by using a minimal, uniform scheme. No cultural or disciplinary assumptions are baked in – an anthropologist’s “Cultural Significance” scale can live alongside an engineer’s “Voltage” scale with equal status. The pattern’s requirement for declaring polarity (“higher is better” vs “lower is better” vs target range) further avoids bias in interpretation – it prevents the assumption that “more is always better,” which might be untrue in many contexts (e.g. for error rates, lower is better). All these considerations ensure that A.18 introduces no hidden skew; it merely provides a fair playing field for all metrics.
When defining a new metric template or using measurements, practitioners SHALL verify the following:
- One characteristic, one scale: Each metric template binds exactly one Characteristic to exactly one Scale. If you find a metric trying to cover multiple things at once, split it into separate metrics.
- Polarity declared: For any ordered Scale (ordinal/interval/ratio), the polarity (“higher‑is‑better”, “lower‑is‑better”, or “targeted optimum”, symmetric or asymmetric around a declared target) SHALL be declared at the template that binds a Characteristic to a Scale. State whether higher values are better, lower values are better, or an optimal range/target exists. (For example: “higher is better” for a performance score, “lower is better” for an error count, or “target 37 °C” for body temperature, where deviation in either direction is worse.) This ensures that anyone comparing two values knows which way is “up.”
- Unit and level clarity: If the Scale is quantitative, specify the Unit (e.g. seconds, meters, %) and make sure all values include or assume that unit. If the Scale has named Levels, list them clearly and use them consistently. Do not use the same label to mean different things on different scales, and avoid using unit terms in Characteristic names (the unit belongs with the scale).
- Scale-appropriate operations only: Only perform those comparisons or calculations that are valid for the given scale type. For a nominal scale, you can check equality but not order. For an ordinal scale, you can order or rank values but not do math like “A minus B.” For interval scales, addition/subtraction is OK (with unit conversion if needed), but ratio comparisons (A is twice B) might not make sense without a true zero. For ratio scales, all arithmetic operations are allowed with proper attention to units. This check prevents logical errors (e.g. averaging “High” (3) and “Medium” (2) and getting 2.5 — which is meaningless).
- No bare numbers: Never present a raw number or value without its context of Characteristic and Scale. If someone sees “42” in your output, they should also see or know “42 of what, measured how.” A reader who is not aware of the metric’s template should not be left guessing what a given value signifies. In practice, this means labeling reports and data with the metric name or identifier so that values can be traced back to their meaning.
- Template bridges for cross-metric comparison: If you intend to compare or aggregate measurements from different templates (different Characteristics/Scales), ensure an explicit Gauge or conversion is defined. For example, if you need to combine a “usability score” (0–5 stars) with a “security score” (0–100%), you might define a new Score that maps both onto a common 0–10 scale via monotonic functions; a sketch follows this list. Without such a bridge, do not directly mix metrics – keep them separate in analysis. This guarantees that any cross-metric reading has a well-founded basis.
- Level optionality respected: If your Characteristic doesn’t naturally have tiers, don’t force it to have Level names (you can leave the Level concept unused). Conversely, if your Characteristic is commonly described in categories, it’s fine to define Levels for clarity. The key is to use the Level field intentionally: either not at all (for truly continuous measures) or in a fixed, non-overlapping way (for discrete categories). Do not use “Level” for something that behaves like a continuous value (it would be confusing to assign a label where a number would do, or vice versa).
- Comparability test: Two Coordinates are comparable iff they share the same Characteristic and Scale (including unit and polarity). Otherwise, comparison is possible at Score level only, after a declared Gauge maps both onto a bounded range.
(The above serve as normative checkpoints. Many of these are automatically supported by using the standard metric templates in software: e.g. the system will enforce one Characteristic per template, require a unit for ratio scales, etc. The Lexical rules from A.17/E.10 are assumed: use canonical names and notations for all parts of the metric.)
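The “template bridges” checkpoint can be sketched as two declared monotone maps into one common Score scale. The 0–10 range and the linear maps below are assumptions of this illustration; any declared, monotone, bounded Gauge would serve:

```python
def stars_to_score(stars: float) -> float:
    """Usability, 0–5 stars (↑-better) -> common 0–10 Score; monotone linear map."""
    if not 0.0 <= stars <= 5.0:
        raise ValueError("stars outside the declared Scale bounds")
    return stars * 2.0

def percent_to_score(pct: float) -> float:
    """Security, 0–100 % (↑-better) -> common 0–10 Score; monotone linear map."""
    if not 0.0 <= pct <= 100.0:
        raise ValueError("percentage outside the declared Scale bounds")
    return pct / 10.0

# Only after both readings sit on the same declared Score scale may they meet:
combined = 0.5 * stars_to_score(4.0) + 0.5 * percent_to_score(82.0)  # -> 8.1
```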
Adopting the minimal CSLC Standard in the kernel yields a number of benefits:
- Universal interpretability: Every measurement is intrinsically self-describing. One cannot have a “mystery number” floating around; by design you must know it’s X (Coordinate) on Y Scale of Z Characteristic. This dramatically reduces miscommunication in reports and data exchange. An engineer and an analyst can share a metric knowing they interpret it the same way, because the context travels with the value. A Level label is present only where the scale is tiered or discrete.
- Safe comparison and aggregation: Values can only be compared when they belong to the same Characteristic and Scale (or when an authorized Gauge converts them). This prevents the common error of comparing apples to oranges. When cross-comparison is needed, the pattern funnels us into creating a proper normalization (Gauge), which improves the soundness of composite scores. Essentially, it’s now impossible to accidentally average an uptime percentage with a user satisfaction rating, for example, without explicitly defining how to map one to the other.
- Flexibility across domains: The pattern is transdisciplinary. It doesn’t matter if the measurement is temperature in Kelvin, length in inches, code complexity in “abstract points,” or user satisfaction on a five-level Likert scale – all are handled uniformly. This makes it easier to plug new architheories or domains into FPF, since they don’t need special rules for their metrics; they just instantiate the CSLC template in their context.
- Ordinal and cardinal handled with equal rigor: By explicitly classifying scales, the pattern gives ordinal data the respect it deserves (no pretending it’s numeric) and gives ratio data the formal context it needs (units, zero, etc.). This balance means both qualitative assessments and quantitative measurements live side by side, each with their constraints respected. Domains that lean heavily on categorical ratings benefit from the Level concept (with no pressure to assign fake numbers), and domains that use real measurements benefit from unit enforcement and type-aware computations.
- Clarity in multi-factor scoring: The prohibition of implicit multi-characteristic measures means that any “overall” score or index has to be constructed out of known pieces. This tends to improve the transparency of complex scoring schemes. If an organization wants to create a single index from 5 different metrics, A.18 forces them to introduce a defined Gauge function that combines those 5 Coordinates into one Score, with declared monotonicity and bounds. The consequence is that composite metrics become auditable and debatable (you can examine the weighting or formula) rather than opaque sums.
- Methodological neutrality (and innovation): Because the kernel imposes no method for obtaining the values – only how to frame them once obtained – architheories and tool builders are free to innovate in how they measure things. The Standard just ensures that once they do, everyone else can understand and use the results correctly. This separation of concerns (what vs. how) accelerates multi-disciplinary collaboration: a social scientist’s observational scale can feed into a systems model without any confusion, as long as it’s couched in the CSLC terms.
On the downside, users must do a bit more upfront work to define their metrics. The pattern’s requirements (declare Characteristic, define Scale, etc.) mean one cannot simply say “we’ll track a risk score” without further detail. In practice, this is a desirable trade-off: the extra effort (perhaps a few minutes to set up a metric template) prevents far greater confusion down the line. Another possible trade-off is multiplicity of scales – the pattern allows the same Characteristic to have multiple scales (in different contexts or versions), which might fragment data if not managed (e.g. two teams measuring “Performance” on different scales). However, it also provides the remedy: make the difference explicit and, if needed, build a conversion Gauge. This explicitness is actually beneficial, as it highlights when “Performance (0–5)” is not directly comparable to “Performance (Percentage)”. In short, any fragmentation is out in the open and can be dealt with via alignment or bridging.
Overall, A.18’s consequences are overwhelmingly positive: measurements become first-class, well-understood citizens of the model. The cost is a slight increase in definition effort and discipline, which is a small price for coherence. Once this pattern is in place, higher-level patterns (in Parts B, C, D) that reason about metrics can rely on it. For example, trust calculations (Part D) can assume that any metric they consume has a known scale and meaning, and knowledge dynamics algorithms (Part B or C) can safely combine evidence knowing the comparisons are valid. The minimal CSLC Standard is thus a foundational enabler for robust, cross-domain assurance in FPF.
The rationale behind A.18 is to enforce semantic clarity at the data level, thereby solving a host of downstream problems. Without this pattern, one must constantly ask, “What does this number mean? Can I combine these two values?” – questions that have led to many project errors. By building the answers into the framework (“every number knows its unit, scale, and aspect”), we front-load the work and eliminate ambiguity. The solution directly addresses each force:
- Transdisciplinarity: We include both ordinal and cardinal mechanisms so that no discipline’s metrics are left out. This was informed by observing multi-disciplinary teams: e.g., in a single project, a human factors specialist might rate usability (ordinal) while an engineer measures throughput (ratio). A.18 gives them a common language and prevents one from misusing the other’s data. It embodies the idea that universal structure enables local freedom: everyone’s metric can plug in, as long as they specify it properly.
- Comparability vs. freedom: The pattern strikes a balance by tying comparability to explicit commonality. If two metrics truly measure the same thing in the same way, then of course you can compare them – they’ll share Characteristic and Scale. If they differ, the framework doesn’t stop you from defining them (freedom), but it does stop you from conflating them inadvertently. The introduction of polarity declarations is a direct response to this tension: it adds a tiny burden (must declare “higher is better” etc.) but yields big pay-off in avoiding mis-ordered interpretations and enabling safe composite scoring (monotonic Gauges).
- Ordinal vs. cardinal separation: The rationale here is guided by measurement theory: we want to preserve information content. Treating ordinal data with only order operations preserves all its information; doing more (like adding them) injects false information. The pattern’s strictness on scale types forces modelers to be honest about what their data can and cannot do. This not only prevents errors but also encourages best practices (e.g. if you find you desperately want to average an ordinal score, perhaps you should refine it into an interval scale in your methodology). The outcome is a framework that respects both the qualitative and quantitative realms appropriately, aligning with FPF’s Pillar of Pragmatism – use formalism where it’s justified, but not beyond its limits.
- Optional Levels: Requiring Levels in every case would have been too rigid (not everything has named tiers), but not supporting them would fail domains that rely on them (like maturity models or grading systems). The rationale for making Level optional is to accommodate both. We saw in practice that many metrics naturally form tiers (e.g. technology readiness levels TRL 1–9) and giving them a slot in the model (instead of burying them in definitions) makes those metrics much easier to work with and integrate. Meanwhile, continuous metrics carry no baggage of unused fields. This design was checked against existing standards (like ISO 25024 for quality measures) to ensure we aren’t deviating from industry expectations: indeed, separating the concept (Characteristic) from the scheme (Scale) aligns well with standards, and including an optional categorization aligns with common practice in capability maturity models, etc.
- Method neutrality: The decision to not include any measuring procedures in A.18 (no specific formulas, no mandated evidence type) comes from the principle of separation of concerns. The kernel should provide the what and the structural how, while architheories provide the procedural how. This keeps the kernel lean (P‑1 Cognitive Elegance) and allows domain experts to implement whatever method is appropriate, merely committing to wrap their results in the CSLC form. By doing so, we avoid any bias toward empirical vs analytical, or manual vs automated measurements – FPF welcomes all, as long as they conform to the schema. This was rationalized by examining case studies: e.g., some reliability metrics come from formal proofs (analysis), others from testing (empirical) – the kernel can host both results identically, requiring only that each result says what it measured and on what scale.
In essence, A.18 is the infrastructure of meaning for metrics. It may appear as a simple template, but it’s profoundly enabling. It forces clarity at creation time, so we don’t have to infer or debate meaning at usage time. The pattern’s strength lies in preventing errors that don’t have to happen. It encodes lessons from both metrology (the science of measurement) and everyday data science (where unit errors and mis-comparisons are infamous issues). The rationale is backed by these lessons: fix the interpretation rules in the design, and you eliminate entire classes of confusion and mistakes. By having this in the kernel, every architheory – from knowledge scoring to system performance – benefits immediately, and their results become interoperable to a degree that would be impossible without a common structure.
- Extends/Uses: A.17 (CHR-NORM) – A.18 explicitly builds on the canonical terminology established in A.17. It uses the term Characteristic as defined there (and no other synonyms) and carries forward the edict that “axis/dimension” be treated as mere narrative aliases. It also leverages the Entity-vs-Relation Characteristic distinction from A.17: Section 7.4 of this pattern references tests for disambiguating relational metrics. Essentially, A.17 provides the lexical and conceptual groundwork (what a Characteristic is, and the basic vocabulary), while A.18 provides the structural and normative rules for linking Characteristics to measurements.
- Core foundation for metrics: This pattern underpins the Measurement & Metrics Characterization spec (C.MM‑CHR) – the architheory that implements metric storage and computation. In MM-CHR, every U.DHCMethodRef and U.Measure follows the CSLC format defined by A.18. By lifting CSLC rules to the kernel, we ensure all architheories (like KD-CAL for knowledge dynamics, Sys-CAL for systems, or any custom CAL/CHR) share a common approach to metrics. A.18 also informs the design of CHR-CAL (Characterisation Calculus), which generalizes measurable property templates: CHR-CAL relies on the one-Characteristic-per-metric assumption and the comparability rules set here to compose higher-level characterizations.
- Enables dynamic reasoning: A.18’s insistence on well-defined Scales allows patterns like A.3.3 U.Dynamics (system dynamics models) to incorporate measured Characteristics as state variables without ambiguity. For example, a stateSpace in a dynamics model can be explicitly defined as a set of Characteristics (each with units and ranges), making simulations and traces dimensionally consistent. If A.18 were not in place, one model might treat “performance” as a 1–5 score and another as a probability – combining them would be incoherent. With A.18, such differences must be reconciled via a Gauge or kept separate, preserving coherence in multi-model analyses.
- Coordinates with assurance patterns: Many patterns in Part B and D (for trust, assurance, and ethics) involve scores and metrics. For instance, B.3 (Assurance Levels) computes overall assurance from evidence scores; A.18 ensures those input scores are well-defined and comparable (e.g. all are 0–1 or all are percentages, with polarity noted). D.4 (Trust-Aware Calculus) might combine trust metrics across domains – again, A.18 provides the common ground so that a “trust score” coming from an operational metric and one coming from a social rating can be normalized and compared meaningfully. In summary, any pattern that aggregates or uses measurements is constrained (in a positive way) by A.18’s rules. They “plug into” this framework.
- Constrained by lexical rules: This pattern’s content is part of the formal lexicon governance. It works within E.10 LEX-BUNDLE, which means the terms Characteristic, Scale, Coordinate, Level, etc., are controlled vocabulary. A.18 localizes some generic requirements from A.17 (for example, A.17 mandates polarity in principle; A.18 requires it be declared per template in practice). It also aligns with external standards: by having explicit scale types and units, it dovetails with ISO/IEC measurement terminology and allows straightforward mapping to frameworks like ISO 80000 (quantities and units) and Stevens’s scale types. This relation to standards is deliberate – it eases F.9 (Alignment Bridge) construction to external ontologies by having a clean internal schema (A.18 provides that schema). In effect, A.18 is where FPF’s internal consistency meets external compatibility, ensuring our measurement semantics can relate to those outside FPF when needed.
Non‑duplication note. This pattern reuses the canonical measurement concepts (U.Characteristic, CSLC terms) from A.17/A.18 and relies on C.16 (MM‑CHR) for normalization evidence. It does not redefine units or normalization semantics. UNM names admissible re‑parameterizations within one U.BoundedContext and thereby induces a context‑local congruence over charts, written ≡_UNM, which is a specialization of the framework’s congruence notion used in B.3 (and instantiated for epistemes in B.1.3). A NormalizationFix selects a canonical representative of an ≡_UNM class. Timebases and laws remain out of scope (see A.3.3).
Locality & governance. A UNM is context‑local: it is declared within a single U.BoundedContext for a given CharacteristicSpace (or family of charts) and enumerates (a) the admissible classes of NormalizationMethod, (b) the invariants they must preserve, (c) closure under composition (and inverses where defined), and (d) validity/versioning rules (editions, windows). Semantics and evidence backing remain under C.16; A.19 constrains how UNM artifacts are named and used in state/comparability logic.
UNM — Unified Normalization Mechanism. A mechanism that packages admissible re‑parameterizations for a CharacteristicSpace so that values can be normalized for safe comparison within one U.BoundedContext.
NormalizationMethod. A concrete method within UNM (intensional definition of how to normalize a slot or a vector of slots). Method classes SHALL be scale‑appropriate: ratio → positive‑scalar conversion; interval → affine transform; ordinal → order‑preserving monotone map; nominal → categorical re‑map; tabular → lookup table (LUT) with uncertainty annotations (where declared). These classes are named consistently across the spec as: ratio:scale, interval:affine, ordinal:monotone, nominal:categorical, tabular:LUT(+uncertainty).
NormalizationMethodDescription. The epistemic description of a NormalizationMethod (documentation/spec with bounds, validity window, evidence anchors).
NCV — NormalizedCharacteristicValue. The result of applying a NormalizationMethod to a coordinate value (or vector) in a CharacteristicSpace. Note: Characteristics themselves are not normalized; values (coordinates) are.
NormalizationMethodInstance. A concrete, editioned use of a NormalizationMethod in a CN‑frame or embedding (binds method → slot(s), edition, validity window). Use this term when referring to stored/ID’d artifacts (e.g., in logs), to avoid overloading map.
UNM‑congruence (≡_UNM). Context‑local equivalence relation over charts generated by the admissible NormalizationMethods declared in the UNM; two charts are ≡_UNM iff they are related by a chain of admissible, scale‑appropriate transformations that preserve the declared invariants.
IndicatorChoicePolicy. Principles/rules for selecting which Characteristics (or their NCVs) become Indicators for decisions.
Indicator. The result of applying an IndicatorChoicePolicy to a set of Characteristics/NCVs; an Indicator is not a target value by itself and not any normalized value by default.
Indicatorization (policy step). Selecting Indicators is a separate, policy‑governed step; producing NCVs alone does not yield Indicators.
Removal of κ‑notation. The previous κ symbol and derived phrases (e.g., “κ‑operator”) for normalization are retired in favor of explicit names: Normalization, NormalizationMethod, NCV, UNM. This retirement does not affect unrelated uses of κ as a generic metavariable in logic or requirements schemas elsewhere in the spec.
Lexical note (map vs Map). In this document, lowercase map denotes a mathematical function only. Capitalized Map (e.g., DescriptorMap) retains its Part‑G meaning as a method type that encodes subjects into a declared Space; it is disjoint from NormalizationMethod/UNM. Do not use “map/Map” as a synonym for NormalizationMethod, NormalizationMethodInstance, or NCV.
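A sketch of the scale-appropriate method classes listed above, each producing an NCV from a raw coordinate value. Representing an NCV as a bare float, and the helper names, are assumptions of this illustration:

```python
from typing import Callable, Sequence

NCV = float  # NormalizedCharacteristicValue: values are normalized, not Characteristics

def ratio_scale(factor: float) -> Callable[[float], NCV]:
    """ratio:scale — positive-scalar conversion, e.g. metres -> millimetres."""
    if factor <= 0:
        raise ValueError("ratio conversions must use a positive scalar")
    return lambda v: v * factor

def interval_affine(a: float, b: float) -> Callable[[float], NCV]:
    """interval:affine — v -> a*v + b with a > 0, e.g. Celsius -> Fahrenheit."""
    if a <= 0:
        raise ValueError("affine re-parameterization must preserve orientation")
    return lambda v: a * v + b

def ordinal_monotone(order: Sequence[str]) -> Callable[[str], NCV]:
    """ordinal:monotone — order-preserving map of named ranks onto 0..1."""
    return lambda label: order.index(label) / (len(order) - 1)

c_to_f = interval_affine(9 / 5, 32)
assert c_to_f(100.0) == 212.0
rank = ordinal_monotone(["Novice", "Competent", "Expert"])
assert rank("Expert") == 1.0
```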
Intent. Establish a kernel‑level state‑space type—U.CharacteristicSpace—so that any holon’s state changes (e.g., a system’s condition or a role’s readiness) can be formalized as trajectories in a space of declared Characteristics with chosen Scales. For epistemes, state is governed by ESG; F–G–R are assurance coordinates, not a state space. This gives every U.Dynamics model a well‑typed stateSpace and enables formal state certification (using RoleStateGraph checklists) instead of narrative stage transitions.
Scope. Pattern A.19 defines:
- the type U.CharacteristicSpace as a finite product of slot value sets (per A.18),
- the slot construct for each factor (a pairing of a Characteristic with a chosen Scale),
- minimal structural overlays (optional order, topology, metric hooks) that downstream architheories may attach to a space, and
- the hook U.Dynamics.stateSpace : CharacteristicSpace – i.e. the requirement that any dynamics model declare a CharacteristicSpace for its state space (typing only).
A.19 does not introduce any new measurement aspects, composite metrics, or normalization semantics (those are provided by C.16 (MM‑CHR) under UNM), and it does not define how dynamics evolve over time or any predictive laws (see A.3.3 for dynamics semantics). The focus here is purely on the structure of state spaces and their comparability.
Lexical guard (“map”). In normative text, lowercase map refers only to a mathematical function; it MUST NOT be used as a synonym for NormalizationMethod, NCV, or UNM. Capitalized Map keeps its suffix‑family meaning (e.g., DescriptorMap) and is unrelated to normalization. Use NormalizationMethod for the transform and NCV for its output.
Lexical guard (“carrier”). In kernel prose, Carrier (capitalized) names U.Carrier (a symbol bearer). Do not use “carrier” for set‑theoretic supports; prefer ValueSet/underlying set. A.19 therefore uses ValueSet(slot) for the set that supplies values to a slot.
FPF’s kernel already standardizes what is measured (a Characteristic, per A.17) and how it is measured (a Scale with units, via the CSLC Standard in A.18). We also have a measurement substrate (U.DHCMethodRef, U.Measure) to handle individual observations. What has been missing for modeling dynamics is a canonical “Context” in which multiple Characteristics can co-exist so that complex states (with many aspects) and their trajectories are well-typed and comparable. Without a formal CharacteristicSpace, teams either hard-code ad-hoc vectors (often with inconsistent assumptions) or fall back to informal lifecycle stories (“phases” or stages) that contradict the kernel’s open-ended, non-linear evolution paradigm. The Architectural patterns (A-cluster) expect that U.Dynamics.stateSpace will be a set of declared Characteristics each with a declared Scale. Pattern A.19 delivers exactly this capability, leveraging the CSLC measurement discipline without reinventing any arithmetic or unit-handling logic.
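A sketch of the construct this pattern supplies: a finite product of slots, each pairing one Characteristic with one chosen Scale, and a state typed as exactly one value per slot. The Python rendering below is illustrative; A.19 itself fixes only the typing:

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Slot:                      # one factor: a Characteristic paired with a Scale
    characteristic: str
    scale_kind: str              # "nominal" | "ordinal" | "interval" | "ratio"
    unit: Optional[str] = None

@dataclass(frozen=True)
class CharacteristicSpace:       # finite product of slot value sets (A.18 per slot)
    slots: Tuple[Slot, ...]

    def typecheck(self, state: Tuple[float, ...]) -> None:
        """A state is well-typed iff it supplies exactly one value per slot."""
        if len(state) != len(self.slots):
            raise TypeError("state must carry one coordinate per declared slot")

pump_space = CharacteristicSpace((
    Slot("Temperature", "ratio", unit="K"),
    Slot("Vibration", "ratio", unit="mm/s"),
))
pump_space.typecheck((351.0, 2.4))   # a trajectory is a sequence of such states
```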
- P1 — “Feature vector” drift. In practice, teams often assemble state vectors or “feature” lists with implicit or mismatched units and scales. Without a formal space, one coordinate’s value can’t safely be compared or combined with another’s (e.g. mixing degrees Celsius with percentages). CSLC guarantees consistency per Characteristic, but a bundle of multiple “characteristics” remains under-specified if we lack a unified space definition.
- P2 — Lifecycle bias. Absent a formal state space, system change tends to be described in terms of fixed stages or phases (design phases, maturity levels, etc.). This conflicts with FPF’s open-ended stance: in FPF a role’s state model (RSG) allows re-entry and refinement of states rather than one-way lifecycle stages with an “end.” We need a space model that treats evolution as continuous movement, not a one-directional sequence.
- P3 — Incoherence across CN‑frames. Different modeling “CN‑frames” (architecture vs. epistemic vs. operational) often choose different sets of qualities to measure (different sets of characteristics). Later, however, we may need to compose these models or project one into another. Without a kernel notion of how one state space can be a subspace of or embedded in another, any integration of models will be ad hoc and error-prone.
- P4 — Relational measurements. Some Characteristics are inherently relational (e.g. a Coupling between two components, or Distance between points). Naïvely forcing such traits into a single-object feature vector loses critical information (arity, symmetry). The kernel already distinguishes single-entity vs multi-entity Characteristics (A.17); we must preserve that distinction in the state space so that a relational metric isn’t treated as an intrinsic one by mistake.
- P5 — The geometry temptation. When defining a state space, it’s tempting to assume or inject additional structure (ordering of states, topologies for continuity, metrics for distance) as if inherent. But the kernel must remain minimal and domain-neutral: it should not smuggle in analysis methods or domain-specific norms under the guise of geometry. Any such structure should be added explicitly by specialized architheories, not baked into the core definition of a space.
- F1 – CSLC integrity at scale. When combining multiple measurements into a state, we must uphold the CSLC discipline for each component: each coordinate has a defined Characteristic, Scale type, unit, and (if applicable) polarity. We need to do this without redefining or duplicating that single-characteristic integrity – the multi-dimensional space should simply enforce CSLC per slot.
- F2 – Transdisciplinarity & lexical clarity. The state space framework must work for quantitative physical metrics (ratio scales, continuous units), qualitative assessments (ordinal scales, tiers), and mixtures thereof. It must not be biased toward one domain’s notion of measurement. At the same time, to avoid confusion, the lexicon must remain canonical: we use Characteristic (not “axis/dimension”) as the formal term for a measured aspect, regardless of domain, per A.17’s naming convention.
- F3 – Arity and semantics. Lifting various Characteristics into a unified space should not obscure their nature. If a Characteristic is defined as a relation (multi-entity property), the state space must represent it appropriately (e.g. as a coordinate that is a tuple or a symmetric relation) rather than flattening it into an unrelated scalar. Entity-specific vs relational properties must remain clear in the space’s structure.
- F4 – Minimal core, extensible further. The kernel should provide only the bare essentials: a carrier for state with proper typing. It should be possible to impose additional structure like order, topology, or metrics if and when needed by downstream theories, but these must be optional overlays. The core space definition should be minimalistic to allow broad use, yet capable of extension for advanced needs.
- F5 – Composability of spaces. We need well-defined operations to project a state space to a subspace (dropping some Characteristics), embed one space into a larger space (mapping coordinates from one context to another), and take products of spaces (combining different state spaces into a joint space). These operations are crucial for composing sub-models, comparing alternatives, or aligning different “CN‑frames” (for example, linking an architectural model’s state space with a metrics model’s space). The approach must support such composition in a principled way.
- F6 – Alignment with RSG (state machines). In FPF, formal state certification is done via checklists on RoleStateGraphs (A.2.5). Our state space concept must complement that: i.e. the state of a holon remains an intensional concept (defined by criteria), but those criteria are evaluated against the measurable coordinates in a CharacteristicSpace. The design must allow checklists to map observed coordinates to named states and enable re-certification as states evolve, rather than locking states into a static progression.
Let I be a finite index set labeling a collection of slots. Each slot i (for i ∈ I) is defined as a pair:
slot_i = (Characteristic_i, Scale_i),
where:
- Characteristic_i is a U.Characteristic (with an explicit arity, i.e. either an entity-Characteristic or a relation-Characteristic as defined in A.17), and
- Scale_i is a chosen Scale for that Characteristic (with a specified scale type and unit, per A.18 and the MM‑CHR rules).
Then a CharacteristicSpace (CS) is formally the Cartesian product of all slot value sets:
CS = ∏_{i ∈ I} ValueSet(slot_i)
In other words, a point (state) in the space consists of one coordinate value for each slot. A state x in CS can be seen as a total function x(i) that picks a value from each slot’s ValueSet (for every i ∈ I, x(i) ∈ ValueSet(slot_i)). By kernel mandate, any U.Dynamics.stateSpace SHALL be bound to some instance of CharacteristicSpace, and all states or trajectories described by that dynamics model MUST lie within that space’s value set. (The actual dynamic laws and time progression are handled in A.3.3; A.19 only defines the state‑space container and its properties.)
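A minimal sketch in Python of this product construction, assuming hypothetical names (Slot, CharacteristicSpace, contains; FPF mandates no such encoding): each slot carries a membership predicate for its ValueSet, and a state is a total assignment of one in-range value per slot.

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, Tuple

@dataclass(frozen=True)
class Slot:
    """One basis element: a Characteristic paired with exactly one Scale (A19-CS-1)."""
    slot_id: str                       # stable technical identifier (A19-CS-2)
    characteristic: str                # U.Characteristic name, per A.17
    scale: str                         # chosen Scale (type + unit), per A.18
    value_set: Callable[[Any], bool]   # membership predicate for ValueSet(slot)

@dataclass(frozen=True)
class CharacteristicSpace:
    """CS = product over i in I of ValueSet(slot_i); the basis is ordered and named."""
    basis: Tuple[Slot, ...]

    def contains(self, state: Dict[str, Any]) -> bool:
        """A state is a total function i -> ValueSet(slot_i): one value per slot."""
        return (set(state) == {s.slot_id for s in self.basis}
                and all(s.value_set(state[s.slot_id]) for s in self.basis))

# Illustrative two-slot space: a ratio-scale temperature and an ordinal readiness tier.
cs = CharacteristicSpace(basis=(
    Slot("temp", "Temperature", "ratio:degC", lambda v: isinstance(v, (int, float))),
    Slot("tier", "ReadinessTier", "ordinal:{low,mid,high}", lambda v: v in {"low", "mid", "high"}),
))
assert cs.contains({"temp": 21.5, "tier": "mid"})
assert not cs.contains({"temp": 21.5})   # partial assignments are not states
```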
To ensure consistency and comparability, a CharacteristicSpace must obey the following invariants:
- A19-CS-1 (Exactly one per slot). Each slot binds exactly one Characteristic to exactly one Scale (including a specific Unit or kind, if applicable). This mirrors the CSLC clause of “one aspect – one scale”: there are no ambiguous or compound mappings in a single slot. (If a Characteristic can be measured on multiple scales, only one is chosen for a given space; others would require separate slots or a different space.)
- A19-CS-2 (Named basis). A CharacteristicSpace SHALL publish an ordered list of its slots as its basis. Each slot in the basis has a stable identifier (or key) that can be used in data structures or APIs. These basis names should be treated as technical identifiers (machine-readable tokens); any human-friendly alias or description for a slot should be provided only in the Plain register as a non-normative aid (per E.10). In short, the identity and order of slots in the space are explicit and stable.
- A19-CS-3 (Immutability of meaning). Once a space is in use, the meaning of each slot is fixed. A slot’s (Characteristic, Scale) pair MUST NOT be retroactively altered. If requirements change (e.g. a different scale or a revised definition of the Characteristic), one MUST define a new version of the space (or a new slot) rather than silently changing the existing one. When a space is versioned or a slot replaced, an explicit embedding (mapping from the old space to the new space) should be published to relate historical states to the new coordinates. This ensures past data remains interpretable and prevents semantic drift.
- A19-CS-4 (Arity preservation). If a Characteristic_i is defined as a relation (multi-entity characteristic), then slot i represents a relationship among multiple entities. The coordinate value at such a slot is a tuple (with the appropriate entity types) rather than a simple scalar. The slot’s declaration SHALL indicate the relation’s symmetry or directionality as part of its meaning (this should align with how the Characteristic was originally defined in its template). In essence, relational Characteristics retain their arity in the space, so that we don’t confuse, say, “Coupling between X and Y” with an intrinsic property of X or Y alone.
- A19-CS-5 (No hidden normalizations or aggregations). A CharacteristicSpace itself carries no implicit normalizations or formulas for aggregating coordinates. It is a descriptive structure, not a scoring mechanism. Any computation that combines or transforms coordinates (e.g. producing an overall “score” or weighted sum) must be defined outside the core space – typically in an architheory that leverages the measurement framework (see C.16’s UNM/NormalizationMethod registry). In particular, any handling of polarity (which way “better” is) or weighting of different slots happens in those external formulas, not inside the space definition. The space provides the raw coordinates; the logic to interpret or aggregate them is added by domain‑specific layers with explicit disclosure of how it’s done.
- A19-CS-6 (Slot meta completeness). Where applicable, each slot SHALL declare admissible_domain and missingness semantics (e.g., codes for missing, censored, not-applicable), consistent with the Characteristic’s Scale and with MM‑CHR. This prevents silent domain drift and clarifies how absent values participate in predicates and comparisons.
By default, a CharacteristicSpace has no assumed ordering or metric structure – it is just a Cartesian product of value sets. However, a space MAY declare certain structural attributes as opt-in metadata (i.e. informative annotations that architheories can rely on, but not enforced by the kernel). These optional overlays include:
- Product order. A (partial) order on the space, typically the product of per‑slot orders for slots whose Scales are at least ordinal, oriented by each slot’s declared polarity. Declaring an order is useful for dominance (Pareto‑style) reasoning; without declaration, no ordering of states is assumed.
- Product topology. A topology on the space, typically the product topology when slots that are quantitative (interval or ratio scales) need continuity considerations. Declaring a topology is useful if continuity or convergence arguments are relevant (e.g. to say a sequence of states approaches a limit state). By default, without declaration, no topological structure is assumed on the space.
- Distance function. A distance (a metric, or a generalized distance such as a pseudometric or quasi‑metric) on the space, declared together with its domain of applicability, to support distance‑based reasoning (see the metric discipline below).
Lexical note: Here “distance metric” strictly means a mathematical distance function (or a generalized distance such as a pseudometric or quasi‑metric) on the state space. This is not to be confused with metrics as performance measures in MM‑CHR. In the Tech register, avoid the noun metric; refer to U.DHCMethod/U.DHCMethodRef for measurement templates (see C.16). Any distance overlay on a CharacteristicSpace must not conflict with scale semantics; it is an additional analysis structure, not a redefinition of measurement meaning.
These overlays are entirely optional and have no effect on the core meaning of the space – they exist only to support particular needs (like making dominance, continuity, or distance reasoning possible) in models that require them. If needed, they should be added deliberately by an architectural theory rather than assumed. This way, any ordering or metric properties of states are made explicit instead of relying on hidden or default arithmetic. (Rationale: The CSLC and MM‑CHR rules already govern what operations are allowed on each scale; A.19’s approach is to let higher-level theories layer on an order, topology, or metric when appropriate, so nothing is taken for granted tacitly in multi-dimensional arithmetic.)
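A small illustrative sketch of overlays as opt-in metadata, under the same hypothetical encoding as above (the names Overlays, product_order, distance are invented): absent a declaration, no order or distance exists on the space.

```python
from typing import Callable, Dict, Optional

class Overlays:
    """Opt-in structure; the kernel space itself stays bare (A19-CS-5)."""
    def __init__(self) -> None:
        self.product_order: Optional[Callable[[Dict, Dict], bool]] = None
        self.distance: Optional[Callable[[Dict, Dict], float]] = None  # math distance, not an MM-CHR measure

overlays = Overlays()
# Declared order overlay: product order over one quantitative slot, oriented "up".
overlays.product_order = lambda x, y: x["temp"] <= y["temp"]
# No distance declared: distance-based reasoning is unavailable, not silently defaulted.
assert overlays.distance is None
assert overlays.product_order({"temp": 20.0}, {"temp": 25.0})
```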
Any model of change or dynamics in FPF must declare the state space it operates over. Formally, U.Dynamics.stateSpace SHALL be specified as a reference to a CharacteristicSpace. This creates a typing obligation: the dynamic model can only produce states (and trajectories of states) that lie in the given space. All predicates or predictions in such a dynamics model are understood to quantify over sequences of points in that CharacteristicSpace (with time semantics governed by A.3.3’s time base and laws). Note: A.19 defines only the structure of the state space; it deliberately does not fix any time axis or dynamic law. Those remain the responsibility of the dynamics pattern (A.3.3). A.19 simply ensures there is a well-defined space in which states live, so that dynamics are decoupled from any narrative “stage” and instead treat evolution as movement through this space.
In all normative references, definitions, and identifiers related to this pattern, the specification uses the canonical measurement terminology: Characteristic, Scale, Level, Coordinate, CharacteristicSpace, slot, basis. Legacy terms like “axis”, “dimension”, or “point” are forbidden in Technical/Formal registers of the spec (per A.17’s lexical rules). They may appear at most once in explanatory Plain language as mapped aliases to aid understanding (and if used, must be explicitly identified as equivalent to the official terms). In this pattern, we consistently use “slot” or “basis element” (never “axis”) to refer to a component of a space, and “Characteristic” (never “dimension”) to refer to the measured aspect. This lexical discipline ensures clarity and consistency across the framework (see A.17 and C.16 L-rules for the formal policy on terminology).
Design rule — read invariants, not labels. Eligibility, comparability and acceptance SHALL be decided on quotients by ≡_UNM (or on explicitly Normalization‑fixed charts), not on raw labels.
Minimal obligations:
- Name the quotient or fix. If a checklist predicates over a normalization‑variant property, it MUST name the NormalizationFix (which UNM.NormalizationMethod(s) and chart are assumed) and thus the ≡_UNM class.
- Declare NormalizationMethod class. Every normalization used MUST name its NormalizationMethod class (affine / order‑preserving / LUT with uncertainty) and validity window.
- Join/equality only on invariants. Equality checks and joins across spaces MUST target invariant forms (the ≡_UNM quotient or a declared Normalization‑fixed representation), never raw un‑fixed coordinates.
Use the weakest safe structure required by the argument (pre‑order → semi‑metric → metric).
- If a distance overlay is declared, any acceptance predicate or KPI defined over a CharacteristicSpace SHALL be non‑expansive (Lipschitz ≤ 1) w.r.t. the published d on the declared domain (raw coordinates or NCVs, as specified), or else state an explicit margin that absorbs any expansion.
- If only an order overlay is declared, any acceptance predicate/KPI SHALL be isotone w.r.t. the declared product order.
Minimal obligations:
- Publish the metric (if used). If a distance overlay is used, the space MUST publish the distance function d (including any weights/parameters) and its declared domain of applicability.
- Bound expansion. Any acceptance predicate/KPI that relies on d MUST be shown non‑expansive (Lipschitz ≤ 1); otherwise an explicit expansion bound and compensating margin MUST be stated.
- State error & commutation. If a metric is used together with NormalizationFix, the specification MUST state (a) the maximum tolerated measurement/calibration error and (b) whether d commutes with the NormalizationFix (or provide a disclaimer and additional guard if it does not).
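As an illustration of the non-expansiveness obligation, the following sketch spot-checks Lipschitz ≤ 1 of a KPI against a published weighted distance d on a grid of NCV pairs. The weights and the KPI are invented for the example, and an empirical spot-check is supporting evidence only, not the analytic bound that the obligations above expect.

```python
import itertools

def d(x, y):
    """Published weighted L1 distance on two NCV slots (weights invented)."""
    return 0.7 * abs(x[0] - y[0]) + 0.3 * abs(x[1] - y[1])

def kpi(x):
    """Acceptance score over NCVs in [0, 1]; its coefficients sit inside d's weights."""
    return 0.5 * x[0] + 0.3 * x[1]

samples = [(a / 4, b / 4) for a in range(5) for b in range(5)]
for x, y in itertools.combinations(samples, 2):
    # |kpi(x) - kpi(y)| <= 0.5|dx0| + 0.3|dx1| <= d(x, y): non-expansive on this grid
    assert abs(kpi(x) - kpi(y)) <= d(x, y) + 1e-12, "expansion found: declare a margin"
print("spot-check passed on", len(samples), "grid points")
```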
Memory hook: We compare only what lies in the same space (or is translated into a common space via a declared mapping), and we only certify a holon’s state based on observable coordinates in that space (using a defined checklist). Anything else is just storytelling.
To make state-space reasoning practical across different contexts and models, this section provides the key operators and criteria related to CharacteristicSpaces:
- Space operations – how to derive a Subspace, establish an Embedding, or form a Product of spaces. These enable us to restrict a space to fewer slots, to map one space into another (with unit conversions, etc.), or to combine spaces (e.g. for composite models).
- Comparability regimes – two allowable ways to compare states: (a) coordinatewise, which requires strict sameness of space and units; or (b) normalization-based, which uses declared transformations to reconcile differences. We define when each applies and how to apply it properly.
- RSG integration – how formal state certification (via checklists in a Role’s state graph) ties into the CharacteristicSpace: ensuring that whenever we declare a system “Ready” or “Degraded”, it’s based on snapshot coordinates in a space. We also outline how to push or pull state definitions along space embeddings (so different contexts can translate states).
- Archetypal examples – “worked mini-schemas” illustrating typical usage in complementary CN‑frames (Operational, Assurance, Alignment). These examples show minimal models mixing entity and relational slots, how data might be structured, and how cross-context alignment works in practice.
Terminology note: We often denote a CharacteristicSpace abstractly as CS. Formally, one can describe a CS as a tuple ⟨I, basis⟩ where I is the index set of slots and basis is the set (or ordered list) of slot_i pairs. When a CharacteristicSpace is attached to a specific Role in a specific Context (see A.2, A.2.5), we may call it an RCS (Role CharacteristicSpace) – essentially the state space for that role’s state machine within that bounded context. Individual states of a role live in an RSG (RoleStateGraph, A.2.5), and a StateAssertion is a certified claim that at a given time window, the holon’s RCS coordinates satisfy the checklist for a particular state.
To support model composition, we define operations on CharacteristicSpaces in a notation-independent way (so these can be implemented in any tooling or notation). All these operations are assumed to occur within a single context (within one U.BoundedContext) unless otherwise noted:
5.2.1.1 Subspace – Projection π_S : CS → CS|_S.
Given a CharacteristicSpace CS with basis I (slots) and a chosen subset of slot indices S ⊆ I, the projection π_S takes any state x in the original space and projects it onto the coordinates indexed by S, effectively discarding the other coordinates. This operation is straightforward: it is idempotent (π_S ∘ π_S = π_S) and, if an order or other structure is defined solely on the subspace’s slots, π_S preserves that structure (e.g. it will reflect any order that depends only on slots in S).
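A one-function sketch of π_S over dictionary-encoded states (slot ids as keys, a hypothetical encoding); idempotence falls out of the construction.

```python
def project(state: dict, S: set) -> dict:
    """pi_S: keep only the coordinates indexed by S, discarding the rest."""
    return {k: v for k, v in state.items() if k in S}

x = {"temp": 21.5, "tier": "mid", "load": 0.4}
p = project(x, {"temp", "tier"})
assert project(p, {"temp", "tier"}) == p   # idempotence: pi_S . pi_S = pi_S
```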
5.2.1.2 Embedding – Injection ι : CS₁ ↪ CS₂.
An embedding is a structure-preserving injection from one space CS₁ into another space CS₂. It consists of two parts: (a) a mapping of slots from CS₁ to slots of CS₂, and (b) for each such slot, a NormalizationMethod (a function that translates coordinates from CS₁’s scale into CS₂’s scale when the scales or units differ). Formally, let CS₁ have basis I₁ and CS₂ have basis I₂. An embedding defines an injective function m: I₁ → I₂ that identifies each slot of CS₁ with a corresponding slot in CS₂. For each slot i ∈ I₁ whose scale or unit differs from the target m(i) in CS₂, we provide a NormalizationMethod function that translates values from ValueSet(slot_i) into ValueSet(slot_m(i)). By default, an embedding is defined within a single U.BoundedContext (i.e., both CS₁ and CS₂ are in the same context). Using an embedding across contexts requires an Alignment Bridge (see F.9) with an associated congruence‑loss policy. Normalization declaration duties (MUST): Each NormalizationMethod MUST (i) state monotonicity w.r.t. the slot’s polarity; (ii) publish a validity window (value range and time applicability); and (iii) identify its NormalizationMethod class (ratio:scale / interval:affine / ordinal:monotone / nominal:categorical / tabular:LUT(+uncertainty)). NormalizationMethods used in enactment gates MUST be current; expired editions require renewal or an explicit Waiver (see C.16). In other words, you cannot assume one context’s space fits into another’s without an explicit Bridge; any attempt to do so must treat it as a cross‑context alignment with potential loss.
Method‑class note. For ratio scales use positive‑scalar conversions; for interval scales use affine transforms; for ordinal scales use order‑preserving monotone maps; for nominal scales use categorical re‑maps; LUT with uncertainty may be used where declared with evidence.
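A hedged sketch of an embedding with a per-slot NormalizationMethod. The record fields (method_class, valid_range) and the °F→°C example are illustrative, not a mandated schema; an interval scale takes an affine transform per the method-class note above.

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

@dataclass
class NormalizationMethod:
    """Per-slot transform with declared class and validity window (illustrative fields)."""
    method_class: str                    # 'interval:affine', 'ratio:scale', ...
    fn: Callable[[float], float]
    valid_range: Tuple[float, float]     # declared domain of applicability

# Interval scale -> affine transform: Fahrenheit to Celsius.
f_to_c = NormalizationMethod("interval:affine", lambda t: (t - 32.0) * 5.0 / 9.0, (-40.0, 212.0))

def embed(state: Dict[str, float], slot_map: Dict[str, str],
          methods: Dict[str, NormalizationMethod]) -> Dict[str, float]:
    """iota: rename slots via the injective slot map and normalize each value per slot."""
    out = {}
    for src, tgt in slot_map.items():
        v = state[src]
        m = methods.get(src)             # identity suffices where scales coincide
        if m is not None:
            lo, hi = m.valid_range
            assert lo <= v <= hi, "outside the declared validity window"
            v = m.fn(v)
        out[tgt] = v
    return out

print(embed({"temp_F": 98.6}, {"temp_F": "temp_C"}, {"temp_F": f_to_c}))  # {'temp_C': 37.0}
```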
5.2.1.3 Product – Combination CS₁ ⊗ CS₂ = CS⊗.
The product of two spaces CS₁ and CS₂ is a new space CS⊗ that effectively contains all slots of CS₁ and all slots of CS₂. If CS₁ has index set I₁ and basis slots {slot₁…} and CS₂ has I₂, then CS⊗ has index set I₁ ⊔ I₂ (the disjoint union) and its basis is the concatenation of the two bases: every slot of CS₁ and every slot of CS₂ appears exactly once, with slot identifiers kept disjoint (e.g. by prefixing). A state of CS⊗ is accordingly a pair (x₁, x₂) with x₁ ∈ CS₁ and x₂ ∈ CS₂, i.e. one coordinate per slot of either factor.
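A minimal sketch of the product on dictionary-encoded states, assuming slot ids have been made disjoint (here by prefixing):

```python
def product(x1: dict, x2: dict) -> dict:
    """CS1 (x) CS2: a joint state carries one coordinate per slot of each factor."""
    assert not (x1.keys() & x2.keys()), "slot ids must be disjoint in the product basis"
    return {**x1, **x2}

joint = product({"ops.temp": 21.5}, {"qa.porosity": 320})
assert set(joint) == {"ops.temp", "qa.porosity"}
```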
A state label like "Ready", "Authorized", "Degraded", etc., in an RSG is an intensional category (defined by a checklist of conditions – see A.2.5). Determining whether the states of two holons are comparable (e.g. whether one is “better” or “worse” than the other in some multi-criteria sense) depends on where their state coordinates live and how we map those coordinates to a common basis. There are two admissible comparability regimes in FPF:
Two states can be compared coordinatewise only under strict conditions. Essentially, we require the states to be expressed in the same measurement space, with the same units and scales, and using the same state definitions. Formally, coordinatewise comparison is allowed only if all of the following hold:
- Same space. The two holders’ state snapshots lie in the exact same CharacteristicSpace (and, if relevant, the same RCS attached to a Role in a given Context). It’s not enough that they have similarly named characteristics; they must share the actual defined space (same slots with same definitions).
- Scale congruence. For each slot being compared, the scale type, unit, and polarity orientation are identical. For example, if comparing temperature readings, both must be on the same scale (say, °C on a ratio scale with “higher = hotter” orientation). No unit mismatches or differing interpretations can be present.
- State-definition congruence. The states or status labels themselves must be defined in terms of the same checklist criteria applied in the same space. In other words, if we are comparing whether one system is “Ready” and another is “Ready”, both instances of “Ready” must derive from the same formal definition (same thresholds, same checklist logic) over those coordinates. If one context’s "Ready" means something different, you cannot assume they correspond.
When these conditions are met, one can define a coordinatewise preorder over states. Common patterns include:
- Dominance: For a given set of “higher is better” slots, we say state x ≼_coord state y if and only if for every relevant slot a, the coordinate a(x) ≤ a(y) (after orienting all slots to the declared polarity for that slot). In other words, y is as good or better on all enforced criteria. This defines a Pareto-like ordering (often partial, not total).
- Threshold band inclusion: If states are defined by meeting certain thresholds (e.g. State Y means all metrics above specific levels), then we might say x ≼_coord y if x meets every threshold that defines y’s state. For instance, if state y = “High Performance” requires speed > 100 and accuracy > 90%, then x is “no less than y” if x also exceeds those thresholds.
By default, no comparability is assumed unless proven. If any of the above congruence conditions fails, one must not fall back to ad-hoc comparisons (like matching by name or normalizing without declaration). Either switch to a normalization-based regime or declare the states incomparable.
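A sketch of the dominance preorder under the stated congruence conditions; polarity tags and slot names are invented for illustration. Note the order is partial: two states can each fail to dominate the other.

```python
def dominates(x: dict, y: dict, polarity: dict) -> bool:
    """x <=_coord y: y is as good or better on every slot, after orienting by polarity.
    Only meaningful under same-space, scale- and state-definition congruence."""
    def oriented(slot, v):
        return v if polarity[slot] == "up" else -v
    return all(oriented(s, x[s]) <= oriented(s, y[s]) for s in polarity)

pol = {"throughput": "up", "latency": "down"}
assert dominates({"throughput": 80, "latency": 30}, {"throughput": 95, "latency": 20}, pol)
# Partial order: here neither state dominates the other, so the pair is simply unordered.
assert not dominates({"throughput": 99, "latency": 30}, {"throughput": 95, "latency": 20}, pol)
```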
When two state vectors do not meet the strict conditions for coordinatewise comparison (e.g. they come from different spaces, or same conceptual Characteristics but measured on different scales/units, or defined in different contexts), the only sanctioned way to compare them is via a NormalizationMethod‑based mapping under UNM into a common space. The idea is: normalize, then compare. Specifically:
If we have state x in space CS₁ and state y in space CS₂ (possibly the same space, possibly not), and if direct coordinatewise comparison is not valid, we must introduce a set of per‑slot NormalizationMethods to translate one state’s coordinates into the frame of the other. A NormalizationMethod m_a: Dom(a_src) → Dom(a_tgt) is a monotonic transformation (non‑decreasing w.r.t. the declared polarity) that converts values from the source slot’s domain to the target slot’s domain and yields an NCV in the target domain. Each method is tailored to its Characteristic/Scale pair and respects the scale types (e.g., positive‑scalar conversion for ratio scales, affine conversion for interval scales, an order‑preserving mapping for ordinal scales, or a categorical re‑mapping for nominal scales). Collectively, a set of methods {m_a} for all slots forms a vector‑level normalization function N: CS₁ → CS₂ that lands within the same ≡_UNM class of charts in the target Context.
Comparability rule (normalize‑then‑compare). We say x ≼_normalization y if, after applying the normalization function N to x to translate it into space CS₂, the resulting point N(x) (a vector of NCVs) is ≼_coord y in the target space. In other words, we don’t compare x and y directly; we compare N(x) to y, where both are now expressed in the same space (CS₂) with the same units and definitions. Normalization‑based comparison thus reduces to the coordinatewise case after a transformation.
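A compact sketch of normalize-then-compare; the per-slot method (a monotone lab-to-field rescaling) is a hypothetical example of an order-preserving NormalizationMethod.

```python
def leq_coord(x: dict, y: dict, polarity: dict) -> bool:
    o = lambda s, v: v if polarity[s] == "up" else -v
    return all(o(s, x[s]) <= o(s, y[s]) for s in polarity)

def leq_normalization(x: dict, y: dict, N, polarity: dict) -> bool:
    """x <=_normalization y iff N(x) <=_coord y in the target space (normalize, then compare)."""
    return leq_coord(N(x), y, polarity)

# Hypothetical order-preserving method: lab score (0-100) -> field score (0-10).
N = lambda x: {"field_score": x["lab_score"] / 10.0}
assert leq_normalization({"lab_score": 73}, {"field_score": 8.0}, N, {"field_score": "up"})
```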
If the normalization crosses context boundaries (i.e., CS₁ and CS₂ are in different bounded contexts), then by FPF policy this mapping must be treated as a formal Bridge alignment with an associated congruence‑loss (CL) level. In such cases, any conclusions drawn carry an assurance penalty: the confidence in comparability is discounted according to the worst-case loss of meaning along the mapping. (See also B.3’s Φ(CL) rule for how a CL penalty factors into trustworthiness scores.)
Auditability. Each NormalizationMethod should be fully specified and transparent. At minimum, one should document the functional form or mapping table being used and the intended domain of validity (NormalizationMethodDescription). In the measurement architheory (C.16), normalization comes with calibration evidence/rationale and a note of its valid range or conditions. (For example, a method translating lab scores to field scores might note it’s valid only for a certain operating range.) While A.19 does not require recording these details, it assumes that such evidence and bounds are handled by the measurement framework (MM‑CHR) outside the core space definition. The key here is that no comparison is magic – if values differ in scale or context, a declared monotonic transformation must bridge them, and its limitations should be known.
Mnemonic: “Never compare before you land both points in the same well-typed space.” In other words, always map measurements to a common basis (same CharacteristicSpace and units) before attempting to say one state is ≥ or ≤ another. Directly comparing raw numbers from different scales or contexts is not allowed.
To connect the abstract concept of a space of metrics with the operational concept of states (like “Ready” or “Degraded”) in a Role’s lifecycle, we introduce a certifier function that evaluates state predicates against coordinates:
certify(Role, Context): Snapshot( RCS[Role,Context], Window ) ──→ {StateAssertion}
This is a conceptual sketch: given a snapshot of all relevant coordinates for a Role (in its RCS) over some time window, the certifier produces a set of StateAssertions that are deemed true in that window. Each StateAssertion claims that the holder is in a particular state (e.g. “Ready”) during the window, backed by evidence.
5.2.3.1 From CS snapshot to StateAssertion (design → run). Each possible state s in a Role’s RSG has an associated Checklist (s) – a design-time artifact (see A.2.5 §8.1) which is a predicate defined over the RCS’s coordinates (and possibly other contextual observables). For example, a state “Degraded” might have a checklist like “[temperature < 50 °C] AND [pressure > 5 bar] for 10 minutes”. When the system is running, we take an Observation of the current coordinates (a snapshot of the RCS at a given time or over a time window) and evaluate the checklist. A StateAssertion(holder, s, Window) is then a record that the checklist for state s has been satisfied by the observed data in that interval. In other words, it’s a certified evaluation that “state s holds true for this holon at this time.” Only observable, measurable facts go into these predicates (no subjective judgments), and each assertion is traceable to the specific evidence (observations) that support it. The Role’s Green-Gate Law (A.2.5 §8.4) then says that a Role can proceed with an enactment (e.g. performing work) if and only if there is a StateAssertion showing the holon to be in an enactable state at that time. This connects measurement to action: you can only act if you have evidence you’re in the right state to act.
Evidence kind & window. Every StateAssertion SHALL record evidence_kind ∈ {observation, prediction}, the window [t_from, t_to], and, if prediction, the horizon Δt relative to the observation base. Use of prediction in enactment gates is permitted only under the DYN/TIME constraints captured in CC‑A19.17–A19.18; otherwise a fresh observation is required.
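A conceptual sketch of the certifier as a pure function from a snapshot to StateAssertions. Field names mirror the pattern's vocabulary, but the record layout is illustrative only (A.19 mandates no storage scheme).

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class StateAssertion:
    state: str                    # state name in the RSG
    checklist: str                # checklist referent (by name)
    window: Tuple[float, float]   # [t_from, t_to]
    evidence_kind: str            # 'observation' or 'prediction' (CC-A19.17)

def certify(snapshot: Dict[str, float], window: Tuple[float, float],
            checklists: Dict[str, Tuple[str, Callable[[Dict[str, float]], bool]]]
            ) -> List[StateAssertion]:
    """Evaluate each state's declarative checklist against observed coordinates."""
    return [StateAssertion(state, name, window, "observation")
            for state, (name, pred) in checklists.items() if pred(snapshot)]

checklists = {"Degraded": ("CHK-Degraded-v2",
                           lambda s: s["temp"] < 50.0 and s["pressure"] > 5.0)}
print(certify({"temp": 42.0, "pressure": 6.2}, (0.0, 600.0), checklists))
```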
5.2.3.2 Translating state definitions across embeddings. If we have an embedding ι: RCS₁ ↪ RCS₂ (for example, RCS₁ is a subspace or a different version of RCS₂), we might want to reuse or compare state definitions between the two. There are two directions to consider:
- Pulling a checklist (reuse state criteria from a larger space in a smaller space): Given a checklist defined on RCS₂ (the larger or target space), we can pull it back via the normalization map N of the embedding to get a predicate on RCS₁. This derived checklist (Checklist₂ ∘ N) lets us apply the RCS₂’s state definition to a holon that only has RCS₁ measurements. This is useful when a consumer context wants to evaluate whether a producer (with fewer characteristics or different units) meets the consumer’s state definitions. Essentially, the consumer asks: “If I map the producer’s metrics into my space, does it satisfy my state criteria s?”
- Pushing an assertion (honor a producer’s certified state in a larger space): If a holon has a StateAssertion for state s’ in RCS₁, can we treat it as evidence for state s in RCS₂? This is only valid under a strict condition: the checklist for state s in the larger space, when composed with the normalization mapping N, must logically imply the checklist s’ in the smaller space (or vice versa, depending on which state corresponds to which). In practice, this often requires a proof of refinement: that meeting state s (in big space) guarantees state s’ (in small space), or that state s’ (in small) is sufficient for state s (in big space) given the normalization translations. If that condition is met (or a policy waiver is granted in lieu of proof), then an assertion in the smaller space can be pushed up to count as an assertion in the larger space. This mechanism allows, for example, a component’s certified state to satisfy a system-level state requirement, provided the relationship is formally established.
5.2.3.3 Certification interface (pointer). Operational interface examples and minimal data stubs are informative and live in A.19.D1 (“Certification Interface Example”). Pattern A.19 only constrains conceptual obligations; no storage/ID scheme is mandated here.
(In summary, embeddings not only allow numeric comparability, but also allow state definitions and certifications to be systematically translated between contexts, ensuring consistency in how we interpret “Ready”, “Failed”, etc., across different models.)
When comparing states or metrics across different bounded contexts (different “context of meaning”), additional rules apply to maintain semantic integrity:
- 5.2.4.1 Direction & loss (Bridges). Suppose we want to claim that “Holon X in Context B is in state Ready as defined in Context A.” This requires an explicit Alignment Bridge declaration that maps the RCS of (Role, Context B) to the RCS of (Role, Context A) (or maps State B to State A). Such a Bridge (see F.9) will specify the correspondence of Characteristics (and the necessary NormalizationMethods under UNM) and a congruence‑loss (CL) level indicating how much fidelity is lost in translation. Critically, these Bridges are one-directional mappings unless explicitly made bidirectional. Just because we can interpret B’s state as an A-state does not mean we can go the other way without another mapping. The Bridge makes the mapping and any loss explicit. Without a declared Bridge, cross-context state comparisons or substitutions are not valid – there is no implicit global state space. The statement above, for instance, would only hold if we have something like “Bridge B→A (with defined NormalizationMethods) such that X@B can be viewed in A’s terms.” The direction matters: “B satisfies A’s Ready” does not imply the converse unless another bridge (A→B) is defined.
- 5.2.4.2 Confidence penalties for mapped comparisons. Whenever a normalization-based comparison crosses Contexts (via a Bridge), assurance MUST apply the penalty Φ(CL) as defined in B.3 (CL is ordinal there). For episteme‑specific compositions, B.1.3 instantiates the same policy. This pattern does not restate the scale or Φ; it defers to B.3. For example, a safety argument that relies on a cross-context comparison might need to downgrade its certainty or include an extra safety margin. This penalty MUST be declared as part of the assurance argument for the comparison (stating the Bridge used and its CL), so that the Φ(CL) discount can be reasoned and applied. No implementation‑level storage format or identifier is mandated by this pattern.
- 5.2.4.3 Declare “incomparable” when appropriate. If for some critical Characteristic there is no valid NormalizationMethod to translate measurements between two contexts (e.g. the scale types are fundamentally different, or the measurement’s meaning doesn’t carry over), then the framework insists that we declare the states or metrics incomparable rather than attempting any fudge. No comparison should ever default to “close enough by name” or other heuristics. For instance, if one context measures “User Satisfaction” qualitatively and another quantitatively, and no monotonic mapping can be justified, one must simply say a user satisfaction state in context A cannot be compared to one in context B. Mark it incomparable and avoid any misleading conclusions. This rule guards against the natural temptation to compare things just because they have the same label or general intent, when in fact their measurement basis is different.
Canonical evaluation chain (notation‑neutral):
raw coords → Normalize (UNM.NormalizationMethod) → Quotient / NormalizationFix → (optional) Indicatorization (via IndicatorChoicePolicy) → (optional) Order/Distance overlay → Evaluate Checklist → StateAssertion → Green‑Gate
Strict distinction. Steps may be co‑implemented, but are logically distinct and MUST be referenceable in assertions (NormalizationMethod/UNM name or formula, overlay kind). A gate is invalid if any required step lacks a current, valid referent (e.g., expired NormalizationMethod edition).
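The chain can be read as function composition; this sketch names each step so an assertion can cite its referents. The affine method and the chart edition are invented for illustration.

```python
# Each step is logically distinct and nameable, so an assertion can cite its referents.
def normalize(raw: dict, method) -> dict:          # UNM.NormalizationMethod (here: affine)
    return {k: method(v) for k, v in raw.items()}

def fix_chart(ncv: dict, chart_id: str):           # NormalizationFix: pin the chart/edition
    return (chart_id, ncv)

def evaluate(fixed, checklist) -> bool:            # checklist over the fixed coordinates
    chart_id, ncv = fixed
    return checklist(ncv)

method = lambda v: (v - 32.0) * 5.0 / 9.0          # invented affine method edition
fixed = fix_chart(normalize({"temp": 98.6}, method), "UNM.affine-F-to-C@2025-09")
ready = evaluate(fixed, lambda s: s["temp"] < 40.0)
print(ready)  # True -> the resulting StateAssertion would cite method and chart by name
```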
Spaces: Sub (projection), Emb (embedding), Prod (product), Quot (quotient by declared equivalence), NormalizationFix (fix to a named chart/edition).
States/criteria transport: Pull (pull checklist via embedding/NormalizationMethod), Push (push assertion along embedding with proof/waiver), Indicatorize (apply IndicatorChoicePolicy to select Indicators), Align_B (cross‑context alignment via Bridge with CL), Fold_Γ (admissible aggregation/accumulation per B.1, with WLNK/MONO constraints).
OP‑1 (Normative). If Align_B is used in gating, the Bridge used and its CL MUST be declared in the assurance argument; the corresponding Φ(CL) penalty is applied per B.3. Silent cross‑context reuse is forbidden. (A.19 does not mandate any storage/ID scheme.)
Formality anchors & operational segregation (normative). A.19 aligns with C.2.3 Unified Formality Characteristic (F). The legacy tier labels T0/T1/T2 are deprecated; speak F directly and treat operations separately (see E.10 for registers).
- F-Surface (recommended F ≥ F3). Obligations are declarability and arguability: the author can name the CharacteristicSpace (basis/slots as (Characteristic, Scale) pairs), state the comparability regime (coordinatewise or normalization-based), and express a state’s checklist in observable coordinates. No storage formats, IDs, or operational provenance are required.
- F-Predicates (F ≥ F4 when predicate-like). As above, plus explicit slot/NormalizationMethod names and stated overlays (order/metric). When acceptance conditions are written as typed predicates over coordinates, declare F ≥ F4. Remains notation-neutral and storage-agnostic.
- Operational bindings (not part of F). When automatic checking/assurance is required, use A.19.D1 / C.16 / B.3 for IDs, validity windows, waivers, and logs. These raise R/TA in the trust calculus and do not change F unless the expression form changes (see C.2.3 orthogonality).
The following checklist summarizes the normative requirements introduced by Pattern A.19. An implementation or model conforms to A.19 if and only if all these conditions are met:
Spaces & mappings
CC‑A19.1. Any defined Subspace, Embedding, or Product of CharacteristicSpaces MUST explicitly list the involved slots and their metadata (scale type, unit, polarity). No comparability or merging is allowed purely by matching names or assuming correspondence – it must be declared.
CC‑A19.2. Every Embedding ι: CS₁ ↦ CS₂ MUST provide a well‑defined NormalizationMethod for each slot where CS₁’s slot differs in scale/unit from CS₂’s. Each method MUST (a) be monotonic w.r.t. the declared polarity and scale type; (b) publish a validity window (value range and time applicability); and (c) name its method class (affine / order‑preserving / LUT). (Identity suffices where scales are identical.)
CC‑A19.2a. Scale‑class guard. Ratio conversions SHALL be positive‑scalar; interval conversions SHALL be affine; ordinal conversions SHALL be order‑preserving; nominal conversions SHALL be categorical re‑maps; LUT with uncertainty MAY be used where declared with evidence.
Comparability
CC‑A19.3. Coordinatewise comparability (≼_coord) is permitted only when the states being compared share the same CharacteristicSpace, with identical scale metadata on each compared slot, and using the same state definition criteria. If these conditions aren’t fully satisfied, an implementation MUST NOT attempt direct coordinatewise comparison; it should either apply a normalization‑based method or report the items as incomparable.
CC‑A19.3a. Use of Indicators in any checklist/assertion MUST cite an IndicatorChoicePolicy (edition). Treating any NCV as an Indicator without a declared policy is forbidden.
CC‑A19.4. Normalization‑based comparability (≼_normalization) MUST be done by first normalizing all relevant coordinates of the source state into the target state’s space via the declared NormalizationMethods, and only then comparing in that common space. In other words, two states can be compared under ≼_normalization only by producing an image of one in the other’s space (N(x)) and using ≼_coord on the result. Each NormalizationMethod MUST be explicitly defined (no implicit or “on the fly” conversions).
CC‑A19.5. Any cross-context state comparison or substitution MUST cite a corresponding Alignment Bridge (F.9) with an explicit CL (congruence-loss) level. If such a Bridge is used in an assurance or decision-making context, the model MUST apply the appropriate confidence reduction (Φ(CL) penalty per B.3) to reflect the loss. Cross-context comparisons without a Bridge (i.e. assuming equivalence by name or convention) are forbidden.
Certification & enactment
CC‑A19.6. Every StateAssertion MUST identify at least: the specific state being asserted (by name), the associated checklist or criteria set (by name), and the observation window. Furthermore, if the evaluation involved cross‑space mapping, it MUST declare which NormalizationMethod(s) or Bridge were applied. This ensures the decision can be examined in review; A.19 does not mandate any storage/ID scheme.
CC‑A19.7. The Green-Gate enactment rule (A.2.5) SHALL be enforced: a transformative action (U.Work) by a RoleAssignment is only allowed if there exists a contemporaneous StateAssertion showing the holon in a state that is marked enactable. If a StateAssertion has been translated from another context or space, it is valid for gating only if it was obtained through declared Embeddings/Bridges (no untracked inferences). This ensures no work is done under an unverified or mis-mapped state condition.
CC‑A19.8. All Checklist definitions for states MUST be formulated in terms of observable predicates on the RCS (and known context events) – no hidden workflows or implicit time sequencing inside a checklist. A checklist should read like a static predicate (even if it’s about a duration of some condition). If temporal order or multi-step processes are involved in achieving a state, those must be modeled via explicit Methods/Work or via an aggregation logic (e.g., using the Γ (Gamma) patterns in B.1 for process sequencing), rather than being baked into the state’s definition. Use of Indicators in any checklist MUST cite an IndicatorChoicePolicy edition; treating any NCV as an Indicator without policy is forbidden.
Anti‑drift
CC‑A19.9. If a NormalizationMethod/UNM or a state checklist is updated or calibrated differently in a new version, previous StateAssertions MUST NOT be retroactively modified. One must close out or mark the old assertions with their valid time window and start issuing new assertions under the updated definitions. In other words, historical records remain as they were (tied to the definitions at that time), and any change in criteria results in a new context or version for future assertions. This prevents retroactive truth-changing and maintains integrity of historical data.
CC‑A19.10. If any critical slot in a comparison lacks a defensible monotonic NormalizationMethod (meaning you cannot find any reasonable way to translate that characteristic between two spaces without excessive loss or ambiguity), then the comparison MUST be reported as incomparable. The system should not attempt any unofficial workaround (like simply comparing whatever is available or ignoring that dimension). This rule applies even if all other slots have NormalizationMethods – one missing or irreconcilable aspect is enough to force an “incomparable” verdict, unless the decision-makers explicitly accept a loss via a Bridge with stated limitations.
Quotients & Normalization‑fix (QNT)
CC‑A19.11. Equality checks and joins across spaces MUST target invariant forms (on a quotient or declared NormalizationFixed chart), not raw coordinates.
CC‑A19.12. If a checklist predicates on a normalization‑variant property, it MUST name the NormalizationFix (which UNM.NormalizationMethod or chart is assumed).
CC‑A19.13. All used NormalizationMethod classes (affine / monotone / LUT…) MUST be named in the bounded context’s glossary.
Metric discipline & calibration (MET)
CC‑A19.14. If a distance overlay is used, acceptance predicates/KPIs over a CS SHALL be non‑expansive (Lipschitz ≤ 1) w.r.t. the published d on the declared domain (raw coordinates or NCVs), or declare a compensating margin; otherwise they SHALL be isotone w.r.t. the declared product order.
CC‑A19.15. Any distance used in state/acceptance checks MUST carry max tolerated error and, where claimed, a Lipschitz bound for the NormalizationMethod composition in use.
CC‑A19.16. Cross‑CN‑frame inputs SHALL name the normalization transform and its validity window; expired transforms are invalid for gating unless waived explicitly.
Dynamics & time (DYN/TIME)
CC‑A19.17. Every temporal guard MUST specify the window [t_from, t_to] and evidence_kind ∈ {observation, prediction}; if prediction is used for gating, the conditions in § 5.2.3.1 (Evidence kind & window) MUST hold.
CC‑A19.18. Any dynamics map Φ_{Δt} used in comparison/gating MUST be non‑expansive (Lipschitz ≤ 1) under the declared distance overlay and commute with NormalizationFix; otherwise observation is required.
Certification (CERT)
CC‑A19.19. StateAssertions MUST state the current NormalizationMethod/UNM and overlay artifacts used (by name or formula) and the evidence_kind; assertions relying on expired NormalizationMethod/UNM are invalid for gating unless an explicit Waiver SpeechAct is declared per policy. (A.19 imposes no requirement on IDs or storage.)
CC‑A19.20. The certification pipeline steps (Normalize (UNM.NormalizationMethod); Quot/Fix_normalization; overlay; evaluate; assert) are logically distinct and MUST be reconstructable in argument/review; collapsing steps without clearly stated referents violates A.19. (No specific persistence format is implied.)
Operators (OP)
CC‑A19.21. Use of Align_B in gating MUST declare the Bridge used and propagate CL into assurance (B.3). Cross‑context comparison without a Bridge is forbidden. (No requirement to store an ID is imposed by A.19.)
The following are common modeling mistakes (“anti-patterns”) related to measurement spaces, and how to correct them:
- “Same label ⇒ comparable.” ✗ Assuming Ready@contextA ≥ Ready@contextB just because both states are called "Ready". ✓ Explicitly normalize and bridge contexts: Define an Alignment Bridge (B→A) and appropriate NormalizationMethods for the underlying metrics. Then compare by first translating one state’s coordinates (compute N(x) as NCVs in the target space) and using ≼_coord on the result.
- “Compare before landing.” ✗ Comparing values directly across different scales, e.g. Drift_A = 5°C vs Drift_B = 5°F as if they were the same. ✓ Normalize to common units first: e.g., apply the Fahrenheit‑to‑Celsius NormalizationMethod m(T_F) = (T_F − 32) × 5/9 to convert all data to °C, then compare the drift values. Always normalize into one space before comparing magnitudes.
- “Checklist = workflow.” ✗ Defining a state’s checklist with an implied sequence: “State ‘Ready’ requires doing Step 1 then Step 2…” ✓ Keep checklists declarative: A Checklist should represent a state of the system (a condition) – essentially state evidence – not a sequence of actions. If order or process matters, model that explicitly via a MethodDescription or by using a Γ (Gamma) aggregator for process logic. In other words, state = “Ready” might require conditions A and B to be true (regardless of how you got there), whereas the procedure to get ready (do Step 1 then Step 2) should be a separate method or playbook.
- “Retro-fix past assertions.” ✗ Going back to edit or reinterpret old StateAssertions after changing a threshold or NormalizationMethod (e.g. “We updated the criteria, let’s ‘fix’ last quarter’s records to match”). ✓ Never alter historical assertions: Leave history as‑is. If criteria change, issue new assertions under the new criteria going forward, and if needed, explicitly version the NormalizationMethod/UNM or checklist. Past assertions remain valid for the old version and their time; new ones apply henceforth. This ensures auditability and avoids erasing or rewriting what was true under earlier standards.
Scope. This CN‑frame Algebra & Normalization Discipline extends A.19 by fixing the governance Standard for CN‑frames, defining a conformance checklist and regression harness, and providing didactic one‑pagers and anti‑patterns so teams can introduce CN‑frames without tool lock‑in. The mandatory pattern structure and authoring discipline from Part E (Style Guide, Tell‑Show‑Show, checklists, DRR, guard‑rails) are applied throughout.
A.19 established a substrate‑neutral picture:
- a CN‑frame = (Context‑local) CharacteristicSpace (CS) + chart (coordinate patch + units) + UNM/Normalization (admissible re‑parameterizations that preserve meaning and generate ≡_UNM over charts);
- operators (subspace, product, pullback/pushforward) and comparability (coordinatewise vs normalization‑based (normalize‑then‑compare));
- RSG touch‑points: role readiness (RSG states) are certified against CS via checklists over observable characteristics;
- entity/relational mixtures across CN‑frames via minimal schemas and bridges.
Terminology guard. CN‑frame is the lens (I); CN‑Spec is the governance card (S) that fixes admissible charts/normalizations/comparability/Γ‑fold for that lens in one U.BoundedContext; CN‑Description is the didactic surface (D) with worked examples and anti‑patterns.
Lexical guard (map/Map). Lowercase map is a mathematical function; capitalized Map (e.g., DescriptorMap) is a Part‑G method type (encoder) and is not a NormalizationMethod or NormalizationMethodInstance.
A.19.D1 makes this operational and auditable.
Absent a governance layer, four failure modes recur:
- Chartless numbers. Measures move between teams without units, reference states, or declared normalization → illusory comparability.
- Hidden normalization flips. Re‑parameterisations (e.g., normalising by batch size) silently alter meaning; trend lines lie.
- CN‑frame sprawl. Every initiative mints a new “dashboard dimension”; semantics diverge; assurance collapses.
- Un‑bridgeable reports. Cross‑team roll‑ups average incongruent CN‑frames, violating the weakest‑link (WLNK) discipline from Γ and B.3.
| Force | Tension we must balance |
|---|---|
| Universality vs nuance | One Standard for robotics, safety, finance — yet leave each context’s idioms intact. |
| Speed vs audit | Light ceremony for on‑ramp; hard guarantees for assurance and SoD. |
| Local truth vs federation | Keep CN‑frames meaning‑local; still enable explicit bridging across Contexts. |
| Minimalism vs safety | Few mandatory slots; enough structure to forbid silent gauge drift. |
A CN‑frame is governed by a compact, notation‑free card:
CN‑Spec {
name : CN‑frameName // local to Context
context : U.BoundedContext // edition/version included
cs_basis : [{
slot_id : <tech-token>, // stable slot id (basis name)
characteristic : <U.Characteristic>, // per A.17 / A.18
scale : { type: nominal|ordinal|interval|ratio, unit?: <U.Unit>, bounds?: <...> },
polarity : up|down|target-range, // comparison orientation
// if needed: missingness?, admissible_domain? (MM‑CHR‑consistent metadata)
}]
chart : { reference_state, coordinate_patch, measurement_protocol_ref }
normalization : { UNM_id?, methods: [NormalizationMethodId], instances?: [NormalizationMethodInstanceId], invariants, admissible_reparameterizations, method_descriptions: [NormalizationMethodDescriptionRef], fix?: <NormalizationFixSpec> }
comparability : { mode ∈ {coordinatewise, normalization-based}, minimal_evidence }
indicator_policy? : { IndicatorChoicePolicyRef, scope, edition }
acceptance : { checklist_for_admission, window, evidence_anchors } // gates RSG state checks
aggregation : { Γ_fold, WLNK/COMM/LOC/MONO choices, time_policy } // safe roll-up recipe
alignment? : { bridges_to_other_contexts, CL_levels, loss_notes } // optional
lifecycle : { owner_role, DRR_links, deprecation_plan }
}
Reading: A CN‑frame is a context‑local lens with declared characteristics, units, a chart to read them, a UNM/Normalization that states what “doesn’t matter,” explicit rules for when a datum counts and how many can be safely folded, and (optionally) an IndicatorChoicePolicy to select Indicators. Not every Characteristic (even with an NCV) is an Indicator; Indicators arise from policy.
NormalizationKinds, MetricSpec and QuotientSpec are CN‑frame‑level governance metadata; per A19‑CS‑5 the kernel CharacteristicSpace carries no NormalizationMethods or composition; normalization lives in MM‑CHR.
L‑CN‑Spec‑NORM‑IDs: where stored artifacts are referenced (e.g., in assurance logs), use NormalizationMethodId/NormalizationMethodInstanceId; avoid generic “map”/“gauge” nouns.
Each U.BoundedContext keeps a CN‑frame Registry (VR):
- canonical names and editions;
- SoD hooks (who can edit CN‑Spec, who can certify admission);
- deprecation map (what replaces what, when).
Cross‑context reuse occurs only via explicit Alignment Bridges (F.9) between CN‑Specs:
Bridge CN‑frameA@Context1 → CN‑frameB@Context2
CL: {3|2|1|0}
kept_characteristics: [...]
lost_characteristics: [...]
transform: {pullback | pushforward | re-scaling | re-binning}
extra_guards: {additional evidence / reviewer role / waiver speech-act}
CL policy (reference). CL levels and the penalty Φ(CL) are defined in B.3 (CL is ordinal; do not average). Mechanism authors SHALL declare crossings in the Transport clause of A.6.1 U.Mechanism; penalties from scope/kind/plane route to R/R_eff only (never to F/G). This CN‑Spec may add operational guards per level (e.g., “extra reviewer at CL=1”, “waiver at CL=2”), but it does not redefine the scale or Φ. For episteme‑specific frames, see also B.1.3.
Pass these and your CN‑frames are fit for assurance and cross‑team composition.
CC‑A19.D1‑1 (Local scope). Every CN‑frame MUST live inside a declared U.BoundedContext (with edition). Names are local; same label in another Context ≠ same CN‑frame.
CC‑A19.D1‑2 (Units & polarity). Each characteristic in cs_basis MUST declare unit/scale and polarity (↑ better / ↓ better / target range). No unlabeled magnitudes.
CC‑A19.D1‑3 (Chart). chart MUST name the reference state, coordinate patch and measurement protocol (U.MethodDescription) to make numbers reproducible.
CC‑A19.D1‑4 (Normalization). normalization MUST list the admissible transforms (e.g., affine rescale, monotone map) and the invariants they preserve (what comparability means).
CC‑A19.D1‑5 (Comparability mode). comparability.mode MUST be either coordinatewise (same chart & units) or normalization‑based (after normalization by the declared UNM into the same ≡_UNM class). Mixed/implicit modes are prohibited.
CC‑A19.D1‑6 (Admission checklist). acceptance.checklist_for_admission MUST be observable and time‑bounded; each datum admitted to the CN‑frame SHALL cite a StateAssertion or equivalent U.Evaluation.
CC‑A19.D1‑7 (Aggregation discipline). aggregation.Γ_fold MUST specify WLNK/COMM/LOC/MONO choices and the time policy (e.g., average of rates vs integral of counts). No free‑hand averages.
CC‑A19.D1‑8 (Bridge‑only reuse). Cross‑context consumption MUST cite a Bridge with CL and loss notes; coordinate‑by‑name without a Bridge fails. If the data participate in gating/assurance, apply Φ(CL) per B.3; this CN‑Spec does not restate Φ.
CC‑A19.D1‑9 (SoD & roles). Editing CN‑Spec and admitting data MUST be performed by different roles (⊥ enforced): CN‑frameStewardRole ⊥ CN‑frameCertifierRole inside the same context.
CC‑A19.D1‑10 (Lifecycle & DRR). Every CN‑Spec MUST carry an owner role, a deprecation plan, and links to DRR entries for rationale and changes (Part E.9).
CC‑A19.D1‑11 (Anchors & lanes for comparability). Any admission into a CN‑frame that is later used for comparison/aggregation SHALL cite the corresponding A.10 EvidenceRole anchors for each characteristic, with assuranceUse lane tags {TA, VA, LA} and validity windows (where applicable), so that the SCR can report lane‑separated contributions and freshness (B.3). Absence of anchors for a required characteristic renders items incomparable.
CC‑A19.D1‑12 (Notation independence). CN‑Spec content MUST NOT depend on a tool or file format; semantics precede notation (E.5.2 Notational Independence).
CC‑A19.D1‑13 (Lexical guard‑rails). characteristic names and role labels MUST follow the Part E lexical discipline (registers, twin labels; no overloaded “process/service/function”).
| Benefit | Why it matters |
|---|---|
| Auditable comparability | Chart + declared normalization (UNM + NormalizationMethods) make “same number” meaningful; silent re‑basings become explicit, reviewable choices. |
| Safe roll‑ups | Γ‑folds with WLNK/COMM/LOC/MONO stop optimistic averaging and preserve invariants. |
| Pluralism without incoherence | Bridges with CL and loss notes allow federation without pretending to global sameness. |
| RSG‑ready | Admission checklists let RSG states reference CN‑frame‑backed facts (e.g., Ready requires characteristics within bounds). |
The CN‑Spec aligns A.19.D1 with Part E: it packages Tell‑Show‑Show, Conformance Checklists, and DRR‑backed change, while honouring DevOps Lexical Firewall, Unidirectional Dependency, and Notational Independence so that semantics never depend on tooling. It also operationalises B.3 Trust & Assurance by making CL penalties and WLNK folds first‑class.
Same slots, three arenas; no tooling implied.
- cs_basis: BeadWidth[mm] (target 6.0±0.2), Porosity[ppm] (↓), SeamRate[1/min] (↑ until limit)
- chart: reference jig, fixture ID, torch type; MethodDescription#Weld_MIG_v3
- normalization: affine rescale on gray‑level calibration → invariant = physical porosity
- comparability: normalization‑based (UNM) (calibration tables applied)
- aggregation: WLNK on quality (min‑bound), COMM on counts, time = per‑shift histograms
- RSG hook: WelderRole.Ready requires Porosity ≤ 500 ppm & BeadWidth within ±0.2 mm admitted by this CN‑frame.
- cs_basis: P50Latency[ms] (↓), P99Latency[ms] (↓), Load[req/s]
- chart: client vantage, trace sampler v4; MethodDescription#HTTP_probe_v4
- normalization: monotone time‑warp compensation for collector skew; invariant = percentile order
- comparability: normalization‑based (UNM) with declared normalization
- aggregation: MONO on latency (max of mins), WLNK across services
- RSG hook: DeployerRole.Active gated if P99 < declared SLO over the admission window.
- cs_basis:
- slot_id: ΔBP characteristic: BloodPressureChange scale: { type: ratio, unit: mmHg } polarity: down
- slot_id: AdverseRate characteristic: AdverseEventRate scale: { type: ratio, unit: "%" } polarity: down
- slot_id: Age characteristic: Age scale: { type: ratio, unit: years } polarity: neutral
- chart: cohort definition; MethodDescription#TrialProtocol_v5
- normalization: case‑mix adjustment (propensity score); invariant = adjusted ΔBP
- comparability: normalization‑based (UNM) (post‑adjustment)
- aggregation: LOC on subcohorts; WLNK on safety outcomes
- RSG hook: EvidenceRole.Validated admission requires CN‑frame acceptance; Assurance pulls CL from any Bridge used.
To illustrate how CharacteristicSpace is used in practice, below are simplified schema snippets for three typical CN‑frames: an Operations view (run-time state and action gating), an Assurance view (evidence and cross-context comparison), and an Alignment view (design-time consistency across contexts). These examples mix entity-based and relational Characteristics and demonstrate how state spaces, normalization methods, and bridges appear in a model. (The notation is a mix of a graph/entity diagram and a relational table outline for clarity. PK = primary key, FK = foreign key.)
8.4.1 Operations CN‑frame — Run-time gating & enactment
Entity graph view:
Holder (System) ── playsRoleOf ──> Role@Context ── has ──> RCS (slots…)
RSG (Role@Context) ── owns ──> State (◉ status)
Checklist (of State) ── testedBy ──> Evaluation ── yields ──> StateAssertion
Work ── performedBy ──> RoleAssignment
Work ── isExecutionOf ──> MethodDescription
In the above, a Holder (a system instance) plays a Role in some Context, which has an attached RCS (a set of slots defining its characteristic space). That Role’s RSG owns various possible State entries (each state could be, e.g., Ready, Waiting, Degraded, etc.). Each State has a Checklist which is tested by an Evaluation process, resulting in a StateAssertion (pass/fail) at runtime. Meanwhile, Work instances (concrete operations) are performed by the RoleAssignment and correspond to some MethodDescription (procedure). The “gate” for Work is that a StateAssertion for an enactable state must exist.
Relational stub: (illustrating how data might be stored)
| Table | Key Columns (essential) |
|---|---|
| ROLE_ASSIGNMENT | RA_ID (PK); HOLDER_ID; ROLE_ID; CONTEXT_ID; WINDOW_FROM, WINDOW_TO |
| RCS_SNAPSHOT | SNAP_ID (PK); RA_ID (FK to ROLE_ASSIGNMENT); WINDOW_FROM, WINDOW_TO; CHAR_ID; VALUE; UNIT; SCALE_TYPE |
| RSG_STATE | STATE_ID (PK); ROLE_ID; CONTEXT_ID; NAME; ENACTABLE (bool) |
| CHECKLIST | CHK_ID (PK); STATE_ID (FK to RSG_STATE); PREDICATE_TYPE; PREDICATE_SPEC |
| STATE_ASSERTION | SA_ID (PK); RA_ID (FK); STATE_ID (FK); CHK_ID (FK); WINDOW_FROM, WINDOW_TO; VERDICT (pass/fail); NORMALIZATION_ID?; BRIDGE_ID? |
| WORK | WORK_ID (PK); RA_ID (FK); METHODDESC_ID (FK to MethodDescription); WINDOW_FROM, WINDOW_TO; (other fields like results or references) |
In this schema: an RCS snapshot table might log individual coordinate values (VALUE) for each Characteristic (CHAR_ID) in a given RoleAssignment, with their units and scale type noted (to ensure we know what the number means). The StateAssertion ties a RoleAssignment to a state checklist and says whether it passed, including references to any gauge or bridge if cross-context or scaled comparisons were involved. The gate logic for enactment can then be a query like: “Is Work W admissible now?” – which joins through ROLE_ASSIGNMENT to find the latest StateAssertion for that RA where ENACTABLE=true and VERDICT=pass.
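The gate query can be read as a single join. Here is a minimal sketch in Python, using in-memory stand-ins rather than SQL; all record shapes, IDs, and field names are illustrative assumptions, not part of the CN‑Spec:

```python
# Minimal sketch (not normative): in-memory stand-ins for RSG_STATE and
# STATE_ASSERTION; record shapes and IDs are illustrative assumptions.
from datetime import datetime

rsg_states = {                 # STATE_ID -> ENACTABLE flag
    "S-Ready": True,
    "S-Degraded": False,
}

state_assertions = [           # StateAssertion rows for one RoleAssignment
    {"sa_id": "SA-42", "ra_id": "RA-7", "state_id": "S-Ready",
     "verdict": "pass", "window_to": datetime(2025, 9, 30)},
]

def work_admissible(ra_id: str, now: datetime) -> bool:
    """Work is admissible iff a fresh, passing StateAssertion exists
    for an ENACTABLE state of the same RoleAssignment."""
    return any(
        sa["ra_id"] == ra_id
        and sa["verdict"] == "pass"
        and rsg_states.get(sa["state_id"], False)
        and sa["window_to"] >= now
        for sa in state_assertions
    )

print(work_admissible("RA-7", datetime(2025, 9, 1)))   # True
print(work_admissible("RA-7", datetime(2025, 10, 2)))  # False: stale window
```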
8.4.2 Assurance CN‑frame — Evidence freshness & mapped comparisons
Entity graph view:
NormalizationMethodInstance ── appliesTo ──> Characteristic (each instance is a scale‑appropriate, monotone transform within UNM)
Bridge (ContextB → ContextA) (Alignment Bridge between contexts, with CL and loss notes)
StateAssertion ── uses ──> {NormalizationMethodInstance, Bridge} (if a state comparison crossed contexts)
This view highlights that in the assurance context, we keep track of how we mapped or compared states:
- A NormalizationMethodInstance represents a monotone, scale‑appropriate mapping from one Characteristic’s scale to another’s under UNM (editioned, with validity window).
- A Bridge between Context B and Context A (for corresponding roles or states) carries a CL rating and possibly notes on what is “lost in translation.”
- A StateAssertion may use a NormalizationMethodInstance or a Bridge, meaning that assertion was reached by translating data via that instance or comparing across that bridge.
Relational stub:
| Table | Key Columns (essential) |
|---|---|
| NORMALIZATION_METHOD | NORMALIZATION_METHOD_ID (PK); CLASS (e.g., ratio:scale, …) |
| NORMALIZATION_INSTANCE | NORMALIZATION_INSTANCE_ID (PK); NORMALIZATION_METHOD_ID (FK); SRC_CHAR_ID; TGT_CHAR_ID; FORMULA_SPEC |
| BRIDGE | BRIDGE_ID (PK); FROM_ROLE@CTX; TO_ROLE@CTX; CL (congruence-loss level, e.g. 0–3); NOTES (description of losses/adjustments) |
| ASSURANCE_EVENT | AE_ID (PK); SA_ID (FK to StateAssertion); EFFECT (enum: penalty_applied, evidence_refreshed, etc.); DETAILS |
In this stub, NORMALIZATION_INSTANCE defines each cross-scale mapping that has to be accounted for (with FORMULA_SPEC describing how to compute the target value from the source). The BRIDGE table enumerates official Bridges between contexts (for example, bridging a “Ready” state in an engineering context to “Ready” in an operations context, with CL indicating how fully comparable they are). An ASSURANCE_EVENT log could record when a penalty was applied due to a low-CL Bridge or when an assertion was refreshed or invalidated due to new evidence or time lapse. (For instance, policy might say that if a critical state assertion uses a Bridge with CL < 2 in a safety context, a special waiver or secondary approval is needed – that could be represented as an event requiring a Waiver action.)
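To make that penalty path concrete, here is a minimal sketch assuming a toy Φ shape and a CL floor of 2; B.3 remains the normative source for Φ(CL) and the waiver policy:

```python
# Illustrative only: a toy Φ(CL) and a waiver trigger for low-CL Bridges
# in safety contexts; the threshold and Φ shape are assumptions.
def phi(cl: int) -> float:
    """Monotone penalty factor: full credit at CL=3, none at CL=0."""
    return max(0.0, min(1.0, cl / 3.0))

def requires_waiver(bridge_cl: int | None, safety_context: bool,
                    cl_floor: int = 2) -> bool:
    """An assertion that crossed a Bridge with CL below the floor in a
    safety context needs a secondary approval (a Waiver action)."""
    if bridge_cl is None:        # no Bridge used -> no penalty path
        return False
    return safety_context and bridge_cl < cl_floor

print(round(phi(1), 2))                          # 0.33 credit after penalty
print(requires_waiver(1, safety_context=True))   # True -> log ASSURANCE_EVENT
```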
8.4.3 Alignment CN‑frame — Design-time reuse of states across Contexts
Entity graph view:
Checklist(ContextA.State) ← pull(N) — Checklist’(ContextB.State’) (pull a checklist via NormalizationMethodInstance N)
Refinement π : RSG(Role' ≤ Role) (RSG refinement mapping, e.g. Role' is a subtype of Role)
This view covers how design-time alignment happens:
- A Checklist’ for a state in Context B can be pulled via a NormalizationMethodInstance into Context A to become a derived Checklist for a state in Context A. This is effectively what we described in the pull operation: using another context’s criteria in your own space.
- A Refinement π is shown between RSGs indicating Role’ is a specialized role of Role (e.g. a sub‑role or a scenario‑specific role) and how their states relate (Role’ might have extra states or more granular distinctions). This refinement should maintain that for each state in Role’ that maps to a state in Role, the entails/implication relation holds for enactability.
Relational stub:
| Table | Key Columns |
|---|---|
| RSG_REFINEMENT | MAP_ID (PK); ROLEPRIME_ID (FK to Role' in Context B); ROLE_ID (FK to Role in Context A); STATEPRIME_ID (FK to state in Role' RSG); STATE_ID (FK to state in Role RSG); ENTAILS (bool) |
| CHECKLIST_PULL | PULL_ID (PK); SRC_STATE_ID; TGT_STATE_ID; NORMALIZATION_INSTANCE_ID (FK to NormalizationMethodInstance used); VERSION? /* and perhaps timestamp */ |
In this stub, RSG_REFINEMENT maps states of a sub-role to states of a super-role, with an ENTAILS flag indicating if being in the sub-state guarantees being in the super-state. Every refinement mapping should ensure at least one enactable state in the sub-role corresponds to an enactable state in the super-role (or else the sub-role would allow something the super-role doesn’t – that’s an alignment lint check). The CHECKLIST_PULL table records that a state from one context has had its checklist pulled into another context via a NormalizationMethodInstance (identified by NORMALIZATION_INSTANCE_ID). This is a design artifact saying “State X in context A is defined by applying NormalizationMethodInstance N to State Y in context B’s criteria.” A version or validity field might ensure we know which edition of the checklist or normalization method was used.
| Anti‑pattern | Symptom | Why it hurts | Fix (CN‑Spec slot) |
|---|---|---|---|
| Chartless number | “Latency = 120” | No unit/vantage → untestable | Fill cs_basis + chart |
| Normalization smuggling | Quiet “per‑unit” normalisation mid‑stream | Trend reversal | Declare NormalizationMethod/NormalizationMethodInstance + keep invariant |
| Bridge‑by‑name | Reusing labels across Contexts | False comparability | Author Bridge with CL + loss |
| Free‑hand averaging | Arithmetic mean on bounded risks | Violates WLNK | Declare Γ_fold with WLNK |
| CN‑frame sprawl | Ten nearly‑identical CN‑frames | Cognitive debt | Use Registry + DRR; prefer reuse |
| Role conflation | Same person edits CN‑Spec & certifies data | SoD breach | Enforce CN‑frameSteward ⊥ CN‑frameCertifier |
- Numbers travel with their Context. Always cite Context@Edition.
- If the normalization is not declared, the trend is fiction.
- WLNK beats wishful means. Use weakest‑link folds for safety.
- Admit → Assert → Act. (CN‑frame admission → RSG StateAssertion → Method step).
- Bridge or bust. Cross‑context = Bridge with CL and loss notes.
- Steward writes, Certifier admits. (SoD by design.)
- Charts are recipes. Name the MethodDescription that made the number.
- Deprecate in the open. CN‑frame cards carry DRR & retirement plans.
- Keep characteristics few, meanings sharp. Prefer ≤ 7 characteristics per CN‑frame.
- No tooling names in Core. Semantics first; notation later.
- Use method/instance IDs, not “gauge/map”. Prefer NormalizationMethodId / NormalizationMethodInstanceId.
These are concept‑level checks; notation‑agnostic.
- SCR‑A19.4‑S01 (Completeness). CN‑Spec has all mandatory slots; cs_basis includes unit/scale/polarity; chart references a MethodDescription.
- SCR‑A19.4‑S02 (Normalization clarity). normalization lists UNM/NormalizationMethod(s) with validity windows, method descriptions, and named invariants.
- SCR‑A19.4‑S03 (Comparability test). Provide one worked example showing coordinatewise or normalization‑based comparison end‑to‑end (with evidence anchors).
- SCR‑A19.4‑S04 (Γ‑fold audit). Aggregation rule spells out WLNK/COMM/LOC/MONO choices; reviewer reconstructs the result on a toy set.
- SCR‑A19.4‑S05 (SoD). Distinct RoleAssignments for CN‑frameStewardRole and CN‑frameCertifierRole exist; windows do not overlap.
- SCR‑A19.4‑S06 (Aboutness & anchors surfaced). For each CN‑Spec characteristic used in the worked example, cite the corresponding CHR Characteristic name and the evidence anchor(s) (A.10) that make the reading observable in this Context.
- RSCR‑A19.4‑R01 (UNM edit). On changing normalization (UNM/NormalizationMethod), flag all downstream Bridges for CL re‑assessment; re‑run example comparisons.
- RSCR‑A19.4‑R02 (Slot/basis surgery). Adding/removing/renaming a slot or basis requires a new edition; old data remain valid for their edition.
- RSCR‑A19.4‑R03 (Chart drift). Updating the measurement protocol bumps the edition; historic Work keeps the old edition link.
- RSCR‑A19.4‑R04 (Fold change). Any change to Γ_fold invalidates cached roll‑ups; re‑compute or mark as superseded.
- RSCR‑A19.4‑R05 (Bridge health). After either side’s edition change, re‑validate Bridge CL and loss notes before accepting cross‑context data.
- RSCR‑A19.4‑R06 (Deprecation rule). On deprecating a CN‑frame, the Registry lists its successor; bridges are re‑targeted or retired.
- A.2 / A.2.5 (Roles / RSG). RSG checklists quote CN‑Spec.acceptance; enactment gates rely on admitted CN‑frame data.
- B.1 (Γ‑algebra). CN‑Spec’s Γ_fold instantiates Γ_ctx/Γ_time/WLNK/MONO choices explicitly.
- B.3 (Assurance). Bridge CL enters the R term; WLNK protects safety roll‑ups.
- C.6 / C.7 (LOG‑CAL / CHR‑CAL). Units, scales, and measurement templates come from CHR; proofs about folds live in LOG‑CAL.
CN‑frame: <Name> Context: <Context/Edition>
characteristics:
- <CharacteristicName> : <Unit/Scale> [Polarity: up|down|target-range]
Chart:
reference_state: <text>
coordinate_patch: <domain/subset>
measurement_protocol_ref: <MethodDescriptionId>
Normalization:
UNM: <UNMId?>
methods: [<NormalizationMethodId>...]
method_descriptions: [<NormalizationMethodDescriptionRef>...]
invariants: [<property>...] # what ≡_UNM preserves
fix?: <NormalizationFixSpec> # canonical representative of the ≡_UNM class
Indicators (optional):
policy_ref: <IndicatorChoicePolicyRef>
resulting_indicators: [<IndicatorId>...] # selection is policy‑defined; NCVs alone do not make an Indicator
Comparability:
mode: coordinatewise | normalization-based
minimal_evidence: <what must be observed to compare>
Aggregation:
fold: <Γ_fold expr>
time_policy: <window, statistic>
WLNK/COMM/LOC/MONO: <declared choices>
Acceptance:
checklist: [<observable criterion>...]
window: <ISO 8601 interval>
evidence_anchors: [<Observation/Evaluation ids>...]
Alignment (optional):
bridges: [<BridgeId, CL, kept/lost characteristics, extra guards>...]
Lifecycle:
owner_role: <Role>
DRR_links: [<DRR ids>...]
deprecation_plan: <short note>
**Certification data Standard (minimal).** (For implementation completeness, not a user-facing concern.) At minimum, any state certification function should take as input: a specific holon (holder) in a given Role and Context and a time window, and it should have access to a snapshot of all relevant RCS coordinates for that Role (plus any other needed observations or speech‑act logs in that window). It should output a StateAssertion object that includes: the target state’s identifier, the checklist (or checklist ID) used, the verdict (pass/fail); evidence_kind ∈ {observation, prediction}; the window [t_from, t_to]; for prediction also the horizon Δt and the normalization snapshot used; and references to the supporting observations. If any NormalizationMethods or cross‑context Bridges were used, those MUST be referenced (IDs) so that any CL penalty (B.3) can be applied deterministically. This is to ensure traceability and to apply any assurance penalties. Invariants for this process include: no “secret” criteria (all checklist conditions must be based on observable data, not insider knowledge), versioning of checklists (if a checklist changes version, new assertions are tagged to the new version; old assertions aren’t retroactively modified), and immutability of past assertions (once recorded, a StateAssertion isn’t edited after the fact – if it was wrong or conditions changed, a new assertion is issued instead of altering history).
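A minimal sketch of that signature in Python follows; the field names track the paragraph above, while the record types, helper shapes, and the example checklist are illustrative assumptions:

```python
# Minimal sketch of the certification signature; field names follow the
# text above, the record types and the example checklist are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)          # frozen: past assertions are immutable
class StateAssertion:
    state_id: str
    checklist_id: str
    verdict: str                 # "pass" | "fail"
    evidence_kind: str           # "observation" | "prediction"
    window: tuple[str, str]      # [t_from, t_to], ISO 8601
    observation_refs: tuple[str, ...]
    normalization_ids: tuple[str, ...] = ()
    bridge_ids: tuple[str, ...] = ()
    horizon: str | None = None   # prediction only: Δt

def certify(holder_id, role_id, context_id, window,
            rcs_snapshot, checklist) -> StateAssertion:
    """No 'secret' criteria: every predicate reads only the snapshot.
    holder/role/context identify the RoleAssignment (kept for traceability)."""
    ok = all(pred(rcs_snapshot) for pred in checklist["predicates"])
    return StateAssertion(
        state_id=checklist["state_id"],
        checklist_id=checklist["id"],
        verdict="pass" if ok else "fail",
        evidence_kind="observation",
        window=window,
        observation_refs=tuple(rcs_snapshot.get("_refs", ())),
    )

snapshot = {"Porosity_ppm": 420, "_refs": ("OBS-1", "OBS-2")}
chk = {"id": "CHK-Ready", "state_id": "S-Ready",
       "predicates": [lambda s: s["Porosity_ppm"] <= 500]}
print(certify("H-1", "WelderRole", "Shop@2025-09",
              ("t0", "t1"), snapshot, chk).verdict)   # pass
```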
A.19.D1 gives A.19 some teeth: a CN‑Spec you can put on one page, a Registry that stops sprawl, Bridges that carry explicit loss, and a checklist + harness that make comparability auditable. It obeys the mandatory pattern structure of Part E (style, checklists, DRR, guard‑rails) while remaining tool‑agnostic and context‑local.
FPF views reality as a nested holarchy: parts → assemblies → systems → ecosystems; axioms → lemmas → theories → paradigms (this is only an example; the exact levels of the holarchy, as a hierarchy of holons, are not fixed and are project‑dependent). Each level is a U.Holon that becomes part of a wider holon one tier up — but only after an explicit act of construction has glued the parts together. That act is performed by a physical Transformer playing TransformerRole executing a method over an explicit Dependency Graph. Without a domain‑neutral law of composition binding these moves, the logical ladder between scales would break, violating the core rule Cross‑Scale Consistency.
If each discipline (or project team) invents its own way of “adding things up”, four lethal pathologies appear:
- Compositional Chaos — identical parts aggregated by two tools yield different wholes; parallel work becomes impossible.
- Brittle Dashboards — system‑level KPIs lie because the roll‑up silently hides the weakest component.
- Invalid Extrapolation — proofs that hold locally break globally; safety cases collapse on integration day.
- Emergence as Magic — genuine synergy (“whole > sum parts”) is indistinguishable from a modelling error.
All four are witnessed in post‑2015 incidents, from micro‑service outages to meta‑analysis retractions.
| Force | Tension |
|---|---|
| Universality vs Specificity | One algebra must work for pumps, proofs and policies ↔ each domain owns quirky edge‑cases. |
| Determinism vs Emergence | Predictable, order‑free folds ↔ need to legitimise authentic novelty. |
| Safety vs Synergy | Conservative Weakest‑Link bound ↔ modelling genuine redundancy wins. |
| Simplicity vs Fidelity | Five rules managers can remember ↔ enough depth for formal proof. |
| Auditability vs Overhead | Machine‑checkable Standard ↔ authors must show their invariants. |
FPF freezes one universal operator, Γ, and binds it to five non‑negotiable invariants. Compliance with the quintet is the ticket that lets any calculus, in any future discipline, plug into the holarchy.
Γ : (D : DependencyGraph, T : U.TransformerRole) → U.Holon
- D — a finite, acyclic graph of sibling holons at level k.
- T — an external U.TransformerRole (not a node of D); see A.12.
- Result: a new holon at level k + 1 whose boundary encloses every node of D.
Because Γ is externalised through T, the provenance chain stays intact, satisfying the Transformer Principle.
| Code | Invariant | Plain‑English headline | Why it matters |
|---|---|---|---|
| IDEM | Idempotence | One part alone stays itself. | Anchors recursion; stops base‑case drift. |
| COMM | Local Commutativity | Swap independent parts, nothing changes. | Enables divide‑and‑conquer builds. |
| LOC | Locality | Which worker or rack runs the fold is irrelevant. | Guarantees reproducible distributed runs. |
| WLNK | Weakest‑Link Bound | No claim may exceed the frailest part. | Keeps dashboards honest; caps hidden risk. |
| MONO | Monotonicity | Improving any part never hurts the whole. | Justifies “fix the bottleneck” optimisation. |
Mnemonic for managers: S‑O‑L‑I‑D → Same, Order‑free, Location‑free, Inferior‑cap, Don’t‑regress.
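The quintet can also be spot-checked mechanically. Here is a minimal sketch on a toy weakest-link fold; the reliability numbers and the min-fold are illustrative, not a normative Γ:

```python
# Toy demonstration (not a normative Γ): a weakest-link fold over part
# reliabilities, exhibiting four of the five invariants directly.
from itertools import permutations

def gamma_wlnk(parts: list[float]) -> float:
    """Fold sibling reliabilities into the whole's bound (WLNK = min)."""
    return min(parts)

parts = [0.99, 0.91, 0.97]

# IDEM: a singleton folds to itself.
assert gamma_wlnk([0.91]) == 0.91
# COMM/LOC: any ordering (hence any sharding) gives the same whole.
assert all(gamma_wlnk(list(p)) == 0.91 for p in permutations(parts))
# WLNK: the whole never exceeds the frailest part.
assert gamma_wlnk(parts) <= min(parts)
# MONO: improving one part cannot worsen the whole.
assert gamma_wlnk([0.99, 0.95, 0.97]) >= gamma_wlnk(parts)
print("quintet spot-checks pass:", gamma_wlnk(parts))
```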
Archetypal Grounding
The Invariant Quintet is not an abstract mathematical construct; it is a formalization of common-sense physical and logical realities that manifest across all domains.
| Invariant | U.System — Pump Skid Assembly | U.Episteme — Scientific Meta-Analysis |
|---|---|---|
| IDEM | An assembly of a single pump is just that pump, with its original specifications. | A review of a single study is just that study, with its original conclusions and evidence level. |
| COMM / LOC | Welding two independent pump modules to the skid in a different order or in different assembly bays results in an identical final product. | The conclusions of a meta-analysis are independent of the order in which two unrelated studies were added to the evidence pool. |
| WLNK | The maximum pressure rating of the entire pump skid is limited by the pressure rating of its weakest pump or connector. | The overall reliability of a synthesized theory is capped by the reliability of its least-supported foundational claim. |
| MONO | Replacing a standard motor with a more powerful, efficient one can only improve or maintain the skid's overall performance; it cannot make it worse. | Adding a new, high-quality study to a meta-analysis can only strengthen or maintain the overall confidence in its conclusion, never weaken it (unless it introduces a conflict). |
- Post‑2015 physics shows that renormalisation flows stabilise if and only if idempotence, locality and monotone bounds hold (Goldenfeld & Ho 2018).
- Distributed‑data research (Spark 3, Flink 1.19) proves COMM + LOC are prerequisites for deterministic sharding.
- Safety cases in aviation and ISO 26262 rewrote their risk roll‑ups around Weakest‑Link after 2021 audit failures.
Thus the quintet is simultaneously empirically vetted, mathematically minimal, and cognitively teachable.
Real redundancy can push a system above the WLNK ceiling (e.g., RAID 6 survives two disk deaths). FPF treats this not as a rule break but as a Meta‑Holon Transition (MHT): the redundant set is promoted to a fresh holon tier, and the quintet re‑applies there. The algebra stays pure; emergence becomes explicit, auditable design space. Details live in Pattern B.2 Meta‑Holon Transition (MHT): Recognizing Emergence and Re‑identifying Wholes (next in cluster).
The core signature of Γ never changes, but each discipline supplies a flavour that instantiates the quintet with domain‑appropriate mathematics and measurement units.
| Flavour | Typical domain | Dropped / relaxed invariants | Added compensating rules | Canonical reference model (post‑2015) |
|---|---|---|---|---|
| Γ_sys | Physical & cyber‑physical systems | None | – | ISO 15926‑2024 Plant Data roll‑up; NASA 2023 Integrated Hazard Model |
| Γ_epist | Knowledge graphs, meta‑analysis | None | Provenance weighting (PW‑1), Citation transparency (PW‑2) | OntoCommons 2024 audit trail |
| Γ_time | Time‑series forecasting, digital twins | COMM → partial; LOC waived | Coverage completeness (TS‑1), Temporal alignment (TS‑2) | EU Battery Passport 2025 reliability stack |
| Γ_ctx | Order‑sensitive processes, quantum pipelines, social surveys | COMM & LOC waived | Reproducibility hash (CTX‑1), Partial‑order soundness (CTX‑2), Observer log (CTX‑3) | CERN HL‑LHC workflow 2024 |
Didactic hint for managers: choose the flavour whose examples look like your own dashboards; then verify your tooling honours its extra rules.
- Parts: 72 nacelles, 72 towers, 1 export cable set.
- Graph: acyclic; each nacelle depends on its own tower, all depend on cable.
- Fold: Any parallel assembly order is legal → COMM, LOC.
- WLNK check: weakest nacelle (load factor = 0.91) bounds farm output ≤ 0.91 × rated.
- Upgrade test: swapping one nacelle to 0.95 raises farm bound — satisfies MONO.
Result: farm holon inherits predictable capacity curve; financiers can quote risk‑adjusted yield without bespoke simulation.
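A back-of-envelope check of the WLNK and MONO claims above (the per-nacelle rated power figure is an assumed illustration):

```python
# Illustrative arithmetic only; rated_mw per nacelle is an assumed figure.
rated_mw = 15.0
load_factors = [0.95] * 71 + [0.91]          # weakest nacelle at 0.91

farm_bound = min(load_factors) * rated_mw * len(load_factors)
print(f"WLNK bound: {farm_bound:.0f} MW")    # capped by the 0.91 nacelle

load_factors[-1] = 0.95                      # swap in the upgraded nacelle
upgraded = min(load_factors) * rated_mw * len(load_factors)
assert upgraded >= farm_bound                # MONO: improvement can't regress
print(f"after upgrade: {upgraded:.0f} MW")
```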
- Parts: 38 peer‑reviewed trials, 12 preprints.
- Graph: dependency edges encode shared cohorts; no cycles.
- Fold: trials merged irrespective of ingestion order → COMM; distributed evaluators may differ, but provenance hashes equalise weighting → LOC.
- WLNK: overall certainty cannot exceed the lowest GRADE score among included trials.
- Emergence: discovery of a consistent age‑interaction effect violates WLNK; reviewers declare MHT, elevating the combined dataset to a new holon “Evidence v2” with age‑stratified potency as a novel attribute.
Result: regulators see a transparent promotion of evidence tier rather than a hidden statistical artefact.
COMM holds only across non‑overlapping windows; LOC is waived because regional sensors differ in latency. Additional TS‑1/TS‑2 rules ensure gaps are filled before aggregation. Engineers iterate locally yet obtain one coherent five‑year projection.
| ID | Check | How to demonstrate (engineer‑manager view) | Typical evidence artefact |
|---|---|---|---|
| CL‑1 | Declare flavour (Γ_sys, Γ_epist, …) | Front‑page spec line | Pattern header |
| CL‑2 | Show quintet proof | Table mapping each invariant → test or theorem | PDF appendix, automated notebook |
| CL‑3 | Graph acyclicity | Static analysis or domain rule | Screenshot of tool report / manual argument |
| CL‑4 | External Transformer | Name the role (Contractor, editorial board, orchestration node) | Organogram, RACI sheet |
| CL‑5 | Emergence pathway | State MHT trigger criteria | Flowchart, decision table |
A proposal that skips any line of the checklist fails pattern B.1 and must iterate before peer review.
| Benefit (managerial) | Pay‑off path | Trade‑off | Mitigation |
|---|---|---|---|
| Clear risk ceiling at every roll‑up (WLNK) | Faster go/no‑go gates | May look pessimistic | Highlight redundancy, then invoke MHT |
| Parallel engineering without merge hell (COMM + LOC) | Shorter critical path | Requires origin hash discipline | Provide reference script templates |
| Continuous improvement strategies justified by MONO | Lean upgrade budgets | Cannot model negative synergies | Attach incentive to detect MHT events |
| Audit trail readable by non‑experts | Easier certification | Extra documentation overhead | Auto‑generate provenance footers |
The Invariant Quintet is the "renormalisation law" of FPF. It translates deep principles from physics, computer science, and engineering into a universal, algebraic Standard that governs composition in any domain.
- Physics & Renormalisation: The invariants mirror the laws of renormalisation group (RG) flows. IDEM, COMM, and LOC ensure that the aggregation is a well-behaved coarse-graining operation, while WLNK acts as a conservative bound on energy and risk, preventing "free lunch" synergies from appearing by mere arithmetic.
- Distributed Systems: The COMM and LOC invariants are the formal prerequisites for modern, large-scale distributed computing. Systems like Spark and Flink rely on the guarantee that data can be processed on independent workers in any order, and the final result will be deterministic.
- Systems Engineering & Safety: The WLNK and MONO invariants are cornerstones of safety-critical design. Fault-tree analysis and reliability engineering are built on the WLNK principle that a system is no stronger than its weakest link. The MONO principle provides the formal justification for iterative improvement ("Kaizen"): it guarantees that a local fix will not cause a global regression.
By elevating these cross-disciplinary insights to the level of a mandatory, constitutional Standard, FPF ensures that all composition within the framework is predictable, auditable, and physically plausible. It transforms aggregation from an ad-hoc, domain-specific art into a universal, repeatable science.
| Anti-Pattern | Symptom | Conceptual Fix |
|---|---|---|
| Averaging Risk | A dashboard shows a high overall reliability score for a system by averaging a high-reliability component with a low-reliability one. | Enforce the WLNK invariant. The aggregate reliability must be min(R_parts), not avg(R_parts). |
| Order-Dependent Builds | The same set of software modules produces a different final build depending on the compilation order. | Enforce COMM/LOC. Identify the hidden dependency between the modules and either remove it or make it explicit, moving to Γ_ctx if necessary. |
| Improvement Paradox | A team replaces a component with a better one, but a system-level KPI gets worse. | Enforce MONO. This indicates a hidden, negative coupling. The model must be updated to make this coupling an explicit constraint or interaction. |
| Synergy by Narrative | A claim is made that the whole is greater than the sum of its parts, without a formal mechanism. | This violates WLNK. If the synergy is real (e.g., due to redundancy or a new feedback loop), it must be modeled as a Meta-Holon Transition (Pattern B.2). |
- Builds on: Holonic Foundation, Transformer Principle, Open‑Ended Kernel.
- Enables: Meta‑Holon Transition (B.2), Calculus of Trust (B.3), Holonic Lifecycle Patterns (Cluster C).
- Refined by: Flavour sub‑patterns B.1.2 – B.1.4.
- Exemplifies: Pillars Cross‑Scale Consistency, State Explicitness, Ontological Parsimony.
Take‑home maxim: “Aggregation is never neutral; Γ makes its politics explicit and testable.”
In FPF, every aggregation is a material act:
Γ : (D : DependencyGraph, T : U.TransformerRole) → H′ : U.Holon
D is the only admissible input shape for Γ. It must capture part–whole structure faithfully (A.1, A.14) while staying neutral to order (handled by Γ_ctx / Γ_method), time (Γ_time), and accounting (Γ_work). If D is sloppy—mixing kinds of relations or scopes—Γ becomes unpredictable and the Quintet invariants (IDEM, COMM, LOC, WLNK, MONO) fail in subtle ways.
This pattern normatively defines DependencyGraph, the mereological vocabulary allowed on its edges, and the guards that make Γ provable and comparable across domains.
Without a disciplined DependencyGraph, four pathologies recur:
- Relation drift: Edges blur composition with mapping (e.g., “represents”), or confuse collections with parts. Aggregations then mix algebraic regimes (sums where mins are required, etc.).
- Boundary blindness: Cross‑holon influences are drawn as parts, bypassing explicit U.Boundary and U.Interaction. This corrupts locality (LOC) and defeats reproducible folding.
- Temporal conflation: design‑time and run‑time holons appear in one graph; simulations then “prove” facts about a blueprint using live telemetry.
- Hidden cycles: Self‑dependence enters through aliasing (e.g., a team is a member of itself via “units of units”). Γ cannot topologically fold such graphs.
| Force | Tension |
|---|---|
| Universality vs. Precision | One schema for systems and epistemes ↔ different composition logics (physical vs. conceptual) must be kept distinct but compatible. |
| Parsimony vs. Expressiveness | Keep the vocabulary small (A.11) ↔ include the minimal extra relations that engineers actually use (A.14). |
| Locality vs. Connectivity | Preserve boundary‑local reasoning (LOC) ↔ still allow explicit external influences via interactions, not parthood. |
| Static clarity vs. Dynamic use | Graphs must be easy to read and verify ↔ also feed Γ_ctx, Γ_time, Γ_method, Γ_work without duplications or mismatches. |
Definition.
DependencyGraph D = (V, E, scope, notes)
- V (nodes): each v ∈ V is a U.Holon with:
  - holonKind ∈ {U.System, U.Episteme}
  - temporalScope ∈ {design, run} (A.4) — single, uniform per D
  - a declared U.Boundary (A.14)
  - optional characteristics (e.g., F–G–R, CL, Agency metrics) for use by downstream patterns (B.1.2/3; B.3; A.13)
- E (edges): each e ∈ E is a mereological relation from the normative vocabulary V_rel (below).
- scope: the uniform temporal scope of the entire graph (design or run).
- acyclicity: D MUST be a DAG. Any cycle requires refactoring or elevation to a Meta‑Holon (B.2).
Strict distinction (A.15).
DependencyGraph encodes part‑whole only. Order goes to Γ_ctx/Γ_method. Time evolution goes to Γ_time. Resource spending goes to Γ_work. Cross‑boundary influence goes to U.Interaction (not parthood).
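A conceptual sketch of this definition and its guards, assuming a plain Python encoding; the representation itself is illustrative, only the rules it checks are normative:

```python
# Names mirror the definition above; the encoding is an assumption.
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    child: str
    parent: str
    relation: str   # must come from V_rel

V_REL = {"ComponentOf", "ConstituentOf", "PortionOf", "PhaseOf"}

def validate(nodes: dict[str, str], edges: list[Edge]) -> None:
    """Enforce the guards: uniform temporalScope, V_rel-only edges, DAG."""
    assert len(set(nodes.values())) == 1, "mixed design/run scope"
    assert all(e.relation in V_REL for e in edges), "non-mereological edge"
    # Kahn's algorithm: the graph folds only if it is acyclic.
    indeg = {n: 0 for n in nodes}
    for e in edges:
        indeg[e.parent] += 1
    frontier = [n for n, d in indeg.items() if d == 0]
    seen = 0
    while frontier:
        n = frontier.pop()
        seen += 1
        for e in edges:
            if e.child == n:
                indeg[e.parent] -= 1
                if indeg[e.parent] == 0:
                    frontier.append(e.parent)
    assert seen == len(nodes), "cycle detected: refactor or declare MHT"

validate({"Motor": "run", "Chassis": "run"},
         [Edge("Motor", "Chassis", "ComponentOf")])
print("D is a valid DependencyGraph")
```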
Only the following four mereological relations are allowed in E (A.14):
| Family | Relation | Short intent | Where it belongs |
|---|---|---|---|
| Structural | ComponentOf | physical/machined part in an assembly | Γ_sys |
| Structural | ConstituentOf | logical/content part in a conceptual whole | Γ_epist |
| Quantity & Phase | PortionOf | quantitative fraction of a homogeneous stock or carrier | Γ_sys / Γ_work |
| Quantity & Phase | PhaseOf | temporal phase/slice of the same carrier | Γ_time / Γ_work |
Not in V_rel (by design):
- SerialStepOf, ParallelFactorOf — order/concurrency edges of Γ_method/Γ_ctx; not parthood; keep them out of E (see § 4.1, A.15 and Part B.1.5).
- MemberOf — non‑mereological collective membership; model in Γ_collective (B.1.7), not in E (see § 9).
- RepresentationOf, MapsTo, Implements — these are mappings, not parthood; model them at the value level (A.15) or as U.Interaction between holons.
- RoleBearerOf — links a U.System to a U.Role; not parthood (see A.12, A.15).
- Any “is‑a” (subClassOf) taxonomic relation — orthogonal to parthood.
| Relation | Axioms (informal) | Guards / When to use |
|---|---|---|
| ComponentOf | anti‑symmetric; transitive; acyclic | Physical assemblies; interfaces compose via BIC (B.1.2). Do not use for collections or pipelines. |
| ConstituentOf | anti‑symmetric; transitive; acyclic | Conceptual or formal wholes (papers, proofs, specifications). Do not use for material parts. |
| MemberOf (outside V_rel) | not transitive; anti‑symmetric (w.r.t. same collection); acyclic | Sets/teams/libraries; the whole is a collective holon. Not admissible in E; model via Γ_collective (B.1.7). Use PortionOf for homogeneous stocks. |
| PortionOf | anti‑symmetric; additive; acyclic | Quantitative partitions of a homogeneous carrier (mass, volume, bytes). Requires an extensive attribute. |
| PhaseOf | anti‑symmetric; covers a timeline; acyclic | Time‑slices of the same carrier identity. Use only with explicit carrier and non‑overlapping intervals. |
Carrier identity for PhaseOf. The “same thing across phases” must be explicit (e.g., this frame across heat/dwell/quench; this theory across revisions). If identity changes, you are modelling a Transformer creating a new holon (A.12) — not a phase.
Use this one‑page decision guide to pick the edge correctly (a sketch of the dispatcher follows the anti‑pattern list below):
1. Is it a part–whole relation at all? If it is mapping, influence, or reference → not parthood. Use U.Interaction or value‑level links (A.15).
2. Is it physical vs. conceptual composition? Physical assembly → ComponentOf. Conceptual/content inclusion → ConstituentOf.
3. Is it a collection? If the “whole” is a collection/collective → MemberOf (outside E; route to Γ_collective (B.1.7)). Note: a team’s members are MemberOf (outside E); the team’s tools are likely ComponentOf.
4. Is it order‑sensitive execution? If step order changes semantics → route to A.15 (ordered relations) and aggregate with Γ_ctx / Γ_method. Do not encode order as parthood in this section.
5. Is it a quantitative fraction of a homogeneous stock? If yes → PortionOf (requires an extensive attribute; use in Γ_sys / Γ_work).
6. Is it the same carrier across time? If yes → PhaseOf (then aggregate with Γ_time / Γ_work).

Common anti‑patterns and the fix:
- Using MemberOf for material stocks → replace with PortionOf.
- Drawing cross‑boundary “parts” → replace the edge with U.Interaction plus ComponentOf inside each holon.
- Using ConstituentOf for a module cage or bracket → that is ComponentOf.
- Treating representation (file ↔ thing) as parthood → keep as a value‑level mapping (A.15), not in D.
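The guide compresses naturally into a dispatcher. A minimal sketch, assuming boolean answers to the six questions; the function and parameter names are illustrative:

```python
# The decision guide as a tiny dispatcher (illustrative; the answer keys
# paraphrase the numbered questions above, the function is an assumption).
def pick_edge(*, parthood: bool, physical: bool = False,
              collection: bool = False, ordered: bool = False,
              homogeneous_fraction: bool = False,
              same_carrier_over_time: bool = False) -> str:
    if not parthood:
        return "U.Interaction / value-level link (A.15) — not an edge in E"
    if collection:
        return "MemberOf — outside E, route to Γ_collective (B.1.7)"
    if ordered:
        return "SerialStepOf/ParallelFactorOf — route to Γ_ctx/Γ_method"
    if homogeneous_fraction:
        return "PortionOf"
    if same_carrier_over_time:
        return "PhaseOf"
    return "ComponentOf" if physical else "ConstituentOf"

print(pick_edge(parthood=True, physical=True))      # ComponentOf
print(pick_edge(parthood=True, collection=True))    # MemberOf (outside E)
```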
Purpose. Provide a minimal constructional generator for structural mereology that keeps the kernel small (C-5), aligns with A.14 (Portions/Phases/Components discipline), and feeds Working-Model layer publication in LOG without importing tooling or notations.
Operators (aggregators).
Γ_m.sum(parts : Set[U.Entity]) → W : U.Holon // for each p ∈ parts assert internal U.KernelPartOf(p, W)
Γ_m.set(elems : Multiset[U.Entity]) → C : U.Holon // for each e ∈ elems assert internal U.KernelPartOf(e, C) // outward MemberOf remains a non‑mereological signal per A.14 (does not build holarchies)
Γ_m.slice(ent : U.Entity, facet : U.Facet) → S : U.Holon // assert internal U.KernelPartOf(S, ent) and record facet label
Trace (conceptual, notation‑independent).
Trace = ⟨ op ∈ {sum, set, slice}, inputs, output, notes ⟩
Notes capture boundary tags (A.14), scope (design|run), and any independence declarations used by the Quintet proofs (below).
Invariant footprint on Γ_m traces (inherits B.1 Quintet).
- IDEM — singleton fold returns the part unchanged.
- COMM/LOC — results are invariant under re‑order and local factorisation given an independence declaration (IND‑LOC).
- WLNK — aggregate cannot exceed the weakest limiting attribute among parts; synergy escalates via B.2 Meta‑Holon Transition.
- MONO — improving a part on a monotone characteristic cannot worsen the whole, ceteris paribus.
Exclusions and routing (A.15/A.14).
No parallel or temporalSlice constructor is introduced here; sequence/parallelism live in Γ_ctx/Γ_method, and temporal parts in Γ_time. This preserves the firewall between structure, order and time mandated by A.15/A.14.
Internal proof relation.
U.KernelPartOf names the constructional edges inside traces; it is not part of the public V_rel and appears only in the trace/proof narrative (didactic status [D]).
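A minimal sketch of Γ_m.sum emitting such a trace, assuming a plain record encoding; the Trace shape mirrors the conceptual tuple above, everything else is illustrative:

```python
# Illustrative only: Γ_m.sum producing a conceptual Trace; U.KernelPartOf
# appears only inside the trace narrative, never in the public V_rel.
from dataclasses import dataclass, field

@dataclass
class Trace:
    op: str
    inputs: tuple[str, ...]
    output: str
    notes: dict = field(default_factory=dict)

def gamma_m_sum(parts: set[str], whole: str, scope: str = "design") -> Trace:
    """Assert internal U.KernelPartOf(p, whole) for each part; return trace."""
    assert parts, "IDEM needs at least the singleton case"
    return Trace(op="sum", inputs=tuple(sorted(parts)), output=whole,
                 notes={"scope": scope,
                        "kernel_edges": [f"U.KernelPartOf({p}, {whole})"
                                         for p in sorted(parts)]})

t = gamma_m_sum({"Motor", "Harness"}, "DriveUnit")
print(t.op, t.output, t.notes["kernel_edges"])
```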
- Single temporal scope: all nodes in D share design or run. No mixing (“chimera” graphs are invalid).
- Declared boundary: every holon in D has a U.Boundary; any cross‑holon influence must be an explicit U.Interaction, not parthood.
- Acyclicity: if a cycle is detected, either (a) refactor (e.g., split a collective from an assembly), or (b) escalate to Meta‑Holon Transition (B.2) if a new “whole” with novel properties is intended.
- Order & time routing: do not encode sequence or history with structural edges; route to Γ_ctx / Γ_method / Γ_time explicitly.
- Resource routing: do not encode costs with structural edges; route to Γ_work (B.1.6) across declared boundaries.
Each Γ flavour (Γ_sys / Γ_epist / Γ_method / Γ_time / Γ_work) must attach a small, reusable Proof Kit showing the Quintet on the given D:
- IDEM: singleton fold = identity.
- COMM/LOC: independence conditions + invariance under local reorder/factorisation.
- WLNK: weakest‑link bound (e.g., critical input caps, weakest claim).
- MONO: explicit monotone characteristics (what “cannot get worse” means here).
- System (assembly): a motor ComponentOf a chassis; wiring harness ComponentOf the motor; a crew MemberOf a team holon (the crew is not a component of the chassis).
- Episteme (paper): a lemma ConstituentOf a proof; appendices ConstituentOf the paper; three datasets MemberOf a curated collection; version v2 PhaseOf the same model.
This section provides small, reusable proof obligations you attach to a DependencyGraph D when invoking any Γ‑flavour. Each obligation is minimal—just enough to guarantee the Invariant Quintet for the stated scope and edge set.
Obligation IND‑LOC. Provide a partition of D into subgraphs {Dᵢ} such that:
- Their node sets are disjoint (no shared holon instances).
- Their boundaries are disjoint (no shared ports) or any shared internal stock is lifted to the parent boundary in notes.
- No edge in E crosses partitions except via explicit U.Interaction (not parthood).
Claim: Under IND‑LOC, Γ’s fold result is invariant to local fold order within and across {Dᵢ}.
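A spot-check of IND-LOC on a toy graph; a minimal sketch, assuming edges tagged as parthood or interaction (all names illustrative):

```python
# Illustrative check: partitions must have disjoint node sets, and only
# U.Interaction edges may cross the cut.
def ind_loc_ok(partitions: list[set[str]],
               edges: list[tuple[str, str, str]]) -> bool:
    """edges: (child, parent, kind) with kind 'parthood' or 'interaction'."""
    all_nodes = [n for part in partitions for n in part]
    if len(all_nodes) != len(set(all_nodes)):
        return False                      # shared holon instances
    where = {n: i for i, part in enumerate(partitions) for n in part}
    return all(kind == "interaction" or where[a] == where[b]
               for a, b, kind in edges)   # parthood never crosses a cut

print(ind_loc_ok([{"Motor", "Chassis"}, {"Cabin"}],
                 [("Motor", "Chassis", "parthood"),
                  ("Cabin", "Chassis", "interaction")]))   # True
```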
Obligation WLNK‑CUT. Enumerate a critical set C ⊆ V ∪ E (nodes/edges) such that failure (or insufficiency) of any element of C makes the aggregation invalid or unsafe in the chosen Γ‑flavour.
Claim: For the target property, the result for the whole is bounded by the minimum (or tightest cap) across C.
Examples:
• Γ_sys → tensile strength cutset along a load path;
• Γ_epist → weakest supported premise in a proof spine;
• Γ_work → availability caps for required inputs across the boundary.
Obligation MONO‑AX. Declare the monotone characteristics (attributes whose improvement cannot worsen the whole) for this call. Specify how “improvement” is recognized.
Claim: If only monotone characteristics change in the direction of improvement while all else is fixed, the aggregate’s target value cannot degrade.
Examples: • Γ_sys → increased component reliability, tighter tolerance; • Γ_epist → stronger evidence, higher formality; • Γ_method → reduced step duration, stronger step assurance; • Γ_time → added non‑overlapping coverage; • Γ_work → higher yield η, reduced dissipation.
Obligation IDEM‑WIT. Provide the singleton case: a subgraph D₁ with one node and no admissible composition edges.
Claim: Γ(D₁) returns that node’s property unchanged.
Obligation SCOPE‑1. Affirm temporalScope(D) ∈ {design, run} and that all nodes share it.
Obligation BOUND‑1. List the U.Boundary for each top‑level holon in V and record any U.Interaction edges that are relevant but not part of E (to show cross‑boundary influences were not mis‑typed as parthood).
| Γ‑flavour | Independence (IND‑LOC) | WLNK‑CUT (what is “critical”) | MONO‑AX (what cannot make worse) | IDEM‑WIT | Notes |
|---|---|---|---|---|---|
| Γ_sys | Disjoint subassemblies with disjoint interfaces (BIC respected) | Structural cutset on load/flow paths | ↑ component reliability/capacity; tighter bounds | Single module | Use BIC to keep interfaces explicit. |
| Γ_epist | Independent argument subgraphs; no premise reuse across partitions | Weakest premise/claim on entailment spine | ↑ formality; ↑ reliability of sources; ↑ congruence | Single section/lemma | Apply Φ(CL_min) penalty only where mappings/links are weak. |
| Γ_ctx / Γ_method | Parallel branches truly independent (no hidden state) | Slowest/least reliable step on the critical path | ↓ duration; ↑ step assurance; ↑ join soundness | Single step | COMM relaxed to partial orders (NC‑1..3). |
| Γ_time | Non‑overlapping time slices; same carrier identity | Missing slice creates a gap (temporal WLNK) | ↑ coverage; ↑ timestamp precision | Single slice | Phases must cover the window without overlap. |
| Γ_work | Disjoint boundary partitions; shared stocks lifted to parent | Availability caps for required inputs across boundary | ↑ yield; ↓ dissipation; ↑ availability | Single resource with no delta | Keep Boundary Ledger with basis and time window. |
Attach the row(s) you use as the Proof Kit to the Γ call record.
Each row is self‑contained and can be used as a template.
| Aspect | Example |
|---|---|
| Graph | Motor ComponentOf Chassis; Harness ComponentOf Motor; (for method demo only, outside D) QC SerialStepOf Seal; all nodes scope=run; BIC declares ports for power, data. |
| Independence | Two subassemblies: {Chassis, Motor, Harness} and {Cabin} with disjoint interfaces. |
| WLNK‑CUT | Tensile path through front mount + harness connector; weakest tensile rating caps assembly load rating. |
| MONO‑AX | Improving mount alloy or connector strain relief cannot reduce system load rating. |
| IDEM‑WIT | Standalone chassis as singleton: Γ_sys returns chassis unchanged. |
| Routing | SerialStepOf belongs to Γ_method; Γ_sys ignores order and composes structure; Γ_work separately composes energy/material costs through boundary ports. |
| Aspect | Example |
|---|---|
| Graph | Lemma1 ConstituentOf ProofA; DatasetX MemberOf CorpusQ; ProofA ConstituentOf PaperP; scope=design. |
| Independence | Two argument branches that do not reuse premises: {Lemma1 → ProofA} and {Background → Discussion}. |
| WLNK‑CUT | The least supported premise in the entailment path to the main theorem. |
| MONO‑AX | Replacing a weak premise with a stronger one or raising CL of a mapping cannot reduce overall credibility. |
| IDEM‑WIT | Single lemma as singleton: Γ_epist returns it unchanged. |
| Routing | MemberOf for CorpusQ is collection structure; not used to average “truth”. Γ_epist aggregates via min/penalty and produces a SCR for sources. |
| ID | Requirement | Purpose |
|---|---|---|
| CC‑B1.1.1 | D SHALL be acyclic (DAG). | Ensure foldability. |
| CC‑B1.1.2 | All nodes in D SHALL share a single temporalScope ∈ {design, run}. | Ban design/run chimeras. |
| CC‑B1.1.3 | All edges in E SHALL belong to the normative V_rel (ComponentOf, ConstituentOf, PortionOf, PhaseOf only). | Keep mereology crisp and finite. |
| CC‑B1.1.4 | Cross‑holon influences SHALL be modelled as U.Interaction, NOT parthood. | Preserve locality (LOC). |
| CC‑B1.1.5 | Every top‑level holon SHALL declare a U.Boundary; if Γ_work will be used, a Boundary Ledger SHALL be produced. | Make results comparable/auditable. |
| CC‑B1.1.6 | If COMM/LOC is claimed, an IND‑LOC independence declaration SHALL be attached. | Make locality explicit. |
| CC‑B1.1.7 | A WLNK‑CUT set SHALL be stated for the chosen Γ‑flavour. | Make caps explicit; avoid optimism. |
| CC‑B1.1.8 | MONO‑AX SHALL enumerate the monotone characteristics used by the Γ‑flavour. | Avoid hidden regress. |
| CC‑B1.1.9 | An IDEM‑WIT singleton case SHALL be shown or referenced. | Ground identity. |
| CC‑B1.1.10 | Order/time/resource SHALL NOT be encoded via structural edges; they must be routed to Γ_ctx/Γ_method, Γ_time, Γ_work respectively. | Maintain A.15 Strict Distinction. |
| CC‑B1.1.11 | If a cycle or a locality violation persists, the modeller SHALL either refactor or declare a Meta‑Holon Transition (B.2). | Make emergence explicit. |
| CC‑B1.1.12 | Any mapping edges (RepresentationOf, Implements, etc.) SHALL be kept outside E (value‑level), or recast as U.Interaction if cross‑boundary. | Eliminate category errors. |
| Anti‑pattern | Symptom | Replace with |
|---|---|---|
| Collection as stock | Cell_i MemberOf Battery then summing “capacity” via MemberOf | Use PortionOf for capacity partitions; use ComponentOf for physical pack assembly; keep MemberOf only for the set of cells as a collection holon. |
| External supplier as part | PowerGrid ComponentOf Plant | Model PowerGrid as an external holon with U.Interaction at the plant boundary; keep plant internals as ComponentOf. |
| Order encoded as structure | Step2 ComponentOf Step1 | Use SerialStepOf/ParallelFactorOf and Γ_method. |
| History encoded as structure | v2 ComponentOf v1 | Use PhaseOf for time slices of the same carrier, or a Transformer creating a new holon (A.12) if identity changes. |
| Mapping as parthood | DigitalTwin ConstituentOf Turbine | Keep the twin as a separate holon; link by U.Interaction and value‑level mapping; do not use parthood. |
| Design/run chimera | Mix of CAD nodes and telemetry nodes | Split into two graphs (design vs run) and connect via a Transformer role if needed. |
Benefits
- Predictable composition: Γ‑folds are reproducible and auditable across domains.
- Cross‑scale clarity: Resource and time additivity are preserved by routing to Γ_work and Γ_time.
- Safer modelling: WLNK cutsets surface true constraints; emergence is not “smuggled in”.
- Didactic simplicity: A small, fixed edge vocabulary makes reviews and onboarding faster.
Trade‑offs / mitigations
- Up‑front discipline: Declaring boundaries and independence requires effort. Mitigation: reuse the Proof Kit templates; keep small, local graphs and compose.
- Refactoring legacy edges: Replacing “generic part‑of” with precise relations can be noisy. Mitigation: use the decision guide (4.4) and anti‑pattern table (9) as a script.
This pattern operationalizes A.14 (Mereology Extension) and A.15 (Strict Distinction) for the universal algebra of B.1. By limiting E to four well‑formed mereological relations, we prevent the three recurrent category errors: mapping ≠ parthood, order/time ≠ structure, collection ≠ stock. The Proof Kit converts the Quintet from abstract slogans into concrete obligations that engineers can check in everyday models. Γ‑flavours then remain simple and domain‑appropriate, while proofs remain small and reusable.
- Builds on: A.1 Holonic Foundation; A.14 Mereology Extension; A.15 Strict Distinction; A.12 Transformer Principle.
- Constrained by: B.1 Universal Γ and the Invariant Quintet.
- Used by: B.1.2 Γ_sys, B.1.3 Γ_epist, B.1.4 Γ_ctx/Γ_time, B.1.5 Γ_method, B.1.6 Γ_work.
- Triggers: B.2 Meta‑Holon Transition (MHT): Recognizing Emergence and Re‑identifying Wholes when cycles or WLNK violations indicate a new emergent whole.
- Feeds: B.3 Trust & Assurance Calculus (F–G–R with Congruence) via explicit declaration of monotone characteristics and provenance.
One‑page takeaway. Keep D a DAG, pick edges from the four mereological relations, route order/time/cost to their Γ‑flavours, and attach the four Proof Kit obligations (IND‑LOC, WLNK‑CUT, MONO‑AX, IDEM‑WIT) with scope/boundary notes. Do this, and the Quintet holds with minimal fuss.
► decided‑by: A.14 Advanced Mereology A.14 compliance — Treat PortionOf as Σ‑additive stocks; ComponentOf must respect boundary integration (BIC); PhaseOf is not aggregated here (handled by Γ_time); mapping/representations are not parthood.
Γ_sys is the default flavour of the universal aggregation operator for everything that engineers can touch, weigh or wire up: bridges, battery packs, data‑centre racks, container clusters.
It translates the abstract Invariant Quintet into three physically meaningful fold rules—additive, limiting, boolean—and a Boundary‑Inheritance Standard (BIC) that keeps external interfaces tidy. Together they guarantee that holons built with Γ_sys obey conservation laws, expose a clean API surface and pass safety audits without manual patching.
Kernel § 6 defines U.System and states that only a Calculus may own an aggregation operator. Sys‑CAL (Part C.1) exports Γ_sys as its single builder; other CALs (KD‑CAL, Method‑CAL …) reuse the same quintet but swap in domain rules.
Draft 20 Jul 25 already lists default fold policies (Σ, min, ∨/∧) and a cut‑stable axiom; this pattern turns those snippets into a teachable Standard for day‑to‑day system design.
| Field failure | Algebraic root cause |
|---|---|
| “Phantom megawatts” — energy sums higher than fuel input | Temperatures averaged, masses summed; operator ignored conservation. |
| Interface Medusa — hundreds of dangling ports after integration | No rule for boundary promotion vs encapsulation. |
| Safety inversion — upgraded actuator lowered SIL rating of the skid | Intensive property (safety) aggregated by average, not min. |
| Audit hairball — inspector cannot trace which crane load went where | Boundary cuts not stable; provenance leaks. |
All four break Pillars Cross‑Scale Consistency and State Explicitness.
| Force | Pull | Push |
|---|---|---|
| Physical plausibility | Sum masses, conserve energy | Abstraction — keep rules domain‑agnostic |
| Interface clarity | Present one clean API | Fidelity — expose every critical port |
| Safety conservatism | Take worst‑case rating | Performance — allow redundancy gains (via MHT later) |
| Parallel build | Shard assembly, cache results | Boundary realism — stress must still balance across cuts |
Γ_sys : (D : DependencyGraph[U.System], T : U.TransformerRole (plays AssemblerRole)) → E_eff : U.System
- D – a finite acyclic graph whose nodes share one temporal scope and obey the four DG rules (Pattern B.1.1).
- T – a physically real external system playing TransformerRole (e.g., crane, welding rig).
| Class | Fold rule | Typical examples | Invariants touched |
|---|---|---|---|
| Extensive | Σ (sum) | Mass, energy, cost | IDEM · COMM · LOC · MONO |
| Intensive / Risk | min (weakest‑link) | Temperature limit, SIL, encryption bits | WLNK · MONO |
| Boolean / Capability | ∨ / ∧ (OR for vuln, AND for must‑hold) | CVE exposure, “Has EmergencyStop” | WLNK |
Rule of thumb for managers: If it adds up in your spreadsheet → Σ; if it caps the system → min; if it is yes/no → logic gate. Defaults match the kernel table “Additive flow / Capacity / Boolean capability”.
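The three fold rules in one sketch, assuming toy attribute names and the default classifications from the table above:

```python
# Illustrative Γ_sys fold: extensive -> Σ, intensive -> min,
# boolean 'must-hold' -> AND, boolean 'vulnerability' -> OR.
def gamma_sys(parts: list[dict]) -> dict:
    return {
        "mass_kg":      sum(p["mass_kg"] for p in parts),        # Σ
        "temp_limit_C": min(p["temp_limit_C"] for p in parts),   # min (WLNK)
        "has_estop":    all(p["has_estop"] for p in parts),      # ∧ must-hold
        "cve_exposed":  any(p["cve_exposed"] for p in parts),    # ∨ exposure
    }

pump_a = {"mass_kg": 120, "temp_limit_C": 90, "has_estop": True,
          "cve_exposed": False}
pump_b = {"mass_kg": 135, "temp_limit_C": 75, "has_estop": True,
          "cve_exposed": True}
print(gamma_sys([pump_a, pump_b]))
# {'mass_kg': 255, 'temp_limit_C': 75, 'has_estop': True, 'cve_exposed': True}
```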
For every external interaction of every part, Γ\_sys forces a deliberate choice:
- Promote — port becomes part of the new system boundary.
- Forward — port remains on the child but is namespaced by the parent.
- Encapsulate — port becomes internal and disappears from public view.
BIC is the antidote to Interface Medusa: it prevents silent loss of obligations or explosion of unmanaged endpoints.
Given any declared boundary 𝔅, Γ_sys(D, T) MUST leave every across‑𝔅 interaction either identical or transformed by a rule that still satisfies the Quintet.
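BIC itself can be carried as plain data. A minimal sketch, assuming a dict encoding with illustrative port names:

```python
# BIC as data (illustrative): every external port must carry exactly one
# of the three decisions; anything undecided fails the fold.
BIC_CHOICES = {"promote", "forward", "encapsulate"}

def apply_bic(ports: dict[str, str]) -> dict[str, list[str]]:
    """Return the public boundary after folding; raise on silent ports."""
    undecided = [p for p, c in ports.items() if c not in BIC_CHOICES]
    if undecided:
        raise ValueError(f"Interface Medusa risk, undecided: {undecided}")
    return {
        "boundary":   [p for p, c in ports.items() if c == "promote"],
        "namespaced": [p for p, c in ports.items() if c == "forward"],
        "internal":   [p for p, c in ports.items() if c == "encapsulate"],
    }

print(apply_bic({"dc_out": "promote", "fuse_access": "forward",
                 "cell_balance": "encapsulate"}))
```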
Audience: lead engineer planning a multi‑team build; QA manager preparing an audit; analyst running a quick what‑if. Goal: fold a ready Dependency Graph into one coherent system in five repeatable moves.
| Step | What you do | Why it matters |
|---|---|---|
| 1 · Verify the graph | Run the Pattern B.1.1 checklist (acyclic, typed edges, same scope, boundary tags). | Avoid paradoxes before they snowball. |
| 2 · Label attributes | For every property in every node, mark it Extensive, Intensive, or Boolean. Defaults are in Sys‑CAL cheat‑sheet. | The fold rule depends on this label. |
| 3 · Decide the BIC | For each external port, pick Promote / Forward / Encapsulate. Record choice in the interface table. | Keeps APIs intentional and auditable. |
| 4 · Execute Γ_sys | Extensive → parallel Σ; Intensive → propagate min; Boolean → ∧/∨ logic. | Implements the Invariant Quintet. |
| 5 · Run Cut‑Stable test | For each declared boundary 𝔅, compare across‑𝔅 interactions before & after fold. | Confirms that sharding or outsourced work didn’t shift loads or responsibilities. |
If the min rule is exceeded by design (e.g., triple redundancy boosts SIL beyond any part), stop here and initiate Meta‑Holon Transition (Pattern B.2) to formalise emergence.
| Step | Snapshot |
|---|---|
| Graph | 16 modules → 4 strings → pack. Edges ComponentOf. All nodes scope=design. |
| Attribute label | Extensive: energy (kWh), cost; Intensive: cell voltage limit, fire rating (SIL 2); Boolean: “Has self‑heating”. |
| BIC decisions | Main DC output ‑ Promote; per‑string fuse access ‑ Forward; cell balancing ports ‑ Encapsulate. |
| Fold | Σ energy = 628 kWh; min voltage limit = 4.25 V; ∧ self‑heating = true. |
| Cut‑Stable | Across‑string current same pre/post fold. Pass. |
| Outcome | Pack spec delivered to vehicle OEM; audit shows WLNK bound 4.25 V, MONO intact; financial model reads energy Σ for range calc. |
| ID | Question | Pass if… |
|---|---|---|
| CHK‑GC‑1 | All properties classified? | No “unknown” label remains. |
| CHK‑GC‑2 | Any property violate its fold rule? | None; else declare MHT. |
| CHK‑GC‑3 | BIC table complete? | Every external port accounted for. |
| CHK‑GC‑4 | Cut‑Stable test green on all declared boundaries? | Yes. |
| CHK‑GC‑5 | Provenance hash stamped? | E_eff.meta.provenance populated. |
Failing a line means the operator must refactor the graph or escalate to Meta‑Holon before reuse.
| Benefit for project leadership | Secondary effect |
|---|---|
| Plausible mass‑energy books — no “phantom capacity” during tender negotiations. | Vendor bids align faster; fewer change orders. |
| Single‑page interface sheet — the BIC doubles as hand‑over Standard to next tier supplier. | Interface churn caught early; legal exposure shrinks. |
| Safety‑first roll‑up — weakest‑link bound surfaces brittle parts immediately. | QA budget aimed at right module; no gold‑plating. |
| Seamless parallel builds — COMM + LOC proven once, reused by every sub‑contractor. | Integration rehearsals shortened by weeks. |
- Model‑Based Systems Engineering (MBSE 2023‑2025): Tools like Cameo Systems Modeler automated Σ/min logic via “Property Kind” stereotypes—Γ_sys formalises the same trick.
- Safety audits: ISO 26262‑2 Ed 3 explicitly adopts “minimum of ASIL ratings” rule; our min fold embeds it by design.
- Interface control: Aerospace ICDs (NASA‑7120.5E updates 2024) require a promotion/forward/encapsulate decision tree identical to BIC.
- Cloud operations: Kubernetes 1.30 resource quotas implement additive CPU/memory and min PodDisruptionBudget—industrial proof that the schema scales.
Real‑world convergence across steel, silicon and software shows the rules are not theory nice‑to‑haves; they are what successful projects already do—Γ_sys just makes it explicit, automatic and auditable.
- Builds on: Dependency Graph (B.1.1); Transformer Principle (A.3).
- Enables: Meta‑Holon Transition (B.2); Calculus of Trust (B.3).
- Refined by: Γ_epist (B.1.3) for knowledge artefacts; Γ_time / Γ_ctx (B.1.4) for temporal or context‑sensitive domains.
- Exemplifies: Pillars P‑8 Cross‑Scale Consistency, P‑9 State Explicitness.
Take‑away for engineering managers: “Classify, Standard, fold—then sleep easy knowing the numbers and the interfaces will still match tomorrow.”
► decided‑by: A.14 Advanced Mereology A.14 compliance — Use ConstituentOf for semantic parts; PortionOf only for quantitative splits of texts/data with declared μ (token/byte, etc.); PhaseOf for versions/revisions of MethodDescription/documents; no ComponentOf here.
Plain‑English headline. Γ_epist composes epistemic holons (claims, models, datasets, arguments) into a single episteme while preserving provenance, applying conservative trust bounds (B.3 F/G/R), and penalizing poor conceptual fit via congruence levels (CL). It is not a physical sum; it is a semantic and evidential fold.
- Holonic foundation. In the FPF, a U.Episteme is a holon whose identity is knowledge‑bearing (A.1). It can be a statement/claim, a model, a theory, a specification, a dataset with semantics, or a compiled scholarly artifact.
- Strict Distinction (A.15). We separate: structure (what the episteme comprises), order (argument flow), time (versioning/phases), work (what was spent to produce/validate it), and values (objectives/criteria). Γ_epist stays in the structure/semantics lane and calls out to Γ_ctx/Γ_time/Γ_work when needed.
- Mereology (A.14). For knowledge composition we primarily use ConstituentOf (logical/semantic parts), UsageOf/ReferenceTo (external reliance), and MemberOf for collections (anthologies, corpora). We do not use ComponentOf (physical) in Γ_epist. PhaseOf handles temporal versions of the same episteme; RoleBearerOf is irrelevant here because knowledge does not play a role—it is used by a holon‑in‑role (Transformer) at run‑time (A.12).
- Assurance (B.3). Knowledge carries F, G, R (Formality, ClaimScope, Reliability). Integration edges carry CL (congruence level) that penalizes poor fit. Γ_epist must preserve provenance and apply conservative bounds: no “truth averaging,” no silent context hops. Obligations here are mode/assurance‑gated per C.2.1. # [M‑0]
- Order/time flavours. Argument sequences may need Γ_ctx (non‑commutative ordering of premises to conclusion). Knowledge evolution uses Γ_time (versioning, deprecation, update). When composition produces new closure or supervision (e.g., explanatory theory emerges), we declare MHT (B.2).
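A minimal sketch of the conservative fold, assuming a 0–1 reliability scale and a toy Φ(CL) table; B.3 remains the normative source, and note that no averaging appears anywhere:

```python
# Conservative trust fold for Γ_epist (illustrative): reliability is capped
# by the weakest constituent, then penalised by every mapping's Φ(CL).
def phi(cl: int) -> float:
    return {0: 0.25, 1: 0.5, 2: 0.8, 3: 1.0}[cl]   # toy penalty table

def gamma_epist(claims: list[dict], mapping_cls: list[int]) -> float:
    """R(whole) = min over claims, then Φ(CL) penalties for each mapping.
    Never an average: conflicting weak claims cannot 'buoy up' the fold."""
    r = min(c["R"] for c in claims)                # WLNK on reliability
    for cl in mapping_cls:
        r *= phi(cl)                               # congruence penalties
    return r

claims = [{"id": "lemma1", "R": 0.9}, {"id": "datasetX", "R": 0.7}]
print(gamma_epist(claims, mapping_cls=[2]))        # 0.7 * 0.8 = 0.56
```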
Naive aggregation of knowledge holons causes recurring failures:
- Trust inflation by averaging. Averaging confidences of conflicting claims creates a falsely “reliable” whole; violates WLNK and B.3 conservatism.
- Provenance erasure. Merges that drop sources, methods, or links break A.10 Evidence Anchoring and make results unauditable.
- Semantic drift. Folding across mismatched concepts without explicit mappings (and their CL) yields incoherent composites that look formal but mean nothing.
- Order blindness. Arguments with essential dependency order (premise ⇒ lemma ⇒ conclusion) are treated as sets; non‑commutativity is lost and results become non‑reproducible.
- Context chimeras. Combining items across bounded contexts (different vocabularies/units/policies) without a Context Reframe (B.2) silently corrupts claims and inflates R.
- Category errors. Importing Γ_sys rules (e.g., “sum truth,” “avg form